DPDK patches and discussions
* [PATCH v5 12/22] ipc: fix mp message alignment for malloc
  @ 2025-07-23 13:31  8%   ` David Marchand
  0 siblings, 0 replies; 77+ results
From: David Marchand @ 2025-07-23 13:31 UTC (permalink / raw)
  To: dev; +Cc: Tyler Retzlaff

The content (param[]) of received multiprocess messages is aligned with
a 4-byte constraint.

Before patch:
struct mp_msg_internal {
 int type;                                                     /*   0     4 */
 struct rte_mp_msg {
  char name[64];                                               /*   4    64 */
  /* --- cacheline 1 boundary (64 bytes) was 4 bytes ago --- */
  int len_param;                                               /*  68     4 */
  int num_fds;                                                 /*  72     4 */
  /* typedef uint8_t -> __uint8_t */ unsigned char param[256]; /*  76   256 */
  /* --- cacheline 5 boundary (320 bytes) was 12 bytes ago --- */
  int fds[253];                                                /* 332  1012 */
 } msg;                                                        /*   4  1340 */

 /* size: 1344, cachelines: 21, members: 2 */
};

This results in many unaligned accesses for multiprocess malloc requests.

Examples:
../lib/eal/common/malloc_mp.c:308:32: runtime error:
	member access within misaligned address 0x7f7b35df4684 for type
	'const struct malloc_mp_req', which requires 8 byte alignment

../lib/eal/common/malloc_mp.c:158:9: runtime error:
	member access within misaligned address 0x7f36a535bb5c for type
	'const struct malloc_mp_req', which requires 8 byte alignment

../lib/eal/common/malloc_mp.c:171:8: runtime error:
	member access within misaligned address 0x7f4ba65f296c for type
	'struct malloc_mp_req', which requires 8 byte alignment

Align param[] to a 64-bit boundary to avoid unaligned accesses to structures
passed through this array in mp messages.
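
For illustration, a minimal stand-alone sketch of the alignment issue (not part
of the patch; fake_req, fake_msg and fake_msg_fixed are simplified stand-ins for
malloc_mp_req and rte_mp_msg with made-up field lists):

	#include <stdalign.h>
	#include <stddef.h>
	#include <stdint.h>

	struct fake_req { uint64_t value; };       /* requires 8-byte alignment */

	struct fake_msg {                          /* before the patch */
		int len_param;
		uint8_t param[256];                /* lands at offset 4 */
	};

	struct fake_msg_fixed {                    /* after the patch */
		int len_param;
		alignas(8) uint8_t param[256];     /* lands at offset 8 */
	};

	/* Casting msg->param to (struct fake_req *) is what UBSan flags when the
	 * offset is not a multiple of 8. */
	_Static_assert(offsetof(struct fake_msg, param) % 8 != 0,
			"param[] is misaligned for 8-byte structures");
	_Static_assert(offsetof(struct fake_msg_fixed, param) % 8 == 0,
			"alignas(8) makes the cast safe");

	int main(void) { return 0; }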

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v4:
- dropped ABI exception and updated RN,

Changes since v3:
- changed rte_mp_msg struct alignment,

---
 doc/guides/rel_notes/release_25_11.rst | 4 +++-
 lib/eal/include/rte_eal.h              | 3 ++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index aa9211dd60..86cc59b4be 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -100,10 +100,12 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* eal: The structure ``rte_mp_msg`` alignment has been updated to 8 bytes to limit unaligned
+  accesses in the message payload.
+
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
-
 Known Issues
 ------------
 
diff --git a/lib/eal/include/rte_eal.h b/lib/eal/include/rte_eal.h
index c826e143f1..08977c61d3 100644
--- a/lib/eal/include/rte_eal.h
+++ b/lib/eal/include/rte_eal.h
@@ -11,6 +11,7 @@
  * EAL Configuration API
  */
 
+#include <stdalign.h>
 #include <stdint.h>
 #include <time.h>
 
@@ -162,7 +163,7 @@ struct rte_mp_msg {
 	char name[RTE_MP_MAX_NAME_LEN];
 	int len_param;
 	int num_fds;
-	uint8_t param[RTE_MP_MAX_PARAM_LEN];
+	alignas(8) uint8_t param[RTE_MP_MAX_PARAM_LEN];
 	int fds[RTE_MP_MAX_FD_NUM];
 };
 
-- 
2.50.0


^ permalink raw reply	[relevance 8%]

* RE: [PATCH 2/2] net: remove v25 ABI compatibility
  @ 2025-07-24 10:10  4%     ` Finn, Emma
  0 siblings, 0 replies; 77+ results
From: Finn, Emma @ 2025-07-24 10:10 UTC (permalink / raw)
  To: Marchand, David; +Cc: dev, thomas, Richardson, Bruce

> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Wednesday 23 July 2025 13:15
> To: Marchand, David <david.marchand@redhat.com>
> Cc: dev@dpdk.org; thomas@monjalon.net
> Subject: Re: [PATCH 2/2] net: remove v25 ABI compatibility
> 
> On Tue, Jul 22, 2025 at 03:24:41PM +0200, David Marchand wrote:
> > Now that the ABI has been bumped to 26, we can drop compatibility
> > symbols for the CRC API.
> >
> > The logtype is not used anymore and can be removed.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> 
> Not an expert in this area, but code changes look ok to me.
> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> 

I reviewed and tested. Changes look good to me too.

Acked-by: Emma Finn <emma.finn@intel.com>

^ permalink raw reply	[relevance 4%]

* Re: [EXTERNAL] [PATCH] doc: announce DMA configuration structure changes
  @ 2025-07-25  6:04  0%     ` Pavan Nikhilesh Bhagavatula
  2025-07-26  0:55  0%       ` fengchengwen
  0 siblings, 1 reply; 77+ results
From: Pavan Nikhilesh Bhagavatula @ 2025-07-25  6:04 UTC (permalink / raw)
  To: Thomas Monjalon, Amit Prakash Shukla
  Cc: Jerin Jacob, dev, Vamsi Krishna Attunuru, g.singh, sachin.saxena,
	hemant.agrawal, fengchengwen, bruce.richardson, kevin.laatz,
	conor.walsh, Gowrishankar Muthukrishnan, Vidya Sagar Velumuri,
	anatoly.burakov

>> Deprecate rte_dma_conf structure to allow for a more flexible
>> configuration of DMA devices.
>> The new structure will have a flags field instead of multiple
>> boolean fields for each feature.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>> +* dmadev: The ``rte_dma_conf`` structure is updated to include a new field
>> +  ``rte_dma_conf::flags`` that should be used to configure dmadev features.
>> +  The existing field ``rte_dma_conf::enable_silent`` is removed and replaced
>> +  with the new flag ``RTE_DMA_CFG_FLAG_SILENT``, to configure silent mode
>> +  the flag should be set in ``rte_dma_conf::flags`` during device configuration.
>>
>> Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
>
>There is only 1 ack.
>Per our policy, it will miss the release 25.07.
>
>You can probably do this change anyway,
>and keep ABI compatibility by versioning the function.

Hi Fengchengwen,

Are you ok with this change? If so please ack it so that I can work on getting
an exception from techboard to merge this without deprecation notice in 25.11.

Thanks,
Pavan.
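
For context, a minimal usage sketch of the proposed configuration, assuming the
announced names (rte_dma_conf::flags and RTE_DMA_CFG_FLAG_SILENT) land as
described in the notice above; the final API may differ:

	#include <rte_dmadev.h>

	/* Sketch only: silent mode selected through the proposed flags field
	 * instead of the removed enable_silent boolean. */
	static int
	configure_silent(int16_t dev_id)
	{
		struct rte_dma_conf conf = {
			.nb_vchans = 1,
			/* previously: .enable_silent = true */
			.flags = RTE_DMA_CFG_FLAG_SILENT,
		};

		return rte_dma_configure(dev_id, &conf);
	}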

^ permalink raw reply	[relevance 0%]

* Re: [EXTERNAL] [PATCH] doc: announce DMA configuration structure changes
  2025-07-25  6:04  0%     ` Pavan Nikhilesh Bhagavatula
@ 2025-07-26  0:55  0%       ` fengchengwen
  2025-07-28  5:11  4%         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 77+ results
From: fengchengwen @ 2025-07-26  0:55 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Thomas Monjalon, Amit Prakash Shukla
  Cc: Jerin Jacob, dev, Vamsi Krishna Attunuru, g.singh, sachin.saxena,
	hemant.agrawal, bruce.richardson, kevin.laatz, conor.walsh,
	Gowrishankar Muthukrishnan, Vidya Sagar Velumuri,
	anatoly.burakov

Acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2025/7/25 14:04, Pavan Nikhilesh Bhagavatula wrote:
>>> Deprecate rte_dma_conf structure to allow for a more flexible
>>> configuration of DMA devices.
>>> The new structure will have a flags field instead of multiple
>>> boolean fields for each feature.
>>>
>>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>> ---
>>> +* dmadev: The ``rte_dma_conf`` structure is updated to include a new field
>>> +  ``rte_dma_conf::flags`` that should be used to configure dmadev features.
>>> +  The existing field ``rte_dma_conf::enable_silent`` is removed and replaced
>>> +  with the new flag ``RTE_DMA_CFG_FLAG_SILENT``, to configure silent mode
>>> +  the flag should be set in ``rte_dma_conf::flags`` during device configuration.
>>>
>>> Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
>>
>> There is only 1 ack.
>> Per our policy, it will miss the release 25.07.
>>
>> You can probably do this change anyway,
>> and keep ABI compatibility by versioning the function.
> 
> Hi Fengchengwen,
> 
> Are you ok with this change? If so please ack it so that I can work on getting
> an exception from techboard to merge this without deprecation notice in 25.11.
> 
> Thanks,
> Pavan.
> 


^ permalink raw reply	[relevance 0%]

* Re: [EXTERNAL] [PATCH] doc: announce DMA configuration structure changes
  2025-07-26  0:55  0%       ` fengchengwen
@ 2025-07-28  5:11  4%         ` Pavan Nikhilesh Bhagavatula
  2025-08-12 10:59  0%           ` Thomas Monjalon
  0 siblings, 1 reply; 77+ results
From: Pavan Nikhilesh Bhagavatula @ 2025-07-28  5:11 UTC (permalink / raw)
  To: fengchengwen, techboard, Thomas Monjalon, Amit Prakash Shukla
  Cc: Jerin Jacob, dev, Vamsi Krishna Attunuru, g.singh, sachin.saxena,
	hemant.agrawal, bruce.richardson, kevin.laatz, conor.walsh,
	Gowrishankar Muthukrishnan, Vidya Sagar Velumuri,
	anatoly.burakov

>Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>

Thomas,

Now that Feng Chengwen is OK with this change, can this be merged
along with the ABI-breaking changes in 25.11, provided the techboard
approves it?
This change helps reduce ABI breakage when a new feature is added.

Thanks,
Pavan.

>On 2025/7/25 14:04, Pavan Nikhilesh Bhagavatula wrote:
>>>> Deprecate rte_dma_conf structure to allow for a more flexible
>>>> configuration of DMA devices.
>>>> The new structure will have a flags field instead of multiple
>>>> boolean fields for each feature.
>>>>
>>>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>>> ---
>>>> +* dmadev: The ``rte_dma_conf`` structure is updated to include a new field
>>>> +  ``rte_dma_conf::flags`` that should be used to configure dmadev features.
>>>> +  The existing field ``rte_dma_conf::enable_silent`` is removed and replaced
>>>> +  with the new flag ``RTE_DMA_CFG_FLAG_SILENT``, to configure silent mode
>>>> +  the flag should be set in ``rte_dma_conf::flags`` during device configuration.
>>>>
>>>> Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
>>>
>>> There is only 1 ack.
>>> Per our policy, it will miss the release 25.07.
>>>
>>> You can probably do this change anyway,
>>> and keep ABI compatibility by versioning the function.
>>
>> Hi Fengchengwen,
>>
>> Are you ok with this change? If so please ack it so that I can work on getting
>> an exception from techboard to merge this without deprecation notice in 25.11.
>>
>> Thanks,
>> Pavan.
>>



^ permalink raw reply	[relevance 4%]

* RE: [PATCH v12 01/10] mbuf: replace term sanity check
  @ 2025-08-11  9:55  0%     ` Morten Brørup
  2025-08-11 15:20  0%       ` Stephen Hemminger
  0 siblings, 1 reply; 77+ results
From: Morten Brørup @ 2025-08-11  9:55 UTC (permalink / raw)
  To: Stephen Hemminger, dev; +Cc: Andrew Rybchenko, Akhil Goyal, Fan Zhang

> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, 3 April 2025 01.23
> 
> Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
> to match the similar macro RTE_VERIFY() in rte_debug.h
> 
> The term sanity check is on the Tier 2 list of words
> that should be replaced.
> 
> For this release keep old API functions but mark them
> as deprecated.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>  app/test/test_cryptodev.c            |  2 +-
>  app/test/test_mbuf.c                 | 28 +++++-----
>  doc/guides/prog_guide/mbuf_lib.rst   |  4 +-
>  doc/guides/rel_notes/deprecation.rst |  3 ++
>  lib/mbuf/rte_mbuf.c                  | 23 +++++---
>  lib/mbuf/rte_mbuf.h                  | 79 +++++++++++++++-------------
>  lib/mbuf/version.map                 |  1 +
>  7 files changed, 79 insertions(+), 61 deletions(-)
> 
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 31a4905a97..d5f3843daf 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -264,7 +264,7 @@ create_mbuf_from_heap(int pkt_len, uint8_t pattern)
>  	m->port = RTE_MBUF_PORT_INVALID;
>  	m->buf_len = MBUF_SIZE - sizeof(struct rte_mbuf) - RTE_PKTMBUF_HEADROOM;
>  	rte_pktmbuf_reset_headroom(m);
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	m->buf_addr = (char *)m + sizeof(struct rte_mbuf) +
> RTE_PKTMBUF_HEADROOM;
> 
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index 17be977f31..3fbb5dea8b 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -262,8 +262,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
>  		GOTO_FAIL("Buffer should be continuous");
>  	memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
> 
> -	rte_mbuf_sanity_check(m, 1);
> -	rte_mbuf_sanity_check(m, 0);
> +	rte_mbuf_verify(m, 1);
> +	rte_mbuf_verify(m, 0);
>  	rte_pktmbuf_dump(stdout, m, 0);
> 
>  	/* this prepend should fail */
> @@ -1162,7 +1162,7 @@ test_refcnt_mbuf(void)
> 
>  #ifdef RTE_EXEC_ENV_WINDOWS
>  static int
> -test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
> +test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
>  {
>  	RTE_SET_USED(pktmbuf_pool);
>  	return TEST_SKIPPED;
> @@ -1181,12 +1181,12 @@ mbuf_check_pass(struct rte_mbuf *buf)
>  }
> 
>  static int
> -test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
> +test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
>  {
>  	struct rte_mbuf *buf;
>  	struct rte_mbuf badbuf;
> 
> -	printf("Checking rte_mbuf_sanity_check for failure conditions\n");
> +	printf("Checking rte_mbuf_verify for failure conditions\n");
> 
>  	/* get a good mbuf to use to make copies */
>  	buf = rte_pktmbuf_alloc(pktmbuf_pool);
> @@ -1708,7 +1708,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
>  		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
>  	if (rte_pktmbuf_pkt_len(m) != 0)
>  		GOTO_FAIL("%s: Bad packet length\n", __func__);
> -	rte_mbuf_sanity_check(m, 0);
> +	rte_mbuf_verify(m, 0);
>  	m->ol_flags = ol_flags;
>  	m->tso_segsz = segsize;
>  	ret = rte_validate_tx_offload(m);
> @@ -1915,7 +1915,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
>  		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
>  	if (rte_pktmbuf_pkt_len(m) != 0)
>  		GOTO_FAIL("%s: Bad packet length\n", __func__);
> -	rte_mbuf_sanity_check(m, 0);
> +	rte_mbuf_verify(m, 0);
> 
>  	data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
>  	if (data == NULL)
> @@ -1964,7 +1964,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool
> *pktmbuf_pool)
> 
>  	if (rte_pktmbuf_pkt_len(m) != 0)
>  		GOTO_FAIL("%s: Bad packet length\n", __func__);
> -	rte_mbuf_sanity_check(m, 0);
> +	rte_mbuf_verify(m, 0);
> 
>  	/* prepend an ethernet header */
>  	hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
> @@ -2109,7 +2109,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
>  			GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
>  		if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
>  			GOTO_FAIL("%s: Bad packet length\n", __func__);
> -		rte_mbuf_sanity_check(pkt_seg, 0);
> +		rte_mbuf_verify(pkt_seg, 0);
>  		/* Add header only for the first segment */
>  		if (test_data->flags == MBUF_HEADER && seg == 0) {
>  			hdr_len = sizeof(struct rte_ether_hdr);
> @@ -2321,7 +2321,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool
> *pktmbuf_pool)
>  		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
>  	if (rte_pktmbuf_pkt_len(m) != 0)
>  		GOTO_FAIL("%s: Bad packet length\n", __func__);
> -	rte_mbuf_sanity_check(m, 0);
> +	rte_mbuf_verify(m, 0);
> 
>  	ext_buf_addr = rte_malloc("External buffer", buf_len,
>  			RTE_CACHE_LINE_SIZE);
> @@ -2482,8 +2482,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool
> *std_pool)
>  		GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
>  			  __func__);
> 
> -	if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
> -		GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
> +	if (test_failing_mbuf_verify(pinned_pool) < 0)
> +		GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
>  			  " failed\n", __func__);
> 
>  	if (test_mbuf_linearize_check(pinned_pool) < 0)
> @@ -2857,8 +2857,8 @@ test_mbuf(void)
>  		goto err;
>  	}
> 
> -	if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
> -		printf("test_failing_mbuf_sanity_check() failed\n");
> +	if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
> +		printf("test_failing_mbuf_verify() failed\n");
>  		goto err;
>  	}
> 
> diff --git a/doc/guides/prog_guide/mbuf_lib.rst
> b/doc/guides/prog_guide/mbuf_lib.rst
> index 4ad2a21f3f..6c96931f8c 100644
> --- a/doc/guides/prog_guide/mbuf_lib.rst
> +++ b/doc/guides/prog_guide/mbuf_lib.rst
> @@ -266,8 +266,8 @@ can be found in several of the sample applications, for
> example, the IPv4 Multic
>  Debug
>  -----
> 
> -In debug mode, the functions of the mbuf library perform sanity checks before
> any operation (such as, buffer corruption,
> -bad type, and so on).
> +In debug mode, the functions of the mbuf library perform consistency checks
> +before any operation (such as, buffer corruption, bad type, and so on).
> 
>  Use Cases
>  ---------
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 36489f6e68..10bb08a634 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -142,3 +142,6 @@ Deprecation Notices
>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to DPDK.
>    Those API functions are used internally by DPDK core and netvsc PMD.
> +
> +* mbuf: The function ``rte_mbuf_sanity_check`` is deprecated.
> +  Use the new function ``rte_mbuf_verify`` instead.
> diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
> index 559d5ad8a7..fc5d4ba29d 100644
> --- a/lib/mbuf/rte_mbuf.c
> +++ b/lib/mbuf/rte_mbuf.c
> @@ -367,9 +367,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned
> int n,
>  	return mp;
>  }
> 
> -/* do some sanity checks on a mbuf: panic if it fails */
> +/* do some checks on a mbuf: panic if it fails */
>  void
> -rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
> +rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
>  {
>  	const char *reason;
> 
> @@ -377,6 +377,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int
> is_header)
>  		rte_panic("%s\n", reason);
>  }
> 
> +/* For ABI compatibility, to be removed in next release */
> +void
> +rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
> +{
> +	rte_mbuf_verify(m, is_header);
> +}
> +
>  int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
>  		   const char **reason)
>  {
> @@ -496,7 +503,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs,
> unsigned int count)
>  		if (unlikely(m == NULL))
>  			continue;
> 
> -		__rte_mbuf_sanity_check(m, 1);
> +		__rte_mbuf_verify(m, 1);
> 
>  		do {
>  			m_next = m->next;
> @@ -546,7 +553,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool
> *mp)
>  		return NULL;
>  	}
> 
> -	__rte_mbuf_sanity_check(mc, 1);
> +	__rte_mbuf_verify(mc, 1);
>  	return mc;
>  }
> 
> @@ -596,7 +603,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct
> rte_mempool *mp,
>  	struct rte_mbuf *mc, *m_last, **prev;
> 
>  	/* garbage in check */
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	/* check for request to copy at offset past end of mbuf */
>  	if (unlikely(off >= m->pkt_len))
> @@ -660,7 +667,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct
> rte_mempool *mp,
>  	}
> 
>  	/* garbage out check */
> -	__rte_mbuf_sanity_check(mc, 1);
> +	__rte_mbuf_verify(mc, 1);
>  	return mc;
>  }
> 
> @@ -671,7 +678,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m,
> unsigned dump_len)
>  	unsigned int len;
>  	unsigned int nb_segs;
> 
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m,
> rte_mbuf_iova_get(m),
>  		m->buf_len);
> @@ -689,7 +696,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m,
> unsigned dump_len)
>  	nb_segs = m->nb_segs;
> 
>  	while (m && nb_segs != 0) {
> -		__rte_mbuf_sanity_check(m, 0);
> +		__rte_mbuf_verify(m, 0);
> 
>  		fprintf(f, "  segment at %p, data=%p, len=%u, off=%u,
> refcnt=%u\n",
>  			m, rte_pktmbuf_mtod(m, void *),
> diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
> index 06ab7502a5..53a837e4d5 100644
> --- a/lib/mbuf/rte_mbuf.h
> +++ b/lib/mbuf/rte_mbuf.h
> @@ -339,16 +339,20 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
> 
>  #ifdef RTE_LIBRTE_MBUF_DEBUG
> 
> -/**  check mbuf type in debug mode */
> -#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
> +/**  do mbuf type in debug mode */
> +#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
> 
>  #else /*  RTE_LIBRTE_MBUF_DEBUG */
> 
> -/**  check mbuf type in debug mode */
> -#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
> +/**  ignore mbuf checks if not in debug mode */
> +#define __rte_mbuf_verify(m, is_h) do { } while (0)
> 
>  #endif /*  RTE_LIBRTE_MBUF_DEBUG */
> 
> +/* deprecated version of the macro */
> +#define __rte_mbuf_sanity_check(m, is_h)
> RTE_DEPRECATED(__rte_mbuf_sanity_check) \
> +		__rte_mbuf_verify(m, is_h)
> +
>  #ifdef RTE_MBUF_REFCNT_ATOMIC
> 
>  /**
> @@ -514,10 +518,9 @@ rte_mbuf_ext_refcnt_update(struct
> rte_mbuf_ext_shared_info *shinfo,
> 
> 
>  /**
> - * Sanity checks on an mbuf.
> + * Check that the mbuf is valid and panic if corrupted.
>   *
> - * Check the consistency of the given mbuf. The function will cause a
> - * panic if corruption is detected.
> + * Acts assertion that mbuf is consistent. If not it calls rte_panic().
>   *
>   * @param m
>   *   The mbuf to be checked.
> @@ -526,13 +529,17 @@ rte_mbuf_ext_refcnt_update(struct
> rte_mbuf_ext_shared_info *shinfo,
>   *   of a packet (in this case, some fields like nb_segs are not checked)
>   */
>  void
> +rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
> +
> +__rte_deprecated
> +void
>  rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
> 
>  /**
> - * Sanity checks on a mbuf.
> + * Do consistency checks on a mbuf.
>   *
> - * Almost like rte_mbuf_sanity_check(), but this function gives the reason
> - * if corruption is detected rather than panic.
> + * Check the consistency of the given mbuf and if not valid
> + * return the reason.
>   *
>   * @param m
>   *   The mbuf to be checked.
> @@ -551,7 +558,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int
> is_header,
>  		   const char **reason);
> 
>  /**
> - * Sanity checks on a reinitialized mbuf in debug mode.
> + * Do checks on a reinitialized mbuf in debug mode.
>   *
>   * Check the consistency of the given reinitialized mbuf.
>   * The function will cause a panic if corruption is detected.
> @@ -563,7 +570,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int
> is_header,
>   *   The mbuf to be checked.
>   */
>  static __rte_always_inline void
> -__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
> +__rte_mbuf_raw_verify(__rte_unused const struct rte_mbuf *m)
>  {
>  	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
>  	RTE_ASSERT(m->next == NULL);
> @@ -572,11 +579,11 @@ __rte_mbuf_raw_sanity_check(__rte_unused const struct
> rte_mbuf *m)
>  	RTE_ASSERT(!RTE_MBUF_HAS_EXTBUF(m) ||
>  			(RTE_MBUF_HAS_PINNED_EXTBUF(m) &&
>  			rte_mbuf_ext_refcnt_read(m->shinfo) == 1));
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_verify(m, 0);
>  }
> 
>  /** For backwards compatibility. */
> -#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
> +#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_verify(m)
> 
>  /**
>   * Allocate an uninitialized mbuf from mempool *mp*.
> @@ -606,7 +613,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct
> rte_mempool *mp)
> 
>  	if (rte_mempool_get(mp, &ret.ptr) < 0)
>  		return NULL;
> -	__rte_mbuf_raw_sanity_check(ret.m);
> +	__rte_mbuf_raw_verify(ret.m);
>  	return ret.m;
>  }
> 
> @@ -644,7 +651,7 @@ rte_mbuf_raw_alloc_bulk(struct rte_mempool *mp, struct
> rte_mbuf **mbufs, unsigne
>  	int rc = rte_mempool_get_bulk(mp, (void **)mbufs, count);
>  	if (likely(rc == 0))
>  		for (unsigned int idx = 0; idx < count; idx++)
> -			__rte_mbuf_raw_sanity_check(mbufs[idx]);
> +			__rte_mbuf_raw_verify(mbufs[idx]);
>  	return rc;
>  }
> 
> @@ -665,7 +672,7 @@ rte_mbuf_raw_alloc_bulk(struct rte_mempool *mp, struct
> rte_mbuf **mbufs, unsigne
>  static __rte_always_inline void
>  rte_mbuf_raw_free(struct rte_mbuf *m)
>  {
> -	__rte_mbuf_raw_sanity_check(m);
> +	__rte_mbuf_raw_verify(m);
>  	rte_mempool_put(m->pool, m);
>  }
> 
> @@ -700,7 +707,7 @@ rte_mbuf_raw_free_bulk(struct rte_mempool *mp, struct
> rte_mbuf **mbufs, unsigned
>  		const struct rte_mbuf *m = mbufs[idx];
>  		RTE_ASSERT(m != NULL);
>  		RTE_ASSERT(m->pool == mp);
> -		__rte_mbuf_raw_sanity_check(m);
> +		__rte_mbuf_raw_verify(m);
>  	}
> 
>  	rte_mempool_put_bulk(mp, (void **)mbufs, count);
> @@ -965,7 +972,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
>  	rte_pktmbuf_reset_headroom(m);
> 
>  	m->data_len = 0;
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
>  }
> 
>  /**
> @@ -1021,22 +1028,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct
> rte_mempool *pool,
>  	switch (count % 4) {
>  	case 0:
>  		while (idx != count) {
> -			__rte_mbuf_raw_sanity_check(mbufs[idx]);
> +			__rte_mbuf_raw_verify(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 3:
> -			__rte_mbuf_raw_sanity_check(mbufs[idx]);
> +			__rte_mbuf_raw_verify(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 2:
> -			__rte_mbuf_raw_sanity_check(mbufs[idx]);
> +			__rte_mbuf_raw_verify(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
>  	case 1:
> -			__rte_mbuf_raw_sanity_check(mbufs[idx]);
> +			__rte_mbuf_raw_verify(mbufs[idx]);
>  			rte_pktmbuf_reset(mbufs[idx]);
>  			idx++;
>  			/* fall-through */
> @@ -1267,8 +1274,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf
> *mi, struct rte_mbuf *m)
>  	mi->pkt_len = mi->data_len;
>  	mi->nb_segs = 1;
> 
> -	__rte_mbuf_sanity_check(mi, 1);
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_verify(mi, 1);
> +	__rte_mbuf_verify(m, 0);
>  }
> 
>  /**
> @@ -1423,7 +1430,7 @@ static inline int
> __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
>  static __rte_always_inline struct rte_mbuf *
>  rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
>  {
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_verify(m, 0);
> 
>  	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
> 
> @@ -1494,7 +1501,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
>  	struct rte_mbuf *m_next;
> 
>  	if (m != NULL)
> -		__rte_mbuf_sanity_check(m, 1);
> +		__rte_mbuf_verify(m, 1);
> 
>  	while (m != NULL) {
>  		m_next = m->next;
> @@ -1575,7 +1582,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct
> rte_mempool *mp,
>   */
>  static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
>  {
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	do {
>  		rte_mbuf_refcnt_update(m, v);
> @@ -1592,7 +1599,7 @@ static inline void rte_pktmbuf_refcnt_update(struct
> rte_mbuf *m, int16_t v)
>   */
>  static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
>  {
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_verify(m, 0);
>  	return m->data_off;
>  }
> 
> @@ -1606,7 +1613,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct
> rte_mbuf *m)
>   */
>  static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
>  {
> -	__rte_mbuf_sanity_check(m, 0);
> +	__rte_mbuf_verify(m, 0);
>  	return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
>  			  m->data_len);
>  }
> @@ -1621,7 +1628,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct
> rte_mbuf *m)
>   */
>  static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
>  {
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
>  	while (m->next != NULL)
>  		m = m->next;
>  	return m;
> @@ -1665,7 +1672,7 @@ static inline struct rte_mbuf
> *rte_pktmbuf_lastseg(struct rte_mbuf *m)
>  static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
>  					uint16_t len)
>  {
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	if (unlikely(len > rte_pktmbuf_headroom(m)))
>  		return NULL;
> @@ -1700,7 +1707,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf
> *m, uint16_t len)
>  	void *tail;
>  	struct rte_mbuf *m_last;
> 
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	m_last = rte_pktmbuf_lastseg(m);
>  	if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
> @@ -1728,7 +1735,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf
> *m, uint16_t len)
>   */
>  static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
>  {
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	if (unlikely(len > m->data_len))
>  		return NULL;
> @@ -1760,7 +1767,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m,
> uint16_t len)
>  {
>  	struct rte_mbuf *m_last;
> 
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
> 
>  	m_last = rte_pktmbuf_lastseg(m);
>  	if (unlikely(len > m_last->data_len))
> @@ -1782,7 +1789,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m,
> uint16_t len)
>   */
>  static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
>  {
> -	__rte_mbuf_sanity_check(m, 1);
> +	__rte_mbuf_verify(m, 1);
>  	return m->nb_segs == 1;
>  }
> 
> diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
> index 76f1832924..2950f24caa 100644
> --- a/lib/mbuf/version.map
> +++ b/lib/mbuf/version.map
> @@ -31,6 +31,7 @@ DPDK_25 {
>  	rte_mbuf_set_platform_mempool_ops;
>  	rte_mbuf_set_user_mempool_ops;
>  	rte_mbuf_user_mempool_ops;
> +	rte_mbuf_verify;
>  	rte_pktmbuf_clone;
>  	rte_pktmbuf_copy;
>  	rte_pktmbuf_dump;
> --
> 2.47.2

Stephen,

I have submitted another patch [1], where __rte_mbuf_raw_sanity_check()'s successor takes one more parameter, so I had to give it a new name __rte_mbuf_raw_sanity_check_mp() for API compatibility. And then I added:
+/** For backwards compatibility. */
+#define __rte_mbuf_raw_sanity_check(m) __rte_mbuf_raw_sanity_check_mp(m, NULL)

If you proceed with your patch, __rte_mbuf_raw_sanity_check() will be renamed to __rte_mbuf_raw_verify(), and __rte_mbuf_raw_sanity_check() disappears.
Would it make sense to change your patch, so __rte_mbuf_raw_verify() replaces __rte_mbuf_raw_sanity_check_mp() instead of __rte_mbuf_raw_sanity_check()? If your patch changes the API anyway, adding an extra parameter to the function should be acceptable.

[1]: https://patchwork.dpdk.org/project/dpdk/patch/20250722093431.555214-1-mb@smartsharesystems.com/
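
For illustration, a rough sketch of the combination suggested above (the assert
list is trimmed and the names are assumptions here, not the final patch):

	/* Renamed helper takes the extra mempool parameter from [1];
	 * passing NULL skips the pool-ownership check. */
	static __rte_always_inline void
	__rte_mbuf_raw_verify(const struct rte_mbuf *m, const struct rte_mempool *mp)
	{
		RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
		RTE_ASSERT(m->next == NULL);
		if (mp != NULL)
			RTE_ASSERT(m->pool == mp);
		__rte_mbuf_verify(m, 0);
	}

	/* Old spelling kept as a thin wrapper for backwards compatibility. */
	#define __rte_mbuf_raw_sanity_check(m) __rte_mbuf_raw_verify(m, NULL)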

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v12 01/10] mbuf: replace term sanity check
  2025-08-11  9:55  0%     ` Morten Brørup
@ 2025-08-11 15:20  0%       ` Stephen Hemminger
  0 siblings, 0 replies; 77+ results
From: Stephen Hemminger @ 2025-08-11 15:20 UTC (permalink / raw)
  To: Morten Brørup; +Cc: dev, Andrew Rybchenko, Akhil Goyal, Fan Zhang

On Mon, 11 Aug 2025 11:55:31 +0200
Morten Brørup <mb@smartsharesystems.com> wrote:

> > [full patch quoted in the previous message, trimmed]
> 
> Stephen,
> 
> I have submitted another patch [1], where __rte_mbuf_raw_sanity_check()'s successor takes one more parameter, so I had to give it a new name __rte_mbuf_raw_sanity_check_mp() for API compatibility. And then I added:
> +/** For backwards compatibility. */
> +#define __rte_mbuf_raw_sanity_check(m) __rte_mbuf_raw_sanity_check_mp(m, NULL)
> 
> If you proceed with your patch, __rte_mbuf_raw_sanity_check() will be renamed to __rte_mbuf_raw_verify(), and __rte_mbuf_raw_sanity_check() disappears.
> Would it make sense to change your patch, so __rte_mbuf_raw_verify() replaces __rte_mbuf_raw_sanity_check_mp() instead of __rte_mbuf_raw_sanity_check()? If your patch changes the API anyway, adding an extra parameter to the function should be acceptable.
> 
> [1]: https://patchwork.dpdk.org/project/dpdk/patch/20250722093431.555214-1-mb@smartsharesystems.com/

Sure will go back and rebase old patches in a few weeks.

^ permalink raw reply	[relevance 0%]

* Re: [EXTERNAL] [PATCH] doc: announce DMA configuration structure changes
  2025-07-28  5:11  4%         ` Pavan Nikhilesh Bhagavatula
@ 2025-08-12 10:59  0%           ` Thomas Monjalon
  0 siblings, 0 replies; 77+ results
From: Thomas Monjalon @ 2025-08-12 10:59 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula
  Cc: fengchengwen, techboard, Amit Prakash Shukla, dev, Jerin Jacob,
	Vamsi Krishna Attunuru, g.singh, sachin.saxena, hemant.agrawal,
	bruce.richardson, kevin.laatz, conor.walsh,
	Gowrishankar Muthukrishnan, Vidya Sagar Velumuri,
	anatoly.burakov

28/07/2025 07:11, Pavan Nikhilesh Bhagavatula:
> >Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> >
> 
> Thomas,
> 
> Now that Feng Chengwen is ok with this change, can this be merged
> along with the ABI breaking changes in 25.11?
> Given that techboard approves the change.
> This change helps reduce ABI breakage when a new feature is added.

I would be in favor of this change.
Let's request a vote in the next techboard meeting.
(Cc techboard@dpdk.org and added in the meeting agenda)


> >On 2025/7/25 14:04, Pavan Nikhilesh Bhagavatula wrote:
> >>>> Deprecate rte_dma_conf structure to allow for a more flexible
> >>>> configuration of DMA devices.
> >>>> The new structure will have a flags field instead of multiple
> >>>> boolean fields for each feature.
> >>>>
> >>>> Signed-off-by: Pavan Nikhilesh <mailto:pbhagavatula@marvell.com>
> >>>> ---
> >>>> +* dmadev: The ``rte_dma_conf`` structure is updated to include a new field
> >>>> +  ``rte_dma_conf::flags`` that should be used to configure dmadev features.
> >>>> +  The existing field ``rte_dma_conf::enable_silent`` is removed and replaced
> >>>> +  with the new flag ``RTE_DMA_CFG_FLAG_SILENT``, to configure silent mode
> >>>> +  the flag should be set in ``rte_dma_conf::flags`` during device configuration.
> >>>>
> >>>> Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
> >>>
> >>> There is only 1 ack.
> >>> Per our policy, it will miss the release 25.07.
> >>>
> >>> You can probably do this change anyway,
> >>> and keep ABI compatibility by versioning the function.
> >>
> >> Hi Fengchengwen,
> >>
> >> Are you ok with this change? If so please ack it so that I can work on getting
> >> an exception from techboard to merge this without deprecation notice in 25.11.
> >>
> >> Thanks,
> >> Pavan.




^ permalink raw reply	[relevance 0%]

* RE: [EXTERNAL] Re: [PATCH v2 1/1] ethdev: add support to provide link type
  @ 2025-08-13  7:42  4%           ` Sunil Kumar Kori
  0 siblings, 0 replies; 77+ results
From: Sunil Kumar Kori @ 2025-08-13  7:42 UTC (permalink / raw)
  To: Morten Brørup, Stephen Hemminger
  Cc: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, dev,
	Nithin Kumar Dabilpuram, Jerin Jacob

Hi Morten and Stephen,

To address your comments, I revisited the change and concluded the following points:

1. Extending Link Types Alongside Legacy:

I'm aligned on extending link types while retaining legacy support; no concerns here. I will add the following link types to the list:

	- NONE
	- TP
	- AUI
	- MII
	- FIBRE
	- BNC
	- DA
	- SGMII
	- QSGMII
	- XFI
	- SFI
	- XLAUI
	- GAUI
	- SFI
	- XAUI
	- XLAUI
	- GAUI
	- GBASE
	- CAUI
	- LAUI
	- SFP
	- SFP_DD
	- SFP_PLUS
	- SFP28
	- QSFP
	- QSFP_PLUS
	- QSFP28
	- QSFP56
	- QSFP_DD
	- OTHER

2. ABI Breakage Concern:
"I'm not entirely clear on how this change results in an ABI breakage, as the new bit field is added within the existing space. Could you please elaborate on the specific aspects that lead to ABI incompatibility ?" Worst case, since this is 25.11, API breakage is fine.

3. Reporting Link Type by Drivers:
General APIs often expose capabilities, and drivers selectively implement them. Setting the link type to 0 when unsupported is a reasonable fallback.
We will ensure 0 is treated as "unknown" or "not supported" rather than as a misleading value (see the sketch after this list).

4. Regarding management interfaces for PHYs or modules:
This patch does not introduce any management APIs for PHYs or modules. Its sole purpose is to expose the link type as an additional attribute to the user. Any support for PHY or module management should be handled separately and is outside the scope of this change.
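
A hypothetical usage sketch for point 3; the field name link_type and the helper
rte_eth_link_type_to_str() are placeholders for whatever the final patch exposes:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_link_type(uint16_t port_id)
	{
		struct rte_eth_link link;

		if (rte_eth_link_get_nowait(port_id, &link) != 0)
			return;

		/* 0 (RTE_ETH_LINK_TYPE_NONE) means the driver does not report a type. */
		if (link.link_type == RTE_ETH_LINK_TYPE_NONE)
			printf("port %u: link type unknown\n", port_id);
		else
			printf("port %u: link type %s\n", port_id,
				rte_eth_link_type_to_str(link.link_type));
	}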

Thanks
Sunil Kumar Kori

> > From: Sunil Kumar Kori [mailto:skori@marvell.com]
> > Sent: Tuesday, 10 June 2025 07.02
> >
> > > On Fri, 6 Jun 2025 11:54:52 +0200
> > > Morten Brørup <mb@smartsharesystems.com> wrote:
> > >
> > > > > From: skori@marvell.com [mailto:skori@marvell.com]
> > > > > Sent: Friday, 6 June 2025 11.28
> > > > >
> > > > > From: Sunil Kumar Kori <skori@marvell.com>
> > > > >
> > > > > Adding link type parameter to provide the type of port like
> > > > > twisted pair, fibre etc.
> > > > >
> > > > > Also added an API to convert the RTE_ETH_LINK_TYPE_XXX to a
> > > > > readable string.
> > > > >
> > > > > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > > > > Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> > > > > ---
> > > > > +/**@{@name PORT type
> > > > > + * Ethernet port type
> > > > > + */
> > > > > +#define RTE_ETH_LINK_TYPE_NONE  0x00 /**< Not defined */
> > > > > +#define RTE_ETH_LINK_TYPE_TP    0x01 /**< Twisted Pair */
> > > > > +#define RTE_ETH_LINK_TYPE_AUI   0x02 /**< Attachment Unit Interface */
> > > > > +#define RTE_ETH_LINK_TYPE_MII   0x03 /**< Media Independent Interface
> > > > > */
> > > > > +#define RTE_ETH_LINK_TYPE_FIBRE 0x04 /**< Fibre */
> > > > > +#define RTE_ETH_LINK_TYPE_BNC   0x05 /**< BNC */
> > > > > +#define RTE_ETH_LINK_TYPE_DA    0x06 /**< Direct Attach copper */
> > > > > +#define RTE_ETH_LINK_TYPE_OTHER 0x1F /**< Other type */ /**@}*/
> > > >

^ permalink raw reply	[relevance 4%]

* RE: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev
  @ 2025-08-13 16:46  3%                 ` Vamsi Krishna Attunuru
  2025-08-14  0:44  0%                   ` fengchengwen
  0 siblings, 1 reply; 77+ results
From: Vamsi Krishna Attunuru @ 2025-08-13 16:46 UTC (permalink / raw)
  To: fengchengwen, thomas
  Cc: Jerin Jacob, thomas, dev, Pavan Nikhilesh Bhagavatula,
	kevin.laatz, bruce.richardson, Vladimir Medvedkin,
	Anatoly Burakov, Vamsi Krishna Attunuru, techboard

Hi Thomas, Feng

Can this feature be discussed in the next techboard meeting, so we can decide on supporting it
together with the ABI-breaking changes in 25.11 rather than using the versioning scheme?
Since there are no further comments after we aligned with Feng's feedback, it would be
good to finalize the approach.

Regards
Vamsi

>-----Original Message-----
>From: Vamsi Krishna Attunuru <vattunuru@marvell.com>
>Sent: Wednesday, July 30, 2025 10:07 AM
>To: fengchengwen <fengchengwen@huawei.com>;
>bruce.richardson@intel.com; Vladimir Medvedkin
><vladimir.medvedkin@intel.com>; Anatoly Burakov
><anatoly.burakov@intel.com>
>Cc: Jerin Jacob <jerinj@marvell.com>; thomas@monjalon.net;
>dev@dpdk.org; Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>kevin.laatz@intel.com
>Subject: RE: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA
>capability support in dmadev
>
>Gentle reminder to share your feedback.
>
>>Hi Bruce, Vladimir, Anatoly
>>
>>Regarding inter-device or inter-domain DMA capability, could please
>>clarify if Intel idxd driver will support this feature.
>>I believe the changes Feng has suggested here are in line with the
>>earlier "[PATCH v1 0/3] Add support for inter-domain DMA operations"
>>proposal. We are planning to implement this feature support in version
>25.11.
>>
>>Your feedback would be appreciated, we are aiming for a more generic
>>solution.
>>
>>Regards
>>Vamsi
>>
>>
>>>>On 2025/7/16 18:59, Vamsi Krishna Attunuru wrote:
>>>>>
>>>>>>
>>>>>> Thanks for the explanation.
>>>>>>
>>>>>> Let me tell you what I understand:
>>>>>> 1\ Two dmadev (must belong to the same DMA controller?) each
>>>>>> passthrough to diffent domain (VM or container) 2\ The kernel DMA
>>>>>> controller driver could config access groups --- there is a secure
>>>>>> mechanism
>>>>(like Intel IDPTE)
>>>>>>   and the two dmadev could communicate if the kernel DMA
>>>>>> controller driver has put them in the same access groups.
>>>>>> 3\ Application setup access group and get handle (maybe the new
>>>>'dev_idx'
>>>>>> which you announce in this commit),
>>>>>>   and then setup one vchan which config the handle.
>>>>>>   and later launch copy request based on this vchan.
>>>>>> 4\ The driver will pass the request to dmadev-1 hardware, and
>>>>>> dmadev-1 hardware will do some verification,
>>>>>>   and maybe use dmadev-2 stream ID for read/write operations?
>>>>>>
>>>>>> A few question about this:
>>>>>> 1\ What the prototype of 'dev_idx', is it uint16_t?
>>>>> Yes, it can be uint16_t and use two different dev_idx (src_dev_idx
>>>>> &
>>>>> dest_dev_idx) for read & write.
>>>>>
>>>>>> 2\ How to implement read/write between two dmadev ?  use two
>>>>>> different dev_idx? the first for read and the second for write?
>>>>> Yes, two different dev_idx will be used.
>>>>>
>>>>>>
>>>>>>
>>>>>> I also re-read the patchset "[PATCH v1 0/3] Add support for
>>>>>> inter-domain DMA operations", it introduce:
>>>>>> 1\ One 'int controller-id' in the rte_dma_info. which maybe used
>>>>>> in
>>>>>> vendor- specific secure mechanism.
>>>>>> 2\ Two new OP_flag and two new datapath API.
>>>>>> The reason why this patch didn't continue (I guess) is whether
>>>>>> setup one new vchan. Yes, vchan was designed to represents
>>>>>> different transfer contexts. But each vchan has its own
>>>>>> enqueue/dequeue/ring, it more act like one logic dmadev, some of
>>>>>> the hardware can fit this model well, some may not (like Intel in
>>>>>> this
>>case).
>>>>>>
>>>>>>
>>>>>> So how about the following scheme:
>>>>>> 1\ Add inter-domain capability bits, for example:
>>>>>> RTE_DMA_CAPA_INTER_PROCESS_DOMAIN,
>>>>>> RTE_DMA_CAPA_INTER_OS_DOMAIN 2\ Add one
>domain_controller_id
>>>in
>>>>the
>>>>>> rte_dma_info which maybe used in vendor-specific secure
>mechanism.
>>>>>> 3\ Add four OP_FLAGs:
>>>>>> RTE_DMA_OP_FLAG_SRC_INTER_PROCESS_DOMAIN_HANDLE,
>>>>>> RTE_DMA_OP_FLAG_DST_INTER_PROCESS_DOMAIN_HANDLE
>>>>>>                      RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE,
>>>>>> RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE
>>>>>> 4\ Reserved 32bit from flag parameter (which all enqueue API both
>>>>>> supports) as the src and dst handle.
>>>>>>   or only reserved 16bit from flag parameter if we restrict don't
>>>>>> support 3rd transfer.
>>>>>
>>>>> Yes, the above approach seems acceptable to me. I believe src & dst
>>>>> handles require 16-bit values. Reserving 32-bits from flag
>>>>> parameter would leave 32 flags available, which should be fine.
>>>>
>>>>Great
>>>>tip: there are still 24bit flag reserved after apply this scheme.
>>>>
>>>>Would like more comments.
>>>>
>>>
>>>If there are no major comments at this time, can we proceed with
>>>accepting and merging this notice in this release. Further review can
>>>continue once the RFC is available next month.
>>>
>>>Thanks & Regards
>>>Vamsi
>>>
>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 2025/7/15 13:35, Vamsi Krishna Attunuru wrote:
>>>>>>> Hi Feng,
>>>>>>>
>>>>>>> Thanks for depicting the feature use case.
>>>>>>>
>>>>>>> From the application’s perspective, inter VM/process
>>>>>>> communication is
>>>>>> required to exchange the src & dst buffer details, however the
>>>>>> specifics of this communication mechanism are outside the scope in
>>>>>> this context. Regarding the address translations, these buffer
>>>>>> addresses can be either IOVA as PA or IOVA as VA. The DMA hardware
>>>>>> must use the appropriate IOMMU stream IDs when initiating the DMA
>>>>>> transfers. For example, in the use case shown in the diagram,
>>>>>> dmadev-1 and dmadev-2 would join an access group managed by the
>>>>>> kernel DMA controller driver. This controller driver will
>>>>>> configure the access group on the DMA hardware, enabling the
>>>>>> hardware to select the correct stream IDs for read/write
>>>>>> operations. New rte_dma APIs could be introduced to join or leave
>>>>>> the access group or to query the access group details.
>>>>>> Additionally, a secure token mechanism (similar to
>>>>vfio-pci token) can be implemented to validate any dmadev attempting
>>>>to join the access group.
>>>>>>>
>>>>>>> Regards.
>>>>>>>
>>>>>>> From: fengchengwen <fengchengwen@huawei.com>
>>>>>>> Sent: Tuesday, July 15, 2025 6:29 AM
>>>>>>> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>;
>>dev@dpdk.org;
>>>>>>> Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>>>>>>> kevin.laatz@intel.com; bruce.richardson@intel.com;
>>>>>>> mb@smartsharesystems.com
>>>>>>> Cc: Jerin Jacob <jerinj@marvell.com>; thomas@monjalon.net
>>>>>>> Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device
>>>>>>> DMA capability support in dmadev
>>>>>>>
>>>>>>> Hi Vamsi,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> From the commit log, I guess this commit mainly want to meet
>>>>>>> following
>>>>>> case:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>      ---------------             ----------------
>>>>>>>
>>>>>>>      |  Container  |             |  VirtMachine |
>>>>>>>
>>>>>>>      |             |             |              |
>>>>>>>
>>>>>>>      |  dmadev-1   |             |   dmadev2    |
>>>>>>>
>>>>>>>      ---------------             ----------------
>>>>>>>
>>>>>>>            |                            |
>>>>>>>
>>>>>>>            ------------------------------
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> App run in the container could launch DMA transfer from local
>>>>>>> buffer to the VirtMachine by config
>>>>>>>
>>>>>>> dmadev-1/2 (the dmadev-1/2 are passthrough to diffent OS domain).
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Could you explain how to use it from application perspective (for
>>>>>>> example address translation) and
>>>>>>>
>>>>>>> application & hardware restrictions?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> BTW: In this case, there are two OS domain communication, and I
>>>>>>> remember there are also inter-process
>>>>>>>
>>>>>>> DMA RFC, so maybe we could design more generic solution if you
>>>>>>> provide
>>>>>> more info.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 2025/7/10 16:51, Vamsi Krishna wrote:
>>>>>>>
>>>>>>>> From: Vamsi Attunuru
>>>>>>>> <vattunuru@marvell.com<mailto:vattunuru@marvell.com>>
>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>> Modern DMA hardware supports data transfer between multiple
>>>>>>>
>>>>>>>> DMA devices, enabling data communication across isolated domains
>>>>>>>> or
>>>>>>>
>>>>>>>> containers. To facilitate this, the ``dmadev`` library requires
>>>>>>>> changes
>>>>>>>
>>>>>>>> to allow devices to register with or unregisters from DMA groups
>>>>>>>> for
>>>>>>>
>>>>>>>> inter-device communication. This feature is planned for
>>>>>>>> inclusion
>>>>>>>
>>>>>>>> in DPDK 25.11.
>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>> Signed-off-by: Vamsi Attunuru
>>>>>>>> <vattunuru@marvell.com<mailto:vattunuru@marvell.com>>
>>>>>>>
>>>>>>>> ---
>>>>>>>
>>>>>>>>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>>
>>>>>>>>  1 file changed, 7 insertions(+)
>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>
>>>>>>>> index e2d4125308..46836244dd 100644
>>>>>>>
>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>
>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>
>>>>>>>> @@ -152,3 +152,10 @@ Deprecation Notices
>>>>>>>
>>>>>>>>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>>>>>>>
>>>>>>>>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to
>>>DPDK.
>>>>>>>
>>>>>>>>    Those API functions are used internally by DPDK core and
>>>>>>>> netvsc
>>>PMD.
>>>>>>>
>>>>>>>> +
>>>>>>>
>>>>>>>> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV``
>will
>>>>>>>> +be added
>>>>>>>
>>>>>>>> +  to advertise DMA device's inter-device DMA copy capability.
>>>>>>>> + To enable
>>>>>>>
>>>>>>>> +  this functionality, a few dmadev APIs will be added to
>>>>>>>> + configure the DMA
>>>>>>>
>>>>>>>> +  access groups, facilitating coordinated data communication
>>>>>>>> + between
>>>>>> devices.
>>>>>>>
>>>>>>>> +  A new ``dev_idx`` field will be added to the ``struct
>>>>>>>> + rte_dma_vchan_conf``
>>>>>>>
>>>>>>>> +  structure to configure a vchan for data transfers between any
>>>>>>>> + two DMA
>>>>>> devices.
>>>>>>>
>>>>>>>
>>>>>


^ permalink raw reply	[relevance 3%]

* Re: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev
  2025-08-13 16:46  3%                 ` Vamsi Krishna Attunuru
@ 2025-08-14  0:44  0%                   ` fengchengwen
  0 siblings, 0 replies; 77+ results
From: fengchengwen @ 2025-08-14  0:44 UTC (permalink / raw)
  To: Vamsi Krishna Attunuru, thomas
  Cc: Jerin Jacob, dev, Pavan Nikhilesh Bhagavatula, kevin.laatz,
	bruce.richardson, Vladimir Medvedkin, Anatoly Burakov, techboard

I agree with having the feature discussed in the TB.

The original scheme is too vague and hard to understand, even after several rounds of clarification.
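
For reference, a rough sketch of what an enqueue could look like on the application side under
that scheme (the flag names come from the discussion quoted below; the exact bit positions of
the two 16-bit handles inside the flags parameter are an assumption, nothing is final):

	uint64_t flags;
	int ret;

	/* Assumed layout: the upper 32 bits of flags carry the two 16-bit handles. */
	flags = RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE |
		RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE |
		((uint64_t)src_handle << 32) |
		((uint64_t)dst_handle << 48);

	ret = rte_dma_copy(dev_id, vchan, src_iova, dst_iova, length, flags);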


On 8/14/2025 12:46 AM, Vamsi Krishna Attunuru wrote:
> Hi Thomas, Feng
> 
> Can this feature be discussed in the next techboard meeting and decide on supporting it
> together with the ABI breaking changes in 25.11, rather than using the versions scheme.
> Since there are no further comments after we aligned with the Feng's feedback, it would be
> good to finalize the approach.
> 
> Regards
> Vamsi
> 
>> -----Original Message-----
>> From: Vamsi Krishna Attunuru <vattunuru@marvell.com>
>> Sent: Wednesday, July 30, 2025 10:07 AM
>> To: fengchengwen <fengchengwen@huawei.com>;
>> bruce.richardson@intel.com; Vladimir Medvedkin
>> <vladimir.medvedkin@intel.com>; Anatoly Burakov
>> <anatoly.burakov@intel.com>
>> Cc: Jerin Jacob <jerinj@marvell.com>; thomas@monjalon.net;
>> dev@dpdk.org; Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>> kevin.laatz@intel.com
>> Subject: RE: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device DMA
>> capability support in dmadev
>>
>> Gentle reminder to share your feedback.
>>
>>> Hi Bruce, Vladimir, Anatoly
>>>
>>> Regarding inter-device or inter-domain DMA capability, could please
>>> clarify if Intel idxd driver will support this feature.
>>> I believe the changes Feng has suggested here are in line with the
>>> earlier "[PATCH v1 0/3] Add support for inter-domain DMA operations"
>>> proposal. We are planning to implement this feature support in version
>> 25.11.
>>>
>>> Your feedback would be appreciated, we are aiming for a more generic
>>> solution.
>>>
>>> Regards
>>> Vamsi
>>>
>>>
>>>>> On 2025/7/16 18:59, Vamsi Krishna Attunuru wrote:
>>>>>>
>>>>>>>
>>>>>>> Thanks for the explanation.
>>>>>>>
>>>>>>> Let me tell you what I understand:
>>>>>>> 1\ Two dmadev (must belong to the same DMA controller?) each
>>>>>>> passthrough to diffent domain (VM or container) 2\ The kernel DMA
>>>>>>> controller driver could config access groups --- there is a secure
>>>>>>> mechanism
>>>>> (like Intel IDPTE)
>>>>>>>   and the two dmadev could communicate if the kernel DMA
>>>>>>> controller driver has put them in the same access groups.
>>>>>>> 3\ Application setup access group and get handle (maybe the new
>>>>> 'dev_idx'
>>>>>>> which you announce in this commit),
>>>>>>>   and then setup one vchan which config the handle.
>>>>>>>   and later launch copy request based on this vchan.
>>>>>>> 4\ The driver will pass the request to dmadev-1 hardware, and
>>>>>>> dmadev-1 hardware will do some verification,
>>>>>>>   and maybe use dmadev-2 stream ID for read/write operations?
>>>>>>>
>>>>>>> A few question about this:
>>>>>>> 1\ What the prototype of 'dev_idx', is it uint16_t?
>>>>>> Yes, it can be uint16_t and use two different dev_idx (src_dev_idx
>>>>>> &
>>>>>> dest_dev_idx) for read & write.
>>>>>>
>>>>>>> 2\ How to implement read/write between two dmadev ?  use two
>>>>>>> different dev_idx? the first for read and the second for write?
>>>>>> Yes, two different dev_idx will be used.
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I also re-read the patchset "[PATCH v1 0/3] Add support for
>>>>>>> inter-domain DMA operations", it introduce:
>>>>>>> 1\ One 'int controller-id' in the rte_dma_info. which maybe used
>>>>>>> in
>>>>>>> vendor- specific secure mechanism.
>>>>>>> 2\ Two new OP_flag and two new datapath API.
>>>>>>> The reason why this patch didn't continue (I guess) is whether
>>>>>>> setup one new vchan. Yes, vchan was designed to represents
>>>>>>> different transfer contexts. But each vchan has its own
>>>>>>> enqueue/dequeue/ring, it more act like one logic dmadev, some of
>>>>>>> the hardware can fit this model well, some may not (like Intel in
>>>>>>> this
>>> case).
>>>>>>>
>>>>>>>
>>>>>>> So how about the following scheme:
>>>>>>> 1\ Add inter-domain capability bits, for example:
>>>>>>> RTE_DMA_CAPA_INTER_PROCESS_DOMAIN,
>>>>>>> RTE_DMA_CAPA_INTER_OS_DOMAIN 2\ Add one
>> domain_controller_id
>>>> in
>>>>> the
>>>>>>> rte_dma_info which maybe used in vendor-specific secure
>> mechanism.
>>>>>>> 3\ Add four OP_FLAGs:
>>>>>>> RTE_DMA_OP_FLAG_SRC_INTER_PROCESS_DOMAIN_HANDLE,
>>>>>>> RTE_DMA_OP_FLAG_DST_INTER_PROCESS_DOMAIN_HANDLE
>>>>>>>                      RTE_DMA_OP_FLAG_SRC_INTER_OS_DOMAIN_HANDLE,
>>>>>>> RTE_DMA_OP_FLAG_DST_INTER_OS_DOMAIN_HANDLE
>>>>>>> 4\ Reserved 32bit from flag parameter (which all enqueue API both
>>>>>>> supports) as the src and dst handle.
>>>>>>>   or only reserved 16bit from flag parameter if we restrict don't
>>>>>>> support 3rd transfer.
>>>>>>
>>>>>> Yes, the above approach seems acceptable to me. I believe src & dst
>>>>>> handles require 16-bit values. Reserving 32-bits from flag
>>>>>> parameter would leave 32 flags available, which should be fine.
>>>>>
>>>>> Great
>>>>> tip: there are still 24bit flag reserved after apply this scheme.
>>>>>
>>>>> Would like more comments.
>>>>>
>>>>
>>>> If there are no major comments at this time, can we proceed with
>>>> accepting and merging this notice in this release. Further review can
>>>> continue once the RFC is available next month.
>>>>
>>>> Thanks & Regards
>>>> Vamsi
>>>>
>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> On 2025/7/15 13:35, Vamsi Krishna Attunuru wrote:
>>>>>>>> Hi Feng,
>>>>>>>>
>>>>>>>> Thanks for depicting the feature use case.
>>>>>>>>
>>>>>>>> From the application’s perspective, inter VM/process
>>>>>>>> communication is
>>>>>>> required to exchange the src & dst buffer details, however the
>>>>>>> specifics of this communication mechanism are outside the scope in
>>>>>>> this context. Regarding the address translations, these buffer
>>>>>>> addresses can be either IOVA as PA or IOVA as VA. The DMA hardware
>>>>>>> must use the appropriate IOMMU stream IDs when initiating the DMA
>>>>>>> transfers. For example, in the use case shown in the diagram,
>>>>>>> dmadev-1 and dmadev-2 would join an access group managed by the
>>>>>>> kernel DMA controller driver. This controller driver will
>>>>>>> configure the access group on the DMA hardware, enabling the
>>>>>>> hardware to select the correct stream IDs for read/write
>>>>>>> operations. New rte_dma APIs could be introduced to join or leave
>>>>>>> the access group or to query the access group details.
>>>>>>> Additionally, a secure token mechanism (similar to
>>>>> vfio-pci token) can be implemented to validate any dmadev attempting
>>>>> to join the access group.
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>> From: fengchengwen <fengchengwen@huawei.com>
>>>>>>>> Sent: Tuesday, July 15, 2025 6:29 AM
>>>>>>>> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>;
>>> dev@dpdk.org;
>>>>>>>> Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>>>>>>>> kevin.laatz@intel.com; bruce.richardson@intel.com;
>>>>>>>> mb@smartsharesystems.com
>>>>>>>> Cc: Jerin Jacob <jerinj@marvell.com>; thomas@monjalon.net
>>>>>>>> Subject: [EXTERNAL] Re: [PATCH v0 1/1] doc: announce inter-device
>>>>>>>> DMA capability support in dmadev
>>>>>>>>
>>>>>>>> Hi Vamsi,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> From the commit log, I guess this commit mainly want to meet
>>>>>>>> following
>>>>>>> case:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>      ---------------             ----------------
>>>>>>>>
>>>>>>>>      |  Container  |             |  VirtMachine |
>>>>>>>>
>>>>>>>>      |             |             |              |
>>>>>>>>
>>>>>>>>      |  dmadev-1   |             |   dmadev2    |
>>>>>>>>
>>>>>>>>      ---------------             ----------------
>>>>>>>>
>>>>>>>>            |                            |
>>>>>>>>
>>>>>>>>            ------------------------------
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> App run in the container could launch DMA transfer from local
>>>>>>>> buffer to the VirtMachine by config
>>>>>>>>
>>>>>>>> dmadev-1/2 (the dmadev-1/2 are passthrough to diffent OS domain).
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Could you explain how to use it from application perspective (for
>>>>>>>> example address translation) and
>>>>>>>>
>>>>>>>> application & hardware restrictions?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> BTW: In this case, there are two OS domain communication, and I
>>>>>>>> remember there are also inter-process
>>>>>>>>
>>>>>>>> DMA RFC, so maybe we could design more generic solution if you
>>>>>>>> provide
>>>>>>> more info.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2025/7/10 16:51, Vamsi Krishna wrote:
>>>>>>>>
>>>>>>>>> From: Vamsi Attunuru
>>>>>>>>> <vattunuru@marvell.com<mailto:vattunuru@marvell.com>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>> Modern DMA hardware supports data transfer between multiple
>>>>>>>>
>>>>>>>>> DMA devices, enabling data communication across isolated domains
>>>>>>>>> or
>>>>>>>>
>>>>>>>>> containers. To facilitate this, the ``dmadev`` library requires
>>>>>>>>> changes
>>>>>>>>
>>>>>>>>> to allow devices to register with or unregisters from DMA groups
>>>>>>>>> for
>>>>>>>>
>>>>>>>>> inter-device communication. This feature is planned for
>>>>>>>>> inclusion
>>>>>>>>
>>>>>>>>> in DPDK 25.11.
>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>> Signed-off-by: Vamsi Attunuru
>>>>>>>>> <vattunuru@marvell.com<mailto:vattunuru@marvell.com>>
>>>>>>>>
>>>>>>>>> ---
>>>>>>>>
>>>>>>>>>  doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>>>>>>>
>>>>>>>>>  1 file changed, 7 insertions(+)
>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>
>>>>>>>>> index e2d4125308..46836244dd 100644
>>>>>>>>
>>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>
>>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>
>>>>>>>>> @@ -152,3 +152,10 @@ Deprecation Notices
>>>>>>>>
>>>>>>>>>  * bus/vmbus: Starting DPDK 25.11, all the vmbus API defined in
>>>>>>>>
>>>>>>>>>    ``drivers/bus/vmbus/rte_bus_vmbus.h`` will become internal to
>>>> DPDK.
>>>>>>>>
>>>>>>>>>    Those API functions are used internally by DPDK core and
>>>>>>>>> netvsc
>>>> PMD.
>>>>>>>>
>>>>>>>>> +
>>>>>>>>
>>>>>>>>> +* dmadev: a new capability flag ``RTE_DMA_CAPA_INTER_DEV``
>> will
>>>>>>>>> +be added
>>>>>>>>
>>>>>>>>> +  to advertise DMA device's inter-device DMA copy capability.
>>>>>>>>> + To enable
>>>>>>>>
>>>>>>>>> +  this functionality, a few dmadev APIs will be added to
>>>>>>>>> + configure the DMA
>>>>>>>>
>>>>>>>>> +  access groups, facilitating coordinated data communication
>>>>>>>>> + between
>>>>>>> devices.
>>>>>>>>
>>>>>>>>> +  A new ``dev_idx`` field will be added to the ``struct
>>>>>>>>> + rte_dma_vchan_conf``
>>>>>>>>
>>>>>>>>> +  structure to configure a vchan for data transfers between any
>>>>>>>>> + two DMA
>>>>>>> devices.
>>>>>>>>
>>>>>>>>
>>>>>>
> 


^ permalink raw reply	[relevance 0%]

* [RFC 1/3] hash: move table of hash compare functions out of header
  @ 2025-08-21 20:35  7% ` Stephen Hemminger
  2025-08-22  9:05  0%   ` Morten Brørup
    1 sibling, 1 reply; 77+ results
From: Stephen Hemminger @ 2025-08-21 20:35 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin

Remove the definition of the compare jump table from the
header file so the internal details are not exposed.
Prevents future ABI breakage if new sizes are added.

Make other macros local where possible; the header should
only contain the exposed API.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/rte_cuckoo_hash.c | 74 ++++++++++++++++++++++++++++++-----
 lib/hash/rte_cuckoo_hash.h | 79 +-------------------------------------
 2 files changed, 65 insertions(+), 88 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..619fe0c691 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -25,14 +25,51 @@
 #include <rte_tailq.h>
 
 #include "rte_hash.h"
+#include "rte_cuckoo_hash.h"
 
-/* needs to be before rte_cuckoo_hash.h */
 RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
 #define RTE_LOGTYPE_HASH hash_logtype
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-#include "rte_cuckoo_hash.h"
+/* Macro to enable/disable run-time checking of function parameters */
+#if defined(RTE_LIBRTE_HASH_DEBUG)
+#define RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return retval; \
+} while (0)
+#else
+#define RETURN_IF_TRUE(cond, retval)
+#endif
+
+#if defined(RTE_ARCH_X86)
+#include "rte_cmp_x86.h"
+#endif
+
+#if defined(RTE_ARCH_ARM64)
+#include "rte_cmp_arm64.h"
+#endif
+
+/*
+ * All different options to select a key compare function,
+ * based on the key size and custom function.
+ * Not in rte_cuckoo_hash.h to avoid ABI issues.
+ */
+enum cmp_jump_table_case {
+	KEY_CUSTOM = 0,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	KEY_16_BYTES,
+	KEY_32_BYTES,
+	KEY_48_BYTES,
+	KEY_64_BYTES,
+	KEY_80_BYTES,
+	KEY_96_BYTES,
+	KEY_112_BYTES,
+	KEY_128_BYTES,
+#endif
+	KEY_OTHER_BYTES,
+	NUM_KEY_CMP_CASES,
+};
 
 /* Enum used to select the implementation of the signature comparison function to use
  * eg: a system supporting SVE might want to use a NEON or scalar implementation.
@@ -117,6 +154,25 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 	h->rte_hash_custom_cmp_eq = func;
 }
 
+/*
+ * Table storing all different key compare functions
+ * (multi-process supported)
+ */
+static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
+	[KEY_CUSTOM] = NULL,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
+	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
+	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,
+	[KEY_64_BYTES] = rte_hash_k64_cmp_eq,
+	[KEY_80_BYTES] = rte_hash_k80_cmp_eq,
+	[KEY_96_BYTES] = rte_hash_k96_cmp_eq,
+	[KEY_112_BYTES] = rte_hash_k112_cmp_eq,
+	[KEY_128_BYTES] = rte_hash_k128_cmp_eq,
+#endif
+	[KEY_OTHER_BYTES] = memcmp,
+};
+
 static inline int
 rte_hash_cmp_eq(const void *key1, const void *key2, const struct rte_hash *h)
 {
@@ -390,13 +446,13 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		goto err_unlock;
 	}
 
-/*
- * If x86 architecture is used, select appropriate compare function,
- * which may use x86 intrinsics, otherwise use memcmp
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/* Select function to compare keys */
 	switch (params->key_len) {
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	/*
+	 * If x86 architecture is used, select appropriate compare function,
+	 * which may use x86 intrinsics, otherwise use memcmp
+	 */
 	case 16:
 		h->cmp_jump_table_idx = KEY_16_BYTES;
 		break;
@@ -421,13 +477,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	case 128:
 		h->cmp_jump_table_idx = KEY_128_BYTES;
 		break;
+#endif
 	default:
 		/* If key is not multiple of 16, use generic memcmp */
 		h->cmp_jump_table_idx = KEY_OTHER_BYTES;
 	}
-#else
-	h->cmp_jump_table_idx = KEY_OTHER_BYTES;
-#endif
 
 	if (use_local_cache) {
 		local_free_slots = rte_zmalloc_socket(NULL,
diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h
index 26a992419a..16fe999c4c 100644
--- a/lib/hash/rte_cuckoo_hash.h
+++ b/lib/hash/rte_cuckoo_hash.h
@@ -12,86 +12,9 @@
 #define _RTE_CUCKOO_HASH_H_
 
 #include <stdalign.h>
-
-#if defined(RTE_ARCH_X86)
-#include "rte_cmp_x86.h"
-#endif
-
-#if defined(RTE_ARCH_ARM64)
-#include "rte_cmp_arm64.h"
-#endif
-
-/* Macro to enable/disable run-time checking of function parameters */
-#if defined(RTE_LIBRTE_HASH_DEBUG)
-#define RETURN_IF_TRUE(cond, retval) do { \
-	if (cond) \
-		return retval; \
-} while (0)
-#else
-#define RETURN_IF_TRUE(cond, retval)
-#endif
-
 #include <rte_hash_crc.h>
 #include <rte_jhash.h>
 
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_16_BYTES,
-	KEY_32_BYTES,
-	KEY_48_BYTES,
-	KEY_64_BYTES,
-	KEY_80_BYTES,
-	KEY_96_BYTES,
-	KEY_112_BYTES,
-	KEY_128_BYTES,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	rte_hash_k16_cmp_eq,
-	rte_hash_k32_cmp_eq,
-	rte_hash_k48_cmp_eq,
-	rte_hash_k64_cmp_eq,
-	rte_hash_k80_cmp_eq,
-	rte_hash_k96_cmp_eq,
-	rte_hash_k112_cmp_eq,
-	rte_hash_k128_cmp_eq,
-	memcmp
-};
-#else
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	memcmp
-};
-
-#endif
-
-
 /**
  * Number of items per bucket.
  * 8 is a tradeoff between performance and memory consumption.
@@ -189,7 +112,7 @@ struct __rte_cache_aligned rte_hash {
 	uint32_t hash_func_init_val;    /**< Init value used by hash_func. */
 	rte_hash_cmp_eq_t rte_hash_custom_cmp_eq;
 	/**< Custom function used to compare keys. */
-	enum cmp_jump_table_case cmp_jump_table_idx;
+	unsigned int cmp_jump_table_idx;
 	/**< Indicates which compare function to use. */
 	unsigned int sig_cmp_fn;
 	/**< Indicates which signature compare function to use. */
-- 
2.47.2


^ permalink raw reply	[relevance 7%]

* RE: [RFC 1/3] hash: move table of hash compare functions out of header
  2025-08-21 20:35  7% ` [RFC 1/3] hash: move table of hash compare functions out of header Stephen Hemminger
@ 2025-08-22  9:05  0%   ` Morten Brørup
  0 siblings, 0 replies; 77+ results
From: Morten Brørup @ 2025-08-22  9:05 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin

> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, 21 August 2025 22.35
> 
> Remove the definition of the compare jump table from the
> header file so the internal details are not exposed.
> Prevents future ABI breakage if new sizes are added.
> 
> Make other macros local if possible, header should
> only contain exposed API.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---

[...]

> +/*
> + * All different options to select a key compare function,
> + * based on the key size and custom function.
> + * Not in rte_cuckoo_hash.h to avoid ABI issues.
> + */
> +enum cmp_jump_table_case {
> +	KEY_CUSTOM = 0,
> +#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
> +	KEY_16_BYTES,
> +	KEY_32_BYTES,
> +	KEY_48_BYTES,
> +	KEY_64_BYTES,
> +	KEY_80_BYTES,
> +	KEY_96_BYTES,
> +	KEY_112_BYTES,
> +	KEY_128_BYTES,
> +#endif
> +	KEY_OTHER_BYTES,
> +	NUM_KEY_CMP_CASES,
> +};
> 
>  /* Enum used to select the implementation of the signature comparison
> function to use
>   * eg: a system supporting SVE might want to use a NEON or scalar
> implementation.
> @@ -117,6 +154,25 @@ void rte_hash_set_cmp_func(struct rte_hash *h,
> rte_hash_cmp_eq_t func)
>  	h->rte_hash_custom_cmp_eq = func;
>  }
> 
> +/*
> + * Table storing all different key compare functions
> + * (multi-process supported)
> + */
> +static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
> +	[KEY_CUSTOM] = NULL,
> +#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
> +	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
> +	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
> +	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,
> +	[KEY_64_BYTES] = rte_hash_k64_cmp_eq,
> +	[KEY_80_BYTES] = rte_hash_k80_cmp_eq,
> +	[KEY_96_BYTES] = rte_hash_k96_cmp_eq,
> +	[KEY_112_BYTES] = rte_hash_k112_cmp_eq,
> +	[KEY_128_BYTES] = rte_hash_k128_cmp_eq,
> +#endif
> +	[KEY_OTHER_BYTES] = memcmp,
> +};

Nice trick explicitly indexing these here; it reduces the risk of not matching the cmp_jump_table_case.

Consider adding static_assert() that RTE_DIM(cmp_jump_table) == NUM_KEY_CMP_CASES.
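
A minimal sketch of that check, assuming <assert.h> is included for C11 static_assert and
RTE_DIM() from rte_common.h:

	static_assert(RTE_DIM(cmp_jump_table) == NUM_KEY_CMP_CASES,
		      "cmp_jump_table must match enum cmp_jump_table_case");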

With or without suggested static_assert()...
Acked-by: Morten Brørup <mb@smartsharesystems.com>

Good cleanup!


^ permalink raw reply	[relevance 0%]

* [dpdk-dev v9 1/3] cryptodev: add ec points to sm2 op
  @ 2025-08-22 11:13  4% ` Kai Ji
  0 siblings, 0 replies; 77+ results
From: Kai Ji @ 2025-08-22 11:13 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji, Arkadiusz Kusztal, Akhil Goyal, Fan Zhang

When a PMD cannot support the full SM2 process but only the elliptic
curve computation, additional fields are needed to handle such a case.

Points C1 and kP were therefore added to the SM2 crypto operation struct.

Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 doc/guides/rel_notes/release_25_11.rst |  2 +
 lib/cryptodev/rte_crypto_asym.h        | 56 +++++++++++++++++++-------
 2 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index ccad6d89ff..b15d2e0e8f 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -100,6 +100,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cryptodev: The ``rte_crypto_sm2_op_param`` struct member to hold ciphertext
+  is changed to union data type. This change is to support partial SM2 calculation.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..039dcb85a7 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -654,6 +654,8 @@ enum rte_crypto_sm2_op_capa {
 	/**< Random number generator supported in SM2 ops. */
 	RTE_CRYPTO_SM2_PH,
 	/**< Prehash message before crypto op. */
+	RTE_CRYPTO_SM2_PARTIAL,
+	/**< Calculate elliptic curve points only. */
 };
 
 /**
@@ -681,20 +683,46 @@ struct rte_crypto_sm2_op_param {
 	 * will be overwritten by the PMD with the decrypted length.
 	 */
 
-	rte_crypto_param cipher;
-	/**<
-	 * Pointer to input data
-	 * - to be decrypted for SM2 private decrypt.
-	 *
-	 * Pointer to output data
-	 * - for SM2 public encrypt.
-	 * In this case the underlying array should have been allocated
-	 * with enough memory to hold ciphertext output (at least X bytes
-	 * for prime field curve of N bytes and for message M bytes,
-	 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
-	 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
-	 * be overwritten by the PMD with the encrypted length.
-	 */
+	union {
+		rte_crypto_param cipher;
+		/**<
+		 * Pointer to input data
+		 * - to be decrypted for SM2 private decrypt.
+		 *
+		 * Pointer to output data
+		 * - for SM2 public encrypt.
+		 * In this case the underlying array should have been allocated
+		 * with enough memory to hold ciphertext output (at least X bytes
+		 * for prime field curve of N bytes and for message M bytes,
+		 * where X = (C1 || C2 || C3) and computed based on SM2 RFC as
+		 * C1 (1 + N + N), C2 = M, C3 = N. The cipher.length field will
+		 * be overwritten by the PMD with the encrypted length.
+		 */
+		struct {
+			struct rte_crypto_ec_point c1;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * In the case of encryption, it is an output - point C1 = (x1,y1).
+			 * In the case of decryption, if is an input - point C1 = (x1,y1).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+			struct rte_crypto_ec_point kp;
+			/**<
+			 * This field is used only when PMD does not support the full
+			 * process of the SM2 encryption/decryption, but the elliptic
+			 * curve part only.
+			 *
+			 * It is an output in the encryption case, it is a point
+			 * [k]P = (x2,y2).
+			 *
+			 * Must be used along with the RTE_CRYPTO_SM2_PARTIAL flag.
+			 */
+		};
+	};
 
 	rte_crypto_uint id;
 	/**< The SM2 id used by signer and verifier. */
-- 
2.43.0


^ permalink raw reply	[relevance 4%]

* [PATCH v2 1/4] hash: move table of hash compare functions out of header
  @ 2025-08-22 18:19  7%   ` Stephen Hemminger
  0 siblings, 0 replies; 77+ results
From: Stephen Hemminger @ 2025-08-22 18:19 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Morten Brørup, Yipeng Wang,
	Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin

Remove the definition of the compare jump table from the
header file so the internal details are not exposed.
Prevents future ABI breakage if new sizes are added.

Make other macros local where possible; the header should
only contain the exposed API.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/hash/rte_cuckoo_hash.c | 74 ++++++++++++++++++++++++++++++-----
 lib/hash/rte_cuckoo_hash.h | 79 +-------------------------------------
 2 files changed, 65 insertions(+), 88 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..619fe0c691 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -25,14 +25,51 @@
 #include <rte_tailq.h>
 
 #include "rte_hash.h"
+#include "rte_cuckoo_hash.h"
 
-/* needs to be before rte_cuckoo_hash.h */
 RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
 #define RTE_LOGTYPE_HASH hash_logtype
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-#include "rte_cuckoo_hash.h"
+/* Macro to enable/disable run-time checking of function parameters */
+#if defined(RTE_LIBRTE_HASH_DEBUG)
+#define RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return retval; \
+} while (0)
+#else
+#define RETURN_IF_TRUE(cond, retval)
+#endif
+
+#if defined(RTE_ARCH_X86)
+#include "rte_cmp_x86.h"
+#endif
+
+#if defined(RTE_ARCH_ARM64)
+#include "rte_cmp_arm64.h"
+#endif
+
+/*
+ * All different options to select a key compare function,
+ * based on the key size and custom function.
+ * Not in rte_cuckoo_hash.h to avoid ABI issues.
+ */
+enum cmp_jump_table_case {
+	KEY_CUSTOM = 0,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	KEY_16_BYTES,
+	KEY_32_BYTES,
+	KEY_48_BYTES,
+	KEY_64_BYTES,
+	KEY_80_BYTES,
+	KEY_96_BYTES,
+	KEY_112_BYTES,
+	KEY_128_BYTES,
+#endif
+	KEY_OTHER_BYTES,
+	NUM_KEY_CMP_CASES,
+};
 
 /* Enum used to select the implementation of the signature comparison function to use
  * eg: a system supporting SVE might want to use a NEON or scalar implementation.
@@ -117,6 +154,25 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 	h->rte_hash_custom_cmp_eq = func;
 }
 
+/*
+ * Table storing all different key compare functions
+ * (multi-process supported)
+ */
+static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
+	[KEY_CUSTOM] = NULL,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
+	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
+	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,
+	[KEY_64_BYTES] = rte_hash_k64_cmp_eq,
+	[KEY_80_BYTES] = rte_hash_k80_cmp_eq,
+	[KEY_96_BYTES] = rte_hash_k96_cmp_eq,
+	[KEY_112_BYTES] = rte_hash_k112_cmp_eq,
+	[KEY_128_BYTES] = rte_hash_k128_cmp_eq,
+#endif
+	[KEY_OTHER_BYTES] = memcmp,
+};
+
 static inline int
 rte_hash_cmp_eq(const void *key1, const void *key2, const struct rte_hash *h)
 {
@@ -390,13 +446,13 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		goto err_unlock;
 	}
 
-/*
- * If x86 architecture is used, select appropriate compare function,
- * which may use x86 intrinsics, otherwise use memcmp
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/* Select function to compare keys */
 	switch (params->key_len) {
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	/*
+	 * If x86 architecture is used, select appropriate compare function,
+	 * which may use x86 intrinsics, otherwise use memcmp
+	 */
 	case 16:
 		h->cmp_jump_table_idx = KEY_16_BYTES;
 		break;
@@ -421,13 +477,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	case 128:
 		h->cmp_jump_table_idx = KEY_128_BYTES;
 		break;
+#endif
 	default:
 		/* If key is not multiple of 16, use generic memcmp */
 		h->cmp_jump_table_idx = KEY_OTHER_BYTES;
 	}
-#else
-	h->cmp_jump_table_idx = KEY_OTHER_BYTES;
-#endif
 
 	if (use_local_cache) {
 		local_free_slots = rte_zmalloc_socket(NULL,
diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h
index 26a992419a..16fe999c4c 100644
--- a/lib/hash/rte_cuckoo_hash.h
+++ b/lib/hash/rte_cuckoo_hash.h
@@ -12,86 +12,9 @@
 #define _RTE_CUCKOO_HASH_H_
 
 #include <stdalign.h>
-
-#if defined(RTE_ARCH_X86)
-#include "rte_cmp_x86.h"
-#endif
-
-#if defined(RTE_ARCH_ARM64)
-#include "rte_cmp_arm64.h"
-#endif
-
-/* Macro to enable/disable run-time checking of function parameters */
-#if defined(RTE_LIBRTE_HASH_DEBUG)
-#define RETURN_IF_TRUE(cond, retval) do { \
-	if (cond) \
-		return retval; \
-} while (0)
-#else
-#define RETURN_IF_TRUE(cond, retval)
-#endif
-
 #include <rte_hash_crc.h>
 #include <rte_jhash.h>
 
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_16_BYTES,
-	KEY_32_BYTES,
-	KEY_48_BYTES,
-	KEY_64_BYTES,
-	KEY_80_BYTES,
-	KEY_96_BYTES,
-	KEY_112_BYTES,
-	KEY_128_BYTES,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	rte_hash_k16_cmp_eq,
-	rte_hash_k32_cmp_eq,
-	rte_hash_k48_cmp_eq,
-	rte_hash_k64_cmp_eq,
-	rte_hash_k80_cmp_eq,
-	rte_hash_k96_cmp_eq,
-	rte_hash_k112_cmp_eq,
-	rte_hash_k128_cmp_eq,
-	memcmp
-};
-#else
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	memcmp
-};
-
-#endif
-
-
 /**
  * Number of items per bucket.
  * 8 is a tradeoff between performance and memory consumption.
@@ -189,7 +112,7 @@ struct __rte_cache_aligned rte_hash {
 	uint32_t hash_func_init_val;    /**< Init value used by hash_func. */
 	rte_hash_cmp_eq_t rte_hash_custom_cmp_eq;
 	/**< Custom function used to compare keys. */
-	enum cmp_jump_table_case cmp_jump_table_idx;
+	unsigned int cmp_jump_table_idx;
 	/**< Indicates which compare function to use. */
 	unsigned int sig_cmp_fn;
 	/**< Indicates which signature compare function to use. */
-- 
2.47.2


^ permalink raw reply	[relevance 7%]

* [PATCH v3 1/4] hash: move table of hash compare functions out of header
  @ 2025-08-26 14:48  7%   ` Stephen Hemminger
  0 siblings, 0 replies; 77+ results
From: Stephen Hemminger @ 2025-08-26 14:48 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Morten Brørup, Yipeng Wang,
	Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin

Remove the definition of the compare jump table from the
header file so the internal details are not exposed.
Prevents future ABI breakage if new sizes are added.

Make other macros local where possible; the header should
only contain the exposed API.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/hash/rte_cuckoo_hash.c | 74 ++++++++++++++++++++++++++++++-----
 lib/hash/rte_cuckoo_hash.h | 79 +-------------------------------------
 2 files changed, 65 insertions(+), 88 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..619fe0c691 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -25,14 +25,51 @@
 #include <rte_tailq.h>
 
 #include "rte_hash.h"
+#include "rte_cuckoo_hash.h"
 
-/* needs to be before rte_cuckoo_hash.h */
 RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
 #define RTE_LOGTYPE_HASH hash_logtype
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-#include "rte_cuckoo_hash.h"
+/* Macro to enable/disable run-time checking of function parameters */
+#if defined(RTE_LIBRTE_HASH_DEBUG)
+#define RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return retval; \
+} while (0)
+#else
+#define RETURN_IF_TRUE(cond, retval)
+#endif
+
+#if defined(RTE_ARCH_X86)
+#include "rte_cmp_x86.h"
+#endif
+
+#if defined(RTE_ARCH_ARM64)
+#include "rte_cmp_arm64.h"
+#endif
+
+/*
+ * All different options to select a key compare function,
+ * based on the key size and custom function.
+ * Not in rte_cuckoo_hash.h to avoid ABI issues.
+ */
+enum cmp_jump_table_case {
+	KEY_CUSTOM = 0,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	KEY_16_BYTES,
+	KEY_32_BYTES,
+	KEY_48_BYTES,
+	KEY_64_BYTES,
+	KEY_80_BYTES,
+	KEY_96_BYTES,
+	KEY_112_BYTES,
+	KEY_128_BYTES,
+#endif
+	KEY_OTHER_BYTES,
+	NUM_KEY_CMP_CASES,
+};
 
 /* Enum used to select the implementation of the signature comparison function to use
  * eg: a system supporting SVE might want to use a NEON or scalar implementation.
@@ -117,6 +154,25 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 	h->rte_hash_custom_cmp_eq = func;
 }
 
+/*
+ * Table storing all different key compare functions
+ * (multi-process supported)
+ */
+static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
+	[KEY_CUSTOM] = NULL,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
+	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
+	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,
+	[KEY_64_BYTES] = rte_hash_k64_cmp_eq,
+	[KEY_80_BYTES] = rte_hash_k80_cmp_eq,
+	[KEY_96_BYTES] = rte_hash_k96_cmp_eq,
+	[KEY_112_BYTES] = rte_hash_k112_cmp_eq,
+	[KEY_128_BYTES] = rte_hash_k128_cmp_eq,
+#endif
+	[KEY_OTHER_BYTES] = memcmp,
+};
+
 static inline int
 rte_hash_cmp_eq(const void *key1, const void *key2, const struct rte_hash *h)
 {
@@ -390,13 +446,13 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		goto err_unlock;
 	}
 
-/*
- * If x86 architecture is used, select appropriate compare function,
- * which may use x86 intrinsics, otherwise use memcmp
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/* Select function to compare keys */
 	switch (params->key_len) {
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	/*
+	 * If x86 architecture is used, select appropriate compare function,
+	 * which may use x86 intrinsics, otherwise use memcmp
+	 */
 	case 16:
 		h->cmp_jump_table_idx = KEY_16_BYTES;
 		break;
@@ -421,13 +477,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	case 128:
 		h->cmp_jump_table_idx = KEY_128_BYTES;
 		break;
+#endif
 	default:
 		/* If key is not multiple of 16, use generic memcmp */
 		h->cmp_jump_table_idx = KEY_OTHER_BYTES;
 	}
-#else
-	h->cmp_jump_table_idx = KEY_OTHER_BYTES;
-#endif
 
 	if (use_local_cache) {
 		local_free_slots = rte_zmalloc_socket(NULL,
diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h
index 26a992419a..16fe999c4c 100644
--- a/lib/hash/rte_cuckoo_hash.h
+++ b/lib/hash/rte_cuckoo_hash.h
@@ -12,86 +12,9 @@
 #define _RTE_CUCKOO_HASH_H_
 
 #include <stdalign.h>
-
-#if defined(RTE_ARCH_X86)
-#include "rte_cmp_x86.h"
-#endif
-
-#if defined(RTE_ARCH_ARM64)
-#include "rte_cmp_arm64.h"
-#endif
-
-/* Macro to enable/disable run-time checking of function parameters */
-#if defined(RTE_LIBRTE_HASH_DEBUG)
-#define RETURN_IF_TRUE(cond, retval) do { \
-	if (cond) \
-		return retval; \
-} while (0)
-#else
-#define RETURN_IF_TRUE(cond, retval)
-#endif
-
 #include <rte_hash_crc.h>
 #include <rte_jhash.h>
 
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_16_BYTES,
-	KEY_32_BYTES,
-	KEY_48_BYTES,
-	KEY_64_BYTES,
-	KEY_80_BYTES,
-	KEY_96_BYTES,
-	KEY_112_BYTES,
-	KEY_128_BYTES,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	rte_hash_k16_cmp_eq,
-	rte_hash_k32_cmp_eq,
-	rte_hash_k48_cmp_eq,
-	rte_hash_k64_cmp_eq,
-	rte_hash_k80_cmp_eq,
-	rte_hash_k96_cmp_eq,
-	rte_hash_k112_cmp_eq,
-	rte_hash_k128_cmp_eq,
-	memcmp
-};
-#else
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	memcmp
-};
-
-#endif
-
-
 /**
  * Number of items per bucket.
  * 8 is a tradeoff between performance and memory consumption.
@@ -189,7 +112,7 @@ struct __rte_cache_aligned rte_hash {
 	uint32_t hash_func_init_val;    /**< Init value used by hash_func. */
 	rte_hash_cmp_eq_t rte_hash_custom_cmp_eq;
 	/**< Custom function used to compare keys. */
-	enum cmp_jump_table_case cmp_jump_table_idx;
+	unsigned int cmp_jump_table_idx;
 	/**< Indicates which compare function to use. */
 	unsigned int sig_cmp_fn;
 	/**< Indicates which signature compare function to use. */
-- 
2.47.2


^ permalink raw reply	[relevance 7%]

* [PATCH v1] pcapng: allow any protocol link type for the interface block
@ 2025-08-27 15:38  3% Schneide
  2025-08-27 22:32  3% ` [PATCH v2] " Schneide
  0 siblings, 1 reply; 77+ results
From: Schneide @ 2025-08-27 15:38 UTC (permalink / raw)
  To: dev, Thomas Monjalon, Reshma Pattan, Stephen Hemminger,
	Jerin Jacob, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan
  Cc: Dylan Schneider

From: Dylan Schneider <schneide@qti.qualcomm.com>

Allow the user to specify the protocol link type when creating pcapng files.
This change is needed to specify the protocol type in the pcapng file;
DLT_EN10MB specifies Ethernet packets only. This will allow dissectors
for other protocols to be used on files generated by the pcapng library.

Includes a breaking change to rte_pcapng_add_interface to add a link_type
parameter. Existing calls to the function have been updated to pass
DLT_EN10MB for the link type argument.
DLT_EN10MB for the link type argument.

Fixes: d1da6d0d04c7 ("pcapng: require per-interface information")
Signed-off-by: Dylan Schneider <schneide@qti.qualcomm.com>
Cc: stephen@networkplumber.org
---
 .mailmap                               |  1 +
 app/dumpcap/main.c                     |  5 +++--
 app/test/test_pcapng.c                 |  8 ++++----
 doc/guides/rel_notes/release_25_11.rst |  4 ++++
 lib/graph/graph_pcap.c                 |  2 +-
 lib/pcapng/meson.build                 |  2 ++
 lib/pcapng/rte_pcapng.c                | 21 +++++++++++++++------
 lib/pcapng/rte_pcapng.h                |  7 ++++++-
 8 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/.mailmap b/.mailmap
index 34a99f93a1..1a003778b2 100644
--- a/.mailmap
+++ b/.mailmap
@@ -402,6 +402,7 @@ Dukai Yuan <dukaix.yuan@intel.com>
 Dumitru Ceara <dceara@redhat.com> <dumitru.ceara@gmail.com>
 Duncan Bellamy <dunk@denkimushi.com>
 Dustin Lundquist <dustin@null-ptr.net>
+Dylan Schneider <schneide@qti.qualcomm.com>
 Dzmitry Sautsa <dzmitryx.sautsa@intel.com>
 Ed Czeck <ed.czeck@atomicrules.com>
 Eduard Serra <eserra@vmware.com>
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 3d3c0dbc66..e5ba36350b 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -800,8 +800,9 @@ static dumpcap_out_t create_output(void)
 		free(os);
 
 		TAILQ_FOREACH(intf, &interfaces, next) {
-			if (rte_pcapng_add_interface(ret.pcapng, intf->port, intf->ifname,
-						     intf->ifdescr, intf->opts.filter) < 0)
+			if (rte_pcapng_add_interface(ret.pcapng, intf->port, DLT_EN10MB,
+						     intf->ifname, intf->ifdescr,
+						     intf->opts.filter) < 0)
 				rte_exit(EXIT_FAILURE, "rte_pcapng_add_interface %u failed\n",
 					intf->port);
 		}
diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c
index 8f2cff36c3..bcf99724fa 100644
--- a/app/test/test_pcapng.c
+++ b/app/test/test_pcapng.c
@@ -345,7 +345,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface to the file */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u\n", port_id);
@@ -353,7 +353,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface with ifname and ifdescr */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       "myeth", "Some long description", NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u with ifname\n", port_id);
@@ -361,7 +361,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface with filter */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, "tcp port 8080");
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u with filter\n", port_id);
@@ -406,7 +406,7 @@ test_write_packets(void)
 	}
 
 	/* Add interface to the file */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u\n", port_id);
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index ccad6d89ff..b1f75a489c 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -84,6 +84,10 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* pcapng: Changed the API for adding interfaces to include a link type argument.
+  The link type was previously hardcoded to the Ethernet link type in the API.
+  The new argument is added to ``rte_pcapng_add_interface``.
+  The function is versioned to retain binary compatibility until the next LTS release.
 
 ABI Changes
 -----------
diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c
index 89525f1220..08dcda0d28 100644
--- a/lib/graph/graph_pcap.c
+++ b/lib/graph/graph_pcap.c
@@ -117,7 +117,7 @@ graph_pcap_file_open(const char *filename)
 
 	/* Add the configured interfaces as possible capture ports */
 	RTE_ETH_FOREACH_DEV(portid) {
-		ret = rte_pcapng_add_interface(pcapng_fd, portid,
+		ret = rte_pcapng_add_interface(pcapng_fd, portid, DLT_EN10MB,
 					       NULL, NULL, NULL);
 		if (ret < 0) {
 			graph_err("Graph rte_pcapng_add_interface port %u failed: %d",
diff --git a/lib/pcapng/meson.build b/lib/pcapng/meson.build
index 4549925d41..3aa7ba5155 100644
--- a/lib/pcapng/meson.build
+++ b/lib/pcapng/meson.build
@@ -5,3 +5,5 @@ sources = files('rte_pcapng.c')
 headers = files('rte_pcapng.h')
 
 deps += ['ethdev']
+
+use_function_versioning = true
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 2a07b4c1f5..1ff8d14d08 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -200,11 +200,10 @@ pcapng_section_block(rte_pcapng_t *self,
 }
 
 /* Write an interface block for a DPDK port */
-RTE_EXPORT_SYMBOL(rte_pcapng_add_interface)
-int
-rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
-			 const char *ifname, const char *ifdescr,
-			 const char *filter)
+RTE_DEFAULT_SYMBOL(27, int, rte_pcapng_add_interface,
+		   (rte_pcapng_t *self, uint16_t port, uint16_t link_type,
+		   const char *ifname, const char *ifdescr,
+		   const char *filter))
 {
 	struct pcapng_interface_block *hdr;
 	struct rte_eth_dev_info dev_info;
@@ -274,7 +273,7 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 	hdr = (struct pcapng_interface_block *)buf;
 	*hdr = (struct pcapng_interface_block) {
 		.block_type = PCAPNG_INTERFACE_BLOCK,
-		.link_type = 1,		/* DLT_EN10MB - Ethernet */
+		.link_type = link_type,
 		.block_length = len,
 	};
 
@@ -319,6 +318,16 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 	return write(self->outfd, buf, len);
 }
 
+RTE_VERSION_SYMBOL(26, int, rte_pcapng_add_interface,
+		   (rte_pcapng_t *self, uint16_t port,
+		   const char *ifname, const char *ifdescr,
+		   const char *filter))
+{
+	/* Call the new version with a default link_type (Ethernet) */
+	return rte_pcapng_add_interface(self, port, DLT_EN10MB,
+					ifname, ifdescr, filter);
+}
+
 /*
  * Write an Interface statistics block at the end of capture.
  */
diff --git a/lib/pcapng/rte_pcapng.h b/lib/pcapng/rte_pcapng.h
index 48f2b57564..1b3f9b9464 100644
--- a/lib/pcapng/rte_pcapng.h
+++ b/lib/pcapng/rte_pcapng.h
@@ -28,6 +28,9 @@
 extern "C" {
 #endif
 
+/* default link type for ethernet traffic */
+#define DLT_EN10MB 1
+
 /* Opaque handle used for functions in this library. */
 typedef struct rte_pcapng rte_pcapng_t;
 
@@ -71,6 +74,8 @@ rte_pcapng_close(rte_pcapng_t *self);
  *  The handle to the packet capture file
  * @param port
  *  The Ethernet port to report stats on.
+ * @param link_type
+ *  The link type (e.g., DLT_EN10MB).
  * @param ifname (optional)
  *  Interface name to record in the file.
  *  If not specified, name will be constructed from port
@@ -84,7 +89,7 @@ rte_pcapng_close(rte_pcapng_t *self);
  * must be added.
  */
 int
-rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
+rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port, uint16_t link_type,
 			 const char *ifname, const char *ifdescr,
 			 const char *filter);
 
-- 
2.27.0


^ permalink raw reply	[relevance 3%]

* [PATCH v2] pcapng: allow any protocol link type for the interface block
  2025-08-27 15:38  3% [PATCH v1] pcapng: allow any protocol link type for the interface block Schneide
@ 2025-08-27 22:32  3% ` Schneide
  0 siblings, 0 replies; 77+ results
From: Schneide @ 2025-08-27 22:32 UTC (permalink / raw)
  To: dev, Thomas Monjalon, Reshma Pattan, Stephen Hemminger,
	Jerin Jacob, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan
  Cc: Dylan Schneider

From: Dylan Schneider <schneide@qti.qualcomm.com>

Allow the user to specify the protocol link type when creating pcapng
files. This is needed because DLT_EN10MB covers Ethernet packets only;
recording the actual link type in the file lets dissectors for other
protocols be used on captures generated by the pcapng library.

This is a breaking change to rte_pcapng_add_interface, which gains a
link_type parameter. Existing calls to the function have been updated
to pass DLT_EN10MB for the link type argument.

Fixes: d1da6d0d04c7 ("pcapng: require per-interface information")
Signed-off-by: Dylan Schneider <schneide@qti.qualcomm.com>
Cc: stephen@networkplumber.org
---
v2:
* Remove function versioning
* Define DLT_EN10MB macro only if it has not been defined, as sketched below
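
The guard is meant to keep the header friendly to applications that
also pull in libpcap, roughly as in this sketch (assumption: such an
application includes <pcap/pcap.h>, which already provides DLT_EN10MB):

    #include <pcap/pcap.h>   /* already defines DLT_EN10MB as 1 */
    #include <rte_pcapng.h>  /* with v2, defines it only if still undefined */

    int
    main(void)
    {
            /* Both headers agree on the Ethernet link type value. */
            return (DLT_EN10MB == 1) ? 0 : 1;
    }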

 .mailmap                               |  1 +
 app/dumpcap/main.c                     |  5 +++--
 app/test/test_pcapng.c                 |  8 ++++----
 doc/guides/rel_notes/release_25_11.rst |  4 ++++
 lib/graph/graph_pcap.c                 |  2 +-
 lib/pcapng/rte_pcapng.c                |  4 ++--
 lib/pcapng/rte_pcapng.h                | 10 +++++++++-
 7 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/.mailmap b/.mailmap
index 34a99f93a1..1a003778b2 100644
--- a/.mailmap
+++ b/.mailmap
@@ -402,6 +402,7 @@ Dukai Yuan <dukaix.yuan@intel.com>
 Dumitru Ceara <dceara@redhat.com> <dumitru.ceara@gmail.com>
 Duncan Bellamy <dunk@denkimushi.com>
 Dustin Lundquist <dustin@null-ptr.net>
+Dylan Schneider <schneide@qti.qualcomm.com>
 Dzmitry Sautsa <dzmitryx.sautsa@intel.com>
 Ed Czeck <ed.czeck@atomicrules.com>
 Eduard Serra <eserra@vmware.com>
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 3d3c0dbc66..e5ba36350b 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -800,8 +800,9 @@ static dumpcap_out_t create_output(void)
 		free(os);
 
 		TAILQ_FOREACH(intf, &interfaces, next) {
-			if (rte_pcapng_add_interface(ret.pcapng, intf->port, intf->ifname,
-						     intf->ifdescr, intf->opts.filter) < 0)
+			if (rte_pcapng_add_interface(ret.pcapng, intf->port, DLT_EN10MB,
+						     intf->ifname, intf->ifdescr,
+						     intf->opts.filter) < 0)
 				rte_exit(EXIT_FAILURE, "rte_pcapng_add_interface %u failed\n",
 					intf->port);
 		}
diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c
index 8f2cff36c3..bcf99724fa 100644
--- a/app/test/test_pcapng.c
+++ b/app/test/test_pcapng.c
@@ -345,7 +345,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface to the file */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u\n", port_id);
@@ -353,7 +353,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface with ifname and ifdescr */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       "myeth", "Some long description", NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u with ifname\n", port_id);
@@ -361,7 +361,7 @@ test_add_interface(void)
 	}
 
 	/* Add interface with filter */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, "tcp port 8080");
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u with filter\n", port_id);
@@ -406,7 +406,7 @@ test_write_packets(void)
 	}
 
 	/* Add interface to the file */
-	ret = rte_pcapng_add_interface(pcapng, port_id,
+	ret = rte_pcapng_add_interface(pcapng, port_id, DLT_EN10MB,
 				       NULL, NULL, NULL);
 	if (ret < 0) {
 		fprintf(stderr, "can not add port %u\n", port_id);
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index ccad6d89ff..b1f75a489c 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -84,6 +84,10 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* pcapng: Changed the API for adding interfaces to include a link type argument.
+  The link type was previously hardcoded to the Ethernet link type in the API.
+  The new argument is added to ``rte_pcapng_add_interface``.
+  Existing callers must be updated to pass ``DLT_EN10MB`` to keep the previous behaviour.
 
 ABI Changes
 -----------
diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c
index 89525f1220..08dcda0d28 100644
--- a/lib/graph/graph_pcap.c
+++ b/lib/graph/graph_pcap.c
@@ -117,7 +117,7 @@ graph_pcap_file_open(const char *filename)
 
 	/* Add the configured interfaces as possible capture ports */
 	RTE_ETH_FOREACH_DEV(portid) {
-		ret = rte_pcapng_add_interface(pcapng_fd, portid,
+		ret = rte_pcapng_add_interface(pcapng_fd, portid, DLT_EN10MB,
 					       NULL, NULL, NULL);
 		if (ret < 0) {
 			graph_err("Graph rte_pcapng_add_interface port %u failed: %d",
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 2a07b4c1f5..21bc94cea1 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -202,7 +202,7 @@ pcapng_section_block(rte_pcapng_t *self,
 /* Write an interface block for a DPDK port */
 RTE_EXPORT_SYMBOL(rte_pcapng_add_interface)
 int
-rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
+rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port, uint16_t link_type,
 			 const char *ifname, const char *ifdescr,
 			 const char *filter)
 {
@@ -274,7 +274,7 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 	hdr = (struct pcapng_interface_block *)buf;
 	*hdr = (struct pcapng_interface_block) {
 		.block_type = PCAPNG_INTERFACE_BLOCK,
-		.link_type = 1,		/* DLT_EN10MB - Ethernet */
+		.link_type = link_type,
 		.block_length = len,
 	};
 
diff --git a/lib/pcapng/rte_pcapng.h b/lib/pcapng/rte_pcapng.h
index 48f2b57564..c51c63fccf 100644
--- a/lib/pcapng/rte_pcapng.h
+++ b/lib/pcapng/rte_pcapng.h
@@ -28,6 +28,12 @@
 extern "C" {
 #endif
 
+
+/* default link type for ethernet traffic */
+#ifndef DLT_EN10MB
+#define DLT_EN10MB 1
+#endif
+
 /* Opaque handle used for functions in this library. */
 typedef struct rte_pcapng rte_pcapng_t;
 
@@ -71,6 +77,8 @@ rte_pcapng_close(rte_pcapng_t *self);
  *  The handle to the packet capture file
  * @param port
  *  The Ethernet port to report stats on.
+ * @param link_type
+ *  The link type (e.g., DLT_EN10MB).
  * @param ifname (optional)
  *  Interface name to record in the file.
  *  If not specified, name will be constructed from port
@@ -84,7 +92,7 @@ rte_pcapng_close(rte_pcapng_t *self);
  * must be added.
  */
 int
-rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
+rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port, uint16_t link_type,
 			 const char *ifname, const char *ifdescr,
 			 const char *filter);
 
-- 
2.27.0


^ permalink raw reply	[relevance 3%]

* [PATCH] dpdk: support quick jump to API definition
@ 2025-08-28  2:46  1% Chengwen Feng
  0 siblings, 0 replies; 77+ results
From: Chengwen Feng @ 2025-08-28  2:46 UTC (permalink / raw)
  To: thomas, david.marchand; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL markers are placed just before the API
definitions, but they don't end with a semicolon. As a result, some
IDEs cannot identify the APIs and cannot quickly jump to their
definition.

This commit adds a semicolon to the end of the above
RTE_EXPORT_XXX_SYMBOL markers.

It also changes gen-version-map.py so that it only matches
RTE_EXPORT_XXX_SYMBOL markers that end with a semicolon.
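
For example (illustrative sketch only, rte_dummy_create is a made-up
function), an exported definition in a DPDK source file now looks like:

    #include <eal_export.h>

    /* The trailing ';' is what the updated gen-version-map.py patterns expect. */
    RTE_EXPORT_SYMBOL(rte_dummy_create);
    int
    rte_dummy_create(void)
    {
            return 0;
    }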

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 buildtools/gen-version-map.py                 |    6 +-
 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_glue.c         |    2 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mlx5/windows/mlx5_glue.c       |    2 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/node_mbuf_dynfield.c                 |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 401 files changed, 4177 insertions(+), 4177 deletions(-)

diff --git a/buildtools/gen-version-map.py b/buildtools/gen-version-map.py
index 57e08a8c0f..fb7f7f2c59 100755
--- a/buildtools/gen-version-map.py
+++ b/buildtools/gen-version-map.py
@@ -9,10 +9,10 @@
 
 # From eal_export.h
 export_exp_sym_regexp = re.compile(
-    r"^RTE_EXPORT_EXPERIMENTAL_SYMBOL\(([^,]+), ([0-9]+.[0-9]+)\)"
+    r"^RTE_EXPORT_EXPERIMENTAL_SYMBOL\(([^,]+), ([0-9]+.[0-9]+)\);"
 )
-export_int_sym_regexp = re.compile(r"^RTE_EXPORT_INTERNAL_SYMBOL\(([^)]+)\)")
-export_sym_regexp = re.compile(r"^RTE_EXPORT_SYMBOL\(([^)]+)\)")
+export_int_sym_regexp = re.compile(r"^RTE_EXPORT_INTERNAL_SYMBOL\(([^)]+)\);")
+export_sym_regexp = re.compile(r"^RTE_EXPORT_SYMBOL\(([^)]+)\);")
 ver_sym_regexp = re.compile(r"^RTE_VERSION_SYMBOL\(([^,]+), [^,]+, ([^,]+),")
 ver_exp_sym_regexp = re.compile(r"^RTE_VERSION_EXPERIMENTAL_SYMBOL\([^,]+, ([^,]+),")
 default_sym_regexp = re.compile(r"^RTE_DEFAULT_SYMBOL\(([^,]+), [^,]+, ([^,]+),")
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index b7f02f56e1..7160a5dc96 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -4636,7 +4636,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acc_configure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acc_configure, 22.11);
 int
 rte_acc_configure(const char *dev_name, struct rte_acc_conf *conf)
 {
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 82cf98da5d..4bc6acfd9f 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -3367,7 +3367,7 @@ static int agx100_configure(const char *dev_name, const struct rte_fpga_5gnr_fec
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_5gnr_fec_configure, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_5gnr_fec_configure, 20.11);
 int rte_fpga_5gnr_fec_configure(const char *dev_name, const struct rte_fpga_5gnr_fec_conf *conf)
 {
 	struct rte_bbdev *bbdev = rte_bbdev_get_named_dev(dev_name);
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 4723a51dcf..73c98afd9a 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -2453,7 +2453,7 @@ set_default_fpga_conf(struct rte_fpga_lte_fec_conf *def_conf)
 }
 
 /* Initial configuration of FPGA LTE FEC device */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_lte_fec_configure, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_lte_fec_configure, 20.11);
 int
 rte_fpga_lte_fec_configure(const char *dev_name,
 		const struct rte_fpga_lte_fec_conf *conf)
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index ac766e283e..15f4440061 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -261,7 +261,7 @@ auxiliary_parse(const char *name, void *addr)
 }
 
 /* Register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_register);
 void
 rte_auxiliary_register(struct rte_auxiliary_driver *driver)
 {
@@ -269,7 +269,7 @@ rte_auxiliary_register(struct rte_auxiliary_driver *driver)
 }
 
 /* Unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_unregister);
 void
 rte_auxiliary_unregister(struct rte_auxiliary_driver *driver)
 {
diff --git a/drivers/bus/cdx/cdx.c b/drivers/bus/cdx/cdx.c
index 729d54337c..d492e08931 100644
--- a/drivers/bus/cdx/cdx.c
+++ b/drivers/bus/cdx/cdx.c
@@ -140,13 +140,13 @@ cdx_get_kernel_driver_by_path(const char *filename, char *driver_name,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_map_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_map_device);
 int rte_cdx_map_device(struct rte_cdx_device *dev)
 {
 	return cdx_vfio_map_resource(dev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unmap_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unmap_device);
 void rte_cdx_unmap_device(struct rte_cdx_device *dev)
 {
 	cdx_vfio_unmap_resource(dev);
@@ -481,7 +481,7 @@ cdx_parse(const char *name, void *addr)
 }
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_register);
 void
 rte_cdx_register(struct rte_cdx_driver *driver)
 {
@@ -490,7 +490,7 @@ rte_cdx_register(struct rte_cdx_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unregister);
 void
 rte_cdx_unregister(struct rte_cdx_driver *driver)
 {
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 37e0c424d4..ef7e33145d 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -551,7 +551,7 @@ cdx_vfio_map_resource(struct rte_cdx_device *dev)
 		return cdx_vfio_map_resource_secondary(dev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_enable);
 int
 rte_cdx_vfio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -586,7 +586,7 @@ rte_cdx_vfio_intr_enable(const struct rte_intr_handle *intr_handle)
 }
 
 /* disable MSI interrupts */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_disable);
 int
 rte_cdx_vfio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -614,7 +614,7 @@ rte_cdx_vfio_intr_disable(const struct rte_intr_handle *intr_handle)
 }
 
 /* Enable Bus Mastering */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_enable);
 int
 rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 {
@@ -660,7 +660,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 }
 
 /* Disable Bus Mastering */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_disable);
 int
 rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
 {
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 5420733019..fd391dbb8e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -60,19 +60,19 @@ struct netcfg_info *dpaa_netcfg;
 /* define a variable to hold the portal_key, once created.*/
 static pthread_key_t dpaa_portal_key;
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_svr_family)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_svr_family);
 unsigned int dpaa_svr_family;
 
 #define FSL_DPAA_BUS_NAME	dpaa_bus
 
-RTE_EXPORT_INTERNAL_SYMBOL(per_lcore_dpaa_io)
+RTE_EXPORT_INTERNAL_SYMBOL(per_lcore_dpaa_io);
 RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io);
 
 #define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield"
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_seqn_dynfield_offset);
 int dpaa_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_eth_port_cfg)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_eth_port_cfg);
 struct fm_eth_port_cfg *
 dpaa_get_eth_port_cfg(int dev_id)
 {
@@ -320,7 +320,7 @@ dpaa_clean_device_list(void)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_init);
 int rte_dpaa_portal_init(void *arg)
 {
 	static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = {
@@ -399,7 +399,7 @@ int rte_dpaa_portal_init(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_init);
 int
 rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
 {
@@ -428,7 +428,7 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_close)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_close);
 int rte_dpaa_portal_fq_close(struct qman_fq *fq)
 {
 	return fsl_qman_fq_portal_destroy(fq->qp);
@@ -556,7 +556,7 @@ rte_dpaa_bus_scan(void)
 }
 
 /* register a dpaa bus based dpaa driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_register);
 void
 rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 {
@@ -568,7 +568,7 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 }
 
 /* un-register a dpaa bus based dpaa driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_unregister);
 void
 rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
 {
diff --git a/drivers/bus/dpaa/dpaa_bus_base_symbols.c b/drivers/bus/dpaa/dpaa_bus_base_symbols.c
index 522cdca27e..d829d48381 100644
--- a/drivers/bus/dpaa/dpaa_bus_base_symbols.c
+++ b/drivers/bus/dpaa/dpaa_bus_base_symbols.c
@@ -5,96 +5,96 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(fman_ip_rev)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_hi)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_lo)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_mcast_filter_table)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_reset_mcast_filter_table)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_clear_mac_addr)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_add_mac_addr)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get_all)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_get_all)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_enable_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_disable_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_rx_status)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_bp)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_threshold)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_threshold)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_quanta)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_quanta)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fdoff)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_err_fqid)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_ic_params)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fdoff)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_maxfrm)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_maxfrm)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_sg_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_sg)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_discard_rx_errors)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_receive_rx_errors)
-RTE_EXPORT_INTERNAL_SYMBOL(netcfg_acquire)
-RTE_EXPORT_INTERNAL_SYMBOL(netcfg_release)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_new_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_free_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_get_params)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_release)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_acquire)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_query_free_buffers)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_fqid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_reserve_fqid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_pool_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_cgrid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_release_cgrid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_ioctl_version_number)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_speed)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_restart_link_autoneg)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_set_fq_lookup_table)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_register_cb)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_poll_free)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_poll_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_clear_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dqrr_consume)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_static_dequeue_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dca_index)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_create_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_fqid)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_state)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_init_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_retire_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_oos_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_np)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_frm_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_set_vdq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_volatile_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_modify_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_create_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_delete_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_caam)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_fd)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(fsl_qman_fq_portal_create)
+RTE_EXPORT_INTERNAL_SYMBOL(fman_ip_rev);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_hi);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_lo);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_mcast_filter_table);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_reset_mcast_filter_table);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_clear_mac_addr);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_add_mac_addr);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get_all);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_get_all);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_enable_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_disable_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_rx_status);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_bp);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_threshold);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_threshold);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_quanta);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_quanta);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fdoff);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_err_fqid);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_ic_params);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fdoff);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_maxfrm);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_maxfrm);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_sg_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_sg);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_discard_rx_errors);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_receive_rx_errors);
+RTE_EXPORT_INTERNAL_SYMBOL(netcfg_acquire);
+RTE_EXPORT_INTERNAL_SYMBOL(netcfg_release);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_new_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_free_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_get_params);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_release);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_acquire);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_query_free_buffers);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_fqid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_reserve_fqid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_pool_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_cgrid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_release_cgrid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_ioctl_version_number);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_speed);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_restart_link_autoneg);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_set_fq_lookup_table);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_register_cb);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_poll_free);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_poll_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_clear_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dqrr_consume);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_static_dequeue_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dca_index);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_create_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_fqid);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_state);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_init_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_retire_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_oos_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_np);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_frm_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_set_vdq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_volatile_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_modify_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_create_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_delete_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_caam);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_fd);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(fsl_qman_fq_portal_create);
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index ebc0c1fb4f..490193b535 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -30,10 +30,10 @@
 struct rte_fslmc_bus rte_fslmc_bus;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_seqn_dynfield_offset);
 int dpaa2_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_get_device_count)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_get_device_count);
 uint32_t
 rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 {
@@ -528,7 +528,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
 }
 
 /*register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_register);
 void
 rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 {
@@ -538,7 +538,7 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 }
 
 /*un-register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_unregister);
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 68439cbd8c..63c490cb4e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -84,7 +84,7 @@ enum {
 	FSLMC_VFIO_SOCKET_REQ_MEM
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_get_mcp_ptr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_get_mcp_ptr);
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
 {
@@ -156,7 +156,7 @@ fslmc_io_virt2phy(const void *virtaddr)
 }
 
 /*register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_object_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_object_register);
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
 {
@@ -987,7 +987,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_vaddr_to_iova);
 uint64_t
 rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
 	uint64_t size)
@@ -1006,7 +1006,7 @@ rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
 	return RTE_BAD_IOVA;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_iova_to_vaddr);
 void *
 rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
 	uint64_t size)
@@ -1023,7 +1023,7 @@ rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_vaddr_to_iova);
 __rte_hot uint64_t
 rte_fslmc_mem_vaddr_to_iova(void *vaddr)
 {
@@ -1033,7 +1033,7 @@ rte_fslmc_mem_vaddr_to_iova(void *vaddr)
 	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_iova_to_vaddr);
 __rte_hot void *
 rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
 {
@@ -1043,7 +1043,7 @@ rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
 	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_vaddr_to_iova);
 uint64_t
 rte_fslmc_io_vaddr_to_iova(void *vaddr)
 {
@@ -1059,7 +1059,7 @@ rte_fslmc_io_vaddr_to_iova(void *vaddr)
 	return RTE_BAD_IOVA;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_iova_to_vaddr);
 void *
 rte_fslmc_io_iova_to_vaddr(uint64_t iova)
 {
@@ -1150,14 +1150,14 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fslmc_vfio_mem_dmamap)
+RTE_EXPORT_SYMBOL(rte_fslmc_vfio_mem_dmamap);
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
 	return fslmc_map_dma(vaddr, iova, size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_vfio_mem_dmaunmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_vfio_mem_dmaunmap);
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
@@ -1275,7 +1275,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 
 #define IRQ_SET_BUF_LEN  (sizeof(struct vfio_irq_set) + sizeof(int))
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_enable);
 int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 {
 	int len, ret;
@@ -1307,7 +1307,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_disable);
 int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 {
 	struct vfio_irq_set *irq_set;
diff --git a/drivers/bus/fslmc/mc/dpbp.c b/drivers/bus/fslmc/mc/dpbp.c
index 08f24d33e8..57f05958d3 100644
--- a/drivers/bus/fslmc/mc/dpbp.c
+++ b/drivers/bus/fslmc/mc/dpbp.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_open);
 int dpbp_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpbp_id,
@@ -160,7 +160,7 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_enable);
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -183,7 +183,7 @@ int dpbp_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_disable);
 int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -240,7 +240,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_reset);
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -264,7 +264,7 @@ int dpbp_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_attributes);
 int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -336,7 +336,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
  * Return:  '0' on Success; Error code otherwise.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_num_free_bufs)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_num_free_bufs);
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpci.c b/drivers/bus/fslmc/mc/dpci.c
index 9df3827f92..288deb82bc 100644
--- a/drivers/bus/fslmc/mc/dpci.c
+++ b/drivers/bus/fslmc/mc/dpci.c
@@ -317,7 +317,7 @@ int dpci_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_rx_queue);
 int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
@@ -480,7 +480,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_opr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_opr);
 int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -519,7 +519,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_get_opr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_get_opr);
 int dpci_get_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpcon.c b/drivers/bus/fslmc/mc/dpcon.c
index b9f2f50e12..e9441a5dc9 100644
--- a/drivers/bus/fslmc/mc/dpcon.c
+++ b/drivers/bus/fslmc/mc/dpcon.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_open);
 int dpcon_open(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       int dpcon_id,
@@ -67,7 +67,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_close);
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -168,7 +168,7 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_enable);
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -192,7 +192,7 @@ int dpcon_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_disable);
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token)
@@ -251,7 +251,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_reset);
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -275,7 +275,7 @@ int dpcon_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_get_attributes);
 int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint32_t cmd_flags,
 			 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index 97e90b09f1..24b7e55064 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -26,7 +26,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_open);
 int dpdmai_open(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		int dpdmai_id,
@@ -65,7 +65,7 @@ int dpdmai_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_close);
 int dpdmai_close(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -175,7 +175,7 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_enable);
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token)
@@ -199,7 +199,7 @@ int dpdmai_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_disable);
 int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token)
@@ -282,7 +282,7 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_attributes);
 int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
 			  uint32_t cmd_flags,
 			  uint16_t token,
@@ -327,7 +327,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_set_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_set_rx_queue);
 int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -370,7 +370,7 @@ int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_rx_queue);
 int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -421,7 +421,7 @@ int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_tx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_tx_queue);
 int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index 8cdf8f432a..3937805dcf 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_open);
 int dpio_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpio_id,
@@ -64,7 +64,7 @@ int dpio_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_close);
 int dpio_close(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -177,7 +177,7 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_enable);
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -201,7 +201,7 @@ int dpio_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_disable);
 int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -259,7 +259,7 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_reset);
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -284,7 +284,7 @@ int dpio_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_attributes);
 int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -330,7 +330,7 @@ int dpio_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination);
 int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -359,7 +359,7 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination);
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -396,7 +396,7 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_by_core_id)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_by_core_id);
 int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
 					uint32_t cmd_flags,
 					uint16_t token,
@@ -425,7 +425,7 @@ int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_source)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_source);
 int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -454,7 +454,7 @@ int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination_source)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination_source);
 int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -491,7 +491,7 @@ int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_add_static_dequeue_channel)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_add_static_dequeue_channel);
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
 				    uint16_t token,
@@ -531,7 +531,7 @@ int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_remove_static_dequeue_channel)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_remove_static_dequeue_channel);
 int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				       uint32_t cmd_flags,
 				       uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpmng.c b/drivers/bus/fslmc/mc/dpmng.c
index 47c85cd80d..1a468df32f 100644
--- a/drivers/bus/fslmc/mc/dpmng.c
+++ b/drivers/bus/fslmc/mc/dpmng.c
@@ -20,7 +20,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mc_get_version)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_get_version);
 int mc_get_version(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   struct mc_version *mc_ver_info)
@@ -60,7 +60,7 @@ int mc_get_version(struct fsl_mc_io *mc_io,
  *
  * Return:     '0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mc_get_soc_version)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_get_soc_version);
 int mc_get_soc_version(struct fsl_mc_io *mc_io,
 		       uint32_t cmd_flags,
 		       struct mc_soc_version *mc_platform_info)
diff --git a/drivers/bus/fslmc/mc/mc_sys.c b/drivers/bus/fslmc/mc/mc_sys.c
index ef4c8dd3b8..0facfbf1de 100644
--- a/drivers/bus/fslmc/mc/mc_sys.c
+++ b/drivers/bus/fslmc/mc/mc_sys.c
@@ -53,7 +53,7 @@ static int mc_status_to_error(enum mc_cmd_status status)
 	return -EINVAL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mc_send_command)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_send_command);
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
 {
 	enum mc_cmd_status status;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 925e83e97d..c641709016 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -96,7 +96,7 @@ dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dpbp_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dpbp_dev);
 struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void)
 {
 	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
@@ -110,7 +110,7 @@ struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void)
 	return dpbp_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dpbp_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dpbp_dev);
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
 {
 	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
@@ -124,7 +124,7 @@ void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dpbp_supported)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dpbp_supported);
 int dpaa2_dpbp_supported(void)
 {
 	if (TAILQ_EMPTY(&dpbp_dev_list))
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index b546da82f6..f99a7a2afa 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -152,7 +152,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_alloc_dpci_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_alloc_dpci_dev);
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void)
 {
 	struct dpaa2_dpci_dev *dpci_dev = NULL;
@@ -166,7 +166,7 @@ struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void)
 	return dpci_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_free_dpci_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_free_dpci_dev);
 void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 {
 	struct dpaa2_dpci_dev *dpci_dev = NULL;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index e32471d8b5..c777a66e35 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -48,12 +48,12 @@
 
 #define NUM_HOST_CPUS RTE_MAX_LCORE
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_io_portal)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_io_portal);
 struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
-RTE_EXPORT_INTERNAL_SYMBOL(per_lcore__dpaa2_io)
+RTE_EXPORT_INTERNAL_SYMBOL(per_lcore__dpaa2_io);
 RTE_DEFINE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_global_active_dqs_list)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_global_active_dqs_list);
 struct swp_active_dqs rte_global_active_dqs_list[NUM_MAX_SWP];
 
 TAILQ_HEAD(dpio_dev_list, dpaa2_dpio_dev);
@@ -62,14 +62,14 @@ static struct dpio_dev_list dpio_dev_list
 static uint32_t io_space_count;
 
 /* Variable to store DPAA2 platform type */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_svr_family)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_svr_family);
 uint32_t dpaa2_svr_family;
 
 /* Variable to store DPAA2 DQRR size */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dqrr_size)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dqrr_size);
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eqcr_size)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eqcr_size);
 uint8_t dpaa2_eqcr_size;
 
 /* Variable to hold the portal_key, once created.*/
@@ -339,7 +339,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	return dpio_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_swp)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_swp);
 int
 dpaa2_affine_qbman_swp(void)
 {
@@ -361,7 +361,7 @@ dpaa2_affine_qbman_swp(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_ethrx_swp)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_ethrx_swp);
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
@@ -623,7 +623,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dq_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dq_storage);
 void
 dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage)
 {
@@ -635,7 +635,7 @@ dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dq_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dq_storage);
 int
 dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage)
 {
@@ -658,7 +658,7 @@ dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_eq_descriptors)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_eq_descriptors);
 uint32_t
 dpaa2_free_eq_descriptors(void)
 {
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index f13168dce3..f41a165faa 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -327,7 +327,7 @@ uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
 	return r->opridsz;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_query_state)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_query_state);
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
@@ -385,7 +385,7 @@ int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
 	return (int)((r->st1 & 0x40) >> 6);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_state_frame_count)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_state_frame_count);
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
 {
 	return (r->frm_cnt & 0x00FFFFFF);
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 84853924e7..a203f02bfb 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -407,7 +407,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
 	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_interrupt_clear_status)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_interrupt_clear_status);
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
@@ -609,13 +609,13 @@ enum qb_enqueue_commands {
 #define QB_ENQUEUE_CMD_NLIS_SHIFT            14
 #define QB_ENQUEUE_CMD_IS_NESN_SHIFT         15
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_clear);
 void qbman_eq_desc_clear(struct qbman_eq_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_no_orp)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_no_orp);
 void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
 {
 	d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT);
@@ -625,7 +625,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
 		d->eq.verb |= enqueue_rejects_to_fq;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_orp)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_orp);
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete)
 {
@@ -665,7 +665,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
 	d->eq.seqnum |= 1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_response)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_response);
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				dma_addr_t storage_phys,
 				int stash)
@@ -674,20 +674,20 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 	d->eq.wae = stash;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_token)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_token);
 void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token)
 {
 	d->eq.rspid = token;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_fq)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_fq);
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
 {
 	d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT);
 	d->eq.tgtid = fqid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_qd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_qd);
 void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
 			  uint16_t qd_bin, uint8_t qd_prio)
 {
@@ -705,7 +705,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
 		d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_dca)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_dca);
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park)
 {
@@ -1227,7 +1227,7 @@ static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple);
 int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 				      const struct qbman_eq_desc *d,
 				      const struct qbman_fd *fd,
@@ -1502,7 +1502,7 @@ static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_fd);
 int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 					 const struct qbman_eq_desc *d,
 					 struct qbman_fd **fd,
@@ -1758,7 +1758,7 @@ static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 
 	return num_enqueued;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_desc)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_desc);
 int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 					   const struct qbman_eq_desc *d,
 					   const struct qbman_fd *fd,
@@ -1785,7 +1785,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
 	*enabled = src | (1 << channel_idx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_push_set)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_push_set);
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
 {
 	uint16_t dqsrc;
@@ -1823,13 +1823,13 @@ enum qb_pull_dt_e {
 	qb_pull_dt_framequeue
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_clear);
 void qbman_pull_desc_clear(struct qbman_pull_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_storage);
 void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 				 struct qbman_result *storage,
 				 dma_addr_t storage_phys,
@@ -1850,7 +1850,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 	d->pull.rsp_addr = storage_phys;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_numframes)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_numframes);
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes)
 {
@@ -1862,7 +1862,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
 	d->pull.tok = token;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_fq)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_fq);
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
 {
 	d->pull.verb |= 1 << QB_VDQCR_VERB_DCT_SHIFT;
@@ -1978,7 +1978,7 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_pull)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_pull);
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 {
 	if (!s->stash_off)
@@ -2006,7 +2006,7 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 
 #include <rte_prefetch.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_prefetch_dqrr_next)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_prefetch_dqrr_next);
 void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
 {
 	const struct qbman_result *p;
@@ -2020,7 +2020,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
  * only once, so repeated calls can return a sequence of DQRR entries, without
  * requiring they be consumed immediately or in any particular order.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_next)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_next);
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
 {
 	if (!s->stash_off)
@@ -2224,7 +2224,7 @@ const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
 }
 
 /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_consume)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_consume);
 void qbman_swp_dqrr_consume(struct qbman_swp *s,
 			    const struct qbman_result *dq)
 {
@@ -2233,7 +2233,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s,
 }
 
 /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_idx_consume)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_idx_consume);
 void qbman_swp_dqrr_idx_consume(struct qbman_swp *s,
 			    uint8_t dqrr_index)
 {
@@ -2244,7 +2244,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s,
 /* Polling user-provided storage */
 /*********************************/
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_has_new_result)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_has_new_result);
 int qbman_result_has_new_result(struct qbman_swp *s,
 				struct qbman_result *dq)
 {
@@ -2273,7 +2273,7 @@ int qbman_result_has_new_result(struct qbman_swp *s,
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_new_result)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_new_result);
 int qbman_check_new_result(struct qbman_result *dq)
 {
 	if (dq->dq.tok == 0)
@@ -2289,7 +2289,7 @@ int qbman_check_new_result(struct qbman_result *dq)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_command_complete)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_command_complete);
 int qbman_check_command_complete(struct qbman_result *dq)
 {
 	struct qbman_swp *s;
@@ -2377,19 +2377,19 @@ int qbman_result_is_FQPN(const struct qbman_result *dq)
 
 /* These APIs assume qbman_result_is_DQ() is TRUE */
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_flags)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_flags);
 uint8_t qbman_result_DQ_flags(const struct qbman_result *dq)
 {
 	return dq->dq.stat;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_seqnum)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_seqnum);
 uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq)
 {
 	return dq->dq.seqnum;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_odpid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_odpid);
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq)
 {
 	return dq->dq.oprid;
@@ -2410,13 +2410,13 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq)
 	return dq->dq.fq_frm_cnt;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fqd_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fqd_ctx);
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq)
 {
 	return dq->dq.fqd_ctx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fd);
 const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq)
 {
 	return (const struct qbman_fd *)&dq->dq.fd[0];
@@ -2425,7 +2425,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq)
 /**************************************/
 /* Parsing state-change notifications */
 /**************************************/
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_SCN_state)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_SCN_state);
 uint8_t qbman_result_SCN_state(const struct qbman_result *scn)
 {
 	return scn->scn.state;
@@ -2485,25 +2485,25 @@ uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn)
 /********************/
 /* Parsing EQ RESP  */
 /********************/
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_fd);
 struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp)
 {
 	return (struct qbman_fd *)&eqresp->eq_resp.fd[0];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_set_rspid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_set_rspid);
 void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val)
 {
 	eqresp->eq_resp.rspid = val;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rspid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rspid);
 uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp)
 {
 	return eqresp->eq_resp.rspid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rc)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rc);
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp)
 {
 	if (eqresp->eq_resp.rc == 0xE)
@@ -2518,14 +2518,14 @@ uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp)
 #define QB_BR_RC_VALID_SHIFT  5
 #define QB_BR_RCDI_SHIFT      6
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_clear);
 void qbman_release_desc_clear(struct qbman_release_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 	d->br.verb = 1 << QB_BR_RC_VALID_SHIFT;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_set_bpid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_set_bpid);
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid)
 {
 	d->br.bpid = bpid;
@@ -2640,7 +2640,7 @@ static int qbman_swp_release_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_release)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_release);
 int qbman_swp_release(struct qbman_swp *s,
 			     const struct qbman_release_desc *d,
 			     const uint64_t *buffers,
@@ -2767,7 +2767,7 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 	return num;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_acquire)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_acquire);
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers)
 {
@@ -2951,13 +2951,13 @@ int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
 				  1, ctx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_idx);
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr)
 {
 	return QBMAN_IDX_FROM_DQRR(dqrr);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_from_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_from_idx);
 struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx)
 {
 	struct qbman_result *dq;
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index ca9e49f548..cd1375af96 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -45,7 +45,7 @@ static TAILQ_HEAD(, rte_afu_driver) ifpga_afu_drv_list =
 
 
 /* register a ifpga bus based driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_register);
 void rte_ifpga_driver_register(struct rte_afu_driver *driver)
 {
 	RTE_VERIFY(driver);
@@ -54,7 +54,7 @@ void rte_ifpga_driver_register(struct rte_afu_driver *driver)
 }
 
 /* un-register a fpga bus based driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_unregister);
 void rte_ifpga_driver_unregister(struct rte_afu_driver *driver)
 {
 	TAILQ_REMOVE(&ifpga_afu_drv_list, driver, next);
@@ -74,7 +74,7 @@ ifpga_find_afu_dev(const struct rte_rawdev *rdev,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_find_afu_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_find_afu_by_name);
 struct rte_afu_device *
 rte_ifpga_find_afu_by_name(const char *name)
 {
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index 3f13e1d6ac..de48704948 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -49,7 +49,7 @@
  */
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -71,7 +71,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev)
 {
@@ -413,7 +413,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv __rte_unused,
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int rte_pci_read_config(const struct rte_pci_device *dev,
 		void *buf, size_t len, off_t offset)
 {
@@ -460,7 +460,7 @@ int rte_pci_read_config(const struct rte_pci_device *dev,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int rte_pci_write_config(const struct rte_pci_device *dev,
 		const void *buf, size_t len, off_t offset)
 {
@@ -503,7 +503,7 @@ int rte_pci_write_config(const struct rte_pci_device *dev,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 		      void *buf, size_t len, off_t offset)
 {
@@ -515,7 +515,7 @@ int rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 		       const void *buf, size_t len, off_t offset)
 {
@@ -526,7 +526,7 @@ int rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 	return len;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 		struct rte_pci_ioport *p)
@@ -588,7 +588,7 @@ pci_uio_ioport_read(struct rte_pci_ioport *p,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p,
 		void *data, size_t len, off_t offset)
@@ -631,7 +631,7 @@ pci_uio_ioport_write(struct rte_pci_ioport *p,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p,
 		const void *data, size_t len, off_t offset)
@@ -645,7 +645,7 @@ rte_pci_ioport_write(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p)
 {
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index c20d159218..1eb87c8fe6 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -55,7 +55,7 @@ pci_get_kernel_driver_by_path(const char *filename, char *dri_name,
 }
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -86,7 +86,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev)
 {
@@ -630,7 +630,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv,
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int rte_pci_read_config(const struct rte_pci_device *device,
 		void *buf, size_t len, off_t offset)
 {
@@ -654,7 +654,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int rte_pci_write_config(const struct rte_pci_device *device,
 		const void *buf, size_t len, off_t offset)
 {
@@ -678,7 +678,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int rte_pci_mmio_read(const struct rte_pci_device *device, int bar,
 		void *buf, size_t len, off_t offset)
 {
@@ -701,7 +701,7 @@ int rte_pci_mmio_read(const struct rte_pci_device *device, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int rte_pci_mmio_write(const struct rte_pci_device *device, int bar,
 		const void *buf, size_t len, off_t offset)
 {
@@ -723,7 +723,7 @@ int rte_pci_mmio_write(const struct rte_pci_device *device, int bar,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 		struct rte_pci_ioport *p)
@@ -751,7 +751,7 @@ rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p,
 		void *data, size_t len, off_t offset)
@@ -771,7 +771,7 @@ rte_pci_ioport_read(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p,
 		const void *data, size_t len, off_t offset)
@@ -791,7 +791,7 @@ rte_pci_ioport_write(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p)
 {
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index c88634f790..39e564c2e9 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -33,7 +33,7 @@
 
 #define SYSFS_PCI_DEVICES "/sys/bus/pci/devices"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_get_sysfs_path)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_get_sysfs_path);
 const char *rte_pci_get_sysfs_path(void)
 {
 	const char *path = NULL;
@@ -479,7 +479,7 @@ pci_dump_one_device(FILE *f, struct rte_pci_device *dev)
 }
 
 /* dump devices on the bus */
-RTE_EXPORT_SYMBOL(rte_pci_dump)
+RTE_EXPORT_SYMBOL(rte_pci_dump);
 void
 rte_pci_dump(FILE *f)
 {
@@ -504,7 +504,7 @@ pci_parse(const char *name, void *addr)
 }
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_register);
 void
 rte_pci_register(struct rte_pci_driver *driver)
 {
@@ -512,7 +512,7 @@ rte_pci_register(struct rte_pci_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_unregister);
 void
 rte_pci_unregister(struct rte_pci_driver *driver)
 {
@@ -800,7 +800,7 @@ rte_pci_get_iommu_class(void)
 	return iova_mode;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_has_capability_list, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_has_capability_list, 23.11);
 bool
 rte_pci_has_capability_list(const struct rte_pci_device *dev)
 {
@@ -812,14 +812,14 @@ rte_pci_has_capability_list(const struct rte_pci_device *dev)
 	return (status & RTE_PCI_STATUS_CAP_LIST) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_capability, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_capability, 23.11);
 off_t
 rte_pci_find_capability(const struct rte_pci_device *dev, uint8_t cap)
 {
 	return rte_pci_find_next_capability(dev, cap, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_next_capability, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_next_capability, 23.11);
 off_t
 rte_pci_find_next_capability(const struct rte_pci_device *dev, uint8_t cap,
 	off_t offset)
@@ -857,7 +857,7 @@ rte_pci_find_next_capability(const struct rte_pci_device *dev, uint8_t cap,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_ext_capability, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_ext_capability, 20.11);
 off_t
 rte_pci_find_ext_capability(const struct rte_pci_device *dev, uint32_t cap)
 {
@@ -900,7 +900,7 @@ rte_pci_find_ext_capability(const struct rte_pci_device *dev, uint32_t cap)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_set_bus_master, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_set_bus_master, 21.08);
 int
 rte_pci_set_bus_master(const struct rte_pci_device *dev, bool enable)
 {
@@ -929,7 +929,7 @@ rte_pci_set_bus_master(const struct rte_pci_device *dev, bool enable)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_pasid_set_state)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_pasid_set_state);
 int
 rte_pci_pasid_set_state(const struct rte_pci_device *dev,
 		off_t offset, bool enable)
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index e7e449306e..fc899efd3b 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -37,7 +37,7 @@ DEFINE_DEVPROPKEY(DEVPKEY_Device_Numa_Node, 0x540b947e, 0x8b40, 0x45bc,
  */
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -52,7 +52,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev __rte_unused)
 {
@@ -64,7 +64,7 @@ rte_pci_unmap_device(struct rte_pci_device *dev __rte_unused)
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int
 rte_pci_read_config(const struct rte_pci_device *dev __rte_unused,
 	void *buf __rte_unused, size_t len __rte_unused,
@@ -79,7 +79,7 @@ rte_pci_read_config(const struct rte_pci_device *dev __rte_unused,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int
 rte_pci_write_config(const struct rte_pci_device *dev __rte_unused,
 	const void *buf __rte_unused, size_t len __rte_unused,
@@ -94,7 +94,7 @@ rte_pci_write_config(const struct rte_pci_device *dev __rte_unused,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int
 rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 		      void *buf, size_t len, off_t offset)
@@ -107,7 +107,7 @@ rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int
 rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 		       const void *buf, size_t len, off_t offset)
@@ -131,7 +131,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv __rte_unused,
 	return RTE_IOVA_DC;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev __rte_unused,
 	int bar __rte_unused, struct rte_pci_ioport *p __rte_unused)
@@ -145,7 +145,7 @@ rte_pci_ioport_map(struct rte_pci_device *dev __rte_unused,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p __rte_unused,
 	void *data __rte_unused, size_t len __rte_unused,
@@ -158,7 +158,7 @@ rte_pci_ioport_read(struct rte_pci_ioport *p __rte_unused,
 	 */
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p __rte_unused)
 {
@@ -181,7 +181,7 @@ pci_device_iommu_support_va(const struct rte_pci_device *dev __rte_unused)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p __rte_unused,
 		const void *data __rte_unused, size_t len __rte_unused,
diff --git a/drivers/bus/platform/platform.c b/drivers/bus/platform/platform.c
index 0f50027236..9fdbb29e19 100644
--- a/drivers/bus/platform/platform.c
+++ b/drivers/bus/platform/platform.c
@@ -29,14 +29,14 @@
 
 #define PLATFORM_BUS_DEVICES_PATH "/sys/bus/platform/devices"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_register);
 void
 rte_platform_register(struct rte_platform_driver *pdrv)
 {
 	TAILQ_INSERT_TAIL(&platform_bus.driver_list, pdrv, next);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_unregister);
 void
 rte_platform_unregister(struct rte_platform_driver *pdrv)
 {
diff --git a/drivers/bus/uacce/uacce.c b/drivers/bus/uacce/uacce.c
index 87e68b3dbf..679738c665 100644
--- a/drivers/bus/uacce/uacce.c
+++ b/drivers/bus/uacce/uacce.c
@@ -583,7 +583,7 @@ uacce_dev_iterate(const void *start, const char *str,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_avail_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_avail_queues);
 int
 rte_uacce_avail_queues(struct rte_uacce_device *dev)
 {
@@ -597,7 +597,7 @@ rte_uacce_avail_queues(struct rte_uacce_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_alloc);
 int
 rte_uacce_queue_alloc(struct rte_uacce_device *dev, struct rte_uacce_qcontex *qctx)
 {
@@ -612,7 +612,7 @@ rte_uacce_queue_alloc(struct rte_uacce_device *dev, struct rte_uacce_qcontex *qc
 	return -EIO;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_free);
 void
 rte_uacce_queue_free(struct rte_uacce_qcontex *qctx)
 {
@@ -622,7 +622,7 @@ rte_uacce_queue_free(struct rte_uacce_qcontex *qctx)
 	qctx->fd = -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_start)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_start);
 int
 rte_uacce_queue_start(struct rte_uacce_qcontex *qctx)
 {
@@ -630,7 +630,7 @@ rte_uacce_queue_start(struct rte_uacce_qcontex *qctx)
 	return ioctl(qctx->fd, UACCE_CMD_START_Q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_ioctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_ioctl);
 int
 rte_uacce_queue_ioctl(struct rte_uacce_qcontex *qctx, unsigned long cmd, void *arg)
 {
@@ -640,7 +640,7 @@ rte_uacce_queue_ioctl(struct rte_uacce_qcontex *qctx, unsigned long cmd, void *a
 	return ioctl(qctx->fd, cmd, arg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_mmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_mmap);
 void *
 rte_uacce_queue_mmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 {
@@ -666,7 +666,7 @@ rte_uacce_queue_mmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_unmap);
 void
 rte_uacce_queue_unmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 {
@@ -676,7 +676,7 @@ rte_uacce_queue_unmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_register);
 void
 rte_uacce_register(struct rte_uacce_driver *driver)
 {
@@ -684,7 +684,7 @@ rte_uacce_register(struct rte_uacce_driver *driver)
 	driver->bus = &uacce_bus;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_unregister);
 void
 rte_uacce_unregister(struct rte_uacce_driver *driver)
 {
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index be375f63dc..c1c510c448 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -52,7 +52,7 @@ static struct vdev_custom_scans vdev_custom_scans =
 static rte_spinlock_t vdev_custom_scan_lock = RTE_SPINLOCK_INITIALIZER;
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_register);
 void
 rte_vdev_register(struct rte_vdev_driver *driver)
 {
@@ -60,14 +60,14 @@ rte_vdev_register(struct rte_vdev_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_unregister);
 void
 rte_vdev_unregister(struct rte_vdev_driver *driver)
 {
 	TAILQ_REMOVE(&vdev_driver_list, driver, next);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_add_custom_scan)
+RTE_EXPORT_SYMBOL(rte_vdev_add_custom_scan);
 int
 rte_vdev_add_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 {
@@ -96,7 +96,7 @@ rte_vdev_add_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	return (custom_scan == NULL) ? -1 : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_remove_custom_scan)
+RTE_EXPORT_SYMBOL(rte_vdev_remove_custom_scan);
 int
 rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 {
@@ -321,7 +321,7 @@ insert_vdev(const char *name, const char *args,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_init)
+RTE_EXPORT_SYMBOL(rte_vdev_init);
 int
 rte_vdev_init(const char *name, const char *args)
 {
@@ -361,7 +361,7 @@ vdev_remove_driver(struct rte_vdev_device *dev)
 	return driver->remove(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_uninit)
+RTE_EXPORT_SYMBOL(rte_vdev_uninit);
 int
 rte_vdev_uninit(const char *name)
 {
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index ed18d4da96..67c17b9286 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -165,7 +165,7 @@ static const char *map_names[VMBUS_MAX_RESOURCE] = {
 
 
 /* map the resources of a vmbus device in virtual memory */
-RTE_EXPORT_SYMBOL(rte_vmbus_map_device)
+RTE_EXPORT_SYMBOL(rte_vmbus_map_device);
 int
 rte_vmbus_map_device(struct rte_vmbus_device *dev)
 {
@@ -224,7 +224,7 @@ rte_vmbus_map_device(struct rte_vmbus_device *dev)
 	return vmbus_uio_map_resource(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_unmap_device)
+RTE_EXPORT_SYMBOL(rte_vmbus_unmap_device);
 void
 rte_vmbus_unmap_device(struct rte_vmbus_device *dev)
 {
@@ -341,7 +341,7 @@ vmbus_scan_one(const char *name)
 /*
  * Scan the content of the vmbus, and the devices in the devices list
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_scan)
+RTE_EXPORT_SYMBOL(rte_vmbus_scan);
 int
 rte_vmbus_scan(void)
 {
@@ -373,19 +373,19 @@ rte_vmbus_scan(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_mask)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_mask);
 void rte_vmbus_irq_mask(struct rte_vmbus_device *device)
 {
 	vmbus_uio_irq_control(device, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_unmask)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_unmask);
 void rte_vmbus_irq_unmask(struct rte_vmbus_device *device)
 {
 	vmbus_uio_irq_control(device, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_read)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_read);
 int rte_vmbus_irq_read(struct rte_vmbus_device *device)
 {
 	return vmbus_uio_irq_read(device);
diff --git a/drivers/bus/vmbus/vmbus_channel.c b/drivers/bus/vmbus/vmbus_channel.c
index a876c909dd..03820015ae 100644
--- a/drivers/bus/vmbus/vmbus_channel.c
+++ b/drivers/bus/vmbus/vmbus_channel.c
@@ -48,7 +48,7 @@ vmbus_set_event(const struct vmbus_channel *chan)
 /*
  * Set the wait between when hypervisor examines the trigger.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_set_latency)
+RTE_EXPORT_SYMBOL(rte_vmbus_set_latency);
 void
 rte_vmbus_set_latency(const struct rte_vmbus_device *dev,
 		      const struct vmbus_channel *chan,
@@ -78,7 +78,7 @@ rte_vmbus_set_latency(const struct rte_vmbus_device *dev,
  * Since this in userspace, rely on the monitor page.
  * Can't do a hypercall from userspace.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_tx)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_tx);
 void
 rte_vmbus_chan_signal_tx(const struct vmbus_channel *chan)
 {
@@ -96,7 +96,7 @@ rte_vmbus_chan_signal_tx(const struct vmbus_channel *chan)
 
 
 /* Do a simple send directly using transmit ring. */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_send)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_send);
 int rte_vmbus_chan_send(struct vmbus_channel *chan, uint16_t type,
 			void *data, uint32_t dlen,
 			uint64_t xactid, uint32_t flags, bool *need_sig)
@@ -140,7 +140,7 @@ int rte_vmbus_chan_send(struct vmbus_channel *chan, uint16_t type,
 }
 
 /* Do a scatter/gather send where the descriptor points to data. */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_send_sglist)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_send_sglist);
 int rte_vmbus_chan_send_sglist(struct vmbus_channel *chan,
 			       struct vmbus_gpa sg[], uint32_t sglen,
 			       void *data, uint32_t dlen,
@@ -184,7 +184,7 @@ int rte_vmbus_chan_send_sglist(struct vmbus_channel *chan,
 	return error;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_rx_empty)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_rx_empty);
 bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 {
 	const struct vmbus_br *br = &channel->rxbr;
@@ -194,7 +194,7 @@ bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 }
 
 /* Signal host after reading N bytes */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_read)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_read);
 void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
 {
 	struct vmbus_br *rbr = &chan->rxbr;
@@ -225,7 +225,7 @@ void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
 	vmbus_set_event(chan);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv);
 int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
 			uint64_t *request_id)
 {
@@ -273,7 +273,7 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
 }
 
 /* TODO: replace this with inplace ring buffer (no copy) */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv_raw)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv_raw);
 int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
 			    void *data, uint32_t *len)
 {
@@ -344,7 +344,7 @@ int vmbus_chan_create(const struct rte_vmbus_device *device,
 }
 
 /* Setup the primary channel */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_open)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_open);
 int rte_vmbus_chan_open(struct rte_vmbus_device *device,
 			struct vmbus_channel **new_chan)
 {
@@ -365,7 +365,7 @@ int rte_vmbus_chan_open(struct rte_vmbus_device *device,
 	return err;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_max_channels)
+RTE_EXPORT_SYMBOL(rte_vmbus_max_channels);
 int rte_vmbus_max_channels(const struct rte_vmbus_device *device)
 {
 	if (vmbus_uio_subchannels_supported(device, device->primary))
@@ -375,7 +375,7 @@ int rte_vmbus_max_channels(const struct rte_vmbus_device *device)
 }
 
 /* Setup secondary channel */
-RTE_EXPORT_SYMBOL(rte_vmbus_subchan_open)
+RTE_EXPORT_SYMBOL(rte_vmbus_subchan_open);
 int rte_vmbus_subchan_open(struct vmbus_channel *primary,
 			   struct vmbus_channel **new_chan)
 {
@@ -391,13 +391,13 @@ int rte_vmbus_subchan_open(struct vmbus_channel *primary,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_sub_channel_index)
+RTE_EXPORT_SYMBOL(rte_vmbus_sub_channel_index);
 uint16_t rte_vmbus_sub_channel_index(const struct vmbus_channel *chan)
 {
 	return chan->subchannel_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_close)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_close);
 void rte_vmbus_chan_close(struct vmbus_channel *chan)
 {
 	const struct rte_vmbus_device *device = chan->device;
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index a787d8b18d..a567b0755b 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -192,7 +192,7 @@ vmbus_ignore_device(struct rte_vmbus_device *dev)
  * all registered drivers that have a matching entry in its id_table
  * for discovered devices.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_probe)
+RTE_EXPORT_SYMBOL(rte_vmbus_probe);
 int
 rte_vmbus_probe(void)
 {
@@ -282,7 +282,7 @@ vmbus_devargs_lookup(struct rte_vmbus_device *dev)
 }
 
 /* register vmbus driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_register);
 void
 rte_vmbus_register(struct rte_vmbus_driver *driver)
 {
@@ -293,7 +293,7 @@ rte_vmbus_register(struct rte_vmbus_driver *driver)
 }
 
 /* unregister vmbus driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_unregister);
 void
 rte_vmbus_unregister(struct rte_vmbus_driver *driver)
 {
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 0e6777e6ca..17048c1a7e 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -305,7 +305,7 @@ ot_ipsec_inb_tunnel_hdr_fill(struct roc_ot_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_fill);
 int
 cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -415,7 +415,7 @@ cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_fill);
 int
 cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -580,21 +580,21 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_valid)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_valid);
 bool
 cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa)
 {
 	return !!sa->w2.s.valid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_valid)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_valid);
 bool
 cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
 {
 	return !!sa->w2.s.valid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_ivlen_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_ivlen_get);
 uint8_t
 cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		     enum rte_crypto_auth_algorithm a_algo,
@@ -631,7 +631,7 @@ cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 	return ivlen;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_icvlen_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_icvlen_get);
 uint8_t
 cnxk_ipsec_icvlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		      enum rte_crypto_auth_algorithm a_algo,
@@ -678,7 +678,7 @@ cnxk_ipsec_icvlen_get(enum rte_crypto_cipher_algorithm c_algo,
 	return icv;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_roundup_byte)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_roundup_byte);
 uint8_t
 cnxk_ipsec_outb_roundup_byte(enum rte_crypto_cipher_algorithm c_algo,
 			     enum rte_crypto_aead_algorithm aead_algo)
@@ -709,7 +709,7 @@ cnxk_ipsec_outb_roundup_byte(enum rte_crypto_cipher_algorithm c_algo,
 	return roundup_byte;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_rlens_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_rlens_get);
 int
 cnxk_ipsec_outb_rlens_get(struct cnxk_ipsec_outb_rlens *rlens,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -984,7 +984,7 @@ on_fill_ipsec_common_sa(struct rte_security_ipsec_xform *ipsec,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_outb_sa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_outb_sa_create);
 int
 cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
 			     struct rte_crypto_sym_xform *crypto_xform,
@@ -1130,7 +1130,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
 	return ctx_len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_inb_sa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_inb_sa_create);
 int
 cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
 			    struct rte_crypto_sym_xform *crypto_xform,
@@ -1484,7 +1484,7 @@ ow_ipsec_inb_tunnel_hdr_fill(struct roc_ow_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_inb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_inb_sa_fill);
 int
 cnxk_ow_ipsec_inb_sa_fill(struct roc_ow_ipsec_inb_sa *sa,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -1591,7 +1591,7 @@ cnxk_ow_ipsec_inb_sa_fill(struct roc_ow_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_outb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_outb_sa_fill);
 int
 cnxk_ow_ipsec_outb_sa_fill(struct roc_ow_ipsec_outb_sa *sa,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
diff --git a/drivers/common/cnxk/cnxk_utils.c b/drivers/common/cnxk/cnxk_utils.c
index 8ca4664d25..cbd8779ce4 100644
--- a/drivers/common/cnxk/cnxk_utils.c
+++ b/drivers/common/cnxk/cnxk_utils.c
@@ -10,7 +10,7 @@
 
 #include "cnxk_utils.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_err_to_rte_err)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_err_to_rte_err);
 int
 roc_nix_tm_err_to_rte_err(int errorcode)
 {
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 88f229163a..b511e2d17e 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -243,7 +243,7 @@ plt_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
 static int plt_init_cb_num;
 static roc_plt_init_cb_t plt_init_cbs[PLT_INIT_CB_MAX];
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init_cb_register)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init_cb_register);
 int
 roc_plt_init_cb_register(roc_plt_init_cb_t cb)
 {
@@ -254,7 +254,7 @@ roc_plt_init_cb_register(roc_plt_init_cb_t cb)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_control_lmt_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_control_lmt_id_get);
 uint16_t
 roc_plt_control_lmt_id_get(void)
 {
@@ -266,7 +266,7 @@ roc_plt_control_lmt_id_get(void)
 		return ROC_NUM_LMT_LINES - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_lmt_validate)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_lmt_validate);
 uint16_t
 roc_plt_lmt_validate(void)
 {
@@ -281,7 +281,7 @@ roc_plt_lmt_validate(void)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init);
 int
 roc_plt_init(void)
 {
@@ -321,31 +321,31 @@ roc_plt_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_base)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_base);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, INFO);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_mbox)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_mbox);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_mbox, mbox, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_cpt)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_cpt);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_cpt, crypto, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ml)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ml);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ml, ml, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npa)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npa);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_npa, mempool, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_nix)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_nix);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_nix, nix, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npc)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npc);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_npc, flow, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_sso)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_sso);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_sso, event, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tim)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tim);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_tim, timer, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tm)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tm);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_tm, tm, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_dpi)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_dpi);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_dpi, dpi, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_rep)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_rep);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_rep, rep, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_esw)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_esw);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_esw, esw, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ree)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ree);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ree, ree, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform_base_symbols.c b/drivers/common/cnxk/roc_platform_base_symbols.c
index 7f0fe601ad..b8d2026dd5 100644
--- a/drivers/common/cnxk/roc_platform_base_symbols.c
+++ b/drivers/common/cnxk/roc_platform_base_symbols.c
@@ -5,545 +5,545 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_xcbc_key_derive)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_hash_key_derive)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_npa_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_start_rxtx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_stop_rxtx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_state)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_get_linkinfo)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_mode)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_supported_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_change)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_tx_control)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_misc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_handler)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_available)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_max_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_rxc_time_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_flush)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_reload)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_eng_grp_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lmtline_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_ctx_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_parse_hdr_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_afs_print)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lfs_print)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_wait_queue_idle)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure_v2)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_tx_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_delete_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_rx_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_rss_action_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_vlan_tpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_repte_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_is_repte_pfs_vf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_md5_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha1_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha256_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha512_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_lmt_base_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_num_lmtlines_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_ring_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_list_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_nix_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_meta_aura_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_dev_pffunc_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_outb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_reass_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_outb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_is_supported)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_hw_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_active_lmac_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_lmac_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_threshold_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_custom_tag_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_intr_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_recovery)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_port_map_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sc_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_stats_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read64)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write64)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read32)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write32)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_save)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_ap2mlip)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_mlip2ap)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_pa_to_offset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_offset_to_pa)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_write_job)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_valid_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_done_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_enqueue)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_queue_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_lf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_sl)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_on)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_off)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_on)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_off)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_model)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_lbk)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_esw)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_base_chan)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_rx_chan_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vwqe_interval)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_sdp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_vf_or_sdp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf_func)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_inl_ipsec_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cpt_ctx_cache_sync)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_pkt_len)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_rep_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_level_to_idx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_to_idx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_timeunit_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_count_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free_all)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_pre_color_tbl_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_connect)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_get_reg_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_reg_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_queues_ctx_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cqe_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_cpt_lfs_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_desc_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_npa_bp_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_count_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_bpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_aura_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_lf_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_inj_lf_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sa_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_rx_inject_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_spi_range)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_sz)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_reassembly_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_probed)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_multi_channel)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_rq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_soft_exp_poll_switch)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_is_with_inl_dev)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sso_pffunc_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_tag_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_sa_sync)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ctx_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cpt_lf_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ts_pkind_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_lock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_unlock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_pool_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_eng_caps_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_custom_meta_pool_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_xaq_realloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_qptr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_release)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_err_intr_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ras_intr_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_queue_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_queue_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_cq_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_cq_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_rxtx_start_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_event_start_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_loopback_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_entries_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_del)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_promisc_mode_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_state_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_mtu_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_rx_len_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_promisc_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_rx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mcast_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_custom_fmt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_switch_hdr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_eeprom_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_drop_re_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_rx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_tx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_clock_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_sync_time_adjust)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_is_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_is_sso_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_modify)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_cman_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_head_tail_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_head_tail_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_default_fill)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_flowkey_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_default_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_num_xstats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_names_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_flush_spin)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prepare_rate_limited_tree)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_prepare_tree)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_format_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_aura_fc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_free_resources)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_pkt_mode_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_name_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_smq_flush)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_xmit_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_suspend_resume)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prealloc_res)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_shaper_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_parent_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_rlimit_sq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rlimit_sq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_max)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_root_has_sp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_egress_link_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_leaf_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_lvl)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_next)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_next)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_is_user_hierarchy_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_tree_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_max_prio)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_is_leaf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_default_red_algo)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_cnt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_have_link_access)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_alloc_and_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_strip_vtag_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_insert_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_tpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_lf_init_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_pc_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_drop_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_limit_modify)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_range_update_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_zero_aura_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_bp_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_lock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_unlock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_ctx_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_limit_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_sub_return)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_sub_return)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_read_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_clear_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_free_mcam_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_read_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_get_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_clear_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_move)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_all_resources)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entries)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_enable_all_entries)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_ena_dis_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_write_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_low_priority_mcam)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_profile_name_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_capa_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_key_type_config_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_validate_portid_action)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_parse)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_sdp_channel_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_merge_base_steering_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_aged_flow_ctx_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_defrag_mcam_banks)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_key_type)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_mcam_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_attach)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_detach)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_msix_offsets_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_config_lf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_len_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_prog)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_qp_get_base)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_process)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_hmac_opad_ipad_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_auth_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ciph_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ctx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_ns_to_gw)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_link)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_unlink)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_gwc_invalidate)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_release)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_from_tag)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_hws_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_qos_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_init_xaq_aura)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_free_xaq_aura)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_alloc_xaq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_release_xaq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_set_priority)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stash_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config_hwwqe)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_interval)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_error_msg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_clk_freq_get)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_xcbc_key_derive);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_hash_key_derive);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_npa_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_start_rxtx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_stop_rxtx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_state);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_get_linkinfo);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_mode);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_supported_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_change);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_tx_control);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_misc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_handler);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_available);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_max_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_rxc_time_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_flush);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_reload);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_eng_grp_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lmtline_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_ctx_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_parse_hdr_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_afs_print);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lfs_print);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_wait_queue_idle);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure_v2);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_tx_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_delete_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_rx_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_rss_action_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_vlan_tpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_repte_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_is_repte_pfs_vf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_md5_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha1_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha256_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha512_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_lmt_base_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_num_lmtlines_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_ring_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_list_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_nix_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_meta_aura_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_dev_pffunc_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_outb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_reass_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_outb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_is_supported);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_hw_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_active_lmac_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_lmac_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_threshold_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_custom_tag_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_intr_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_recovery);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_port_map_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sc_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_stats_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read64);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write64);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read32);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write32);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_save);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_ap2mlip);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_mlip2ap);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_pa_to_offset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_offset_to_pa);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_write_job);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_valid_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_done_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_enqueue);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_queue_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_lf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_sl);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_on);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_off);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_on);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_off);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_model);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_lbk);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_esw);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_base_chan);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_rx_chan_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vwqe_interval);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_sdp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_vf_or_sdp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf_func);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_inl_ipsec_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cpt_ctx_cache_sync);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_pkt_len);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_rep_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_level_to_idx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_to_idx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_timeunit_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_count_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free_all);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_pre_color_tbl_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_connect);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_get_reg_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_reg_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_queues_ctx_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cqe_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_cpt_lfs_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_desc_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_npa_bp_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_count_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_bpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_aura_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_lf_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_inj_lf_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sa_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_rx_inject_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_spi_range);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_sz);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_reassembly_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_probed);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_multi_channel);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_rq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_soft_exp_poll_switch);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_is_with_inl_dev);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sso_pffunc_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_tag_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_sa_sync);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ctx_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cpt_lf_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ts_pkind_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_lock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_unlock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_pool_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_eng_caps_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_custom_meta_pool_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_xaq_realloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_qptr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_release);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_err_intr_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ras_intr_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_queue_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_queue_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_cq_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_cq_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_rxtx_start_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_event_start_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_loopback_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_entries_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_del);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_promisc_mode_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_state_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_mtu_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_rx_len_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_promisc_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_rx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mcast_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_custom_fmt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_switch_hdr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_eeprom_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_drop_re_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_rx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_tx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_clock_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_sync_time_adjust);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_is_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_is_sso_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_modify);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_cman_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_head_tail_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_head_tail_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_default_fill);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_flowkey_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_default_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_num_xstats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_names_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_flush_spin);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prepare_rate_limited_tree);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_prepare_tree);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_format_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_aura_fc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_free_resources);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_pkt_mode_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_name_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_smq_flush);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_xmit_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_suspend_resume);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prealloc_res);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_shaper_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_parent_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_rlimit_sq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rlimit_sq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_max);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_root_has_sp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_egress_link_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_leaf_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_lvl);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_next);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_next);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_is_user_hierarchy_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_tree_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_max_prio);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_is_leaf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_default_red_algo);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_cnt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_have_link_access);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_alloc_and_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_strip_vtag_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_insert_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_tpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_lf_init_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_pc_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_drop_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_limit_modify);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_range_update_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_zero_aura_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_bp_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_lock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_unlock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_ctx_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_limit_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_sub_return);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_sub_return);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_read_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_clear_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_free_mcam_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_read_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_get_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_clear_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_move);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_all_resources);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entries);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_enable_all_entries);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_ena_dis_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_write_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_low_priority_mcam);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_profile_name_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_capa_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_key_type_config_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_validate_portid_action);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_parse);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_sdp_channel_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_merge_base_steering_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_aged_flow_ctx_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_defrag_mcam_banks);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_key_type);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_mcam_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_attach);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_detach);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_msix_offsets_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_config_lf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_len_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_prog);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_qp_get_base);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_process);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_hmac_opad_ipad_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_auth_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ciph_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ctx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_ns_to_gw);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_link);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_unlink);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_gwc_invalidate);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_release);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_from_tag);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_hws_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_qos_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_init_xaq_aura);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_free_xaq_aura);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_alloc_xaq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_release_xaq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_set_priority);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stash_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config_hwwqe);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_interval);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_error_msg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_clk_freq_get);
diff --git a/drivers/common/cpt/cpt_fpm_tables.c b/drivers/common/cpt/cpt_fpm_tables.c
index 0cb14733d9..9216a5de6c 100644
--- a/drivers/common/cpt/cpt_fpm_tables.c
+++ b/drivers/common/cpt/cpt_fpm_tables.c
@@ -1082,7 +1082,7 @@ static rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;
 static uint8_t *fpm_table;
 static int nb_devs;
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_init)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_init);
 int cpt_fpm_init(uint64_t *fpm_table_iova)
 {
 	int i, len = 0;
@@ -1127,7 +1127,7 @@ int cpt_fpm_init(uint64_t *fpm_table_iova)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_clear);
 void cpt_fpm_clear(void)
 {
 	rte_spinlock_lock(&lock);
diff --git a/drivers/common/cpt/cpt_pmd_ops_helper.c b/drivers/common/cpt/cpt_pmd_ops_helper.c
index c7e6f37026..c5d29205f1 100644
--- a/drivers/common/cpt/cpt_pmd_ops_helper.c
+++ b/drivers/common/cpt/cpt_pmd_ops_helper.c
@@ -15,7 +15,7 @@
 #define CPT_MAX_ASYM_OP_NUM_PARAMS 5
 #define CPT_MAX_ASYM_OP_MOD_LEN 1024
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_direct_mode)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_direct_mode);
 int32_t
 cpt_pmd_ops_helper_get_mlen_direct_mode(void)
 {
@@ -30,7 +30,7 @@ cpt_pmd_ops_helper_get_mlen_direct_mode(void)
 	return len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_sg_mode)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_sg_mode);
 int
 cpt_pmd_ops_helper_get_mlen_sg_mode(void)
 {
@@ -46,7 +46,7 @@ cpt_pmd_ops_helper_get_mlen_sg_mode(void)
 	return len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_asym_get_mlen)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_asym_get_mlen);
 int
 cpt_pmd_ops_helper_asym_get_mlen(void)
 {
diff --git a/drivers/common/dpaax/caamflib.c b/drivers/common/dpaax/caamflib.c
index 82a7413b5f..b5bf48704c 100644
--- a/drivers/common/dpaax/caamflib.c
+++ b/drivers/common/dpaax/caamflib.c
@@ -15,5 +15,5 @@
  * - SEC HW block revision format is "v"
  * - SEC revision format is "x.y"
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rta_sec_era)
+RTE_EXPORT_INTERNAL_SYMBOL(rta_sec_era);
 enum rta_sec_era rta_sec_era;
diff --git a/drivers/common/dpaax/dpaa_of.c b/drivers/common/dpaax/dpaa_of.c
index 23035f530d..b58370dfca 100644
--- a/drivers/common/dpaax/dpaa_of.c
+++ b/drivers/common/dpaax/dpaa_of.c
@@ -214,7 +214,7 @@ linear_dir(struct dt_dir *d)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_init_path)
+RTE_EXPORT_INTERNAL_SYMBOL(of_init_path);
 int
 of_init_path(const char *dt_path)
 {
@@ -299,7 +299,7 @@ check_compatible(const struct dt_file *f, const char *compatible)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_find_compatible_node)
+RTE_EXPORT_INTERNAL_SYMBOL(of_find_compatible_node);
 const struct device_node *
 of_find_compatible_node(const struct device_node *from,
 			const char *type __rte_unused,
@@ -325,7 +325,7 @@ of_find_compatible_node(const struct device_node *from,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_property)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_property);
 const void *
 of_get_property(const struct device_node *from, const char *name,
 		size_t *lenp)
@@ -345,7 +345,7 @@ of_get_property(const struct device_node *from, const char *name,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_available)
+RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_available);
 bool
 of_device_is_available(const struct device_node *dev_node)
 {
@@ -362,7 +362,7 @@ of_device_is_available(const struct device_node *dev_node)
 	return false;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_find_node_by_phandle)
+RTE_EXPORT_INTERNAL_SYMBOL(of_find_node_by_phandle);
 const struct device_node *
 of_find_node_by_phandle(uint64_t ph)
 {
@@ -376,7 +376,7 @@ of_find_node_by_phandle(uint64_t ph)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_parent)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_parent);
 const struct device_node *
 of_get_parent(const struct device_node *dev_node)
 {
@@ -392,7 +392,7 @@ of_get_parent(const struct device_node *dev_node)
 	return &d->parent->node.node;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_next_child)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_next_child);
 const struct device_node *
 of_get_next_child(const struct device_node *dev_node,
 		  const struct device_node *prev)
@@ -422,7 +422,7 @@ of_get_next_child(const struct device_node *dev_node,
 	return &c->node.node;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_n_addr_cells)
+RTE_EXPORT_INTERNAL_SYMBOL(of_n_addr_cells);
 uint32_t
 of_n_addr_cells(const struct device_node *dev_node)
 {
@@ -467,7 +467,7 @@ of_n_size_cells(const struct device_node *dev_node)
 	return OF_DEFAULT_NS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_address);
 const uint32_t *
 of_get_address(const struct device_node *dev_node, size_t idx,
 	       uint64_t *size, uint32_t *flags __rte_unused)
@@ -497,7 +497,7 @@ of_get_address(const struct device_node *dev_node, size_t idx,
 	return (const uint32_t *)buf;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_translate_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_translate_address);
 uint64_t
 of_translate_address(const struct device_node *dev_node,
 		     const uint32_t *addr)
@@ -544,7 +544,7 @@ of_translate_address(const struct device_node *dev_node,
 	return phys_addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_compatible)
+RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_compatible);
 bool
 of_device_is_compatible(const struct device_node *dev_node,
 			const char *compatible)
@@ -585,7 +585,7 @@ static const void *of_get_mac_addr(const struct device_node *np,
  * this case, the real MAC is in 'local-mac-address', and 'mac-address' exists
  * but is all zeros.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_mac_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_mac_address);
 const void *of_get_mac_address(const struct device_node *np)
 {
 	const void *addr;
diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c
index 1220d9654b..59cc65e9d4 100644
--- a/drivers/common/dpaax/dpaax_iova_table.c
+++ b/drivers/common/dpaax/dpaax_iova_table.c
@@ -9,7 +9,7 @@
 #include "dpaax_logs.h"
 
 /* Global table reference */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_p)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_p);
 struct dpaax_iova_table *dpaax_iova_table_p;
 
 static int dpaax_handle_memevents(void);
@@ -155,7 +155,7 @@ read_memory_node(unsigned int *count)
 	return nodes;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_populate)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_populate);
 int
 dpaax_iova_table_populate(void)
 {
@@ -257,7 +257,7 @@ dpaax_iova_table_populate(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_depopulate)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_depopulate);
 void
 dpaax_iova_table_depopulate(void)
 {
@@ -267,7 +267,7 @@ dpaax_iova_table_depopulate(void)
 	DPAAX_DEBUG("IOVA Table cleaned");
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_update)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_update);
 int
 dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
 {
@@ -354,7 +354,7 @@ dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
  * Dump the table, with its entries, on screen. Only works in Debug Mode
  * Not for weak hearted - the tables can get quite large
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_dump);
 void
 dpaax_iova_table_dump(void)
 {
@@ -467,5 +467,5 @@ dpaax_handle_memevents(void)
 					       dpaax_memevent_cb, NULL);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_logger)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_logger);
 RTE_LOG_REGISTER_DEFAULT(dpaax_logger, ERR);
diff --git a/drivers/common/ionic/ionic_common_uio.c b/drivers/common/ionic/ionic_common_uio.c
index aaefab918c..b21e24573e 100644
--- a/drivers/common/ionic/ionic_common_uio.c
+++ b/drivers/common/ionic/ionic_common_uio.c
@@ -104,7 +104,7 @@ uio_get_idx_for_devname(struct uio_name *name_cache, char *devname)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mnet_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mnet_devices);
 void
 ionic_uio_scan_mnet_devices(void)
 {
@@ -148,7 +148,7 @@ ionic_uio_scan_mnet_devices(void)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mcrypt_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mcrypt_devices);
 void
 ionic_uio_scan_mcrypt_devices(void)
 {
@@ -304,7 +304,7 @@ uio_get_map_res_addr(int uio_idx, int size, int res_idx)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_get_rsrc)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_get_rsrc);
 void
 ionic_uio_get_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 {
@@ -323,7 +323,7 @@ ionic_uio_get_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 	bar->vaddr = ((char *)bar->vaddr) + offs;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_rel_rsrc)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_rel_rsrc);
 void
 ionic_uio_rel_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_auxiliary.c b/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
index 3ee2f4638a..81ff1ded67 100644
--- a/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
+++ b/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
@@ -19,7 +19,7 @@
 #define AUXILIARY_SYSFS_PATH "/sys/bus/auxiliary/devices"
 #define MLX5_AUXILIARY_PREFIX "mlx5_core.sf."
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_auxiliary_get_child_name)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_auxiliary_get_child_name);
 int
 mlx5_auxiliary_get_child_name(const char *dev, const char *node,
 			      char *child, size_t size)
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index 2867e21618..d045f77d33 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -28,11 +28,11 @@
 #include "mlx5_glue.h"
 
 #ifdef MLX5_GLUE
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 const struct mlx5_glue *mlx5_glue;
 #endif
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_pci_addr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_pci_addr);
 int
 mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr)
 {
@@ -92,7 +92,7 @@ mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr)
  * @return
  *   port_name field set according to recognized name format.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_translate_port_name)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_translate_port_name);
 void
 mlx5_translate_port_name(const char *port_name_in,
 			 struct mlx5_switch_info *port_info_out)
@@ -159,7 +159,7 @@ mlx5_translate_port_name(const char *port_name_in,
 	port_info_out->name_type = MLX5_PHYS_PORT_NAME_TYPE_UNKNOWN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_ifname_sysfs)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_ifname_sysfs);
 int
 mlx5_get_ifname_sysfs(const char *ibdev_path, char *ifname)
 {
@@ -893,7 +893,7 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
  * @return
  *   Pointer to an `ibv_context` on success, or NULL on failure, with `rte_errno` set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx);
 void *
 mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 {
@@ -931,7 +931,7 @@ mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 	return (void *)ctx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_device_guid)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_device_guid);
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
@@ -977,7 +977,7 @@ mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
  * indirect mkey created by the DevX API.
  * This mkey should be used for DevX commands requesting mkey as a parameter.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create);
 int
 mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 			    size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr)
@@ -1017,7 +1017,7 @@ mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy);
 void
 mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
 {
@@ -1049,7 +1049,7 @@ mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
  *  - Interrupt handle on success.
  *  - NULL on failure, with rte_errno set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_create);
 struct rte_intr_handle *
 mlx5_os_interrupt_handler_create(int mode, bool set_fd_nonblock, int fd,
 				 rte_intr_callback_fn cb, void *cb_arg)
@@ -1151,7 +1151,7 @@ mlx5_intr_callback_unregister(const struct rte_intr_handle *handle,
  *   Callback argument for cb.
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_destroy);
 void
 mlx5_os_interrupt_handler_destroy(struct rte_intr_handle *intr_handle,
 				  rte_intr_callback_fn cb, void *cb_arg)
diff --git a/drivers/common/mlx5/linux/mlx5_common_verbs.c b/drivers/common/mlx5/linux/mlx5_common_verbs.c
index 98260df470..aba729a80a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_verbs.c
+++ b/drivers/common/mlx5/linux/mlx5_common_verbs.c
@@ -106,7 +106,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx)
  * @return
  *   0 on successful registration, -1 otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_reg_mr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_reg_mr);
 int
 mlx5_common_verbs_reg_mr(void *pd, void *addr, size_t length,
 			 struct mlx5_pmd_mr *pmd_mr)
@@ -136,7 +136,7 @@ mlx5_common_verbs_reg_mr(void *pd, void *addr, size_t length,
  *   pmd_mr struct set with lkey, address, length and pointer to mr object
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_dereg_mr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_dereg_mr);
 void
 mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
 {
@@ -154,7 +154,7 @@ mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
  * @param[out] dereg_mr_cb
  *   Pointer to dereg_mr func
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb);
 void
 mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index a91eaa429d..0e35fd91c7 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -1580,7 +1580,7 @@ mlx5_glue_dv_destroy_steering_anchor(struct mlx5dv_steering_anchor *sa)
 #endif
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.version = MLX5_GLUE_VERSION,
diff --git a/drivers/common/mlx5/linux/mlx5_nl.c b/drivers/common/mlx5/linux/mlx5_nl.c
index 86166e92d0..5810161631 100644
--- a/drivers/common/mlx5/linux/mlx5_nl.c
+++ b/drivers/common/mlx5/linux/mlx5_nl.c
@@ -196,7 +196,7 @@ RTE_ATOMIC(uint32_t) atomic_sn;
  *   A file descriptor on success, a negative errno value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_init);
 int
 mlx5_nl_init(int protocol, int groups)
 {
@@ -643,7 +643,7 @@ mlx5_nl_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
  * @return
  *    0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vf_mac_addr_modify)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vf_mac_addr_modify);
 int
 mlx5_nl_vf_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
 			   struct rte_ether_addr *mac, int vf_index)
@@ -731,7 +731,7 @@ mlx5_nl_vf_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_add)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_add);
 int
 mlx5_nl_mac_addr_add(int nlsk_fd, unsigned int iface_idx,
 		     uint64_t *mac_own, struct rte_ether_addr *mac,
@@ -769,7 +769,7 @@ mlx5_nl_mac_addr_add(int nlsk_fd, unsigned int iface_idx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_remove)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_remove);
 int
 mlx5_nl_mac_addr_remove(int nlsk_fd, unsigned int iface_idx, uint64_t *mac_own,
 			struct rte_ether_addr *mac, uint32_t index)
@@ -794,7 +794,7 @@ mlx5_nl_mac_addr_remove(int nlsk_fd, unsigned int iface_idx, uint64_t *mac_own,
  * @param n
  *   @p mac_addrs array size.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_sync)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_sync);
 void
 mlx5_nl_mac_addr_sync(int nlsk_fd, unsigned int iface_idx,
 		      struct rte_ether_addr *mac_addrs, int n)
@@ -851,7 +851,7 @@ mlx5_nl_mac_addr_sync(int nlsk_fd, unsigned int iface_idx,
  * @param mac_own
  *   BITFIELD_DECLARE array to store the mac.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_flush)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_flush);
 void
 mlx5_nl_mac_addr_flush(int nlsk_fd, unsigned int iface_idx,
 		       struct rte_ether_addr *mac_addrs, int n,
@@ -930,7 +930,7 @@ mlx5_nl_device_flags(int nlsk_fd, unsigned int iface_idx, uint32_t flags,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_promisc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_promisc);
 int
 mlx5_nl_promisc(int nlsk_fd, unsigned int iface_idx, int enable)
 {
@@ -957,7 +957,7 @@ mlx5_nl_promisc(int nlsk_fd, unsigned int iface_idx, int enable)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_allmulti)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_allmulti);
 int
 mlx5_nl_allmulti(int nlsk_fd, unsigned int iface_idx, int enable)
 {
@@ -1147,7 +1147,7 @@ mlx5_nl_port_info(int nl, uint32_t pindex, struct mlx5_nl_port_info *data)
  *   A valid (nonzero) interface index on success, 0 otherwise and rte_errno
  *   is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_ifindex)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_ifindex);
 unsigned int
 mlx5_nl_ifindex(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info *dev_info)
 {
@@ -1204,7 +1204,7 @@ mlx5_nl_ifindex(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info
  *   Port state (ibv_port_state) on success, negative on error
  *   and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_port_state)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_port_state);
 int
 mlx5_nl_port_state(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info *dev_info)
 {
@@ -1240,7 +1240,7 @@ mlx5_nl_port_state(int nl, const char *name, uint32_t pindex, struct mlx5_dev_in
  *   A valid (nonzero) number of ports on success, 0 otherwise
  *   and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_portnum)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_portnum);
 unsigned int
 mlx5_nl_portnum(int nl, const char *name, struct mlx5_dev_info *dev_info)
 {
@@ -1447,7 +1447,7 @@ mlx5_nl_switch_info_cb(struct nlmsghdr *nh, void *arg)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_switch_info)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_switch_info);
 int
 mlx5_nl_switch_info(int nl, unsigned int ifindex,
 		    struct mlx5_switch_info *info)
@@ -1498,7 +1498,7 @@ mlx5_nl_switch_info(int nl, unsigned int ifindex,
  * @param[in] ifindex
  *   Interface index of network device to delete.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_delete)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_delete);
 void
 mlx5_nl_vlan_vmwa_delete(struct mlx5_nl_vlan_vmwa_context *vmwa,
 		      uint32_t ifindex)
@@ -1576,7 +1576,7 @@ nl_attr_nest_end(struct nlmsghdr *nlh, struct nlattr *nest)
  * @param[in] tag
  *   VLAN tag for VLAN network device to create.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_create);
 uint32_t
 mlx5_nl_vlan_vmwa_create(struct mlx5_nl_vlan_vmwa_context *vmwa,
 			 uint32_t ifindex, uint16_t tag)
@@ -1729,7 +1729,7 @@ mlx5_nl_generic_family_id_get(int nlsk_fd, const char *name)
  *   otherwise and rte_errno is set.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_family_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_family_id_get);
 int
 mlx5_nl_devlink_family_id_get(int nlsk_fd)
 {
@@ -1956,7 +1956,7 @@ mlx5_nl_enable_roce_set(int nlsk_fd, int family_id, const char *pci_addr,
  * @return
  *  0 on success, negative on failure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_parse_link_status_update)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_parse_link_status_update);
 int
 mlx5_nl_parse_link_status_update(struct nlmsghdr *hdr, uint32_t *ifindex)
 {
@@ -1988,7 +1988,7 @@ mlx5_nl_parse_link_status_update(struct nlmsghdr *hdr, uint32_t *ifindex)
  *  0 on success, including the case when there are no events.
  *  Negative on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_read_events)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_read_events);
 int
 mlx5_nl_read_events(int nlsk_fd, mlx5_nl_event_cb *cb, void *cb_arg)
 {
@@ -2076,7 +2076,7 @@ mlx5_nl_esw_multiport_cb(struct nlmsghdr *nh, void *arg)
 
 #define NL_ESW_MULTIPORT_PARAM "esw_multiport"
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_esw_multiport_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_esw_multiport_get);
 int
 mlx5_nl_devlink_esw_multiport_get(int nlsk_fd, int family_id, const char *pci_addr, int *enable)
 {
@@ -2115,14 +2115,14 @@ mlx5_nl_devlink_esw_multiport_get(int nlsk_fd, int family_id, const char *pci_ad
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_init);
 int
 mlx5_nl_rdma_monitor_init(void)
 {
 	return mlx5_nl_init(NETLINK_RDMA, RDMA_NL_GROUP_NOTIFICATION);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_info_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_info_get);
 void
 mlx5_nl_rdma_monitor_info_get(struct nlmsghdr *hdr, struct mlx5_nl_port_info *data)
 {
@@ -2217,7 +2217,7 @@ mlx5_nl_rdma_monitor_cap_get_cb(struct nlmsghdr *hdr, void *arg)
  * @return
  *   0 on success, negative on error and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_cap_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_cap_get);
 int
 mlx5_nl_rdma_monitor_cap_get(int nl, uint8_t *cap)
 {
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 84a93e7dbd..98249c2c9e 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -21,7 +21,7 @@
 #include "mlx5_common_defs.h"
 #include "mlx5_common_private.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(haswell_broadwell_cpu)
+RTE_EXPORT_INTERNAL_SYMBOL(haswell_broadwell_cpu);
 uint8_t haswell_broadwell_cpu;
 
 /* Driver type key for new device global syntax. */
@@ -138,7 +138,7 @@ driver_get(uint32_t class)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_kvargs_process)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_kvargs_process);
 int
 mlx5_kvargs_process(struct mlx5_kvargs_ctrl *mkvlist, const char *const keys[],
 		    arg_handler_t handler, void *opaque_arg)
@@ -475,7 +475,7 @@ to_mlx5_device(const struct rte_device *rte_dev)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_to_pci_str)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_to_pci_str);
 int
 mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size)
 {
@@ -525,7 +525,7 @@ mlx5_dev_mempool_register(struct mlx5_common_device *cdev,
  * @param mp
  *   Mempool being unregistered.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_unregister);
 void
 mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev,
 			    struct rte_mempool *mp)
@@ -605,7 +605,7 @@ mlx5_dev_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
  * Callbacks addresses are local in each process.
  * Therefore, each process can register private callbacks.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_subscribe)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_subscribe);
 int
 mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev)
 {
@@ -1235,7 +1235,7 @@ mlx5_common_dev_dma_unmap(struct rte_device *rte_dev, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_class_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_class_driver_register);
 void
 mlx5_class_driver_register(struct mlx5_class_driver *driver)
 {
@@ -1258,7 +1258,7 @@ static bool mlx5_common_initialized;
  * for multiple PMDs. Each mlx5 PMD that depends on mlx5_common module,
  * must invoke in its constructor.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_init);
 void
 mlx5_common_init(void)
 {
@@ -1417,7 +1417,7 @@ mlx5_devx_alloc_uar(struct mlx5_common_device *cdev)
 	return uar;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_release)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_release);
 void
 mlx5_devx_uar_release(struct mlx5_uar *uar)
 {
@@ -1426,7 +1426,7 @@ mlx5_devx_uar_release(struct mlx5_uar *uar)
 	memset(uar, 0, sizeof(*uar));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_prepare)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_prepare);
 int
 mlx5_devx_uar_prepare(struct mlx5_common_device *cdev, struct mlx5_uar *uar)
 {
diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 18a53769c9..929b794ba7 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -24,7 +24,7 @@
  * @param[in] cq
  *   DevX CQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_destroy);
 void
 mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq)
 {
@@ -81,7 +81,7 @@ mlx5_cq_init(struct mlx5_devx_cq *cq_obj, uint16_t cq_size)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_create);
 int
 mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
 		    struct mlx5_devx_cq_attr *attr, int socket)
@@ -197,7 +197,7 @@ mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
  * @param[in] sq
  *   DevX SQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_destroy);
 void
 mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
 {
@@ -242,7 +242,7 @@ mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_create);
 int
 mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 		    struct mlx5_devx_create_sq_attr *attr, int socket)
@@ -380,7 +380,7 @@ mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
  * @param[in] qp
  *   DevX QP to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_destroy);
 void
 mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
 {
@@ -419,7 +419,7 @@ mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_create);
 int
 mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint32_t queue_size,
 		    struct mlx5_devx_qp_attr *attr, int socket)
@@ -490,7 +490,7 @@ mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint32_t queue_size,
  * @param[in] rq
  *   DevX RQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_destroy);
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
@@ -766,7 +766,7 @@ mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_create);
 int
 mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
 		    uint32_t wqe_size, uint16_t log_wqbb_n,
@@ -790,7 +790,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
  * @return
  *	 0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp2rts)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp2rts);
 int
 mlx5_devx_qp2rts(struct mlx5_devx_qp *qp, uint32_t remote_qp_id)
 {
diff --git a/drivers/common/mlx5/mlx5_common_mp.c b/drivers/common/mlx5/mlx5_common_mp.c
index 1ff268f348..44ccee4cfa 100644
--- a/drivers/common/mlx5/mlx5_common_mp.c
+++ b/drivers/common/mlx5/mlx5_common_mp.c
@@ -25,7 +25,7 @@
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mr_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mr_create);
 int
 mlx5_mp_req_mr_create(struct mlx5_common_device *cdev, uintptr_t addr)
 {
@@ -65,7 +65,7 @@ mlx5_mp_req_mr_create(struct mlx5_common_device *cdev, uintptr_t addr)
  * @param reg
  *   True to register the mempool, False to unregister.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mempool_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mempool_reg);
 int
 mlx5_mp_req_mempool_reg(struct mlx5_common_device *cdev,
 			struct rte_mempool *mempool, bool reg,
@@ -116,7 +116,7 @@ mlx5_mp_req_mempool_reg(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_queue_state_modify)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_queue_state_modify);
 int
 mlx5_mp_req_queue_state_modify(struct mlx5_mp_id *mp_id,
 			       struct mlx5_mp_arg_queue_state_modify *sm)
@@ -155,7 +155,7 @@ mlx5_mp_req_queue_state_modify(struct mlx5_mp_id *mp_id,
  * @return
  *   fd on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_verbs_cmd_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_verbs_cmd_fd);
 int
 mlx5_mp_req_verbs_cmd_fd(struct mlx5_mp_id *mp_id)
 {
@@ -197,7 +197,7 @@ mlx5_mp_req_verbs_cmd_fd(struct mlx5_mp_id *mp_id)
 /**
  * Initialize by primary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_primary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_primary);
 int
 mlx5_mp_init_primary(const char *name, const rte_mp_t primary_action)
 {
@@ -215,7 +215,7 @@ mlx5_mp_init_primary(const char *name, const rte_mp_t primary_action)
 /**
  * Un-initialize by primary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_primary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_primary);
 void
 mlx5_mp_uninit_primary(const char *name)
 {
@@ -226,7 +226,7 @@ mlx5_mp_uninit_primary(const char *name)
 /**
  * Initialize by secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_secondary);
 int
 mlx5_mp_init_secondary(const char *name, const rte_mp_t secondary_action)
 {
@@ -237,7 +237,7 @@ mlx5_mp_init_secondary(const char *name, const rte_mp_t secondary_action)
 /**
  * Un-initialize by secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_secondary);
 void
 mlx5_mp_uninit_secondary(const char *name)
 {
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index c41ffff2d5..a928515728 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -52,7 +52,7 @@ struct mlx5_mempool_reg {
 	bool is_extmem;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mprq_buf_free_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mprq_buf_free_cb);
 void
 mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
 {
@@ -251,7 +251,7 @@ mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket)
  * @param bt
  *   Pointer to B-tree structure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_btree_free)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_btree_free);
 void
 mlx5_mr_btree_free(struct mlx5_mr_btree *bt)
 {
@@ -302,7 +302,7 @@ mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_ctrl_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_ctrl_init);
 int
 mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, uint32_t *dev_gen_ptr,
 		  int socket)
@@ -969,7 +969,7 @@ mlx5_mr_create_primary(void *pd,
  * @return
  *   Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_create);
 uint32_t
 mlx5_mr_create(struct mlx5_common_device *cdev,
 	       struct mlx5_mr_share_cache *share_cache,
@@ -1064,7 +1064,7 @@ mr_lookup_caches(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *   Searched LKey on success, UINT32_MAX on no match.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_addr2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_addr2mr_bh);
 uint32_t
 mlx5_mr_addr2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr)
 {
@@ -1155,7 +1155,7 @@ mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket)
  * @param mr_ctrl
  *   Pointer to per-queue MR local cache.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_flush_local_cache)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_flush_local_cache);
 void
 mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl)
 {
@@ -1810,7 +1810,7 @@ mlx5_mr_mempool_register_secondary(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_register);
 int
 mlx5_mr_mempool_register(struct mlx5_common_device *cdev,
 			 struct rte_mempool *mp, bool is_extmem)
@@ -1876,7 +1876,7 @@ mlx5_mr_mempool_unregister_secondary(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_unregister);
 int
 mlx5_mr_mempool_unregister(struct mlx5_common_device *cdev,
 			   struct rte_mempool *mp)
@@ -1988,7 +1988,7 @@ mlx5_lookup_mempool_regs(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *  0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_populate_cache)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_populate_cache);
 int
 mlx5_mr_mempool_populate_cache(struct mlx5_mr_ctrl *mr_ctrl,
 			       struct rte_mempool *mp)
@@ -2048,7 +2048,7 @@ mlx5_mr_mempool_populate_cache(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *   MR lkey on success, UINT32_MAX on failure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool2mr_bh);
 uint32_t
 mlx5_mr_mempool2mr_bh(struct mlx5_mr_ctrl *mr_ctrl,
 		      struct rte_mempool *mp, uintptr_t addr)
@@ -2075,7 +2075,7 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_ctrl *mr_ctrl,
 	return lkey;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mb2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mb2mr_bh);
 uint32_t
 mlx5_mr_mb2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb)
 {
diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c
index 8bd43bc166..10b1c90fa9 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -103,14 +103,14 @@ pci_ids_table_update(const struct rte_pci_id *driver_id_table)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_pci)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_pci);
 bool
 mlx5_dev_is_pci(const struct rte_device *dev)
 {
 	return strcmp(dev->bus->name, "pci") == 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_vf_pci)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_vf_pci);
 bool
 mlx5_dev_is_vf_pci(const struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 14056cebcb..88f2d48c2e 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -27,7 +27,7 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_create);
 struct mlx5_list *
 mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 		 mlx5_list_create_cb cb_create,
@@ -122,7 +122,7 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
 	return entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_lookup)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_lookup);
 struct mlx5_list_entry *
 mlx5_list_lookup(struct mlx5_list *list, void *ctx)
 {
@@ -263,7 +263,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	return local_entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_register);
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -323,7 +323,7 @@ _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_unregister);
 int
 mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
@@ -371,7 +371,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_destroy);
 void
 mlx5_list_destroy(struct mlx5_list *list)
 {
@@ -379,7 +379,7 @@ mlx5_list_destroy(struct mlx5_list *list)
 	mlx5_free(list);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_get_entry_num)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_get_entry_num);
 uint32_t
 mlx5_list_get_entry_num(struct mlx5_list *list)
 {
@@ -389,7 +389,7 @@ mlx5_list_get_entry_num(struct mlx5_list *list)
 
 /********************* Hash List **********************/
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_create);
 struct mlx5_hlist *
 mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 		  bool lcores_share, void *ctx, mlx5_list_create_cb cb_create,
@@ -455,7 +455,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 }
 
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_lookup)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_lookup);
 struct mlx5_list_entry *
 mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
@@ -468,7 +468,7 @@ mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 	return _mlx5_list_lookup(&h->buckets[idx].l, &h->l_const, ctx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_register);
 struct mlx5_list_entry*
 mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
@@ -497,7 +497,7 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 	return entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_unregister);
 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
@@ -516,7 +516,7 @@ mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_destroy);
 void
 mlx5_hlist_destroy(struct mlx5_hlist *h)
 {
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index cf601254ab..82ba2106a8 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -87,7 +87,7 @@ mlx5_devx_get_hca_cap(void *ctx, uint32_t *in, uint32_t *out,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_read)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_read);
 int
 mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id, uint32_t arg,
 			    uint32_t *data, uint32_t dw_cnt)
@@ -138,7 +138,7 @@ mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id, uint32_t arg,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_write)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_write);
 int
 mlx5_devx_cmd_register_write(void *ctx, uint16_t reg_id, uint32_t arg,
 			     uint32_t *data, uint32_t dw_cnt)
@@ -179,7 +179,7 @@ mlx5_devx_cmd_register_write(void *ctx, uint16_t reg_id, uint32_t arg,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc_general)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc_general);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc_general(void *ctx,
 		struct mlx5_devx_counter_attr *attr)
@@ -229,7 +229,7 @@ mlx5_devx_cmd_flow_counter_alloc_general(void *ctx,
  *   Pointer to counter object on success, a negative value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
 {
@@ -281,7 +281,7 @@ mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_query);
 int
 mlx5_devx_cmd_flow_counter_query(struct mlx5_devx_obj *dcs,
 				 int clear, uint32_t n_counters,
@@ -343,7 +343,7 @@ mlx5_devx_cmd_flow_counter_query(struct mlx5_devx_obj *dcs,
  *   Pointer to Devx mkey on success, a negative value otherwise and rte_errno
  *   is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_mkey_create);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_mkey_create(void *ctx,
 			  struct mlx5_devx_mkey_attr *attr)
@@ -447,7 +447,7 @@ mlx5_devx_cmd_mkey_create(void *ctx,
  * @return
  *   0 on success, non-zero value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_get_out_command_status)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_get_out_command_status);
 int
 mlx5_devx_get_out_command_status(void *out)
 {
@@ -474,7 +474,7 @@ mlx5_devx_get_out_command_status(void *out)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_destroy);
 int
 mlx5_devx_cmd_destroy(struct mlx5_devx_obj *obj)
 {
@@ -634,7 +634,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
  * @return
  *   0 on success, a negative errno otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_match_sample_info_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_match_sample_info_query);
 int
 mlx5_devx_cmd_match_sample_info_query(void *ctx, uint32_t sample_field_id,
 				      struct mlx5_devx_match_sample_info_query_attr *attr)
@@ -672,7 +672,7 @@ mlx5_devx_cmd_match_sample_info_query(void *ctx, uint32_t sample_field_id,
 #endif
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_parse_samples)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_parse_samples);
 int
 mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 				  uint32_t *ids,
@@ -727,7 +727,7 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flex_parser)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flex_parser);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flex_parser(void *ctx,
 				 struct mlx5_devx_graph_node_attr *data)
@@ -928,7 +928,7 @@ mlx5_devx_query_pkt_integrity_match(void *hcattr)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_hca_attr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_hca_attr);
 int
 mlx5_devx_cmd_query_hca_attr(void *ctx,
 			     struct mlx5_hca_attr *attr)
@@ -1438,7 +1438,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_qp_query_tis_td)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_qp_query_tis_td);
 int
 mlx5_devx_cmd_qp_query_tis_td(void *qp, uint32_t tis_num,
 			      uint32_t *tis_td)
@@ -1525,7 +1525,7 @@ devx_cmd_fill_wq_data(void *wq_ctx, struct mlx5_devx_wq_attr *wq_attr)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rq(void *ctx,
 			struct mlx5_devx_create_rq_attr *rq_attr,
@@ -1584,7 +1584,7 @@ mlx5_devx_cmd_create_rq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rq);
 int
 mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			struct mlx5_devx_modify_rq_attr *rq_attr)
@@ -1638,7 +1638,7 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_rq);
 int
 mlx5_devx_cmd_query_rq(struct mlx5_devx_obj *rq_obj, void *out, size_t outlen)
 {
@@ -1668,7 +1668,7 @@ mlx5_devx_cmd_query_rq(struct mlx5_devx_obj *rq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rmp)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rmp);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rmp(void *ctx,
 			 struct mlx5_devx_create_rmp_attr *rmp_attr,
@@ -1716,7 +1716,7 @@ mlx5_devx_cmd_create_rmp(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tir)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tir);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_tir(void *ctx,
 			 struct mlx5_devx_tir_attr *tir_attr)
@@ -1785,7 +1785,7 @@ mlx5_devx_cmd_create_tir(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_tir)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_tir);
 int
 mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
 			 struct mlx5_devx_modify_tir_attr *modify_tir_attr)
@@ -1870,7 +1870,7 @@ mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rqt)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rqt);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rqt(void *ctx,
 			 struct mlx5_devx_rqt_attr *rqt_attr)
@@ -1925,7 +1925,7 @@ mlx5_devx_cmd_create_rqt(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rqt)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rqt);
 int
 mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			 struct mlx5_devx_rqt_attr *rqt_attr)
@@ -1974,7 +1974,7 @@ mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  **/
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_sq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_sq(void *ctx,
 			struct mlx5_devx_create_sq_attr *sq_attr)
@@ -2041,7 +2041,7 @@ mlx5_devx_cmd_create_sq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_sq);
 int
 mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
 			struct mlx5_devx_modify_sq_attr *sq_attr)
@@ -2081,7 +2081,7 @@ mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_sq);
 int
 mlx5_devx_cmd_query_sq(struct mlx5_devx_obj *sq_obj, void *out, size_t outlen)
 {
@@ -2109,7 +2109,7 @@ mlx5_devx_cmd_query_sq(struct mlx5_devx_obj *sq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tis)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tis);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_tis(void *ctx,
 			 struct mlx5_devx_tis_attr *tis_attr)
@@ -2153,7 +2153,7 @@ mlx5_devx_cmd_create_tis(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_td)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_td);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_td(void *ctx)
 {
@@ -2196,7 +2196,7 @@ mlx5_devx_cmd_create_td(void *ctx)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_dump);
 int
 mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused,
 			void *rx_domain __rte_unused,
@@ -2222,7 +2222,7 @@ mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused,
 	return -ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_single_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_single_dump);
 int
 mlx5_devx_cmd_flow_single_dump(void *rule_info __rte_unused,
 			FILE *file __rte_unused)
@@ -2248,7 +2248,7 @@ mlx5_devx_cmd_flow_single_dump(void *rule_info __rte_unused,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_cq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_cq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_cq(void *ctx, struct mlx5_devx_cq_attr *attr)
 {
@@ -2317,7 +2317,7 @@ mlx5_devx_cmd_create_cq(void *ctx, struct mlx5_devx_cq_attr *attr)
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_cq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_cq);
 int
 mlx5_devx_cmd_query_cq(struct mlx5_devx_obj *cq_obj, void *out, size_t outlen)
 {
@@ -2345,7 +2345,7 @@ mlx5_devx_cmd_query_cq(struct mlx5_devx_obj *cq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_virtq(void *ctx,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2422,7 +2422,7 @@ mlx5_devx_cmd_create_virtq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_virtq);
 int
 mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2521,7 +2521,7 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtq);
 int
 mlx5_devx_cmd_query_virtq(struct mlx5_devx_obj *virtq_obj,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2564,7 +2564,7 @@ mlx5_devx_cmd_query_virtq(struct mlx5_devx_obj *virtq_obj,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_qp)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_qp);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_qp(void *ctx,
 			struct mlx5_devx_qp_attr *attr)
@@ -2667,7 +2667,7 @@ mlx5_devx_cmd_create_qp(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_qp_state)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_qp_state);
 int
 mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 			      uint32_t remote_qp_id)
@@ -2745,7 +2745,7 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtio_q_counters)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtio_q_counters);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
 {
@@ -2777,7 +2777,7 @@ mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
 	return couners_obj;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtio_q_counters)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtio_q_counters);
 int
 mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
 				   struct mlx5_devx_virtio_q_couners_attr *attr)
@@ -2827,7 +2827,7 @@ mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_hit_aso_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_hit_aso_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flow_hit_aso_obj(void *ctx, uint32_t pd)
 {
@@ -2870,7 +2870,7 @@ mlx5_devx_cmd_create_flow_hit_aso_obj(void *ctx, uint32_t pd)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_alloc_pd)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_alloc_pd);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_alloc_pd(void *ctx)
 {
@@ -2911,7 +2911,7 @@ mlx5_devx_cmd_alloc_pd(void *ctx)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_meter_aso_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_meter_aso_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flow_meter_aso_obj(void *ctx, uint32_t pd,
 						uint32_t log_obj_size)
@@ -2965,7 +2965,7 @@ mlx5_devx_cmd_create_flow_meter_aso_obj(void *ctx, uint32_t pd,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_conn_track_offload_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_conn_track_offload_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_conn_track_offload_obj(void *ctx, uint32_t pd,
 					    uint32_t log_obj_size)
@@ -3012,7 +3012,7 @@ mlx5_devx_cmd_create_conn_track_offload_obj(void *ctx, uint32_t pd,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_geneve_tlv_option)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_geneve_tlv_option);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_geneve_tlv_option(void *ctx,
 				  struct mlx5_devx_geneve_tlv_option_attr *attr)
@@ -3075,7 +3075,7 @@ mlx5_devx_cmd_create_geneve_tlv_option(void *ctx,
  * @return
  *   0 on success, a negative errno otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_geneve_tlv_option)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_geneve_tlv_option);
 int
 mlx5_devx_cmd_query_geneve_tlv_option(void *ctx,
 				      struct mlx5_devx_obj *geneve_tlv_opt_obj,
@@ -3113,7 +3113,7 @@ mlx5_devx_cmd_query_geneve_tlv_option(void *ctx,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_wq_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_wq_query);
 int
 mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id)
 {
@@ -3154,7 +3154,7 @@ mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id)
  *   Pointer to counter object on success, a NULL value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_alloc);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome)
 {
@@ -3196,7 +3196,7 @@ mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_query);
 int
 mlx5_devx_cmd_queue_counter_query(struct mlx5_devx_obj *dcs, int clear,
 				  uint32_t *out_of_buffers)
@@ -3232,7 +3232,7 @@ mlx5_devx_cmd_queue_counter_query(struct mlx5_devx_obj *dcs, int clear,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_dek_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_dek_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_dek_obj(void *ctx, struct mlx5_devx_dek_attr *attr)
 {
@@ -3283,7 +3283,7 @@ mlx5_devx_cmd_create_dek_obj(void *ctx, struct mlx5_devx_dek_attr *attr)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_import_kek_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_import_kek_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_import_kek_obj(void *ctx,
 				    struct mlx5_devx_import_kek_attr *attr)
@@ -3331,7 +3331,7 @@ mlx5_devx_cmd_create_import_kek_obj(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_credential_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_credential_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_credential_obj(void *ctx,
 				    struct mlx5_devx_credential_attr *attr)
@@ -3380,7 +3380,7 @@ mlx5_devx_cmd_create_credential_obj(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_crypto_login_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_crypto_login_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_crypto_login_obj(void *ctx,
 				      struct mlx5_devx_crypto_login_attr *attr)
@@ -3432,7 +3432,7 @@ mlx5_devx_cmd_create_crypto_login_obj(void *ctx,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_lag)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_lag);
 int
 mlx5_devx_cmd_query_lag(void *ctx,
 			struct mlx5_devx_lag_context *lag_ctx)
diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c
index 28fb19b285..a1077f59d4 100644
--- a/drivers/common/mlx5/mlx5_malloc.c
+++ b/drivers/common/mlx5/mlx5_malloc.c
@@ -169,7 +169,7 @@ mlx5_malloc_socket_internal(size_t size, unsigned int align, int socket, bool ze
 		      rte_malloc_socket(NULL, size, align, socket);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_malloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_malloc);
 void *
 mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
 {
@@ -220,7 +220,7 @@ mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_realloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_realloc);
 void *
 mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
 	     int socket)
@@ -268,7 +268,7 @@ mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
 	return new_addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_free)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_free);
 void
 mlx5_free(void *addr)
 {
@@ -289,7 +289,7 @@ mlx5_free(void *addr)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_memory_stat_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_memory_stat_dump);
 void
 mlx5_memory_stat_dump(void)
 {
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index 7fac361460..3212f13369 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -282,7 +282,7 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
  * @return
  *   Pointer to an `ibv_context` on success, or NULL on failure, with `rte_errno` set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx);
 void *
 mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 {
@@ -314,7 +314,7 @@ mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
  * @return
  *   umem on successful registration, NULL and errno otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_reg);
 void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
 {
@@ -345,7 +345,7 @@ mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
  * @return
  *   0 on successful release, negative number otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_dereg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_dereg);
 int
 mlx5_os_umem_dereg(void *pumem)
 {
@@ -446,7 +446,7 @@ mlx5_os_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
  *   Pointer to dereg_mr func
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb);
 void
 mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
 {
@@ -458,7 +458,7 @@ mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
  * In Windows, no need to wrap the MR, no known issue for it in kernel.
  * Use the regular function to create direct MR.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create);
 int
 mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 			    size_t length, struct mlx5_pmd_wrapped_mr *wpmd_mr)
@@ -478,7 +478,7 @@ mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy);
 void
 mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *wpmd_mr)
 {
diff --git a/drivers/common/mlx5/windows/mlx5_glue.c b/drivers/common/mlx5/windows/mlx5_glue.c
index 066e2fdce3..9c24d1c941 100644
--- a/drivers/common/mlx5/windows/mlx5_glue.c
+++ b/drivers/common/mlx5/windows/mlx5_glue.c
@@ -410,7 +410,7 @@ mlx5_glue_devx_set_mtu(void *ctx, uint32_t mtu)
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.version = MLX5_GLUE_VERSION,
diff --git a/drivers/common/mvep/mvep_common.c b/drivers/common/mvep/mvep_common.c
index 2035300cce..cede7b9004 100644
--- a/drivers/common/mvep/mvep_common.c
+++ b/drivers/common/mvep/mvep_common.c
@@ -19,7 +19,7 @@ struct mvep {
 
 static struct mvep mvep;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_init);
 int rte_mvep_init(enum mvep_module_type module __rte_unused,
 		  struct rte_kvargs *kvlist __rte_unused)
 {
@@ -36,7 +36,7 @@ int rte_mvep_init(enum mvep_module_type module __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_deinit);
 int rte_mvep_deinit(enum mvep_module_type module __rte_unused)
 {
 	mvep.ref_count--;
diff --git a/drivers/common/nfp/nfp_common.c b/drivers/common/nfp/nfp_common.c
index 475f64daab..46254499b9 100644
--- a/drivers/common/nfp/nfp_common.c
+++ b/drivers/common/nfp/nfp_common.c
@@ -15,7 +15,7 @@
  */
 #define NFP_NET_POLL_TIMEOUT    5000
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig_real)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig_real);
 int
 nfp_reconfig_real(struct nfp_hw *hw,
 		uint32_t update)
@@ -80,7 +80,7 @@ nfp_reconfig_real(struct nfp_hw *hw,
  *   - (0) if OK to reconfigure the device.
  *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig);
 int
 nfp_reconfig(struct nfp_hw *hw,
 		uint32_t ctrl,
@@ -125,7 +125,7 @@ nfp_reconfig(struct nfp_hw *hw,
  *   - (0) if OK to reconfigure the device.
  *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_ext_reconfig)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_ext_reconfig);
 int
 nfp_ext_reconfig(struct nfp_hw *hw,
 		uint32_t ctrl_ext,
@@ -153,7 +153,7 @@ nfp_ext_reconfig(struct nfp_hw *hw,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_read_mac)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_read_mac);
 void
 nfp_read_mac(struct nfp_hw *hw)
 {
@@ -166,7 +166,7 @@ nfp_read_mac(struct nfp_hw *hw)
 	memcpy(&hw->mac_addr.addr_bytes[4], &tmp, 2);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_write_mac)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_write_mac);
 void
 nfp_write_mac(struct nfp_hw *hw,
 		uint8_t *mac)
@@ -183,7 +183,7 @@ nfp_write_mac(struct nfp_hw *hw,
 			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_enable_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_enable_queues);
 void
 nfp_enable_queues(struct nfp_hw *hw,
 		uint16_t nb_rx_queues,
@@ -207,7 +207,7 @@ nfp_enable_queues(struct nfp_hw *hw,
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_disable_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_disable_queues);
 void
 nfp_disable_queues(struct nfp_hw *hw)
 {
diff --git a/drivers/common/nfp/nfp_common_pci.c b/drivers/common/nfp/nfp_common_pci.c
index 4a2fb5e82d..12c17b09b2 100644
--- a/drivers/common/nfp/nfp_common_pci.c
+++ b/drivers/common/nfp/nfp_common_pci.c
@@ -258,7 +258,7 @@ nfp_common_init(void)
 	nfp_common_initialized = true;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_class_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_class_driver_register);
 void
 nfp_class_driver_register(struct nfp_class_driver *driver)
 {
diff --git a/drivers/common/nfp/nfp_dev.c b/drivers/common/nfp/nfp_dev.c
index 486ed2cdfe..a8eb213e5a 100644
--- a/drivers/common/nfp/nfp_dev.c
+++ b/drivers/common/nfp/nfp_dev.c
@@ -50,7 +50,7 @@ const struct nfp_dev_info nfp_dev_info[NFP_DEV_CNT] = {
 	},
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_dev_info_get)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_dev_info_get);
 const struct nfp_dev_info *
 nfp_dev_info_get(uint16_t device_id)
 {
diff --git a/drivers/common/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c
index 74c7a859a4..f1b39deea7 100644
--- a/drivers/common/nitrox/nitrox_device.c
+++ b/drivers/common/nitrox/nitrox_device.c
@@ -65,7 +65,7 @@ ndev_release(struct nitrox_device *ndev)
 TAILQ_HEAD(ndrv_list, nitrox_driver);
 static struct ndrv_list ndrv_list = TAILQ_HEAD_INITIALIZER(ndrv_list);
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_register_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_register_driver);
 void
 nitrox_register_driver(struct nitrox_driver *ndrv)
 {
diff --git a/drivers/common/nitrox/nitrox_logs.c b/drivers/common/nitrox/nitrox_logs.c
index e4ebb39ff1..6187452cda 100644
--- a/drivers/common/nitrox/nitrox_logs.c
+++ b/drivers/common/nitrox/nitrox_logs.c
@@ -5,5 +5,5 @@
 #include <eal_export.h>
 #include <rte_log.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_logtype);
 RTE_LOG_REGISTER_DEFAULT(nitrox_logtype, NOTICE);
diff --git a/drivers/common/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c
index 8f481e6876..8084b1421f 100644
--- a/drivers/common/nitrox/nitrox_qp.c
+++ b/drivers/common/nitrox/nitrox_qp.c
@@ -104,7 +104,7 @@ nitrox_release_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr)
 	return rte_memzone_free(qp->cmdq.mz);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_setup);
 int
 nitrox_qp_setup(struct nitrox_qp *qp, uint8_t *bar_addr, const char *dev_name,
 		uint32_t nb_descriptors, uint8_t instr_size, int socket_id)
@@ -147,7 +147,7 @@ nitrox_release_ridq(struct nitrox_qp *qp)
 	rte_free(qp->ridq);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_release)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_release);
 int
 nitrox_qp_release(struct nitrox_qp *qp, uint8_t *bar_addr)
 {
diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 9e0bbf453f..d0018673f8 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -70,7 +70,7 @@ struct mbox_intf_ver {
 	uint32_t minor:10;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_logtype_mbox)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_logtype_mbox);
 RTE_LOG_REGISTER(octeontx_logtype_mbox, pmd.octeontx.mbox, NOTICE);
 
 static inline void
@@ -194,7 +194,7 @@ mbox_send(struct mbox *m, struct octeontx_mbox_hdr *hdr, const void *txmsg,
 	return res;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_ram_mbox_base)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_ram_mbox_base);
 int
 octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain)
 {
@@ -219,7 +219,7 @@ octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_reg);
 int
 octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain)
 {
@@ -244,7 +244,7 @@ octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_send)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_send);
 int
 octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata,
 				 uint16_t txlen, void *rxdata, uint16_t rxlen)
@@ -309,7 +309,7 @@ octeontx_check_mbox_version(struct mbox_intf_ver *app_intf_ver,
 	return result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_init)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_init);
 int
 octeontx_mbox_init(void)
 {
@@ -349,7 +349,7 @@ octeontx_mbox_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_get_global_domain)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_get_global_domain);
 uint16_t
 octeontx_get_global_domain(void)
 {
diff --git a/drivers/common/sfc_efx/sfc_base_symbols.c b/drivers/common/sfc_efx/sfc_base_symbols.c
index bbb6f39924..1f62696c3b 100644
--- a/drivers/common/sfc_efx/sfc_base_symbols.c
+++ b/drivers/common/sfc_efx/sfc_base_symbols.c
@@ -5,274 +5,274 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(efx_crc32_calculate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qprime)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpending)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_check_init_done)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpoll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_usecs_to_ticks)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qmoderate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_mac_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_vlan_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_restore)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_supported_filters)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_tx)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_local)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_full)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_eth_local)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ether_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_uc_def)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_mc_def)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_encap_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_vxlan)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_geneve)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_nvgre)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_rss_context)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_dwords)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable_unlocked)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_trigger)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_line)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_message)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fatal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_from_sdu)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_get_all_ucast_mcast)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_drain)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_up)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_multicast_list_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_include_fcs_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stat_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_get_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_upload)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_periodic)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_update)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_get_limits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_invalid)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_phy_port)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_function)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_mh_function)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_id_by_selector)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_recirc_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_ct_mark_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_mport_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_clone)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_equal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_is_valid)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decap)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_pop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_dst_mac)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_src_mac)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decr_ip_ttl)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_nat)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_push)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_encap)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_count)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_flag)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_deliver)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_drop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_specs_equal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_class_cmp)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_recirc_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_do_ct_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_outer_rule_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_dst_mac_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_src_mac_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_update)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_eh_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_get_nb_count)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_counter_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_clear_fw_rsrc_ids)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_start)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_give_credits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_mport_alloc_alias)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_read_mport_journal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_replay)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_new_epoch)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_start)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_poll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_abort)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_client_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_own_client_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_timeout)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_proxy_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_reboot)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_family)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_family_probe_bar)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_probe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_limits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_version)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_bar_region)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_vi_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_unprobe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_version)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_board_info)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_hw_unavailable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_hw_unavailable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_calculate_pcie_link_bandwidth)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_subvariant)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_fw_subvariant)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_check_pcie_link_speed)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_config_add)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_reconfigure)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_map)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_verify)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lane_count_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lp_cap_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_oui_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_media_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_module_get_info)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_fec_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_link_state_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_poll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_loopback_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_type_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_vlan_strip_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_hash_flags_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_hash_default_support_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_default_support_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc_v2)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_tbl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qflush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qenable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate_es_super_buffer)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_pkt_length_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_hash_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_get_layout)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_layout_check)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_list)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_supported_num_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_is_supported)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_describe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_add)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_reconfigure)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpace)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qflush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qenable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_write)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_post)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_post)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_dma_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso2_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_vlantci_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_checksum_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstart)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_doorbell_offset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_features)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_verify_features)
+RTE_EXPORT_INTERNAL_SYMBOL(efx_crc32_calculate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qprime);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpending);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_check_init_done);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpoll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_usecs_to_ticks);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qmoderate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_mac_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_vlan_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_restore);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_supported_filters);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_tx);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_local);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_full);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_eth_local);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ether_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_uc_def);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_mc_def);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_encap_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_vxlan);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_geneve);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_nvgre);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_rss_context);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_dwords);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable_unlocked);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_trigger);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_line);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_message);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fatal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_from_sdu);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_get_all_ucast_mcast);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_drain);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_up);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_multicast_list_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_include_fcs_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stat_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_get_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_upload);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_periodic);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_update);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_get_limits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_invalid);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_phy_port);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_function);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_mh_function);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_id_by_selector);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_recirc_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_ct_mark_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_mport_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_clone);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_equal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_is_valid);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decap);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_pop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_dst_mac);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_src_mac);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decr_ip_ttl);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_nat);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_push);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_encap);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_count);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_flag);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_deliver);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_drop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_specs_equal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_class_cmp);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_recirc_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_do_ct_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_outer_rule_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_dst_mac_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_src_mac_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_update);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_eh_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_get_nb_count);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_counter_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_clear_fw_rsrc_ids);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_start);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_give_credits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_mport_alloc_alias);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_read_mport_journal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_replay);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_new_epoch);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_start);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_poll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_abort);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_client_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_own_client_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_timeout);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_proxy_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_reboot);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_family);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_family_probe_bar);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_probe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_limits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_version);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_bar_region);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_vi_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_unprobe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_version);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_board_info);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_hw_unavailable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_hw_unavailable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_calculate_pcie_link_bandwidth);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_subvariant);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_fw_subvariant);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_check_pcie_link_speed);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_config_add);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_reconfigure);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_map);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_verify);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lane_count_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lp_cap_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_oui_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_media_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_module_get_info);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_fec_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_link_state_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_poll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_loopback_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_type_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_vlan_strip_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_hash_flags_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_hash_default_support_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_default_support_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc_v2);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_tbl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qflush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qenable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate_es_super_buffer);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_pkt_length_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_hash_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_get_layout);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_layout_check);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_list);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_supported_num_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_is_supported);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_describe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_add);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_reconfigure);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpace);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qflush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qenable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_write);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_post);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_post);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_dma_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso2_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_vlantci_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_checksum_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstart);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_doorbell_offset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_features);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_verify_features);
diff --git a/drivers/common/sfc_efx/sfc_efx.c b/drivers/common/sfc_efx/sfc_efx.c
index 60f20ef262..0cde581485 100644
--- a/drivers/common/sfc_efx/sfc_efx.c
+++ b/drivers/common/sfc_efx/sfc_efx.c
@@ -36,7 +36,7 @@ sfc_efx_kvarg_dev_class_handler(__rte_unused const char *key,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_dev_class_get)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_dev_class_get);
 enum sfc_efx_dev_class
 sfc_efx_dev_class_get(struct rte_devargs *devargs)
 {
@@ -95,7 +95,7 @@ sfc_efx_pci_config_readd(efsys_pci_config_t *configp, uint32_t offset,
 	return (rc < 0 || rc != sizeof(*edp)) ? EIO : 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_family)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_family);
 int
 sfc_efx_family(struct rte_pci_device *pci_dev,
 	       efx_bar_region_t *mem_ebrp, efx_family_t *family)
diff --git a/drivers/common/sfc_efx/sfc_efx_mcdi.c b/drivers/common/sfc_efx/sfc_efx_mcdi.c
index 1fe3515d2d..647108cb45 100644
--- a/drivers/common/sfc_efx/sfc_efx_mcdi.c
+++ b/drivers/common/sfc_efx/sfc_efx_mcdi.c
@@ -265,7 +265,7 @@ sfc_efx_mcdi_ev_proxy_response(void *arg, uint32_t handle, efx_rc_t result)
 	mcdi->proxy_result = result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_init)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_init);
 int
 sfc_efx_mcdi_init(struct sfc_efx_mcdi *mcdi,
 		  uint32_t logtype, const char *log_prefix, efx_nic_t *nic,
@@ -322,7 +322,7 @@ sfc_efx_mcdi_init(struct sfc_efx_mcdi *mcdi,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_fini)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_fini);
 void
 sfc_efx_mcdi_fini(struct sfc_efx_mcdi *mcdi)
 {
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 31ec88c7d6..c0b312ed75 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -875,14 +875,14 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver1_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver1_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
 	return cn10k_cpt_crypto_adapter_enqueue(ws, ev, nb_events, false);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver2_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver2_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -1216,7 +1216,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
@@ -1241,7 +1241,7 @@ cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	return (uintptr_t)cop;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_vector_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_vector_dequeue);
 uintptr_t
 cn10k_cpt_crypto_adapter_vector_dequeue(uintptr_t get_work1)
 {
@@ -1345,7 +1345,7 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 #if defined(RTE_ARCH_ARM64)
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1489,7 +1489,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 	return count + i;
 }
 #else
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1969,7 +1969,7 @@ cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_rx_inject_configure)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_rx_inject_configure);
 int
 cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
 {
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 6ef7c5bb22..40ff647b29 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -776,7 +776,7 @@ ca_lmtst_burst_submit(struct ops_burst *burst)
 	return i;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn20k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -1167,7 +1167,7 @@ cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn20k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
@@ -1192,7 +1192,7 @@ cn20k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	return (uintptr_t)cop;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_vector_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_vector_dequeue);
 uintptr_t
 cn20k_cpt_crypto_adapter_vector_dequeue(uintptr_t get_work1)
 {
@@ -1707,7 +1707,7 @@ cn20k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 }
 
 #if defined(RTE_ARCH_ARM64)
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1851,7 +1851,7 @@ cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 	return count + i;
 }
 #else
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1864,7 +1864,7 @@ cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 }
 #endif
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_rx_inject_configure)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_rx_inject_configure);
 int
 cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
 {
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index c94e9e0f92..82e6121954 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -407,7 +407,7 @@ cn9k_ca_meta_info_extract(struct rte_crypto_op *op,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_enqueue);
 uint16_t
 cn9k_cpt_crypto_adapter_enqueue(uintptr_t base, struct rte_crypto_op *op)
 {
@@ -665,7 +665,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn9k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 261e14b418..9894cb51ce 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -979,7 +979,7 @@ cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qptr_get, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qptr_get, 24.03);
 struct rte_pmd_cnxk_crypto_qptr *
 rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id)
 {
@@ -1042,7 +1042,7 @@ cnxk_crypto_cn9k_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, uint1
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_submit, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_submit, 24.03);
 void
 rte_pmd_cnxk_crypto_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, uint16_t nb_inst)
 {
@@ -1054,7 +1054,7 @@ rte_pmd_cnxk_crypto_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, ui
 	plt_err("Invalid cnxk model");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_flush, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_flush, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_flush(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			       struct rte_pmd_cnxk_crypto_cptr *cptr, bool invalidate)
@@ -1079,7 +1079,7 @@ rte_pmd_cnxk_crypto_cptr_flush(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return roc_cpt_lf_ctx_flush(&qp->lf, cptr, invalidate);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_get, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_get, 24.07);
 struct rte_pmd_cnxk_crypto_cptr *
 rte_pmd_cnxk_crypto_cptr_get(struct rte_pmd_cnxk_crypto_sess *rte_sess)
 {
@@ -1133,7 +1133,7 @@ rte_pmd_cnxk_crypto_cptr_get(struct rte_pmd_cnxk_crypto_sess *rte_sess)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_read, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_read, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_read(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			      struct rte_pmd_cnxk_crypto_cptr *cptr, void *data, uint32_t len)
@@ -1167,7 +1167,7 @@ rte_pmd_cnxk_crypto_cptr_read(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_write, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_write, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_write(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			       struct rte_pmd_cnxk_crypto_cptr *cptr, void *data, uint32_t len)
@@ -1205,7 +1205,7 @@ rte_pmd_cnxk_crypto_cptr_write(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qp_stats_get, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qp_stats_get, 24.07);
 int
 rte_pmd_cnxk_crypto_qp_stats_get(struct rte_pmd_cnxk_crypto_qptr *qptr,
 				 struct rte_pmd_cnxk_crypto_qp_stats *stats)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index ca10d88da7..12ff985e09 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -4161,7 +4161,7 @@ dpaa2_sec_process_ordered_event(struct qbman_swp *swp,
 	ev->event_ptr = crypto_op;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_attach);
 int
 dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
@@ -4242,7 +4242,7 @@ dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_detach);
 int
 dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 			int qp_id)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 65bbd38b17..921652900a 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -3511,7 +3511,7 @@ dpaa_sec_process_atomic_event(void *event,
 	return qman_cb_dqrr_defer;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_attach);
 int
 dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
@@ -3556,7 +3556,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_detach);
 int
 dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
 			int qp_id)
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 88657f49cc..9a11f5e985 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -657,7 +657,7 @@ submit_request_to_sso(struct ssows *ws, uintptr_t req,
 	ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_enqueue);
 uint16_t __rte_hot
 otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op)
 {
@@ -948,7 +948,7 @@ otx_cpt_dequeue_sym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	return otx_cpt_pkt_dequeue(qptr, ops, nb_ops, OP_TYPE_SYM);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_dequeue);
 uintptr_t __rte_hot
 otx_crypto_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 1ca8443431..770ef03650 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -358,7 +358,7 @@ update_max_nb_qp(struct scheduler_ctx *sched_ctx)
 }
 
 /** Attach a device to the scheduler. */
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_attach)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_attach);
 int
 rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t worker_id)
 {
@@ -421,7 +421,7 @@ rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t worker_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_detach)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_detach);
 int
 rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t worker_id)
 {
@@ -480,7 +480,7 @@ rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t worker_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_set);
 int
 rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
 		enum rte_cryptodev_scheduler_mode mode)
@@ -545,7 +545,7 @@ rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_get);
 enum rte_cryptodev_scheduler_mode
 rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
 {
@@ -567,7 +567,7 @@ rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
 	return sched_ctx->mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_set);
 int
 rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
 		uint32_t enable_reorder)
@@ -597,7 +597,7 @@ rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_get);
 int
 rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
 {
@@ -619,7 +619,7 @@ rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
 	return (int)sched_ctx->reordering_enabled;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_load_user_scheduler)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_load_user_scheduler);
 int
 rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
 		struct rte_cryptodev_scheduler *scheduler) {
@@ -692,7 +692,7 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_workers_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_workers_get);
 int
 rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers)
 {
@@ -724,7 +724,7 @@ rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers)
 	return (int)nb_workers;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_set);
 int
 rte_cryptodev_scheduler_option_set(uint8_t scheduler_id,
 		enum rte_cryptodev_schedule_option_type option_type,
@@ -757,7 +757,7 @@ rte_cryptodev_scheduler_option_set(uint8_t scheduler_id,
 	return sched_ctx->ops.option_set(dev, option_type, option);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_get);
 int
 rte_cryptodev_scheduler_option_get(uint8_t scheduler_id,
 		enum rte_cryptodev_schedule_option_type option_type,
diff --git a/drivers/dma/cnxk/cnxk_dmadev_fp.c b/drivers/dma/cnxk/cnxk_dmadev_fp.c
index dea73c5b41..887fc70628 100644
--- a/drivers/dma/cnxk/cnxk_dmadev_fp.c
+++ b/drivers/dma/cnxk/cnxk_dmadev_fp.c
@@ -446,7 +446,7 @@ cnxk_dma_adapter_format_event(uint64_t event)
 	return w0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_dma_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_dma_adapter_enqueue);
 uint16_t
 cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -506,7 +506,7 @@ cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_dual_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_dual_enqueue);
 uint16_t
 cn9k_dma_adapter_dual_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -577,7 +577,7 @@ cn9k_dma_adapter_dual_enqueue(void *ws, struct rte_event ev[], uint16_t nb_event
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_enqueue);
 uint16_t
 cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -645,7 +645,7 @@ cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_dma_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_dma_adapter_dequeue);
 uintptr_t
 cnxk_dma_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/event/cnxk/cnxk_worker.c b/drivers/event/cnxk/cnxk_worker.c
index 5e5beb6aac..008f4277c1 100644
--- a/drivers/event/cnxk/cnxk_worker.c
+++ b/drivers/event/cnxk/cnxk_worker.c
@@ -13,7 +13,7 @@ struct pwords {
 	uint64_t u[5];
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_wait_head, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_wait_head, 23.11);
 void
 rte_pmd_cnxk_eventdev_wait_head(uint8_t dev, uint8_t port)
 {
@@ -30,7 +30,7 @@ rte_pmd_cnxk_eventdev_wait_head(uint8_t dev, uint8_t port)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_is_head, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_is_head, 23.11);
 uint8_t
 rte_pmd_cnxk_eventdev_is_head(uint8_t dev, uint8_t port)
 {
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.c b/drivers/event/dlb2/rte_pmd_dlb2.c
index 80186dd07d..e77a30ff7d 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.c
+++ b/drivers/event/dlb2/rte_pmd_dlb2.c
@@ -10,7 +10,7 @@
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_token_pop_mode, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_token_pop_mode, 20.11);
 int
 rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
@@ -40,7 +40,7 @@ rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_port_param, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_port_param, 25.07);
 int
 rte_pmd_dlb2_set_port_param(uint8_t dev_id,
 			    uint8_t port_id,
diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
index e83e872f40..855c60944e 100644
--- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
@@ -201,7 +201,7 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
 	return hp->size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_mbuf_exchange, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_mbuf_exchange, 23.07);
 int
 rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
 {
@@ -229,14 +229,14 @@ rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_is_hwpool, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_is_hwpool, 23.07);
 int
 rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
 {
 	return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_range_check_disable, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_range_check_disable, 23.07);
 int
 rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
 {
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7dacaa9513..3b80d2b2a7 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -33,11 +33,11 @@
  * is to optimize the PA_to_VA searches until a better mechanism (algo) is
  * available.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_memsegs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_memsegs);
 struct dpaa_memseg_list rte_dpaa_memsegs
 	= TAILQ_HEAD_INITIALIZER(rte_dpaa_memsegs);
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_bpid_info)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_bpid_info);
 struct dpaa_bp_info *rte_dpaa_bpid_info;
 
 RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 118eb76db7..4fea1bfd37 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -34,13 +34,13 @@
 
 #include <dpaax_iova_table.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info);
 struct dpaa2_bp_info *rte_dpaa2_bpid_info;
 static struct dpaa2_bp_list *h_bp_list;
 
 static int16_t s_dpaa2_pool_ops_idx = RTE_MEMPOOL_MAX_OPS_IDX;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mpool_get_ops_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mpool_get_ops_idx);
 int rte_dpaa2_mpool_get_ops_idx(void)
 {
 	return s_dpaa2_pool_ops_idx;
@@ -298,7 +298,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info_init);
 int rte_dpaa2_bpid_info_init(struct rte_mempool *mp)
 {
 	struct dpaa2_bp_info *bp_info = mempool_to_bpinfo(mp);
@@ -322,7 +322,7 @@ int rte_dpaa2_bpid_info_init(struct rte_mempool *mp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_pool_bpid)
+RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_pool_bpid);
 uint16_t
 rte_dpaa2_mbuf_pool_bpid(struct rte_mempool *mp)
 {
@@ -337,7 +337,7 @@ rte_dpaa2_mbuf_pool_bpid(struct rte_mempool *mp)
 	return bp_info->bpid;
 }
 
-RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_from_buf_addr)
+RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_from_buf_addr);
 struct rte_mbuf *
 rte_dpaa2_mbuf_from_buf_addr(struct rte_mempool *mp, void *buf_addr)
 {
@@ -353,7 +353,7 @@ rte_dpaa2_mbuf_from_buf_addr(struct rte_mempool *mp, void *buf_addr)
 			bp_info->meta_data_size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mbuf_alloc_bulk)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mbuf_alloc_bulk);
 int
 rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 			  void **obj_table, unsigned int count)
diff --git a/drivers/net/atlantic/rte_pmd_atlantic.c b/drivers/net/atlantic/rte_pmd_atlantic.c
index b5b6ab7d4b..c306bf02d2 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic.c
+++ b/drivers/net/atlantic/rte_pmd_atlantic.c
@@ -9,7 +9,7 @@
 #include "atl_ethdev.h"
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_enable, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_enable, 19.05);
 int
 rte_pmd_atl_macsec_enable(uint16_t port,
 			  uint8_t encr, uint8_t repl_prot)
@@ -26,7 +26,7 @@ rte_pmd_atl_macsec_enable(uint16_t port,
 	return atl_macsec_enable(dev, encr, repl_prot);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_disable, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_disable, 19.05);
 int
 rte_pmd_atl_macsec_disable(uint16_t port)
 {
@@ -42,7 +42,7 @@ rte_pmd_atl_macsec_disable(uint16_t port)
 	return atl_macsec_disable(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_txsc, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_txsc, 19.05);
 int
 rte_pmd_atl_macsec_config_txsc(uint16_t port, uint8_t *mac)
 {
@@ -58,7 +58,7 @@ rte_pmd_atl_macsec_config_txsc(uint16_t port, uint8_t *mac)
 	return atl_macsec_config_txsc(dev, mac);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_rxsc, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_rxsc, 19.05);
 int
 rte_pmd_atl_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 {
@@ -74,7 +74,7 @@ rte_pmd_atl_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 	return atl_macsec_config_rxsc(dev, mac, pi);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_txsa, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_txsa, 19.05);
 int
 rte_pmd_atl_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -91,7 +91,7 @@ rte_pmd_atl_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 	return atl_macsec_select_txsa(dev, idx, an, pn, key);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_rxsa, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_rxsa, 19.05);
 int
 rte_pmd_atl_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index 4974e390e7..8691c8769d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -40,7 +40,7 @@ int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg)
 		true : false;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_tx_loopback);
 int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on)
 {
 	struct rte_eth_dev *eth_dev;
@@ -82,7 +82,7 @@ rte_pmd_bnxt_set_all_queues_drop_en_cb(struct bnxt_vnic_info *vnic, void *onptr)
 	vnic->bd_stall = !(*on);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_all_queues_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_all_queues_drop_en);
 int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 {
 	struct rte_eth_dev *eth_dev;
@@ -134,7 +134,7 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_addr);
 int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 				struct rte_ether_addr *mac_addr)
 {
@@ -175,7 +175,7 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rate_limit)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rate_limit);
 int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
 				uint32_t tx_rate, uint64_t q_msk)
 {
@@ -233,7 +233,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_anti_spoof);
 int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
@@ -294,7 +294,7 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_anti_spoof);
 int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
@@ -354,7 +354,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq_cb(struct bnxt_vnic_info *vnic, void *onptr)
 	vnic->vlan_strip = *on;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_stripq);
 int
 rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -398,7 +398,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rxmode)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rxmode);
 int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 				uint16_t rx_mask, uint8_t on)
 {
@@ -497,7 +497,7 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_filter);
 int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 				    uint64_t vf_mask, uint8_t vlan_on)
 {
@@ -593,7 +593,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_stats);
 int rte_pmd_bnxt_get_vf_stats(uint16_t port,
 			      uint16_t vf_id,
 			      struct rte_eth_stats *stats)
@@ -631,7 +631,7 @@ int rte_pmd_bnxt_get_vf_stats(uint16_t port,
 				     NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_reset_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_reset_vf_stats);
 int rte_pmd_bnxt_reset_vf_stats(uint16_t port,
 				uint16_t vf_id)
 {
@@ -667,7 +667,7 @@ int rte_pmd_bnxt_reset_vf_stats(uint16_t port,
 	return bnxt_hwrm_func_clr_stats(bp, bp->pf->first_vf_id + vf_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_rx_status)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_rx_status);
 int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id)
 {
 	struct rte_eth_dev *dev;
@@ -702,7 +702,7 @@ int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id)
 	return bnxt_vf_vnic_count(bp, vf_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_tx_drop_count)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_tx_drop_count);
 int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
 				      uint64_t *count)
 {
@@ -739,7 +739,7 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
 					     count);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_mac_addr_add)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_mac_addr_add);
 int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct rte_ether_addr *addr,
 				uint32_t vf_id)
 {
@@ -823,7 +823,7 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct rte_ether_addr *addr,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_insert);
 int
 rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
 		uint16_t vlan_id)
@@ -869,7 +869,7 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_persist_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_persist_stats);
 int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 1677615435..6454805f6e 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1404,7 +1404,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 	rte_pktmbuf_free(pkt);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_conf_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_conf_get);
 int
 rte_eth_bond_8023ad_conf_get(uint16_t port_id,
 		struct rte_eth_bond_8023ad_conf *conf)
@@ -1422,7 +1422,7 @@ rte_eth_bond_8023ad_conf_get(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_set);
 int
 rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id,
 		enum rte_bond_8023ad_agg_selection agg_selection)
@@ -1447,7 +1447,7 @@ rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_get);
 int rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id)
 {
 	struct rte_eth_dev *bond_dev;
@@ -1495,7 +1495,7 @@ bond_8023ad_setup_validate(uint16_t port_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_setup)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_setup);
 int
 rte_eth_bond_8023ad_setup(uint16_t port_id,
 		struct rte_eth_bond_8023ad_conf *conf)
@@ -1517,7 +1517,7 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_member_info)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_member_info);
 int
 rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
 		struct rte_eth_bond_8023ad_member_info *info)
@@ -1579,7 +1579,7 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect);
 int
 rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled)
@@ -1601,7 +1601,7 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib);
 int
 rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled)
@@ -1623,7 +1623,7 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib_get);
 int
 rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 {
@@ -1638,7 +1638,7 @@ rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect_get);
 int
 rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 {
@@ -1653,7 +1653,7 @@ rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 	return ACTOR_STATE(port, COLLECTING);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_slowtx)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_slowtx);
 int
 rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt)
@@ -1715,7 +1715,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			bond_mode_8023ad_ext_periodic_cb, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_enable)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_enable);
 int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port)
 {
@@ -1742,7 +1742,7 @@ rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port)
 	return retval;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_disable)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_disable);
 int
 rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port)
 {
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 9e5df67c18..25ceb82ce7 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -150,7 +150,7 @@ deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_create)
+RTE_EXPORT_SYMBOL(rte_eth_bond_create);
 int
 rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id)
 {
@@ -189,7 +189,7 @@ rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id)
 	return bond_dev->data->port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_free)
+RTE_EXPORT_SYMBOL(rte_eth_bond_free);
 int
 rte_eth_bond_free(const char *name)
 {
@@ -634,7 +634,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
 
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_member_add)
+RTE_EXPORT_SYMBOL(rte_eth_bond_member_add);
 int
 rte_eth_bond_member_add(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -773,7 +773,7 @@ __eth_bond_member_remove_lock_free(uint16_t bonding_port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_member_remove)
+RTE_EXPORT_SYMBOL(rte_eth_bond_member_remove);
 int
 rte_eth_bond_member_remove(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -796,7 +796,7 @@ rte_eth_bond_member_remove(uint16_t bonding_port_id, uint16_t member_port_id)
 	return retval;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mode_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mode_set);
 int
 rte_eth_bond_mode_set(uint16_t bonding_port_id, uint8_t mode)
 {
@@ -814,7 +814,7 @@ rte_eth_bond_mode_set(uint16_t bonding_port_id, uint8_t mode)
 	return bond_ethdev_mode_set(bonding_eth_dev, mode);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mode_get);
 int
 rte_eth_bond_mode_get(uint16_t bonding_port_id)
 {
@@ -828,7 +828,7 @@ rte_eth_bond_mode_get(uint16_t bonding_port_id)
 	return internals->mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_primary_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_primary_set);
 int
 rte_eth_bond_primary_set(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -850,7 +850,7 @@ rte_eth_bond_primary_set(uint16_t bonding_port_id, uint16_t member_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_primary_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_primary_get);
 int
 rte_eth_bond_primary_get(uint16_t bonding_port_id)
 {
@@ -867,7 +867,7 @@ rte_eth_bond_primary_get(uint16_t bonding_port_id)
 	return internals->current_primary_port;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_members_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_members_get);
 int
 rte_eth_bond_members_get(uint16_t bonding_port_id, uint16_t members[],
 			uint16_t len)
@@ -892,7 +892,7 @@ rte_eth_bond_members_get(uint16_t bonding_port_id, uint16_t members[],
 	return internals->member_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_active_members_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_active_members_get);
 int
 rte_eth_bond_active_members_get(uint16_t bonding_port_id, uint16_t members[],
 		uint16_t len)
@@ -916,7 +916,7 @@ rte_eth_bond_active_members_get(uint16_t bonding_port_id, uint16_t members[],
 	return internals->active_member_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_set);
 int
 rte_eth_bond_mac_address_set(uint16_t bonding_port_id,
 		struct rte_ether_addr *mac_addr)
@@ -943,7 +943,7 @@ rte_eth_bond_mac_address_set(uint16_t bonding_port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_reset)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_reset);
 int
 rte_eth_bond_mac_address_reset(uint16_t bonding_port_id)
 {
@@ -985,7 +985,7 @@ rte_eth_bond_mac_address_reset(uint16_t bonding_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_set);
 int
 rte_eth_bond_xmit_policy_set(uint16_t bonding_port_id, uint8_t policy)
 {
@@ -1016,7 +1016,7 @@ rte_eth_bond_xmit_policy_set(uint16_t bonding_port_id, uint8_t policy)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_get);
 int
 rte_eth_bond_xmit_policy_get(uint16_t bonding_port_id)
 {
@@ -1030,7 +1030,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonding_port_id)
 	return internals->balance_xmit_policy;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_link_monitoring_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_link_monitoring_set);
 int
 rte_eth_bond_link_monitoring_set(uint16_t bonding_port_id, uint32_t internal_ms)
 {
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 6c723c9cec..c87a020adb 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -13,7 +13,7 @@ cnxk_ethdev_rx_offload_cb_t cnxk_ethdev_rx_offload_cb;
 
 #define NIX_TM_DFLT_RR_WT 71
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_model_str_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_model_str_get, 23.11);
 const char *
 rte_pmd_cnxk_model_str_get(void)
 {
@@ -89,14 +89,14 @@ nix_inl_cq_sz_clamp_up(struct roc_nix *nix, struct rte_mempool *mp,
 	return nb_desc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ethdev_rx_offload_cb_register)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ethdev_rx_offload_cb_register);
 void
 cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb)
 {
 	cnxk_ethdev_rx_offload_cb = cb;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_nix_inb_mode_set)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_nix_inb_mode_set);
 int
 cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
 {
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index ac6ee79f78..8af31c74f2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -306,21 +306,21 @@ cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_submit, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_submit, 23.11);
 uint16_t
 rte_pmd_cnxk_inl_dev_submit(struct rte_pmd_cnxk_inl_dev_q *qptr, void *inst, uint16_t nb_inst)
 {
 	return cnxk_pmd_ops.inl_dev_submit((struct roc_nix_inl_dev_q *)qptr, inst, nb_inst);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_qptr_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_qptr_get, 23.11);
 struct rte_pmd_cnxk_inl_dev_q *
 rte_pmd_cnxk_inl_dev_qptr_get(void)
 {
 	return roc_nix_inl_dev_qptr_get(0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_cpt_q_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_cpt_q_stats_get, 23.11);
 int
 rte_pmd_cnxk_cpt_q_stats_get(uint16_t portid, enum rte_pmd_cnxk_cpt_q_stats_type type,
 			     struct rte_pmd_cnxk_cpt_q_stats *stats, uint16_t idx)
@@ -332,7 +332,7 @@ rte_pmd_cnxk_cpt_q_stats_get(uint16_t portid, enum rte_pmd_cnxk_cpt_q_stats_type
 					    (struct roc_nix_cpt_lf_stats *)stats, idx);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_session_base_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_session_base_get, 23.11);
 union rte_pmd_cnxk_ipsec_hw_sa *
 rte_pmd_cnxk_hw_session_base_get(uint16_t portid, bool inb)
 {
@@ -348,7 +348,7 @@ rte_pmd_cnxk_hw_session_base_get(uint16_t portid, bool inb)
 	return (union rte_pmd_cnxk_ipsec_hw_sa *)sa_base;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_sa_flush, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_sa_flush, 23.11);
 int
 rte_pmd_cnxk_sa_flush(uint16_t portid, union rte_pmd_cnxk_ipsec_hw_sa *sess, bool inb)
 {
@@ -375,7 +375,7 @@ rte_pmd_cnxk_sa_flush(uint16_t portid, union rte_pmd_cnxk_ipsec_hw_sa *sess, boo
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_read, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_read, 22.07);
 int
 rte_pmd_cnxk_hw_sa_read(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw_sa *data,
 			uint32_t len, bool inb)
@@ -421,7 +421,7 @@ rte_pmd_cnxk_hw_sa_read(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_write, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_write, 22.07);
 int
 rte_pmd_cnxk_hw_sa_write(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw_sa *data,
 			 uint32_t len, bool inb)
@@ -462,7 +462,7 @@ rte_pmd_cnxk_hw_sa_write(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_h
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_ipsec_res, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_ipsec_res, 23.11);
 union rte_pmd_cnxk_cpt_res_s *
 rte_pmd_cnxk_inl_ipsec_res(struct rte_mbuf *mbuf)
 {
@@ -481,7 +481,7 @@ rte_pmd_cnxk_inl_ipsec_res(struct rte_mbuf *mbuf)
 	return (void *)(wqe + 64 + desc_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_inline_inb_cfg_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_inline_inb_cfg_set, 23.11);
 void
 rte_pmd_cnxk_hw_inline_inb_cfg_set(uint16_t portid, struct rte_pmd_cnxk_ipsec_inb_cfg *cfg)
 {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 00b57cb715..32e34eb272 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1295,7 +1295,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_attach);
 int
 dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
@@ -1361,7 +1361,7 @@ dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_detach);
 int
 dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id)
@@ -1803,7 +1803,7 @@ is_dpaa_supported(struct rte_eth_dev *dev)
 	return is_device_supported(dev, &rte_dpaa_pmd);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa_set_tx_loopback);
 int
 rte_pmd_dpaa_set_tx_loopback(uint16_t port, uint8_t on)
 {
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index b1d473429a..6cb811597c 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -29,7 +29,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		uint64_t req_dist_set,
 		struct dpkg_profile_cfg *kg_cfg);
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_set_custom_hash)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_set_custom_hash);
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	uint16_t offset, uint8_t size)
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
index 8c685120bd..f8ca9a3874 100644
--- a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -144,7 +144,7 @@ static void hash_init(void)
 		}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_tlu_hash, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_tlu_hash, 21.11);
 uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
 {
 	static int init;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 998d1e7c53..3e5e8fe407 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2240,7 +2240,7 @@ dpaa2_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_attach);
 int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		struct dpaa2_dpcon_dev *dpcon,
@@ -2327,7 +2327,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_detach);
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id)
 {
@@ -2413,7 +2413,7 @@ dpaa2_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *ops)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_thread_init, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_thread_init, 21.08);
 void
 rte_pmd_dpaa2_thread_init(void)
 {
@@ -2853,7 +2853,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_dev_is_dpaa2, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_dev_is_dpaa2, 24.11);
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
@@ -2869,7 +2869,7 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_ep_name, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_ep_name, 24.11);
 const char *
 rte_pmd_dpaa2_ep_name(uint32_t eth_id)
 {
@@ -2895,7 +2895,7 @@ rte_pmd_dpaa2_ep_name(uint32_t eth_id)
 }
 
 #if defined(RTE_LIBRTE_IEEE1588)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_one_step_ts, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_one_step_ts, 24.11);
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
 {
@@ -2924,7 +2924,7 @@ rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
 	return priv->ptp_correction_offset;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_set_one_step_ts, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_set_one_step_ts, 24.11);
 int
 rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
 {
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 1908d1e865..95bd99fe80 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -55,7 +55,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_mux_flow_create)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_mux_flow_create);
 int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	struct rte_flow_item pattern[],
@@ -366,7 +366,7 @@ rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_rx_frame_len, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_rx_frame_len, 21.05);
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -394,7 +394,7 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 }
 
 /* dump the status of the dpaa2_mux counters on the console */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_dump_counter, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_dump_counter, 24.11);
 void
 rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 {
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 67d065bb7c..3c76df4c6f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1581,7 +1581,7 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
 	*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dev_tx_multi_txq_ordered)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dev_tx_multi_txq_ordered);
 uint16_t
 dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..19b29b8576 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -14,7 +14,7 @@
 #include "i40e_rxtx.h"
 #include "rte_pmd_i40e.h"
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ping_vfs)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ping_vfs);
 int
 rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf)
 {
@@ -40,7 +40,7 @@ rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_anti_spoof);
 int
 rte_pmd_i40e_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -145,7 +145,7 @@ i40e_add_rm_all_vlan_filter(struct i40e_vsi *vsi, uint8_t add)
 	return I40E_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_anti_spoof);
 int
 rte_pmd_i40e_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -406,7 +406,7 @@ i40e_vsi_set_tx_loopback(struct i40e_vsi *vsi, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tx_loopback);
 int
 rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on)
 {
@@ -450,7 +450,7 @@ rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_unicast_promisc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_unicast_promisc);
 int
 rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -492,7 +492,7 @@ rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_multicast_promisc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_multicast_promisc);
 int
 rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -534,7 +534,7 @@ rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_addr);
 int
 rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 			     struct rte_ether_addr *mac_addr)
@@ -625,7 +625,7 @@ rte_pmd_i40e_remove_vf_mac_addr(uint16_t port, uint16_t vf_id,
 }
 
 /* Set vlan strip on/off for specific VF from host */
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_stripq);
 int
 rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -662,7 +662,7 @@ rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_insert);
 int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id,
 				    uint16_t vlan_id)
 {
@@ -728,7 +728,7 @@ int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_broadcast)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_broadcast);
 int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id,
 				  uint8_t on)
 {
@@ -795,7 +795,7 @@ int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_tag)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_tag);
 int rte_pmd_i40e_set_vf_vlan_tag(uint16_t port, uint16_t vf_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
@@ -890,7 +890,7 @@ i40e_vlan_filter_count(struct i40e_vsi *vsi)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_filter);
 int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id,
 				    uint64_t vf_mask, uint8_t on)
 {
@@ -973,7 +973,7 @@ int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_vf_stats);
 int
 rte_pmd_i40e_get_vf_stats(uint16_t port,
 			  uint16_t vf_id,
@@ -1019,7 +1019,7 @@ rte_pmd_i40e_get_vf_stats(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_reset_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_reset_vf_stats);
 int
 rte_pmd_i40e_reset_vf_stats(uint16_t port,
 			    uint16_t vf_id)
@@ -1054,7 +1054,7 @@ rte_pmd_i40e_reset_vf_stats(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_max_bw)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_max_bw);
 int
 rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw)
 {
@@ -1144,7 +1144,7 @@ rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_bw_alloc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_bw_alloc);
 int
 rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id,
 				uint8_t tc_num, uint8_t *bw_weight)
@@ -1259,7 +1259,7 @@ rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_max_bw)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_max_bw);
 int
 rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id,
 			      uint8_t tc_no, uint32_t bw)
@@ -1378,7 +1378,7 @@ rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tc_strict_prio)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tc_strict_prio);
 int
 rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map)
 {
@@ -1624,7 +1624,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_process_ddp_package)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_process_ddp_package);
 int
 rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
 				 uint32_t size,
@@ -1809,7 +1809,7 @@ i40e_get_tlv_section_size(struct i40e_profile_section_header *sec)
 	return nb_tlv;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_info)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_info);
 int rte_pmd_i40e_get_ddp_info(uint8_t *pkg_buff, uint32_t pkg_size,
 	uint8_t *info_buff, uint32_t info_size,
 	enum rte_pmd_i40e_package_info type)
@@ -2118,7 +2118,7 @@ int rte_pmd_i40e_get_ddp_info(uint8_t *pkg_buff, uint32_t pkg_size,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_list)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_list);
 int
 rte_pmd_i40e_get_ddp_list(uint16_t port, uint8_t *buff, uint32_t size)
 {
@@ -2250,7 +2250,7 @@ static int check_invalid_ptype_mapping(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_update)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_update);
 int
 rte_pmd_i40e_ptype_mapping_update(
 			uint16_t port,
@@ -2289,7 +2289,7 @@ rte_pmd_i40e_ptype_mapping_update(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_reset);
 int rte_pmd_i40e_ptype_mapping_reset(uint16_t port)
 {
 	struct rte_eth_dev *dev;
@@ -2306,7 +2306,7 @@ int rte_pmd_i40e_ptype_mapping_reset(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_get);
 int rte_pmd_i40e_ptype_mapping_get(
 			uint16_t port,
 			struct rte_pmd_i40e_ptype_mapping *mapping_items,
@@ -2342,7 +2342,7 @@ int rte_pmd_i40e_ptype_mapping_get(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_replace)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_replace);
 int rte_pmd_i40e_ptype_mapping_replace(uint16_t port,
 				       uint32_t target,
 				       uint8_t mask,
@@ -2381,7 +2381,7 @@ int rte_pmd_i40e_ptype_mapping_replace(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_add_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_add_vf_mac_addr);
 int
 rte_pmd_i40e_add_vf_mac_addr(uint16_t port, uint16_t vf_id,
 			     struct rte_ether_addr *mac_addr)
@@ -2429,7 +2429,7 @@ rte_pmd_i40e_add_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_reset);
 int rte_pmd_i40e_flow_type_mapping_reset(uint16_t port)
 {
 	struct rte_eth_dev *dev;
@@ -2446,7 +2446,7 @@ int rte_pmd_i40e_flow_type_mapping_reset(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_get);
 int rte_pmd_i40e_flow_type_mapping_get(
 			uint16_t port,
 			struct rte_pmd_i40e_flow_type_mapping *mapping_items)
@@ -2472,7 +2472,7 @@ int rte_pmd_i40e_flow_type_mapping_get(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_update)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_update);
 int
 rte_pmd_i40e_flow_type_mapping_update(
 			uint16_t port,
@@ -2526,7 +2526,7 @@ rte_pmd_i40e_flow_type_mapping_update(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_query_vfid_by_mac)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_query_vfid_by_mac);
 int
 rte_pmd_i40e_query_vfid_by_mac(uint16_t port,
 			const struct rte_ether_addr *vf_mac)
@@ -2997,7 +2997,7 @@ i40e_queue_region_get_all_info(struct i40e_pf *pf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_rss_queue_region_conf)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_rss_queue_region_conf);
 int rte_pmd_i40e_rss_queue_region_conf(uint16_t port_id,
 		enum rte_pmd_i40e_queue_region_op op_type, void *arg)
 {
@@ -3063,7 +3063,7 @@ int rte_pmd_i40e_rss_queue_region_conf(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_add_del_packet_template)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_add_del_packet_template);
 int rte_pmd_i40e_flow_add_del_packet_template(
 			uint16_t port,
 			const struct rte_pmd_i40e_pkt_template_conf *conf,
@@ -3097,7 +3097,7 @@ int rte_pmd_i40e_flow_add_del_packet_template(
 	return i40e_flow_add_del_fdir_filter(dev, &filter_conf, add);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_get);
 int
 rte_pmd_i40e_inset_get(uint16_t port, uint8_t pctype,
 		       struct rte_pmd_i40e_inset *inset,
@@ -3170,7 +3170,7 @@ rte_pmd_i40e_inset_get(uint16_t port, uint8_t pctype,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_set)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_set);
 int
 rte_pmd_i40e_inset_set(uint16_t port, uint8_t pctype,
 		       struct rte_pmd_i40e_inset *inset,
@@ -3245,7 +3245,7 @@ rte_pmd_i40e_inset_set(uint16_t port, uint8_t pctype,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_info, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_info, 20.08);
 int
 rte_pmd_i40e_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 {
@@ -3262,7 +3262,7 @@ rte_pmd_i40e_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_stats, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_stats, 20.08);
 int
 rte_pmd_i40e_get_fdir_stats(uint16_t port, struct rte_eth_fdir_stats *fdir_stat)
 {
@@ -3279,7 +3279,7 @@ rte_pmd_i40e_get_fdir_stats(uint16_t port, struct rte_eth_fdir_stats *fdir_stat)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_gre_key_len, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_gre_key_len, 20.08);
 int
 rte_pmd_i40e_set_gre_key_len(uint16_t port, uint8_t len)
 {
@@ -3299,7 +3299,7 @@ rte_pmd_i40e_set_gre_key_len(uint16_t port, uint8_t len)
 	return i40e_dev_set_gre_key_len(hw, len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_switch_dev, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_switch_dev, 19.11);
 int
 rte_pmd_i40e_set_switch_dev(uint16_t port_id, struct rte_eth_dev *switch_dev)
 {
@@ -3321,7 +3321,7 @@ rte_pmd_i40e_set_switch_dev(uint16_t port_id, struct rte_eth_dev *switch_dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_pf_src_prune, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_pf_src_prune, 23.07);
 int
 rte_pmd_i40e_set_pf_src_prune(uint16_t port, uint8_t on)
 {
diff --git a/drivers/net/intel/iavf/iavf_base_symbols.c b/drivers/net/intel/iavf/iavf_base_symbols.c
index 2111b14aa8..706aa36a92 100644
--- a/drivers/net/intel/iavf/iavf_base_symbols.c
+++ b/drivers/net/intel/iavf/iavf_base_symbols.c
@@ -5,10 +5,10 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_init_adminq)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_shutdown_adminq)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_clean_arq_element)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_set_mac_type)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_aq_send_msg_to_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_parse_hw_config)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_init_adminq);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_shutdown_adminq);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_clean_arq_element);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_set_mac_type);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_aq_send_msg_to_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_parse_hw_config);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_reset);
diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index 7033a74610..ff298e164b 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -75,23 +75,23 @@ struct offload_info {
 };
 
 /* Offset of mbuf dynamic field for protocol extraction's metadata */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, 20.11);
 int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
 
 /* Mask of mbuf dynamic flags for protocol extraction's type */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_vlan_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_vlan_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_tcp_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_tcp_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask, 21.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
 
 uint8_t
diff --git a/drivers/net/intel/ice/ice_diagnose.c b/drivers/net/intel/ice/ice_diagnose.c
index 298d1eda5c..db89b793ed 100644
--- a/drivers/net/intel/ice/ice_diagnose.c
+++ b/drivers/net/intel/ice/ice_diagnose.c
@@ -410,7 +410,7 @@ ice_dump_pkg(struct rte_eth_dev *dev, uint8_t **buff, uint32_t *size)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_package, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_package, 19.11);
 int rte_pmd_ice_dump_package(uint16_t port, uint8_t **buff, uint32_t *size)
 {
 	struct rte_eth_dev *dev;
@@ -499,7 +499,7 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, uint32_t *size)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_switch, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_switch, 22.11);
 int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size)
 {
 	struct rte_eth_dev *dev;
@@ -801,7 +801,7 @@ query_node_recursive(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_txsched, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_txsched, 24.03);
 int
 rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..cdf804e119 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -382,7 +382,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_init);
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -443,7 +443,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_deinit);
 int
 idpf_adapter_deinit(struct idpf_adapter *adapter)
 {
@@ -456,7 +456,7 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_init);
 int
 idpf_vport_init(struct idpf_vport *vport,
 		struct virtchnl2_create_vport *create_vport_info,
@@ -570,7 +570,7 @@ idpf_vport_init(struct idpf_vport *vport,
 err_create_vport:
 	return ret;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_deinit);
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
@@ -588,7 +588,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_rss_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_rss_config);
 int
 idpf_vport_rss_config(struct idpf_vport *vport)
 {
@@ -615,7 +615,7 @@ idpf_vport_rss_config(struct idpf_vport *vport)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config);
 int
 idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
@@ -691,7 +691,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config_by_qids)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config_by_qids);
 int
 idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues)
 {
@@ -767,7 +767,7 @@ idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_unmap_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_unmap_config);
 int
 idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
@@ -779,7 +779,7 @@ idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_info_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_info_init);
 int
 idpf_vport_info_init(struct idpf_vport *vport,
 			    struct virtchnl2_create_vport *vport_info)
@@ -816,7 +816,7 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_stats_update)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_stats_update);
 void
 idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..4e6fa28ac2 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -11,7 +11,7 @@
 int idpf_timestamp_dynfield_offset = -1;
 uint64_t idpf_timestamp_dynflag;
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_thresh_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_thresh_check);
 int
 idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 {
@@ -27,7 +27,7 @@ idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_thresh_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_thresh_check);
 int
 idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
 			uint16_t tx_free_thresh)
@@ -76,7 +76,7 @@ idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rxq_mbufs_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rxq_mbufs_release);
 void
 idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 {
@@ -93,7 +93,7 @@ idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_descq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_descq_reset);
 void
 idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 {
@@ -113,7 +113,7 @@ idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 	rxq->expected_gen_id = 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_bufq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_bufq_reset);
 void
 idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 {
@@ -149,7 +149,7 @@ idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 	rxq->bufq2 = NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_queue_reset);
 void
 idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
@@ -158,7 +158,7 @@ idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 	idpf_qc_split_rx_bufq_reset(rxq->bufq2);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rx_queue_reset);
 void
 idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
@@ -190,7 +190,7 @@ idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 	rxq->rxrearm_nb = 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_descq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_descq_reset);
 void
 idpf_qc_split_tx_descq_reset(struct ci_tx_queue *txq)
 {
@@ -229,7 +229,7 @@ idpf_qc_split_tx_descq_reset(struct ci_tx_queue *txq)
 	txq->tx_next_rs = txq->tx_rs_thresh - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_complq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_complq_reset);
 void
 idpf_qc_split_tx_complq_reset(struct ci_tx_queue *cq)
 {
@@ -248,7 +248,7 @@ idpf_qc_split_tx_complq_reset(struct ci_tx_queue *cq)
 	cq->expected_gen_id = 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_tx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_tx_queue_reset);
 void
 idpf_qc_single_tx_queue_reset(struct ci_tx_queue *txq)
 {
@@ -286,7 +286,7 @@ idpf_qc_single_tx_queue_reset(struct ci_tx_queue *txq)
 	txq->tx_next_rs = txq->tx_rs_thresh - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_queue_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_queue_release);
 void
 idpf_qc_rx_queue_release(void *rxq)
 {
@@ -317,7 +317,7 @@ idpf_qc_rx_queue_release(void *rxq)
 	rte_free(q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_queue_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_queue_release);
 void
 idpf_qc_tx_queue_release(void *txq)
 {
@@ -337,7 +337,7 @@ idpf_qc_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_ts_mbuf_register)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_ts_mbuf_register);
 int
 idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 {
@@ -355,7 +355,7 @@ idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rxq_mbufs_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rxq_mbufs_alloc);
 int
 idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
@@ -391,7 +391,7 @@ idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rxq_mbufs_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rxq_mbufs_alloc);
 int
 idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
@@ -615,7 +615,7 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 	rx_bufq->rx_tail = next_avail;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts);
 uint16_t
 idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			 uint16_t nb_pkts)
@@ -848,7 +848,7 @@ idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
 				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts);
 uint16_t
 idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 uint16_t nb_pkts)
@@ -1040,7 +1040,7 @@ idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts);
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1159,7 +1159,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_scatter_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_scatter_pkts);
 uint16_t
 idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts)
@@ -1337,7 +1337,7 @@ idpf_xmit_cleanup(struct ci_tx_queue *txq)
 }
 
 /* TX function */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts);
 uint16_t
 idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			  uint16_t nb_pkts)
@@ -1505,7 +1505,7 @@ idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 }
 
 /* TX prep functions */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_prep_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_prep_pkts);
 uint16_t
 idpf_dp_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		  uint16_t nb_pkts)
@@ -1607,7 +1607,7 @@ idpf_rxq_vec_setup_default(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_singleq_rx_vec_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_singleq_rx_vec_setup);
 int __rte_cold
 idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
@@ -1615,7 +1615,7 @@ idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 	return idpf_rxq_vec_setup_default(rxq);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_splitq_rx_vec_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_splitq_rx_vec_setup);
 int __rte_cold
 idpf_qc_splitq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
index 1babc5114b..aedee7b046 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
@@ -475,7 +475,7 @@ _idpf_singleq_recv_raw_pkts_vec_avx2(struct idpf_rx_queue *rxq, struct rte_mbuf
  * Notice:
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx2)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx2);
 uint16_t
 idpf_dp_singleq_recv_pkts_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
@@ -618,7 +618,7 @@ idpf_singleq_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts
 	return nb_pkts;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx2)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx2);
 uint16_t
 idpf_dp_singleq_xmit_pkts_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c b/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
index 06e73c8725..c9e7b39de2 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
@@ -532,7 +532,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
  * Notice:
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx512);
 uint16_t
 idpf_dp_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				 uint16_t nb_pkts)
@@ -990,7 +990,7 @@ _idpf_splitq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 }
 
 /* only bufq2 can receive pkts */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts_avx512);
 uint16_t
 idpf_dp_splitq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 			     uint16_t nb_pkts)
@@ -1159,7 +1159,7 @@ idpf_singleq_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx512);
 uint16_t
 idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				 uint16_t nb_pkts)
@@ -1361,7 +1361,7 @@ idpf_splitq_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts_avx512);
 uint16_t
 idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts)
@@ -1369,7 +1369,7 @@ idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return idpf_splitq_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_vec_avx512_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_vec_avx512_setup);
 int __rte_cold
 idpf_qc_tx_vec_avx512_setup(struct ci_tx_queue *txq)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..871893a9ed 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -160,7 +160,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_one_msg_read)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_one_msg_read);
 int
 idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
 		     uint8_t *buf)
@@ -185,7 +185,7 @@ idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_le
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_cmd_execute)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_cmd_execute);
 int
 idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 {
@@ -235,7 +235,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_api_version_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_api_version_check);
 int
 idpf_vc_api_version_check(struct idpf_adapter *adapter)
 {
@@ -276,7 +276,7 @@ idpf_vc_api_version_check(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_caps_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_caps_get);
 int
 idpf_vc_caps_get(struct idpf_adapter *adapter)
 {
@@ -301,7 +301,7 @@ idpf_vc_caps_get(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_create)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_create);
 int
 idpf_vc_vport_create(struct idpf_vport *vport,
 		     struct virtchnl2_create_vport *create_vport_info)
@@ -338,7 +338,7 @@ idpf_vc_vport_create(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_destroy);
 int
 idpf_vc_vport_destroy(struct idpf_vport *vport)
 {
@@ -363,7 +363,7 @@ idpf_vc_vport_destroy(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_add)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_add);
 int
 idpf_vc_queue_grps_add(struct idpf_vport *vport,
 		       struct virtchnl2_add_queue_groups *p2p_queue_grps_info,
@@ -396,7 +396,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_del)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_del);
 int idpf_vc_queue_grps_del(struct idpf_vport *vport,
 			  uint16_t num_q_grps,
 			  struct virtchnl2_queue_group_id *qg_ids)
@@ -431,7 +431,7 @@ int idpf_vc_queue_grps_del(struct idpf_vport *vport,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_set);
 int
 idpf_vc_rss_key_set(struct idpf_vport *vport)
 {
@@ -466,7 +466,7 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_get);
 int idpf_vc_rss_key_get(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
@@ -509,7 +509,7 @@ int idpf_vc_rss_key_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_set);
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -544,7 +544,7 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_get);
 int
 idpf_vc_rss_lut_get(struct idpf_vport *vport)
 {
@@ -587,7 +587,7 @@ idpf_vc_rss_lut_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_get);
 int
 idpf_vc_rss_hash_get(struct idpf_vport *vport)
 {
@@ -620,7 +620,7 @@ idpf_vc_rss_hash_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_set);
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
@@ -647,7 +647,7 @@ idpf_vc_rss_hash_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_irq_map_unmap_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_irq_map_unmap_config);
 int
 idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
@@ -689,7 +689,7 @@ idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_alloc);
 int
 idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 {
@@ -720,7 +720,7 @@ idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_dealloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_dealloc);
 int
 idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 {
@@ -748,7 +748,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue);
 int
 idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 			  uint32_t type, bool on)
@@ -787,7 +787,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch);
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 		     bool rx, bool on, uint32_t type)
@@ -828,7 +828,7 @@ idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 }
 
 #define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queues_ena_dis)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queues_ena_dis);
 int
 idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 {
@@ -897,7 +897,7 @@ idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_ena_dis)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_ena_dis);
 int
 idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 {
@@ -923,7 +923,7 @@ idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ptype_info_query)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ptype_info_query);
 int
 idpf_vc_ptype_info_query(struct idpf_adapter *adapter,
 			 struct virtchnl2_get_ptype_info *req_ptype_info,
@@ -946,7 +946,7 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_stats_query)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_stats_query);
 int
 idpf_vc_stats_query(struct idpf_vport *vport,
 		struct virtchnl2_vport_stats **pstats)
@@ -974,7 +974,7 @@ idpf_vc_stats_query(struct idpf_vport *vport,
 }
 
 #define IDPF_RX_BUF_STRIDE		64
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config);
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
@@ -1064,7 +1064,7 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config_by_info)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config_by_info);
 int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info,
 			       uint16_t num_qs)
 {
@@ -1100,7 +1100,7 @@ int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_in
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config);
 int
 idpf_vc_txq_config(struct idpf_vport *vport, struct ci_tx_queue *txq)
 {
@@ -1172,7 +1172,7 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct ci_tx_queue *txq)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config_by_info)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config_by_info);
 int
 idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info,
 		       uint16_t num_qs)
@@ -1208,7 +1208,7 @@ idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_recv)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_recv);
 int
 idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		  struct idpf_ctlq_msg *q_msg)
@@ -1216,7 +1216,7 @@ idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_post_rx_buffs)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_post_rx_buffs);
 int
 idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			   u16 *buff_count, struct idpf_dma_mem **buffs)
diff --git a/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c b/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
index 2ee87c94c2..a0b4a2b6a9 100644
--- a/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
@@ -35,7 +35,7 @@ static const struct rte_afu_uuid afu_uuid_ipn3ke_map[] = {
 	{ 0, 0 /* sentinel */ },
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(ipn3ke_bridge_func)
+RTE_EXPORT_INTERNAL_SYMBOL(ipn3ke_bridge_func);
 struct ipn3ke_pub_func ipn3ke_bridge_func;
 
 static int
diff --git a/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c b/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
index c2300a8955..c4ffb3d100 100644
--- a/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
@@ -10,7 +10,7 @@
 #include <eal_export.h>
 #include "rte_pmd_ixgbe.h"
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_addr);
 int
 rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 			      struct rte_ether_addr *mac_addr)
@@ -47,7 +47,7 @@ rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_ping_vf)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_ping_vf);
 int
 rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf)
 {
@@ -80,7 +80,7 @@ rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_anti_spoof);
 int
 rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -111,7 +111,7 @@ rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_anti_spoof);
 int
 rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -141,7 +141,7 @@ rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_insert);
 int
 rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id)
 {
@@ -178,7 +178,7 @@ rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tx_loopback);
 int
 rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on)
 {
@@ -209,7 +209,7 @@ rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_all_queues_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_all_queues_drop_en);
 int
 rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on)
 {
@@ -240,7 +240,7 @@ rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_split_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_split_drop_en);
 int
 rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -276,7 +276,7 @@ rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_stripq);
 int
 rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -324,7 +324,7 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rxmode)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rxmode);
 int
 rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf,
 			    uint16_t rx_mask, uint8_t on)
@@ -372,7 +372,7 @@ rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rx)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rx);
 int
 rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -423,7 +423,7 @@ rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_tx)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_tx);
 int
 rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -474,7 +474,7 @@ rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_filter);
 int
 rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 				 uint64_t vf_mask, uint8_t vlan_on)
@@ -510,7 +510,7 @@ rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rate_limit)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rate_limit);
 int
 rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
 				uint32_t tx_rate, uint64_t q_msk)
@@ -527,7 +527,7 @@ rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
 	return ixgbe_set_vf_rate_limit(dev, vf, tx_rate, q_msk);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_enable)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_enable);
 int
 rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp)
 {
@@ -552,7 +552,7 @@ rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_disable)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_disable);
 int
 rte_pmd_ixgbe_macsec_disable(uint16_t port)
 {
@@ -572,7 +572,7 @@ rte_pmd_ixgbe_macsec_disable(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_txsc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_txsc);
 int
 rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac)
 {
@@ -598,7 +598,7 @@ rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_rxsc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_rxsc);
 int
 rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 {
@@ -625,7 +625,7 @@ rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_txsa)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_txsa);
 int
 rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -682,7 +682,7 @@ rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_rxsa)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_rxsa);
 int
 rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -726,7 +726,7 @@ rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tc_bw_alloc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tc_bw_alloc);
 int
 rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 			      uint8_t tc_num,
@@ -800,7 +800,7 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_upd_fctrl_sbp)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_upd_fctrl_sbp);
 int
 rte_pmd_ixgbe_upd_fctrl_sbp(uint16_t port, int enable)
 {
@@ -830,7 +830,7 @@ rte_pmd_ixgbe_upd_fctrl_sbp(uint16_t port, int enable)
 }
 
 #ifdef RTE_LIBRTE_IXGBE_BYPASS
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_init)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_init);
 int
 rte_pmd_ixgbe_bypass_init(uint16_t port_id)
 {
@@ -846,7 +846,7 @@ rte_pmd_ixgbe_bypass_init(uint16_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_show);
 int
 rte_pmd_ixgbe_bypass_state_show(uint16_t port_id, uint32_t *state)
 {
@@ -861,7 +861,7 @@ rte_pmd_ixgbe_bypass_state_show(uint16_t port_id, uint32_t *state)
 	return ixgbe_bypass_state_show(dev, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_set)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_set);
 int
 rte_pmd_ixgbe_bypass_state_set(uint16_t port_id, uint32_t *new_state)
 {
@@ -876,7 +876,7 @@ rte_pmd_ixgbe_bypass_state_set(uint16_t port_id, uint32_t *new_state)
 	return ixgbe_bypass_state_store(dev, new_state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_show);
 int
 rte_pmd_ixgbe_bypass_event_show(uint16_t port_id,
 				uint32_t event,
@@ -893,7 +893,7 @@ rte_pmd_ixgbe_bypass_event_show(uint16_t port_id,
 	return ixgbe_bypass_event_show(dev, event, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_store)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_store);
 int
 rte_pmd_ixgbe_bypass_event_store(uint16_t port_id,
 				 uint32_t event,
@@ -910,7 +910,7 @@ rte_pmd_ixgbe_bypass_event_store(uint16_t port_id,
 	return ixgbe_bypass_event_store(dev, event, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_store)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_store);
 int
 rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port_id, uint32_t timeout)
 {
@@ -925,7 +925,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port_id, uint32_t timeout)
 	return ixgbe_bypass_wd_timeout_store(dev, timeout);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_ver_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_ver_show);
 int
 rte_pmd_ixgbe_bypass_ver_show(uint16_t port_id, uint32_t *ver)
 {
@@ -940,7 +940,7 @@ rte_pmd_ixgbe_bypass_ver_show(uint16_t port_id, uint32_t *ver)
 	return ixgbe_bypass_ver_show(dev, ver);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_show);
 int
 rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port_id, uint32_t *wd_timeout)
 {
@@ -955,7 +955,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port_id, uint32_t *wd_timeout)
 	return ixgbe_bypass_wd_timeout_show(dev, wd_timeout);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_reset);
 int
 rte_pmd_ixgbe_bypass_wd_reset(uint16_t port_id)
 {
@@ -1024,7 +1024,7 @@ STATIC void rte_pmd_ixgbe_release_swfw(struct ixgbe_hw *hw, u32 mask)
 	ixgbe_release_swfw_semaphore(hw, mask);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_lock)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_lock);
 int
 rte_pmd_ixgbe_mdio_lock(uint16_t port)
 {
@@ -1052,7 +1052,7 @@ rte_pmd_ixgbe_mdio_lock(uint16_t port)
 	return IXGBE_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlock)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlock);
 int
 rte_pmd_ixgbe_mdio_unlock(uint16_t port)
 {
@@ -1080,7 +1080,7 @@ rte_pmd_ixgbe_mdio_unlock(uint16_t port)
 	return IXGBE_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_read)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_read);
 int
 rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
 				 uint32_t dev_type, uint16_t *phy_data)
@@ -1128,7 +1128,7 @@ rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_write)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_write);
 int
 rte_pmd_ixgbe_mdio_unlocked_write(uint16_t port, uint32_t reg_addr,
 				  uint32_t dev_type, uint16_t phy_data)
@@ -1176,7 +1176,7 @@ rte_pmd_ixgbe_mdio_unlocked_write(uint16_t port, uint32_t reg_addr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_info, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_info, 20.08);
 int
 rte_pmd_ixgbe_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 {
@@ -1193,7 +1193,7 @@ rte_pmd_ixgbe_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_stats, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_stats, 20.08);
 int
 rte_pmd_ixgbe_get_fdir_stats(uint16_t port,
 			     struct rte_eth_fdir_stats *fdir_stats)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1321be779b..d79bc3d745 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -3379,7 +3379,7 @@ mlx5_set_metadata_mask(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "metadata reg_c0 mask %08X", sh->dv_regc0_mask);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_get_dyn_flag_names, 20.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_get_dyn_flag_names, 20.02);
 int
 rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n)
 {
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8db372123c..ce4d2246a6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7880,7 +7880,7 @@ mlx5_flow_cache_flow_toggle(struct rte_eth_dev *dev, bool orig_prio)
  * @return
  *   Negative value on error, positive on success.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_flow_engine_set_mode, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_flow_engine_set_mode, 23.03);
 int
 rte_pmd_mlx5_flow_engine_set_mode(enum rte_pmd_mlx5_flow_engine_mode mode, uint32_t flags)
 {
@@ -10986,7 +10986,7 @@ mlx5_action_handle_detach(struct rte_eth_dev *dev)
 	(MLX5DV_DR_DOMAIN_SYNC_FLAGS_SW | MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW)
 #endif
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_sync_flow, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_sync_flow, 20.11);
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -12263,7 +12263,7 @@ mlx5_flow_discover_ipv6_tc_support(struct rte_eth_dev *dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_create_geneve_tlv_parser, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_create_geneve_tlv_parser, 24.03);
 void *
 rte_pmd_mlx5_create_geneve_tlv_parser(uint16_t port_id,
 				      const struct rte_pmd_mlx5_geneve_tlv tlv_list[],
@@ -12281,7 +12281,7 @@ rte_pmd_mlx5_create_geneve_tlv_parser(uint16_t port_id,
 #endif
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_destroy_geneve_tlv_parser, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_destroy_geneve_tlv_parser, 24.03);
 int
 rte_pmd_mlx5_destroy_geneve_tlv_parser(void *handle)
 {
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 5e8c312d00..cc26c785c0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -1832,7 +1832,7 @@ mlxreg_host_shaper_config(struct rte_eth_dev *dev,
 #endif
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_host_shaper_config, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_host_shaper_config, 22.07);
 int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate,
 				    uint32_t flags)
 {
@@ -1874,7 +1874,7 @@ int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate,
  * @return
  *   0 for Success, non-zero value depending on failure type
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_rxq_dump_contexts, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_rxq_dump_contexts, 24.07);
 int rte_pmd_mlx5_rxq_dump_contexts(uint16_t port_id, uint16_t queue_id, const char *filename)
 {
 	struct rte_eth_dev *dev;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 77c5848c37..9bfef96b5f 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -3311,7 +3311,7 @@ mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 	return &priv->ext_rxqs[dpdk_idx - RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_map, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_map, 22.03);
 int
 rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 				      uint32_t hw_idx)
@@ -3345,7 +3345,7 @@ rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_unmap, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_unmap, 22.03);
 int
 rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
 {
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fe9da7f8c1..41d427d8c4 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -777,7 +777,7 @@ mlx5_tx_burst_mode_get(struct rte_eth_dev *dev,
  *   0 for success, non-zero value depending on failure.
  *
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_txq_dump_contexts, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_txq_dump_contexts, 24.07);
 int rte_pmd_mlx5_txq_dump_contexts(uint16_t port_id, uint16_t queue_id, const char *filename)
 {
 	struct rte_eth_dev *dev;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index b090d8274d..565dcf804d 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1415,7 +1415,7 @@ mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq)
 	return txq->is_hairpin ? txq->obj->sq->id : txq->obj->sq_obj.sq->id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_sq_enable, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_sq_enable, 22.07);
 int
 rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
 {
@@ -1597,7 +1597,7 @@ mlx5_external_tx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 	return &priv->ext_txqs[dpdk_idx - MLX5_EXTERNAL_TX_QUEUE_ID_MIN];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_map, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_map, 24.07);
 int
 rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 				      uint32_t hw_idx)
@@ -1631,7 +1631,7 @@ rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_unmap, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_unmap, 24.07);
 int
 rte_pmd_mlx5_external_tx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
 {
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9451431144..f11ad9251a 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -45,7 +45,7 @@ struct octeontx_vdev_init_params {
 	uint8_t	nr_port;
 };
 
-RTE_EXPORT_SYMBOL(rte_octeontx_pchan_map)
+RTE_EXPORT_SYMBOL(rte_octeontx_pchan_map);
 uint16_t
 rte_octeontx_pchan_map[OCTEONTX_MAX_BGX_PORTS][OCTEONTX_MAX_LMAC_PER_BGX];
 
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index b1085bf390..962106fa2c 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -457,7 +457,7 @@ do_eth_dev_ring_create(const char *name,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_from_rings)
+RTE_EXPORT_SYMBOL(rte_eth_from_rings);
 int
 rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[],
 		const unsigned int nb_rx_queues,
@@ -516,7 +516,7 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[],
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_from_ring)
+RTE_EXPORT_SYMBOL(rte_eth_from_ring);
 int
 rte_eth_from_ring(struct rte_ring *r)
 {
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index 91a1c3a98e..40d6e768bc 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -517,7 +517,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_softnic,
 	PMD_PARAM_CPU_ID "=<uint32> "
 );
 
-RTE_EXPORT_SYMBOL(rte_pmd_softnic_manage)
+RTE_EXPORT_SYMBOL(rte_pmd_softnic_manage);
 int
 rte_pmd_softnic_manage(uint16_t port_id)
 {
diff --git a/drivers/net/softnic/rte_eth_softnic_thread.c b/drivers/net/softnic/rte_eth_softnic_thread.c
index f72c836199..d18d7cf9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic_thread.c
+++ b/drivers/net/softnic/rte_eth_softnic_thread.c
@@ -555,7 +555,7 @@ rte_pmd_softnic_run_internal(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_softnic_run)
+RTE_EXPORT_SYMBOL(rte_pmd_softnic_run);
 int
 rte_pmd_softnic_run(uint16_t port_id)
 {
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 44bf2e3241..cd6698f353 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1046,7 +1046,7 @@ vhost_driver_setup(struct rte_eth_dev *eth_dev)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_vhost_get_queue_event)
+RTE_EXPORT_SYMBOL(rte_eth_vhost_get_queue_event);
 int
 rte_eth_vhost_get_queue_event(uint16_t port_id,
 		struct rte_eth_vhost_queue_event *event)
@@ -1084,7 +1084,7 @@ rte_eth_vhost_get_queue_event(uint16_t port_id,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_vhost_get_vid_from_port_id)
+RTE_EXPORT_SYMBOL(rte_eth_vhost_get_vid_from_port_id);
 int
 rte_eth_vhost_get_vid_from_port_id(uint16_t port_id)
 {
diff --git a/drivers/power/kvm_vm/guest_channel.c b/drivers/power/kvm_vm/guest_channel.c
index 42bfcedb56..7abffc2e3c 100644
--- a/drivers/power/kvm_vm/guest_channel.c
+++ b/drivers/power/kvm_vm/guest_channel.c
@@ -152,7 +152,7 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_guest_channel_send_msg)
+RTE_EXPORT_SYMBOL(rte_power_guest_channel_send_msg);
 int rte_power_guest_channel_send_msg(struct rte_power_channel_packet *pkt,
 			unsigned int lcore_id)
 {
@@ -214,7 +214,7 @@ int power_guest_channel_read_msg(void *pkt,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_guest_channel_receive_msg)
+RTE_EXPORT_SYMBOL(rte_power_guest_channel_receive_msg);
 int rte_power_guest_channel_receive_msg(void *pkt,
 		size_t pkt_len,
 		unsigned int lcore_id)
diff --git a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
index 60c2080740..bcb4373ec7 100644
--- a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
+++ b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
@@ -17,7 +17,7 @@
 #include "cnxk_rvu_lf.h"
 #include "cnxk_rvu_lf_driver.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_id_range_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_id_range_set);
 int
 rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to)
 {
@@ -32,7 +32,7 @@ rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to)
 	return roc_rvu_lf_msg_id_range_set(roc_rvu_lf, from, to);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_process);
 int
 rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
 			void *req, uint16_t req_len, void *rsp, uint16_t rsp_len)
@@ -48,7 +48,7 @@ rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
 	return roc_rvu_lf_msg_process(roc_rvu_lf, vf, msg_id, req, req_len, rsp, rsp_len);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_register);
 int
 rte_pmd_rvu_lf_msg_handler_register(uint8_t dev_id, rte_pmd_rvu_lf_msg_handler_cb_fn cb)
 {
@@ -63,7 +63,7 @@ rte_pmd_rvu_lf_msg_handler_register(uint8_t dev_id, rte_pmd_rvu_lf_msg_handler_c
 	return roc_rvu_lf_msg_handler_register(roc_rvu_lf, (roc_rvu_lf_msg_handler_cb_fn)cb);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_unregister);
 int
 rte_pmd_rvu_lf_msg_handler_unregister(uint8_t dev_id)
 {
@@ -78,7 +78,7 @@ rte_pmd_rvu_lf_msg_handler_unregister(uint8_t dev_id)
 	return roc_rvu_lf_msg_handler_unregister(roc_rvu_lf);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_register);
 int
 rte_pmd_rvu_lf_irq_register(uint8_t dev_id, unsigned int irq,
 			    rte_pmd_rvu_lf_intr_callback_fn cb, void *data)
@@ -94,7 +94,7 @@ rte_pmd_rvu_lf_irq_register(uint8_t dev_id, unsigned int irq,
 	return roc_rvu_lf_irq_register(roc_rvu_lf, irq, (roc_rvu_lf_intr_cb_fn)cb, data);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_unregister);
 int
 rte_pmd_rvu_lf_irq_unregister(uint8_t dev_id, unsigned int irq,
 			      rte_pmd_rvu_lf_intr_callback_fn cb, void *data)
@@ -110,7 +110,7 @@ rte_pmd_rvu_lf_irq_unregister(uint8_t dev_id, unsigned int irq,
 	return roc_rvu_lf_irq_unregister(roc_rvu_lf, irq, (roc_rvu_lf_intr_cb_fn)cb, data);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_bar_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_bar_get);
 int
 rte_pmd_rvu_lf_bar_get(uint8_t dev_id, uint8_t bar_num, size_t *va, size_t *mask)
 {
@@ -135,21 +135,21 @@ rte_pmd_rvu_lf_bar_get(uint8_t dev_id, uint8_t bar_num, size_t *va, size_t *mask
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_npa_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_npa_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_npa_pf_func_get(void)
 {
 	return roc_npa_pf_func_get();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_sso_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_sso_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_sso_pf_func_get(void)
 {
 	return roc_sso_pf_func_get();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_pf_func_get(uint8_t dev_id)
 {
diff --git a/drivers/raw/ifpga/rte_pmd_ifpga.c b/drivers/raw/ifpga/rte_pmd_ifpga.c
index 620b35624b..5b2b634da2 100644
--- a/drivers/raw/ifpga/rte_pmd_ifpga.c
+++ b/drivers/raw/ifpga/rte_pmd_ifpga.c
@@ -13,7 +13,7 @@
 #include "base/ifpga_sec_mgr.h"
 
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_dev_id);
 int
 rte_pmd_ifpga_get_dev_id(const char *pci_addr, uint16_t *dev_id)
 {
@@ -102,7 +102,7 @@ get_share_data(struct opae_adapter *adapter)
 	return sd;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_rsu_status)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_rsu_status);
 int
 rte_pmd_ifpga_get_rsu_status(uint16_t dev_id, uint32_t *stat, uint32_t *prog)
 {
@@ -125,7 +125,7 @@ rte_pmd_ifpga_get_rsu_status(uint16_t dev_id, uint32_t *stat, uint32_t *prog)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_set_rsu_status)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_set_rsu_status);
 int
 rte_pmd_ifpga_set_rsu_status(uint16_t dev_id, uint32_t stat, uint32_t prog)
 {
@@ -267,7 +267,7 @@ get_port_property(struct opae_adapter *adapter, uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_property)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_property);
 int
 rte_pmd_ifpga_get_property(uint16_t dev_id, rte_pmd_ifpga_prop *prop)
 {
@@ -304,7 +304,7 @@ rte_pmd_ifpga_get_property(uint16_t dev_id, rte_pmd_ifpga_prop *prop)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_phy_info)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_phy_info);
 int
 rte_pmd_ifpga_get_phy_info(uint16_t dev_id, rte_pmd_ifpga_phy_info *info)
 {
@@ -345,7 +345,7 @@ rte_pmd_ifpga_get_phy_info(uint16_t dev_id, rte_pmd_ifpga_phy_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_update_flash)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_update_flash);
 int
 rte_pmd_ifpga_update_flash(uint16_t dev_id, const char *image,
 	uint64_t *status)
@@ -359,7 +359,7 @@ rte_pmd_ifpga_update_flash(uint16_t dev_id, const char *image,
 	return opae_mgr_update_flash(adapter->mgr, image, status);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_stop_update)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_stop_update);
 int
 rte_pmd_ifpga_stop_update(uint16_t dev_id, int force)
 {
@@ -372,7 +372,7 @@ rte_pmd_ifpga_stop_update(uint16_t dev_id, int force)
 	return opae_mgr_stop_flash_update(adapter->mgr, force);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reboot_try)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reboot_try);
 int
 rte_pmd_ifpga_reboot_try(uint16_t dev_id)
 {
@@ -399,7 +399,7 @@ rte_pmd_ifpga_reboot_try(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reload)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reload);
 int
 rte_pmd_ifpga_reload(uint16_t dev_id, int type, int page)
 {
@@ -412,7 +412,7 @@ rte_pmd_ifpga_reload(uint16_t dev_id, int type, int page)
 	return opae_mgr_reload(adapter->mgr, type, page);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_partial_reconfigure)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_partial_reconfigure);
 int
 rte_pmd_ifpga_partial_reconfigure(uint16_t dev_id, int port, const char *file)
 {
@@ -427,7 +427,7 @@ rte_pmd_ifpga_partial_reconfigure(uint16_t dev_id, int port, const char *file)
 	return ifpga_rawdev_partial_reconfigure(dev, port, file);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_cleanup)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_cleanup);
 void
 rte_pmd_ifpga_cleanup(void)
 {
diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c
index 7056b1c117..5ec1202bd4 100644
--- a/lib/acl/acl_bld.c
+++ b/lib/acl/acl_bld.c
@@ -1622,7 +1622,7 @@ get_first_load_size(const struct rte_acl_config *cfg)
 	return (ofs < max_ofs) ? sizeof(uint32_t) : sizeof(uint8_t);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_build)
+RTE_EXPORT_SYMBOL(rte_acl_build);
 int
 rte_acl_build(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg)
 {
diff --git a/lib/acl/acl_run_scalar.c b/lib/acl/acl_run_scalar.c
index 32ebe3119b..24d160bf8c 100644
--- a/lib/acl/acl_run_scalar.c
+++ b/lib/acl/acl_run_scalar.c
@@ -108,7 +108,7 @@ scalar_transition(const uint64_t *trans_table, uint64_t transition,
 	return transition;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify_scalar)
+RTE_EXPORT_SYMBOL(rte_acl_classify_scalar);
 int
 rte_acl_classify_scalar(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories)
diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c
index 8c0ca29618..60e9d7d336 100644
--- a/lib/acl/rte_acl.c
+++ b/lib/acl/rte_acl.c
@@ -264,7 +264,7 @@ acl_get_best_alg(void)
 	return alg[i];
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_set_ctx_classify)
+RTE_EXPORT_SYMBOL(rte_acl_set_ctx_classify);
 extern int
 rte_acl_set_ctx_classify(struct rte_acl_ctx *ctx, enum rte_acl_classify_alg alg)
 {
@@ -287,7 +287,7 @@ rte_acl_set_ctx_classify(struct rte_acl_ctx *ctx, enum rte_acl_classify_alg alg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify_alg)
+RTE_EXPORT_SYMBOL(rte_acl_classify_alg);
 int
 rte_acl_classify_alg(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories,
@@ -300,7 +300,7 @@ rte_acl_classify_alg(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	return classify_fns[alg](ctx, data, results, num, categories);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify)
+RTE_EXPORT_SYMBOL(rte_acl_classify);
 int
 rte_acl_classify(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories)
@@ -309,7 +309,7 @@ rte_acl_classify(const struct rte_acl_ctx *ctx, const uint8_t **data,
 		ctx->alg);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_find_existing)
+RTE_EXPORT_SYMBOL(rte_acl_find_existing);
 struct rte_acl_ctx *
 rte_acl_find_existing(const char *name)
 {
@@ -334,7 +334,7 @@ rte_acl_find_existing(const char *name)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_free)
+RTE_EXPORT_SYMBOL(rte_acl_free);
 void
 rte_acl_free(struct rte_acl_ctx *ctx)
 {
@@ -367,7 +367,7 @@ rte_acl_free(struct rte_acl_ctx *ctx)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_create)
+RTE_EXPORT_SYMBOL(rte_acl_create);
 struct rte_acl_ctx *
 rte_acl_create(const struct rte_acl_param *param)
 {
@@ -464,7 +464,7 @@ acl_check_rule(const struct rte_acl_rule_data *rd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_add_rules)
+RTE_EXPORT_SYMBOL(rte_acl_add_rules);
 int
 rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules,
 	uint32_t num)
@@ -494,7 +494,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules,
  * Reset all rules.
  * Note that RT structures are not affected.
  */
-RTE_EXPORT_SYMBOL(rte_acl_reset_rules)
+RTE_EXPORT_SYMBOL(rte_acl_reset_rules);
 void
 rte_acl_reset_rules(struct rte_acl_ctx *ctx)
 {
@@ -505,7 +505,7 @@ rte_acl_reset_rules(struct rte_acl_ctx *ctx)
 /*
  * Reset all rules and destroys RT structures.
  */
-RTE_EXPORT_SYMBOL(rte_acl_reset)
+RTE_EXPORT_SYMBOL(rte_acl_reset);
 void
 rte_acl_reset(struct rte_acl_ctx *ctx)
 {
@@ -518,7 +518,7 @@ rte_acl_reset(struct rte_acl_ctx *ctx)
 /*
  * Dump ACL context to the stdout.
  */
-RTE_EXPORT_SYMBOL(rte_acl_dump)
+RTE_EXPORT_SYMBOL(rte_acl_dump);
 void
 rte_acl_dump(const struct rte_acl_ctx *ctx)
 {
@@ -538,7 +538,7 @@ rte_acl_dump(const struct rte_acl_ctx *ctx)
 /*
  * Dump all ACL contexts to the stdout.
  */
-RTE_EXPORT_SYMBOL(rte_acl_list_dump)
+RTE_EXPORT_SYMBOL(rte_acl_list_dump);
 void
 rte_acl_list_dump(void)
 {
diff --git a/lib/argparse/rte_argparse.c b/lib/argparse/rte_argparse.c
index 331f05f01d..1ddec956e9 100644
--- a/lib/argparse/rte_argparse.c
+++ b/lib/argparse/rte_argparse.c
@@ -793,7 +793,7 @@ show_args_help(const struct rte_argparse *obj)
 		printf("\n");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse, 24.03);
 int
 rte_argparse_parse(const struct rte_argparse *obj, int argc, char **argv)
 {
@@ -832,7 +832,7 @@ rte_argparse_parse(const struct rte_argparse *obj, int argc, char **argv)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse_type, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse_type, 24.03);
 int
 rte_argparse_parse_type(const char *str, enum rte_argparse_value_type val_type, void *val)
 {
diff --git a/lib/bbdev/bbdev_trace_points.c b/lib/bbdev/bbdev_trace_points.c
index 942c7be819..ac7ab2d553 100644
--- a/lib/bbdev/bbdev_trace_points.c
+++ b/lib/bbdev/bbdev_trace_points.c
@@ -22,9 +22,9 @@ RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_queue_start,
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_queue_stop,
 	lib.bbdev.queue.stop)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_enqueue, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_enqueue, 25.03);
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_enqueue,
 	lib.bbdev.enq)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_dequeue, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_dequeue, 25.03);
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_dequeue,
 	lib.bbdev.deq)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index e0f8c8eb0d..eecaae2396 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -93,7 +93,7 @@ static rte_spinlock_t rte_bbdev_cb_lock = RTE_SPINLOCK_INITIALIZER;
  * Global array of all devices. This is not static because it's used by the
  * inline enqueue and dequeue functions
  */
-RTE_EXPORT_SYMBOL(rte_bbdev_devices)
+RTE_EXPORT_SYMBOL(rte_bbdev_devices);
 struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
 
 /* Global array with rte_bbdev_data structures */
@@ -175,7 +175,7 @@ find_free_dev_id(void)
 	return RTE_BBDEV_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_allocate)
+RTE_EXPORT_SYMBOL(rte_bbdev_allocate);
 struct rte_bbdev *
 rte_bbdev_allocate(const char *name)
 {
@@ -235,7 +235,7 @@ rte_bbdev_allocate(const char *name)
 	return bbdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_release)
+RTE_EXPORT_SYMBOL(rte_bbdev_release);
 int
 rte_bbdev_release(struct rte_bbdev *bbdev)
 {
@@ -271,7 +271,7 @@ rte_bbdev_release(struct rte_bbdev *bbdev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_get_named_dev)
+RTE_EXPORT_SYMBOL(rte_bbdev_get_named_dev);
 struct rte_bbdev *
 rte_bbdev_get_named_dev(const char *name)
 {
@@ -292,14 +292,14 @@ rte_bbdev_get_named_dev(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_count)
+RTE_EXPORT_SYMBOL(rte_bbdev_count);
 uint16_t
 rte_bbdev_count(void)
 {
 	return num_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_is_valid)
+RTE_EXPORT_SYMBOL(rte_bbdev_is_valid);
 bool
 rte_bbdev_is_valid(uint16_t dev_id)
 {
@@ -309,7 +309,7 @@ rte_bbdev_is_valid(uint16_t dev_id)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_find_next)
+RTE_EXPORT_SYMBOL(rte_bbdev_find_next);
 uint16_t
 rte_bbdev_find_next(uint16_t dev_id)
 {
@@ -320,7 +320,7 @@ rte_bbdev_find_next(uint16_t dev_id)
 	return dev_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_setup_queues)
+RTE_EXPORT_SYMBOL(rte_bbdev_setup_queues);
 int
 rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
 {
@@ -413,7 +413,7 @@ rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_intr_enable)
+RTE_EXPORT_SYMBOL(rte_bbdev_intr_enable);
 int
 rte_bbdev_intr_enable(uint16_t dev_id)
 {
@@ -446,7 +446,7 @@ rte_bbdev_intr_enable(uint16_t dev_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_configure)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_configure);
 int
 rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
 		const struct rte_bbdev_queue_conf *conf)
@@ -568,7 +568,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_start)
+RTE_EXPORT_SYMBOL(rte_bbdev_start);
 int
 rte_bbdev_start(uint16_t dev_id)
 {
@@ -603,7 +603,7 @@ rte_bbdev_start(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stop)
+RTE_EXPORT_SYMBOL(rte_bbdev_stop);
 int
 rte_bbdev_stop(uint16_t dev_id)
 {
@@ -627,7 +627,7 @@ rte_bbdev_stop(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_close)
+RTE_EXPORT_SYMBOL(rte_bbdev_close);
 int
 rte_bbdev_close(uint16_t dev_id)
 {
@@ -675,7 +675,7 @@ rte_bbdev_close(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_start)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_start);
 int
 rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
 {
@@ -708,7 +708,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_stop)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_stop);
 int
 rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
 {
@@ -773,7 +773,7 @@ reset_stats_in_queues(struct rte_bbdev *dev)
 	rte_bbdev_log_debug("Reset stats on %u", dev->data->dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stats_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_stats_get);
 int
 rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
 {
@@ -797,7 +797,7 @@ rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_bbdev_stats_reset);
 int
 rte_bbdev_stats_reset(uint16_t dev_id)
 {
@@ -815,7 +815,7 @@ rte_bbdev_stats_reset(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_info_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_info_get);
 int
 rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
 {
@@ -844,7 +844,7 @@ rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_info_get);
 int
 rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
 		struct rte_bbdev_queue_info *queue_info)
@@ -931,7 +931,7 @@ bbdev_op_init(struct rte_mempool *mempool, void *arg, void *element,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_bbdev_op_pool_create);
 struct rte_mempool *
 rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
 		unsigned int num_elements, unsigned int cache_size,
@@ -979,7 +979,7 @@ rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_callback_register)
+RTE_EXPORT_SYMBOL(rte_bbdev_callback_register);
 int
 rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
 		rte_bbdev_cb_fn cb_fn, void *cb_arg)
@@ -1025,7 +1025,7 @@ rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
 	return (user_cb == NULL) ? -ENOMEM : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_bbdev_callback_unregister);
 int
 rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
 		rte_bbdev_cb_fn cb_fn, void *cb_arg)
@@ -1071,7 +1071,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_pmd_callback_process)
+RTE_EXPORT_SYMBOL(rte_bbdev_pmd_callback_process);
 void
 rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
 	enum rte_bbdev_event_type event, void *ret_param)
@@ -1114,7 +1114,7 @@ rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
 	rte_spinlock_unlock(&rte_bbdev_cb_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_enable)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_enable);
 int
 rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
 {
@@ -1126,7 +1126,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_intr_enable(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_disable)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_disable);
 int
 rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
 {
@@ -1138,7 +1138,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_intr_disable(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_ctl)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_ctl);
 int
 rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 		void *data)
@@ -1176,7 +1176,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_bbdev_op_type_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_op_type_str);
 const char *
 rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
 {
@@ -1197,7 +1197,7 @@ rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_device_status_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_device_status_str);
 const char *
 rte_bbdev_device_status_str(enum rte_bbdev_device_status status)
 {
@@ -1221,7 +1221,7 @@ rte_bbdev_device_status_str(enum rte_bbdev_device_status status)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_enqueue_status_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_enqueue_status_str);
 const char *
 rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status)
 {
@@ -1241,7 +1241,7 @@ rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_queue_ops_dump, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_queue_ops_dump, 24.11);
 int
 rte_bbdev_queue_ops_dump(uint16_t dev_id, uint16_t queue_id, FILE *f)
 {
@@ -1281,7 +1281,7 @@ rte_bbdev_queue_ops_dump(uint16_t dev_id, uint16_t queue_id, FILE *f)
 	return dev->dev_ops->queue_ops_dump(dev, queue_id, f);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_ops_param_string, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_ops_param_string, 24.11);
 char *
 rte_bbdev_ops_param_string(void *op, enum rte_bbdev_op_type op_type, char *str, uint32_t len)
 {
diff --git a/lib/bitratestats/rte_bitrate.c b/lib/bitratestats/rte_bitrate.c
index 592e478e06..4fe8ce452c 100644
--- a/lib/bitratestats/rte_bitrate.c
+++ b/lib/bitratestats/rte_bitrate.c
@@ -29,7 +29,7 @@ struct rte_stats_bitrates {
 	uint16_t id_stats_set;
 };
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_create)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_create);
 struct rte_stats_bitrates *
 rte_stats_bitrate_create(void)
 {
@@ -37,14 +37,14 @@ rte_stats_bitrate_create(void)
 		RTE_CACHE_LINE_SIZE);
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_free)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_free);
 void
 rte_stats_bitrate_free(struct rte_stats_bitrates *bitrate_data)
 {
 	rte_free(bitrate_data);
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_reg)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_reg);
 int
 rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data)
 {
@@ -66,7 +66,7 @@ rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data)
 	return return_value;
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_calc)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_calc);
 int
 rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data,
 			uint16_t port_id)
diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c
index 5239b3e11e..2a21934a22 100644
--- a/lib/bpf/bpf.c
+++ b/lib/bpf/bpf.c
@@ -11,7 +11,7 @@
 
 #include "bpf_impl.h"
 
-RTE_EXPORT_SYMBOL(rte_bpf_destroy)
+RTE_EXPORT_SYMBOL(rte_bpf_destroy);
 void
 rte_bpf_destroy(struct rte_bpf *bpf)
 {
@@ -22,7 +22,7 @@ rte_bpf_destroy(struct rte_bpf *bpf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_get_jit)
+RTE_EXPORT_SYMBOL(rte_bpf_get_jit);
 int
 rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit)
 {
diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 86e703299d..129457741d 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -518,7 +518,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
+RTE_EXPORT_SYMBOL(rte_bpf_convert);
 struct rte_bpf_prm *
 rte_bpf_convert(const struct bpf_program *prog)
 {
diff --git a/lib/bpf/bpf_dump.c b/lib/bpf/bpf_dump.c
index 6ee0e32b43..e2a4f48a2d 100644
--- a/lib/bpf/bpf_dump.c
+++ b/lib/bpf/bpf_dump.c
@@ -44,7 +44,7 @@ static const char *const jump_tbl[16] = {
 	[EBPF_CALL >> 4] = "call", [EBPF_EXIT >> 4] = "exit",
 };
 
-RTE_EXPORT_SYMBOL(rte_bpf_dump)
+RTE_EXPORT_SYMBOL(rte_bpf_dump);
 void rte_bpf_dump(FILE *f, const struct ebpf_insn *buf, uint32_t len)
 {
 	uint32_t i;
diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index 4b5ea9f1a4..7090be62e1 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -476,7 +476,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_exec_burst)
+RTE_EXPORT_SYMBOL(rte_bpf_exec_burst);
 uint32_t
 rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	uint32_t num)
@@ -496,7 +496,7 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_exec)
+RTE_EXPORT_SYMBOL(rte_bpf_exec);
 uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
 {
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 556e613762..5050cbf34d 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -80,7 +80,7 @@ bpf_check_xsym(const struct rte_bpf_xsym *xsym)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_load);
 struct rte_bpf *
 rte_bpf_load(const struct rte_bpf_prm *prm)
 {
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 1d30ba17e2..26cf263ba2 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -295,7 +295,7 @@ bpf_load_elf(const struct rte_bpf_prm *prm, int32_t fd, const char *section)
 	return bpf;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load);
 struct rte_bpf *
 rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	const char *sname)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index 01f813c56b..7167603bf0 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -466,7 +466,7 @@ bpf_eth_unload(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_unload)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_unload);
 void
 rte_bpf_eth_rx_unload(uint16_t port, uint16_t queue)
 {
@@ -478,7 +478,7 @@ rte_bpf_eth_rx_unload(uint16_t port, uint16_t queue)
 	rte_spinlock_unlock(&cbh->lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_unload)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_unload);
 void
 rte_bpf_eth_tx_unload(uint16_t port, uint16_t queue)
 {
@@ -560,7 +560,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_elf_load);
 int
 rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
@@ -577,7 +577,7 @@ rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_elf_load);
 int
 rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index dea0d703ca..fdefa70e91 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -11,7 +11,7 @@
  */
 
 #ifndef RTE_LIBRTE_BPF_ELF
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load);
 struct rte_bpf *
 rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	const char *sname)
@@ -29,7 +29,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 #endif
 
 #ifndef RTE_HAS_LIBPCAP
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
+RTE_EXPORT_SYMBOL(rte_bpf_convert);
 struct rte_bpf_prm *
 rte_bpf_convert(const struct bpf_program *prog)
 {
diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c
index 8bbdcf146e..fcf6e31924 100644
--- a/lib/cfgfile/rte_cfgfile.c
+++ b/lib/cfgfile/rte_cfgfile.c
@@ -159,7 +159,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_load)
+RTE_EXPORT_SYMBOL(rte_cfgfile_load);
 struct rte_cfgfile *
 rte_cfgfile_load(const char *filename, int flags)
 {
@@ -167,7 +167,7 @@ rte_cfgfile_load(const char *filename, int flags)
 					    &default_cfgfile_params);
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_load_with_params)
+RTE_EXPORT_SYMBOL(rte_cfgfile_load_with_params);
 struct rte_cfgfile *
 rte_cfgfile_load_with_params(const char *filename, int flags,
 			     const struct rte_cfgfile_parameters *params)
@@ -272,7 +272,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_create)
+RTE_EXPORT_SYMBOL(rte_cfgfile_create);
 struct rte_cfgfile *
 rte_cfgfile_create(int flags)
 {
@@ -329,7 +329,7 @@ rte_cfgfile_create(int flags)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_add_section)
+RTE_EXPORT_SYMBOL(rte_cfgfile_add_section);
 int
 rte_cfgfile_add_section(struct rte_cfgfile *cfg, const char *sectionname)
 {
@@ -371,7 +371,7 @@ rte_cfgfile_add_section(struct rte_cfgfile *cfg, const char *sectionname)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_add_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_add_entry);
 int rte_cfgfile_add_entry(struct rte_cfgfile *cfg,
 		const char *sectionname, const char *entryname,
 		const char *entryvalue)
@@ -396,7 +396,7 @@ int rte_cfgfile_add_entry(struct rte_cfgfile *cfg,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_set_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_set_entry);
 int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname, const char *entryvalue)
 {
@@ -425,7 +425,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_save)
+RTE_EXPORT_SYMBOL(rte_cfgfile_save);
 int rte_cfgfile_save(struct rte_cfgfile *cfg, const char *filename)
 {
 	int i, j;
@@ -450,7 +450,7 @@ int rte_cfgfile_save(struct rte_cfgfile *cfg, const char *filename)
 	return fclose(f);
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_close)
+RTE_EXPORT_SYMBOL(rte_cfgfile_close);
 int rte_cfgfile_close(struct rte_cfgfile *cfg)
 {
 	int i;
@@ -474,7 +474,7 @@ int rte_cfgfile_close(struct rte_cfgfile *cfg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_num_sections)
+RTE_EXPORT_SYMBOL(rte_cfgfile_num_sections);
 int
 rte_cfgfile_num_sections(struct rte_cfgfile *cfg, const char *sectionname,
 size_t length)
@@ -488,7 +488,7 @@ size_t length)
 	return num_sections;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_sections)
+RTE_EXPORT_SYMBOL(rte_cfgfile_sections);
 int
 rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
 	int max_sections)
@@ -501,14 +501,14 @@ rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_has_section)
+RTE_EXPORT_SYMBOL(rte_cfgfile_has_section);
 int
 rte_cfgfile_has_section(struct rte_cfgfile *cfg, const char *sectionname)
 {
 	return _get_section(cfg, sectionname) != NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries);
 int
 rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
 	const char *sectionname)
@@ -519,7 +519,7 @@ rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
 	return s->num_entries;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries_by_index)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries_by_index);
 int
 rte_cfgfile_section_num_entries_by_index(struct rte_cfgfile *cfg,
 	char *sectionname, int index)
@@ -532,7 +532,7 @@ rte_cfgfile_section_num_entries_by_index(struct rte_cfgfile *cfg,
 	strlcpy(sectionname, sect->name, CFG_NAME_LEN);
 	return sect->num_entries;
 }
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries);
 int
 rte_cfgfile_section_entries(struct rte_cfgfile *cfg, const char *sectionname,
 		struct rte_cfgfile_entry *entries, int max_entries)
@@ -546,7 +546,7 @@ rte_cfgfile_section_entries(struct rte_cfgfile *cfg, const char *sectionname,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries_by_index)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries_by_index);
 int
 rte_cfgfile_section_entries_by_index(struct rte_cfgfile *cfg, int index,
 		char *sectionname,
@@ -564,7 +564,7 @@ rte_cfgfile_section_entries_by_index(struct rte_cfgfile *cfg, int index,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_get_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_get_entry);
 const char *
 rte_cfgfile_get_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname)
@@ -580,7 +580,7 @@ rte_cfgfile_get_entry(struct rte_cfgfile *cfg, const char *sectionname,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_has_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_has_entry);
 int
 rte_cfgfile_has_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname)
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index d1003f0b8e..eae053b184 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -40,7 +40,7 @@ cmdline_complete_buffer(struct rdline *rdl, const char *buf,
 	return cmdline_complete(cl, buf, state, dstbuf, dstsize);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_write_char)
+RTE_EXPORT_SYMBOL(cmdline_write_char);
 int
 cmdline_write_char(struct rdline *rdl, char c)
 {
@@ -59,7 +59,7 @@ cmdline_write_char(struct rdline *rdl, char c)
 }
 
 
-RTE_EXPORT_SYMBOL(cmdline_set_prompt)
+RTE_EXPORT_SYMBOL(cmdline_set_prompt);
 void
 cmdline_set_prompt(struct cmdline *cl, const char *prompt)
 {
@@ -68,7 +68,7 @@ cmdline_set_prompt(struct cmdline *cl, const char *prompt)
 	strlcpy(cl->prompt, prompt, sizeof(cl->prompt));
 }
 
-RTE_EXPORT_SYMBOL(cmdline_new)
+RTE_EXPORT_SYMBOL(cmdline_new);
 struct cmdline *
 cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 {
@@ -99,14 +99,14 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 	return cl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_rdline)
+RTE_EXPORT_SYMBOL(cmdline_get_rdline);
 struct rdline*
 cmdline_get_rdline(struct cmdline *cl)
 {
 	return &cl->rdl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_free)
+RTE_EXPORT_SYMBOL(cmdline_free);
 void
 cmdline_free(struct cmdline *cl)
 {
@@ -122,7 +122,7 @@ cmdline_free(struct cmdline *cl)
 	free(cl);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_printf)
+RTE_EXPORT_SYMBOL(cmdline_printf);
 void
 cmdline_printf(const struct cmdline *cl, const char *fmt, ...)
 {
@@ -138,7 +138,7 @@ cmdline_printf(const struct cmdline *cl, const char *fmt, ...)
 	va_end(ap);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_in)
+RTE_EXPORT_SYMBOL(cmdline_in);
 int
 cmdline_in(struct cmdline *cl, const char *buf, int size)
 {
@@ -176,7 +176,7 @@ cmdline_in(struct cmdline *cl, const char *buf, int size)
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_quit)
+RTE_EXPORT_SYMBOL(cmdline_quit);
 void
 cmdline_quit(struct cmdline *cl)
 {
@@ -186,7 +186,7 @@ cmdline_quit(struct cmdline *cl)
 	rdline_quit(&cl->rdl);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_interact)
+RTE_EXPORT_SYMBOL(cmdline_interact);
 void
 cmdline_interact(struct cmdline *cl)
 {
diff --git a/lib/cmdline/cmdline_cirbuf.c b/lib/cmdline/cmdline_cirbuf.c
index 07d9fc6b90..b74d61bb52 100644
--- a/lib/cmdline/cmdline_cirbuf.c
+++ b/lib/cmdline/cmdline_cirbuf.c
@@ -13,7 +13,7 @@
 #include <eal_export.h>
 
 
-RTE_EXPORT_SYMBOL(cirbuf_init)
+RTE_EXPORT_SYMBOL(cirbuf_init);
 int
 cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int maxlen)
 {
@@ -29,7 +29,7 @@ cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int max
 
 /* multiple add */
 
-RTE_EXPORT_SYMBOL(cirbuf_add_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_add_buf_head);
 int
 cirbuf_add_buf_head(struct cirbuf *cbuf, const char *c, unsigned int n)
 {
@@ -61,7 +61,7 @@ cirbuf_add_buf_head(struct cirbuf *cbuf, const char *c, unsigned int n)
 
 /* multiple add */
 
-RTE_EXPORT_SYMBOL(cirbuf_add_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_add_buf_tail);
 int
 cirbuf_add_buf_tail(struct cirbuf *cbuf, const char *c, unsigned int n)
 {
@@ -105,7 +105,7 @@ __cirbuf_add_head(struct cirbuf * cbuf, char c)
 	cbuf->len ++;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_head_safe)
+RTE_EXPORT_SYMBOL(cirbuf_add_head_safe);
 int
 cirbuf_add_head_safe(struct cirbuf * cbuf, char c)
 {
@@ -116,7 +116,7 @@ cirbuf_add_head_safe(struct cirbuf * cbuf, char c)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_head)
+RTE_EXPORT_SYMBOL(cirbuf_add_head);
 void
 cirbuf_add_head(struct cirbuf * cbuf, char c)
 {
@@ -136,7 +136,7 @@ __cirbuf_add_tail(struct cirbuf * cbuf, char c)
 	cbuf->len ++;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_tail_safe)
+RTE_EXPORT_SYMBOL(cirbuf_add_tail_safe);
 int
 cirbuf_add_tail_safe(struct cirbuf * cbuf, char c)
 {
@@ -147,7 +147,7 @@ cirbuf_add_tail_safe(struct cirbuf * cbuf, char c)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_tail)
+RTE_EXPORT_SYMBOL(cirbuf_add_tail);
 void
 cirbuf_add_tail(struct cirbuf * cbuf, char c)
 {
@@ -190,7 +190,7 @@ __cirbuf_shift_right(struct cirbuf *cbuf)
 }
 
 /* XXX we could do a better algorithm here... */
-RTE_EXPORT_SYMBOL(cirbuf_align_left)
+RTE_EXPORT_SYMBOL(cirbuf_align_left);
 int
 cirbuf_align_left(struct cirbuf * cbuf)
 {
@@ -212,7 +212,7 @@ cirbuf_align_left(struct cirbuf * cbuf)
 }
 
 /* XXX we could do a better algorithm here... */
-RTE_EXPORT_SYMBOL(cirbuf_align_right)
+RTE_EXPORT_SYMBOL(cirbuf_align_right);
 int
 cirbuf_align_right(struct cirbuf * cbuf)
 {
@@ -235,7 +235,7 @@ cirbuf_align_right(struct cirbuf * cbuf)
 
 /* buffer del */
 
-RTE_EXPORT_SYMBOL(cirbuf_del_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_del_buf_head);
 int
 cirbuf_del_buf_head(struct cirbuf *cbuf, unsigned int size)
 {
@@ -256,7 +256,7 @@ cirbuf_del_buf_head(struct cirbuf *cbuf, unsigned int size)
 
 /* buffer del */
 
-RTE_EXPORT_SYMBOL(cirbuf_del_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_del_buf_tail);
 int
 cirbuf_del_buf_tail(struct cirbuf *cbuf, unsigned int size)
 {
@@ -287,7 +287,7 @@ __cirbuf_del_head(struct cirbuf * cbuf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_head_safe)
+RTE_EXPORT_SYMBOL(cirbuf_del_head_safe);
 int
 cirbuf_del_head_safe(struct cirbuf * cbuf)
 {
@@ -298,7 +298,7 @@ cirbuf_del_head_safe(struct cirbuf * cbuf)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_head)
+RTE_EXPORT_SYMBOL(cirbuf_del_head);
 void
 cirbuf_del_head(struct cirbuf * cbuf)
 {
@@ -317,7 +317,7 @@ __cirbuf_del_tail(struct cirbuf * cbuf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_tail_safe)
+RTE_EXPORT_SYMBOL(cirbuf_del_tail_safe);
 int
 cirbuf_del_tail_safe(struct cirbuf * cbuf)
 {
@@ -328,7 +328,7 @@ cirbuf_del_tail_safe(struct cirbuf * cbuf)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_tail)
+RTE_EXPORT_SYMBOL(cirbuf_del_tail);
 void
 cirbuf_del_tail(struct cirbuf * cbuf)
 {
@@ -337,7 +337,7 @@ cirbuf_del_tail(struct cirbuf * cbuf)
 
 /* convert to buffer */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_get_buf_head);
 int
 cirbuf_get_buf_head(struct cirbuf *cbuf, char *c, unsigned int size)
 {
@@ -376,7 +376,7 @@ cirbuf_get_buf_head(struct cirbuf *cbuf, char *c, unsigned int size)
 
 /* convert to buffer */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_get_buf_tail);
 int
 cirbuf_get_buf_tail(struct cirbuf *cbuf, char *c, unsigned int size)
 {
@@ -416,7 +416,7 @@ cirbuf_get_buf_tail(struct cirbuf *cbuf, char *c, unsigned int size)
 
 /* get head or get tail */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_head)
+RTE_EXPORT_SYMBOL(cirbuf_get_head);
 char
 cirbuf_get_head(struct cirbuf * cbuf)
 {
@@ -425,7 +425,7 @@ cirbuf_get_head(struct cirbuf * cbuf)
 
 /* get head or get tail */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_tail)
+RTE_EXPORT_SYMBOL(cirbuf_get_tail);
 char
 cirbuf_get_tail(struct cirbuf * cbuf)
 {
diff --git a/lib/cmdline/cmdline_parse.c b/lib/cmdline/cmdline_parse.c
index 201fddb8c3..cfaba5f83b 100644
--- a/lib/cmdline/cmdline_parse.c
+++ b/lib/cmdline/cmdline_parse.c
@@ -50,7 +50,7 @@ iscomment(char c)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_isendoftoken)
+RTE_EXPORT_SYMBOL(cmdline_isendoftoken);
 int
 cmdline_isendoftoken(char c)
 {
@@ -298,21 +298,21 @@ __cmdline_parse(struct cmdline *cl, const char *buf, bool call_fn)
 	return linelen;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse)
+RTE_EXPORT_SYMBOL(cmdline_parse);
 int
 cmdline_parse(struct cmdline *cl, const char *buf)
 {
 	return __cmdline_parse(cl, buf, true);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_check)
+RTE_EXPORT_SYMBOL(cmdline_parse_check);
 int
 cmdline_parse_check(struct cmdline *cl, const char *buf)
 {
 	return __cmdline_parse(cl, buf, false);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete)
+RTE_EXPORT_SYMBOL(cmdline_complete);
 int
 cmdline_complete(struct cmdline *cl, const char *buf, int *state,
 		 char *dst, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_bool.c b/lib/cmdline/cmdline_parse_bool.c
index e03cc3d545..4ef6b8ac68 100644
--- a/lib/cmdline/cmdline_parse_bool.c
+++ b/lib/cmdline/cmdline_parse_bool.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse_bool.h"
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(cmdline_token_bool_ops, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(cmdline_token_bool_ops, 25.03);
 struct cmdline_token_ops cmdline_token_bool_ops = {
 	.parse = cmdline_parse_bool,
 	.complete_get_nb = NULL,
diff --git a/lib/cmdline/cmdline_parse_etheraddr.c b/lib/cmdline/cmdline_parse_etheraddr.c
index 7358572ba1..eec5a71b9d 100644
--- a/lib/cmdline/cmdline_parse_etheraddr.c
+++ b/lib/cmdline/cmdline_parse_etheraddr.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_etheraddr.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_etheraddr_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_etheraddr_ops);
 struct cmdline_token_ops cmdline_token_etheraddr_ops = {
 	.parse = cmdline_parse_etheraddr,
 	.complete_get_nb = NULL,
@@ -22,7 +22,7 @@ struct cmdline_token_ops cmdline_token_etheraddr_ops = {
 	.get_help = cmdline_get_help_etheraddr,
 };
 
-RTE_EXPORT_SYMBOL(cmdline_parse_etheraddr)
+RTE_EXPORT_SYMBOL(cmdline_parse_etheraddr);
 int
 cmdline_parse_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 	const char *buf, void *res, unsigned ressize)
@@ -54,7 +54,7 @@ cmdline_parse_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_etheraddr)
+RTE_EXPORT_SYMBOL(cmdline_get_help_etheraddr);
 int
 cmdline_get_help_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 			       char *dstbuf, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_ipaddr.c b/lib/cmdline/cmdline_parse_ipaddr.c
index 55522016c8..c44275fd42 100644
--- a/lib/cmdline/cmdline_parse_ipaddr.c
+++ b/lib/cmdline/cmdline_parse_ipaddr.c
@@ -15,7 +15,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_ipaddr.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_ipaddr_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_ipaddr_ops);
 struct cmdline_token_ops cmdline_token_ipaddr_ops = {
 	.parse = cmdline_parse_ipaddr,
 	.complete_get_nb = NULL,
@@ -26,7 +26,7 @@ struct cmdline_token_ops cmdline_token_ipaddr_ops = {
 #define PREFIXMAX 128
 #define V4PREFIXMAX 32
 
-RTE_EXPORT_SYMBOL(cmdline_parse_ipaddr)
+RTE_EXPORT_SYMBOL(cmdline_parse_ipaddr);
 int
 cmdline_parse_ipaddr(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	unsigned ressize)
@@ -93,7 +93,7 @@ cmdline_parse_ipaddr(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_ipaddr)
+RTE_EXPORT_SYMBOL(cmdline_get_help_ipaddr);
 int cmdline_get_help_ipaddr(cmdline_parse_token_hdr_t *tk, char *dstbuf,
 			    unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_parse_num.c b/lib/cmdline/cmdline_parse_num.c
index f21796bedb..a4be661ed5 100644
--- a/lib/cmdline/cmdline_parse_num.c
+++ b/lib/cmdline/cmdline_parse_num.c
@@ -21,7 +21,7 @@
 #define debug_printf(...) do {} while (0)
 #endif
 
-RTE_EXPORT_SYMBOL(cmdline_token_num_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_num_ops);
 struct cmdline_token_ops cmdline_token_num_ops = {
 	.parse = cmdline_parse_num,
 	.complete_get_nb = NULL,
@@ -94,7 +94,7 @@ check_res_size(struct cmdline_token_num_data *nd, unsigned ressize)
 }
 
 /* parse an int */
-RTE_EXPORT_SYMBOL(cmdline_parse_num)
+RTE_EXPORT_SYMBOL(cmdline_parse_num);
 int
 cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
 	unsigned ressize)
@@ -316,7 +316,7 @@ cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
 
 
 /* parse an int */
-RTE_EXPORT_SYMBOL(cmdline_get_help_num)
+RTE_EXPORT_SYMBOL(cmdline_get_help_num);
 int
 cmdline_get_help_num(cmdline_parse_token_hdr_t *tk, char *dstbuf, unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_parse_portlist.c b/lib/cmdline/cmdline_parse_portlist.c
index ef6ce223b5..e1a35c0385 100644
--- a/lib/cmdline/cmdline_parse_portlist.c
+++ b/lib/cmdline/cmdline_parse_portlist.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_portlist.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_portlist_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_portlist_ops);
 struct cmdline_token_ops cmdline_token_portlist_ops = {
 	.parse = cmdline_parse_portlist,
 	.complete_get_nb = NULL,
@@ -70,7 +70,7 @@ parse_ports(cmdline_portlist_t *pl, const char *str)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_portlist)
+RTE_EXPORT_SYMBOL(cmdline_parse_portlist);
 int
 cmdline_parse_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 	const char *buf, void *res, unsigned ressize)
@@ -107,7 +107,7 @@ cmdline_parse_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_portlist)
+RTE_EXPORT_SYMBOL(cmdline_get_help_portlist);
 int
 cmdline_get_help_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 		char *dstbuf, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_string.c b/lib/cmdline/cmdline_parse_string.c
index 731947159f..e6a68656a6 100644
--- a/lib/cmdline/cmdline_parse_string.c
+++ b/lib/cmdline/cmdline_parse_string.c
@@ -12,7 +12,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_string.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_string_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_string_ops);
 struct cmdline_token_ops cmdline_token_string_ops = {
 	.parse = cmdline_parse_string,
 	.complete_get_nb = cmdline_complete_get_nb_string,
@@ -49,7 +49,7 @@ get_next_token(const char *s)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_string)
+RTE_EXPORT_SYMBOL(cmdline_parse_string);
 int
 cmdline_parse_string(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	unsigned ressize)
@@ -135,7 +135,7 @@ cmdline_parse_string(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete_get_nb_string)
+RTE_EXPORT_SYMBOL(cmdline_complete_get_nb_string);
 int cmdline_complete_get_nb_string(cmdline_parse_token_hdr_t *tk)
 {
 	struct cmdline_token_string *tk2;
@@ -159,7 +159,7 @@ int cmdline_complete_get_nb_string(cmdline_parse_token_hdr_t *tk)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete_get_elt_string)
+RTE_EXPORT_SYMBOL(cmdline_complete_get_elt_string);
 int cmdline_complete_get_elt_string(cmdline_parse_token_hdr_t *tk, int idx,
 				    char *dstbuf, unsigned int size)
 {
@@ -192,7 +192,7 @@ int cmdline_complete_get_elt_string(cmdline_parse_token_hdr_t *tk, int idx,
 }
 
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_string)
+RTE_EXPORT_SYMBOL(cmdline_get_help_string);
 int cmdline_get_help_string(cmdline_parse_token_hdr_t *tk, char *dstbuf,
 			    unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 3b8d435e98..f9b9959331 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -54,7 +54,7 @@ rdline_init(struct rdline *rdl,
 	return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
 }
 
-RTE_EXPORT_SYMBOL(rdline_new)
+RTE_EXPORT_SYMBOL(rdline_new);
 struct rdline *
 rdline_new(rdline_write_char_t *write_char,
 	   rdline_validate_t *validate,
@@ -71,14 +71,14 @@ rdline_new(rdline_write_char_t *write_char,
 	return rdl;
 }
 
-RTE_EXPORT_SYMBOL(rdline_free)
+RTE_EXPORT_SYMBOL(rdline_free);
 void
 rdline_free(struct rdline *rdl)
 {
 	free(rdl);
 }
 
-RTE_EXPORT_SYMBOL(rdline_newline)
+RTE_EXPORT_SYMBOL(rdline_newline);
 void
 rdline_newline(struct rdline *rdl, const char *prompt)
 {
@@ -103,7 +103,7 @@ rdline_newline(struct rdline *rdl, const char *prompt)
 	rdl->history_cur_line = -1;
 }
 
-RTE_EXPORT_SYMBOL(rdline_stop)
+RTE_EXPORT_SYMBOL(rdline_stop);
 void
 rdline_stop(struct rdline *rdl)
 {
@@ -112,7 +112,7 @@ rdline_stop(struct rdline *rdl)
 	rdl->status = RDLINE_INIT;
 }
 
-RTE_EXPORT_SYMBOL(rdline_quit)
+RTE_EXPORT_SYMBOL(rdline_quit);
 void
 rdline_quit(struct rdline *rdl)
 {
@@ -121,7 +121,7 @@ rdline_quit(struct rdline *rdl)
 	rdl->status = RDLINE_EXITED;
 }
 
-RTE_EXPORT_SYMBOL(rdline_restart)
+RTE_EXPORT_SYMBOL(rdline_restart);
 void
 rdline_restart(struct rdline *rdl)
 {
@@ -130,7 +130,7 @@ rdline_restart(struct rdline *rdl)
 	rdl->status = RDLINE_RUNNING;
 }
 
-RTE_EXPORT_SYMBOL(rdline_reset)
+RTE_EXPORT_SYMBOL(rdline_reset);
 void
 rdline_reset(struct rdline *rdl)
 {
@@ -145,7 +145,7 @@ rdline_reset(struct rdline *rdl)
 	rdl->history_cur_line = -1;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_buffer)
+RTE_EXPORT_SYMBOL(rdline_get_buffer);
 const char *
 rdline_get_buffer(struct rdline *rdl)
 {
@@ -182,7 +182,7 @@ display_right_buffer(struct rdline *rdl, int force)
 				  CIRBUF_GET_LEN(&rdl->right));
 }
 
-RTE_EXPORT_SYMBOL(rdline_redisplay)
+RTE_EXPORT_SYMBOL(rdline_redisplay);
 void
 rdline_redisplay(struct rdline *rdl)
 {
@@ -201,7 +201,7 @@ rdline_redisplay(struct rdline *rdl)
 	display_right_buffer(rdl, 1);
 }
 
-RTE_EXPORT_SYMBOL(rdline_char_in)
+RTE_EXPORT_SYMBOL(rdline_char_in);
 int
 rdline_char_in(struct rdline *rdl, char c)
 {
@@ -573,7 +573,7 @@ rdline_get_history_size(struct rdline * rdl)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_history_item)
+RTE_EXPORT_SYMBOL(rdline_get_history_item);
 char *
 rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 {
@@ -600,21 +600,21 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_history_buffer_size)
+RTE_EXPORT_SYMBOL(rdline_get_history_buffer_size);
 size_t
 rdline_get_history_buffer_size(struct rdline *rdl)
 {
 	return sizeof(rdl->history_buf);
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_opaque)
+RTE_EXPORT_SYMBOL(rdline_get_opaque);
 void *
 rdline_get_opaque(struct rdline *rdl)
 {
 	return rdl != NULL ? rdl->opaque : NULL;
 }
 
-RTE_EXPORT_SYMBOL(rdline_add_history)
+RTE_EXPORT_SYMBOL(rdline_add_history);
 int
 rdline_add_history(struct rdline * rdl, const char * buf)
 {
@@ -644,7 +644,7 @@ rdline_add_history(struct rdline * rdl, const char * buf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rdline_clear_history)
+RTE_EXPORT_SYMBOL(rdline_clear_history);
 void
 rdline_clear_history(struct rdline * rdl)
 {
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index f3d62acdae..53131e17c8 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -14,7 +14,7 @@
 
 #include <eal_export.h>
 
-RTE_EXPORT_SYMBOL(cmdline_file_new)
+RTE_EXPORT_SYMBOL(cmdline_file_new);
 struct cmdline *
 cmdline_file_new(cmdline_parse_ctx_t *ctx, const char *prompt, const char *path)
 {
@@ -32,7 +32,7 @@ cmdline_file_new(cmdline_parse_ctx_t *ctx, const char *prompt, const char *path)
 	return cmdline_new(ctx, prompt, fd, -1);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_stdin_new)
+RTE_EXPORT_SYMBOL(cmdline_stdin_new);
 struct cmdline *
 cmdline_stdin_new(cmdline_parse_ctx_t *ctx, const char *prompt)
 {
@@ -46,7 +46,7 @@ cmdline_stdin_new(cmdline_parse_ctx_t *ctx, const char *prompt)
 	return cl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_stdin_exit)
+RTE_EXPORT_SYMBOL(cmdline_stdin_exit);
 void
 cmdline_stdin_exit(struct cmdline *cl)
 {
diff --git a/lib/cmdline/cmdline_vt100.c b/lib/cmdline/cmdline_vt100.c
index 272088a0c6..8eaa3efb36 100644
--- a/lib/cmdline/cmdline_vt100.c
+++ b/lib/cmdline/cmdline_vt100.c
@@ -42,7 +42,7 @@ const char *cmdline_vt100_commands[] = {
 	vt100_bs,
 };
 
-RTE_EXPORT_SYMBOL(vt100_init)
+RTE_EXPORT_SYMBOL(vt100_init);
 void
 vt100_init(struct cmdline_vt100 *vt)
 {
@@ -72,7 +72,7 @@ match_command(char *buf, unsigned int size)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(vt100_parser)
+RTE_EXPORT_SYMBOL(vt100_parser);
 int
 vt100_parser(struct cmdline_vt100 *vt, char ch)
 {
diff --git a/lib/compressdev/rte_comp.c b/lib/compressdev/rte_comp.c
index 662691796a..c22e25c762 100644
--- a/lib/compressdev/rte_comp.c
+++ b/lib/compressdev/rte_comp.c
@@ -6,7 +6,7 @@
 #include "rte_comp.h"
 #include "rte_compressdev_internal.h"
 
-RTE_EXPORT_SYMBOL(rte_comp_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_comp_get_feature_name);
 const char *
 rte_comp_get_feature_name(uint64_t flag)
 {
@@ -125,7 +125,7 @@ rte_comp_op_init(struct rte_mempool *mempool,
 	op->mempool = mempool;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_comp_op_pool_create);
 struct rte_mempool *
 rte_comp_op_pool_create(const char *name,
 		unsigned int nb_elts, unsigned int cache_size,
@@ -181,7 +181,7 @@ rte_comp_op_pool_create(const char *name,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_alloc)
+RTE_EXPORT_SYMBOL(rte_comp_op_alloc);
 struct rte_comp_op *
 rte_comp_op_alloc(struct rte_mempool *mempool)
 {
@@ -197,7 +197,7 @@ rte_comp_op_alloc(struct rte_mempool *mempool)
 	return op;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_bulk_alloc)
+RTE_EXPORT_SYMBOL(rte_comp_op_bulk_alloc);
 int
 rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -223,7 +223,7 @@ rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
  * @param op
  *   Compress operation
  */
-RTE_EXPORT_SYMBOL(rte_comp_op_free)
+RTE_EXPORT_SYMBOL(rte_comp_op_free);
 void
 rte_comp_op_free(struct rte_comp_op *op)
 {
@@ -231,7 +231,7 @@ rte_comp_op_free(struct rte_comp_op *op)
 		rte_mempool_put(op->mempool, op);
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_bulk_free)
+RTE_EXPORT_SYMBOL(rte_comp_op_bulk_free);
 void
 rte_comp_op_bulk_free(struct rte_comp_op **ops, uint16_t nb_ops)
 {
diff --git a/lib/compressdev/rte_compressdev.c b/lib/compressdev/rte_compressdev.c
index 33de3f511b..cbb7c812f4 100644
--- a/lib/compressdev/rte_compressdev.c
+++ b/lib/compressdev/rte_compressdev.c
@@ -29,7 +29,7 @@ static struct rte_compressdev_global compressdev_globals = {
 		.max_devs		= RTE_COMPRESS_MAX_DEVS
 };
 
-RTE_EXPORT_SYMBOL(rte_compressdev_capability_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_capability_get);
 const struct rte_compressdev_capabilities *
 rte_compressdev_capability_get(uint8_t dev_id,
 			enum rte_comp_algorithm algo)
@@ -53,7 +53,7 @@ rte_compressdev_capability_get(uint8_t dev_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_compressdev_get_feature_name);
 const char *
 rte_compressdev_get_feature_name(uint64_t flag)
 {
@@ -83,7 +83,7 @@ rte_compressdev_get_dev(uint8_t dev_id)
 	return &compressdev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_get_named_dev)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_get_named_dev);
 struct rte_compressdev *
 rte_compressdev_pmd_get_named_dev(const char *name)
 {
@@ -120,7 +120,7 @@ rte_compressdev_is_valid_dev(uint8_t dev_id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_compressdev_get_dev_id);
 int
 rte_compressdev_get_dev_id(const char *name)
 {
@@ -139,14 +139,14 @@ rte_compressdev_get_dev_id(const char *name)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_count)
+RTE_EXPORT_SYMBOL(rte_compressdev_count);
 uint8_t
 rte_compressdev_count(void)
 {
 	return compressdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_devices_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_devices_get);
 uint8_t
 rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
 	uint8_t nb_devices)
@@ -172,7 +172,7 @@ rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_socket_id)
+RTE_EXPORT_SYMBOL(rte_compressdev_socket_id);
 int
 rte_compressdev_socket_id(uint8_t dev_id)
 {
@@ -230,7 +230,7 @@ rte_compressdev_find_free_device_index(void)
 	return RTE_COMPRESS_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_allocate)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_allocate);
 struct rte_compressdev *
 rte_compressdev_pmd_allocate(const char *name, int socket_id)
 {
@@ -277,7 +277,7 @@ rte_compressdev_pmd_allocate(const char *name, int socket_id)
 	return compressdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_release_device)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_release_device);
 int
 rte_compressdev_pmd_release_device(struct rte_compressdev *compressdev)
 {
@@ -298,7 +298,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *compressdev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_count)
+RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_count);
 uint16_t
 rte_compressdev_queue_pair_count(uint8_t dev_id)
 {
@@ -424,7 +424,7 @@ rte_compressdev_queue_pairs_release(struct rte_compressdev *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_configure)
+RTE_EXPORT_SYMBOL(rte_compressdev_configure);
 int
 rte_compressdev_configure(uint8_t dev_id, struct rte_compressdev_config *config)
 {
@@ -460,7 +460,7 @@ rte_compressdev_configure(uint8_t dev_id, struct rte_compressdev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_start)
+RTE_EXPORT_SYMBOL(rte_compressdev_start);
 int
 rte_compressdev_start(uint8_t dev_id)
 {
@@ -494,7 +494,7 @@ rte_compressdev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stop)
+RTE_EXPORT_SYMBOL(rte_compressdev_stop);
 void
 rte_compressdev_stop(uint8_t dev_id)
 {
@@ -520,7 +520,7 @@ rte_compressdev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_close)
+RTE_EXPORT_SYMBOL(rte_compressdev_close);
 int
 rte_compressdev_close(uint8_t dev_id)
 {
@@ -557,7 +557,7 @@ rte_compressdev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_setup)
+RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_setup);
 int
 rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		uint32_t max_inflight_ops, int socket_id)
@@ -593,7 +593,7 @@ rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_setup(dev, queue_pair_id, max_inflight_ops, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_compressdev_dequeue_burst);
 uint16_t
 rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -603,7 +603,7 @@ rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 	return dev->dequeue_burst(dev->data->queue_pairs[qp_id], ops, nb_ops);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_compressdev_enqueue_burst);
 uint16_t
 rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -613,7 +613,7 @@ rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 	return dev->enqueue_burst(dev->data->queue_pairs[qp_id], ops, nb_ops);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stats_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_stats_get);
 int
 rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats)
 {
@@ -638,7 +638,7 @@ rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_compressdev_stats_reset);
 void
 rte_compressdev_stats_reset(uint8_t dev_id)
 {
@@ -657,7 +657,7 @@ rte_compressdev_stats_reset(uint8_t dev_id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_info_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_info_get);
 void
 rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info)
 {
@@ -679,7 +679,7 @@ rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info)
 	dev_info->driver_name = dev->device->driver->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_create);
 int
 rte_compressdev_private_xform_create(uint8_t dev_id,
 		const struct rte_comp_xform *xform,
@@ -706,7 +706,7 @@ rte_compressdev_private_xform_create(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_free)
+RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_free);
 int
 rte_compressdev_private_xform_free(uint8_t dev_id, void *priv_xform)
 {
@@ -731,7 +731,7 @@ rte_compressdev_private_xform_free(uint8_t dev_id, void *priv_xform)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stream_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_stream_create);
 int
 rte_compressdev_stream_create(uint8_t dev_id,
 		const struct rte_comp_xform *xform,
@@ -759,7 +759,7 @@ rte_compressdev_stream_create(uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stream_free)
+RTE_EXPORT_SYMBOL(rte_compressdev_stream_free);
 int
 rte_compressdev_stream_free(uint8_t dev_id, void *stream)
 {
@@ -784,7 +784,7 @@ rte_compressdev_stream_free(uint8_t dev_id, void *stream)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_name_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_name_get);
 const char *
 rte_compressdev_name_get(uint8_t dev_id)
 {
diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c
index 7e11ad7148..5fad809337 100644
--- a/lib/compressdev/rte_compressdev_pmd.c
+++ b/lib/compressdev/rte_compressdev_pmd.c
@@ -56,7 +56,7 @@ rte_compressdev_pmd_parse_uint_arg(const char *key __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_parse_input_args)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_parse_input_args);
 int
 rte_compressdev_pmd_parse_input_args(
 		struct rte_compressdev_pmd_init_params *params,
@@ -93,7 +93,7 @@ rte_compressdev_pmd_parse_input_args(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_create);
 struct rte_compressdev *
 rte_compressdev_pmd_create(const char *name,
 		struct rte_device *device,
@@ -143,7 +143,7 @@ rte_compressdev_pmd_create(const char *name,
 	return compressdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_destroy)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_destroy);
 int
 rte_compressdev_pmd_destroy(struct rte_compressdev *compressdev)
 {
diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index d79d561bf6..ce43a9fde7 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -56,7 +56,7 @@ rte_cryptodev_pmd_parse_uint_arg(const char *key __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_parse_input_args)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_parse_input_args);
 int
 rte_cryptodev_pmd_parse_input_args(
 		struct rte_cryptodev_pmd_init_params *params,
@@ -100,7 +100,7 @@ rte_cryptodev_pmd_parse_input_args(
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create);
 struct rte_cryptodev *
 rte_cryptodev_pmd_create(const char *name,
 		struct rte_device *device,
@@ -151,7 +151,7 @@ rte_cryptodev_pmd_create(const char *name,
 	return cryptodev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_destroy);
 int
 rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 {
@@ -175,7 +175,7 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_probing_finish);
 void
 rte_cryptodev_pmd_probing_finish(struct rte_cryptodev *cryptodev)
 {
@@ -214,7 +214,7 @@ dummy_crypto_dequeue_burst(__rte_unused void *qp,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_reset);
 void
 cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
 {
@@ -233,7 +233,7 @@ cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
 	*fp_ops = dummy;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_set)
+RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_set);
 void
 cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
 		     const struct rte_cryptodev *dev)
@@ -246,7 +246,7 @@ cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
 	fp_ops->qp_depth_used = dev->qp_depth_used;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_session_event_mdata_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_session_event_mdata_get);
 void *
 rte_cryptodev_session_event_mdata_get(struct rte_crypto_op *op)
 {
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 69737adcbe..e890026e69 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -43,11 +43,11 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_free,
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
 	lib.cryptodev.asym.free)
 
-RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_enqueue_burst)
+RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_enqueue_burst);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
 	lib.cryptodev.enq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_dequeue_burst)
+RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_dequeue_burst);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_dequeue_burst,
 	lib.cryptodev.deq.burst)
 
@@ -201,6 +201,6 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_op_pool_create,
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_count,
 	lib.cryptodev.count)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_cryptodev_trace_qp_depth_used, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_cryptodev_trace_qp_depth_used, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_qp_depth_used,
 	lib.cryptodev.qp_depth_used)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..8e45370391 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -36,7 +36,7 @@ static uint8_t nb_drivers;
 
 static struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodevs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodevs);
 struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices;
 
 static struct rte_cryptodev_global cryptodev_globals = {
@@ -46,13 +46,13 @@ static struct rte_cryptodev_global cryptodev_globals = {
 };
 
 /* Public fastpath APIs. */
-RTE_EXPORT_SYMBOL(rte_crypto_fp_ops)
+RTE_EXPORT_SYMBOL(rte_crypto_fp_ops);
 struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
 
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_logtype)
+RTE_EXPORT_SYMBOL(rte_cryptodev_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_cryptodev_logtype, INFO);
 
 /**
@@ -109,7 +109,7 @@ crypto_cipher_algorithm_strings[] = {
  * The crypto cipher operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_cipher_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_cipher_operation_strings);
 const char *
 rte_crypto_cipher_operation_strings[] = {
 		[RTE_CRYPTO_CIPHER_OP_ENCRYPT]	= "encrypt",
@@ -182,7 +182,7 @@ crypto_aead_algorithm_strings[] = {
  * The crypto AEAD operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_aead_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_aead_operation_strings);
 const char *
 rte_crypto_aead_operation_strings[] = {
 	[RTE_CRYPTO_AEAD_OP_ENCRYPT]	= "encrypt",
@@ -210,7 +210,7 @@ crypto_asym_xform_strings[] = {
 /**
  * Asymmetric crypto operation strings identifiers.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_asym_op_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_asym_op_strings);
 const char *rte_crypto_asym_op_strings[] = {
 	[RTE_CRYPTO_ASYM_OP_ENCRYPT]	= "encrypt",
 	[RTE_CRYPTO_ASYM_OP_DECRYPT]	= "decrypt",
@@ -221,7 +221,7 @@ const char *rte_crypto_asym_op_strings[] = {
 /**
  * Asymmetric crypto key exchange operation strings identifiers.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_asym_ke_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_asym_ke_strings);
 const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE] = "priv_key_generate",
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE] = "pub_key_generate",
@@ -246,7 +246,7 @@ struct rte_cryptodev_asym_session_pool_private_data {
 	/**< Session user data will be placed after sess_private_data */
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_cipher_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_cipher_algo_enum);
 int
 rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
 		const char *algo_string)
@@ -267,7 +267,7 @@ rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_auth_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_auth_algo_enum);
 int
 rte_cryptodev_get_auth_algo_enum(enum rte_crypto_auth_algorithm *algo_enum,
 		const char *algo_string)
@@ -288,7 +288,7 @@ rte_cryptodev_get_auth_algo_enum(enum rte_crypto_auth_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_aead_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_aead_algo_enum);
 int
 rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
 		const char *algo_string)
@@ -309,7 +309,7 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_xform_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_xform_enum);
 int
 rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 		const char *xform_string)
@@ -331,7 +331,7 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_cipher_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_cipher_algo_string, 23.03);
 const char *
 rte_cryptodev_get_cipher_algo_string(enum rte_crypto_cipher_algorithm algo_enum)
 {
@@ -345,7 +345,7 @@ rte_cryptodev_get_cipher_algo_string(enum rte_crypto_cipher_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_auth_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_auth_algo_string, 23.03);
 const char *
 rte_cryptodev_get_auth_algo_string(enum rte_crypto_auth_algorithm algo_enum)
 {
@@ -359,7 +359,7 @@ rte_cryptodev_get_auth_algo_string(enum rte_crypto_auth_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_aead_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_aead_algo_string, 23.03);
 const char *
 rte_cryptodev_get_aead_algo_string(enum rte_crypto_aead_algorithm algo_enum)
 {
@@ -373,7 +373,7 @@ rte_cryptodev_get_aead_algo_string(enum rte_crypto_aead_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_get_xform_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_get_xform_string, 23.03);
 const char *
 rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum)
 {
@@ -391,14 +391,14 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum)
  * The crypto auth operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_auth_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_auth_operation_strings);
 const char *
 rte_crypto_auth_operation_strings[] = {
 		[RTE_CRYPTO_AUTH_OP_VERIFY]	= "verify",
 		[RTE_CRYPTO_AUTH_OP_GENERATE]	= "generate"
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_get);
 const struct rte_cryptodev_symmetric_capability *
 rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx)
@@ -468,7 +468,7 @@ param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_capability_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_capability_get);
 const struct rte_cryptodev_asymmetric_xform_capability *
 rte_cryptodev_asym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_asym_capability_idx *idx)
@@ -498,7 +498,7 @@ rte_cryptodev_asym_capability_get(uint8_t dev_id,
 	return asym_cap;
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_cipher)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_cipher);
 int
 rte_cryptodev_sym_capability_check_cipher(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -521,7 +521,7 @@ rte_cryptodev_sym_capability_check_cipher(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_auth)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_auth);
 int
 rte_cryptodev_sym_capability_check_auth(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -550,7 +550,7 @@ rte_cryptodev_sym_capability_check_auth(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_aead)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_aead);
 int
 rte_cryptodev_sym_capability_check_aead(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -585,7 +585,7 @@ rte_cryptodev_sym_capability_check_aead(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_optype)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_optype);
 int
 rte_cryptodev_asym_xform_capability_check_optype(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -602,7 +602,7 @@ rte_cryptodev_asym_xform_capability_check_optype(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_modlen)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_modlen);
 int
 rte_cryptodev_asym_xform_capability_check_modlen(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -638,7 +638,7 @@ rte_cryptodev_asym_xform_capability_check_modlen(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_hash)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_hash);
 bool
 rte_cryptodev_asym_xform_capability_check_hash(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -655,7 +655,7 @@ rte_cryptodev_asym_xform_capability_check_hash(
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_xform_capability_check_opcap, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_xform_capability_check_opcap, 24.11);
 int
 rte_cryptodev_asym_xform_capability_check_opcap(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -789,7 +789,7 @@ cryptodev_cb_init(struct rte_cryptodev *dev)
 	return -ENOMEM;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_feature_name);
 const char *
 rte_cryptodev_get_feature_name(uint64_t flag)
 {
@@ -853,14 +853,14 @@ rte_cryptodev_get_feature_name(uint64_t flag)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_dev);
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_dev(uint8_t dev_id)
 {
 	return &cryptodev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_named_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_named_dev);
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_named_dev(const char *name)
 {
@@ -891,7 +891,7 @@ rte_cryptodev_is_valid_device_data(uint8_t dev_id)
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_is_valid_dev)
+RTE_EXPORT_SYMBOL(rte_cryptodev_is_valid_dev);
 unsigned int
 rte_cryptodev_is_valid_dev(uint8_t dev_id)
 {
@@ -913,7 +913,7 @@ rte_cryptodev_is_valid_dev(uint8_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_dev_id);
 int
 rte_cryptodev_get_dev_id(const char *name)
 {
@@ -940,7 +940,7 @@ rte_cryptodev_get_dev_id(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_count)
+RTE_EXPORT_SYMBOL(rte_cryptodev_count);
 uint8_t
 rte_cryptodev_count(void)
 {
@@ -949,7 +949,7 @@ rte_cryptodev_count(void)
 	return cryptodev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_device_count_by_driver)
+RTE_EXPORT_SYMBOL(rte_cryptodev_device_count_by_driver);
 uint8_t
 rte_cryptodev_device_count_by_driver(uint8_t driver_id)
 {
@@ -966,7 +966,7 @@ rte_cryptodev_device_count_by_driver(uint8_t driver_id)
 	return dev_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_devices_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_devices_get);
 uint8_t
 rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
 	uint8_t nb_devices)
@@ -995,7 +995,7 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_sec_ctx)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_sec_ctx);
 void *
 rte_cryptodev_get_sec_ctx(uint8_t dev_id)
 {
@@ -1011,7 +1011,7 @@ rte_cryptodev_get_sec_ctx(uint8_t dev_id)
 	return sec_ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_socket_id)
+RTE_EXPORT_SYMBOL(rte_cryptodev_socket_id);
 int
 rte_cryptodev_socket_id(uint8_t dev_id)
 {
@@ -1106,7 +1106,7 @@ rte_cryptodev_find_free_device_index(void)
 	return RTE_CRYPTO_MAX_DEVS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_allocate);
 struct rte_cryptodev *
 rte_cryptodev_pmd_allocate(const char *name, int socket_id)
 {
@@ -1166,7 +1166,7 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id)
 	return cryptodev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_release_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_release_device);
 int
 rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 {
@@ -1196,7 +1196,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_count)
+RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_count);
 uint16_t
 rte_cryptodev_queue_pair_count(uint8_t dev_id)
 {
@@ -1279,7 +1279,7 @@ rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_reset, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_reset, 24.11);
 int
 rte_cryptodev_queue_pair_reset(uint8_t dev_id, uint16_t queue_pair_id,
 		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
@@ -1304,7 +1304,7 @@ rte_cryptodev_queue_pair_reset(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_reset(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_configure)
+RTE_EXPORT_SYMBOL(rte_cryptodev_configure);
 int
 rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 {
@@ -1352,7 +1352,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_start)
+RTE_EXPORT_SYMBOL(rte_cryptodev_start);
 int
 rte_cryptodev_start(uint8_t dev_id)
 {
@@ -1390,7 +1390,7 @@ rte_cryptodev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stop)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stop);
 void
 rte_cryptodev_stop(uint8_t dev_id)
 {
@@ -1420,7 +1420,7 @@ rte_cryptodev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_close)
+RTE_EXPORT_SYMBOL(rte_cryptodev_close);
 int
 rte_cryptodev_close(uint8_t dev_id)
 {
@@ -1463,7 +1463,7 @@ rte_cryptodev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_qp_status)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_qp_status);
 int
 rte_cryptodev_get_qp_status(uint8_t dev_id, uint16_t queue_pair_id)
 {
@@ -1518,7 +1518,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct rte_mempool *mp,
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_setup)
+RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_setup);
 int
 rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
@@ -1572,7 +1572,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_setup(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_add_enq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_add_enq_callback);
 struct rte_cryptodev_cb *
 rte_cryptodev_add_enq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
@@ -1643,7 +1643,7 @@ rte_cryptodev_add_enq_callback(uint8_t dev_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_remove_enq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_remove_enq_callback);
 int
 rte_cryptodev_remove_enq_callback(uint8_t dev_id,
 				  uint16_t qp_id,
@@ -1720,7 +1720,7 @@ rte_cryptodev_remove_enq_callback(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_add_deq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_add_deq_callback);
 struct rte_cryptodev_cb *
 rte_cryptodev_add_deq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
@@ -1792,7 +1792,7 @@ rte_cryptodev_add_deq_callback(uint8_t dev_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_remove_deq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_remove_deq_callback);
 int
 rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 				  uint16_t qp_id,
@@ -1869,7 +1869,7 @@ rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stats_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stats_get);
 int
 rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
 {
@@ -1896,7 +1896,7 @@ rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stats_reset);
 void
 rte_cryptodev_stats_reset(uint8_t dev_id)
 {
@@ -1916,7 +1916,7 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
 	dev->dev_ops->stats_reset(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_info_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_info_get);
 void
 rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 {
@@ -1942,7 +1942,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_callback_register)
+RTE_EXPORT_SYMBOL(rte_cryptodev_callback_register);
 int
 rte_cryptodev_callback_register(uint8_t dev_id,
 			enum rte_cryptodev_event_type event,
@@ -1988,7 +1988,7 @@ rte_cryptodev_callback_register(uint8_t dev_id,
 	return (user_cb == NULL) ? -ENOMEM : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_cryptodev_callback_unregister);
 int
 rte_cryptodev_callback_unregister(uint8_t dev_id,
 			enum rte_cryptodev_event_type event,
@@ -2037,7 +2037,7 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_callback_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_callback_process);
 void
 rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 	enum rte_cryptodev_event_type event)
@@ -2060,7 +2060,7 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_event_error_query, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_event_error_query, 23.03);
 int
 rte_cryptodev_queue_pair_event_error_query(uint8_t dev_id, uint16_t qp_id)
 {
@@ -2080,7 +2080,7 @@ rte_cryptodev_queue_pair_event_error_query(uint8_t dev_id, uint16_t qp_id)
 	return dev->dev_ops->queue_pair_event_error_query(dev, qp_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_pool_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_pool_create);
 struct rte_mempool *
 rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -2119,7 +2119,7 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_pool_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_pool_create);
 struct rte_mempool *
 rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
 	uint32_t cache_size, uint16_t user_data_size, int socket_id)
@@ -2170,7 +2170,7 @@ rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_create);
 void *
 rte_cryptodev_sym_session_create(uint8_t dev_id,
 		struct rte_crypto_sym_xform *xforms,
@@ -2238,7 +2238,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_create);
 int
 rte_cryptodev_asym_session_create(uint8_t dev_id,
 		struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
@@ -2315,7 +2315,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_free)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_free);
 int
 rte_cryptodev_sym_session_free(uint8_t dev_id, void *_sess)
 {
@@ -2362,7 +2362,7 @@ rte_cryptodev_sym_session_free(uint8_t dev_id, void *_sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_free)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_free);
 int
 rte_cryptodev_asym_session_free(uint8_t dev_id, void *sess)
 {
@@ -2394,14 +2394,14 @@ rte_cryptodev_asym_session_free(uint8_t dev_id, void *sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_header_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_header_session_size);
 unsigned int
 rte_cryptodev_asym_get_header_session_size(void)
 {
 	return sizeof(struct rte_cryptodev_asym_session);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_get_private_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_get_private_session_size);
 unsigned int
 rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
 {
@@ -2424,7 +2424,7 @@ rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
 	return priv_sess_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_private_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_private_session_size);
 unsigned int
 rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
 {
@@ -2447,7 +2447,7 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
 	return priv_sess_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_set_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_set_user_data);
 int
 rte_cryptodev_sym_session_set_user_data(void *_sess, void *data,
 		uint16_t size)
@@ -2467,7 +2467,7 @@ rte_cryptodev_sym_session_set_user_data(void *_sess, void *data,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_get_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_get_user_data);
 void *
 rte_cryptodev_sym_session_get_user_data(void *_sess)
 {
@@ -2484,7 +2484,7 @@ rte_cryptodev_sym_session_get_user_data(void *_sess)
 	return data;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_set_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_set_user_data);
 int
 rte_cryptodev_asym_session_set_user_data(void *session, void *data, uint16_t size)
 {
@@ -2504,7 +2504,7 @@ rte_cryptodev_asym_session_set_user_data(void *session, void *data, uint16_t siz
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_get_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_get_user_data);
 void *
 rte_cryptodev_asym_session_get_user_data(void *session)
 {
@@ -2529,7 +2529,7 @@ sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
 		vec->status[i] = errnum;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_cpu_crypto_process)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_cpu_crypto_process);
 uint32_t
 rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
 	void *_sess, union rte_crypto_sym_ofs ofs,
@@ -2556,7 +2556,7 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
 	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_raw_dp_ctx_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_raw_dp_ctx_size);
 int
 rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
 {
@@ -2583,7 +2583,7 @@ rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
 	return RTE_ALIGN_CEIL((size + priv_size), 8);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_configure_raw_dp_ctx)
+RTE_EXPORT_SYMBOL(rte_cryptodev_configure_raw_dp_ctx);
 int
 rte_cryptodev_configure_raw_dp_ctx(uint8_t dev_id, uint16_t qp_id,
 	struct rte_crypto_raw_dp_ctx *ctx,
@@ -2607,7 +2607,7 @@ rte_cryptodev_configure_raw_dp_ctx(uint8_t dev_id, uint16_t qp_id,
 			sess_type, session_ctx, is_update);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_session_event_mdata_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_session_event_mdata_set);
 int
 rte_cryptodev_session_event_mdata_set(uint8_t dev_id, void *sess,
 	enum rte_crypto_op_type op_type,
@@ -2651,7 +2651,7 @@ rte_cryptodev_session_event_mdata_set(uint8_t dev_id, void *sess,
 		return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_burst);
 uint32_t
 rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
@@ -2661,7 +2661,7 @@ rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 			ofs, user_data, enqueue_status);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_done)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_done);
 int
 rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n)
@@ -2669,7 +2669,7 @@ rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
 	return ctx->enqueue_done(ctx->qp_data, ctx->drv_ctx_data, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_burst);
 uint32_t
 rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 	rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
@@ -2683,7 +2683,7 @@ rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 		out_user_data, is_user_data_array, n_success_jobs, status);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_done)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_done);
 int
 rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n)
@@ -2710,7 +2710,7 @@ rte_crypto_op_init(struct rte_mempool *mempool,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_crypto_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_crypto_op_pool_create);
 struct rte_mempool *
 rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
 		unsigned nb_elts, unsigned cache_size, uint16_t priv_size,
@@ -2780,7 +2780,7 @@ rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
 	return mp;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create_dev_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create_dev_name);
 int
 rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix)
 {
@@ -2810,7 +2810,7 @@ TAILQ_HEAD(cryptodev_driver_list, cryptodev_driver);
 static struct cryptodev_driver_list cryptodev_driver_list =
 	TAILQ_HEAD_INITIALIZER(cryptodev_driver_list);
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_driver_id_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_driver_id_get);
 int
 rte_cryptodev_driver_id_get(const char *name)
 {
@@ -2836,7 +2836,7 @@ rte_cryptodev_driver_id_get(const char *name)
 	return driver_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_name_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_name_get);
 const char *
 rte_cryptodev_name_get(uint8_t dev_id)
 {
@@ -2856,7 +2856,7 @@ rte_cryptodev_name_get(uint8_t dev_id)
 	return dev->data->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_driver_name_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_driver_name_get);
 const char *
 rte_cryptodev_driver_name_get(uint8_t driver_id)
 {
@@ -2872,7 +2872,7 @@ rte_cryptodev_driver_name_get(uint8_t driver_id)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_allocate_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_allocate_driver);
 uint8_t
 rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
 		const struct rte_driver *drv)
diff --git a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c
index a35967f7b7..10374d8d72 100644
--- a/lib/dispatcher/rte_dispatcher.c
+++ b/lib/dispatcher/rte_dispatcher.c
@@ -267,7 +267,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher)
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_create, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_create, 23.11);
 struct rte_dispatcher *
 rte_dispatcher_create(uint8_t event_dev_id)
 {
@@ -302,7 +302,7 @@ rte_dispatcher_create(uint8_t event_dev_id)
 	return dispatcher;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_free, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_free, 23.11);
 int
 rte_dispatcher_free(struct rte_dispatcher *dispatcher)
 {
@@ -320,7 +320,7 @@ rte_dispatcher_free(struct rte_dispatcher *dispatcher)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_service_id_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_service_id_get, 23.11);
 uint32_t
 rte_dispatcher_service_id_get(const struct rte_dispatcher *dispatcher)
 {
@@ -344,7 +344,7 @@ lcore_port_index(struct rte_dispatcher_lcore *lcore,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_bind_port_to_lcore, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_bind_port_to_lcore, 23.11);
 int
 rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher,
 	uint8_t event_port_id, uint16_t batch_size, uint64_t timeout,
@@ -374,7 +374,7 @@ rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unbind_port_from_lcore, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unbind_port_from_lcore, 23.11);
 int
 rte_dispatcher_unbind_port_from_lcore(struct rte_dispatcher *dispatcher,
 	uint8_t event_port_id, unsigned int lcore_id)
@@ -457,7 +457,7 @@ evd_install_handler(struct rte_dispatcher *dispatcher,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_register, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_register, 23.11);
 int
 rte_dispatcher_register(struct rte_dispatcher *dispatcher,
 	rte_dispatcher_match_t match_fun, void *match_data,
@@ -529,7 +529,7 @@ evd_uninstall_handler(struct rte_dispatcher *dispatcher, int handler_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unregister, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unregister, 23.11);
 int
 rte_dispatcher_unregister(struct rte_dispatcher *dispatcher, int handler_id)
 {
@@ -583,7 +583,7 @@ evd_alloc_finalizer(struct rte_dispatcher *dispatcher)
 	return finalizer;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_register, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_register, 23.11);
 int
 rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher,
 	rte_dispatcher_finalize_t finalize_fun, void *finalize_data)
@@ -601,7 +601,7 @@ rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher,
 	return finalizer->id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_unregister, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_unregister, 23.11);
 int
 rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher,
 	int finalizer_id)
@@ -653,14 +653,14 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state)
 	RTE_VERIFY(rc == 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_start, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_start, 23.11);
 void
 rte_dispatcher_start(struct rte_dispatcher *dispatcher)
 {
 	evd_set_service_runstate(dispatcher, 1);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stop, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stop, 23.11);
 void
 rte_dispatcher_stop(struct rte_dispatcher *dispatcher)
 {
@@ -677,7 +677,7 @@ evd_aggregate_stats(struct rte_dispatcher_stats *result,
 	result->ev_drop_count += part->ev_drop_count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_get, 23.11);
 void
 rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher,
 	struct rte_dispatcher_stats *stats)
@@ -694,7 +694,7 @@ rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_reset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_reset, 23.11);
 void
 rte_dispatcher_stats_reset(struct rte_dispatcher *dispatcher)
 {
diff --git a/lib/distributor/rte_distributor.c b/lib/distributor/rte_distributor.c
index dde7ce2677..ca35ad97d9 100644
--- a/lib/distributor/rte_distributor.c
+++ b/lib/distributor/rte_distributor.c
@@ -32,7 +32,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
 
 /**** Burst Packet APIs called by workers ****/
 
-RTE_EXPORT_SYMBOL(rte_distributor_request_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_request_pkt);
 void
 rte_distributor_request_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **oldpkt,
@@ -85,7 +85,7 @@ rte_distributor_request_pkt(struct rte_distributor *d,
 			rte_memory_order_release);
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_poll_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_poll_pkt);
 int
 rte_distributor_poll_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **pkts)
@@ -130,7 +130,7 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_get_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_get_pkt);
 int
 rte_distributor_get_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **pkts,
@@ -161,7 +161,7 @@ rte_distributor_get_pkt(struct rte_distributor *d,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_return_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_return_pkt);
 int
 rte_distributor_return_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
@@ -444,7 +444,7 @@ release(struct rte_distributor *d, unsigned int wkr)
 
 
 /* process a set of packets to distribute them to workers */
-RTE_EXPORT_SYMBOL(rte_distributor_process)
+RTE_EXPORT_SYMBOL(rte_distributor_process);
 int
 rte_distributor_process(struct rte_distributor *d,
 		struct rte_mbuf **mbufs, unsigned int num_mbufs)
@@ -615,7 +615,7 @@ rte_distributor_process(struct rte_distributor *d,
 }
 
 /* return to the caller, packets returned from workers */
-RTE_EXPORT_SYMBOL(rte_distributor_returned_pkts)
+RTE_EXPORT_SYMBOL(rte_distributor_returned_pkts);
 int
 rte_distributor_returned_pkts(struct rte_distributor *d,
 		struct rte_mbuf **mbufs, unsigned int max_mbufs)
@@ -662,7 +662,7 @@ total_outstanding(const struct rte_distributor *d)
  * Flush the distributor, so that there are no outstanding packets in flight or
  * queued up.
  */
-RTE_EXPORT_SYMBOL(rte_distributor_flush)
+RTE_EXPORT_SYMBOL(rte_distributor_flush);
 int
 rte_distributor_flush(struct rte_distributor *d)
 {
@@ -695,7 +695,7 @@ rte_distributor_flush(struct rte_distributor *d)
 }
 
 /* clears the internal returns array in the distributor */
-RTE_EXPORT_SYMBOL(rte_distributor_clear_returns)
+RTE_EXPORT_SYMBOL(rte_distributor_clear_returns);
 void
 rte_distributor_clear_returns(struct rte_distributor *d)
 {
@@ -717,7 +717,7 @@ rte_distributor_clear_returns(struct rte_distributor *d)
 }
 
 /* creates a distributor instance */
-RTE_EXPORT_SYMBOL(rte_distributor_create)
+RTE_EXPORT_SYMBOL(rte_distributor_create);
 struct rte_distributor *
 rte_distributor_create(const char *name,
 		unsigned int socket_id,
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 17ee0808a9..65cb34d3e1 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -22,7 +22,7 @@
 
 static int16_t dma_devices_max;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_fp_objs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_fp_objs);
 struct rte_dma_fp_object *rte_dma_fp_objs;
 static struct rte_dma_dev *rte_dma_devices;
 static struct {
@@ -39,7 +39,7 @@ RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO);
 #define RTE_DMA_LOG(level, ...) \
 	RTE_LOG_LINE(level, DMADEV, "" __VA_ARGS__)
 
-RTE_EXPORT_SYMBOL(rte_dma_dev_max)
+RTE_EXPORT_SYMBOL(rte_dma_dev_max);
 int
 rte_dma_dev_max(size_t dev_max)
 {
@@ -57,7 +57,7 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_next_dev)
+RTE_EXPORT_SYMBOL(rte_dma_next_dev);
 int16_t
 rte_dma_next_dev(int16_t start_dev_id)
 {
@@ -352,7 +352,7 @@ dma_release(struct rte_dma_dev *dev)
 	memset(dev, 0, sizeof(struct rte_dma_dev));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_allocate);
 struct rte_dma_dev *
 rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
 {
@@ -370,7 +370,7 @@ rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_release);
 int
 rte_dma_pmd_release(const char *name)
 {
@@ -390,7 +390,7 @@ rte_dma_pmd_release(const char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_get_dev_id_by_name)
+RTE_EXPORT_SYMBOL(rte_dma_get_dev_id_by_name);
 int
 rte_dma_get_dev_id_by_name(const char *name)
 {
@@ -406,7 +406,7 @@ rte_dma_get_dev_id_by_name(const char *name)
 	return dev->data->dev_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_is_valid)
+RTE_EXPORT_SYMBOL(rte_dma_is_valid);
 bool
 rte_dma_is_valid(int16_t dev_id)
 {
@@ -415,7 +415,7 @@ rte_dma_is_valid(int16_t dev_id)
 		rte_dma_devices[dev_id].state != RTE_DMA_DEV_UNUSED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_get_dev_by_id)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_get_dev_by_id);
 struct rte_dma_dev *
 rte_dma_pmd_get_dev_by_id(int16_t dev_id)
 {
@@ -425,7 +425,7 @@ rte_dma_pmd_get_dev_by_id(int16_t dev_id)
 	return &rte_dma_devices[dev_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_count_avail)
+RTE_EXPORT_SYMBOL(rte_dma_count_avail);
 uint16_t
 rte_dma_count_avail(void)
 {
@@ -443,7 +443,7 @@ rte_dma_count_avail(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_info_get)
+RTE_EXPORT_SYMBOL(rte_dma_info_get);
 int
 rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info)
 {
@@ -475,7 +475,7 @@ rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_configure)
+RTE_EXPORT_SYMBOL(rte_dma_configure);
 int
 rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf)
 {
@@ -533,7 +533,7 @@ rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_start)
+RTE_EXPORT_SYMBOL(rte_dma_start);
 int
 rte_dma_start(int16_t dev_id)
 {
@@ -567,7 +567,7 @@ rte_dma_start(int16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stop)
+RTE_EXPORT_SYMBOL(rte_dma_stop);
 int
 rte_dma_stop(int16_t dev_id)
 {
@@ -596,7 +596,7 @@ rte_dma_stop(int16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_close)
+RTE_EXPORT_SYMBOL(rte_dma_close);
 int
 rte_dma_close(int16_t dev_id)
 {
@@ -625,7 +625,7 @@ rte_dma_close(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_vchan_setup)
+RTE_EXPORT_SYMBOL(rte_dma_vchan_setup);
 int
 rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan,
 		    const struct rte_dma_vchan_conf *conf)
@@ -720,7 +720,7 @@ rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stats_get)
+RTE_EXPORT_SYMBOL(rte_dma_stats_get);
 int
 rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats)
 {
@@ -743,7 +743,7 @@ rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats)
 	return dev->dev_ops->stats_get(dev, vchan, stats, sizeof(struct rte_dma_stats));
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stats_reset)
+RTE_EXPORT_SYMBOL(rte_dma_stats_reset);
 int
 rte_dma_stats_reset(int16_t dev_id, uint16_t vchan)
 {
@@ -769,7 +769,7 @@ rte_dma_stats_reset(int16_t dev_id, uint16_t vchan)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_vchan_status)
+RTE_EXPORT_SYMBOL(rte_dma_vchan_status);
 int
 rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
 {
@@ -837,7 +837,7 @@ dma_dump_capability(FILE *f, uint64_t dev_capa)
 	(void)fprintf(f, "\n");
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_dump)
+RTE_EXPORT_SYMBOL(rte_dma_dump);
 int
 rte_dma_dump(int16_t dev_id, FILE *f)
 {
diff --git a/lib/dmadev/rte_dmadev_trace_points.c b/lib/dmadev/rte_dmadev_trace_points.c
index 1c8998fb98..f5103d27da 100644
--- a/lib/dmadev/rte_dmadev_trace_points.c
+++ b/lib/dmadev/rte_dmadev_trace_points.c
@@ -37,30 +37,30 @@ RTE_TRACE_POINT_REGISTER(rte_dma_trace_vchan_status,
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_dump,
 	lib.dmadev.dump)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy,
 	lib.dmadev.copy)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy_sg, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy_sg, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy_sg,
 	lib.dmadev.copy_sg)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_fill, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_fill, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_fill,
 	lib.dmadev.fill)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_submit, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_submit, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_submit,
 	lib.dmadev.submit)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed,
 	lib.dmadev.completed)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed_status, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed_status, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed_status,
 	lib.dmadev.completed_status)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_burst_capacity, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_burst_capacity, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_burst_capacity,
 	lib.dmadev.burst_capacity)
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 32243f293a..2a644d720c 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -136,7 +136,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -154,7 +154,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -163,7 +163,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/arm/rte_hypervisor.c b/lib/eal/arm/rte_hypervisor.c
index 51b224fb94..45e6ef667b 100644
--- a/lib/eal/arm/rte_hypervisor.c
+++ b/lib/eal/arm/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 4826b370ea..b9c3ab30a6 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -27,7 +27,7 @@ RTE_INIT(rte_power_intrinsics_init)
  * This function uses WFE/WFET instruction to make lcore suspend
  * execution on ARM.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -80,7 +80,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on ARM.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -94,7 +94,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
  * on ARM.
  * Note that lcore_id is not used here.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -108,7 +108,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 #endif /* RTE_ARCH_64 */
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index 0a2311a342..6df7a6ab4d 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -17,14 +17,14 @@
 static struct rte_bus_list rte_bus_list =
 	TAILQ_HEAD_INITIALIZER(rte_bus_list);
 
-RTE_EXPORT_SYMBOL(rte_bus_name)
+RTE_EXPORT_SYMBOL(rte_bus_name);
 const char *
 rte_bus_name(const struct rte_bus *bus)
 {
 	return bus->name;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_register);
 void
 rte_bus_register(struct rte_bus *bus)
 {
@@ -41,7 +41,7 @@ rte_bus_register(struct rte_bus *bus)
 	EAL_LOG(DEBUG, "Registered [%s] bus.", rte_bus_name(bus));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_unregister);
 void
 rte_bus_unregister(struct rte_bus *bus)
 {
@@ -50,7 +50,7 @@ rte_bus_unregister(struct rte_bus *bus)
 }
 
 /* Scan all the buses for registered devices */
-RTE_EXPORT_SYMBOL(rte_bus_scan)
+RTE_EXPORT_SYMBOL(rte_bus_scan);
 int
 rte_bus_scan(void)
 {
@@ -68,7 +68,7 @@ rte_bus_scan(void)
 }
 
 /* Probe all devices of all buses */
-RTE_EXPORT_SYMBOL(rte_bus_probe)
+RTE_EXPORT_SYMBOL(rte_bus_probe);
 int
 rte_bus_probe(void)
 {
@@ -130,7 +130,7 @@ bus_dump_one(FILE *f, struct rte_bus *bus)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_dump)
+RTE_EXPORT_SYMBOL(rte_bus_dump);
 void
 rte_bus_dump(FILE *f)
 {
@@ -147,7 +147,7 @@ rte_bus_dump(FILE *f)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find)
+RTE_EXPORT_SYMBOL(rte_bus_find);
 struct rte_bus *
 rte_bus_find(const struct rte_bus *start, rte_bus_cmp_t cmp,
 	     const void *data)
@@ -183,7 +183,7 @@ bus_find_device(const struct rte_bus *bus, const void *_dev)
 	return dev == NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find_by_device)
+RTE_EXPORT_SYMBOL(rte_bus_find_by_device);
 struct rte_bus *
 rte_bus_find_by_device(const struct rte_device *dev)
 {
@@ -198,7 +198,7 @@ cmp_bus_name(const struct rte_bus *bus, const void *_name)
 	return strcmp(rte_bus_name(bus), name);
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find_by_name)
+RTE_EXPORT_SYMBOL(rte_bus_find_by_name);
 struct rte_bus *
 rte_bus_find_by_name(const char *busname)
 {
@@ -230,7 +230,7 @@ rte_bus_find_by_device_name(const char *str)
 /*
  * Get iommu class of devices on the bus.
  */
-RTE_EXPORT_SYMBOL(rte_bus_get_iommu_class)
+RTE_EXPORT_SYMBOL(rte_bus_get_iommu_class);
 enum rte_iova_mode
 rte_bus_get_iommu_class(void)
 {
diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c
index 0f10c6894b..787f3d53f5 100644
--- a/lib/eal/common/eal_common_class.c
+++ b/lib/eal/common/eal_common_class.c
@@ -15,7 +15,7 @@
 static struct rte_class_list rte_class_list =
 	TAILQ_HEAD_INITIALIZER(rte_class_list);
 
-RTE_EXPORT_SYMBOL(rte_class_register)
+RTE_EXPORT_SYMBOL(rte_class_register);
 void
 rte_class_register(struct rte_class *class)
 {
@@ -26,7 +26,7 @@ rte_class_register(struct rte_class *class)
 	EAL_LOG(DEBUG, "Registered [%s] device class.", class->name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_unregister)
+RTE_EXPORT_SYMBOL(rte_class_unregister);
 void
 rte_class_unregister(struct rte_class *class)
 {
@@ -34,7 +34,7 @@ rte_class_unregister(struct rte_class *class)
 	EAL_LOG(DEBUG, "Unregistered [%s] device class.", class->name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_find)
+RTE_EXPORT_SYMBOL(rte_class_find);
 struct rte_class *
 rte_class_find(const struct rte_class *start, rte_class_cmp_t cmp,
 	       const void *data)
@@ -61,7 +61,7 @@ cmp_class_name(const struct rte_class *class, const void *_name)
 	return strcmp(class->name, name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_find_by_name)
+RTE_EXPORT_SYMBOL(rte_class_find_by_name);
 struct rte_class *
 rte_class_find_by_name(const char *name)
 {
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 7fc7611a07..8804b9f171 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -29,7 +29,7 @@ static char runtime_dir[PATH_MAX];
 /* internal configuration */
 static struct internal_config internal_config;
 
-RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir)
+RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir);
 const char *
 rte_eal_get_runtime_dir(void)
 {
@@ -61,7 +61,7 @@ eal_get_internal_configuration(void)
 	return &internal_config;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_iova_mode)
+RTE_EXPORT_SYMBOL(rte_eal_iova_mode);
 enum rte_iova_mode
 rte_eal_iova_mode(void)
 {
@@ -69,7 +69,7 @@ rte_eal_iova_mode(void)
 }
 
 /* Get the EAL base address */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_baseaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_baseaddr);
 uint64_t
 rte_eal_get_baseaddr(void)
 {
@@ -78,7 +78,7 @@ rte_eal_get_baseaddr(void)
 		       eal_get_baseaddr();
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_process_type)
+RTE_EXPORT_SYMBOL(rte_eal_process_type);
 enum rte_proc_type_t
 rte_eal_process_type(void)
 {
@@ -86,7 +86,7 @@ rte_eal_process_type(void)
 }
 
 /* Return user provided mbuf pool ops name */
-RTE_EXPORT_SYMBOL(rte_eal_mbuf_user_pool_ops)
+RTE_EXPORT_SYMBOL(rte_eal_mbuf_user_pool_ops);
 const char *
 rte_eal_mbuf_user_pool_ops(void)
 {
@@ -94,14 +94,14 @@ rte_eal_mbuf_user_pool_ops(void)
 }
 
 /* return non-zero if hugepages are enabled. */
-RTE_EXPORT_SYMBOL(rte_eal_has_hugepages)
+RTE_EXPORT_SYMBOL(rte_eal_has_hugepages);
 int
 rte_eal_has_hugepages(void)
 {
 	return !internal_config.no_hugetlbfs;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_has_pci)
+RTE_EXPORT_SYMBOL(rte_eal_has_pci);
 int
 rte_eal_has_pci(void)
 {
diff --git a/lib/eal/common/eal_common_cpuflags.c b/lib/eal/common/eal_common_cpuflags.c
index cbd49a151b..b86fd01b89 100644
--- a/lib/eal/common/eal_common_cpuflags.c
+++ b/lib/eal/common/eal_common_cpuflags.c
@@ -8,7 +8,7 @@
 #include <rte_common.h>
 #include <rte_cpuflags.h>
 
-RTE_EXPORT_SYMBOL(rte_cpu_is_supported)
+RTE_EXPORT_SYMBOL(rte_cpu_is_supported);
 int
 rte_cpu_is_supported(void)
 {
diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c
index 7a42546da2..af1d7353df 100644
--- a/lib/eal/common/eal_common_debug.c
+++ b/lib/eal/common/eal_common_debug.c
@@ -14,7 +14,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(__rte_panic)
+RTE_EXPORT_SYMBOL(__rte_panic);
 void
 __rte_panic(const char *funcname, const char *format, ...)
 {
@@ -32,7 +32,7 @@ __rte_panic(const char *funcname, const char *format, ...)
  * Like rte_panic this terminates the application. However, no traceback is
  * provided and no core-dump is generated.
  */
-RTE_EXPORT_SYMBOL(rte_exit)
+RTE_EXPORT_SYMBOL(rte_exit);
 void
 rte_exit(int exit_code, const char *format, ...)
 {
diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c
index 7185de0cb9..5937db32ca 100644
--- a/lib/eal/common/eal_common_dev.c
+++ b/lib/eal/common/eal_common_dev.c
@@ -21,49 +21,49 @@
 #include "eal_private.h"
 #include "hotplug_mp.h"
 
-RTE_EXPORT_SYMBOL(rte_driver_name)
+RTE_EXPORT_SYMBOL(rte_driver_name);
 const char *
 rte_driver_name(const struct rte_driver *driver)
 {
 	return driver->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_bus)
+RTE_EXPORT_SYMBOL(rte_dev_bus);
 const struct rte_bus *
 rte_dev_bus(const struct rte_device *dev)
 {
 	return dev->bus;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_bus_info)
+RTE_EXPORT_SYMBOL(rte_dev_bus_info);
 const char *
 rte_dev_bus_info(const struct rte_device *dev)
 {
 	return dev->bus_info;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_devargs)
+RTE_EXPORT_SYMBOL(rte_dev_devargs);
 const struct rte_devargs *
 rte_dev_devargs(const struct rte_device *dev)
 {
 	return dev->devargs;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_driver)
+RTE_EXPORT_SYMBOL(rte_dev_driver);
 const struct rte_driver *
 rte_dev_driver(const struct rte_device *dev)
 {
 	return dev->driver;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_name)
+RTE_EXPORT_SYMBOL(rte_dev_name);
 const char *
 rte_dev_name(const struct rte_device *dev)
 {
 	return dev->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_numa_node)
+RTE_EXPORT_SYMBOL(rte_dev_numa_node);
 int
 rte_dev_numa_node(const struct rte_device *dev)
 {
@@ -122,7 +122,7 @@ static int cmp_dev_name(const struct rte_device *dev, const void *_name)
 	return strcmp(dev->name, name);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_is_probed)
+RTE_EXPORT_SYMBOL(rte_dev_is_probed);
 int
 rte_dev_is_probed(const struct rte_device *dev)
 {
@@ -155,7 +155,7 @@ build_devargs(const char *busname, const char *devname,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_hotplug_add)
+RTE_EXPORT_SYMBOL(rte_eal_hotplug_add);
 int
 rte_eal_hotplug_add(const char *busname, const char *devname,
 		    const char *drvargs)
@@ -240,7 +240,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_probe)
+RTE_EXPORT_SYMBOL(rte_dev_probe);
 int
 rte_dev_probe(const char *devargs)
 {
@@ -334,7 +334,7 @@ rte_dev_probe(const char *devargs)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_hotplug_remove)
+RTE_EXPORT_SYMBOL(rte_eal_hotplug_remove);
 int
 rte_eal_hotplug_remove(const char *busname, const char *devname)
 {
@@ -378,7 +378,7 @@ local_dev_remove(struct rte_device *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_remove)
+RTE_EXPORT_SYMBOL(rte_dev_remove);
 int
 rte_dev_remove(struct rte_device *dev)
 {
@@ -476,7 +476,7 @@ rte_dev_remove(struct rte_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_register)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_register);
 int
 rte_dev_event_callback_register(const char *device_name,
 				rte_dev_event_cb_fn cb_fn,
@@ -545,7 +545,7 @@ rte_dev_event_callback_register(const char *device_name,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_unregister);
 int
 rte_dev_event_callback_unregister(const char *device_name,
 				  rte_dev_event_cb_fn cb_fn,
@@ -599,7 +599,7 @@ rte_dev_event_callback_unregister(const char *device_name,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_process)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_process);
 void
 rte_dev_event_callback_process(const char *device_name,
 			       enum rte_dev_event_type event)
@@ -626,7 +626,7 @@ rte_dev_event_callback_process(const char *device_name,
 	rte_spinlock_unlock(&dev_event_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_iterator_init)
+RTE_EXPORT_SYMBOL(rte_dev_iterator_init);
 int
 rte_dev_iterator_init(struct rte_dev_iterator *it,
 		      const char *dev_str)
@@ -779,7 +779,7 @@ bus_next_dev_cmp(const struct rte_bus *bus,
 	it->device = dev;
 	return dev == NULL;
 }
-RTE_EXPORT_SYMBOL(rte_dev_iterator_next)
+RTE_EXPORT_SYMBOL(rte_dev_iterator_next);
 struct rte_device *
 rte_dev_iterator_next(struct rte_dev_iterator *it)
 {
@@ -824,7 +824,7 @@ rte_dev_iterator_next(struct rte_dev_iterator *it)
 	return it->device;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_dma_map)
+RTE_EXPORT_SYMBOL(rte_dev_dma_map);
 int
 rte_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova,
 		size_t len)
@@ -842,7 +842,7 @@ rte_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova,
 	return dev->bus->dma_map(dev, addr, iova, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_dev_dma_unmap);
 int
 rte_dev_dma_unmap(struct rte_device *dev, void *addr, uint64_t iova,
 		  size_t len)
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index c523429d67..c72c60ff4b 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -181,7 +181,7 @@ bus_name_cmp(const struct rte_bus *bus, const void *name)
 	return strncmp(bus->name, name, strlen(bus->name));
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_parse)
+RTE_EXPORT_SYMBOL(rte_devargs_parse);
 int
 rte_devargs_parse(struct rte_devargs *da, const char *dev)
 {
@@ -248,7 +248,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_parsef)
+RTE_EXPORT_SYMBOL(rte_devargs_parsef);
 int
 rte_devargs_parsef(struct rte_devargs *da, const char *format, ...)
 {
@@ -283,7 +283,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_reset)
+RTE_EXPORT_SYMBOL(rte_devargs_reset);
 void
 rte_devargs_reset(struct rte_devargs *da)
 {
@@ -293,7 +293,7 @@ rte_devargs_reset(struct rte_devargs *da)
 	da->data = NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_insert)
+RTE_EXPORT_SYMBOL(rte_devargs_insert);
 int
 rte_devargs_insert(struct rte_devargs **da)
 {
@@ -325,7 +325,7 @@ rte_devargs_insert(struct rte_devargs **da)
 }
 
 /* store in allowed list parameter for later parsing */
-RTE_EXPORT_SYMBOL(rte_devargs_add)
+RTE_EXPORT_SYMBOL(rte_devargs_add);
 int
 rte_devargs_add(enum rte_devtype devtype, const char *devargs_str)
 {
@@ -362,7 +362,7 @@ rte_devargs_add(enum rte_devtype devtype, const char *devargs_str)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_remove)
+RTE_EXPORT_SYMBOL(rte_devargs_remove);
 int
 rte_devargs_remove(struct rte_devargs *devargs)
 {
@@ -385,7 +385,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 }
 
 /* count the number of devices of a specified type */
-RTE_EXPORT_SYMBOL(rte_devargs_type_count)
+RTE_EXPORT_SYMBOL(rte_devargs_type_count);
 unsigned int
 rte_devargs_type_count(enum rte_devtype devtype)
 {
@@ -401,7 +401,7 @@ rte_devargs_type_count(enum rte_devtype devtype)
 }
 
 /* dump the user devices on the console */
-RTE_EXPORT_SYMBOL(rte_devargs_dump)
+RTE_EXPORT_SYMBOL(rte_devargs_dump);
 void
 rte_devargs_dump(FILE *f)
 {
@@ -416,7 +416,7 @@ rte_devargs_dump(FILE *f)
 }
 
 /* bus-aware rte_devargs iterator. */
-RTE_EXPORT_SYMBOL(rte_devargs_next)
+RTE_EXPORT_SYMBOL(rte_devargs_next);
 struct rte_devargs *
 rte_devargs_next(const char *busname, const struct rte_devargs *start)
 {
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index 3f933c3f7b..256a041789 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -17,10 +17,10 @@
 #define strerror_r(errnum, buf, buflen) strerror_s(buf, buflen, errnum)
 #endif
 
-RTE_EXPORT_SYMBOL(per_lcore__rte_errno)
+RTE_EXPORT_SYMBOL(per_lcore__rte_errno);
 RTE_DEFINE_PER_LCORE(int, _rte_errno);
 
-RTE_EXPORT_SYMBOL(rte_strerror)
+RTE_EXPORT_SYMBOL(rte_strerror);
 const char *
 rte_strerror(int errnum)
 {
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 8bdcefb717..4fe80cee1e 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -686,7 +686,7 @@ fully_validate(const char *name, unsigned int elt_sz, unsigned int len)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_init)
+RTE_EXPORT_SYMBOL(rte_fbarray_init);
 int
 rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
 		unsigned int elt_sz)
@@ -813,7 +813,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_attach)
+RTE_EXPORT_SYMBOL(rte_fbarray_attach);
 int
 rte_fbarray_attach(struct rte_fbarray *arr)
 {
@@ -902,7 +902,7 @@ rte_fbarray_attach(struct rte_fbarray *arr)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_detach)
+RTE_EXPORT_SYMBOL(rte_fbarray_detach);
 int
 rte_fbarray_detach(struct rte_fbarray *arr)
 {
@@ -956,7 +956,7 @@ rte_fbarray_detach(struct rte_fbarray *arr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_destroy)
+RTE_EXPORT_SYMBOL(rte_fbarray_destroy);
 int
 rte_fbarray_destroy(struct rte_fbarray *arr)
 {
@@ -1043,7 +1043,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_get)
+RTE_EXPORT_SYMBOL(rte_fbarray_get);
 void *
 rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx)
 {
@@ -1063,21 +1063,21 @@ rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_set_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_set_used);
 int
 rte_fbarray_set_used(struct rte_fbarray *arr, unsigned int idx)
 {
 	return set_used(arr, idx, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_set_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_set_free);
 int
 rte_fbarray_set_free(struct rte_fbarray *arr, unsigned int idx)
 {
 	return set_used(arr, idx, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_is_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_is_used);
 int
 rte_fbarray_is_used(struct rte_fbarray *arr, unsigned int idx)
 {
@@ -1147,28 +1147,28 @@ fbarray_find(struct rte_fbarray *arr, unsigned int start, bool next, bool used)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_free);
 int
 rte_fbarray_find_next_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_used);
 int
 rte_fbarray_find_next_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_free);
 int
 rte_fbarray_find_prev_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_used);
 int
 rte_fbarray_find_prev_used(struct rte_fbarray *arr, unsigned int start)
 {
@@ -1227,7 +1227,7 @@ fbarray_find_n(struct rte_fbarray *arr, unsigned int start, unsigned int n,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_free);
 int
 rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1235,7 +1235,7 @@ rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_used);
 int
 rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1243,7 +1243,7 @@ rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_free);
 int
 rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1251,7 +1251,7 @@ rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_used);
 int
 rte_fbarray_find_prev_n_used(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1395,28 +1395,28 @@ fbarray_find_biggest(struct rte_fbarray *arr, unsigned int start, bool used,
 	return biggest_idx;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_free);
 int
 rte_fbarray_find_biggest_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_used);
 int
 rte_fbarray_find_biggest_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_free);
 int
 rte_fbarray_find_rev_biggest_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, false, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_used);
 int
 rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start)
 {
@@ -1424,35 +1424,35 @@ rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_free);
 int
 rte_fbarray_find_contig_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_used);
 int
 rte_fbarray_find_contig_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_free);
 int
 rte_fbarray_find_rev_contig_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_used);
 int
 rte_fbarray_find_rev_contig_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, false, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_idx)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_idx);
 int
 rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt)
 {
@@ -1479,7 +1479,7 @@ rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_dump_metadata)
+RTE_EXPORT_SYMBOL(rte_fbarray_dump_metadata);
 void
 rte_fbarray_dump_metadata(struct rte_fbarray *arr, FILE *f)
 {
diff --git a/lib/eal/common/eal_common_hexdump.c b/lib/eal/common/eal_common_hexdump.c
index 28159f298a..e560aad214 100644
--- a/lib/eal/common/eal_common_hexdump.c
+++ b/lib/eal/common/eal_common_hexdump.c
@@ -8,7 +8,7 @@
 
 #define LINE_LEN 128
 
-RTE_EXPORT_SYMBOL(rte_hexdump)
+RTE_EXPORT_SYMBOL(rte_hexdump);
 void
 rte_hexdump(FILE *f, const char *title, const void *buf, unsigned int len)
 {
@@ -47,7 +47,7 @@ rte_hexdump(FILE *f, const char *title, const void *buf, unsigned int len)
 	fflush(f);
 }
 
-RTE_EXPORT_SYMBOL(rte_memdump)
+RTE_EXPORT_SYMBOL(rte_memdump);
 void
 rte_memdump(FILE *f, const char *title, const void *buf, unsigned int len)
 {
diff --git a/lib/eal/common/eal_common_hypervisor.c b/lib/eal/common/eal_common_hypervisor.c
index 7158fd25de..6231294eab 100644
--- a/lib/eal/common/eal_common_hypervisor.c
+++ b/lib/eal/common/eal_common_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get_name)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get_name);
 const char *
 rte_hypervisor_get_name(enum rte_hypervisor id)
 {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index b42fa862f3..4775d894c3 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -30,7 +30,7 @@
 #define RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags) \
 	(!!(flags & RTE_INTR_INSTANCE_F_SHARED))
 
-RTE_EXPORT_SYMBOL(rte_intr_instance_alloc)
+RTE_EXPORT_SYMBOL(rte_intr_instance_alloc);
 struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
 {
 	struct rte_intr_handle *intr_handle;
@@ -98,7 +98,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_dup)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_dup);
 struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
 {
 	struct rte_intr_handle *intr_handle;
@@ -124,7 +124,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
 	return intr_handle;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_event_list_update)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_event_list_update);
 int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
 {
 	struct rte_epoll_event *tmp_elist;
@@ -175,7 +175,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_instance_free)
+RTE_EXPORT_SYMBOL(rte_intr_instance_free);
 void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
 {
 	if (intr_handle == NULL)
@@ -191,7 +191,7 @@ void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_fd_set)
+RTE_EXPORT_SYMBOL(rte_intr_fd_set);
 int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -203,7 +203,7 @@ int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_fd_get)
+RTE_EXPORT_SYMBOL(rte_intr_fd_get);
 int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -213,7 +213,7 @@ int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_type_set)
+RTE_EXPORT_SYMBOL(rte_intr_type_set);
 int rte_intr_type_set(struct rte_intr_handle *intr_handle,
 	enum rte_intr_handle_type type)
 {
@@ -226,7 +226,7 @@ int rte_intr_type_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_type_get)
+RTE_EXPORT_SYMBOL(rte_intr_type_get);
 enum rte_intr_handle_type rte_intr_type_get(
 	const struct rte_intr_handle *intr_handle)
 {
@@ -237,7 +237,7 @@ enum rte_intr_handle_type rte_intr_type_get(
 	return RTE_INTR_HANDLE_UNKNOWN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_set);
 int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -249,7 +249,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_get);
 int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -259,7 +259,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_set);
 int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
 				 int max_intr)
 {
@@ -280,7 +280,7 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_get);
 int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -290,7 +290,7 @@ int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_set);
 int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -302,7 +302,7 @@ int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_get);
 int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -312,7 +312,7 @@ int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_intr_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_intr_get);
 int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -322,7 +322,7 @@ int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_set);
 int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
 	uint8_t efd_counter_size)
 {
@@ -335,7 +335,7 @@ int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_get);
 int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -345,7 +345,7 @@ int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_get);
 int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
 	int index)
 {
@@ -363,7 +363,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_set);
 int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
 	int index, int fd)
 {
@@ -383,7 +383,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_get);
 struct rte_epoll_event *rte_intr_elist_index_get(
 	struct rte_intr_handle *intr_handle, int index)
 {
@@ -401,7 +401,7 @@ struct rte_epoll_event *rte_intr_elist_index_get(
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_set);
 int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
 	int index, struct rte_epoll_event elist)
 {
@@ -421,7 +421,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_alloc);
 int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
 	const char *name, int size)
 {
@@ -455,7 +455,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_get);
 int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
 				int index)
 {
@@ -473,7 +473,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_set);
 int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
 				int index, int vec)
 {
@@ -493,7 +493,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_free);
 void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
 {
 	if (intr_handle == NULL)
@@ -506,7 +506,7 @@ void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
 	intr_handle->vec_list_size = 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_get);
 void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -516,7 +516,7 @@ void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_set);
 int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
 	void *windows_handle)
 {
diff --git a/lib/eal/common/eal_common_launch.c b/lib/eal/common/eal_common_launch.c
index a7deac6ecd..a408e44bbd 100644
--- a/lib/eal/common/eal_common_launch.c
+++ b/lib/eal/common/eal_common_launch.c
@@ -16,7 +16,7 @@
 /*
  * Wait until a lcore finished its job.
  */
-RTE_EXPORT_SYMBOL(rte_eal_wait_lcore)
+RTE_EXPORT_SYMBOL(rte_eal_wait_lcore);
 int
 rte_eal_wait_lcore(unsigned worker_id)
 {
@@ -32,7 +32,7 @@ rte_eal_wait_lcore(unsigned worker_id)
  * function f with argument arg. Once the execution is done, the
  * remote lcore switches to WAIT state.
  */
-RTE_EXPORT_SYMBOL(rte_eal_remote_launch)
+RTE_EXPORT_SYMBOL(rte_eal_remote_launch);
 int
 rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
@@ -64,7 +64,7 @@ rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
  * rte_eal_remote_launch() for all of them. If call_main is true
  * (set to CALL_MAIN), also call the function on the main lcore.
  */
-RTE_EXPORT_SYMBOL(rte_eal_mp_remote_launch)
+RTE_EXPORT_SYMBOL(rte_eal_mp_remote_launch);
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
 			 enum rte_rmt_call_main_t call_main)
@@ -94,7 +94,7 @@ rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
 /*
  * Return the state of the lcore identified by worker_id.
  */
-RTE_EXPORT_SYMBOL(rte_eal_get_lcore_state)
+RTE_EXPORT_SYMBOL(rte_eal_get_lcore_state);
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
 {
@@ -105,7 +105,7 @@ rte_eal_get_lcore_state(unsigned lcore_id)
  * Do a rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  */
-RTE_EXPORT_SYMBOL(rte_eal_mp_wait_lcore)
+RTE_EXPORT_SYMBOL(rte_eal_mp_wait_lcore);
 void
 rte_eal_mp_wait_lcore(void)
 {
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 5c8b0f9aa2..b031c37caf 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -19,19 +19,19 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-RTE_EXPORT_SYMBOL(rte_get_main_lcore)
+RTE_EXPORT_SYMBOL(rte_get_main_lcore);
 unsigned int rte_get_main_lcore(void)
 {
 	return rte_eal_get_configuration()->main_lcore;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_count)
+RTE_EXPORT_SYMBOL(rte_lcore_count);
 unsigned int rte_lcore_count(void)
 {
 	return rte_eal_get_configuration()->lcore_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_index)
+RTE_EXPORT_SYMBOL(rte_lcore_index);
 int rte_lcore_index(int lcore_id)
 {
 	if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -47,7 +47,7 @@ int rte_lcore_index(int lcore_id)
 	return lcore_config[lcore_id].core_index;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id)
+RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id);
 int rte_lcore_to_cpu_id(int lcore_id)
 {
 	if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -63,13 +63,13 @@ int rte_lcore_to_cpu_id(int lcore_id)
 	return lcore_config[lcore_id].core_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_cpuset)
+RTE_EXPORT_SYMBOL(rte_lcore_cpuset);
 rte_cpuset_t rte_lcore_cpuset(unsigned int lcore_id)
 {
 	return lcore_config[lcore_id].cpuset;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_lcore_role)
+RTE_EXPORT_SYMBOL(rte_eal_lcore_role);
 enum rte_lcore_role_t
 rte_eal_lcore_role(unsigned int lcore_id)
 {
@@ -80,7 +80,7 @@ rte_eal_lcore_role(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_has_role)
+RTE_EXPORT_SYMBOL(rte_lcore_has_role);
 int
 rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
 {
@@ -92,7 +92,7 @@ rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
 	return cfg->lcore_role[lcore_id] == role;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_is_enabled)
+RTE_EXPORT_SYMBOL(rte_lcore_is_enabled);
 int rte_lcore_is_enabled(unsigned int lcore_id)
 {
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -102,7 +102,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_next_lcore)
+RTE_EXPORT_SYMBOL(rte_get_next_lcore);
 unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
@@ -122,7 +122,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id)
+RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id);
 unsigned int
 rte_lcore_to_socket_id(unsigned int lcore_id)
 {
@@ -231,7 +231,7 @@ rte_eal_cpu_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_socket_count)
+RTE_EXPORT_SYMBOL(rte_socket_count);
 unsigned int
 rte_socket_count(void)
 {
@@ -239,7 +239,7 @@ rte_socket_count(void)
 	return config->numa_node_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_socket_id_by_idx)
+RTE_EXPORT_SYMBOL(rte_socket_id_by_idx);
 int
 rte_socket_id_by_idx(unsigned int idx)
 {
@@ -289,7 +289,7 @@ free_callback(struct lcore_callback *callback)
 	free(callback);
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_callback_register)
+RTE_EXPORT_SYMBOL(rte_lcore_callback_register);
 void *
 rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
 	rte_lcore_uninit_cb uninit, void *arg)
@@ -340,7 +340,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
 	return callback;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_lcore_callback_unregister);
 void
 rte_lcore_callback_unregister(void *handle)
 {
@@ -426,7 +426,7 @@ eal_lcore_non_eal_release(unsigned int lcore_id)
 	rte_rwlock_write_unlock(&lcore_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_iterate)
+RTE_EXPORT_SYMBOL(rte_lcore_iterate);
 int
 rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
 {
@@ -463,7 +463,7 @@ lcore_role_str(enum rte_lcore_role_t role)
 
 static rte_lcore_usage_cb lcore_usage_cb;
 
-RTE_EXPORT_SYMBOL(rte_lcore_register_usage_cb)
+RTE_EXPORT_SYMBOL(rte_lcore_register_usage_cb);
 void
 rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
 {
@@ -510,7 +510,7 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_dump)
+RTE_EXPORT_SYMBOL(rte_lcore_dump);
 void
 rte_lcore_dump(FILE *f)
 {
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
index 8a7920ed0f..bcf88ac661 100644
--- a/lib/eal/common/eal_common_lcore_var.c
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -76,7 +76,7 @@ lcore_var_alloc(size_t size, size_t align)
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_lcore_var_alloc, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_lcore_var_alloc, 24.11);
 void *
 rte_lcore_var_alloc(size_t size, size_t align)
 {
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index 84ee3f3959..f82aca83b5 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -70,140 +70,140 @@ eal_mcfg_update_from_internal(void)
 	mcfg->version = RTE_VERSION;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mem_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mem_get_lock);
 rte_rwlock_t *
 rte_mcfg_mem_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->memory_hotplug_lock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_lock);
 void
 rte_mcfg_mem_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_unlock);
 void
 rte_mcfg_mem_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_lock);
 void
 rte_mcfg_mem_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_unlock);
 void
 rte_mcfg_mem_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_tailq_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_tailq_get_lock);
 rte_rwlock_t *
 rte_mcfg_tailq_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->qlock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_lock);
 void
 rte_mcfg_tailq_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_unlock);
 void
 rte_mcfg_tailq_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_lock);
 void
 rte_mcfg_tailq_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_unlock);
 void
 rte_mcfg_tailq_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mempool_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mempool_get_lock);
 rte_rwlock_t *
 rte_mcfg_mempool_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->mplock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_lock);
 void
 rte_mcfg_mempool_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_unlock);
 void
 rte_mcfg_mempool_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_lock);
 void
 rte_mcfg_mempool_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_unlock);
 void
 rte_mcfg_mempool_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_timer_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_timer_get_lock);
 rte_spinlock_t *
 rte_mcfg_timer_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->tlock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_timer_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_timer_lock);
 void
 rte_mcfg_timer_lock(void)
 {
 	rte_spinlock_lock(rte_mcfg_timer_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_timer_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_timer_unlock);
 void
 rte_mcfg_timer_unlock(void)
 {
 	rte_spinlock_unlock(rte_mcfg_timer_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_ethdev_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_ethdev_get_lock);
 rte_spinlock_t *
 rte_mcfg_ethdev_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->ethdev_lock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_get_single_file_segments)
+RTE_EXPORT_SYMBOL(rte_mcfg_get_single_file_segments);
 bool
 rte_mcfg_get_single_file_segments(void)
 {
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index 38ccc734e8..1e55c75570 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -343,7 +343,7 @@ virt2memseg_list(const void *addr)
 	return msl;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2memseg_list)
+RTE_EXPORT_SYMBOL(rte_mem_virt2memseg_list);
 struct rte_memseg_list *
 rte_mem_virt2memseg_list(const void *addr)
 {
@@ -381,7 +381,7 @@ find_virt_legacy(const struct rte_memseg_list *msl __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_iova2virt)
+RTE_EXPORT_SYMBOL(rte_mem_iova2virt);
 void *
 rte_mem_iova2virt(rte_iova_t iova)
 {
@@ -403,7 +403,7 @@ rte_mem_iova2virt(rte_iova_t iova)
 	return vi.virt;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2memseg)
+RTE_EXPORT_SYMBOL(rte_mem_virt2memseg);
 struct rte_memseg *
 rte_mem_virt2memseg(const void *addr, const struct rte_memseg_list *msl)
 {
@@ -425,7 +425,7 @@ physmem_size(const struct rte_memseg_list *msl, void *arg)
 }
 
 /* get the total size of memory */
-RTE_EXPORT_SYMBOL(rte_eal_get_physmem_size)
+RTE_EXPORT_SYMBOL(rte_eal_get_physmem_size);
 uint64_t
 rte_eal_get_physmem_size(void)
 {
@@ -474,7 +474,7 @@ dump_memseg(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
  * Defining here because declared in rte_memory.h, but the actual implementation
  * is in eal_common_memalloc.c, like all other memalloc internals.
  */
-RTE_EXPORT_SYMBOL(rte_mem_event_callback_register)
+RTE_EXPORT_SYMBOL(rte_mem_event_callback_register);
 int
 rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
 		void *arg)
@@ -491,7 +491,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
 	return eal_memalloc_mem_event_callback_register(name, clb, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_event_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_mem_event_callback_unregister);
 int
 rte_mem_event_callback_unregister(const char *name, void *arg)
 {
@@ -507,7 +507,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg)
 	return eal_memalloc_mem_event_callback_unregister(name, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_register)
+RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_register);
 int
 rte_mem_alloc_validator_register(const char *name,
 		rte_mem_alloc_validator_t clb, int socket_id, size_t limit)
@@ -525,7 +525,7 @@ rte_mem_alloc_validator_register(const char *name,
 			limit);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_unregister)
+RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_unregister);
 int
 rte_mem_alloc_validator_unregister(const char *name, int socket_id)
 {
@@ -542,7 +542,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id)
 }
 
 /* Dump the physical memory layout on console */
-RTE_EXPORT_SYMBOL(rte_dump_physmem_layout)
+RTE_EXPORT_SYMBOL(rte_dump_physmem_layout);
 void
 rte_dump_physmem_layout(FILE *f)
 {
@@ -614,14 +614,14 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask)
+RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask);
 int
 rte_mem_check_dma_mask(uint8_t maskbits)
 {
 	return check_dma_mask(maskbits, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask_thread_unsafe);
 int
 rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
 {
@@ -635,7 +635,7 @@ rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
  * initialization. PMDs should use rte_mem_check_dma_mask if addressing
  * limitations by the device.
  */
-RTE_EXPORT_SYMBOL(rte_mem_set_dma_mask)
+RTE_EXPORT_SYMBOL(rte_mem_set_dma_mask);
 void
 rte_mem_set_dma_mask(uint8_t maskbits)
 {
@@ -646,14 +646,14 @@ rte_mem_set_dma_mask(uint8_t maskbits)
 }
 
 /* return the number of memory channels */
-RTE_EXPORT_SYMBOL(rte_memory_get_nchannel)
+RTE_EXPORT_SYMBOL(rte_memory_get_nchannel);
 unsigned rte_memory_get_nchannel(void)
 {
 	return rte_eal_get_configuration()->mem_config->nchannel;
 }
 
 /* return the number of memory rank */
-RTE_EXPORT_SYMBOL(rte_memory_get_nrank)
+RTE_EXPORT_SYMBOL(rte_memory_get_nrank);
 unsigned rte_memory_get_nrank(void)
 {
 	return rte_eal_get_configuration()->mem_config->nrank;
@@ -677,7 +677,7 @@ rte_eal_memdevice_init(void)
 }
 
 /* Lock page in physical memory and prevent from swapping. */
-RTE_EXPORT_SYMBOL(rte_mem_lock_page)
+RTE_EXPORT_SYMBOL(rte_mem_lock_page);
 int
 rte_mem_lock_page(const void *virt)
 {
@@ -687,7 +687,7 @@ rte_mem_lock_page(const void *virt)
 	return rte_mem_lock((void *)aligned, page_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_contig_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_contig_walk_thread_unsafe);
 int
 rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
 {
@@ -727,7 +727,7 @@ rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_contig_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_contig_walk);
 int
 rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
 {
@@ -741,7 +741,7 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_walk_thread_unsafe);
 int
 rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
 {
@@ -770,7 +770,7 @@ rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_walk);
 int
 rte_memseg_walk(rte_memseg_walk_t func, void *arg)
 {
@@ -784,7 +784,7 @@ rte_memseg_walk(rte_memseg_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_list_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_list_walk_thread_unsafe);
 int
 rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
 {
@@ -804,7 +804,7 @@ rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_list_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_list_walk);
 int
 rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
 {
@@ -818,7 +818,7 @@ rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_thread_unsafe);
 int
 rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
 {
@@ -861,7 +861,7 @@ rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd);
 int
 rte_memseg_get_fd(const struct rte_memseg *ms)
 {
@@ -874,7 +874,7 @@ rte_memseg_get_fd(const struct rte_memseg *ms)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset_thread_unsafe);
 int
 rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
 		size_t *offset)
@@ -918,7 +918,7 @@ rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset);
 int
 rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
 {
@@ -931,7 +931,7 @@ rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_register)
+RTE_EXPORT_SYMBOL(rte_extmem_register);
 int
 rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
 		unsigned int n_pages, size_t page_sz)
@@ -981,7 +981,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_unregister)
+RTE_EXPORT_SYMBOL(rte_extmem_unregister);
 int
 rte_extmem_unregister(void *va_addr, size_t len)
 {
@@ -1037,14 +1037,14 @@ sync_memory(void *va_addr, size_t len, bool attach)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_attach)
+RTE_EXPORT_SYMBOL(rte_extmem_attach);
 int
 rte_extmem_attach(void *va_addr, size_t len)
 {
 	return sync_memory(va_addr, len, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_detach)
+RTE_EXPORT_SYMBOL(rte_extmem_detach);
 int
 rte_extmem_detach(void *va_addr, size_t len)
 {
@@ -1702,7 +1702,7 @@ RTE_INIT(memory_telemetry)
 
 #endif /* telemetry !RTE_EXEC_ENV_WINDOWS */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_memzero_explicit, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_memzero_explicit, 25.07);
 void
 rte_memzero_explicit(void *dst, size_t sz)
 {
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index db43af13a8..77ab3a61cc 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -26,7 +26,7 @@
 /* Default count used until rte_memzone_max_set() is called */
 #define DEFAULT_MAX_MEMZONE_COUNT 2560
 
-RTE_EXPORT_SYMBOL(rte_memzone_max_set)
+RTE_EXPORT_SYMBOL(rte_memzone_max_set);
 int
 rte_memzone_max_set(size_t max)
 {
@@ -48,7 +48,7 @@ rte_memzone_max_set(size_t max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memzone_max_get)
+RTE_EXPORT_SYMBOL(rte_memzone_max_get);
 size_t
 rte_memzone_max_get(void)
 {
@@ -266,7 +266,7 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
  * specified alignment and boundary). If the allocation cannot be done,
  * return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve_bounded)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve_bounded);
 const struct rte_memzone *
 rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
 			    unsigned flags, unsigned align, unsigned bound)
@@ -279,7 +279,7 @@ rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
  * Return a pointer to a correctly filled memzone descriptor (with a
  * specified alignment). If the allocation cannot be done, return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve_aligned)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve_aligned);
 const struct rte_memzone *
 rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
 			    unsigned flags, unsigned align)
@@ -292,7 +292,7 @@ rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
  * Return a pointer to a correctly filled memzone descriptor. If the
  * allocation cannot be done, return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve);
 const struct rte_memzone *
 rte_memzone_reserve(const char *name, size_t len, int socket_id,
 		    unsigned flags)
@@ -301,7 +301,7 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,
 					       flags, RTE_CACHE_LINE_SIZE, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_memzone_free)
+RTE_EXPORT_SYMBOL(rte_memzone_free);
 int
 rte_memzone_free(const struct rte_memzone *mz)
 {
@@ -348,7 +348,7 @@ rte_memzone_free(const struct rte_memzone *mz)
 /*
  * Lookup for the memzone identified by the given name
  */
-RTE_EXPORT_SYMBOL(rte_memzone_lookup)
+RTE_EXPORT_SYMBOL(rte_memzone_lookup);
 const struct rte_memzone *
 rte_memzone_lookup(const char *name)
 {
@@ -425,7 +425,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg)
 }
 
 /* Dump all reserved memory zones on console */
-RTE_EXPORT_SYMBOL(rte_memzone_dump)
+RTE_EXPORT_SYMBOL(rte_memzone_dump);
 void
 rte_memzone_dump(FILE *f)
 {
@@ -467,7 +467,7 @@ rte_eal_memzone_init(void)
 }
 
 /* Walk all reserved memory zones */
-RTE_EXPORT_SYMBOL(rte_memzone_walk)
+RTE_EXPORT_SYMBOL(rte_memzone_walk);
 void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *),
 		      void *arg)
 {
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 3169dd069f..bd4d03db30 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -167,7 +167,7 @@ eal_get_application_usage_hook(void)
 }
 
 /* Set a per-application usage message */
-RTE_EXPORT_SYMBOL(rte_set_application_usage_hook)
+RTE_EXPORT_SYMBOL(rte_set_application_usage_hook);
 rte_usage_hook_t
 rte_set_application_usage_hook(rte_usage_hook_t usage_func)
 {
@@ -767,7 +767,7 @@ check_core_list(int *lcores, unsigned int count)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_parse_coremask)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_parse_coremask);
 int
 rte_eal_parse_coremask(const char *coremask, int *cores)
 {
@@ -2080,7 +2080,7 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth)
+RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth);
 uint16_t
 rte_vect_get_max_simd_bitwidth(void)
 {
@@ -2089,7 +2089,7 @@ rte_vect_get_max_simd_bitwidth(void)
 	return internal_conf->max_simd_bitwidth.bitwidth;
 }
 
-RTE_EXPORT_SYMBOL(rte_vect_set_max_simd_bitwidth)
+RTE_EXPORT_SYMBOL(rte_vect_set_max_simd_bitwidth);
 int
 rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)
 {
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 0dea787e38..edc1b8bfe9 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -143,7 +143,7 @@ create_socket_path(const char *name, char *buf, int len)
 		strlcpy(buf, prefix, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_primary_proc_alive)
+RTE_EXPORT_SYMBOL(rte_eal_primary_proc_alive);
 int
 rte_eal_primary_proc_alive(const char *config_file_path)
 {
@@ -199,7 +199,7 @@ validate_action_name(const char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_register)
+RTE_EXPORT_SYMBOL(rte_mp_action_register);
 int
 rte_mp_action_register(const char *name, rte_mp_t action)
 {
@@ -236,7 +236,7 @@ rte_mp_action_register(const char *name, rte_mp_t action)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_unregister)
+RTE_EXPORT_SYMBOL(rte_mp_action_unregister);
 void
 rte_mp_action_unregister(const char *name)
 {
@@ -840,7 +840,7 @@ check_input(const struct rte_mp_msg *msg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_sendmsg)
+RTE_EXPORT_SYMBOL(rte_mp_sendmsg);
 int
 rte_mp_sendmsg(struct rte_mp_msg *msg)
 {
@@ -994,7 +994,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_sync)
+RTE_EXPORT_SYMBOL(rte_mp_request_sync);
 int
 rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 		const struct timespec *ts)
@@ -1092,7 +1092,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_async)
+RTE_EXPORT_SYMBOL(rte_mp_request_async);
 int
 rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 		rte_mp_async_reply_t clb)
@@ -1245,7 +1245,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_reply)
+RTE_EXPORT_SYMBOL(rte_mp_reply);
 int
 rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
 {
@@ -1298,7 +1298,7 @@ set_mp_status(enum mp_status status)
 	return rte_atomic_load_explicit(&mcfg->mp_status, rte_memory_order_relaxed) == desired;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_disable)
+RTE_EXPORT_SYMBOL(rte_mp_disable);
 bool
 rte_mp_disable(void)
 {
diff --git a/lib/eal/common/eal_common_string_fns.c b/lib/eal/common/eal_common_string_fns.c
index fa87831c3a..0b4f814951 100644
--- a/lib/eal/common/eal_common_string_fns.c
+++ b/lib/eal/common/eal_common_string_fns.c
@@ -13,7 +13,7 @@
 #include <rte_errno.h>
 
 /* split string into tokens */
-RTE_EXPORT_SYMBOL(rte_strsplit)
+RTE_EXPORT_SYMBOL(rte_strsplit);
 int
 rte_strsplit(char *string, int stringlen,
 	     char **tokens, int maxtokens, char delim)
@@ -48,7 +48,7 @@ rte_strsplit(char *string, int stringlen,
  * Return negative value and NUL-terminate if dst is too short,
  * Otherwise return number of bytes copied.
  */
-RTE_EXPORT_SYMBOL(rte_strscpy)
+RTE_EXPORT_SYMBOL(rte_strscpy);
 ssize_t
 rte_strscpy(char *dst, const char *src, size_t dsize)
 {
@@ -71,7 +71,7 @@ rte_strscpy(char *dst, const char *src, size_t dsize)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_str_to_size)
+RTE_EXPORT_SYMBOL(rte_str_to_size);
 uint64_t
 rte_str_to_size(const char *str)
 {
@@ -110,7 +110,7 @@ rte_str_to_size(const char *str)
 	return size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_size_to_str, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_size_to_str, 25.07);
 char *
 rte_size_to_str(char *buf, int buf_size, uint64_t count, bool use_iec, const char *unit)
 {
diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c
index 47080d75ac..9f8adaf97f 100644
--- a/lib/eal/common/eal_common_tailqs.c
+++ b/lib/eal/common/eal_common_tailqs.c
@@ -23,7 +23,7 @@ static struct rte_tailq_elem_head rte_tailq_elem_head =
 /* number of tailqs registered, -1 before call to rte_eal_tailqs_init */
 static int rte_tailqs_count = -1;
 
-RTE_EXPORT_SYMBOL(rte_eal_tailq_lookup)
+RTE_EXPORT_SYMBOL(rte_eal_tailq_lookup);
 struct rte_tailq_head *
 rte_eal_tailq_lookup(const char *name)
 {
@@ -42,7 +42,7 @@ rte_eal_tailq_lookup(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_dump_tailq)
+RTE_EXPORT_SYMBOL(rte_dump_tailq);
 void
 rte_dump_tailq(FILE *f)
 {
@@ -108,7 +108,7 @@ rte_eal_tailq_update(struct rte_tailq_elem *t)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_tailq_register)
+RTE_EXPORT_SYMBOL(rte_eal_tailq_register);
 int
 rte_eal_tailq_register(struct rte_tailq_elem *t)
 {
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index c0622c5c23..3e37ab1742 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -23,15 +23,15 @@
 #include "eal_thread.h"
 #include "eal_trace.h"
 
-RTE_EXPORT_SYMBOL(per_lcore__lcore_id)
+RTE_EXPORT_SYMBOL(per_lcore__lcore_id);
 RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
-RTE_EXPORT_SYMBOL(per_lcore__thread_id)
+RTE_EXPORT_SYMBOL(per_lcore__thread_id);
 RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 static RTE_DEFINE_PER_LCORE(unsigned int, _numa_id) =
 	(unsigned int)SOCKET_ID_ANY;
 static RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
 
-RTE_EXPORT_SYMBOL(rte_socket_id)
+RTE_EXPORT_SYMBOL(rte_socket_id);
 unsigned rte_socket_id(void)
 {
 	return RTE_PER_LCORE(_numa_id);
@@ -86,7 +86,7 @@ thread_update_affinity(rte_cpuset_t *cpusetp)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity);
 int
 rte_thread_set_affinity(rte_cpuset_t *cpusetp)
 {
@@ -99,7 +99,7 @@ rte_thread_set_affinity(rte_cpuset_t *cpusetp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity);
 void
 rte_thread_get_affinity(rte_cpuset_t *cpusetp)
 {
@@ -288,7 +288,7 @@ static uint32_t control_thread_start(void *arg)
 	return start_routine(start_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_create_control)
+RTE_EXPORT_SYMBOL(rte_thread_create_control);
 int
 rte_thread_create_control(rte_thread_t *thread, const char *name,
 		rte_thread_func start_routine, void *arg)
@@ -348,7 +348,7 @@ add_internal_prefix(char *prefixed_name, const char *name, size_t size)
 	strlcpy(prefixed_name + prefixlen, name, size - prefixlen);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_create_internal_control)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_create_internal_control);
 int
 rte_thread_create_internal_control(rte_thread_t *id, const char *name,
 		rte_thread_func func, void *arg)
@@ -359,7 +359,7 @@ rte_thread_create_internal_control(rte_thread_t *id, const char *name,
 	return rte_thread_create_control(id, prefixed_name, func, arg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_set_prefixed_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_set_prefixed_name);
 void
 rte_thread_set_prefixed_name(rte_thread_t id, const char *name)
 {
@@ -369,7 +369,7 @@ rte_thread_set_prefixed_name(rte_thread_t id, const char *name)
 	rte_thread_set_name(id, prefixed_name);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_register)
+RTE_EXPORT_SYMBOL(rte_thread_register);
 int
 rte_thread_register(void)
 {
@@ -402,7 +402,7 @@ rte_thread_register(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_unregister)
+RTE_EXPORT_SYMBOL(rte_thread_unregister);
 void
 rte_thread_unregister(void)
 {
@@ -416,7 +416,7 @@ rte_thread_unregister(void)
 			lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_init)
+RTE_EXPORT_SYMBOL(rte_thread_attr_init);
 int
 rte_thread_attr_init(rte_thread_attr_t *attr)
 {
@@ -429,7 +429,7 @@ rte_thread_attr_init(rte_thread_attr_t *attr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_attr_set_priority);
 int
 rte_thread_attr_set_priority(rte_thread_attr_t *thread_attr,
 		enum rte_thread_priority priority)
@@ -442,7 +442,7 @@ rte_thread_attr_set_priority(rte_thread_attr_t *thread_attr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_set_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_attr_set_affinity);
 int
 rte_thread_attr_set_affinity(rte_thread_attr_t *thread_attr,
 		rte_cpuset_t *cpuset)
@@ -458,7 +458,7 @@ rte_thread_attr_set_affinity(rte_thread_attr_t *thread_attr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_get_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_attr_get_affinity);
 int
 rte_thread_attr_get_affinity(rte_thread_attr_t *thread_attr,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c
index bbf8b8b11b..121b850c27 100644
--- a/lib/eal/common/eal_common_timer.c
+++ b/lib/eal/common/eal_common_timer.c
@@ -19,10 +19,10 @@
 static uint64_t eal_tsc_resolution_hz;
 
 /* Pointer to user delay function */
-RTE_EXPORT_SYMBOL(rte_delay_us)
+RTE_EXPORT_SYMBOL(rte_delay_us);
 void (*rte_delay_us)(unsigned int) = NULL;
 
-RTE_EXPORT_SYMBOL(rte_delay_us_block)
+RTE_EXPORT_SYMBOL(rte_delay_us_block);
 void
 rte_delay_us_block(unsigned int us)
 {
@@ -32,7 +32,7 @@ rte_delay_us_block(unsigned int us)
 		rte_pause();
 }
 
-RTE_EXPORT_SYMBOL(rte_get_tsc_hz)
+RTE_EXPORT_SYMBOL(rte_get_tsc_hz);
 uint64_t
 rte_get_tsc_hz(void)
 {
@@ -79,7 +79,7 @@ set_tsc_freq(void)
 	mcfg->tsc_hz = freq;
 }
 
-RTE_EXPORT_SYMBOL(rte_delay_us_callback_register)
+RTE_EXPORT_SYMBOL(rte_delay_us_callback_register);
 void rte_delay_us_callback_register(void (*userfunc)(unsigned int))
 {
 	rte_delay_us = userfunc;
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index be1f78a68d..d5e8aaedfa 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -17,9 +17,9 @@
 #include <eal_export.h>
 #include "eal_trace.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05);
 RTE_DEFINE_PER_LCORE(volatile int, trace_point_sz);
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05);
 RTE_DEFINE_PER_LCORE(void *, trace_mem);
 static RTE_DEFINE_PER_LCORE(char *, ctf_field);
 
@@ -97,7 +97,7 @@ eal_trace_fini(void)
 	eal_trace_args_free();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_is_enabled, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_is_enabled, 20.05);
 bool
 rte_trace_is_enabled(void)
 {
@@ -115,7 +115,7 @@ trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
 			rte_memory_order_release);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_set, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_set, 20.05);
 void
 rte_trace_mode_set(enum rte_trace_mode mode)
 {
@@ -127,7 +127,7 @@ rte_trace_mode_set(enum rte_trace_mode mode)
 	trace.mode = mode;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_get, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_get, 20.05);
 enum
 rte_trace_mode rte_trace_mode_get(void)
 {
@@ -140,7 +140,7 @@ trace_point_is_invalid(rte_trace_point_t *t)
 	return (t == NULL) || (trace_id_get(t) >= trace.nb_trace_points);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_is_enabled, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_is_enabled, 20.05);
 bool
 rte_trace_point_is_enabled(rte_trace_point_t *t)
 {
@@ -153,7 +153,7 @@ rte_trace_point_is_enabled(rte_trace_point_t *t)
 	return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_enable, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_enable, 20.05);
 int
 rte_trace_point_enable(rte_trace_point_t *t)
 {
@@ -169,7 +169,7 @@ rte_trace_point_enable(rte_trace_point_t *t)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_disable, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_disable, 20.05);
 int
 rte_trace_point_disable(rte_trace_point_t *t)
 {
@@ -185,7 +185,7 @@ rte_trace_point_disable(rte_trace_point_t *t)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_pattern, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_pattern, 20.05);
 int
 rte_trace_pattern(const char *pattern, bool enable)
 {
@@ -210,7 +210,7 @@ rte_trace_pattern(const char *pattern, bool enable)
 	return rc | found;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_regexp, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_regexp, 20.05);
 int
 rte_trace_regexp(const char *regex, bool enable)
 {
@@ -240,7 +240,7 @@ rte_trace_regexp(const char *regex, bool enable)
 	return rc | found;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_lookup, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_lookup, 20.05);
 rte_trace_point_t *
 rte_trace_point_lookup(const char *name)
 {
@@ -291,7 +291,7 @@ trace_lcore_mem_dump(FILE *f)
 	rte_spinlock_unlock(&trace->lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_dump, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_dump, 20.05);
 void
 rte_trace_dump(FILE *f)
 {
@@ -327,7 +327,7 @@ thread_get_name(rte_thread_t id, char *name, size_t len)
 	RTE_SET_USED(len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05);
 void
 __rte_trace_mem_per_thread_alloc(void)
 {
@@ -449,7 +449,7 @@ trace_mem_free(void)
 	rte_spinlock_unlock(&trace->lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05);
 void
 __rte_trace_point_emit_field(size_t sz, const char *in, const char *datatype)
 {
@@ -476,7 +476,7 @@ __rte_trace_point_emit_field(size_t sz, const char *in, const char *datatype)
 	RTE_PER_LCORE(ctf_field) = field;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05);
 int
 __rte_trace_point_register(rte_trace_point_t *handle, const char *name,
 		void (*register_fn)(void))
diff --git a/lib/eal/common/eal_common_trace_ctf.c b/lib/eal/common/eal_common_trace_ctf.c
index aa60a705d1..72177e097c 100644
--- a/lib/eal/common/eal_common_trace_ctf.c
+++ b/lib/eal/common/eal_common_trace_ctf.c
@@ -357,7 +357,7 @@ meta_fixup(struct trace *trace, char *meta)
 	meta_fix_freq_offset(trace, meta);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_metadata_dump, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_metadata_dump, 20.05);
 int
 rte_trace_metadata_dump(FILE *f)
 {
diff --git a/lib/eal/common/eal_common_trace_points.c b/lib/eal/common/eal_common_trace_points.c
index 0903f3c639..790df83098 100644
--- a/lib/eal/common/eal_common_trace_points.c
+++ b/lib/eal/common/eal_common_trace_points.c
@@ -9,58 +9,58 @@
 #include <eal_export.h>
 #include <eal_trace_internal.h>
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_void, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_void, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_void,
 	lib.eal.generic.void)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u64, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u64, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u64,
 	lib.eal.generic.u64)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u32, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u32, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u32,
 	lib.eal.generic.u32)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u16, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u16, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u16,
 	lib.eal.generic.u16)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u8, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u8, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u8,
 	lib.eal.generic.u8)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i64, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i64, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i64,
 	lib.eal.generic.i64)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i32, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i32, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i32,
 	lib.eal.generic.i32)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i16, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i16, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i16,
 	lib.eal.generic.i16)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i8, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i8, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i8,
 	lib.eal.generic.i8)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_int, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_int, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_int,
 	lib.eal.generic.int)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_long, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_long, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_long,
 	lib.eal.generic.long)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_float, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_float, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_float,
 	lib.eal.generic.float)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_double, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_double, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_double,
 	lib.eal.generic.double)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_ptr, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_ptr, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_ptr,
 	lib.eal.generic.ptr)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_str, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_str, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_str,
 	lib.eal.generic.string)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_size_t, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_size_t, 20.11);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_size_t,
 	lib.eal.generic.size_t)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_func, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_func, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_func,
 	lib.eal.generic.func)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_blob, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_blob, 23.03);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_blob,
 	lib.eal.generic.blob)
 
diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c
index e1996433b7..bde8111af2 100644
--- a/lib/eal/common/eal_common_trace_utils.c
+++ b/lib/eal/common/eal_common_trace_utils.c
@@ -410,7 +410,7 @@ trace_mem_save(struct trace *trace, struct __rte_trace_header *hdr,
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_save, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_save, 20.05);
 int
 rte_trace_save(void)
 {
diff --git a/lib/eal/common/eal_common_uuid.c b/lib/eal/common/eal_common_uuid.c
index 0e0924d08d..941ae246b8 100644
--- a/lib/eal/common/eal_common_uuid.c
+++ b/lib/eal/common/eal_common_uuid.c
@@ -78,7 +78,7 @@ static void uuid_unpack(const rte_uuid_t in, struct uuid *uu)
 	memcpy(uu->node, ptr, 6);
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_is_null)
+RTE_EXPORT_SYMBOL(rte_uuid_is_null);
 bool rte_uuid_is_null(const rte_uuid_t uu)
 {
 	const uint8_t *cp = uu;
@@ -93,7 +93,7 @@ bool rte_uuid_is_null(const rte_uuid_t uu)
 /*
  * rte_uuid_compare() - compare two UUIDs.
  */
-RTE_EXPORT_SYMBOL(rte_uuid_compare)
+RTE_EXPORT_SYMBOL(rte_uuid_compare);
 int rte_uuid_compare(const rte_uuid_t uu1, const rte_uuid_t uu2)
 {
 	struct uuid	uuid1, uuid2;
@@ -113,7 +113,7 @@ int rte_uuid_compare(const rte_uuid_t uu1, const rte_uuid_t uu2)
 	return memcmp(uuid1.node, uuid2.node, 6);
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_parse)
+RTE_EXPORT_SYMBOL(rte_uuid_parse);
 int rte_uuid_parse(const char *in, rte_uuid_t uu)
 {
 	struct uuid	uuid;
@@ -156,7 +156,7 @@ int rte_uuid_parse(const char *in, rte_uuid_t uu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_unparse)
+RTE_EXPORT_SYMBOL(rte_uuid_unparse);
 void rte_uuid_unparse(const rte_uuid_t uu, char *out, size_t len)
 {
 	struct uuid uuid;
diff --git a/lib/eal/common/rte_bitset.c b/lib/eal/common/rte_bitset.c
index 78001b1ee8..4fe0a1b61a 100644
--- a/lib/eal/common/rte_bitset.c
+++ b/lib/eal/common/rte_bitset.c
@@ -10,7 +10,7 @@
 #include <eal_export.h>
 #include "rte_bitset.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bitset_to_str, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bitset_to_str, 24.11);
 ssize_t
 rte_bitset_to_str(const uint64_t *bitset, size_t num_bits, char *buf, size_t capacity)
 {
diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c
index 08a4d595da..f6f2a7a93f 100644
--- a/lib/eal/common/rte_keepalive.c
+++ b/lib/eal/common/rte_keepalive.c
@@ -64,7 +64,7 @@ print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core)
 	      );
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_dispatch_pings)
+RTE_EXPORT_SYMBOL(rte_keepalive_dispatch_pings);
 void
 rte_keepalive_dispatch_pings(__rte_unused void *ptr_timer,
 	void *ptr_data)
@@ -119,7 +119,7 @@ rte_keepalive_dispatch_pings(__rte_unused void *ptr_timer,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_create)
+RTE_EXPORT_SYMBOL(rte_keepalive_create);
 struct rte_keepalive *
 rte_keepalive_create(rte_keepalive_failure_callback_t callback,
 	void *data)
@@ -138,7 +138,7 @@ rte_keepalive_create(rte_keepalive_failure_callback_t callback,
 	return keepcfg;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_register_relay_callback)
+RTE_EXPORT_SYMBOL(rte_keepalive_register_relay_callback);
 void rte_keepalive_register_relay_callback(struct rte_keepalive *keepcfg,
 	rte_keepalive_relay_callback_t callback,
 	void *data)
@@ -147,7 +147,7 @@ void rte_keepalive_register_relay_callback(struct rte_keepalive *keepcfg,
 	keepcfg->relay_callback_data = data;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_register_core)
+RTE_EXPORT_SYMBOL(rte_keepalive_register_core);
 void
 rte_keepalive_register_core(struct rte_keepalive *keepcfg, const int id_core)
 {
@@ -157,14 +157,14 @@ rte_keepalive_register_core(struct rte_keepalive *keepcfg, const int id_core)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_mark_alive)
+RTE_EXPORT_SYMBOL(rte_keepalive_mark_alive);
 void
 rte_keepalive_mark_alive(struct rte_keepalive *keepcfg)
 {
 	keepcfg->live_data[rte_lcore_id()].core_state = RTE_KA_STATE_ALIVE;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_mark_sleep)
+RTE_EXPORT_SYMBOL(rte_keepalive_mark_sleep);
 void
 rte_keepalive_mark_sleep(struct rte_keepalive *keepcfg)
 {
diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c
index 3a86c19490..297604f6b6 100644
--- a/lib/eal/common/rte_malloc.c
+++ b/lib/eal/common/rte_malloc.c
@@ -49,14 +49,14 @@ mem_free(void *addr, const bool trace_ena, bool zero)
 		EAL_LOG(ERR, "Error: Invalid memory");
 }
 
-RTE_EXPORT_SYMBOL(rte_free)
+RTE_EXPORT_SYMBOL(rte_free);
 void
 rte_free(void *addr)
 {
 	mem_free(addr, true, false);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_free_sensitive, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_free_sensitive, 25.07);
 void
 rte_free_sensitive(void *addr)
 {
@@ -99,7 +99,7 @@ malloc_socket(const char *type, size_t size, unsigned int align,
 /*
  * Allocate memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_malloc_socket)
+RTE_EXPORT_SYMBOL(rte_malloc_socket);
 void *
 rte_malloc_socket(const char *type, size_t size, unsigned int align,
 		int socket_arg)
@@ -116,7 +116,7 @@ eal_malloc_no_trace(const char *type, size_t size, unsigned int align)
 /*
  * Allocate memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_malloc)
+RTE_EXPORT_SYMBOL(rte_malloc);
 void *
 rte_malloc(const char *type, size_t size, unsigned align)
 {
@@ -126,7 +126,7 @@ rte_malloc(const char *type, size_t size, unsigned align)
 /*
  * Allocate zero'd memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_zmalloc_socket)
+RTE_EXPORT_SYMBOL(rte_zmalloc_socket);
 void *
 rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
 {
@@ -156,7 +156,7 @@ rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
 /*
  * Allocate zero'd memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_zmalloc)
+RTE_EXPORT_SYMBOL(rte_zmalloc);
 void *
 rte_zmalloc(const char *type, size_t size, unsigned align)
 {
@@ -166,7 +166,7 @@ rte_zmalloc(const char *type, size_t size, unsigned align)
 /*
  * Allocate zero'd memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_calloc_socket)
+RTE_EXPORT_SYMBOL(rte_calloc_socket);
 void *
 rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket)
 {
@@ -176,7 +176,7 @@ rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int
 /*
  * Allocate zero'd memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_calloc)
+RTE_EXPORT_SYMBOL(rte_calloc);
 void *
 rte_calloc(const char *type, size_t num, size_t size, unsigned align)
 {
@@ -186,7 +186,7 @@ rte_calloc(const char *type, size_t num, size_t size, unsigned align)
 /*
  * Resize allocated memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_realloc_socket)
+RTE_EXPORT_SYMBOL(rte_realloc_socket);
 void *
 rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
 {
@@ -238,14 +238,14 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
 /*
  * Resize allocated memory.
  */
-RTE_EXPORT_SYMBOL(rte_realloc)
+RTE_EXPORT_SYMBOL(rte_realloc);
 void *
 rte_realloc(void *ptr, size_t size, unsigned int align)
 {
 	return rte_realloc_socket(ptr, size, align, SOCKET_ID_ANY);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_validate)
+RTE_EXPORT_SYMBOL(rte_malloc_validate);
 int
 rte_malloc_validate(const void *ptr, size_t *size)
 {
@@ -260,7 +260,7 @@ rte_malloc_validate(const void *ptr, size_t *size)
 /*
  * Function to retrieve data for heap on given socket
  */
-RTE_EXPORT_SYMBOL(rte_malloc_get_socket_stats)
+RTE_EXPORT_SYMBOL(rte_malloc_get_socket_stats);
 int
 rte_malloc_get_socket_stats(int socket,
 		struct rte_malloc_socket_stats *socket_stats)
@@ -279,7 +279,7 @@ rte_malloc_get_socket_stats(int socket,
 /*
  * Function to dump contents of all heaps
  */
-RTE_EXPORT_SYMBOL(rte_malloc_dump_heaps)
+RTE_EXPORT_SYMBOL(rte_malloc_dump_heaps);
 void
 rte_malloc_dump_heaps(FILE *f)
 {
@@ -292,7 +292,7 @@ rte_malloc_dump_heaps(FILE *f)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_get_socket)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_get_socket);
 int
 rte_malloc_heap_get_socket(const char *name)
 {
@@ -329,7 +329,7 @@ rte_malloc_heap_get_socket(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_socket_is_external)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_socket_is_external);
 int
 rte_malloc_heap_socket_is_external(int socket_id)
 {
@@ -358,7 +358,7 @@ rte_malloc_heap_socket_is_external(int socket_id)
 /*
  * Print stats on memory type. If type is NULL, info on all types is printed
  */
-RTE_EXPORT_SYMBOL(rte_malloc_dump_stats)
+RTE_EXPORT_SYMBOL(rte_malloc_dump_stats);
 void
 rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
 {
@@ -388,7 +388,7 @@ rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
 /*
  * Return the IO address of a virtual address obtained through rte_malloc
  */
-RTE_EXPORT_SYMBOL(rte_malloc_virt2iova)
+RTE_EXPORT_SYMBOL(rte_malloc_virt2iova);
 rte_iova_t
 rte_malloc_virt2iova(const void *addr)
 {
@@ -426,7 +426,7 @@ find_named_heap(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_add)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_add);
 int
 rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz)
@@ -482,7 +482,7 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_remove)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_remove);
 int
 rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
 {
@@ -598,21 +598,21 @@ sync_memory(const char *heap_name, void *va_addr, size_t len, bool attach)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_attach)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_attach);
 int
 rte_malloc_heap_memory_attach(const char *heap_name, void *va_addr, size_t len)
 {
 	return sync_memory(heap_name, va_addr, len, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_detach)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_detach);
 int
 rte_malloc_heap_memory_detach(const char *heap_name, void *va_addr, size_t len)
 {
 	return sync_memory(heap_name, va_addr, len, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_create)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_create);
 int
 rte_malloc_heap_create(const char *heap_name)
 {
@@ -664,7 +664,7 @@ rte_malloc_heap_create(const char *heap_name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_destroy)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_destroy);
 int
 rte_malloc_heap_destroy(const char *heap_name)
 {
diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 576a32a46c..d995113793 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -83,7 +83,7 @@ __rte_srand_lfsr258(uint64_t seed, struct rte_rand_state *state)
 	state->z5 = __rte_rand_lfsr258_gen_seed(&lcg_seed, 8388608UL);
 }
 
-RTE_EXPORT_SYMBOL(rte_srand)
+RTE_EXPORT_SYMBOL(rte_srand);
 void
 rte_srand(uint64_t seed)
 {
@@ -144,7 +144,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	return RTE_LCORE_VAR(rand_state);
 }
 
-RTE_EXPORT_SYMBOL(rte_rand)
+RTE_EXPORT_SYMBOL(rte_rand);
 uint64_t
 rte_rand(void)
 {
@@ -155,7 +155,7 @@ rte_rand(void)
 	return __rte_rand_lfsr258(state);
 }
 
-RTE_EXPORT_SYMBOL(rte_rand_max)
+RTE_EXPORT_SYMBOL(rte_rand_max);
 uint64_t
 rte_rand_max(uint64_t upper_bound)
 {
@@ -195,7 +195,7 @@ rte_rand_max(uint64_t upper_bound)
 	return res;
 }
 
-RTE_EXPORT_SYMBOL(rte_drand)
+RTE_EXPORT_SYMBOL(rte_drand);
 double
 rte_drand(void)
 {
diff --git a/lib/eal/common/rte_reciprocal.c b/lib/eal/common/rte_reciprocal.c
index 99c54df141..12b329484c 100644
--- a/lib/eal/common/rte_reciprocal.c
+++ b/lib/eal/common/rte_reciprocal.c
@@ -13,7 +13,7 @@
 
 #include "rte_reciprocal.h"
 
-RTE_EXPORT_SYMBOL(rte_reciprocal_value)
+RTE_EXPORT_SYMBOL(rte_reciprocal_value);
 struct rte_reciprocal rte_reciprocal_value(uint32_t d)
 {
 	struct rte_reciprocal R;
@@ -101,7 +101,7 @@ divide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, uint64_t *r)
 	return q1*b + q0;
 }
 
-RTE_EXPORT_SYMBOL(rte_reciprocal_value_u64)
+RTE_EXPORT_SYMBOL(rte_reciprocal_value_u64);
 struct rte_reciprocal_u64
 rte_reciprocal_value_u64(uint64_t d)
 {
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d2ac9d3f14..83cf5d3e12 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -121,7 +121,7 @@ rte_service_init(void)
 	return -ENOMEM;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_finalize)
+RTE_EXPORT_SYMBOL(rte_service_finalize);
 void
 rte_service_finalize(void)
 {
@@ -176,7 +176,7 @@ service_mt_safe(struct rte_service_spec_impl *s)
 	return !!(s->spec.capabilities & RTE_SERVICE_CAP_MT_SAFE);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_set_stats_enable)
+RTE_EXPORT_SYMBOL(rte_service_set_stats_enable);
 int32_t
 rte_service_set_stats_enable(uint32_t id, int32_t enabled)
 {
@@ -191,7 +191,7 @@ rte_service_set_stats_enable(uint32_t id, int32_t enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_set_runstate_mapped_check)
+RTE_EXPORT_SYMBOL(rte_service_set_runstate_mapped_check);
 int32_t
 rte_service_set_runstate_mapped_check(uint32_t id, int32_t enabled)
 {
@@ -206,14 +206,14 @@ rte_service_set_runstate_mapped_check(uint32_t id, int32_t enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_count)
+RTE_EXPORT_SYMBOL(rte_service_get_count);
 uint32_t
 rte_service_get_count(void)
 {
 	return rte_service_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_by_name)
+RTE_EXPORT_SYMBOL(rte_service_get_by_name);
 int32_t
 rte_service_get_by_name(const char *name, uint32_t *service_id)
 {
@@ -232,7 +232,7 @@ rte_service_get_by_name(const char *name, uint32_t *service_id)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_name)
+RTE_EXPORT_SYMBOL(rte_service_get_name);
 const char *
 rte_service_get_name(uint32_t id)
 {
@@ -241,7 +241,7 @@ rte_service_get_name(uint32_t id)
 	return s->spec.name;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_probe_capability)
+RTE_EXPORT_SYMBOL(rte_service_probe_capability);
 int32_t
 rte_service_probe_capability(uint32_t id, uint32_t capability)
 {
@@ -250,7 +250,7 @@ rte_service_probe_capability(uint32_t id, uint32_t capability)
 	return !!(s->spec.capabilities & capability);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_register)
+RTE_EXPORT_SYMBOL(rte_service_component_register);
 int32_t
 rte_service_component_register(const struct rte_service_spec *spec,
 			       uint32_t *id_ptr)
@@ -285,7 +285,7 @@ rte_service_component_register(const struct rte_service_spec *spec,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_unregister)
+RTE_EXPORT_SYMBOL(rte_service_component_unregister);
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
@@ -307,7 +307,7 @@ rte_service_component_unregister(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_runstate_set)
+RTE_EXPORT_SYMBOL(rte_service_component_runstate_set);
 int32_t
 rte_service_component_runstate_set(uint32_t id, uint32_t runstate)
 {
@@ -328,7 +328,7 @@ rte_service_component_runstate_set(uint32_t id, uint32_t runstate)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_runstate_set)
+RTE_EXPORT_SYMBOL(rte_service_runstate_set);
 int32_t
 rte_service_runstate_set(uint32_t id, uint32_t runstate)
 {
@@ -350,7 +350,7 @@ rte_service_runstate_set(uint32_t id, uint32_t runstate)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_runstate_get)
+RTE_EXPORT_SYMBOL(rte_service_runstate_get);
 int32_t
 rte_service_runstate_get(uint32_t id)
 {
@@ -461,7 +461,7 @@ service_run(uint32_t i, struct core_state *cs, const uint64_t *mapped_services,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_may_be_active)
+RTE_EXPORT_SYMBOL(rte_service_may_be_active);
 int32_t
 rte_service_may_be_active(uint32_t id)
 {
@@ -483,7 +483,7 @@ rte_service_may_be_active(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_run_iter_on_app_lcore)
+RTE_EXPORT_SYMBOL(rte_service_run_iter_on_app_lcore);
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
@@ -543,7 +543,7 @@ service_runner_func(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_may_be_active)
+RTE_EXPORT_SYMBOL(rte_service_lcore_may_be_active);
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
@@ -559,7 +559,7 @@ rte_service_lcore_may_be_active(uint32_t lcore)
 			       rte_memory_order_acquire);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_count)
+RTE_EXPORT_SYMBOL(rte_service_lcore_count);
 int32_t
 rte_service_lcore_count(void)
 {
@@ -573,7 +573,7 @@ rte_service_lcore_count(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_list)
+RTE_EXPORT_SYMBOL(rte_service_lcore_list);
 int32_t
 rte_service_lcore_list(uint32_t array[], uint32_t n)
 {
@@ -598,7 +598,7 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_count_services)
+RTE_EXPORT_SYMBOL(rte_service_lcore_count_services);
 int32_t
 rte_service_lcore_count_services(uint32_t lcore)
 {
@@ -612,7 +612,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	return rte_bitset_count_set(cs->mapped_services, RTE_SERVICE_NUM_MAX);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_start_with_defaults)
+RTE_EXPORT_SYMBOL(rte_service_start_with_defaults);
 int32_t
 rte_service_start_with_defaults(void)
 {
@@ -686,7 +686,7 @@ service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_map_lcore_set)
+RTE_EXPORT_SYMBOL(rte_service_map_lcore_set);
 int32_t
 rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
 {
@@ -695,7 +695,7 @@ rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
 	return service_update(id, lcore, &on, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_map_lcore_get)
+RTE_EXPORT_SYMBOL(rte_service_map_lcore_get);
 int32_t
 rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
 {
@@ -723,7 +723,7 @@ set_lcore_state(uint32_t lcore, int32_t state)
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_lcore_reset_all);
 int32_t
 rte_service_lcore_reset_all(void)
 {
@@ -750,7 +750,7 @@ rte_service_lcore_reset_all(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_add)
+RTE_EXPORT_SYMBOL(rte_service_lcore_add);
 int32_t
 rte_service_lcore_add(uint32_t lcore)
 {
@@ -774,7 +774,7 @@ rte_service_lcore_add(uint32_t lcore)
 	return rte_eal_wait_lcore(lcore);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_del)
+RTE_EXPORT_SYMBOL(rte_service_lcore_del);
 int32_t
 rte_service_lcore_del(uint32_t lcore)
 {
@@ -799,7 +799,7 @@ rte_service_lcore_del(uint32_t lcore)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_start)
+RTE_EXPORT_SYMBOL(rte_service_lcore_start);
 int32_t
 rte_service_lcore_start(uint32_t lcore)
 {
@@ -833,7 +833,7 @@ rte_service_lcore_start(uint32_t lcore)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_stop)
+RTE_EXPORT_SYMBOL(rte_service_lcore_stop);
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
@@ -974,7 +974,7 @@ attr_get_service_cycles(uint32_t service_id)
 	return attr_get(service_id, lcore_attr_get_service_cycles);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_attr_get)
+RTE_EXPORT_SYMBOL(rte_service_attr_get);
 int32_t
 rte_service_attr_get(uint32_t id, uint32_t attr_id, uint64_t *attr_value)
 {
@@ -1002,7 +1002,7 @@ rte_service_attr_get(uint32_t id, uint32_t attr_id, uint64_t *attr_value)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_attr_get)
+RTE_EXPORT_SYMBOL(rte_service_lcore_attr_get);
 int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
@@ -1027,7 +1027,7 @@ rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_service_attr_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_attr_reset_all);
 int32_t
 rte_service_attr_reset_all(uint32_t id)
 {
@@ -1046,7 +1046,7 @@ rte_service_attr_reset_all(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_attr_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_lcore_attr_reset_all);
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
@@ -1100,7 +1100,7 @@ service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 	fprintf(f, "\n");
 }
 
-RTE_EXPORT_SYMBOL(rte_service_dump)
+RTE_EXPORT_SYMBOL(rte_service_dump);
 int32_t
 rte_service_dump(FILE *f, uint32_t id)
 {
diff --git a/lib/eal/common/rte_version.c b/lib/eal/common/rte_version.c
index 627b89d4a8..529aedfa71 100644
--- a/lib/eal/common/rte_version.c
+++ b/lib/eal/common/rte_version.c
@@ -5,31 +5,31 @@
 #include <eal_export.h>
 #include <rte_version.h>
 
-RTE_EXPORT_SYMBOL(rte_version_prefix)
+RTE_EXPORT_SYMBOL(rte_version_prefix);
 const char *
 rte_version_prefix(void) { return RTE_VER_PREFIX; }
 
-RTE_EXPORT_SYMBOL(rte_version_year)
+RTE_EXPORT_SYMBOL(rte_version_year);
 unsigned int
 rte_version_year(void) { return RTE_VER_YEAR; }
 
-RTE_EXPORT_SYMBOL(rte_version_month)
+RTE_EXPORT_SYMBOL(rte_version_month);
 unsigned int
 rte_version_month(void) { return RTE_VER_MONTH; }
 
-RTE_EXPORT_SYMBOL(rte_version_minor)
+RTE_EXPORT_SYMBOL(rte_version_minor);
 unsigned int
 rte_version_minor(void) { return RTE_VER_MINOR; }
 
-RTE_EXPORT_SYMBOL(rte_version_suffix)
+RTE_EXPORT_SYMBOL(rte_version_suffix);
 const char *
 rte_version_suffix(void) { return RTE_VER_SUFFIX; }
 
-RTE_EXPORT_SYMBOL(rte_version_release)
+RTE_EXPORT_SYMBOL(rte_version_release);
 unsigned int
 rte_version_release(void) { return RTE_VER_RELEASE; }
 
-RTE_EXPORT_SYMBOL(rte_version)
+RTE_EXPORT_SYMBOL(rte_version);
 const char *
 rte_version(void)
 {
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index c1ab8d86d2..7da0e2914c 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -74,7 +74,7 @@ static struct flock wr_lock = {
 struct lcore_config lcore_config[RTE_MAX_LCORE];
 
 /* used by rte_rdtsc() */
-RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
+RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map);
 int rte_cycles_vmware_tsc_map;
 
 
@@ -517,7 +517,7 @@ sync_func(__rte_unused void *arg)
 	return 0;
 }
 /* Abstraction for port I/0 privilege */
-RTE_EXPORT_SYMBOL(rte_eal_iopl_init)
+RTE_EXPORT_SYMBOL(rte_eal_iopl_init);
 int
 rte_eal_iopl_init(void)
 {
@@ -538,7 +538,7 @@ static void rte_eal_init_alert(const char *msg)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -888,7 +888,7 @@ rte_eal_init(int argc, char **argv)
 	return fctret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -917,7 +917,7 @@ rte_eal_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
+RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev);
 int rte_eal_create_uio_dev(void)
 {
 	const struct internal_config *internal_conf =
@@ -925,20 +925,20 @@ int rte_eal_create_uio_dev(void)
 	return internal_conf->create_uio_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode);
 enum rte_intr_mode
 rte_eal_vfio_intr_mode(void)
 {
 	return RTE_INTR_MODE_NONE;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token);
 void
 rte_eal_vfio_get_vf_token(__rte_unused rte_uuid_t vf_token)
 {
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_setup_device)
+RTE_EXPORT_SYMBOL(rte_vfio_setup_device);
 int rte_vfio_setup_device(__rte_unused const char *sysfs_base,
 		      __rte_unused const char *dev_addr,
 		      __rte_unused int *vfio_dev_fd,
@@ -948,7 +948,7 @@ int rte_vfio_setup_device(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_release_device)
+RTE_EXPORT_SYMBOL(rte_vfio_release_device);
 int rte_vfio_release_device(__rte_unused const char *sysfs_base,
 			__rte_unused const char *dev_addr,
 			__rte_unused int fd)
@@ -957,33 +957,33 @@ int rte_vfio_release_device(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_enable)
+RTE_EXPORT_SYMBOL(rte_vfio_enable);
 int rte_vfio_enable(__rte_unused const char *modname)
 {
 	rte_errno = ENOTSUP;
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_is_enabled);
 int rte_vfio_is_enabled(__rte_unused const char *modname)
 {
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled);
 int rte_vfio_noiommu_is_enabled(void)
 {
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_clear_group)
+RTE_EXPORT_SYMBOL(rte_vfio_clear_group);
 int rte_vfio_clear_group(__rte_unused int vfio_group_fd)
 {
 	rte_errno = ENOTSUP;
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_num)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_num);
 int
 rte_vfio_get_group_num(__rte_unused const char *sysfs_base,
 		       __rte_unused const char *dev_addr,
@@ -993,7 +993,7 @@ rte_vfio_get_group_num(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd);
 int
 rte_vfio_get_container_fd(void)
 {
@@ -1001,7 +1001,7 @@ rte_vfio_get_container_fd(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd);
 int
 rte_vfio_get_group_fd(__rte_unused int iommu_group_num)
 {
@@ -1009,7 +1009,7 @@ rte_vfio_get_group_fd(__rte_unused int iommu_group_num)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_create)
+RTE_EXPORT_SYMBOL(rte_vfio_container_create);
 int
 rte_vfio_container_create(void)
 {
@@ -1017,7 +1017,7 @@ rte_vfio_container_create(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_destroy)
+RTE_EXPORT_SYMBOL(rte_vfio_container_destroy);
 int
 rte_vfio_container_destroy(__rte_unused int container_fd)
 {
@@ -1025,7 +1025,7 @@ rte_vfio_container_destroy(__rte_unused int container_fd)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind);
 int
 rte_vfio_container_group_bind(__rte_unused int container_fd,
 		__rte_unused int iommu_group_num)
@@ -1034,7 +1034,7 @@ rte_vfio_container_group_bind(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind);
 int
 rte_vfio_container_group_unbind(__rte_unused int container_fd,
 		__rte_unused int iommu_group_num)
@@ -1043,7 +1043,7 @@ rte_vfio_container_group_unbind(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -1054,7 +1054,7 @@ rte_vfio_container_dma_map(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c03e281e67..ae318313de 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -207,7 +207,7 @@ eal_alarm_callback(void *arg __rte_unused)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -260,7 +260,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c
index 737d1040ea..ca2b721d09 100644
--- a/lib/eal/freebsd/eal_dev.c
+++ b/lib/eal/freebsd/eal_dev.c
@@ -8,7 +8,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -16,7 +16,7 @@ rte_dev_event_monitor_start(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -24,7 +24,7 @@ rte_dev_event_monitor_stop(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -32,7 +32,7 @@ rte_dev_hotplug_handle_enable(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 5c3ab6699e..72865b7be5 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -81,7 +81,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb, void *cb_arg)
@@ -213,7 +213,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 				rte_intr_callback_fn cb_fn, void *cb_arg,
@@ -270,7 +270,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -358,7 +358,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -371,7 +371,7 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -413,7 +413,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -454,7 +454,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
@@ -656,7 +656,7 @@ rte_eal_intr_init(void)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
 		int epfd, int op, unsigned int vec, void *data)
@@ -670,7 +670,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -680,14 +680,14 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
@@ -695,7 +695,7 @@ rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -703,7 +703,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -711,7 +711,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 		int maxevents, int timeout)
@@ -724,7 +724,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -737,7 +737,7 @@ rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 {
@@ -749,21 +749,21 @@ rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index 6d3d46a390..37b7852430 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -36,7 +36,7 @@ uint64_t eal_get_baseaddr(void)
 /*
  * Get physical address of any mapped virtual address in the current process.
  */
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virtaddr)
 {
@@ -45,7 +45,7 @@ rte_mem_virt2phy(const void *virtaddr)
 	(void)virtaddr;
 	return RTE_BAD_IOVA;
 }
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virtaddr)
 {
@@ -297,7 +297,7 @@ rte_eal_hugepage_attach(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c
index 7ed76ed796..53755f6b54 100644
--- a/lib/eal/freebsd/eal_thread.c
+++ b/lib/eal/freebsd/eal_thread.c
@@ -26,7 +26,7 @@
 #include "eal_thread.h"
 
 /* require calling thread tid by gettid() */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int rte_sys_gettid(void)
 {
 	long lwpid;
@@ -34,7 +34,7 @@ int rte_sys_gettid(void)
 	return (int)lwpid;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
 	char truncated[RTE_THREAD_NAME_SIZE];
diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c
index d21ffa2694..46c90e3b03 100644
--- a/lib/eal/freebsd/eal_timer.c
+++ b/lib/eal/freebsd/eal_timer.c
@@ -24,7 +24,7 @@
 #warning HPET is not supported in FreeBSD
 #endif
 
-RTE_EXPORT_SYMBOL(eal_timer_source)
+RTE_EXPORT_SYMBOL(eal_timer_source);
 enum timer_source eal_timer_source = EAL_TIMER_TSC;
 
 uint64_t
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 52efb8626b..e3b3f99830 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -79,7 +79,7 @@ static struct flock wr_lock = {
 struct lcore_config lcore_config[RTE_MAX_LCORE];
 
 /* used by rte_rdtsc() */
-RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
+RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map);
 int rte_cycles_vmware_tsc_map;
 
 
@@ -828,7 +828,7 @@ sync_func(__rte_unused void *arg)
  * iopl() call is mostly for the i386 architecture. For other architectures,
  * return -1 to indicate IO privilege can't be changed in this way.
  */
-RTE_EXPORT_SYMBOL(rte_eal_iopl_init)
+RTE_EXPORT_SYMBOL(rte_eal_iopl_init);
 int
 rte_eal_iopl_init(void)
 {
@@ -924,7 +924,7 @@ eal_worker_thread_create(unsigned int lcore_id)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -1305,7 +1305,7 @@ mark_freeable(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -1348,7 +1348,7 @@ rte_eal_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
+RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev);
 int rte_eal_create_uio_dev(void)
 {
 	const struct internal_config *internal_conf =
@@ -1357,7 +1357,7 @@ int rte_eal_create_uio_dev(void)
 	return internal_conf->create_uio_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode);
 enum rte_intr_mode
 rte_eal_vfio_intr_mode(void)
 {
@@ -1367,7 +1367,7 @@ rte_eal_vfio_intr_mode(void)
 	return internal_conf->vfio_intr_mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token);
 void
 rte_eal_vfio_get_vf_token(rte_uuid_t vf_token)
 {
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index eb6a21d4f0..4bb5117cdc 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -135,7 +135,7 @@ eal_alarm_callback(void *arg __rte_unused)
 	rte_spinlock_unlock(&alarm_list_lk);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -200,7 +200,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 33b78464d5..c1801cd520 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -304,7 +304,7 @@ dev_uev_handler(__rte_unused void *param)
 	free(uevent.devname);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -355,7 +355,7 @@ rte_dev_event_monitor_start(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -424,7 +424,7 @@ dev_sigbus_handler_unregister(void)
 	return rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -440,7 +440,7 @@ rte_dev_hotplug_handle_enable(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 4ec78de82c..c705b2617e 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -483,7 +483,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb, void *cb_arg)
@@ -568,7 +568,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 				rte_intr_callback_fn cb_fn, void *cb_arg,
@@ -620,7 +620,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -687,7 +687,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -700,7 +700,7 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -781,7 +781,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
  * auto-masked. In fact, for interrupt handle types VFIO_MSIX and VFIO_MSI,
  * this function is no-op.
  */
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
@@ -834,7 +834,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -1313,7 +1313,7 @@ eal_init_tls_epfd(void)
 	return pfd;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
@@ -1386,7 +1386,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	       int maxevents, int timeout)
@@ -1394,7 +1394,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return eal_epoll_wait(epfd, events, maxevents, timeout, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -1419,7 +1419,7 @@ eal_epoll_data_safe_free(struct rte_epoll_event *ev)
 	ev->epfd = -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd,
 	      struct rte_epoll_event *event)
@@ -1461,7 +1461,7 @@ rte_epoll_ctl(int epfd, int op, int fd,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		int op, unsigned int vec, void *data)
@@ -1527,7 +1527,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
@@ -1546,7 +1546,7 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -1594,7 +1594,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
@@ -1609,14 +1609,14 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	rte_intr_max_intr_set(intr_handle, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
 	return !(!rte_intr_nb_efd_get(intr_handle));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -1627,7 +1627,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 				rte_intr_nb_efd_get(intr_handle));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -1640,7 +1640,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index e433c1afee..0c6fd8799d 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -89,7 +89,7 @@ uint64_t eal_get_baseaddr(void)
 /*
  * Get physical address of any mapped virtual address in the current process.
  */
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virtaddr)
 {
@@ -147,7 +147,7 @@ rte_mem_virt2phy(const void *virtaddr)
 	return physaddr;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virtaddr)
 {
@@ -1688,7 +1688,7 @@ rte_eal_hugepage_attach(void)
 			eal_hugepage_attach();
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c
index c0056f825d..530fb265ba 100644
--- a/lib/eal/linux/eal_thread.c
+++ b/lib/eal/linux/eal_thread.c
@@ -17,13 +17,13 @@
 #include "eal_private.h"
 
 /* require calling thread tid by gettid() */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int rte_sys_gettid(void)
 {
 	return (int)syscall(SYS_gettid);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
 	int ret = ENOSYS;
diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c
index 0e670a0af6..3bb91c682a 100644
--- a/lib/eal/linux/eal_timer.c
+++ b/lib/eal/linux/eal_timer.c
@@ -19,7 +19,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(eal_timer_source)
+RTE_EXPORT_SYMBOL(eal_timer_source);
 enum timer_source eal_timer_source = EAL_TIMER_HPET;
 
 #ifdef RTE_LIBEAL_USE_HPET
@@ -95,7 +95,7 @@ hpet_msb_inc(__rte_unused void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_hpet_hz)
+RTE_EXPORT_SYMBOL(rte_get_hpet_hz);
 uint64_t
 rte_get_hpet_hz(void)
 {
@@ -108,7 +108,7 @@ rte_get_hpet_hz(void)
 	return eal_hpet_resolution_hz;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_hpet_cycles)
+RTE_EXPORT_SYMBOL(rte_get_hpet_cycles);
 uint64_t
 rte_get_hpet_cycles(void)
 {
@@ -135,7 +135,7 @@ rte_get_hpet_cycles(void)
  * Open and mmap /dev/hpet (high precision event timer) that will
  * provide our time reference.
  */
-RTE_EXPORT_SYMBOL(rte_eal_hpet_init)
+RTE_EXPORT_SYMBOL(rte_eal_hpet_init);
 int
 rte_eal_hpet_init(int make_default)
 {
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 805f0ff92c..1cd6914bb2 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -517,7 +517,7 @@ get_vfio_cfg_by_container_fd(int container_fd)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd);
 int
 rte_vfio_get_group_fd(int iommu_group_num)
 {
@@ -716,7 +716,7 @@ vfio_sync_default_container(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_clear_group)
+RTE_EXPORT_SYMBOL(rte_vfio_clear_group);
 int
 rte_vfio_clear_group(int vfio_group_fd)
 {
@@ -740,7 +740,7 @@ rte_vfio_clear_group(int vfio_group_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_setup_device)
+RTE_EXPORT_SYMBOL(rte_vfio_setup_device);
 int
 rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -994,7 +994,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_release_device)
+RTE_EXPORT_SYMBOL(rte_vfio_release_device);
 int
 rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
 		    int vfio_dev_fd)
@@ -1083,7 +1083,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_enable)
+RTE_EXPORT_SYMBOL(rte_vfio_enable);
 int
 rte_vfio_enable(const char *modname)
 {
@@ -1160,7 +1160,7 @@ rte_vfio_enable(const char *modname)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_is_enabled);
 int
 rte_vfio_is_enabled(const char *modname)
 {
@@ -1243,7 +1243,7 @@ vfio_set_iommu_type(int vfio_container_fd)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vfio_get_device_info, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vfio_get_device_info, 24.03);
 int
 rte_vfio_get_device_info(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -1303,7 +1303,7 @@ vfio_has_supported_extensions(int vfio_container_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd);
 int
 rte_vfio_get_container_fd(void)
 {
@@ -1375,7 +1375,7 @@ rte_vfio_get_container_fd(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_num)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_num);
 int
 rte_vfio_get_group_num(const char *sysfs_base,
 		const char *dev_addr, int *iommu_group_num)
@@ -2045,7 +2045,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled);
 int
 rte_vfio_noiommu_is_enabled(void)
 {
@@ -2078,7 +2078,7 @@ rte_vfio_noiommu_is_enabled(void)
 	return c == 'Y';
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_create)
+RTE_EXPORT_SYMBOL(rte_vfio_container_create);
 int
 rte_vfio_container_create(void)
 {
@@ -2104,7 +2104,7 @@ rte_vfio_container_create(void)
 	return vfio_cfgs[i].vfio_container_fd;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_destroy)
+RTE_EXPORT_SYMBOL(rte_vfio_container_destroy);
 int
 rte_vfio_container_destroy(int container_fd)
 {
@@ -2130,7 +2130,7 @@ rte_vfio_container_destroy(int container_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind);
 int
 rte_vfio_container_group_bind(int container_fd, int iommu_group_num)
 {
@@ -2145,7 +2145,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num)
 	return vfio_get_group_fd(vfio_cfg, iommu_group_num);
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind);
 int
 rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
 {
@@ -2186,7 +2186,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova,
 		uint64_t len)
@@ -2207,7 +2207,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova,
 	return container_dma_map(vfio_cfg, vaddr, iova, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova,
 		uint64_t len)
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 19fbf37e3e..9ad981f8fe 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -62,7 +62,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -80,7 +80,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -89,7 +89,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/loongarch/rte_hypervisor.c b/lib/eal/loongarch/rte_hypervisor.c
index 7dd70fe90c..0a463e98b6 100644
--- a/lib/eal/loongarch/rte_hypervisor.c
+++ b/lib/eal/loongarch/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/loongarch/rte_power_intrinsics.c b/lib/eal/loongarch/rte_power_intrinsics.c
index e1a2b2d7ed..6c8e063609 100644
--- a/lib/eal/loongarch/rte_power_intrinsics.c
+++ b/lib/eal/loongarch/rte_power_intrinsics.c
@@ -10,7 +10,7 @@
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -24,7 +24,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -36,7 +36,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -45,7 +45,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index a78a7d1b53..8569fdb3f7 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -86,7 +86,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -104,7 +104,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -113,7 +113,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/ppc/rte_hypervisor.c b/lib/eal/ppc/rte_hypervisor.c
index 51b224fb94..45e6ef667b 100644
--- a/lib/eal/ppc/rte_hypervisor.c
+++ b/lib/eal/ppc/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/ppc/rte_power_intrinsics.c b/lib/eal/ppc/rte_power_intrinsics.c
index d9d8eb8d51..de1ebaad52 100644
--- a/lib/eal/ppc/rte_power_intrinsics.c
+++ b/lib/eal/ppc/rte_power_intrinsics.c
@@ -10,7 +10,7 @@
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -24,7 +24,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -36,7 +36,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -45,7 +45,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
index 4dec491b0d..815028220c 100644
--- a/lib/eal/riscv/rte_cpuflags.c
+++ b/lib/eal/riscv/rte_cpuflags.c
@@ -91,7 +91,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -109,7 +109,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -118,7 +118,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/riscv/rte_hypervisor.c b/lib/eal/riscv/rte_hypervisor.c
index 73020f7753..acc698b8a4 100644
--- a/lib/eal/riscv/rte_hypervisor.c
+++ b/lib/eal/riscv/rte_hypervisor.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/riscv/rte_power_intrinsics.c b/lib/eal/riscv/rte_power_intrinsics.c
index 11eff53ff2..9a84447a20 100644
--- a/lib/eal/riscv/rte_power_intrinsics.c
+++ b/lib/eal/riscv/rte_power_intrinsics.c
@@ -12,7 +12,7 @@
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		  const uint64_t tsc_timestamp)
@@ -26,7 +26,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -38,7 +38,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -50,7 +50,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 			const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/unix/eal_debug.c b/lib/eal/unix/eal_debug.c
index e3689531e4..86e02b9665 100644
--- a/lib/eal/unix/eal_debug.c
+++ b/lib/eal/unix/eal_debug.c
@@ -47,7 +47,7 @@ static char *safe_itoa(long val, char *buf, size_t len, unsigned int radix)
  * Most of libc is therefore not safe, include RTE_LOG (calls syslog);
  * backtrace_symbols (calls malloc), etc.
  */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void rte_dump_stack(void)
 {
 	void *func[BACKTRACE_SIZE];
@@ -124,7 +124,7 @@ void rte_dump_stack(void)
 #else /* !RTE_BACKTRACE */
 
 /* stub if not enabled */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void rte_dump_stack(void) { }
 
 #endif /* RTE_BACKTRACE */
diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c
index 6b8451cd3e..b67cfc0b7b 100644
--- a/lib/eal/unix/eal_filesystem.c
+++ b/lib/eal/unix/eal_filesystem.c
@@ -78,7 +78,7 @@ int eal_create_runtime_dir(void)
 }
 
 /* parse a sysfs (or other) file containing one integer value */
-RTE_EXPORT_SYMBOL(eal_parse_sysfs_value)
+RTE_EXPORT_SYMBOL(eal_parse_sysfs_value);
 int eal_parse_sysfs_value(const char *filename, unsigned long *val)
 {
 	FILE *f;
diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c
index f2c16fb8a7..1627e62de9 100644
--- a/lib/eal/unix/eal_firmware.c
+++ b/lib/eal/unix/eal_firmware.c
@@ -147,7 +147,7 @@ firmware_read(const char *name, void **buf, size_t *bufsz)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read);
 int
 rte_firmware_read(const char *name, void **buf, size_t *bufsz)
 {
diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c
index 55b647c736..4ba28b714d 100644
--- a/lib/eal/unix/eal_unix_memory.c
+++ b/lib/eal/unix/eal_unix_memory.c
@@ -110,7 +110,7 @@ mem_rte_to_sys_prot(int prot)
 	return sys_prot;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map);
 void *
 rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	int fd, uint64_t offset)
@@ -134,14 +134,14 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	return mem_map(requested_addr, size, sys_prot, sys_flags, fd, offset);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap);
 int
 rte_mem_unmap(void *virt, size_t size)
 {
 	return mem_unmap(virt, size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size);
 size_t
 rte_mem_page_size(void)
 {
@@ -165,7 +165,7 @@ rte_mem_page_size(void)
 	return page_size;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock);
 int
 rte_mem_lock(const void *virt, size_t size)
 {
diff --git a/lib/eal/unix/eal_unix_timer.c b/lib/eal/unix/eal_unix_timer.c
index 3dbcf61e90..27679601cf 100644
--- a/lib/eal/unix/eal_unix_timer.c
+++ b/lib/eal/unix/eal_unix_timer.c
@@ -8,7 +8,7 @@
 #include <eal_export.h>
 #include <rte_cycles.h>
 
-RTE_EXPORT_SYMBOL(rte_delay_us_sleep)
+RTE_EXPORT_SYMBOL(rte_delay_us_sleep);
 void
 rte_delay_us_sleep(unsigned int us)
 {
diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index 950c0848ba..c1bb4d7091 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -119,7 +119,7 @@ thread_start_wrapper(void *arg)
 }
 #endif
 
-RTE_EXPORT_SYMBOL(rte_thread_create)
+RTE_EXPORT_SYMBOL(rte_thread_create);
 int
 rte_thread_create(rte_thread_t *thread_id,
 		const rte_thread_attr_t *thread_attr,
@@ -228,7 +228,7 @@ rte_thread_create(rte_thread_t *thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_join)
+RTE_EXPORT_SYMBOL(rte_thread_join);
 int
 rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 {
@@ -251,21 +251,21 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_detach)
+RTE_EXPORT_SYMBOL(rte_thread_detach);
 int
 rte_thread_detach(rte_thread_t thread_id)
 {
 	return pthread_detach((pthread_t)thread_id.opaque_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_equal)
+RTE_EXPORT_SYMBOL(rte_thread_equal);
 int
 rte_thread_equal(rte_thread_t t1, rte_thread_t t2)
 {
 	return pthread_equal((pthread_t)t1.opaque_id, (pthread_t)t2.opaque_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_self)
+RTE_EXPORT_SYMBOL(rte_thread_self);
 rte_thread_t
 rte_thread_self(void)
 {
@@ -278,7 +278,7 @@ rte_thread_self(void)
 	return thread_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_priority)
+RTE_EXPORT_SYMBOL(rte_thread_get_priority);
 int
 rte_thread_get_priority(rte_thread_t thread_id,
 	enum rte_thread_priority *priority)
@@ -301,7 +301,7 @@ rte_thread_get_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_set_priority);
 int
 rte_thread_set_priority(rte_thread_t thread_id,
 	enum rte_thread_priority priority)
@@ -323,7 +323,7 @@ rte_thread_set_priority(rte_thread_t thread_id,
 		&param);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_create)
+RTE_EXPORT_SYMBOL(rte_thread_key_create);
 int
 rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *))
 {
@@ -346,7 +346,7 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *))
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_delete)
+RTE_EXPORT_SYMBOL(rte_thread_key_delete);
 int
 rte_thread_key_delete(rte_thread_key key)
 {
@@ -369,7 +369,7 @@ rte_thread_key_delete(rte_thread_key key)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_set)
+RTE_EXPORT_SYMBOL(rte_thread_value_set);
 int
 rte_thread_value_set(rte_thread_key key, const void *value)
 {
@@ -390,7 +390,7 @@ rte_thread_value_set(rte_thread_key key, const void *value)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_get)
+RTE_EXPORT_SYMBOL(rte_thread_value_get);
 void *
 rte_thread_value_get(rte_thread_key key)
 {
@@ -402,7 +402,7 @@ rte_thread_value_get(rte_thread_key key)
 	return pthread_getspecific(key->thread_index);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id);
 int
 rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		const rte_cpuset_t *cpuset)
@@ -411,7 +411,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		sizeof(*cpuset), cpuset);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id);
 int
 rte_thread_get_affinity_by_id(rte_thread_t thread_id,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 4f0a164d9b..a38c69ddfd 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -75,7 +75,7 @@ eal_proc_type_detect(void)
 	return ptype;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_disable)
+RTE_EXPORT_SYMBOL(rte_mp_disable);
 bool
 rte_mp_disable(void)
 {
@@ -191,12 +191,12 @@ rte_eal_init_alert(const char *msg)
  * until eal_common_trace.c can be compiled.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05);
 RTE_DEFINE_PER_LCORE(volatile int, trace_point_sz);
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05);
 RTE_DEFINE_PER_LCORE(void *, trace_mem);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05);
 void
 __rte_trace_mem_per_thread_alloc(void)
 {
@@ -207,7 +207,7 @@ trace_mem_per_thread_free(void)
 {
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05);
 void
 __rte_trace_point_emit_field(size_t sz, const char *field,
 	const char *type)
@@ -217,7 +217,7 @@ __rte_trace_point_emit_field(size_t sz, const char *field,
 	RTE_SET_USED(type);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05);
 int
 __rte_trace_point_register(rte_trace_point_t *trace, const char *name,
 	void (*register_fn)(void))
@@ -228,7 +228,7 @@ __rte_trace_point_register(rte_trace_point_t *trace, const char *name,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -246,7 +246,7 @@ rte_eal_cleanup(void)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -520,7 +520,7 @@ eal_asprintf(char **buffer, const char *format, ...)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -531,7 +531,7 @@ rte_vfio_container_dma_map(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -542,7 +542,7 @@ rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read);
 int
 rte_firmware_read(__rte_unused const char *name,
 			__rte_unused void **buf,
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index 0b11d331dc..11d35a7828 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -84,7 +84,7 @@ alarm_task_exec(void *arg)
 	task->ret = alarm_set(task->entry, task->deadline);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -186,7 +186,7 @@ alarm_matches(const struct alarm_entry *ap,
 	return (ap->cb_fn == cb_fn) && (any_arg || ap->cb_arg == cb_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c
index a4549e1179..7355826cb8 100644
--- a/lib/eal/windows/eal_debug.c
+++ b/lib/eal/windows/eal_debug.c
@@ -15,7 +15,7 @@
 #define BACKTRACE_SIZE 256
 
 /* dump the stack of the calling core */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void
 rte_dump_stack(void)
 {
diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c
index 9c7463edf2..4c74162ca0 100644
--- a/lib/eal/windows/eal_dev.c
+++ b/lib/eal/windows/eal_dev.c
@@ -7,7 +7,7 @@
 
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -15,7 +15,7 @@ rte_dev_event_monitor_start(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -23,7 +23,7 @@ rte_dev_event_monitor_stop(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -31,7 +31,7 @@ rte_dev_hotplug_handle_enable(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c
index 5ff30c7631..14b0cfeee8 100644
--- a/lib/eal/windows/eal_interrupts.c
+++ b/lib/eal/windows/eal_interrupts.c
@@ -109,14 +109,14 @@ rte_eal_intr_init(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int
 rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(__rte_unused struct rte_intr_handle *intr_handle,
 		__rte_unused int epfd, __rte_unused int op,
@@ -150,7 +150,7 @@ eal_intr_thread_cancel(void)
 	WaitForSingleObject(intr_thread_handle, INFINITE);
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -159,7 +159,7 @@ rte_intr_callback_register(
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -169,7 +169,7 @@ rte_intr_callback_unregister_pending(
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -178,7 +178,7 @@ rte_intr_callback_unregister(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -187,28 +187,28 @@ rte_intr_callback_unregister_sync(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -218,14 +218,14 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
@@ -234,7 +234,7 @@ rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -243,7 +243,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -252,7 +252,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 		int maxevents, int timeout)
@@ -265,7 +265,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -278,7 +278,7 @@ rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 {
@@ -290,14 +290,14 @@ rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 9f85191016..4bc251598e 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -213,7 +213,7 @@ eal_mem_virt2iova_cleanup(void)
 		CloseHandle(virt2phys_device);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virt)
 {
@@ -234,7 +234,7 @@ rte_mem_virt2phy(const void *virt)
 	return phys.QuadPart;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virt)
 {
@@ -250,7 +250,7 @@ rte_mem_virt2iova(const void *virt)
 }
 
 /* Always using physical addresses under Windows if they can be obtained. */
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
@@ -522,7 +522,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map);
 void *
 rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	int fd, uint64_t offset)
@@ -606,7 +606,7 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	return virt;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap);
 int
 rte_mem_unmap(void *virt, size_t size)
 {
@@ -630,7 +630,7 @@ eal_get_baseaddr(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size);
 size_t
 rte_mem_page_size(void)
 {
@@ -642,7 +642,7 @@ rte_mem_page_size(void)
 	return info.dwPageSize;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock);
 int
 rte_mem_lock(const void *virt, size_t size)
 {
diff --git a/lib/eal/windows/eal_mp.c b/lib/eal/windows/eal_mp.c
index 6703355318..48653ef02a 100644
--- a/lib/eal/windows/eal_mp.c
+++ b/lib/eal/windows/eal_mp.c
@@ -25,7 +25,7 @@ rte_mp_channel_cleanup(void)
 	EAL_LOG_NOT_IMPLEMENTED();
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_register)
+RTE_EXPORT_SYMBOL(rte_mp_action_register);
 int
 rte_mp_action_register(const char *name, rte_mp_t action)
 {
@@ -35,7 +35,7 @@ rte_mp_action_register(const char *name, rte_mp_t action)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_unregister)
+RTE_EXPORT_SYMBOL(rte_mp_action_unregister);
 void
 rte_mp_action_unregister(const char *name)
 {
@@ -43,7 +43,7 @@ rte_mp_action_unregister(const char *name)
 	EAL_LOG_NOT_IMPLEMENTED();
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_sendmsg)
+RTE_EXPORT_SYMBOL(rte_mp_sendmsg);
 int
 rte_mp_sendmsg(struct rte_mp_msg *msg)
 {
@@ -52,7 +52,7 @@ rte_mp_sendmsg(struct rte_mp_msg *msg)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_sync)
+RTE_EXPORT_SYMBOL(rte_mp_request_sync);
 int
 rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	const struct timespec *ts)
@@ -64,7 +64,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_async)
+RTE_EXPORT_SYMBOL(rte_mp_request_async);
 int
 rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 		rte_mp_async_reply_t clb)
@@ -76,7 +76,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_reply)
+RTE_EXPORT_SYMBOL(rte_mp_reply);
 int
 rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
 {
diff --git a/lib/eal/windows/eal_thread.c b/lib/eal/windows/eal_thread.c
index 3eeb94a589..811ae007ba 100644
--- a/lib/eal/windows/eal_thread.c
+++ b/lib/eal/windows/eal_thread.c
@@ -72,7 +72,7 @@ eal_thread_ack_command(void)
 }
 
 /* get current thread ID */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int
 rte_sys_gettid(void)
 {
diff --git a/lib/eal/windows/eal_timer.c b/lib/eal/windows/eal_timer.c
index 33cbac6a03..ccaa743b5b 100644
--- a/lib/eal/windows/eal_timer.c
+++ b/lib/eal/windows/eal_timer.c
@@ -15,7 +15,7 @@
 #define US_PER_SEC 1E6
 #define CYC_PER_100KHZ 1E5
 
-RTE_EXPORT_SYMBOL(rte_delay_us_sleep)
+RTE_EXPORT_SYMBOL(rte_delay_us_sleep);
 void
 rte_delay_us_sleep(unsigned int us)
 {
diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c
index 85e5a57346..e1bae54ec8 100644
--- a/lib/eal/windows/rte_thread.c
+++ b/lib/eal/windows/rte_thread.c
@@ -182,7 +182,7 @@ thread_func_wrapper(void *arg)
 	return (DWORD)ctx.thread_func(ctx.routine_args);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_create)
+RTE_EXPORT_SYMBOL(rte_thread_create);
 int
 rte_thread_create(rte_thread_t *thread_id,
 		  const rte_thread_attr_t *thread_attr,
@@ -260,7 +260,7 @@ rte_thread_create(rte_thread_t *thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_join)
+RTE_EXPORT_SYMBOL(rte_thread_join);
 int
 rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 {
@@ -301,7 +301,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_detach)
+RTE_EXPORT_SYMBOL(rte_thread_detach);
 int
 rte_thread_detach(rte_thread_t thread_id)
 {
@@ -311,14 +311,14 @@ rte_thread_detach(rte_thread_t thread_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_equal)
+RTE_EXPORT_SYMBOL(rte_thread_equal);
 int
 rte_thread_equal(rte_thread_t t1, rte_thread_t t2)
 {
 	return t1.opaque_id == t2.opaque_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_self)
+RTE_EXPORT_SYMBOL(rte_thread_self);
 rte_thread_t
 rte_thread_self(void)
 {
@@ -329,7 +329,7 @@ rte_thread_self(void)
 	return thread_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void
 rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
@@ -371,7 +371,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 		EAL_LOG(DEBUG, "Failed to set thread name");
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_priority)
+RTE_EXPORT_SYMBOL(rte_thread_get_priority);
 int
 rte_thread_get_priority(rte_thread_t thread_id,
 	enum rte_thread_priority *priority)
@@ -411,7 +411,7 @@ rte_thread_get_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_set_priority);
 int
 rte_thread_set_priority(rte_thread_t thread_id,
 			enum rte_thread_priority priority)
@@ -450,7 +450,7 @@ rte_thread_set_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_create)
+RTE_EXPORT_SYMBOL(rte_thread_key_create);
 int
 rte_thread_key_create(rte_thread_key *key,
 		__rte_unused void (*destructor)(void *))
@@ -471,7 +471,7 @@ rte_thread_key_create(rte_thread_key *key,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_delete)
+RTE_EXPORT_SYMBOL(rte_thread_key_delete);
 int
 rte_thread_key_delete(rte_thread_key key)
 {
@@ -490,7 +490,7 @@ rte_thread_key_delete(rte_thread_key key)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_set)
+RTE_EXPORT_SYMBOL(rte_thread_value_set);
 int
 rte_thread_value_set(rte_thread_key key, const void *value)
 {
@@ -511,7 +511,7 @@ rte_thread_value_set(rte_thread_key key, const void *value)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_get)
+RTE_EXPORT_SYMBOL(rte_thread_value_get);
 void *
 rte_thread_value_get(rte_thread_key key)
 {
@@ -531,7 +531,7 @@ rte_thread_value_get(rte_thread_key key)
 	return output;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id);
 int
 rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		const rte_cpuset_t *cpuset)
@@ -572,7 +572,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id);
 int
 rte_thread_get_affinity_by_id(rte_thread_t thread_id,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
index 5d1f352e04..90495b19a4 100644
--- a/lib/eal/x86/rte_cpuflags.c
+++ b/lib/eal/x86/rte_cpuflags.c
@@ -149,7 +149,7 @@ struct feature_entry rte_cpu_feature_table[] = {
 	FEAT_DEF(INVTSC, 0x80000007, 0, RTE_REG_EDX,  8)
 };
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -192,7 +192,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return feat->value;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -201,7 +201,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/x86/rte_hypervisor.c b/lib/eal/x86/rte_hypervisor.c
index 0c649c1d41..6756cd10c0 100644
--- a/lib/eal/x86/rte_hypervisor.c
+++ b/lib/eal/x86/rte_hypervisor.c
@@ -14,7 +14,7 @@
 /* See http://lwn.net/Articles/301888/ */
 #define HYPERVISOR_INFO_LEAF 0x40000000
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 1cb2e908c0..70fe5deb5b 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -159,7 +159,7 @@ __check_val_size(const uint8_t sz)
  * For more information about usage of these instructions, please refer to
  * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -221,7 +221,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
  * information about usage of this instruction, please refer to Intel(R) 64 and
  * IA-32 Architectures Software Developer's Manual.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -266,7 +266,7 @@ RTE_INIT(rte_power_intrinsics_init) {
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -316,7 +316,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/x86/rte_spinlock.c b/lib/eal/x86/rte_spinlock.c
index da783919e5..8f000366aa 100644
--- a/lib/eal/x86/rte_spinlock.c
+++ b/lib/eal/x86/rte_spinlock.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include "rte_cpuflags.h"
 
-RTE_EXPORT_SYMBOL(rte_rtm_supported)
+RTE_EXPORT_SYMBOL(rte_rtm_supported);
 uint8_t rte_rtm_supported; /* cache the flag to avoid the overhead
 			      of the rte_cpu_get_flag_enabled function */
 
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index b0e44e5c51..066e35ae4b 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -497,7 +497,7 @@ efd_search_hash(struct rte_efd_table * const table,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_create)
+RTE_EXPORT_SYMBOL(rte_efd_create);
 struct rte_efd_table *
 rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
 		uint64_t online_cpu_socket_bitmask, uint8_t offline_cpu_socket)
@@ -722,7 +722,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_find_existing)
+RTE_EXPORT_SYMBOL(rte_efd_find_existing);
 struct rte_efd_table *
 rte_efd_find_existing(const char *name)
 {
@@ -749,7 +749,7 @@ rte_efd_find_existing(const char *name)
 	return table;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_free)
+RTE_EXPORT_SYMBOL(rte_efd_free);
 void
 rte_efd_free(struct rte_efd_table *table)
 {
@@ -1166,7 +1166,7 @@ efd_compute_update(struct rte_efd_table * const table,
 	return RTE_EFD_UPDATE_FAILED;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_update)
+RTE_EXPORT_SYMBOL(rte_efd_update);
 int
 rte_efd_update(struct rte_efd_table * const table, const unsigned int socket_id,
 		const void *key, const efd_value_t value)
@@ -1190,7 +1190,7 @@ rte_efd_update(struct rte_efd_table * const table, const unsigned int socket_id,
 	return status;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_delete)
+RTE_EXPORT_SYMBOL(rte_efd_delete);
 int
 rte_efd_delete(struct rte_efd_table * const table, const unsigned int socket_id,
 		const void *key, efd_value_t * const prev_value)
@@ -1307,7 +1307,7 @@ efd_lookup_internal(const struct efd_online_group_entry * const group,
 	return value;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_lookup)
+RTE_EXPORT_SYMBOL(rte_efd_lookup);
 efd_value_t
 rte_efd_lookup(const struct rte_efd_table * const table,
 		const unsigned int socket_id, const void *key)
@@ -1329,7 +1329,7 @@ rte_efd_lookup(const struct rte_efd_table * const table,
 			table->lookup_fn);
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_efd_lookup_bulk);
 void rte_efd_lookup_bulk(const struct rte_efd_table * const table,
 		const unsigned int socket_id, const int num_keys,
 		const void **key_list, efd_value_t * const value_list)
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index ec0c1e1176..47a02da4a7 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -75,7 +75,7 @@ eth_dev_get(uint16_t port_id)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocate);
 struct rte_eth_dev *
 rte_eth_dev_allocate(const char *name)
 {
@@ -130,7 +130,7 @@ rte_eth_dev_allocate(const char *name)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocated)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocated);
 struct rte_eth_dev *
 rte_eth_dev_allocated(const char *name)
 {
@@ -153,7 +153,7 @@ rte_eth_dev_allocated(const char *name)
  * makes sure that the same device would have the same port ID both
  * in the primary and secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_attach_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_attach_secondary);
 struct rte_eth_dev *
 rte_eth_dev_attach_secondary(const char *name)
 {
@@ -184,7 +184,7 @@ rte_eth_dev_attach_secondary(const char *name)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_callback_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_callback_process);
 int
 rte_eth_dev_callback_process(struct rte_eth_dev *dev,
 	enum rte_eth_event_type event, void *ret_param)
@@ -212,7 +212,7 @@ rte_eth_dev_callback_process(struct rte_eth_dev *dev,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_probing_finish);
 void
 rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 {
@@ -232,7 +232,7 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	dev->state = RTE_ETH_DEV_ATTACHED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_release_port)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_release_port);
 int
 rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 {
@@ -291,7 +291,7 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_create);
 int
 rte_eth_dev_create(struct rte_device *device, const char *name,
 	size_t priv_data_size,
@@ -367,7 +367,7 @@ rte_eth_dev_create(struct rte_device *device, const char *name,
 	return retval;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_destroy);
 int
 rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
 	ethdev_uninit_t ethdev_uninit)
@@ -388,7 +388,7 @@ rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
 	return rte_eth_dev_release_port(ethdev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_get_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_get_by_name);
 struct rte_eth_dev *
 rte_eth_dev_get_by_name(const char *name)
 {
@@ -400,7 +400,7 @@ rte_eth_dev_get_by_name(const char *name)
 	return &rte_eth_devices[pid];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_rx_hairpin_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_rx_hairpin_queue);
 int
 rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 {
@@ -409,7 +409,7 @@ rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_tx_hairpin_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_tx_hairpin_queue);
 int
 rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 {
@@ -418,7 +418,7 @@ rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_internal_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_internal_reset);
 void
 rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
 {
@@ -629,7 +629,7 @@ eth_dev_tokenise_representor_list(char *p_val, struct rte_eth_devargs *eth_devar
 	return result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devargs_parse)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devargs_parse);
 int
 rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_devargs,
 		      unsigned int nb_da)
@@ -692,7 +692,7 @@ eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_
 			port_id, queue_id, ring_name);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_free);
 int
 rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
 		uint16_t queue_id)
@@ -717,7 +717,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_reserve)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_reserve);
 const struct rte_memzone *
 rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
 			 uint16_t queue_id, size_t size, unsigned int align,
@@ -753,7 +753,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
 			RTE_MEMZONE_IOVA_CONTIG, align);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_bind)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_bind);
 int
 rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
 				struct rte_hairpin_peer_info *peer_info,
@@ -772,7 +772,7 @@ rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
 	return dev->dev_ops->hairpin_queue_peer_bind(dev, cur_queue, peer_info, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_unbind)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_unbind);
 int
 rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
 				  uint32_t direction)
@@ -787,7 +787,7 @@ rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
 	return dev->dev_ops->hairpin_queue_peer_unbind(dev, cur_queue, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_update)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_update);
 int
 rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
 				  struct rte_hairpin_peer_info *cur_info,
@@ -809,7 +809,7 @@ rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
 						       cur_info, peer_info, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_ip_reassembly_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_ip_reassembly_dynfield_register);
 int
 rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
 {
@@ -838,7 +838,7 @@ rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_pkt_burst_dummy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_pkt_burst_dummy);
 uint16_t
 rte_eth_pkt_burst_dummy(void *queue __rte_unused,
 		struct rte_mbuf **pkts __rte_unused,
@@ -847,7 +847,7 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_representor_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_representor_id_get);
 int
 rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
@@ -943,7 +943,7 @@ rte_eth_representor_id_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_alloc);
 int
 rte_eth_switch_domain_alloc(uint16_t *domain_id)
 {
@@ -964,7 +964,7 @@ rte_eth_switch_domain_alloc(uint16_t *domain_id)
 	return -ENOSPC;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_free);
 int
 rte_eth_switch_domain_free(uint16_t domain_id)
 {
@@ -981,7 +981,7 @@ rte_eth_switch_domain_free(uint16_t domain_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_get_restore_flags)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_get_restore_flags);
 uint64_t
 rte_eth_get_restore_flags(struct rte_eth_dev *dev, enum rte_eth_dev_operation op)
 {
diff --git a/lib/ethdev/ethdev_linux_ethtool.c b/lib/ethdev/ethdev_linux_ethtool.c
index 5eddda1da3..0205181e80 100644
--- a/lib/ethdev/ethdev_linux_ethtool.c
+++ b/lib/ethdev/ethdev_linux_ethtool.c
@@ -133,7 +133,7 @@ static const uint32_t link_modes[] = {
 	[120] =  800000, /* ETHTOOL_LINK_MODE_800000baseVR4_Full_BIT */
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_ethtool)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_ethtool);
 uint32_t
 rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 {
@@ -157,7 +157,7 @@ rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 	return rte_eth_speed_bitflag(speed, duplex);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_glink)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_glink);
 uint32_t
 rte_eth_link_speed_glink(const uint32_t *bitmap, int8_t nwords)
 {
@@ -178,7 +178,7 @@ rte_eth_link_speed_glink(const uint32_t *bitmap, int8_t nwords)
 	return ethdev_bitmap;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_gset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_gset);
 uint32_t
 rte_eth_link_speed_gset(uint32_t legacy_bitmap)
 {
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 285d377d91..222b17d8ce 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -286,7 +286,7 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.clbk = (void * __rte_atomic *)(uintptr_t)dev->pre_tx_burst_cbs;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_call_rx_callbacks)
+RTE_EXPORT_SYMBOL(rte_eth_call_rx_callbacks);
 uint16_t
 rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
 	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
@@ -310,7 +310,7 @@ rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
 	return nb_rx;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_call_tx_callbacks)
+RTE_EXPORT_SYMBOL(rte_eth_call_tx_callbacks);
 uint16_t
 rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
 	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 071c508327..444f82a723 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -26,30 +26,30 @@ RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_stop,
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_close,
 	lib.ethdev.close)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_empty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_empty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst_empty,
 	lib.ethdev.rx.burst.empty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_nonempty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_nonempty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst_nonempty,
 	lib.ethdev.rx.burst.nonempty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_tx_burst, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_tx_burst, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_burst,
 	lib.ethdev.tx.burst)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_empty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_empty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks_empty,
 	lib.ethdev.call_rx_callbacks.empty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_nonempty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_nonempty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks_nonempty,
 	lib.ethdev.call_rx_callbacks.nonempty)
 
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_tx_callbacks,
 	lib.ethdev.call_tx_callbacks)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_tx_queue_count, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_tx_queue_count, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_queue_count,
 	lib.ethdev.tx_queue_count)
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index dd7c00bc94..92ba1e9b28 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -39,11 +39,11 @@
 
 #define ETH_XSTATS_ITER_NUM	0x100
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devices);
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
 /* public fast-path API */
-RTE_EXPORT_SYMBOL(rte_eth_fp_ops)
+RTE_EXPORT_SYMBOL(rte_eth_fp_ops);
 struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
 /* spinlock for add/remove Rx callbacks */
@@ -176,7 +176,7 @@ static const struct {
 	{RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT, "symmetric_toeplitz_sort"},
 };
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_init)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_init);
 int
 rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str)
 {
@@ -293,7 +293,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_next)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_next);
 uint16_t
 rte_eth_iterator_next(struct rte_dev_iterator *iter)
 {
@@ -334,7 +334,7 @@ rte_eth_iterator_next(struct rte_dev_iterator *iter)
 	return RTE_MAX_ETHPORTS;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_cleanup)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_cleanup);
 void
 rte_eth_iterator_cleanup(struct rte_dev_iterator *iter)
 {
@@ -353,7 +353,7 @@ rte_eth_iterator_cleanup(struct rte_dev_iterator *iter)
 	memset(iter, 0, sizeof(*iter));
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next)
+RTE_EXPORT_SYMBOL(rte_eth_find_next);
 uint16_t
 rte_eth_find_next(uint16_t port_id)
 {
@@ -378,7 +378,7 @@ rte_eth_find_next(uint16_t port_id)
 	     port_id < RTE_MAX_ETHPORTS; \
 	     port_id = rte_eth_find_next(port_id + 1))
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_of)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_of);
 uint16_t
 rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent)
 {
@@ -392,7 +392,7 @@ rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent)
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_sibling)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_sibling);
 uint16_t
 rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id)
 {
@@ -413,7 +413,7 @@ eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
 	return ethdev->data != NULL && ethdev->data->name[0] != '\0';
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_is_valid_port)
+RTE_EXPORT_SYMBOL(rte_eth_dev_is_valid_port);
 int
 rte_eth_dev_is_valid_port(uint16_t port_id)
 {
@@ -440,7 +440,7 @@ eth_is_valid_owner_id(uint64_t owner_id)
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_owned_by)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_owned_by);
 uint64_t
 rte_eth_find_next_owned_by(uint16_t port_id, const uint64_t owner_id)
 {
@@ -454,7 +454,7 @@ rte_eth_find_next_owned_by(uint16_t port_id, const uint64_t owner_id)
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_new)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_new);
 int
 rte_eth_dev_owner_new(uint64_t *owner_id)
 {
@@ -530,7 +530,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_set);
 int
 rte_eth_dev_owner_set(const uint16_t port_id,
 		      const struct rte_eth_dev_owner *owner)
@@ -551,7 +551,7 @@ rte_eth_dev_owner_set(const uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_unset)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_unset);
 int
 rte_eth_dev_owner_unset(const uint16_t port_id, const uint64_t owner_id)
 {
@@ -573,7 +573,7 @@ rte_eth_dev_owner_unset(const uint16_t port_id, const uint64_t owner_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_delete)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_delete);
 int
 rte_eth_dev_owner_delete(const uint64_t owner_id)
 {
@@ -611,7 +611,7 @@ rte_eth_dev_owner_delete(const uint64_t owner_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_get);
 int
 rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner)
 {
@@ -650,7 +650,7 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_socket_id)
+RTE_EXPORT_SYMBOL(rte_eth_dev_socket_id);
 int
 rte_eth_dev_socket_id(uint16_t port_id)
 {
@@ -676,7 +676,7 @@ rte_eth_dev_socket_id(uint16_t port_id)
 	return socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_sec_ctx)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_sec_ctx);
 void *
 rte_eth_dev_get_sec_ctx(uint16_t port_id)
 {
@@ -690,7 +690,7 @@ rte_eth_dev_get_sec_ctx(uint16_t port_id)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_count_avail)
+RTE_EXPORT_SYMBOL(rte_eth_dev_count_avail);
 uint16_t
 rte_eth_dev_count_avail(void)
 {
@@ -707,7 +707,7 @@ rte_eth_dev_count_avail(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_count_total)
+RTE_EXPORT_SYMBOL(rte_eth_dev_count_total);
 uint16_t
 rte_eth_dev_count_total(void)
 {
@@ -721,7 +721,7 @@ rte_eth_dev_count_total(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_name_by_port)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_name_by_port);
 int
 rte_eth_dev_get_name_by_port(uint16_t port_id, char *name)
 {
@@ -748,7 +748,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_port_by_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_port_by_name);
 int
 rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
 {
@@ -839,7 +839,7 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_queue_is_valid, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_queue_is_valid, 23.07);
 int
 rte_eth_rx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 {
@@ -851,7 +851,7 @@ rte_eth_rx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 	return eth_dev_validate_rx_queue(dev, queue_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_queue_is_valid, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_queue_is_valid, 23.07);
 int
 rte_eth_tx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 {
@@ -863,7 +863,7 @@ rte_eth_tx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 	return eth_dev_validate_tx_queue(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_start);
 int
 rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id)
 {
@@ -908,7 +908,7 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_stop);
 int
 rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id)
 {
@@ -946,7 +946,7 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_start);
 int
 rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id)
 {
@@ -991,7 +991,7 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_stop);
 int
 rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 {
@@ -1029,7 +1029,7 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_speed_bitflag)
+RTE_EXPORT_SYMBOL(rte_eth_speed_bitflag);
 uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
@@ -1087,7 +1087,7 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_offload_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_offload_name);
 const char *
 rte_eth_dev_rx_offload_name(uint64_t offload)
 {
@@ -1106,7 +1106,7 @@ rte_eth_dev_rx_offload_name(uint64_t offload)
 	return name;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_offload_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_offload_name);
 const char *
 rte_eth_dev_tx_offload_name(uint64_t offload)
 {
@@ -1168,7 +1168,7 @@ eth_dev_offload_names(uint64_t bitmask, char *buf, size_t size,
 	return buf;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_capability_name, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_capability_name, 21.11);
 const char *
 rte_eth_dev_capability_name(uint64_t capability)
 {
@@ -1318,7 +1318,7 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_configure)
+RTE_EXPORT_SYMBOL(rte_eth_dev_configure);
 int
 rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1782,7 +1782,7 @@ eth_dev_config_restore(struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_start);
 int
 rte_eth_dev_start(uint16_t port_id)
 {
@@ -1857,7 +1857,7 @@ rte_eth_dev_start(uint16_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_stop);
 int
 rte_eth_dev_stop(uint16_t port_id)
 {
@@ -1888,7 +1888,7 @@ rte_eth_dev_stop(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_up)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_up);
 int
 rte_eth_dev_set_link_up(uint16_t port_id)
 {
@@ -1907,7 +1907,7 @@ rte_eth_dev_set_link_up(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_down)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_down);
 int
 rte_eth_dev_set_link_down(uint16_t port_id)
 {
@@ -1926,7 +1926,7 @@ rte_eth_dev_set_link_down(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get, 24.11);
 int
 rte_eth_speed_lanes_get(uint16_t port_id, uint32_t *lane)
 {
@@ -1940,7 +1940,7 @@ rte_eth_speed_lanes_get(uint16_t port_id, uint32_t *lane)
 	return eth_err(port_id, dev->dev_ops->speed_lanes_get(dev, lane));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get_capability, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get_capability, 24.11);
 int
 rte_eth_speed_lanes_get_capability(uint16_t port_id,
 				   struct rte_eth_speed_lanes_capa *speed_lanes_capa,
@@ -1967,7 +1967,7 @@ rte_eth_speed_lanes_get_capability(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_set, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_set, 24.11);
 int
 rte_eth_speed_lanes_set(uint16_t port_id, uint32_t speed_lanes_capa)
 {
@@ -1981,7 +1981,7 @@ rte_eth_speed_lanes_set(uint16_t port_id, uint32_t speed_lanes_capa)
 	return eth_err(port_id, dev->dev_ops->speed_lanes_set(dev, speed_lanes_capa));
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_close)
+RTE_EXPORT_SYMBOL(rte_eth_dev_close);
 int
 rte_eth_dev_close(uint16_t port_id)
 {
@@ -2016,7 +2016,7 @@ rte_eth_dev_close(uint16_t port_id)
 	return firsterr;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_reset)
+RTE_EXPORT_SYMBOL(rte_eth_dev_reset);
 int
 rte_eth_dev_reset(uint16_t port_id)
 {
@@ -2042,7 +2042,7 @@ rte_eth_dev_reset(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_is_removed)
+RTE_EXPORT_SYMBOL(rte_eth_dev_is_removed);
 int
 rte_eth_dev_is_removed(uint16_t port_id)
 {
@@ -2270,7 +2270,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_queue_setup)
+RTE_EXPORT_SYMBOL(rte_eth_rx_queue_setup);
 int
 rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc, unsigned int socket_id,
@@ -2496,7 +2496,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	return eth_err(port_id, ret);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_hairpin_queue_setup, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_hairpin_queue_setup, 19.11);
 int
 rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			       uint16_t nb_rx_desc,
@@ -2602,7 +2602,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_queue_setup)
+RTE_EXPORT_SYMBOL(rte_eth_tx_queue_setup);
 int
 rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -2714,7 +2714,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 		       tx_queue_id, nb_tx_desc, socket_id, &local_conf));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_hairpin_queue_setup, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_hairpin_queue_setup, 19.11);
 int
 rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 			       uint16_t nb_tx_desc,
@@ -2814,7 +2814,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_bind, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_bind, 20.11);
 int
 rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port)
 {
@@ -2842,7 +2842,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_unbind, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_unbind, 20.11);
 int
 rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port)
 {
@@ -2870,7 +2870,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_get_peer_ports, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_get_peer_ports, 20.11);
 int
 rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
 			       size_t len, uint32_t direction)
@@ -2909,7 +2909,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_drop_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_drop_callback);
 void
 rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent,
 		void *userdata __rte_unused)
@@ -2919,7 +2919,7 @@ rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent,
 	rte_eth_trace_tx_buffer_drop_callback((void **)pkts, unsent);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_count_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_count_callback);
 void
 rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent,
 		void *userdata)
@@ -2932,7 +2932,7 @@ rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent,
 	rte_eth_trace_tx_buffer_count_callback((void **)pkts, unsent, *count);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_set_err_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_set_err_callback);
 int
 rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 		buffer_tx_error_fn cbfn, void *userdata)
@@ -2951,7 +2951,7 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_init)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_init);
 int
 rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 {
@@ -2973,7 +2973,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_done_cleanup)
+RTE_EXPORT_SYMBOL(rte_eth_tx_done_cleanup);
 int
 rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
 {
@@ -3001,7 +3001,7 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_enable)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_enable);
 int
 rte_eth_promiscuous_enable(uint16_t port_id)
 {
@@ -3028,7 +3028,7 @@ rte_eth_promiscuous_enable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_disable)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_disable);
 int
 rte_eth_promiscuous_disable(uint16_t port_id)
 {
@@ -3056,7 +3056,7 @@ rte_eth_promiscuous_disable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_get)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_get);
 int
 rte_eth_promiscuous_get(uint16_t port_id)
 {
@@ -3070,7 +3070,7 @@ rte_eth_promiscuous_get(uint16_t port_id)
 	return dev->data->promiscuous;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_enable)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_enable);
 int
 rte_eth_allmulticast_enable(uint16_t port_id)
 {
@@ -3096,7 +3096,7 @@ rte_eth_allmulticast_enable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_disable)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_disable);
 int
 rte_eth_allmulticast_disable(uint16_t port_id)
 {
@@ -3124,7 +3124,7 @@ rte_eth_allmulticast_disable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_get)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_get);
 int
 rte_eth_allmulticast_get(uint16_t port_id)
 {
@@ -3138,7 +3138,7 @@ rte_eth_allmulticast_get(uint16_t port_id)
 	return dev->data->all_multicast;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_link_get)
+RTE_EXPORT_SYMBOL(rte_eth_link_get);
 int
 rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link)
 {
@@ -3167,7 +3167,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_link_get_nowait)
+RTE_EXPORT_SYMBOL(rte_eth_link_get_nowait);
 int
 rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 {
@@ -3196,7 +3196,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_speed_to_str, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_speed_to_str, 20.11);
 const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
@@ -3260,7 +3260,7 @@ rte_eth_link_speed_to_str(uint32_t link_speed)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_to_str, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_to_str, 20.11);
 int
 rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 {
@@ -3297,7 +3297,7 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_stats_get)
+RTE_EXPORT_SYMBOL(rte_eth_stats_get);
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
 {
@@ -3325,7 +3325,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_stats_reset)
+RTE_EXPORT_SYMBOL(rte_eth_stats_reset);
 int
 rte_eth_stats_reset(uint16_t port_id)
 {
@@ -3387,7 +3387,7 @@ eth_dev_get_xstats_count(uint16_t port_id)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_id_by_name)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_id_by_name);
 int
 rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name,
 		uint64_t *id)
@@ -3523,7 +3523,7 @@ eth_xstats_get_by_name_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 
 
 /* retrieve ethdev extended statistics names */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names_by_id)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names_by_id);
 int
 rte_eth_xstats_get_names_by_id(uint16_t port_id,
 	struct rte_eth_xstat_name *xstats_names, unsigned int size,
@@ -3616,7 +3616,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id,
 	return size;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names);
 int
 rte_eth_xstats_get_names(uint16_t port_id,
 	struct rte_eth_xstat_name *xstats_names,
@@ -3743,7 +3743,7 @@ eth_xtats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 /* retrieve ethdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_by_id)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_by_id);
 int
 rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
 			 uint64_t *values, unsigned int size)
@@ -3830,7 +3830,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
 	return (i == size) ? (int32_t)size : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get);
 int
 rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 	unsigned int n)
@@ -3882,7 +3882,7 @@ rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 }
 
 /* reset ethdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_reset);
 int
 rte_eth_xstats_reset(uint16_t port_id)
 {
@@ -3904,7 +3904,7 @@ rte_eth_xstats_reset(uint16_t port_id)
 	return rte_eth_stats_reset(port_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_set_counter, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_set_counter, 25.03);
 int
 rte_eth_xstats_set_counter(uint16_t port_id, uint64_t id, int on_off)
 {
@@ -3934,7 +3934,7 @@ rte_eth_xstats_set_counter(uint16_t port_id, uint64_t id, int on_off)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_query_state, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_query_state, 25.03);
 int
 rte_eth_xstats_query_state(uint16_t port_id, uint64_t id)
 {
@@ -3978,7 +3978,7 @@ eth_dev_set_queue_stats_mapping(uint16_t port_id, uint16_t queue_id,
 	return dev->dev_ops->queue_stats_mapping_set(dev, queue_id, stat_idx, is_rx);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_tx_queue_stats_mapping)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_tx_queue_stats_mapping);
 int
 rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id,
 		uint8_t stat_idx)
@@ -3995,7 +3995,7 @@ rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_rx_queue_stats_mapping)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_rx_queue_stats_mapping);
 int
 rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id,
 		uint8_t stat_idx)
@@ -4012,7 +4012,7 @@ rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_fw_version_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_fw_version_get);
 int
 rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size)
 {
@@ -4038,7 +4038,7 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_info_get);
 int
 rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
 {
@@ -4103,7 +4103,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_conf_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_conf_get, 21.11);
 int
 rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
 {
@@ -4126,7 +4126,7 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_supported_ptypes)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_supported_ptypes);
 int
 rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
 				 uint32_t *ptypes, int num)
@@ -4168,7 +4168,7 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return j;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_ptypes)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_ptypes);
 int
 rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 				 uint32_t *set_ptypes, unsigned int num)
@@ -4264,7 +4264,7 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_macaddrs_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_macaddrs_get, 21.11);
 int
 rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
 	unsigned int num)
@@ -4292,7 +4292,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
 	return num;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_macaddr_get)
+RTE_EXPORT_SYMBOL(rte_eth_macaddr_get);
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
@@ -4315,7 +4315,7 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_mtu)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_mtu);
 int
 rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
 {
@@ -4337,7 +4337,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_mtu)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_mtu);
 int
 rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 {
@@ -4384,7 +4384,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_eth_dev_vlan_filter);
 int
 rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 {
@@ -4432,7 +4432,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_strip_on_queue)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_strip_on_queue);
 int
 rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
 				    int on)
@@ -4456,7 +4456,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_ether_type)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_ether_type);
 int
 rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
 				enum rte_vlan_type vlan_type,
@@ -4477,7 +4477,7 @@ rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_offload)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_offload);
 int
 rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 {
@@ -4574,7 +4574,7 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_vlan_offload)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_vlan_offload);
 int
 rte_eth_dev_get_vlan_offload(uint16_t port_id)
 {
@@ -4603,7 +4603,7 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_pvid)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_pvid);
 int
 rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on)
 {
@@ -4622,7 +4622,7 @@ rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_get);
 int
 rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
@@ -4649,7 +4649,7 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_set);
 int
 rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
@@ -4680,7 +4680,7 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_priority_flow_ctrl_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_priority_flow_ctrl_set);
 int
 rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 				   struct rte_eth_pfc_conf *pfc_conf)
@@ -4763,7 +4763,7 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_info_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_info_get, 22.03);
 int
 rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id,
 		struct rte_eth_pfc_queue_info *pfc_queue_info)
@@ -4791,7 +4791,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_configure, 22.03);
 int
 rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id,
 		struct rte_eth_pfc_queue_conf *pfc_queue_conf)
@@ -4910,7 +4910,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_update)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_update);
 int
 rte_eth_dev_rss_reta_update(uint16_t port_id,
 			    struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -4963,7 +4963,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_query)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_query);
 int
 rte_eth_dev_rss_reta_query(uint16_t port_id,
 			   struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -4996,7 +4996,7 @@ rte_eth_dev_rss_reta_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_update)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_update);
 int
 rte_eth_dev_rss_hash_update(uint16_t port_id,
 			    struct rte_eth_rss_conf *rss_conf)
@@ -5063,7 +5063,7 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_conf_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_conf_get);
 int
 rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 			      struct rte_eth_rss_conf *rss_conf)
@@ -5105,7 +5105,7 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_rss_algo_name, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_rss_algo_name, 23.11);
 const char *
 rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
 {
@@ -5120,7 +5120,7 @@ rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
 	return name;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_find_rss_algo, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_find_rss_algo, 24.03);
 int
 rte_eth_find_rss_algo(const char *name, uint32_t *algo)
 {
@@ -5136,7 +5136,7 @@ rte_eth_find_rss_algo(const char *name, uint32_t *algo)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_add)
+RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_add);
 int
 rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 				struct rte_eth_udp_tunnel *udp_tunnel)
@@ -5168,7 +5168,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_delete)
+RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_delete);
 int
 rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 				   struct rte_eth_udp_tunnel *udp_tunnel)
@@ -5200,7 +5200,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_led_on)
+RTE_EXPORT_SYMBOL(rte_eth_led_on);
 int
 rte_eth_led_on(uint16_t port_id)
 {
@@ -5219,7 +5219,7 @@ rte_eth_led_on(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_led_off)
+RTE_EXPORT_SYMBOL(rte_eth_led_off);
 int
 rte_eth_led_off(uint16_t port_id)
 {
@@ -5238,7 +5238,7 @@ rte_eth_led_off(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get_capability, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get_capability, 20.11);
 int
 rte_eth_fec_get_capability(uint16_t port_id,
 			   struct rte_eth_fec_capa *speed_fec_capa,
@@ -5266,7 +5266,7 @@ rte_eth_fec_get_capability(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get, 20.11);
 int
 rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
 {
@@ -5292,7 +5292,7 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_set, 20.11);
 int
 rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa)
 {
@@ -5342,7 +5342,7 @@ eth_dev_get_mac_addr_index(uint16_t port_id, const struct rte_ether_addr *addr)
 
 static const struct rte_ether_addr null_mac_addr;
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_add)
+RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_add);
 int
 rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			uint32_t pool)
@@ -5409,7 +5409,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_remove)
+RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_remove);
 int
 rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
 {
@@ -5452,7 +5452,7 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_default_mac_addr_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_default_mac_addr_set);
 int
 rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 {
@@ -5526,7 +5526,7 @@ eth_dev_get_hash_mac_addr_index(uint16_t port_id,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_uc_hash_table_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_uc_hash_table_set);
 int
 rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
 				uint8_t on)
@@ -5592,7 +5592,7 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_uc_all_hash_table_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_uc_all_hash_table_set);
 int
 rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on)
 {
@@ -5611,7 +5611,7 @@ rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_set_queue_rate_limit)
+RTE_EXPORT_SYMBOL(rte_eth_set_queue_rate_limit);
 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 					uint32_t tx_rate)
 {
@@ -5652,7 +5652,7 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_set, 22.07);
 int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id,
 			       uint8_t avail_thresh)
 {
@@ -5685,7 +5685,7 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_query, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_query, 22.07);
 int rte_eth_rx_avail_thresh_query(uint16_t port_id, uint16_t *queue_id,
 				 uint8_t *avail_thresh)
 {
@@ -5726,7 +5726,7 @@ RTE_INIT(eth_dev_init_cb_lists)
 		TAILQ_INIT(&rte_eth_devices[i].link_intr_cbs);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_callback_register)
+RTE_EXPORT_SYMBOL(rte_eth_dev_callback_register);
 int
 rte_eth_dev_callback_register(uint16_t port_id,
 			enum rte_eth_event_type event,
@@ -5796,7 +5796,7 @@ rte_eth_dev_callback_register(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_eth_dev_callback_unregister);
 int
 rte_eth_dev_callback_unregister(uint16_t port_id,
 			enum rte_eth_event_type event,
@@ -5862,7 +5862,7 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl);
 int
 rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 {
@@ -5902,7 +5902,7 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q_get_fd)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q_get_fd);
 int
 rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 {
@@ -5941,7 +5941,7 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 	return fd;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q);
 int
 rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 			  int epfd, int op, void *data)
@@ -5985,7 +5985,7 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_enable)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_enable);
 int
 rte_eth_dev_rx_intr_enable(uint16_t port_id,
 			   uint16_t queue_id)
@@ -6009,7 +6009,7 @@ rte_eth_dev_rx_intr_enable(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_disable)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_disable);
 int
 rte_eth_dev_rx_intr_disable(uint16_t port_id,
 			    uint16_t queue_id)
@@ -6034,7 +6034,7 @@ rte_eth_dev_rx_intr_disable(uint16_t port_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eth_add_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_rx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_rx_callback_fn fn, void *user_param)
@@ -6094,7 +6094,7 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_add_first_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_first_rx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_rx_callback_fn fn, void *user_param)
@@ -6137,7 +6137,7 @@ rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_add_tx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_tx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_tx_callback_fn fn, void *user_param)
@@ -6199,7 +6199,7 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_remove_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_remove_rx_callback);
 int
 rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 		const struct rte_eth_rxtx_callback *user_cb)
@@ -6236,7 +6236,7 @@ rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_remove_tx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_remove_tx_callback);
 int
 rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 		const struct rte_eth_rxtx_callback *user_cb)
@@ -6273,7 +6273,7 @@ rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_rx_queue_info_get);
 int
 rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_rxq_info *qinfo)
@@ -6322,7 +6322,7 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_tx_queue_info_get);
 int
 rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_txq_info *qinfo)
@@ -6371,7 +6371,7 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_recycle_rx_queue_info_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_recycle_rx_queue_info_get, 23.11);
 int
 rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
@@ -6394,7 +6394,7 @@ rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_burst_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_rx_burst_mode_get);
 int
 rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode)
@@ -6428,7 +6428,7 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_burst_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_tx_burst_mode_get);
 int
 rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode)
@@ -6462,7 +6462,7 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_get_monitor_addr, 21.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_get_monitor_addr, 21.02);
 int
 rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
 		struct rte_power_monitor_cond *pmc)
@@ -6495,7 +6495,7 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_mc_addr_list)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_mc_addr_list);
 int
 rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 			     struct rte_ether_addr *mc_addr_set,
@@ -6518,7 +6518,7 @@ rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_enable)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_enable);
 int
 rte_eth_timesync_enable(uint16_t port_id)
 {
@@ -6537,7 +6537,7 @@ rte_eth_timesync_enable(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_disable)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_disable);
 int
 rte_eth_timesync_disable(uint16_t port_id)
 {
@@ -6556,7 +6556,7 @@ rte_eth_timesync_disable(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_rx_timestamp)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_rx_timestamp);
 int
 rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp,
 				   uint32_t flags)
@@ -6585,7 +6585,7 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_tx_timestamp)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_tx_timestamp);
 int
 rte_eth_timesync_read_tx_timestamp(uint16_t port_id,
 				   struct timespec *timestamp)
@@ -6614,7 +6614,7 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id,
 
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_adjust_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_adjust_time);
 int
 rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta)
 {
@@ -6633,7 +6633,7 @@ rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_timesync_adjust_freq, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_timesync_adjust_freq, 24.11);
 int
 rte_eth_timesync_adjust_freq(uint16_t port_id, int64_t ppm)
 {
@@ -6652,7 +6652,7 @@ rte_eth_timesync_adjust_freq(uint16_t port_id, int64_t ppm)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_time);
 int
 rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp)
 {
@@ -6678,7 +6678,7 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_write_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_write_time);
 int
 rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp)
 {
@@ -6704,7 +6704,7 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_read_clock, 19.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_read_clock, 19.08);
 int
 rte_eth_read_clock(uint16_t port_id, uint64_t *clock)
 {
@@ -6729,7 +6729,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_reg_info)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_reg_info);
 int
 rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info)
 {
@@ -6760,7 +6760,7 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_reg_info_ext, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_reg_info_ext, 24.11);
 int
 rte_eth_dev_get_reg_info_ext(uint16_t port_id, struct rte_dev_reg_info *info)
 {
@@ -6796,7 +6796,7 @@ rte_eth_dev_get_reg_info_ext(uint16_t port_id, struct rte_dev_reg_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom_length)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom_length);
 int
 rte_eth_dev_get_eeprom_length(uint16_t port_id)
 {
@@ -6815,7 +6815,7 @@ rte_eth_dev_get_eeprom_length(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom);
 int
 rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 {
@@ -6841,7 +6841,7 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_eeprom)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_eeprom);
 int
 rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 {
@@ -6867,7 +6867,7 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_info, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_info, 18.05);
 int
 rte_eth_dev_get_module_info(uint16_t port_id,
 			    struct rte_eth_dev_module_info *modinfo)
@@ -6894,7 +6894,7 @@ rte_eth_dev_get_module_info(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_eeprom, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_eeprom, 18.05);
 int
 rte_eth_dev_get_module_eeprom(uint16_t port_id,
 			      struct rte_dev_eeprom_info *info)
@@ -6935,7 +6935,7 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_dcb_info)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_dcb_info);
 int
 rte_eth_dev_get_dcb_info(uint16_t port_id,
 			     struct rte_eth_dcb_info *dcb_info)
@@ -6983,7 +6983,7 @@ eth_dev_adjust_nb_desc(uint16_t *nb_desc,
 	*nb_desc = (uint16_t)nb_desc_32;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_adjust_nb_rx_tx_desc)
+RTE_EXPORT_SYMBOL(rte_eth_dev_adjust_nb_rx_tx_desc);
 int
 rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id,
 				 uint16_t *nb_rx_desc,
@@ -7009,7 +7009,7 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_hairpin_capability_get, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_hairpin_capability_get, 19.11);
 int
 rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 				   struct rte_eth_hairpin_cap *cap)
@@ -7037,7 +7037,7 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_pool_ops_supported)
+RTE_EXPORT_SYMBOL(rte_eth_dev_pool_ops_supported);
 int
 rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 {
@@ -7064,7 +7064,7 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_representor_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_representor_info_get, 21.05);
 int
 rte_eth_representor_info_get(uint16_t port_id,
 			     struct rte_eth_representor_info *info)
@@ -7084,7 +7084,7 @@ rte_eth_representor_info_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_metadata_negotiate)
+RTE_EXPORT_SYMBOL(rte_eth_rx_metadata_negotiate);
 int
 rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
 {
@@ -7120,7 +7120,7 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_capability_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_capability_get, 22.03);
 int
 rte_eth_ip_reassembly_capability_get(uint16_t port_id,
 		struct rte_eth_ip_reassembly_params *reassembly_capa)
@@ -7156,7 +7156,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_get, 22.03);
 int
 rte_eth_ip_reassembly_conf_get(uint16_t port_id,
 		struct rte_eth_ip_reassembly_params *conf)
@@ -7190,7 +7190,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_set, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_set, 22.03);
 int
 rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 		const struct rte_eth_ip_reassembly_params *conf)
@@ -7231,7 +7231,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priv_dump, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priv_dump, 22.03);
 int
 rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 {
@@ -7250,7 +7250,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 	return eth_err(port_id, dev->dev_ops->eth_dev_priv_dump(dev, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_descriptor_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_descriptor_dump, 22.11);
 int
 rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 			   uint16_t offset, uint16_t num, FILE *file)
@@ -7277,7 +7277,7 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 		       dev->dev_ops->eth_rx_descriptor_dump(dev, queue_id, offset, num, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_descriptor_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_descriptor_dump, 22.11);
 int
 rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 			   uint16_t offset, uint16_t num, FILE *file)
@@ -7304,7 +7304,7 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 		       dev->dev_ops->eth_tx_descriptor_dump(dev, queue_id, offset, num, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_buffer_split_get_supported_hdr_ptypes, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_buffer_split_get_supported_hdr_ptypes, 22.11);
 int
 rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num)
 {
@@ -7344,7 +7344,7 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes
 	return j;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_count_aggr_ports, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_count_aggr_ports, 23.03);
 int rte_eth_dev_count_aggr_ports(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
@@ -7362,7 +7362,7 @@ int rte_eth_dev_count_aggr_ports(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_map_aggr_tx_affinity, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_map_aggr_tx_affinity, 23.03);
 int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id,
 				     uint8_t affinity)
 {
@@ -7418,5 +7418,5 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_logtype)
+RTE_EXPORT_SYMBOL(rte_eth_dev_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c
index a8460e6977..413db0acd9 100644
--- a/lib/ethdev/rte_ethdev_cman.c
+++ b/lib/ethdev/rte_ethdev_cman.c
@@ -12,7 +12,7 @@
 #include "ethdev_trace.h"
 
 /* Get congestion management information for a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_info_get, 22.11);
 int
 rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info)
 {
@@ -41,7 +41,7 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info)
 }
 
 /* Initialize congestion management structure with default values */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_init, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_init, 22.11);
 int
 rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config)
 {
@@ -70,7 +70,7 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config)
 }
 
 /* Configure congestion management on a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_set, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_set, 22.11);
 int
 rte_eth_cman_config_set(uint16_t port_id, const struct rte_eth_cman_config *config)
 {
@@ -98,7 +98,7 @@ rte_eth_cman_config_set(uint16_t port_id, const struct rte_eth_cman_config *conf
 }
 
 /* Retrieve congestion management configuration of a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_get, 22.11);
 int
 rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config)
 {
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index fe8f43caff..25801717a7 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -23,11 +23,11 @@
 #define FLOW_LOG RTE_ETHDEV_LOG_LINE
 
 /* Mbuf dynamic field name for metadata. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_offs, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_offs, 19.11);
 int32_t rte_flow_dynf_metadata_offs = -1;
 
 /* Mbuf dynamic field flag bit number for metadata. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_mask, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_mask, 19.11);
 uint64_t rte_flow_dynf_metadata_mask;
 
 /**
@@ -281,7 +281,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(JUMP_TO_TABLE_INDEX, sizeof(struct rte_flow_action_jump_to_table_index)),
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_register, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_register, 19.11);
 int
 rte_flow_dynf_metadata_register(void)
 {
@@ -370,7 +370,7 @@ rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error)
 }
 
 /* Check whether a flow rule can be created on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_validate)
+RTE_EXPORT_SYMBOL(rte_flow_validate);
 int
 rte_flow_validate(uint16_t port_id,
 		  const struct rte_flow_attr *attr,
@@ -407,7 +407,7 @@ rte_flow_validate(uint16_t port_id,
 }
 
 /* Create a flow rule on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_create)
+RTE_EXPORT_SYMBOL(rte_flow_create);
 struct rte_flow *
 rte_flow_create(uint16_t port_id,
 		const struct rte_flow_attr *attr,
@@ -438,7 +438,7 @@ rte_flow_create(uint16_t port_id,
 }
 
 /* Destroy a flow rule on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_destroy)
+RTE_EXPORT_SYMBOL(rte_flow_destroy);
 int
 rte_flow_destroy(uint16_t port_id,
 		 struct rte_flow *flow,
@@ -465,7 +465,7 @@ rte_flow_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_update, 23.07);
 int
 rte_flow_actions_update(uint16_t port_id,
 			struct rte_flow *flow,
@@ -493,7 +493,7 @@ rte_flow_actions_update(uint16_t port_id,
 }
 
 /* Destroy all flow rules associated with a port. */
-RTE_EXPORT_SYMBOL(rte_flow_flush)
+RTE_EXPORT_SYMBOL(rte_flow_flush);
 int
 rte_flow_flush(uint16_t port_id,
 	       struct rte_flow_error *error)
@@ -520,7 +520,7 @@ rte_flow_flush(uint16_t port_id,
 }
 
 /* Query an existing flow rule. */
-RTE_EXPORT_SYMBOL(rte_flow_query)
+RTE_EXPORT_SYMBOL(rte_flow_query);
 int
 rte_flow_query(uint16_t port_id,
 	       struct rte_flow *flow,
@@ -550,7 +550,7 @@ rte_flow_query(uint16_t port_id,
 }
 
 /* Restrict ingress traffic to the defined flow rules. */
-RTE_EXPORT_SYMBOL(rte_flow_isolate)
+RTE_EXPORT_SYMBOL(rte_flow_isolate);
 int
 rte_flow_isolate(uint16_t port_id,
 		 int set,
@@ -578,7 +578,7 @@ rte_flow_isolate(uint16_t port_id,
 }
 
 /* Initialize flow error structure. */
-RTE_EXPORT_SYMBOL(rte_flow_error_set)
+RTE_EXPORT_SYMBOL(rte_flow_error_set);
 int
 rte_flow_error_set(struct rte_flow_error *error,
 		   int code,
@@ -1114,7 +1114,7 @@ rte_flow_conv_name(int is_action,
 }
 
 /** Helper function to convert flow API objects. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_conv, 18.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_conv, 18.11);
 int
 rte_flow_conv(enum rte_flow_conv_op op,
 	      void *dst,
@@ -1186,7 +1186,7 @@ rte_flow_conv(enum rte_flow_conv_op op,
 }
 
 /** Store a full rte_flow description. */
-RTE_EXPORT_SYMBOL(rte_flow_copy)
+RTE_EXPORT_SYMBOL(rte_flow_copy);
 size_t
 rte_flow_copy(struct rte_flow_desc *desc, size_t len,
 	      const struct rte_flow_attr *attr,
@@ -1241,7 +1241,7 @@ rte_flow_copy(struct rte_flow_desc *desc, size_t len,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dev_dump, 20.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dev_dump, 20.02);
 int
 rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
 			FILE *file, struct rte_flow_error *error)
@@ -1263,7 +1263,7 @@ rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
 				  NULL, rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_aged_flows, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_aged_flows, 20.05);
 int
 rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 		    uint32_t nb_contexts, struct rte_flow_error *error)
@@ -1289,7 +1289,7 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_q_aged_flows, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_q_aged_flows, 22.11);
 int
 rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
 			  uint32_t nb_contexts, struct rte_flow_error *error)
@@ -1317,7 +1317,7 @@ rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_create, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_create, 21.05);
 struct rte_flow_action_handle *
 rte_flow_action_handle_create(uint16_t port_id,
 			      const struct rte_flow_indir_action_conf *conf,
@@ -1345,7 +1345,7 @@ rte_flow_action_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_destroy, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_destroy, 21.05);
 int
 rte_flow_action_handle_destroy(uint16_t port_id,
 			       struct rte_flow_action_handle *handle,
@@ -1369,7 +1369,7 @@ rte_flow_action_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_update, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_update, 21.05);
 int
 rte_flow_action_handle_update(uint16_t port_id,
 			      struct rte_flow_action_handle *handle,
@@ -1394,7 +1394,7 @@ rte_flow_action_handle_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query, 21.05);
 int
 rte_flow_action_handle_query(uint16_t port_id,
 			     const struct rte_flow_action_handle *handle,
@@ -1419,7 +1419,7 @@ rte_flow_action_handle_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_decap_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_decap_set, 20.11);
 int
 rte_flow_tunnel_decap_set(uint16_t port_id,
 			  struct rte_flow_tunnel *tunnel,
@@ -1449,7 +1449,7 @@ rte_flow_tunnel_decap_set(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_match, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_match, 20.11);
 int
 rte_flow_tunnel_match(uint16_t port_id,
 		      struct rte_flow_tunnel *tunnel,
@@ -1479,7 +1479,7 @@ rte_flow_tunnel_match(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_restore_info, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_restore_info, 20.11);
 int
 rte_flow_get_restore_info(uint16_t port_id,
 			  struct rte_mbuf *m,
@@ -1514,7 +1514,7 @@ static struct {
 	.desc = { .name = "RTE_MBUF_F_RX_RESTORE_INFO", },
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_restore_info_dynflag, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_restore_info_dynflag, 23.07);
 uint64_t
 rte_flow_restore_info_dynflag(void)
 {
@@ -1535,7 +1535,7 @@ rte_flow_restore_info_dynflag_register(void)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_action_decap_release, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_action_decap_release, 20.11);
 int
 rte_flow_tunnel_action_decap_release(uint16_t port_id,
 				     struct rte_flow_action *actions,
@@ -1565,7 +1565,7 @@ rte_flow_tunnel_action_decap_release(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_item_release, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_item_release, 20.11);
 int
 rte_flow_tunnel_item_release(uint16_t port_id,
 			     struct rte_flow_item *items,
@@ -1593,7 +1593,7 @@ rte_flow_tunnel_item_release(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_SYMBOL(rte_flow_pick_transfer_proxy)
+RTE_EXPORT_SYMBOL(rte_flow_pick_transfer_proxy);
 int
 rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id,
 			     struct rte_flow_error *error)
@@ -1621,7 +1621,7 @@ rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_create, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_create, 21.11);
 struct rte_flow_item_flex_handle *
 rte_flow_flex_item_create(uint16_t port_id,
 			  const struct rte_flow_item_flex_conf *conf,
@@ -1648,7 +1648,7 @@ rte_flow_flex_item_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_release, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_release, 21.11);
 int
 rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
@@ -1670,7 +1670,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_info_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_info_get, 22.03);
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
@@ -1707,7 +1707,7 @@ rte_flow_info_get(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_configure, 22.03);
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
@@ -1766,7 +1766,7 @@ rte_flow_configure(uint16_t port_id,
 				  NULL, rte_strerror(EINVAL));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_create, 22.03);
 struct rte_flow_pattern_template *
 rte_flow_pattern_template_create(uint16_t port_id,
 		const struct rte_flow_pattern_template_attr *template_attr,
@@ -1823,7 +1823,7 @@ rte_flow_pattern_template_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_destroy, 22.03);
 int
 rte_flow_pattern_template_destroy(uint16_t port_id,
 		struct rte_flow_pattern_template *pattern_template,
@@ -1854,7 +1854,7 @@ rte_flow_pattern_template_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_create, 22.03);
 struct rte_flow_actions_template *
 rte_flow_actions_template_create(uint16_t port_id,
 			const struct rte_flow_actions_template_attr *template_attr,
@@ -1921,7 +1921,7 @@ rte_flow_actions_template_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_destroy, 22.03);
 int
 rte_flow_actions_template_destroy(uint16_t port_id,
 			struct rte_flow_actions_template *actions_template,
@@ -1952,7 +1952,7 @@ rte_flow_actions_template_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_create, 22.03);
 struct rte_flow_template_table *
 rte_flow_template_table_create(uint16_t port_id,
 			const struct rte_flow_template_table_attr *table_attr,
@@ -2026,7 +2026,7 @@ rte_flow_template_table_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_destroy, 22.03);
 int
 rte_flow_template_table_destroy(uint16_t port_id,
 				struct rte_flow_template_table *template_table,
@@ -2057,7 +2057,7 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_group_set_miss_actions, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_group_set_miss_actions, 23.11);
 int
 rte_flow_group_set_miss_actions(uint16_t port_id,
 				uint32_t group_id,
@@ -2080,7 +2080,7 @@ rte_flow_group_set_miss_actions(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create, 22.03);
 struct rte_flow *
 rte_flow_async_create(uint16_t port_id,
 		      uint32_t queue_id,
@@ -2122,7 +2122,7 @@ rte_flow_async_create(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index, 23.03);
 struct rte_flow *
 rte_flow_async_create_by_index(uint16_t port_id,
 			       uint32_t queue_id,
@@ -2161,7 +2161,7 @@ rte_flow_async_create_by_index(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index_with_pattern, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index_with_pattern, 24.11);
 struct rte_flow *
 rte_flow_async_create_by_index_with_pattern(uint16_t port_id,
 					    uint32_t queue_id,
@@ -2206,7 +2206,7 @@ rte_flow_async_create_by_index_with_pattern(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_destroy, 22.03);
 int
 rte_flow_async_destroy(uint16_t port_id,
 		       uint32_t queue_id,
@@ -2237,7 +2237,7 @@ rte_flow_async_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_actions_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_actions_update, 23.07);
 int
 rte_flow_async_actions_update(uint16_t port_id,
 			      uint32_t queue_id,
@@ -2272,7 +2272,7 @@ rte_flow_async_actions_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_push, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_push, 22.03);
 int
 rte_flow_push(uint16_t port_id,
 	      uint32_t queue_id,
@@ -2297,7 +2297,7 @@ rte_flow_push(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pull, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pull, 22.03);
 int
 rte_flow_pull(uint16_t port_id,
 	      uint32_t queue_id,
@@ -2324,7 +2324,7 @@ rte_flow_pull(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_create, 22.03);
 struct rte_flow_action_handle *
 rte_flow_async_action_handle_create(uint16_t port_id,
 		uint32_t queue_id,
@@ -2361,7 +2361,7 @@ rte_flow_async_action_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_destroy, 22.03);
 int
 rte_flow_async_action_handle_destroy(uint16_t port_id,
 		uint32_t queue_id,
@@ -2391,7 +2391,7 @@ rte_flow_async_action_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_update, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_update, 22.03);
 int
 rte_flow_async_action_handle_update(uint16_t port_id,
 		uint32_t queue_id,
@@ -2423,7 +2423,7 @@ rte_flow_async_action_handle_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query, 22.11);
 int
 rte_flow_async_action_handle_query(uint16_t port_id,
 		uint32_t queue_id,
@@ -2455,7 +2455,7 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query_update, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query_update, 23.03);
 int
 rte_flow_action_handle_query_update(uint16_t port_id,
 				    struct rte_flow_action_handle *handle,
@@ -2481,7 +2481,7 @@ rte_flow_action_handle_query_update(uint16_t port_id,
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query_update, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query_update, 23.03);
 int
 rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					  const struct rte_flow_op_attr *attr,
@@ -2508,7 +2508,7 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 								  user_data, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_create, 23.07);
 struct rte_flow_action_list_handle *
 rte_flow_action_list_handle_create(uint16_t port_id,
 				   const
@@ -2536,7 +2536,7 @@ rte_flow_action_list_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_destroy, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_destroy, 23.07);
 int
 rte_flow_action_list_handle_destroy(uint16_t port_id,
 				    struct rte_flow_action_list_handle *handle,
@@ -2559,7 +2559,7 @@ rte_flow_action_list_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_create, 23.07);
 struct rte_flow_action_list_handle *
 rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
 					 const struct rte_flow_op_attr *attr,
@@ -2596,7 +2596,7 @@ rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_destroy, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_destroy, 23.07);
 int
 rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
 				 const struct rte_flow_op_attr *op_attr,
@@ -2624,7 +2624,7 @@ rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_query_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_query_update, 23.07);
 int
 rte_flow_action_list_handle_query_update(uint16_t port_id,
 			 const struct rte_flow_action_list_handle *handle,
@@ -2651,7 +2651,7 @@ rte_flow_action_list_handle_query_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_query_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_query_update, 23.07);
 int
 rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_id,
 			 const struct rte_flow_op_attr *attr,
@@ -2686,7 +2686,7 @@ rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_table_hash, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_table_hash, 23.11);
 int
 rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table *table,
 			 const struct rte_flow_item pattern[], uint8_t pattern_template_index,
@@ -2708,7 +2708,7 @@ rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_encap_hash, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_encap_hash, 24.03);
 int
 rte_flow_calc_encap_hash(uint16_t port_id, const struct rte_flow_item pattern[],
 			 enum rte_flow_encap_hash_field dest_field, uint8_t hash_len,
@@ -2738,7 +2738,7 @@ rte_flow_calc_encap_hash(uint16_t port_id, const struct rte_flow_item pattern[],
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resizable, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resizable, 24.03);
 bool
 rte_flow_template_table_resizable(__rte_unused uint16_t port_id,
 				  const struct rte_flow_template_table_attr *tbl_attr)
@@ -2747,7 +2747,7 @@ rte_flow_template_table_resizable(__rte_unused uint16_t port_id,
 		RTE_FLOW_TABLE_SPECIALIZE_RESIZABLE) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize, 24.03);
 int
 rte_flow_template_table_resize(uint16_t port_id,
 			       struct rte_flow_template_table *table,
@@ -2771,7 +2771,7 @@ rte_flow_template_table_resize(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_update_resized, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_update_resized, 24.03);
 int
 rte_flow_async_update_resized(uint16_t port_id, uint32_t queue,
 			      const struct rte_flow_op_attr *attr,
@@ -2796,7 +2796,7 @@ rte_flow_async_update_resized(uint16_t port_id, uint32_t queue,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize_complete, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize_complete, 24.03);
 int
 rte_flow_template_table_resize_complete(uint16_t port_id,
 					struct rte_flow_template_table *table,
@@ -3032,7 +3032,7 @@ rte_flow_dummy_async_action_list_handle_query_update(
 				  rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_flow_fp_default_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_flow_fp_default_ops);
 struct rte_flow_fp_ops rte_flow_fp_default_ops = {
 	.async_create = rte_flow_dummy_async_create,
 	.async_create_by_index = rte_flow_dummy_async_create_by_index,
diff --git a/lib/ethdev/rte_mtr.c b/lib/ethdev/rte_mtr.c
index c6f0698ed3..e4bd02c73b 100644
--- a/lib/ethdev/rte_mtr.c
+++ b/lib/ethdev/rte_mtr.c
@@ -78,7 +78,7 @@ __extension__ ({					\
 })
 
 /* MTR capabilities get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_capabilities_get, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_capabilities_get, 17.11);
 int
 rte_mtr_capabilities_get(uint16_t port_id,
 	struct rte_mtr_capabilities *cap,
@@ -95,7 +95,7 @@ rte_mtr_capabilities_get(uint16_t port_id,
 }
 
 /* MTR meter profile add */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_add, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_add, 17.11);
 int
 rte_mtr_meter_profile_add(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -114,7 +114,7 @@ rte_mtr_meter_profile_add(uint16_t port_id,
 }
 
 /** MTR meter profile delete */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_delete, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_delete, 17.11);
 int
 rte_mtr_meter_profile_delete(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -131,7 +131,7 @@ rte_mtr_meter_profile_delete(uint16_t port_id,
 }
 
 /** MTR meter profile get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_get, 22.11);
 struct rte_flow_meter_profile *
 rte_mtr_meter_profile_get(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -148,7 +148,7 @@ rte_mtr_meter_profile_get(uint16_t port_id,
 }
 
 /* MTR meter policy validate */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_validate, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_validate, 21.05);
 int
 rte_mtr_meter_policy_validate(uint16_t port_id,
 	struct rte_mtr_meter_policy_params *policy,
@@ -165,7 +165,7 @@ rte_mtr_meter_policy_validate(uint16_t port_id,
 }
 
 /* MTR meter policy add */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_add, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_add, 21.05);
 int
 rte_mtr_meter_policy_add(uint16_t port_id,
 	uint32_t policy_id,
@@ -183,7 +183,7 @@ rte_mtr_meter_policy_add(uint16_t port_id,
 }
 
 /** MTR meter policy delete */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_delete, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_delete, 21.05);
 int
 rte_mtr_meter_policy_delete(uint16_t port_id,
 	uint32_t policy_id,
@@ -200,7 +200,7 @@ rte_mtr_meter_policy_delete(uint16_t port_id,
 }
 
 /** MTR meter policy get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_get, 22.11);
 struct rte_flow_meter_policy *
 rte_mtr_meter_policy_get(uint16_t port_id,
 	uint32_t policy_id,
@@ -217,7 +217,7 @@ rte_mtr_meter_policy_get(uint16_t port_id,
 }
 
 /** MTR object create */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_create, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_create, 17.11);
 int
 rte_mtr_create(uint16_t port_id,
 	uint32_t mtr_id,
@@ -236,7 +236,7 @@ rte_mtr_create(uint16_t port_id,
 }
 
 /** MTR object destroy */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_destroy, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_destroy, 17.11);
 int
 rte_mtr_destroy(uint16_t port_id,
 	uint32_t mtr_id,
@@ -253,7 +253,7 @@ rte_mtr_destroy(uint16_t port_id,
 }
 
 /** MTR object meter enable */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_enable, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_enable, 17.11);
 int
 rte_mtr_meter_enable(uint16_t port_id,
 	uint32_t mtr_id,
@@ -270,7 +270,7 @@ rte_mtr_meter_enable(uint16_t port_id,
 }
 
 /** MTR object meter disable */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_disable, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_disable, 17.11);
 int
 rte_mtr_meter_disable(uint16_t port_id,
 	uint32_t mtr_id,
@@ -287,7 +287,7 @@ rte_mtr_meter_disable(uint16_t port_id,
 }
 
 /** MTR object meter profile update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_update, 17.11);
 int
 rte_mtr_meter_profile_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -305,7 +305,7 @@ rte_mtr_meter_profile_update(uint16_t port_id,
 }
 
 /** MTR object meter policy update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_update, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_update, 21.05);
 int
 rte_mtr_meter_policy_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -323,7 +323,7 @@ rte_mtr_meter_policy_update(uint16_t port_id,
 }
 
 /** MTR object meter DSCP table update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_dscp_table_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_dscp_table_update, 17.11);
 int
 rte_mtr_meter_dscp_table_update(uint16_t port_id,
 	uint32_t mtr_id, enum rte_mtr_color_in_protocol proto,
@@ -341,7 +341,7 @@ rte_mtr_meter_dscp_table_update(uint16_t port_id,
 }
 
 /** MTR object meter VLAN table update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_vlan_table_update, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_vlan_table_update, 22.07);
 int
 rte_mtr_meter_vlan_table_update(uint16_t port_id,
 	uint32_t mtr_id, enum rte_mtr_color_in_protocol proto,
@@ -359,7 +359,7 @@ rte_mtr_meter_vlan_table_update(uint16_t port_id,
 }
 
 /** Set the input color protocol on MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_set, 22.07);
 int
 rte_mtr_color_in_protocol_set(uint16_t port_id,
 	uint32_t mtr_id,
@@ -378,7 +378,7 @@ rte_mtr_color_in_protocol_set(uint16_t port_id,
 }
 
 /** Get input color protocols of MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_get, 22.07);
 int
 rte_mtr_color_in_protocol_get(uint16_t port_id,
 	uint32_t mtr_id,
@@ -396,7 +396,7 @@ rte_mtr_color_in_protocol_get(uint16_t port_id,
 }
 
 /** Get input color protocol priority of MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_priority_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_priority_get, 22.07);
 int
 rte_mtr_color_in_protocol_priority_get(uint16_t port_id,
 	uint32_t mtr_id,
@@ -415,7 +415,7 @@ rte_mtr_color_in_protocol_priority_get(uint16_t port_id,
 }
 
 /** MTR object enabled stats update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_update, 17.11);
 int
 rte_mtr_stats_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -433,7 +433,7 @@ rte_mtr_stats_update(uint16_t port_id,
 }
 
 /** MTR object stats read */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_read, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_read, 17.11);
 int
 rte_mtr_stats_read(uint16_t port_id,
 	uint32_t mtr_id,
diff --git a/lib/ethdev/rte_tm.c b/lib/ethdev/rte_tm.c
index 66b8934c3b..cb858deff9 100644
--- a/lib/ethdev/rte_tm.c
+++ b/lib/ethdev/rte_tm.c
@@ -59,7 +59,7 @@ __extension__ ({					\
 })
 
 /* Get number of leaf nodes */
-RTE_EXPORT_SYMBOL(rte_tm_get_number_of_leaf_nodes)
+RTE_EXPORT_SYMBOL(rte_tm_get_number_of_leaf_nodes);
 int
 rte_tm_get_number_of_leaf_nodes(uint16_t port_id,
 	uint32_t *n_leaf_nodes,
@@ -89,7 +89,7 @@ rte_tm_get_number_of_leaf_nodes(uint16_t port_id,
 }
 
 /* Check node type (leaf or non-leaf) */
-RTE_EXPORT_SYMBOL(rte_tm_node_type_get)
+RTE_EXPORT_SYMBOL(rte_tm_node_type_get);
 int
 rte_tm_node_type_get(uint16_t port_id,
 	uint32_t node_id,
@@ -107,7 +107,7 @@ rte_tm_node_type_get(uint16_t port_id,
 }
 
 /* Get capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_capabilities_get);
 int rte_tm_capabilities_get(uint16_t port_id,
 	struct rte_tm_capabilities *cap,
 	struct rte_tm_error *error)
@@ -123,7 +123,7 @@ int rte_tm_capabilities_get(uint16_t port_id,
 }
 
 /* Get level capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_level_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_level_capabilities_get);
 int rte_tm_level_capabilities_get(uint16_t port_id,
 	uint32_t level_id,
 	struct rte_tm_level_capabilities *cap,
@@ -140,7 +140,7 @@ int rte_tm_level_capabilities_get(uint16_t port_id,
 }
 
 /* Get node capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_node_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_node_capabilities_get);
 int rte_tm_node_capabilities_get(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_node_capabilities *cap,
@@ -157,7 +157,7 @@ int rte_tm_node_capabilities_get(uint16_t port_id,
 }
 
 /* Add WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_wred_profile_add)
+RTE_EXPORT_SYMBOL(rte_tm_wred_profile_add);
 int rte_tm_wred_profile_add(uint16_t port_id,
 	uint32_t wred_profile_id,
 	const struct rte_tm_wred_params *profile,
@@ -174,7 +174,7 @@ int rte_tm_wred_profile_add(uint16_t port_id,
 }
 
 /* Delete WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_wred_profile_delete)
+RTE_EXPORT_SYMBOL(rte_tm_wred_profile_delete);
 int rte_tm_wred_profile_delete(uint16_t port_id,
 	uint32_t wred_profile_id,
 	struct rte_tm_error *error)
@@ -190,7 +190,7 @@ int rte_tm_wred_profile_delete(uint16_t port_id,
 }
 
 /* Add/update shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_add_update)
+RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_add_update);
 int rte_tm_shared_wred_context_add_update(uint16_t port_id,
 	uint32_t shared_wred_context_id,
 	uint32_t wred_profile_id,
@@ -209,7 +209,7 @@ int rte_tm_shared_wred_context_add_update(uint16_t port_id,
 }
 
 /* Delete shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_delete);
 int rte_tm_shared_wred_context_delete(uint16_t port_id,
 	uint32_t shared_wred_context_id,
 	struct rte_tm_error *error)
@@ -226,7 +226,7 @@ int rte_tm_shared_wred_context_delete(uint16_t port_id,
 }
 
 /* Add shaper profile */
-RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_add)
+RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_add);
 int rte_tm_shaper_profile_add(uint16_t port_id,
 	uint32_t shaper_profile_id,
 	const struct rte_tm_shaper_params *profile,
@@ -244,7 +244,7 @@ int rte_tm_shaper_profile_add(uint16_t port_id,
 }
 
 /* Delete WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_delete);
 int rte_tm_shaper_profile_delete(uint16_t port_id,
 	uint32_t shaper_profile_id,
 	struct rte_tm_error *error)
@@ -260,7 +260,7 @@ int rte_tm_shaper_profile_delete(uint16_t port_id,
 }
 
 /* Add shared shaper */
-RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_add_update)
+RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_add_update);
 int rte_tm_shared_shaper_add_update(uint16_t port_id,
 	uint32_t shared_shaper_id,
 	uint32_t shaper_profile_id,
@@ -278,7 +278,7 @@ int rte_tm_shared_shaper_add_update(uint16_t port_id,
 }
 
 /* Delete shared shaper */
-RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_delete);
 int rte_tm_shared_shaper_delete(uint16_t port_id,
 	uint32_t shared_shaper_id,
 	struct rte_tm_error *error)
@@ -294,7 +294,7 @@ int rte_tm_shared_shaper_delete(uint16_t port_id,
 }
 
 /* Add node to port traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_node_add)
+RTE_EXPORT_SYMBOL(rte_tm_node_add);
 int rte_tm_node_add(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t parent_node_id,
@@ -316,7 +316,7 @@ int rte_tm_node_add(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tm_node_query, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tm_node_query, 24.11);
 int rte_tm_node_query(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t *parent_node_id,
@@ -340,7 +340,7 @@ int rte_tm_node_query(uint16_t port_id,
 }
 
 /* Delete node from traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_node_delete)
+RTE_EXPORT_SYMBOL(rte_tm_node_delete);
 int rte_tm_node_delete(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -356,7 +356,7 @@ int rte_tm_node_delete(uint16_t port_id,
 }
 
 /* Suspend node */
-RTE_EXPORT_SYMBOL(rte_tm_node_suspend)
+RTE_EXPORT_SYMBOL(rte_tm_node_suspend);
 int rte_tm_node_suspend(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -372,7 +372,7 @@ int rte_tm_node_suspend(uint16_t port_id,
 }
 
 /* Resume node */
-RTE_EXPORT_SYMBOL(rte_tm_node_resume)
+RTE_EXPORT_SYMBOL(rte_tm_node_resume);
 int rte_tm_node_resume(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -388,7 +388,7 @@ int rte_tm_node_resume(uint16_t port_id,
 }
 
 /* Commit the initial port traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_hierarchy_commit)
+RTE_EXPORT_SYMBOL(rte_tm_hierarchy_commit);
 int rte_tm_hierarchy_commit(uint16_t port_id,
 	int clear_on_fail,
 	struct rte_tm_error *error)
@@ -404,7 +404,7 @@ int rte_tm_hierarchy_commit(uint16_t port_id,
 }
 
 /* Update node parent  */
-RTE_EXPORT_SYMBOL(rte_tm_node_parent_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_parent_update);
 int rte_tm_node_parent_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t parent_node_id,
@@ -424,7 +424,7 @@ int rte_tm_node_parent_update(uint16_t port_id,
 }
 
 /* Update node private shaper */
-RTE_EXPORT_SYMBOL(rte_tm_node_shaper_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shaper_update);
 int rte_tm_node_shaper_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shaper_profile_id,
@@ -442,7 +442,7 @@ int rte_tm_node_shaper_update(uint16_t port_id,
 }
 
 /* Update node shared shapers */
-RTE_EXPORT_SYMBOL(rte_tm_node_shared_shaper_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shared_shaper_update);
 int rte_tm_node_shared_shaper_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shared_shaper_id,
@@ -461,7 +461,7 @@ int rte_tm_node_shared_shaper_update(uint16_t port_id,
 }
 
 /* Update node stats */
-RTE_EXPORT_SYMBOL(rte_tm_node_stats_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_stats_update);
 int rte_tm_node_stats_update(uint16_t port_id,
 	uint32_t node_id,
 	uint64_t stats_mask,
@@ -478,7 +478,7 @@ int rte_tm_node_stats_update(uint16_t port_id,
 }
 
 /* Update WFQ weight mode */
-RTE_EXPORT_SYMBOL(rte_tm_node_wfq_weight_mode_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_wfq_weight_mode_update);
 int rte_tm_node_wfq_weight_mode_update(uint16_t port_id,
 	uint32_t node_id,
 	int *wfq_weight_mode,
@@ -498,7 +498,7 @@ int rte_tm_node_wfq_weight_mode_update(uint16_t port_id,
 }
 
 /* Update node congestion management mode */
-RTE_EXPORT_SYMBOL(rte_tm_node_cman_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_cman_update);
 int rte_tm_node_cman_update(uint16_t port_id,
 	uint32_t node_id,
 	enum rte_tm_cman_mode cman,
@@ -515,7 +515,7 @@ int rte_tm_node_cman_update(uint16_t port_id,
 }
 
 /* Update node private WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_node_wred_context_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_wred_context_update);
 int rte_tm_node_wred_context_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t wred_profile_id,
@@ -533,7 +533,7 @@ int rte_tm_node_wred_context_update(uint16_t port_id,
 }
 
 /* Update node shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_node_shared_wred_context_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shared_wred_context_update);
 int rte_tm_node_shared_wred_context_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shared_wred_context_id,
@@ -553,7 +553,7 @@ int rte_tm_node_shared_wred_context_update(uint16_t port_id,
 }
 
 /* Read and/or clear stats counters for specific node */
-RTE_EXPORT_SYMBOL(rte_tm_node_stats_read)
+RTE_EXPORT_SYMBOL(rte_tm_node_stats_read);
 int rte_tm_node_stats_read(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_node_stats *stats,
@@ -573,7 +573,7 @@ int rte_tm_node_stats_read(uint16_t port_id,
 }
 
 /* Packet marking - VLAN DEI */
-RTE_EXPORT_SYMBOL(rte_tm_mark_vlan_dei)
+RTE_EXPORT_SYMBOL(rte_tm_mark_vlan_dei);
 int rte_tm_mark_vlan_dei(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
@@ -592,7 +592,7 @@ int rte_tm_mark_vlan_dei(uint16_t port_id,
 }
 
 /* Packet marking - IPv4/IPv6 ECN */
-RTE_EXPORT_SYMBOL(rte_tm_mark_ip_ecn)
+RTE_EXPORT_SYMBOL(rte_tm_mark_ip_ecn);
 int rte_tm_mark_ip_ecn(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
@@ -611,7 +611,7 @@ int rte_tm_mark_ip_ecn(uint16_t port_id,
 }
 
 /* Packet marking - IPv4/IPv6 DSCP */
-RTE_EXPORT_SYMBOL(rte_tm_mark_ip_dscp)
+RTE_EXPORT_SYMBOL(rte_tm_mark_ip_dscp);
 int rte_tm_mark_ip_dscp(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index dffd2c71d0..10fb0bf1c7 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -107,7 +107,7 @@ dummy_event_port_preschedule_hint(__rte_unused void *port,
 {
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_reset);
 void
 event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 {
@@ -131,7 +131,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 	*fp_op = dummy;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_set)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_set);
 void
 event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 		     const struct rte_eventdev *dev)
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index ade6723b7b..5cfd23221a 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -38,27 +38,27 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_stop,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_close,
 	lib.eventdev.close)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_enq_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_enq_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_enq_burst,
 	lib.eventdev.enq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_deq_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_deq_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
 	lib.eventdev.deq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_maintain)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_maintain);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
 	lib.eventdev.maintain)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_profile_switch, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_profile_switch, 23.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
 	lib.eventdev.port.profile.switch)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule_modify, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule_modify, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule_modify,
 	lib.eventdev.port.preschedule.modify)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule,
 	lib.eventdev.port.preschedule)
 
@@ -103,7 +103,7 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_start,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_stop,
 	lib.eventdev.tx.adapter.stop)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_eth_tx_adapter_enqueue)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_eth_tx_adapter_enqueue);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_enqueue,
 	lib.eventdev.tx.adapter.enq)
 
@@ -120,15 +120,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_adapter_stop,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_adapter_free,
 	lib.eventdev.timer.free)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_arm_burst,
 	lib.eventdev.timer.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_tmo_tick_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_tmo_tick_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_arm_tmo_tick_burst,
 	lib.eventdev.timer.tick.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_cancel_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_cancel_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_cancel_burst,
 	lib.eventdev.timer.cancel)
 
@@ -151,7 +151,7 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
 	lib.eventdev.crypto.stop)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_crypto_adapter_enqueue)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_crypto_adapter_enqueue);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
 	lib.eventdev.crypto.enq)
 
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index b827a0ffd6..aadf992570 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -363,7 +363,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create_ext);
 int
 rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_crypto_adapter_conf_cb conf_cb,
@@ -439,7 +439,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create);
 int
 rte_event_crypto_adapter_create(uint8_t id, uint8_t dev_id,
 				struct rte_event_port_conf *port_config,
@@ -468,7 +468,7 @@ rte_event_crypto_adapter_create(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_free);
 int
 rte_event_crypto_adapter_free(uint8_t id)
 {
@@ -1040,7 +1040,7 @@ eca_add_queue_pair(struct event_crypto_adapter *adapter, uint8_t cdev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_add)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_add);
 int
 rte_event_crypto_adapter_queue_pair_add(uint8_t id,
 			uint8_t cdev_id,
@@ -1195,7 +1195,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_del)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_del);
 int
 rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id,
 					int32_t queue_pair_id)
@@ -1321,7 +1321,7 @@ eca_adapter_ctrl(uint8_t id, int start)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_start);
 int
 rte_event_crypto_adapter_start(uint8_t id)
 {
@@ -1336,7 +1336,7 @@ rte_event_crypto_adapter_start(uint8_t id)
 	return eca_adapter_ctrl(id, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stop);
 int
 rte_event_crypto_adapter_stop(uint8_t id)
 {
@@ -1344,7 +1344,7 @@ rte_event_crypto_adapter_stop(uint8_t id)
 	return eca_adapter_ctrl(id, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_get);
 int
 rte_event_crypto_adapter_stats_get(uint8_t id,
 				struct rte_event_crypto_adapter_stats *stats)
@@ -1397,7 +1397,7 @@ rte_event_crypto_adapter_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_reset);
 int
 rte_event_crypto_adapter_stats_reset(uint8_t id)
 {
@@ -1430,7 +1430,7 @@ rte_event_crypto_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_init, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_init(
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1469,7 +1469,7 @@ crypto_adapter_cap_check(struct event_crypto_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_set, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1502,7 +1502,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_get, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1534,7 +1534,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_service_id_get);
 int
 rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1554,7 +1554,7 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_event_port_get);
 int
 rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -1573,7 +1573,7 @@ rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_vector_limits_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_vector_limits_get);
 int
 rte_event_crypto_adapter_vector_limits_get(
 	uint8_t dev_id, uint16_t cdev_id,
diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index cb799f3410..b8b1fa88d5 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -341,7 +341,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create_ext, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create_ext, 23.11);
 int
 rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id,
 				 rte_event_dma_adapter_conf_cb conf_cb,
@@ -435,7 +435,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create, 23.11);
 int
 rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config,
 			    enum rte_event_dma_adapter_mode mode)
@@ -460,7 +460,7 @@ rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_free, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_free, 23.11);
 int
 rte_event_dma_adapter_free(uint8_t id)
 {
@@ -481,7 +481,7 @@ rte_event_dma_adapter_free(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_event_port_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_event_port_get, 23.11);
 int
 rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -988,7 +988,7 @@ edma_add_vchan(struct event_dma_adapter *adapter, int16_t dma_dev_id, uint16_t v
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_add, 23.11);
 int
 rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
 				const struct rte_event *event)
@@ -1103,7 +1103,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_del, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_del, 23.11);
 int
 rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
 {
@@ -1170,7 +1170,7 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_service_id_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_service_id_get, 23.11);
 int
 rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1230,7 +1230,7 @@ edma_adapter_ctrl(uint8_t id, int start)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_start, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_start, 23.11);
 int
 rte_event_dma_adapter_start(uint8_t id)
 {
@@ -1245,7 +1245,7 @@ rte_event_dma_adapter_start(uint8_t id)
 	return edma_adapter_ctrl(id, 1);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stop, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stop, 23.11);
 int
 rte_event_dma_adapter_stop(uint8_t id)
 {
@@ -1254,7 +1254,7 @@ rte_event_dma_adapter_stop(uint8_t id)
 
 #define DEFAULT_MAX_NB 128
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_init, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_init, 23.11);
 int
 rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params)
 {
@@ -1290,7 +1290,7 @@ dma_adapter_cap_check(struct event_dma_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_set, 23.11);
 int
 rte_event_dma_adapter_runtime_params_set(uint8_t id,
 					 struct rte_event_dma_adapter_runtime_params *params)
@@ -1320,7 +1320,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_get, 23.11);
 int
 rte_event_dma_adapter_runtime_params_get(uint8_t id,
 					 struct rte_event_dma_adapter_runtime_params *params)
@@ -1348,7 +1348,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_get, 23.11);
 int
 rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats)
 {
@@ -1394,7 +1394,7 @@ rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_reset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_reset, 23.11);
 int
 rte_event_dma_adapter_stats_reset(uint8_t id)
 {
@@ -1427,7 +1427,7 @@ rte_event_dma_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_enqueue, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_enqueue, 23.11);
 uint16_t
 rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
 			      uint16_t nb_events)
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 994f256322..cffc28b71d 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -2519,7 +2519,7 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_ext);
 int
 rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_eth_rx_adapter_conf_cb conf_cb,
@@ -2534,7 +2534,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_with_params)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_with_params);
 int
 rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
 			struct rte_event_port_conf *port_config,
@@ -2567,7 +2567,7 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_create_ext_with_params, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_create_ext_with_params, 23.11);
 int
 rte_event_eth_rx_adapter_create_ext_with_params(uint8_t id, uint8_t dev_id,
 			rte_event_eth_rx_adapter_conf_cb conf_cb,
@@ -2584,7 +2584,7 @@ rte_event_eth_rx_adapter_create_ext_with_params(uint8_t id, uint8_t dev_id,
 	return rxa_create(id, dev_id, &temp_params, conf_cb, conf_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create);
 int
 rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
 		struct rte_event_port_conf *port_config)
@@ -2610,7 +2610,7 @@ rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_free);
 int
 rte_event_eth_rx_adapter_free(uint8_t id)
 {
@@ -2643,7 +2643,7 @@ rte_event_eth_rx_adapter_free(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_add)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_add);
 int
 rte_event_eth_rx_adapter_queue_add(uint8_t id,
 		uint16_t eth_dev_id,
@@ -2797,7 +2797,7 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_queues_add, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_queues_add, 25.03);
 int
 rte_event_eth_rx_adapter_queues_add(uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id[],
 				    const struct rte_event_eth_rx_adapter_queue_conf queue_conf[],
@@ -2969,7 +2969,7 @@ rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_del)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_del);
 int
 rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
 				int32_t rx_queue_id)
@@ -3098,7 +3098,7 @@ rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_vector_limits_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_vector_limits_get);
 int
 rte_event_eth_rx_adapter_vector_limits_get(
 	uint8_t dev_id, uint16_t eth_port_id,
@@ -3140,7 +3140,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_start);
 int
 rte_event_eth_rx_adapter_start(uint8_t id)
 {
@@ -3148,7 +3148,7 @@ rte_event_eth_rx_adapter_start(uint8_t id)
 	return rxa_ctrl(id, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stop);
 int
 rte_event_eth_rx_adapter_stop(uint8_t id)
 {
@@ -3165,7 +3165,7 @@ rxa_queue_stats_reset(struct eth_rx_queue_info *queue_info)
 	memset(q_stats, 0, sizeof(*q_stats));
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_get);
 int
 rte_event_eth_rx_adapter_stats_get(uint8_t id,
 			       struct rte_event_eth_rx_adapter_stats *stats)
@@ -3240,7 +3240,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_get);
 int
 rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
 		uint16_t eth_dev_id,
@@ -3305,7 +3305,7 @@ rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_reset);
 int
 rte_event_eth_rx_adapter_stats_reset(uint8_t id)
 {
@@ -3353,7 +3353,7 @@ rte_event_eth_rx_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_reset);
 int
 rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
 		uint16_t eth_dev_id,
@@ -3408,7 +3408,7 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_service_id_get);
 int
 rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -3431,7 +3431,7 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return rx_adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_event_port_get);
 int
 rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -3454,7 +3454,7 @@ rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return rx_adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_cb_register)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_cb_register);
 int
 rte_event_eth_rx_adapter_cb_register(uint8_t id,
 					uint16_t eth_dev_id,
@@ -3503,7 +3503,7 @@ rte_event_eth_rx_adapter_cb_register(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_conf_get);
 int
 rte_event_eth_rx_adapter_queue_conf_get(uint8_t id,
 			uint16_t eth_dev_id,
@@ -3605,7 +3605,7 @@ rxa_is_queue_added(struct event_eth_rx_adapter *rx_adapter,
 #define rxa_dev_instance_get(rx_adapter) \
 		rxa_evdev((rx_adapter))->dev_ops->eth_rx_adapter_instance_get
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_instance_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_instance_get);
 int
 rte_event_eth_rx_adapter_instance_get(uint16_t eth_dev_id,
 				      uint16_t rx_queue_id,
@@ -3684,7 +3684,7 @@ rxa_caps_check(struct event_eth_rx_adapter *rxa)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_init, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_init(
 		struct rte_event_eth_rx_adapter_runtime_params *params)
@@ -3698,7 +3698,7 @@ rte_event_eth_rx_adapter_runtime_params_init(
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_set, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_eth_rx_adapter_runtime_params *params)
@@ -3727,7 +3727,7 @@ rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_get, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_eth_rx_adapter_runtime_params *params)
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 83b6af0955..bcc573c155 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -1039,7 +1039,7 @@ txa_service_stop(uint8_t id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create);
 int
 rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
 				struct rte_event_port_conf *port_conf)
@@ -1084,7 +1084,7 @@ rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create_ext);
 int
 rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_eth_tx_adapter_conf_cb conf_cb,
@@ -1129,7 +1129,7 @@ rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_event_port_get);
 int
 rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -1140,7 +1140,7 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return txa_service_event_port_get(id, event_port_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_free);
 int
 rte_event_eth_tx_adapter_free(uint8_t id)
 {
@@ -1160,7 +1160,7 @@ rte_event_eth_tx_adapter_free(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_add)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_add);
 int
 rte_event_eth_tx_adapter_queue_add(uint8_t id,
 				uint16_t eth_dev_id,
@@ -1194,7 +1194,7 @@ rte_event_eth_tx_adapter_queue_add(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_del)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_del);
 int
 rte_event_eth_tx_adapter_queue_del(uint8_t id,
 				uint16_t eth_dev_id,
@@ -1227,7 +1227,7 @@ rte_event_eth_tx_adapter_queue_del(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_service_id_get);
 int
 rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1236,7 +1236,7 @@ rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return txa_service_id_get(id, service_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_start);
 int
 rte_event_eth_tx_adapter_start(uint8_t id)
 {
@@ -1251,7 +1251,7 @@ rte_event_eth_tx_adapter_start(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_get);
 int
 rte_event_eth_tx_adapter_stats_get(uint8_t id,
 				struct rte_event_eth_tx_adapter_stats *stats)
@@ -1288,7 +1288,7 @@ rte_event_eth_tx_adapter_stats_get(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_reset);
 int
 rte_event_eth_tx_adapter_stats_reset(uint8_t id)
 {
@@ -1306,7 +1306,7 @@ rte_event_eth_tx_adapter_stats_reset(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_init, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_init(
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1333,7 +1333,7 @@ txa_caps_check(struct txa_service_data *txa)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_set, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1365,7 +1365,7 @@ rte_event_eth_tx_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_get, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1397,7 +1397,7 @@ rte_event_eth_tx_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stop);
 int
 rte_event_eth_tx_adapter_stop(uint8_t id)
 {
@@ -1412,7 +1412,7 @@ rte_event_eth_tx_adapter_stop(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_instance_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_instance_get);
 int
 rte_event_eth_tx_adapter_instance_get(uint16_t eth_dev_id,
 				      uint16_t tx_queue_id,
@@ -1546,7 +1546,7 @@ txa_queue_start_state_set(uint16_t eth_dev_id, uint16_t tx_queue_id,
 					    start_state, txa);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_start);
 int
 rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id)
 {
@@ -1555,7 +1555,7 @@ rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id)
 	return txa_queue_start_state_set(eth_dev_id, tx_queue_id, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_stop);
 int
 rte_event_eth_tx_adapter_queue_stop(uint16_t eth_dev_id, uint16_t tx_queue_id)
 {
diff --git a/lib/eventdev/rte_event_ring.c b/lib/eventdev/rte_event_ring.c
index 5718985486..1a0ea149d7 100644
--- a/lib/eventdev/rte_event_ring.c
+++ b/lib/eventdev/rte_event_ring.c
@@ -8,7 +8,7 @@
 #include "rte_event_ring.h"
 #include "eventdev_trace.h"
 
-RTE_EXPORT_SYMBOL(rte_event_ring_init)
+RTE_EXPORT_SYMBOL(rte_event_ring_init);
 int
 rte_event_ring_init(struct rte_event_ring *r, const char *name,
 	unsigned int count, unsigned int flags)
@@ -24,7 +24,7 @@ rte_event_ring_init(struct rte_event_ring *r, const char *name,
 }
 
 /* create the ring */
-RTE_EXPORT_SYMBOL(rte_event_ring_create)
+RTE_EXPORT_SYMBOL(rte_event_ring_create);
 struct rte_event_ring *
 rte_event_ring_create(const char *name, unsigned int count, int socket_id,
 		unsigned int flags)
@@ -37,7 +37,7 @@ rte_event_ring_create(const char *name, unsigned int count, int socket_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_ring_lookup)
+RTE_EXPORT_SYMBOL(rte_event_ring_lookup);
 struct rte_event_ring *
 rte_event_ring_lookup(const char *name)
 {
@@ -47,7 +47,7 @@ rte_event_ring_lookup(const char *name)
 }
 
 /* free the ring */
-RTE_EXPORT_SYMBOL(rte_event_ring_free)
+RTE_EXPORT_SYMBOL(rte_event_ring_free);
 void
 rte_event_ring_free(struct rte_event_ring *r)
 {
diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
index 06ce478d90..5b8b2c1fcd 100644
--- a/lib/eventdev/rte_event_timer_adapter.c
+++ b/lib/eventdev/rte_event_timer_adapter.c
@@ -133,7 +133,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_create(const struct rte_event_timer_adapter_conf *conf)
 {
@@ -141,7 +141,7 @@ rte_event_timer_adapter_create(const struct rte_event_timer_adapter_conf *conf)
 						  NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create_ext);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_create_ext(
 		const struct rte_event_timer_adapter_conf *conf,
@@ -267,7 +267,7 @@ rte_event_timer_adapter_create_ext(
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_get_info)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_get_info);
 int
 rte_event_timer_adapter_get_info(const struct rte_event_timer_adapter *adapter,
 		struct rte_event_timer_adapter_info *adapter_info)
@@ -288,7 +288,7 @@ rte_event_timer_adapter_get_info(const struct rte_event_timer_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_start);
 int
 rte_event_timer_adapter_start(const struct rte_event_timer_adapter *adapter)
 {
@@ -312,7 +312,7 @@ rte_event_timer_adapter_start(const struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stop);
 int
 rte_event_timer_adapter_stop(const struct rte_event_timer_adapter *adapter)
 {
@@ -336,7 +336,7 @@ rte_event_timer_adapter_stop(const struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_lookup)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_lookup);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_lookup(uint16_t adapter_id)
 {
@@ -404,7 +404,7 @@ rte_event_timer_adapter_lookup(uint16_t adapter_id)
 	return adapter;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_free);
 int
 rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 {
@@ -446,7 +446,7 @@ rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_service_id_get);
 int
 rte_event_timer_adapter_service_id_get(struct rte_event_timer_adapter *adapter,
 				       uint32_t *service_id)
@@ -464,7 +464,7 @@ rte_event_timer_adapter_service_id_get(struct rte_event_timer_adapter *adapter,
 	return adapter->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_get);
 int
 rte_event_timer_adapter_stats_get(struct rte_event_timer_adapter *adapter,
 				  struct rte_event_timer_adapter_stats *stats)
@@ -479,7 +479,7 @@ rte_event_timer_adapter_stats_get(struct rte_event_timer_adapter *adapter,
 	return adapter->ops->stats_get(adapter, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_reset);
 int
 rte_event_timer_adapter_stats_reset(struct rte_event_timer_adapter *adapter)
 {
@@ -490,7 +490,7 @@ rte_event_timer_adapter_stats_reset(struct rte_event_timer_adapter *adapter)
 	return adapter->ops->stats_reset(adapter);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_timer_remaining_ticks_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_timer_remaining_ticks_get, 23.03);
 int
 rte_event_timer_remaining_ticks_get(
 			const struct rte_event_timer_adapter *adapter,
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
index ad764e2882..24a7a063ce 100644
--- a/lib/eventdev/rte_event_vector_adapter.c
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -151,14 +151,14 @@ default_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id, void *conf_ar
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf)
 {
 	return rte_event_vector_adapter_create_ext(conf, default_port_conf_cb, NULL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create_ext, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create_ext, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
 				    rte_event_vector_adapter_port_conf_cb_t conf_cb, void *conf_arg)
@@ -304,7 +304,7 @@ rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_lookup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_lookup, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_lookup(uint32_t adapter_id)
 {
@@ -372,7 +372,7 @@ rte_event_vector_adapter_lookup(uint32_t adapter_id)
 	return adapter;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_service_id_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_service_id_get, 25.07);
 int
 rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
 					uint32_t *service_id)
@@ -385,7 +385,7 @@ rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter
 	return adapter->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_destroy, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_destroy, 25.07);
 int
 rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
 {
@@ -414,7 +414,7 @@ rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_info_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_info_get, 25.07);
 int
 rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_adapter_info *info)
 {
@@ -429,7 +429,7 @@ rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_conf_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_conf_get, 25.07);
 int
 rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
 				  struct rte_event_vector_adapter_conf *conf)
@@ -441,7 +441,7 @@ rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_remaining, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_remaining, 25.07);
 uint8_t
 rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
 {
@@ -461,7 +461,7 @@ rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
 	return remaining;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_get, 25.07);
 int
 rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
 				   struct rte_event_vector_adapter_stats *stats)
@@ -476,7 +476,7 @@ rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_reset, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_reset, 25.07);
 int
 rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
 {
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index b921142d7b..9325d5880d 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -30,12 +30,12 @@
 #include "eventdev_pmd.h"
 #include "eventdev_trace.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_event_logtype, INFO);
 
 static struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eventdevs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eventdevs);
 struct rte_eventdev *rte_eventdevs = rte_event_devices;
 
 static struct rte_eventdev_global eventdev_globals = {
@@ -43,19 +43,19 @@ static struct rte_eventdev_global eventdev_globals = {
 };
 
 /* Public fastpath APIs. */
-RTE_EXPORT_SYMBOL(rte_event_fp_ops)
+RTE_EXPORT_SYMBOL(rte_event_fp_ops);
 struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
 
 /* Event dev north bound API implementation */
 
-RTE_EXPORT_SYMBOL(rte_event_dev_count)
+RTE_EXPORT_SYMBOL(rte_event_dev_count);
 uint8_t
 rte_event_dev_count(void)
 {
 	return eventdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_event_dev_get_dev_id);
 int
 rte_event_dev_get_dev_id(const char *name)
 {
@@ -80,7 +80,7 @@ rte_event_dev_get_dev_id(const char *name)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_socket_id)
+RTE_EXPORT_SYMBOL(rte_event_dev_socket_id);
 int
 rte_event_dev_socket_id(uint8_t dev_id)
 {
@@ -94,7 +94,7 @@ rte_event_dev_socket_id(uint8_t dev_id)
 	return dev->data->socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_info_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_info_get);
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 {
@@ -123,7 +123,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_caps_get);
 int
 rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 				uint32_t *caps)
@@ -150,7 +150,7 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 		: 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_caps_get);
 int
 rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 {
@@ -176,7 +176,7 @@ rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 		: 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_caps_get);
 int
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps)
@@ -205,7 +205,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 		dev->dev_ops->crypto_adapter_caps_get(dev, cdev, caps) : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_caps_get);
 int
 rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 				uint32_t *caps)
@@ -234,7 +234,7 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 		: 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_caps_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_caps_get, 23.11);
 int
 rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *caps)
 {
@@ -257,7 +257,7 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_caps_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_caps_get, 25.07);
 int
 rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 {
@@ -374,7 +374,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_configure)
+RTE_EXPORT_SYMBOL(rte_event_dev_configure);
 int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
@@ -577,7 +577,7 @@ is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_default_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_queue_default_conf_get);
 int
 rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
 				 struct rte_event_queue_conf *queue_conf)
@@ -638,7 +638,7 @@ is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_queue_setup)
+RTE_EXPORT_SYMBOL(rte_event_queue_setup);
 int
 rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		      const struct rte_event_queue_conf *queue_conf)
@@ -710,7 +710,7 @@ is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_default_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_port_default_conf_get);
 int
 rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
 				 struct rte_event_port_conf *port_conf)
@@ -738,7 +738,7 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_setup)
+RTE_EXPORT_SYMBOL(rte_event_port_setup);
 int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		     const struct rte_event_port_conf *port_conf)
@@ -829,7 +829,7 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_quiesce)
+RTE_EXPORT_SYMBOL(rte_event_port_quiesce);
 void
 rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args)
@@ -850,7 +850,7 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		dev->dev_ops->port_quiesce(dev, dev->data->ports[port_id], release_cb, args);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_attr_get);
 int
 rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 		       uint32_t *attr_value)
@@ -881,7 +881,7 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_port_attr_get);
 int
 rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 			uint32_t *attr_value)
@@ -933,7 +933,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_queue_attr_get);
 int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value)
@@ -993,7 +993,7 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_attr_set)
+RTE_EXPORT_SYMBOL(rte_event_queue_attr_set);
 int
 rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			 uint64_t attr_value)
@@ -1022,7 +1022,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return dev->dev_ops->queue_attr_set(dev, queue_id, attr_id, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_link)
+RTE_EXPORT_SYMBOL(rte_event_port_link);
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
@@ -1031,7 +1031,7 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 	return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_set, 23.11);
 int
 rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
 				 const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
@@ -1114,7 +1114,7 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_unlink)
+RTE_EXPORT_SYMBOL(rte_event_port_unlink);
 int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 		      uint8_t queues[], uint16_t nb_unlinks)
@@ -1122,7 +1122,7 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 	return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_unlink, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_unlink, 23.11);
 int
 rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 			      uint16_t nb_unlinks, uint8_t profile_id)
@@ -1209,7 +1209,7 @@ rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_unlinks_in_progress)
+RTE_EXPORT_SYMBOL(rte_event_port_unlinks_in_progress);
 int
 rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
 {
@@ -1234,7 +1234,7 @@ rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
 	return dev->dev_ops->port_unlinks_in_progress(dev, dev->data->ports[port_id]);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_links_get)
+RTE_EXPORT_SYMBOL(rte_event_port_links_get);
 int
 rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[])
@@ -1267,7 +1267,7 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_get, 23.11);
 int
 rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 				 uint8_t priorities[], uint8_t profile_id)
@@ -1311,7 +1311,7 @@ rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dequeue_timeout_ticks)
+RTE_EXPORT_SYMBOL(rte_event_dequeue_timeout_ticks);
 int
 rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 				 uint64_t *timeout_ticks)
@@ -1331,7 +1331,7 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 	return dev->dev_ops->timeout_ticks(dev, ns, timeout_ticks);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_service_id_get);
 int
 rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
 {
@@ -1351,7 +1351,7 @@ rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
 	return dev->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_dump)
+RTE_EXPORT_SYMBOL(rte_event_dev_dump);
 int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
@@ -1379,7 +1379,7 @@ xstats_get_count(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_names_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_names_get);
 int
 rte_event_dev_xstats_names_get(uint8_t dev_id,
 		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
@@ -1404,7 +1404,7 @@ rte_event_dev_xstats_names_get(uint8_t dev_id,
 }
 
 /* retrieve eventdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_get);
 int
 rte_event_dev_xstats_get(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 		uint8_t queue_port_id, const uint64_t ids[],
@@ -1420,7 +1420,7 @@ rte_event_dev_xstats_get(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_by_name_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_by_name_get);
 uint64_t
 rte_event_dev_xstats_by_name_get(uint8_t dev_id, const char *name,
 		uint64_t *id)
@@ -1440,7 +1440,7 @@ rte_event_dev_xstats_by_name_get(uint8_t dev_id, const char *name,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_reset);
 int rte_event_dev_xstats_reset(uint8_t dev_id,
 		enum rte_event_dev_xstats_mode mode, int16_t queue_port_id,
 		const uint64_t ids[], uint32_t nb_ids)
@@ -1453,10 +1453,10 @@ int rte_event_dev_xstats_reset(uint8_t dev_id,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_selftest_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_selftest_seqn_dynfield_offset);
 int rte_event_pmd_selftest_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_SYMBOL(rte_event_dev_selftest)
+RTE_EXPORT_SYMBOL(rte_event_dev_selftest);
 int rte_event_dev_selftest(uint8_t dev_id)
 {
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -1477,7 +1477,7 @@ int rte_event_dev_selftest(uint8_t dev_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_vector_pool_create)
+RTE_EXPORT_SYMBOL(rte_event_vector_pool_create);
 struct rte_mempool *
 rte_event_vector_pool_create(const char *name, unsigned int n,
 			     unsigned int cache_size, uint16_t nb_elem,
@@ -1523,7 +1523,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_start)
+RTE_EXPORT_SYMBOL(rte_event_dev_start);
 int
 rte_event_dev_start(uint8_t dev_id)
 {
@@ -1555,7 +1555,7 @@ rte_event_dev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_stop_flush_callback_register)
+RTE_EXPORT_SYMBOL(rte_event_dev_stop_flush_callback_register);
 int
 rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
 					   rte_eventdev_stop_flush_t callback,
@@ -1576,7 +1576,7 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_stop)
+RTE_EXPORT_SYMBOL(rte_event_dev_stop);
 void
 rte_event_dev_stop(uint8_t dev_id)
 {
@@ -1601,7 +1601,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_close)
+RTE_EXPORT_SYMBOL(rte_event_dev_close);
 int
 rte_event_dev_close(uint8_t dev_id)
 {
@@ -1672,7 +1672,7 @@ eventdev_find_free_device_index(void)
 	return RTE_EVENT_MAX_DEVS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_allocate);
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1721,7 +1721,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
 	return eventdev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_release);
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev)
 {
@@ -1758,7 +1758,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_probing_finish);
 void
 event_dev_probing_finish(struct rte_eventdev *eventdev)
 {
diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c
index 184210f380..065ac7cd63 100644
--- a/lib/fib/rte_fib.c
+++ b/lib/fib/rte_fib.c
@@ -118,7 +118,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_add)
+RTE_EXPORT_SYMBOL(rte_fib_add);
 int
 rte_fib_add(struct rte_fib *fib, uint32_t ip, uint8_t depth, uint64_t next_hop)
 {
@@ -128,7 +128,7 @@ rte_fib_add(struct rte_fib *fib, uint32_t ip, uint8_t depth, uint64_t next_hop)
 	return fib->modify(fib, ip, depth, next_hop, RTE_FIB_ADD);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_delete)
+RTE_EXPORT_SYMBOL(rte_fib_delete);
 int
 rte_fib_delete(struct rte_fib *fib, uint32_t ip, uint8_t depth)
 {
@@ -138,7 +138,7 @@ rte_fib_delete(struct rte_fib *fib, uint32_t ip, uint8_t depth)
 	return fib->modify(fib, ip, depth, 0, RTE_FIB_DEL);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_fib_lookup_bulk);
 int
 rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips,
 	uint64_t *next_hops, int n)
@@ -150,7 +150,7 @@ rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_create)
+RTE_EXPORT_SYMBOL(rte_fib_create);
 struct rte_fib *
 rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 {
@@ -247,7 +247,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_find_existing)
+RTE_EXPORT_SYMBOL(rte_fib_find_existing);
 struct rte_fib *
 rte_fib_find_existing(const char *name)
 {
@@ -286,7 +286,7 @@ free_dataplane(struct rte_fib *fib)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_free)
+RTE_EXPORT_SYMBOL(rte_fib_free);
 void
 rte_fib_free(struct rte_fib *fib)
 {
@@ -316,21 +316,21 @@ rte_fib_free(struct rte_fib *fib)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_get_dp)
+RTE_EXPORT_SYMBOL(rte_fib_get_dp);
 void *
 rte_fib_get_dp(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->dp;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_get_rib)
+RTE_EXPORT_SYMBOL(rte_fib_get_rib);
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_select_lookup)
+RTE_EXPORT_SYMBOL(rte_fib_select_lookup);
 int
 rte_fib_select_lookup(struct rte_fib *fib,
 	enum rte_fib_lookup_type type)
@@ -350,7 +350,7 @@ rte_fib_select_lookup(struct rte_fib *fib,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fib_rcu_qsbr_add, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fib_rcu_qsbr_add, 24.11);
 int
 rte_fib_rcu_qsbr_add(struct rte_fib *fib, struct rte_fib_rcu_config *cfg)
 {
diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c
index 93a1c7197b..0b28dfee98 100644
--- a/lib/fib/rte_fib6.c
+++ b/lib/fib/rte_fib6.c
@@ -116,7 +116,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_add)
+RTE_EXPORT_SYMBOL(rte_fib6_add);
 int
 rte_fib6_add(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	uint8_t depth, uint64_t next_hop)
@@ -127,7 +127,7 @@ rte_fib6_add(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	return fib->modify(fib, ip, depth, next_hop, RTE_FIB6_ADD);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_delete)
+RTE_EXPORT_SYMBOL(rte_fib6_delete);
 int
 rte_fib6_delete(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	uint8_t depth)
@@ -138,7 +138,7 @@ rte_fib6_delete(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	return fib->modify(fib, ip, depth, 0, RTE_FIB6_DEL);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_fib6_lookup_bulk);
 int
 rte_fib6_lookup_bulk(struct rte_fib6 *fib,
 	const struct rte_ipv6_addr *ips,
@@ -150,7 +150,7 @@ rte_fib6_lookup_bulk(struct rte_fib6 *fib,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_create)
+RTE_EXPORT_SYMBOL(rte_fib6_create);
 struct rte_fib6 *
 rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 {
@@ -245,7 +245,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_find_existing)
+RTE_EXPORT_SYMBOL(rte_fib6_find_existing);
 struct rte_fib6 *
 rte_fib6_find_existing(const char *name)
 {
@@ -284,7 +284,7 @@ free_dataplane(struct rte_fib6 *fib)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_free)
+RTE_EXPORT_SYMBOL(rte_fib6_free);
 void
 rte_fib6_free(struct rte_fib6 *fib)
 {
@@ -314,21 +314,21 @@ rte_fib6_free(struct rte_fib6 *fib)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_get_dp)
+RTE_EXPORT_SYMBOL(rte_fib6_get_dp);
 void *
 rte_fib6_get_dp(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->dp;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_get_rib)
+RTE_EXPORT_SYMBOL(rte_fib6_get_rib);
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_select_lookup)
+RTE_EXPORT_SYMBOL(rte_fib6_select_lookup);
 int
 rte_fib6_select_lookup(struct rte_fib6 *fib,
 	enum rte_fib6_lookup_type type)
diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c
index 0473d9ffb3..58c9bd702b 100644
--- a/lib/gpudev/gpudev.c
+++ b/lib/gpudev/gpudev.c
@@ -50,7 +50,7 @@ struct rte_gpu_callback {
 static rte_rwlock_t gpu_callback_lock = RTE_RWLOCK_INITIALIZER;
 static void gpu_free_callbacks(struct rte_gpu *dev);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_init, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_init, 21.11);
 int
 rte_gpu_init(size_t dev_max)
 {
@@ -78,14 +78,14 @@ rte_gpu_init(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_count_avail, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_count_avail, 21.11);
 uint16_t
 rte_gpu_count_avail(void)
 {
 	return gpu_count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_is_valid, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_is_valid, 21.11);
 bool
 rte_gpu_is_valid(int16_t dev_id)
 {
@@ -103,7 +103,7 @@ gpu_match_parent(int16_t dev_id, int16_t parent)
 	return gpus[dev_id].mpshared->info.parent == parent;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_find_next, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_find_next, 21.11);
 int16_t
 rte_gpu_find_next(int16_t dev_id, int16_t parent)
 {
@@ -139,7 +139,7 @@ gpu_get_by_id(int16_t dev_id)
 	return &gpus[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_get_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_get_by_name);
 struct rte_gpu *
 rte_gpu_get_by_name(const char *name)
 {
@@ -182,7 +182,7 @@ gpu_shared_mem_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_allocate);
 struct rte_gpu *
 rte_gpu_allocate(const char *name)
 {
@@ -244,7 +244,7 @@ rte_gpu_allocate(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_attach);
 struct rte_gpu *
 rte_gpu_attach(const char *name)
 {
@@ -294,7 +294,7 @@ rte_gpu_attach(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_add_child, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_add_child, 21.11);
 int16_t
 rte_gpu_add_child(const char *name, int16_t parent, uint64_t child_context)
 {
@@ -317,7 +317,7 @@ rte_gpu_add_child(const char *name, int16_t parent, uint64_t child_context)
 	return dev->mpshared->info.dev_id;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_complete_new)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_complete_new);
 void
 rte_gpu_complete_new(struct rte_gpu *dev)
 {
@@ -328,7 +328,7 @@ rte_gpu_complete_new(struct rte_gpu *dev)
 	rte_gpu_notify(dev, RTE_GPU_EVENT_NEW);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_release);
 int
 rte_gpu_release(struct rte_gpu *dev)
 {
@@ -358,7 +358,7 @@ rte_gpu_release(struct rte_gpu *dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_close, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_close, 21.11);
 int
 rte_gpu_close(int16_t dev_id)
 {
@@ -385,7 +385,7 @@ rte_gpu_close(int16_t dev_id)
 	return firsterr;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_register, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_register, 21.11);
 int
 rte_gpu_callback_register(int16_t dev_id, enum rte_gpu_event event,
 		rte_gpu_callback_t *function, void *user_data)
@@ -445,7 +445,7 @@ rte_gpu_callback_register(int16_t dev_id, enum rte_gpu_event event,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_unregister, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_unregister, 21.11);
 int
 rte_gpu_callback_unregister(int16_t dev_id, enum rte_gpu_event event,
 		rte_gpu_callback_t *function, void *user_data)
@@ -505,7 +505,7 @@ gpu_free_callbacks(struct rte_gpu *dev)
 	rte_rwlock_write_unlock(&gpu_callback_lock);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_notify)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_notify);
 void
 rte_gpu_notify(struct rte_gpu *dev, enum rte_gpu_event event)
 {
@@ -522,7 +522,7 @@ rte_gpu_notify(struct rte_gpu *dev, enum rte_gpu_event event)
 	rte_rwlock_read_unlock(&gpu_callback_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_info_get, 21.11);
 int
 rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info)
 {
@@ -547,7 +547,7 @@ rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info)
 	return GPU_DRV_RET(dev->ops.dev_info_get(dev, info));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_alloc, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_alloc, 21.11);
 void *
 rte_gpu_mem_alloc(int16_t dev_id, size_t size, unsigned int align)
 {
@@ -592,7 +592,7 @@ rte_gpu_mem_alloc(int16_t dev_id, size_t size, unsigned int align)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_free, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_free, 21.11);
 int
 rte_gpu_mem_free(int16_t dev_id, void *ptr)
 {
@@ -616,7 +616,7 @@ rte_gpu_mem_free(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_free(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_register, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_register, 21.11);
 int
 rte_gpu_mem_register(int16_t dev_id, size_t size, void *ptr)
 {
@@ -641,7 +641,7 @@ rte_gpu_mem_register(int16_t dev_id, size_t size, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_register(dev, size, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_unregister, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_unregister, 21.11);
 int
 rte_gpu_mem_unregister(int16_t dev_id, void *ptr)
 {
@@ -665,7 +665,7 @@ rte_gpu_mem_unregister(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_unregister(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_map, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_map, 21.11);
 void *
 rte_gpu_mem_cpu_map(int16_t dev_id, size_t size, void *ptr)
 {
@@ -704,7 +704,7 @@ rte_gpu_mem_cpu_map(int16_t dev_id, size_t size, void *ptr)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_unmap, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_unmap, 21.11);
 int
 rte_gpu_mem_cpu_unmap(int16_t dev_id, void *ptr)
 {
@@ -728,7 +728,7 @@ rte_gpu_mem_cpu_unmap(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_cpu_unmap(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_wmb, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_wmb, 21.11);
 int
 rte_gpu_wmb(int16_t dev_id)
 {
@@ -748,7 +748,7 @@ rte_gpu_wmb(int16_t dev_id)
 	return GPU_DRV_RET(dev->ops.wmb(dev));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_flag, 21.11);
 int
 rte_gpu_comm_create_flag(uint16_t dev_id, struct rte_gpu_comm_flag *devflag,
 		enum rte_gpu_comm_flag_type mtype)
@@ -785,7 +785,7 @@ rte_gpu_comm_create_flag(uint16_t dev_id, struct rte_gpu_comm_flag *devflag,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_flag, 21.11);
 int
 rte_gpu_comm_destroy_flag(struct rte_gpu_comm_flag *devflag)
 {
@@ -807,7 +807,7 @@ rte_gpu_comm_destroy_flag(struct rte_gpu_comm_flag *devflag)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_flag, 21.11);
 int
 rte_gpu_comm_set_flag(struct rte_gpu_comm_flag *devflag, uint32_t val)
 {
@@ -826,7 +826,7 @@ rte_gpu_comm_set_flag(struct rte_gpu_comm_flag *devflag, uint32_t val)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_flag_value, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_flag_value, 21.11);
 int
 rte_gpu_comm_get_flag_value(struct rte_gpu_comm_flag *devflag, uint32_t *val)
 {
@@ -844,7 +844,7 @@ rte_gpu_comm_get_flag_value(struct rte_gpu_comm_flag *devflag, uint32_t *val)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_list, 21.11);
 struct rte_gpu_comm_list *
 rte_gpu_comm_create_list(uint16_t dev_id,
 		uint32_t num_comm_items)
@@ -968,7 +968,7 @@ rte_gpu_comm_create_list(uint16_t dev_id,
 	return comm_list;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_list, 21.11);
 int
 rte_gpu_comm_destroy_list(struct rte_gpu_comm_list *comm_list,
 		uint32_t num_comm_items)
@@ -1014,7 +1014,7 @@ rte_gpu_comm_destroy_list(struct rte_gpu_comm_list *comm_list,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_populate_list_pkts, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_populate_list_pkts, 21.11);
 int
 rte_gpu_comm_populate_list_pkts(struct rte_gpu_comm_list *comm_list_item,
 		struct rte_mbuf **mbufs, uint32_t num_mbufs)
@@ -1053,7 +1053,7 @@ rte_gpu_comm_populate_list_pkts(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_status, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_status, 21.11);
 int
 rte_gpu_comm_set_status(struct rte_gpu_comm_list *comm_list_item,
 		enum rte_gpu_comm_list_status status)
@@ -1068,7 +1068,7 @@ rte_gpu_comm_set_status(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_status, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_status, 21.11);
 int
 rte_gpu_comm_get_status(struct rte_gpu_comm_list *comm_list_item,
 		enum rte_gpu_comm_list_status *status)
@@ -1083,7 +1083,7 @@ rte_gpu_comm_get_status(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_cleanup_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_cleanup_list, 21.11);
 int
 rte_gpu_comm_cleanup_list(struct rte_gpu_comm_list *comm_list_item)
 {
diff --git a/lib/graph/graph.c b/lib/graph/graph.c
index 0975bd8d49..9d62599c41 100644
--- a/lib/graph/graph.c
+++ b/lib/graph/graph.c
@@ -334,7 +334,7 @@ graph_src_node_avail(struct graph *graph)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_bind)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_bind);
 int
 rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore)
 {
@@ -366,7 +366,7 @@ rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_unbind)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_unbind);
 void
 rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id)
 {
@@ -385,7 +385,7 @@ rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_lookup)
+RTE_EXPORT_SYMBOL(rte_graph_lookup);
 struct rte_graph *
 rte_graph_lookup(const char *name)
 {
@@ -399,7 +399,7 @@ rte_graph_lookup(const char *name)
 	return graph_mem_fixup_secondary(rc);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_create)
+RTE_EXPORT_SYMBOL(rte_graph_create);
 rte_graph_t
 rte_graph_create(const char *name, struct rte_graph_param *prm)
 {
@@ -504,7 +504,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_destroy)
+RTE_EXPORT_SYMBOL(rte_graph_destroy);
 int
 rte_graph_destroy(rte_graph_t id)
 {
@@ -620,7 +620,7 @@ graph_clone(struct graph *parent_graph, const char *name, struct rte_graph_param
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_clone)
+RTE_EXPORT_SYMBOL(rte_graph_clone);
 rte_graph_t
 rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm)
 {
@@ -636,7 +636,7 @@ rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_from_name)
+RTE_EXPORT_SYMBOL(rte_graph_from_name);
 rte_graph_t
 rte_graph_from_name(const char *name)
 {
@@ -649,7 +649,7 @@ rte_graph_from_name(const char *name)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_id_to_name)
+RTE_EXPORT_SYMBOL(rte_graph_id_to_name);
 char *
 rte_graph_id_to_name(rte_graph_t id)
 {
@@ -665,7 +665,7 @@ rte_graph_id_to_name(rte_graph_t id)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_node_get)
+RTE_EXPORT_SYMBOL(rte_graph_node_get);
 struct rte_node *
 rte_graph_node_get(rte_graph_t gid, uint32_t nid)
 {
@@ -689,7 +689,7 @@ rte_graph_node_get(rte_graph_t gid, uint32_t nid)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_node_get_by_name)
+RTE_EXPORT_SYMBOL(rte_graph_node_get_by_name);
 struct rte_node *
 rte_graph_node_get_by_name(const char *graph_name, const char *node_name)
 {
@@ -712,7 +712,7 @@ rte_graph_node_get_by_name(const char *graph_name, const char *node_name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(__rte_node_stream_alloc)
+RTE_EXPORT_SYMBOL(__rte_node_stream_alloc);
 void __rte_noinline
 __rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node)
 {
@@ -728,7 +728,7 @@ __rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node)
 	node->realloc_count++;
 }
 
-RTE_EXPORT_SYMBOL(__rte_node_stream_alloc_size)
+RTE_EXPORT_SYMBOL(__rte_node_stream_alloc_size);
 void __rte_noinline
 __rte_node_stream_alloc_size(struct rte_graph *graph, struct rte_node *node,
 			     uint16_t req_size)
@@ -802,7 +802,7 @@ graph_to_dot(FILE *f, struct graph *graph)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_export)
+RTE_EXPORT_SYMBOL(rte_graph_export);
 int
 rte_graph_export(const char *name, FILE *f)
 {
@@ -840,21 +840,21 @@ graph_scan_dump(FILE *f, rte_graph_t id, bool all)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_dump)
+RTE_EXPORT_SYMBOL(rte_graph_dump);
 void
 rte_graph_dump(FILE *f, rte_graph_t id)
 {
 	graph_scan_dump(f, id, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_list_dump)
+RTE_EXPORT_SYMBOL(rte_graph_list_dump);
 void
 rte_graph_list_dump(FILE *f)
 {
 	graph_scan_dump(f, 0, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_max_count)
+RTE_EXPORT_SYMBOL(rte_graph_max_count);
 rte_graph_t
 rte_graph_max_count(void)
 {
diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c
index e3b8cccdc1..2d4f07ad80 100644
--- a/lib/graph/graph_debug.c
+++ b/lib/graph/graph_debug.c
@@ -52,7 +52,7 @@ node_dump(FILE *f, struct node *n)
 		fprintf(f, "     edge[%d] <%s>\n", i, n->next_nodes[i]);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_obj_dump)
+RTE_EXPORT_SYMBOL(rte_graph_obj_dump);
 void
 rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all)
 {
diff --git a/lib/graph/graph_feature_arc.c b/lib/graph/graph_feature_arc.c
index 823aad3e73..c7641ea619 100644
--- a/lib/graph/graph_feature_arc.c
+++ b/lib/graph/graph_feature_arc.c
@@ -53,7 +53,7 @@ static struct rte_mbuf_dynfield rte_graph_feature_arc_mbuf_desc = {
 	.align = alignof(struct rte_graph_feature_arc_mbuf_dynfields),
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_main, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_main, 25.07);
 rte_graph_feature_arc_main_t *__rte_graph_feature_arc_main;
 
 /* global feature arc list */
@@ -1062,7 +1062,7 @@ refill_fastpath_data(struct rte_graph_feature_arc *arc, uint32_t feature_bit,
 }
 
 /* feature arc initialization, public API */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_init, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_init, 25.07);
 int
 rte_graph_feature_arc_init(uint16_t num_feature_arcs)
 {
@@ -1193,7 +1193,7 @@ rte_graph_feature_arc_init(uint16_t num_feature_arcs)
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_create, 25.07);
 int
 rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
 			     rte_graph_feature_arc_t *_arc)
@@ -1335,7 +1335,7 @@ rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_add, 25.07);
 int
 rte_graph_feature_add(struct rte_graph_feature_register *freg)
 {
@@ -1583,7 +1583,7 @@ rte_graph_feature_add(struct rte_graph_feature_register *freg)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_lookup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_lookup, 25.07);
 int
 rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name,
 			 rte_graph_feature_t *feat)
@@ -1603,7 +1603,7 @@ rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_enable, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_enable, 25.07);
 int
 rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index,
 			 const char *feature_name, uint16_t app_cookie,
@@ -1678,7 +1678,7 @@ rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_disable, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_disable, 25.07);
 int
 rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name,
 			  struct rte_rcu_qsbr *qsbr)
@@ -1796,7 +1796,7 @@ rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const ch
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_destroy, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_destroy, 25.07);
 int
 rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc)
 {
@@ -1861,7 +1861,7 @@ rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_cleanup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_cleanup, 25.07);
 int
 rte_graph_feature_arc_cleanup(void)
 {
@@ -1886,7 +1886,7 @@ rte_graph_feature_arc_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_lookup_by_name, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_lookup_by_name, 25.07);
 int
 rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc)
 {
@@ -1924,7 +1924,7 @@ rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_enabled_features, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_enabled_features, 25.07);
 uint32_t
 rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc)
 {
@@ -1938,7 +1938,7 @@ rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc)
 	return arc->runtime_enabled_features;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_features, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_features, 25.07);
 uint32_t
 rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc)
 {
@@ -1957,7 +1957,7 @@ rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc)
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_name, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_name, 25.07);
 char *
 rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat)
 {
@@ -1978,7 +1978,7 @@ rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_fe
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_node, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_node, 25.07);
 int
 rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat,
 				      rte_node_t *node)
@@ -2005,7 +2005,7 @@ rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_fe
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_register, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_register, 25.07);
 void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg,
 				      const char *caller_name, int lineno)
 {
@@ -2015,7 +2015,7 @@ void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg
 	STAILQ_INSERT_TAIL(&feature_arc_list, reg, next_arc);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_register, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_register, 25.07);
 void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
 				  const char *caller_name, int lineno)
 {
@@ -2026,7 +2026,7 @@ void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
 	STAILQ_INSERT_TAIL(&feature_list, reg, next_feature);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_names_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_names_get, 25.07);
 uint32_t
 rte_graph_feature_arc_names_get(char *arc_names[])
 {
diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index eac73cbf71..040fcd6725 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -376,7 +376,7 @@ expand_pattern_to_cluster(struct cluster *cluster, const char *pattern)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_create)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_create);
 struct rte_graph_cluster_stats *
 rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
 {
@@ -440,7 +440,7 @@ rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_destroy)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_destroy);
 void
 rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat)
 {
@@ -515,7 +515,7 @@ cluster_node_store_prev_stats(struct cluster_node *cluster)
 	stat->prev_cycles = stat->cycles;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_get)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_get);
 void
 rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
 {
@@ -537,7 +537,7 @@ rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_reset)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_reset);
 void
 rte_graph_cluster_stats_reset(struct rte_graph_cluster_stats *stat)
 {
diff --git a/lib/graph/node.c b/lib/graph/node.c
index cae1c809ed..76953a6e75 100644
--- a/lib/graph/node.c
+++ b/lib/graph/node.c
@@ -102,7 +102,7 @@ node_has_duplicate_entry(const char *name)
 }
 
 /* Public functions */
-RTE_EXPORT_SYMBOL(__rte_node_register)
+RTE_EXPORT_SYMBOL(__rte_node_register);
 rte_node_t
 __rte_node_register(const struct rte_node_register *reg)
 {
@@ -238,7 +238,7 @@ node_clone(struct node *node, const char *name)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_clone)
+RTE_EXPORT_SYMBOL(rte_node_clone);
 rte_node_t
 rte_node_clone(rte_node_t id, const char *name)
 {
@@ -255,7 +255,7 @@ rte_node_clone(rte_node_t id, const char *name)
 	return RTE_NODE_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_from_name)
+RTE_EXPORT_SYMBOL(rte_node_from_name);
 rte_node_t
 rte_node_from_name(const char *name)
 {
@@ -268,7 +268,7 @@ rte_node_from_name(const char *name)
 	return RTE_NODE_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_id_to_name)
+RTE_EXPORT_SYMBOL(rte_node_id_to_name);
 char *
 rte_node_id_to_name(rte_node_t id)
 {
@@ -284,7 +284,7 @@ rte_node_id_to_name(rte_node_t id)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_count)
+RTE_EXPORT_SYMBOL(rte_node_edge_count);
 rte_edge_t
 rte_node_edge_count(rte_node_t id)
 {
@@ -354,7 +354,7 @@ edge_update(struct node *node, struct node *prev, rte_edge_t from,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_shrink)
+RTE_EXPORT_SYMBOL(rte_node_edge_shrink);
 rte_edge_t
 rte_node_edge_shrink(rte_node_t id, rte_edge_t size)
 {
@@ -382,7 +382,7 @@ rte_node_edge_shrink(rte_node_t id, rte_edge_t size)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_update)
+RTE_EXPORT_SYMBOL(rte_node_edge_update);
 rte_edge_t
 rte_node_edge_update(rte_node_t id, rte_edge_t from, const char **next_nodes,
 		     uint16_t nb_edges)
@@ -419,7 +419,7 @@ node_copy_edges(struct node *node, char *next_nodes[])
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_get)
+RTE_EXPORT_SYMBOL(rte_node_edge_get);
 rte_node_t
 rte_node_edge_get(rte_node_t id, char *next_nodes[])
 {
@@ -466,21 +466,21 @@ node_scan_dump(FILE *f, rte_node_t id, bool all)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_dump)
+RTE_EXPORT_SYMBOL(rte_node_dump);
 void
 rte_node_dump(FILE *f, rte_node_t id)
 {
 	node_scan_dump(f, id, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_node_list_dump)
+RTE_EXPORT_SYMBOL(rte_node_list_dump);
 void
 rte_node_list_dump(FILE *f)
 {
 	node_scan_dump(f, 0, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_node_max_count)
+RTE_EXPORT_SYMBOL(rte_node_max_count);
 rte_node_t
 rte_node_max_count(void)
 {
@@ -517,7 +517,7 @@ node_override_process_func(rte_node_t id, rte_node_process_t process)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_free, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_free, 25.07);
 int
 rte_node_free(rte_node_t id)
 {
diff --git a/lib/graph/rte_graph_model_mcore_dispatch.c b/lib/graph/rte_graph_model_mcore_dispatch.c
index 70f0069bc1..3143b69188 100644
--- a/lib/graph/rte_graph_model_mcore_dispatch.c
+++ b/lib/graph/rte_graph_model_mcore_dispatch.c
@@ -114,7 +114,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_node_enqueue)
+RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_node_enqueue);
 bool __rte_noinline
 __rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node,
 					      struct rte_graph_rq_head *rq)
@@ -132,7 +132,7 @@ __rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node,
 	return graph != NULL ? __graph_sched_node_enqueue(node, graph) : false;
 }
 
-RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_wq_process)
+RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_wq_process);
 void
 __rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph)
 {
@@ -172,7 +172,7 @@ __rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph)
 	rte_mempool_put_bulk(mp, (void **)wq_nodes, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_node_lcore_affinity_set)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_node_lcore_affinity_set);
 int
 rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, unsigned int lcore_id)
 {
diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c
index 71f8fb44ca..97bc2c2141 100644
--- a/lib/graph/rte_graph_worker.c
+++ b/lib/graph/rte_graph_worker.c
@@ -6,7 +6,7 @@
 #include "rte_graph_worker_common.h"
 #include "graph_private.h"
 
-RTE_EXPORT_SYMBOL(rte_graph_model_is_valid)
+RTE_EXPORT_SYMBOL(rte_graph_model_is_valid);
 bool
 rte_graph_model_is_valid(uint8_t model)
 {
@@ -16,7 +16,7 @@ rte_graph_model_is_valid(uint8_t model)
 	return true;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_worker_model_set)
+RTE_EXPORT_SYMBOL(rte_graph_worker_model_set);
 int
 rte_graph_worker_model_set(uint8_t model)
 {
@@ -32,7 +32,7 @@ rte_graph_worker_model_set(uint8_t model)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_worker_model_get)
+RTE_EXPORT_SYMBOL(rte_graph_worker_model_get);
 uint8_t
 rte_graph_worker_model_get(struct rte_graph *graph)
 {
diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
index 578cc9b801..2285bf318e 100644
--- a/lib/gro/rte_gro.c
+++ b/lib/gro/rte_gro.c
@@ -89,7 +89,7 @@ struct gro_ctx {
 	void *tbls[RTE_GRO_TYPE_MAX_NUM];
 };
 
-RTE_EXPORT_SYMBOL(rte_gro_ctx_create)
+RTE_EXPORT_SYMBOL(rte_gro_ctx_create);
 void *
 rte_gro_ctx_create(const struct rte_gro_param *param)
 {
@@ -131,7 +131,7 @@ rte_gro_ctx_create(const struct rte_gro_param *param)
 	return gro_ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_ctx_destroy)
+RTE_EXPORT_SYMBOL(rte_gro_ctx_destroy);
 void
 rte_gro_ctx_destroy(void *ctx)
 {
@@ -151,7 +151,7 @@ rte_gro_ctx_destroy(void *ctx)
 	rte_free(gro_ctx);
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_reassemble_burst)
+RTE_EXPORT_SYMBOL(rte_gro_reassemble_burst);
 uint16_t
 rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 		uint16_t nb_pkts,
@@ -352,7 +352,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 	return nb_after_gro;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_reassemble)
+RTE_EXPORT_SYMBOL(rte_gro_reassemble);
 uint16_t
 rte_gro_reassemble(struct rte_mbuf **pkts,
 		uint16_t nb_pkts,
@@ -421,7 +421,7 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
 	return unprocess_num;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_timeout_flush)
+RTE_EXPORT_SYMBOL(rte_gro_timeout_flush);
 uint16_t
 rte_gro_timeout_flush(void *ctx,
 		uint64_t timeout_cycles,
@@ -480,7 +480,7 @@ rte_gro_timeout_flush(void *ctx,
 	return num;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_get_pkt_count)
+RTE_EXPORT_SYMBOL(rte_gro_get_pkt_count);
 uint64_t
 rte_gro_get_pkt_count(void *ctx)
 {
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index cbf7365702..712221e3d3 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -25,7 +25,7 @@
 		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
-RTE_EXPORT_SYMBOL(rte_gso_segment)
+RTE_EXPORT_SYMBOL(rte_gso_segment);
 int
 rte_gso_segment(struct rte_mbuf *pkt,
 		const struct rte_gso_ctx *gso_ctx,
diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..f565874e28 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -77,7 +77,7 @@ struct __rte_hash_rcu_dq_entry {
 	uint32_t ext_bkt_idx;
 };
 
-RTE_EXPORT_SYMBOL(rte_hash_find_existing)
+RTE_EXPORT_SYMBOL(rte_hash_find_existing);
 struct rte_hash *
 rte_hash_find_existing(const char *name)
 {
@@ -110,7 +110,7 @@ rte_hash_get_last_bkt(struct rte_hash_bucket *lst_bkt)
 	return lst_bkt;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_set_cmp_func)
+RTE_EXPORT_SYMBOL(rte_hash_set_cmp_func);
 void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 {
 	h->cmp_jump_table_idx = KEY_CUSTOM;
@@ -156,7 +156,7 @@ get_alt_bucket_index(const struct rte_hash *h,
 	return (cur_bkt_idx ^ sig) & h->bucket_bitmask;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_create)
+RTE_EXPORT_SYMBOL(rte_hash_create);
 struct rte_hash *
 rte_hash_create(const struct rte_hash_parameters *params)
 {
@@ -528,7 +528,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_free)
+RTE_EXPORT_SYMBOL(rte_hash_free);
 void
 rte_hash_free(struct rte_hash *h)
 {
@@ -576,7 +576,7 @@ rte_hash_free(struct rte_hash *h)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_hash)
+RTE_EXPORT_SYMBOL(rte_hash_hash);
 hash_sig_t
 rte_hash_hash(const struct rte_hash *h, const void *key)
 {
@@ -584,7 +584,7 @@ rte_hash_hash(const struct rte_hash *h, const void *key)
 	return h->hash_func(key, h->key_len, h->hash_func_init_val);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_max_key_id)
+RTE_EXPORT_SYMBOL(rte_hash_max_key_id);
 int32_t
 rte_hash_max_key_id(const struct rte_hash *h)
 {
@@ -600,7 +600,7 @@ rte_hash_max_key_id(const struct rte_hash *h)
 		return h->entries;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_count)
+RTE_EXPORT_SYMBOL(rte_hash_count);
 int32_t
 rte_hash_count(const struct rte_hash *h)
 {
@@ -670,7 +670,7 @@ __hash_rw_reader_unlock(const struct rte_hash *h)
 		rte_rwlock_read_unlock(h->readwrite_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_reset)
+RTE_EXPORT_SYMBOL(rte_hash_reset);
 void
 rte_hash_reset(struct rte_hash *h)
 {
@@ -1254,7 +1254,7 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash);
 int32_t
 rte_hash_add_key_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1263,7 +1263,7 @@ rte_hash_add_key_with_hash(const struct rte_hash *h,
 	return __rte_hash_add_key_with_hash(h, key, sig, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key)
+RTE_EXPORT_SYMBOL(rte_hash_add_key);
 int32_t
 rte_hash_add_key(const struct rte_hash *h, const void *key)
 {
@@ -1271,7 +1271,7 @@ rte_hash_add_key(const struct rte_hash *h, const void *key)
 	return __rte_hash_add_key_with_hash(h, key, rte_hash_hash(h, key), 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash_data)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash_data);
 int
 rte_hash_add_key_with_hash_data(const struct rte_hash *h,
 			const void *key, hash_sig_t sig, void *data)
@@ -1286,7 +1286,7 @@ rte_hash_add_key_with_hash_data(const struct rte_hash *h,
 		return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_data)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_data);
 int
 rte_hash_add_key_data(const struct rte_hash *h, const void *key, void *data)
 {
@@ -1480,7 +1480,7 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
 		return __rte_hash_lookup_with_hash_l(h, key, sig, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash);
 int32_t
 rte_hash_lookup_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1489,7 +1489,7 @@ rte_hash_lookup_with_hash(const struct rte_hash *h,
 	return __rte_hash_lookup_with_hash(h, key, sig, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup)
+RTE_EXPORT_SYMBOL(rte_hash_lookup);
 int32_t
 rte_hash_lookup(const struct rte_hash *h, const void *key)
 {
@@ -1497,7 +1497,7 @@ rte_hash_lookup(const struct rte_hash *h, const void *key)
 	return __rte_hash_lookup_with_hash(h, key, rte_hash_hash(h, key), NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_data);
 int
 rte_hash_lookup_with_hash_data(const struct rte_hash *h,
 			const void *key, hash_sig_t sig, void **data)
@@ -1506,7 +1506,7 @@ rte_hash_lookup_with_hash_data(const struct rte_hash *h,
 	return __rte_hash_lookup_with_hash(h, key, sig, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_data);
 int
 rte_hash_lookup_data(const struct rte_hash *h, const void *key, void **data)
 {
@@ -1574,7 +1574,7 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_rcu_qsbr_add)
+RTE_EXPORT_SYMBOL(rte_hash_rcu_qsbr_add);
 int
 rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
 {
@@ -1645,7 +1645,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_hash_rcu_qsbr_dq_reclaim, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_hash_rcu_qsbr_dq_reclaim, 24.07);
 int rte_hash_rcu_qsbr_dq_reclaim(struct rte_hash *h, unsigned int *freed, unsigned int *pending,
 				 unsigned int *available)
 {
@@ -1870,7 +1870,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_del_key_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_del_key_with_hash);
 int32_t
 rte_hash_del_key_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1879,7 +1879,7 @@ rte_hash_del_key_with_hash(const struct rte_hash *h,
 	return __rte_hash_del_key_with_hash(h, key, sig);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_del_key)
+RTE_EXPORT_SYMBOL(rte_hash_del_key);
 int32_t
 rte_hash_del_key(const struct rte_hash *h, const void *key)
 {
@@ -1887,7 +1887,7 @@ rte_hash_del_key(const struct rte_hash *h, const void *key)
 	return __rte_hash_del_key_with_hash(h, key, rte_hash_hash(h, key));
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_get_key_with_position)
+RTE_EXPORT_SYMBOL(rte_hash_get_key_with_position);
 int
 rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
 			       void **key)
@@ -1908,7 +1908,7 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_free_key_with_position)
+RTE_EXPORT_SYMBOL(rte_hash_free_key_with_position);
 int
 rte_hash_free_key_with_position(const struct rte_hash *h,
 				const int32_t position)
@@ -2421,7 +2421,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 					 hit_mask, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk);
 int
 rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 		      uint32_t num_keys, int32_t *positions)
@@ -2434,7 +2434,7 @@ rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk_data);
 int
 rte_hash_lookup_bulk_data(const struct rte_hash *h, const void **keys,
 		      uint32_t num_keys, uint64_t *hit_mask, void *data[])
@@ -2535,7 +2535,7 @@ __rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 				num_keys, positions, hit_mask, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk);
 int
 rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 		hash_sig_t *sig, uint32_t num_keys, int32_t *positions)
@@ -2550,7 +2550,7 @@ rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk_data);
 int
 rte_hash_lookup_with_hash_bulk_data(const struct rte_hash *h,
 		const void **keys, hash_sig_t *sig,
@@ -2570,7 +2570,7 @@ rte_hash_lookup_with_hash_bulk_data(const struct rte_hash *h,
 	return rte_popcount64(*hit_mask);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_iterate)
+RTE_EXPORT_SYMBOL(rte_hash_iterate);
 int32_t
 rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32_t *next)
 {
diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c
index 38b15a14d1..c755f29cad 100644
--- a/lib/hash/rte_fbk_hash.c
+++ b/lib/hash/rte_fbk_hash.c
@@ -42,7 +42,7 @@ EAL_REGISTER_TAILQ(rte_fbk_hash_tailq)
  * @return
  *   pointer to hash table structure or NULL on error.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_find_existing)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_find_existing);
 struct rte_fbk_hash_table *
 rte_fbk_hash_find_existing(const char *name)
 {
@@ -77,7 +77,7 @@ rte_fbk_hash_find_existing(const char *name)
  *   Pointer to hash table structure that is used in future hash table
  *   operations, or NULL on error.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_create)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_create);
 struct rte_fbk_hash_table *
 rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
 {
@@ -180,7 +180,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
  * @param ht
  *   Hash table to deallocate.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_free)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_free);
 void
 rte_fbk_hash_free(struct rte_fbk_hash_table *ht)
 {
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
index 9fe90d6425..21535e8916 100644
--- a/lib/hash/rte_hash_crc.c
+++ b/lib/hash/rte_hash_crc.c
@@ -13,7 +13,7 @@ RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
 #define HASH_CRC_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH_CRC, "" __VA_ARGS__)
 
-RTE_EXPORT_SYMBOL(rte_hash_crc32_alg)
+RTE_EXPORT_SYMBOL(rte_hash_crc32_alg);
 uint8_t rte_hash_crc32_alg = CRC32_SW;
 
 /**
@@ -28,7 +28,7 @@ uint8_t rte_hash_crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-RTE_EXPORT_SYMBOL(rte_hash_crc_set_alg)
+RTE_EXPORT_SYMBOL(rte_hash_crc_set_alg);
 void
 rte_hash_crc_set_alg(uint8_t alg)
 {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 6c662bf14f..fe0eb44829 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -71,7 +71,7 @@ struct rte_thash_ctx {
 	uint8_t		hash_key[];
 };
 
-RTE_EXPORT_SYMBOL(rte_thash_gfni_supported)
+RTE_EXPORT_SYMBOL(rte_thash_gfni_supported);
 int
 rte_thash_gfni_supported(void)
 {
@@ -85,7 +85,7 @@ rte_thash_gfni_supported(void)
 	return 0;
 };
 
-RTE_EXPORT_SYMBOL(rte_thash_complete_matrix)
+RTE_EXPORT_SYMBOL(rte_thash_complete_matrix);
 void
 rte_thash_complete_matrix(uint64_t *matrixes, const uint8_t *rss_key, int size)
 {
@@ -206,7 +206,7 @@ free_lfsr(struct thash_lfsr *lfsr)
 		rte_free(lfsr);
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_init_ctx)
+RTE_EXPORT_SYMBOL(rte_thash_init_ctx);
 struct rte_thash_ctx *
 rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 	uint8_t *key, uint32_t flags)
@@ -297,7 +297,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_find_existing)
+RTE_EXPORT_SYMBOL(rte_thash_find_existing);
 struct rte_thash_ctx *
 rte_thash_find_existing(const char *name)
 {
@@ -324,7 +324,7 @@ rte_thash_find_existing(const char *name)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_free_ctx)
+RTE_EXPORT_SYMBOL(rte_thash_free_ctx);
 void
 rte_thash_free_ctx(struct rte_thash_ctx *ctx)
 {
@@ -546,7 +546,7 @@ insert_after(struct rte_thash_ctx *ctx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_add_helper)
+RTE_EXPORT_SYMBOL(rte_thash_add_helper);
 int
 rte_thash_add_helper(struct rte_thash_ctx *ctx, const char *name, uint32_t len,
 	uint32_t offset)
@@ -637,7 +637,7 @@ rte_thash_add_helper(struct rte_thash_ctx *ctx, const char *name, uint32_t len,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_helper)
+RTE_EXPORT_SYMBOL(rte_thash_get_helper);
 struct rte_thash_subtuple_helper *
 rte_thash_get_helper(struct rte_thash_ctx *ctx, const char *name)
 {
@@ -654,7 +654,7 @@ rte_thash_get_helper(struct rte_thash_ctx *ctx, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_complement)
+RTE_EXPORT_SYMBOL(rte_thash_get_complement);
 uint32_t
 rte_thash_get_complement(struct rte_thash_subtuple_helper *h,
 	uint32_t hash, uint32_t desired_hash)
@@ -662,14 +662,14 @@ rte_thash_get_complement(struct rte_thash_subtuple_helper *h,
 	return h->compl_table[(hash ^ desired_hash) & h->lsb_msk];
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_key)
+RTE_EXPORT_SYMBOL(rte_thash_get_key);
 const uint8_t *
 rte_thash_get_key(struct rte_thash_ctx *ctx)
 {
 	return ctx->hash_key;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_gfni_matrices)
+RTE_EXPORT_SYMBOL(rte_thash_get_gfni_matrices);
 const uint64_t *
 rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx)
 {
@@ -765,7 +765,7 @@ write_unaligned_bits(uint8_t *ptr, int len, int offset, uint32_t val)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_adjust_tuple)
+RTE_EXPORT_SYMBOL(rte_thash_adjust_tuple);
 int
 rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	struct rte_thash_subtuple_helper *h,
@@ -835,7 +835,7 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_thash_gen_key, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_thash_gen_key, 24.11);
 int
 rte_thash_gen_key(uint8_t *key, size_t key_len, size_t reta_sz_log,
 	uint32_t entropy_start, size_t entropy_sz)
diff --git a/lib/hash/rte_thash_gf2_poly_math.c b/lib/hash/rte_thash_gf2_poly_math.c
index ddf4dd863b..05cd0d5f37 100644
--- a/lib/hash/rte_thash_gf2_poly_math.c
+++ b/lib/hash/rte_thash_gf2_poly_math.c
@@ -242,7 +242,7 @@ thash_test_poly_order(uint32_t poly, int degree)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(thash_get_rand_poly)
+RTE_EXPORT_INTERNAL_SYMBOL(thash_get_rand_poly);
 uint32_t
 thash_get_rand_poly(uint32_t poly_degree)
 {
diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c
index 2003c7b3db..b82b9bba63 100644
--- a/lib/hash/rte_thash_gfni.c
+++ b/lib/hash/rte_thash_gfni.c
@@ -13,7 +13,7 @@ RTE_LOG_REGISTER_SUFFIX(hash_gfni_logtype, gfni, INFO);
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_stub)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_stub);
 uint32_t
 rte_thash_gfni_stub(const uint64_t *mtrx __rte_unused,
 	const uint8_t *key __rte_unused, int len __rte_unused)
@@ -29,7 +29,7 @@ rte_thash_gfni_stub(const uint64_t *mtrx __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_bulk_stub)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_bulk_stub);
 void
 rte_thash_gfni_bulk_stub(const uint64_t *mtrx __rte_unused,
 	int len __rte_unused, uint8_t *tuple[] __rte_unused,
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index ee9aa93027..b004302468 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -15,7 +15,7 @@ RTE_LOG_REGISTER_DEFAULT(ipfrag_logtype, INFO);
 #define	IP_FRAG_HASH_FNUM	2
 
 /* free mbufs from death row */
-RTE_EXPORT_SYMBOL(rte_ip_frag_free_death_row)
+RTE_EXPORT_SYMBOL(rte_ip_frag_free_death_row);
 void
 rte_ip_frag_free_death_row(struct rte_ip_frag_death_row *dr,
 		uint32_t prefetch)
@@ -40,7 +40,7 @@ rte_ip_frag_free_death_row(struct rte_ip_frag_death_row *dr,
 }
 
 /* create fragmentation table */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_create)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_create);
 struct rte_ip_frag_tbl *
 rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
 	uint32_t max_entries, uint64_t max_cycles, int socket_id)
@@ -85,7 +85,7 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
 }
 
 /* delete fragmentation table */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_destroy)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_destroy);
 void
 rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
 {
@@ -99,7 +99,7 @@ rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
 }
 
 /* dump frag table statistics to file */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_statistics_dump)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_statistics_dump);
 void
 rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
 {
@@ -129,7 +129,7 @@ rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
 }
 
 /* Delete expired fragments */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_del_expired_entries)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_del_expired_entries);
 void
 rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, uint64_t tms)
diff --git a/lib/ip_frag/rte_ipv4_fragmentation.c b/lib/ip_frag/rte_ipv4_fragmentation.c
index 435a6e13bb..065e49780f 100644
--- a/lib/ip_frag/rte_ipv4_fragmentation.c
+++ b/lib/ip_frag/rte_ipv4_fragmentation.c
@@ -105,7 +105,7 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
  *   in the pkts_out array.
  *   Otherwise - (-1) * <errno>.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_fragment_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_fragment_packet);
 int32_t
 rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
@@ -288,7 +288,7 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,
  *   in the pkts_out array.
  *   Otherwise - (-1) * errno.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_fragment_copy_nonseg_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_fragment_copy_nonseg_packet);
 int32_t
 rte_ipv4_fragment_copy_nonseg_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
diff --git a/lib/ip_frag/rte_ipv4_reassembly.c b/lib/ip_frag/rte_ipv4_reassembly.c
index 3c8ae113ba..fca05ddc9e 100644
--- a/lib/ip_frag/rte_ipv4_reassembly.c
+++ b/lib/ip_frag/rte_ipv4_reassembly.c
@@ -95,7 +95,7 @@ ipv4_frag_reassemble(struct ip_frag_pkt *fp)
  *   - an error occurred.
  *   - not all fragments of the packet are collected yet.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_frag_reassemble_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_frag_reassemble_packet);
 struct rte_mbuf *
 rte_ipv4_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index c81f2402e3..573732f596 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -64,7 +64,7 @@ __free_fragments(struct rte_mbuf *mb[], uint32_t num)
  *   in the pkts_out array.
  *   Otherwise - (-1) * <errno>.
  */
-RTE_EXPORT_SYMBOL(rte_ipv6_fragment_packet)
+RTE_EXPORT_SYMBOL(rte_ipv6_fragment_packet);
 int32_t
 rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 0e809a01e5..ca37d03dee 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -133,7 +133,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
  */
 #define MORE_FRAGS(x) (((x) & 0x100) >> 8)
 #define FRAG_OFFSET(x) (rte_cpu_to_be_16(x) >> 3)
-RTE_EXPORT_SYMBOL(rte_ipv6_frag_reassemble_packet)
+RTE_EXPORT_SYMBOL(rte_ipv6_frag_reassemble_packet);
 struct rte_mbuf *
 rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
diff --git a/lib/ipsec/ipsec_sad.c b/lib/ipsec/ipsec_sad.c
index 15ea868f77..fe5d25a94f 100644
--- a/lib/ipsec/ipsec_sad.c
+++ b/lib/ipsec/ipsec_sad.c
@@ -114,7 +114,7 @@ add_specific(struct rte_ipsec_sad *sad, const void *key,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_add)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_add);
 int
 rte_ipsec_sad_add(struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *key,
@@ -214,7 +214,7 @@ del_specific(struct rte_ipsec_sad *sad, const void *key, int key_type)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_del)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_del);
 int
 rte_ipsec_sad_del(struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *key,
@@ -254,7 +254,7 @@ rte_ipsec_sad_del(struct rte_ipsec_sad *sad,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_create)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_create);
 struct rte_ipsec_sad *
 rte_ipsec_sad_create(const char *name, const struct rte_ipsec_sad_conf *conf)
 {
@@ -384,7 +384,7 @@ rte_ipsec_sad_create(const char *name, const struct rte_ipsec_sad_conf *conf)
 	return sad;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_find_existing)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_find_existing);
 struct rte_ipsec_sad *
 rte_ipsec_sad_find_existing(const char *name)
 {
@@ -419,7 +419,7 @@ rte_ipsec_sad_find_existing(const char *name)
 	return sad;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_destroy)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_destroy);
 void
 rte_ipsec_sad_destroy(struct rte_ipsec_sad *sad)
 {
@@ -542,7 +542,7 @@ __ipsec_sad_lookup(const struct rte_ipsec_sad *sad,
 	return found;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_lookup)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_lookup);
 int
 rte_ipsec_sad_lookup(const struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *keys[], void *sa[], uint32_t n)
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
index a9b6f05270..4cff0e2438 100644
--- a/lib/ipsec/ipsec_telemetry.c
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -205,7 +205,7 @@ handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_add)
+RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_add);
 int
 rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
 {
@@ -218,7 +218,7 @@ rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_del)
+RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_del);
 void
 rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa)
 {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 4f589f3f3f..a03e106bb1 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -85,7 +85,7 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_type)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_type);
 uint64_t
 rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 {
@@ -158,7 +158,7 @@ ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 	return sz;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_fini)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_fini);
 void
 rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 {
@@ -528,7 +528,7 @@ fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_size)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_size);
 int
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
@@ -549,7 +549,7 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 	return ipsec_sa_size(type, &wsz, &nb);
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_init)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_init);
 int
 rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
diff --git a/lib/ipsec/ses.c b/lib/ipsec/ses.c
index 224e752d05..7b137ca9b6 100644
--- a/lib/ipsec/ses.c
+++ b/lib/ipsec/ses.c
@@ -29,7 +29,7 @@ session_check(struct rte_ipsec_session *ss)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_session_prepare)
+RTE_EXPORT_SYMBOL(rte_ipsec_session_prepare);
 int
 rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
 {
diff --git a/lib/jobstats/rte_jobstats.c b/lib/jobstats/rte_jobstats.c
index 20a4f1391a..4729316e08 100644
--- a/lib/jobstats/rte_jobstats.c
+++ b/lib/jobstats/rte_jobstats.c
@@ -64,7 +64,7 @@ default_update_function(struct rte_jobstats *job, int64_t result)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_init)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_init);
 int
 rte_jobstats_context_init(struct rte_jobstats_context *ctx)
 {
@@ -79,7 +79,7 @@ rte_jobstats_context_init(struct rte_jobstats_context *ctx)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_start)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_start);
 void
 rte_jobstats_context_start(struct rte_jobstats_context *ctx)
 {
@@ -92,7 +92,7 @@ rte_jobstats_context_start(struct rte_jobstats_context *ctx)
 	ctx->state_time = now;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_finish)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_finish);
 void
 rte_jobstats_context_finish(struct rte_jobstats_context *ctx)
 {
@@ -106,7 +106,7 @@ rte_jobstats_context_finish(struct rte_jobstats_context *ctx)
 	ctx->state_time = now;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_reset)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_reset);
 void
 rte_jobstats_context_reset(struct rte_jobstats_context *ctx)
 {
@@ -118,14 +118,14 @@ rte_jobstats_context_reset(struct rte_jobstats_context *ctx)
 	ctx->loop_cnt = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_target)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_target);
 void
 rte_jobstats_set_target(struct rte_jobstats *job, int64_t target)
 {
 	job->target = target;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_start)
+RTE_EXPORT_SYMBOL(rte_jobstats_start);
 int
 rte_jobstats_start(struct rte_jobstats_context *ctx, struct rte_jobstats *job)
 {
@@ -145,7 +145,7 @@ rte_jobstats_start(struct rte_jobstats_context *ctx, struct rte_jobstats *job)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_abort)
+RTE_EXPORT_SYMBOL(rte_jobstats_abort);
 int
 rte_jobstats_abort(struct rte_jobstats *job)
 {
@@ -166,7 +166,7 @@ rte_jobstats_abort(struct rte_jobstats *job)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_finish)
+RTE_EXPORT_SYMBOL(rte_jobstats_finish);
 int
 rte_jobstats_finish(struct rte_jobstats *job, int64_t job_value)
 {
@@ -203,7 +203,7 @@ rte_jobstats_finish(struct rte_jobstats *job, int64_t job_value)
 	return need_update;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_period)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_period);
 void
 rte_jobstats_set_period(struct rte_jobstats *job, uint64_t period,
 		uint8_t saturate)
@@ -218,7 +218,7 @@ rte_jobstats_set_period(struct rte_jobstats *job, uint64_t period,
 	job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_min)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_min);
 void
 rte_jobstats_set_min(struct rte_jobstats *job, uint64_t period)
 {
@@ -227,7 +227,7 @@ rte_jobstats_set_min(struct rte_jobstats *job, uint64_t period)
 		job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_max)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_max);
 void
 rte_jobstats_set_max(struct rte_jobstats *job, uint64_t period)
 {
@@ -236,7 +236,7 @@ rte_jobstats_set_max(struct rte_jobstats *job, uint64_t period)
 		job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_init)
+RTE_EXPORT_SYMBOL(rte_jobstats_init);
 int
 rte_jobstats_init(struct rte_jobstats *job, const char *name,
 		uint64_t min_period, uint64_t max_period, uint64_t initial_period,
@@ -257,7 +257,7 @@ rte_jobstats_init(struct rte_jobstats *job, const char *name,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_update_period_function)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_update_period_function);
 void
 rte_jobstats_set_update_period_function(struct rte_jobstats *job,
 		rte_job_update_period_cb_t update_period_cb)
@@ -268,7 +268,7 @@ rte_jobstats_set_update_period_function(struct rte_jobstats *job,
 	job->update_period_cb = update_period_cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_reset)
+RTE_EXPORT_SYMBOL(rte_jobstats_reset);
 void
 rte_jobstats_reset(struct rte_jobstats *job)
 {
diff --git a/lib/kvargs/rte_kvargs.c b/lib/kvargs/rte_kvargs.c
index 4e3198b33f..d1aa30b96f 100644
--- a/lib/kvargs/rte_kvargs.c
+++ b/lib/kvargs/rte_kvargs.c
@@ -152,7 +152,7 @@ check_for_valid_keys(struct rte_kvargs *kvlist,
  * E.g. given a list = { rx = 0, rx = 1, tx = 2 } the number of args for
  * arg "rx" will be 2.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_count)
+RTE_EXPORT_SYMBOL(rte_kvargs_count);
 unsigned
 rte_kvargs_count(const struct rte_kvargs *kvlist, const char *key_match)
 {
@@ -195,7 +195,7 @@ kvargs_process_common(const struct rte_kvargs *kvlist, const char *key_match,
 /*
  * For each matching key in key=value, call the given handler function.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_process)
+RTE_EXPORT_SYMBOL(rte_kvargs_process);
 int
 rte_kvargs_process(const struct rte_kvargs *kvlist, const char *key_match, arg_handler_t handler,
 		   void *opaque_arg)
@@ -206,7 +206,7 @@ rte_kvargs_process(const struct rte_kvargs *kvlist, const char *key_match, arg_h
 /*
  * For each matching key in key=value or only-key, call the given handler function.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_process_opt)
+RTE_EXPORT_SYMBOL(rte_kvargs_process_opt);
 int
 rte_kvargs_process_opt(const struct rte_kvargs *kvlist, const char *key_match,
 		       arg_handler_t handler, void *opaque_arg)
@@ -215,7 +215,7 @@ rte_kvargs_process_opt(const struct rte_kvargs *kvlist, const char *key_match,
 }
 
 /* free the rte_kvargs structure */
-RTE_EXPORT_SYMBOL(rte_kvargs_free)
+RTE_EXPORT_SYMBOL(rte_kvargs_free);
 void
 rte_kvargs_free(struct rte_kvargs *kvlist)
 {
@@ -227,7 +227,7 @@ rte_kvargs_free(struct rte_kvargs *kvlist)
 }
 
 /* Lookup a value in an rte_kvargs list by its key and value. */
-RTE_EXPORT_SYMBOL(rte_kvargs_get_with_value)
+RTE_EXPORT_SYMBOL(rte_kvargs_get_with_value);
 const char *
 rte_kvargs_get_with_value(const struct rte_kvargs *kvlist, const char *key,
 			  const char *value)
@@ -247,7 +247,7 @@ rte_kvargs_get_with_value(const struct rte_kvargs *kvlist, const char *key,
 }
 
 /* Lookup a value in an rte_kvargs list by its key. */
-RTE_EXPORT_SYMBOL(rte_kvargs_get)
+RTE_EXPORT_SYMBOL(rte_kvargs_get);
 const char *
 rte_kvargs_get(const struct rte_kvargs *kvlist, const char *key)
 {
@@ -261,7 +261,7 @@ rte_kvargs_get(const struct rte_kvargs *kvlist, const char *key)
  * an allocated structure that contains a key/value list. Also
  * check if only valid keys were used.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_parse)
+RTE_EXPORT_SYMBOL(rte_kvargs_parse);
 struct rte_kvargs *
 rte_kvargs_parse(const char *args, const char * const valid_keys[])
 {
@@ -285,7 +285,7 @@ rte_kvargs_parse(const char *args, const char * const valid_keys[])
 	return kvlist;
 }
 
-RTE_EXPORT_SYMBOL(rte_kvargs_parse_delim)
+RTE_EXPORT_SYMBOL(rte_kvargs_parse_delim);
 struct rte_kvargs *
 rte_kvargs_parse_delim(const char *args, const char * const valid_keys[],
 		       const char *valid_ends)
diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c
index f61d5a273f..5437258219 100644
--- a/lib/latencystats/rte_latencystats.c
+++ b/lib/latencystats/rte_latencystats.c
@@ -116,7 +116,7 @@ latencystats_collect(uint64_t values[])
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_update)
+RTE_EXPORT_SYMBOL(rte_latencystats_update);
 int32_t
 rte_latencystats_update(void)
 {
@@ -256,7 +256,7 @@ calc_latency(uint16_t pid __rte_unused,
 	return nb_pkts;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_init)
+RTE_EXPORT_SYMBOL(rte_latencystats_init);
 int
 rte_latencystats_init(uint64_t app_samp_intvl,
 		rte_latency_stats_flow_type_fn user_cb)
@@ -349,7 +349,7 @@ rte_latencystats_init(uint64_t app_samp_intvl,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_uninit)
+RTE_EXPORT_SYMBOL(rte_latencystats_uninit);
 int
 rte_latencystats_uninit(void)
 {
@@ -396,7 +396,7 @@ rte_latencystats_uninit(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_get_names)
+RTE_EXPORT_SYMBOL(rte_latencystats_get_names);
 int
 rte_latencystats_get_names(struct rte_metric_name *names, uint16_t size)
 {
@@ -412,7 +412,7 @@ rte_latencystats_get_names(struct rte_metric_name *names, uint16_t size)
 	return NUM_LATENCY_STATS;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_get)
+RTE_EXPORT_SYMBOL(rte_latencystats_get);
 int
 rte_latencystats_get(struct rte_metric_value *values, uint16_t size)
 {
diff --git a/lib/log/log.c b/lib/log/log.c
index 8ad5250a13..1e8e98944f 100644
--- a/lib/log/log.c
+++ b/lib/log/log.c
@@ -79,7 +79,7 @@ struct log_cur_msg {
 static RTE_DEFINE_PER_LCORE(struct log_cur_msg, log_cur_msg);
 
 /* Change the stream that will be used by logging system */
-RTE_EXPORT_SYMBOL(rte_openlog_stream)
+RTE_EXPORT_SYMBOL(rte_openlog_stream);
 int
 rte_openlog_stream(FILE *f)
 {
@@ -91,7 +91,7 @@ rte_openlog_stream(FILE *f)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_get_stream)
+RTE_EXPORT_SYMBOL(rte_log_get_stream);
 FILE *
 rte_log_get_stream(void)
 {
@@ -101,7 +101,7 @@ rte_log_get_stream(void)
 }
 
 /* Set global log level */
-RTE_EXPORT_SYMBOL(rte_log_set_global_level)
+RTE_EXPORT_SYMBOL(rte_log_set_global_level);
 void
 rte_log_set_global_level(uint32_t level)
 {
@@ -109,14 +109,14 @@ rte_log_set_global_level(uint32_t level)
 }
 
 /* Get global log level */
-RTE_EXPORT_SYMBOL(rte_log_get_global_level)
+RTE_EXPORT_SYMBOL(rte_log_get_global_level);
 uint32_t
 rte_log_get_global_level(void)
 {
 	return rte_logs.level;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_get_level)
+RTE_EXPORT_SYMBOL(rte_log_get_level);
 int
 rte_log_get_level(uint32_t type)
 {
@@ -126,7 +126,7 @@ rte_log_get_level(uint32_t type)
 	return rte_logs.dynamic_types[type].loglevel;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_can_log)
+RTE_EXPORT_SYMBOL(rte_log_can_log);
 bool
 rte_log_can_log(uint32_t logtype, uint32_t level)
 {
@@ -160,7 +160,7 @@ logtype_set_level(uint32_t type, uint32_t level)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_log_set_level)
+RTE_EXPORT_SYMBOL(rte_log_set_level);
 int
 rte_log_set_level(uint32_t type, uint32_t level)
 {
@@ -175,7 +175,7 @@ rte_log_set_level(uint32_t type, uint32_t level)
 }
 
 /* set log level by regular expression */
-RTE_EXPORT_SYMBOL(rte_log_set_level_regexp)
+RTE_EXPORT_SYMBOL(rte_log_set_level_regexp);
 int
 rte_log_set_level_regexp(const char *regex, uint32_t level)
 {
@@ -234,7 +234,7 @@ log_save_level(uint32_t priority, const char *regex, const char *pattern)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_regexp)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_regexp);
 int
 eal_log_save_regexp(const char *regex, uint32_t level)
 {
@@ -242,7 +242,7 @@ eal_log_save_regexp(const char *regex, uint32_t level)
 }
 
 /* set log level based on globbing pattern */
-RTE_EXPORT_SYMBOL(rte_log_set_level_pattern)
+RTE_EXPORT_SYMBOL(rte_log_set_level_pattern);
 int
 rte_log_set_level_pattern(const char *pattern, uint32_t level)
 {
@@ -262,7 +262,7 @@ rte_log_set_level_pattern(const char *pattern, uint32_t level)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_pattern)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_pattern);
 int
 eal_log_save_pattern(const char *pattern, uint32_t level)
 {
@@ -270,14 +270,14 @@ eal_log_save_pattern(const char *pattern, uint32_t level)
 }
 
 /* get the current loglevel for the message being processed */
-RTE_EXPORT_SYMBOL(rte_log_cur_msg_loglevel)
+RTE_EXPORT_SYMBOL(rte_log_cur_msg_loglevel);
 int rte_log_cur_msg_loglevel(void)
 {
 	return RTE_PER_LCORE(log_cur_msg).loglevel;
 }
 
 /* get the current logtype for the message being processed */
-RTE_EXPORT_SYMBOL(rte_log_cur_msg_logtype)
+RTE_EXPORT_SYMBOL(rte_log_cur_msg_logtype);
 int rte_log_cur_msg_logtype(void)
 {
 	return RTE_PER_LCORE(log_cur_msg).logtype;
@@ -329,7 +329,7 @@ log_register(const char *name, uint32_t level)
 }
 
 /* register an extended log type */
-RTE_EXPORT_SYMBOL(rte_log_register)
+RTE_EXPORT_SYMBOL(rte_log_register);
 int
 rte_log_register(const char *name)
 {
@@ -337,7 +337,7 @@ rte_log_register(const char *name)
 }
 
 /* Register an extended log type and try to pick its level from EAL options */
-RTE_EXPORT_SYMBOL(rte_log_register_type_and_pick_level)
+RTE_EXPORT_SYMBOL(rte_log_register_type_and_pick_level);
 int
 rte_log_register_type_and_pick_level(const char *name, uint32_t level_def)
 {
@@ -400,7 +400,7 @@ RTE_INIT_PRIO(log_init, LOG)
 	rte_logs.dynamic_types_len = RTE_LOGTYPE_FIRST_EXT_ID;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_level2str)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_level2str);
 const char *
 eal_log_level2str(uint32_t level)
 {
@@ -434,7 +434,7 @@ log_type_compare(const void *a, const void *b)
 }
 
 /* Dump name of each logtype, one per line. */
-RTE_EXPORT_SYMBOL(rte_log_list_types)
+RTE_EXPORT_SYMBOL(rte_log_list_types);
 void
 rte_log_list_types(FILE *out, const char *prefix)
 {
@@ -464,7 +464,7 @@ rte_log_list_types(FILE *out, const char *prefix)
 }
 
 /* dump global level and registered log types */
-RTE_EXPORT_SYMBOL(rte_log_dump)
+RTE_EXPORT_SYMBOL(rte_log_dump);
 void
 rte_log_dump(FILE *f)
 {
@@ -486,7 +486,7 @@ rte_log_dump(FILE *f)
  * Generates a log message The message will be sent in the stream
  * defined by the previous call to rte_openlog_stream().
  */
-RTE_EXPORT_SYMBOL(rte_vlog)
+RTE_EXPORT_SYMBOL(rte_vlog);
 int
 rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
 {
@@ -512,7 +512,7 @@ rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
  * defined by the previous call to rte_openlog_stream().
  * No need to check level here, done by rte_vlog().
  */
-RTE_EXPORT_SYMBOL(rte_log)
+RTE_EXPORT_SYMBOL(rte_log);
 int
 rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
 {
@@ -528,7 +528,7 @@ rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
 /*
  * Called by rte_eal_init
  */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_init)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_init);
 void
 eal_log_init(const char *id)
 {
@@ -574,7 +574,7 @@ eal_log_init(const char *id)
 /*
  * Called by eal_cleanup
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_log_cleanup)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_log_cleanup);
 void
 rte_eal_log_cleanup(void)
 {
diff --git a/lib/log/log_color.c b/lib/log/log_color.c
index 690a27f96e..cf1af6483f 100644
--- a/lib/log/log_color.c
+++ b/lib/log/log_color.c
@@ -100,7 +100,7 @@ color_snprintf(char *buf, size_t len, enum log_field field,
  *   auto - enable if stderr is a terminal
  *   never - color output is disabled.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_color)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_color);
 int
 eal_log_color(const char *mode)
 {
diff --git a/lib/log/log_syslog.c b/lib/log/log_syslog.c
index 99d4132a55..121ebafe69 100644
--- a/lib/log/log_syslog.c
+++ b/lib/log/log_syslog.c
@@ -46,7 +46,7 @@ static const struct {
 	{ "local7", LOG_LOCAL7 },
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_syslog)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_syslog);
 int
 eal_log_syslog(const char *name)
 {
diff --git a/lib/log/log_timestamp.c b/lib/log/log_timestamp.c
index 47b6f7cfc4..d08e27d18c 100644
--- a/lib/log/log_timestamp.c
+++ b/lib/log/log_timestamp.c
@@ -41,7 +41,7 @@ static struct {
 } log_time;
 
 /* Set the log timestamp format */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_timestamp)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_timestamp);
 int
 eal_log_timestamp(const char *str)
 {
diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c
index 6dab86a05e..440deebe7d 100644
--- a/lib/lpm/rte_lpm.c
+++ b/lib/lpm/rte_lpm.c
@@ -118,7 +118,7 @@ depth_to_range(uint8_t depth)
 /*
  * Find an existing lpm table and return a pointer to it.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_find_existing)
+RTE_EXPORT_SYMBOL(rte_lpm_find_existing);
 struct rte_lpm *
 rte_lpm_find_existing(const char *name)
 {
@@ -147,7 +147,7 @@ rte_lpm_find_existing(const char *name)
 /*
  * Allocates memory for LPM object
  */
-RTE_EXPORT_SYMBOL(rte_lpm_create)
+RTE_EXPORT_SYMBOL(rte_lpm_create);
 struct rte_lpm *
 rte_lpm_create(const char *name, int socket_id,
 		const struct rte_lpm_config *config)
@@ -254,7 +254,7 @@ rte_lpm_create(const char *name, int socket_id,
 /*
  * Deallocates memory for given LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_free)
+RTE_EXPORT_SYMBOL(rte_lpm_free);
 void
 rte_lpm_free(struct rte_lpm *lpm)
 {
@@ -304,7 +304,7 @@ __lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)
 
 /* Associate QSBR variable with an LPM object.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_rcu_qsbr_add)
+RTE_EXPORT_SYMBOL(rte_lpm_rcu_qsbr_add);
 int
 rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
 {
@@ -823,7 +823,7 @@ add_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked, uint8_t depth,
 /*
  * Add a route
  */
-RTE_EXPORT_SYMBOL(rte_lpm_add)
+RTE_EXPORT_SYMBOL(rte_lpm_add);
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 		uint32_t next_hop)
@@ -875,7 +875,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 /*
  * Look for a rule in the high-level rules table
  */
-RTE_EXPORT_SYMBOL(rte_lpm_is_rule_present)
+RTE_EXPORT_SYMBOL(rte_lpm_is_rule_present);
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 uint32_t *next_hop)
@@ -1181,7 +1181,7 @@ delete_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked,
 /*
  * Deletes a rule
  */
-RTE_EXPORT_SYMBOL(rte_lpm_delete)
+RTE_EXPORT_SYMBOL(rte_lpm_delete);
 int
 rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 {
@@ -1240,7 +1240,7 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 /*
  * Delete all rules from the LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_delete_all)
+RTE_EXPORT_SYMBOL(rte_lpm_delete_all);
 void
 rte_lpm_delete_all(struct rte_lpm *lpm)
 {
diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
index e23c886766..38e8247067 100644
--- a/lib/lpm/rte_lpm6.c
+++ b/lib/lpm/rte_lpm6.c
@@ -208,7 +208,7 @@ rebuild_lpm(struct rte_lpm6 *lpm)
 /*
  * Allocates memory for LPM object
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_create)
+RTE_EXPORT_SYMBOL(rte_lpm6_create);
 struct rte_lpm6 *
 rte_lpm6_create(const char *name, int socket_id,
 		const struct rte_lpm6_config *config)
@@ -349,7 +349,7 @@ rte_lpm6_create(const char *name, int socket_id,
 /*
  * Find an existing lpm table and return a pointer to it.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_find_existing)
+RTE_EXPORT_SYMBOL(rte_lpm6_find_existing);
 struct rte_lpm6 *
 rte_lpm6_find_existing(const char *name)
 {
@@ -378,7 +378,7 @@ rte_lpm6_find_existing(const char *name)
 /*
  * Deallocates memory for given LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_free)
+RTE_EXPORT_SYMBOL(rte_lpm6_free);
 void
 rte_lpm6_free(struct rte_lpm6 *lpm)
 {
@@ -823,7 +823,7 @@ simulate_add(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *masked_ip, uint8_
 /*
  * Add a route
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_add)
+RTE_EXPORT_SYMBOL(rte_lpm6_add);
 int
 rte_lpm6_add(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth,
 	     uint32_t next_hop)
@@ -913,7 +913,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
 /*
  * Looks up an IP
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_lookup)
+RTE_EXPORT_SYMBOL(rte_lpm6_lookup);
 int
 rte_lpm6_lookup(const struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip,
 		uint32_t *next_hop)
@@ -946,7 +946,7 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip,
 /*
  * Looks up a group of IP addresses
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_lookup_bulk_func)
+RTE_EXPORT_SYMBOL(rte_lpm6_lookup_bulk_func);
 int
 rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 		struct rte_ipv6_addr *ips,
@@ -992,7 +992,7 @@ rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 /*
  * Look for a rule in the high-level rules table
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_is_rule_present)
+RTE_EXPORT_SYMBOL(rte_lpm6_is_rule_present);
 int
 rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth,
 			 uint32_t *next_hop)
@@ -1042,7 +1042,7 @@ rule_delete(struct rte_lpm6 *lpm, struct rte_ipv6_addr *ip, uint8_t depth)
  * rather than doing incremental updates like
  * the regular delete function
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete_bulk_func)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete_bulk_func);
 int
 rte_lpm6_delete_bulk_func(struct rte_lpm6 *lpm,
 		struct rte_ipv6_addr *ips, uint8_t *depths,
@@ -1082,7 +1082,7 @@ rte_lpm6_delete_bulk_func(struct rte_lpm6 *lpm,
 /*
  * Delete all rules from the LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete_all)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete_all);
 void
 rte_lpm6_delete_all(struct rte_lpm6 *lpm)
 {
@@ -1267,7 +1267,7 @@ remove_tbl(struct rte_lpm6 *lpm, struct rte_lpm_tbl8_hdr *tbl_hdr,
 /*
  * Deletes a rule
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete);
 int
 rte_lpm6_delete(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth)
 {
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 9e7731a8a2..cce4d023a7 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -30,7 +30,7 @@ RTE_LOG_REGISTER_DEFAULT(mbuf_logtype, INFO);
  * rte_mempool_create(), or called directly if using
  * rte_mempool_create_empty()/rte_mempool_populate()
  */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_init)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_init);
 void
 rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
 {
@@ -71,7 +71,7 @@ rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
  * rte_mempool_obj_iter() or rte_mempool_create().
  * Set the fields of a packet mbuf to their default values.
  */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_init)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_init);
 void
 rte_pktmbuf_init(struct rte_mempool *mp,
 		 __rte_unused void *opaque_arg,
@@ -222,7 +222,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 }
 
 /* Helper to create a mbuf pool with given mempool ops name*/
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_by_ops)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_by_ops);
 struct rte_mempool *
 rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
@@ -275,7 +275,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
 }
 
 /* helper to create a mbuf pool */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create);
 struct rte_mempool *
 rte_pktmbuf_pool_create(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
@@ -286,7 +286,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned int n,
 }
 
 /* Helper to create a mbuf pool with pinned external data buffers. */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_extbuf)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_extbuf);
 struct rte_mempool *
 rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size,
@@ -374,7 +374,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-RTE_EXPORT_SYMBOL(rte_mbuf_sanity_check)
+RTE_EXPORT_SYMBOL(rte_mbuf_sanity_check);
 void
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 {
@@ -384,7 +384,7 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 		rte_panic("%s\n", reason);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_check)
+RTE_EXPORT_SYMBOL(rte_mbuf_check);
 int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason)
 {
@@ -494,7 +494,7 @@ __rte_pktmbuf_free_seg_via_array(struct rte_mbuf *m,
 #define RTE_PKTMBUF_FREE_PENDING_SZ 64
 
 /* Free a bulk of packet mbufs back into their original mempools. */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_free_bulk)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_free_bulk);
 void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 {
 	struct rte_mbuf *m, *m_next, *pending[RTE_PKTMBUF_FREE_PENDING_SZ];
@@ -521,7 +521,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 }
 
 /* Creates a shallow copy of mbuf */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_clone)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_clone);
 struct rte_mbuf *
 rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 {
@@ -561,7 +561,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 }
 
 /* convert multi-segment mbuf to single mbuf */
-RTE_EXPORT_SYMBOL(__rte_pktmbuf_linearize)
+RTE_EXPORT_SYMBOL(__rte_pktmbuf_linearize);
 int
 __rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
 {
@@ -599,7 +599,7 @@ __rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
 }
 
 /* Create a deep copy of mbuf */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_copy)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_copy);
 struct rte_mbuf *
 rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 		 uint32_t off, uint32_t len)
@@ -677,7 +677,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 }
 
 /* dump a mbuf on console */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_dump)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_dump);
 void
 rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 {
@@ -720,7 +720,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 }
 
 /* read len data bytes in a mbuf at specified offset (internal) */
-RTE_EXPORT_SYMBOL(__rte_pktmbuf_read)
+RTE_EXPORT_SYMBOL(__rte_pktmbuf_read);
 const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 	uint32_t len, void *buf)
 {
@@ -758,7 +758,7 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
  * Get the name of a RX offload flag. Must be kept synchronized with flag
  * definitions in rte_mbuf.h.
  */
-RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_name)
+RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_name);
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
@@ -798,7 +798,7 @@ struct flag_mask {
 };
 
 /* write the list of rx ol flags in buffer buf */
-RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_list)
+RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_list);
 int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
@@ -865,7 +865,7 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
  * Get the name of a TX offload flag. Must be kept synchronized with flag
  * definitions in rte_mbuf.h.
  */
-RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_name)
+RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_name);
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
@@ -900,7 +900,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
 }
 
 /* write the list of tx ol flags in buffer buf */
-RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_list)
+RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_list);
 int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index 5987c9dee8..f6dd7cd556 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -190,7 +190,7 @@ __mbuf_dynfield_lookup(const char *name)
 	return mbuf_dynfield;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_lookup)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_lookup);
 int
 rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
 {
@@ -327,7 +327,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 	return offset;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register_offset)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register_offset);
 int
 rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 				size_t req)
@@ -354,7 +354,7 @@ rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register);
 int
 rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
 {
@@ -387,7 +387,7 @@ __mbuf_dynflag_lookup(const char *name)
 	return mbuf_dynflag;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_lookup)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_lookup);
 int
 rte_mbuf_dynflag_lookup(const char *name,
 			struct rte_mbuf_dynflag *params)
@@ -503,7 +503,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 	return bitnum;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register_bitnum)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register_bitnum);
 int
 rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 				unsigned int req)
@@ -527,14 +527,14 @@ rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register);
 int
 rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
 {
 	return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_dump)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_dump);
 void rte_mbuf_dyn_dump(FILE *out)
 {
 	struct mbuf_dynfield_list *mbuf_dynfield_list;
@@ -622,7 +622,7 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_rx_timestamp_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_rx_timestamp_register);
 int
 rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag)
 {
@@ -630,7 +630,7 @@ rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag)
 			"Rx", RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_tx_timestamp_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_tx_timestamp_register);
 int
 rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag)
 {
diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c
index 219b364803..3ef59826ef 100644
--- a/lib/mbuf/rte_mbuf_pool_ops.c
+++ b/lib/mbuf/rte_mbuf_pool_ops.c
@@ -11,7 +11,7 @@
 
 #include "mbuf_log.h"
 
-RTE_EXPORT_SYMBOL(rte_mbuf_set_platform_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_set_platform_mempool_ops);
 int
 rte_mbuf_set_platform_mempool_ops(const char *ops_name)
 {
@@ -41,7 +41,7 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name)
 	return -EEXIST;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_platform_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_platform_mempool_ops);
 const char *
 rte_mbuf_platform_mempool_ops(void)
 {
@@ -53,7 +53,7 @@ rte_mbuf_platform_mempool_ops(void)
 	return mz->addr;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_set_user_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_set_user_mempool_ops);
 int
 rte_mbuf_set_user_mempool_ops(const char *ops_name)
 {
@@ -78,7 +78,7 @@ rte_mbuf_set_user_mempool_ops(const char *ops_name)
 
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_user_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_user_mempool_ops);
 const char *
 rte_mbuf_user_mempool_ops(void)
 {
@@ -91,7 +91,7 @@ rte_mbuf_user_mempool_ops(void)
 }
 
 /* Return mbuf pool ops name */
-RTE_EXPORT_SYMBOL(rte_mbuf_best_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_best_mempool_ops);
 const char *
 rte_mbuf_best_mempool_ops(void)
 {
diff --git a/lib/mbuf/rte_mbuf_ptype.c b/lib/mbuf/rte_mbuf_ptype.c
index 2c80294498..715c6c1700 100644
--- a/lib/mbuf/rte_mbuf_ptype.c
+++ b/lib/mbuf/rte_mbuf_ptype.c
@@ -9,7 +9,7 @@
 #include <rte_mbuf_ptype.h>
 
 /* get the name of the l2 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l2_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l2_name);
 const char *rte_get_ptype_l2_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L2_MASK) {
@@ -28,7 +28,7 @@ const char *rte_get_ptype_l2_name(uint32_t ptype)
 }
 
 /* get the name of the l3 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l3_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l3_name);
 const char *rte_get_ptype_l3_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L3_MASK) {
@@ -43,7 +43,7 @@ const char *rte_get_ptype_l3_name(uint32_t ptype)
 }
 
 /* get the name of the l4 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l4_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l4_name);
 const char *rte_get_ptype_l4_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L4_MASK) {
@@ -60,7 +60,7 @@ const char *rte_get_ptype_l4_name(uint32_t ptype)
 }
 
 /* get the name of the tunnel packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_tunnel_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_tunnel_name);
 const char *rte_get_ptype_tunnel_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
@@ -82,7 +82,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l2 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l2_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l2_name);
 const char *rte_get_ptype_inner_l2_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
@@ -94,7 +94,7 @@ const char *rte_get_ptype_inner_l2_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l3 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l3_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l3_name);
 const char *rte_get_ptype_inner_l3_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
@@ -111,7 +111,7 @@ const char *rte_get_ptype_inner_l3_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l4 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l4_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l4_name);
 const char *rte_get_ptype_inner_l4_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
@@ -127,7 +127,7 @@ const char *rte_get_ptype_inner_l4_name(uint32_t ptype)
 }
 
 /* write the packet type name into the buffer */
-RTE_EXPORT_SYMBOL(rte_get_ptype_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_name);
 int rte_get_ptype_name(uint32_t ptype, char *buf, size_t buflen)
 {
 	int ret;
diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c
index 5ff32f1e45..505b80aa33 100644
--- a/lib/member/rte_member.c
+++ b/lib/member/rte_member.c
@@ -24,7 +24,7 @@ static struct rte_tailq_elem rte_member_tailq = {
 };
 EAL_REGISTER_TAILQ(rte_member_tailq)
 
-RTE_EXPORT_SYMBOL(rte_member_find_existing)
+RTE_EXPORT_SYMBOL(rte_member_find_existing);
 struct rte_member_setsum *
 rte_member_find_existing(const char *name)
 {
@@ -49,7 +49,7 @@ rte_member_find_existing(const char *name)
 	return setsum;
 }
 
-RTE_EXPORT_SYMBOL(rte_member_free)
+RTE_EXPORT_SYMBOL(rte_member_free);
 void
 rte_member_free(struct rte_member_setsum *setsum)
 {
@@ -88,7 +88,7 @@ rte_member_free(struct rte_member_setsum *setsum)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_member_create)
+RTE_EXPORT_SYMBOL(rte_member_create);
 struct rte_member_setsum *
 rte_member_create(const struct rte_member_parameters *params)
 {
@@ -192,7 +192,7 @@ rte_member_create(const struct rte_member_parameters *params)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_member_add)
+RTE_EXPORT_SYMBOL(rte_member_add);
 int
 rte_member_add(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t set_id)
@@ -212,7 +212,7 @@ rte_member_add(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_add_byte_count)
+RTE_EXPORT_SYMBOL(rte_member_add_byte_count);
 int
 rte_member_add_byte_count(const struct rte_member_setsum *setsum,
 			  const void *key, uint32_t byte_count)
@@ -228,7 +228,7 @@ rte_member_add_byte_count(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup)
+RTE_EXPORT_SYMBOL(rte_member_lookup);
 int
 rte_member_lookup(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t *set_id)
@@ -248,7 +248,7 @@ rte_member_lookup(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_member_lookup_bulk);
 int
 rte_member_lookup_bulk(const struct rte_member_setsum *setsum,
 				const void **keys, uint32_t num_keys,
@@ -269,7 +269,7 @@ rte_member_lookup_bulk(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_multi)
+RTE_EXPORT_SYMBOL(rte_member_lookup_multi);
 int
 rte_member_lookup_multi(const struct rte_member_setsum *setsum, const void *key,
 				uint32_t match_per_key, member_set_t *set_id)
@@ -289,7 +289,7 @@ rte_member_lookup_multi(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_multi_bulk)
+RTE_EXPORT_SYMBOL(rte_member_lookup_multi_bulk);
 int
 rte_member_lookup_multi_bulk(const struct rte_member_setsum *setsum,
 			const void **keys, uint32_t num_keys,
@@ -312,7 +312,7 @@ rte_member_lookup_multi_bulk(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_query_count)
+RTE_EXPORT_SYMBOL(rte_member_query_count);
 int
 rte_member_query_count(const struct rte_member_setsum *setsum,
 		       const void *key, uint64_t *output)
@@ -328,7 +328,7 @@ rte_member_query_count(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_report_heavyhitter)
+RTE_EXPORT_SYMBOL(rte_member_report_heavyhitter);
 int
 rte_member_report_heavyhitter(const struct rte_member_setsum *setsum,
 				void **key, uint64_t *count)
@@ -344,7 +344,7 @@ rte_member_report_heavyhitter(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_delete)
+RTE_EXPORT_SYMBOL(rte_member_delete);
 int
 rte_member_delete(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t set_id)
@@ -364,7 +364,7 @@ rte_member_delete(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_reset)
+RTE_EXPORT_SYMBOL(rte_member_reset);
 void
 rte_member_reset(const struct rte_member_setsum *setsum)
 {
diff --git a/lib/mempool/mempool_trace_points.c b/lib/mempool/mempool_trace_points.c
index ec465780f4..fa15c55994 100644
--- a/lib/mempool/mempool_trace_points.c
+++ b/lib/mempool/mempool_trace_points.c
@@ -7,35 +7,35 @@
 
 #include "mempool_trace.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_dequeue_bulk,
 	lib.mempool.ops.deq.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_contig_blocks, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_contig_blocks, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_dequeue_contig_blocks,
 	lib.mempool.ops.deq.contig)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_enqueue_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_enqueue_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_enqueue_bulk,
 	lib.mempool.ops.enq.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_put, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_put, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_generic_put,
 	lib.mempool.generic.put)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_put_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_put_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_put_bulk,
 	lib.mempool.put.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_get, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_get, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_generic_get,
 	lib.mempool.generic.get)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_bulk,
 	lib.mempool.get.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_contig_blocks, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_contig_blocks, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_contig_blocks,
 	lib.mempool.get.blocks)
 
@@ -66,14 +66,14 @@ RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_create,
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_free,
 	lib.mempool.cache.free)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_default_cache, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_default_cache, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_default_cache,
 	lib.mempool.default.cache)
 
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_page_size,
 	lib.mempool.get.page.size)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_cache_flush, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_cache_flush, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_flush,
 	lib.mempool.cache.flush)
 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 1021ede0c2..41a0d8c35c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -32,7 +32,7 @@
 #include "mempool_trace.h"
 #include "rte_mempool.h"
 
-RTE_EXPORT_SYMBOL(rte_mempool_logtype)
+RTE_EXPORT_SYMBOL(rte_mempool_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_mempool_logtype, INFO);
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
@@ -181,7 +181,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
 }
 
 /* call obj_cb() for each mempool element */
-RTE_EXPORT_SYMBOL(rte_mempool_obj_iter)
+RTE_EXPORT_SYMBOL(rte_mempool_obj_iter);
 uint32_t
 rte_mempool_obj_iter(struct rte_mempool *mp,
 	rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg)
@@ -200,7 +200,7 @@ rte_mempool_obj_iter(struct rte_mempool *mp,
 }
 
 /* call mem_cb() for each mempool memory chunk */
-RTE_EXPORT_SYMBOL(rte_mempool_mem_iter)
+RTE_EXPORT_SYMBOL(rte_mempool_mem_iter);
 uint32_t
 rte_mempool_mem_iter(struct rte_mempool *mp,
 	rte_mempool_mem_cb_t *mem_cb, void *mem_cb_arg)
@@ -217,7 +217,7 @@ rte_mempool_mem_iter(struct rte_mempool *mp,
 }
 
 /* get the header, trailer and total size of a mempool element. */
-RTE_EXPORT_SYMBOL(rte_mempool_calc_obj_size)
+RTE_EXPORT_SYMBOL(rte_mempool_calc_obj_size);
 uint32_t
 rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz)
@@ -318,7 +318,7 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_iova)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_iova);
 int
 rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -404,7 +404,7 @@ get_iova(void *addr)
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_virt)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_virt);
 int
 rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -459,7 +459,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 }
 
 /* Get the minimal page size used in a mempool before populating it. */
-RTE_EXPORT_SYMBOL(rte_mempool_get_page_size)
+RTE_EXPORT_SYMBOL(rte_mempool_get_page_size);
 int
 rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 {
@@ -489,7 +489,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
  * and populate them. Return the number of objects added, or a negative
  * value on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_default)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_default);
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
@@ -668,7 +668,7 @@ rte_mempool_memchunk_anon_free(struct rte_mempool_memhdr *memhdr,
 }
 
 /* populate the mempool with an anonymous mapping */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_anon)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_anon);
 int
 rte_mempool_populate_anon(struct rte_mempool *mp)
 {
@@ -723,7 +723,7 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 }
 
 /* free a mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_free)
+RTE_EXPORT_SYMBOL(rte_mempool_free);
 void
 rte_mempool_free(struct rte_mempool *mp)
 {
@@ -772,7 +772,7 @@ mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
  * returned to an underlying mempool. This structure is identical to the
  * local_cache[lcore_id] pointed to by the mempool structure.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_cache_create)
+RTE_EXPORT_SYMBOL(rte_mempool_cache_create);
 struct rte_mempool_cache *
 rte_mempool_cache_create(uint32_t size, int socket_id)
 {
@@ -802,7 +802,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id)
  * remaining objects in the cache are flushed to the corresponding
  * mempool.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_cache_free)
+RTE_EXPORT_SYMBOL(rte_mempool_cache_free);
 void
 rte_mempool_cache_free(struct rte_mempool_cache *cache)
 {
@@ -811,7 +811,7 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 }
 
 /* create an empty mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_create_empty)
+RTE_EXPORT_SYMBOL(rte_mempool_create_empty);
 struct rte_mempool *
 rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	unsigned cache_size, unsigned private_data_size,
@@ -980,7 +980,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 }
 
 /* create the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_create)
+RTE_EXPORT_SYMBOL(rte_mempool_create);
 struct rte_mempool *
 rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	unsigned cache_size, unsigned private_data_size,
@@ -1017,7 +1017,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 }
 
 /* Return the number of entries in the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_avail_count)
+RTE_EXPORT_SYMBOL(rte_mempool_avail_count);
 unsigned int
 rte_mempool_avail_count(const struct rte_mempool *mp)
 {
@@ -1042,7 +1042,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 }
 
 /* return the number of entries allocated from the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_in_use_count)
+RTE_EXPORT_SYMBOL(rte_mempool_in_use_count);
 unsigned int
 rte_mempool_in_use_count(const struct rte_mempool *mp)
 {
@@ -1074,7 +1074,7 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
 }
 
 /* check and update cookies or panic (internal) */
-RTE_EXPORT_SYMBOL(rte_mempool_check_cookies)
+RTE_EXPORT_SYMBOL(rte_mempool_check_cookies);
 void rte_mempool_check_cookies(const struct rte_mempool *mp,
 	void * const *obj_table_const, unsigned n, int free)
 {
@@ -1143,7 +1143,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_contig_blocks_check_cookies)
+RTE_EXPORT_SYMBOL(rte_mempool_contig_blocks_check_cookies);
 void
 rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
 	void * const *first_obj_table_const, unsigned int n, int free)
@@ -1220,7 +1220,7 @@ mempool_audit_cache(const struct rte_mempool *mp)
 }
 
 /* check the consistency of mempool (size, cookies, ...) */
-RTE_EXPORT_SYMBOL(rte_mempool_audit)
+RTE_EXPORT_SYMBOL(rte_mempool_audit);
 void
 rte_mempool_audit(struct rte_mempool *mp)
 {
@@ -1232,7 +1232,7 @@ rte_mempool_audit(struct rte_mempool *mp)
 }
 
 /* dump the status of the mempool on the console */
-RTE_EXPORT_SYMBOL(rte_mempool_dump)
+RTE_EXPORT_SYMBOL(rte_mempool_dump);
 void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
@@ -1337,7 +1337,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 }
 
 /* dump the status of all mempools on the console */
-RTE_EXPORT_SYMBOL(rte_mempool_list_dump)
+RTE_EXPORT_SYMBOL(rte_mempool_list_dump);
 void
 rte_mempool_list_dump(FILE *f)
 {
@@ -1358,7 +1358,7 @@ rte_mempool_list_dump(FILE *f)
 }
 
 /* search a mempool from its name */
-RTE_EXPORT_SYMBOL(rte_mempool_lookup)
+RTE_EXPORT_SYMBOL(rte_mempool_lookup);
 struct rte_mempool *
 rte_mempool_lookup(const char *name)
 {
@@ -1386,7 +1386,7 @@ rte_mempool_lookup(const char *name)
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_walk)
+RTE_EXPORT_SYMBOL(rte_mempool_walk);
 void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 		      void *arg)
 {
@@ -1405,7 +1405,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 	rte_mcfg_mempool_read_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_mem_range, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_mem_range, 24.07);
 int rte_mempool_get_mem_range(const struct rte_mempool *mp,
 		struct rte_mempool_mem_range_info *mem_range)
 {
@@ -1440,7 +1440,7 @@ int rte_mempool_get_mem_range(const struct rte_mempool *mp,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_obj_alignment, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_obj_alignment, 24.07);
 size_t rte_mempool_get_obj_alignment(const struct rte_mempool *mp)
 {
 	if (mp == NULL)
@@ -1474,7 +1474,7 @@ mempool_event_callback_invoke(enum rte_mempool_event event,
 	rte_mcfg_tailq_read_unlock();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_register);
 int
 rte_mempool_event_callback_register(rte_mempool_event_callback *func,
 				    void *user_data)
@@ -1513,7 +1513,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_unregister);
 int
 rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
 				      void *user_data)
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 066bec36fc..8dcb9161bf 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -15,14 +15,14 @@
 #include "mempool_trace.h"
 
 /* indirect jump table to support external memory pools. */
-RTE_EXPORT_SYMBOL(rte_mempool_ops_table)
+RTE_EXPORT_SYMBOL(rte_mempool_ops_table);
 struct rte_mempool_ops_table rte_mempool_ops_table = {
 	.sl =  RTE_SPINLOCK_INITIALIZER,
 	.num_ops = 0
 };
 
 /* add a new ops struct in rte_mempool_ops_table, return its index. */
-RTE_EXPORT_SYMBOL(rte_mempool_register_ops)
+RTE_EXPORT_SYMBOL(rte_mempool_register_ops);
 int
 rte_mempool_register_ops(const struct rte_mempool_ops *h)
 {
@@ -149,7 +149,7 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 }
 
 /* wrapper to get additional mempool info */
-RTE_EXPORT_SYMBOL(rte_mempool_ops_get_info)
+RTE_EXPORT_SYMBOL(rte_mempool_ops_get_info);
 int
 rte_mempool_ops_get_info(const struct rte_mempool *mp,
 			 struct rte_mempool_info *info)
@@ -165,7 +165,7 @@ rte_mempool_ops_get_info(const struct rte_mempool *mp,
 
 
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
-RTE_EXPORT_SYMBOL(rte_mempool_set_ops_byname)
+RTE_EXPORT_SYMBOL(rte_mempool_set_ops_byname);
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	void *pool_config)
diff --git a/lib/mempool/rte_mempool_ops_default.c b/lib/mempool/rte_mempool_ops_default.c
index d27d6fc473..3ece87ca26 100644
--- a/lib/mempool/rte_mempool_ops_default.c
+++ b/lib/mempool/rte_mempool_ops_default.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include <rte_mempool.h>
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_helper)
+RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_helper);
 ssize_t
 rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
@@ -67,7 +67,7 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 	return mem_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_default)
+RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_default);
 ssize_t
 rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
@@ -90,7 +90,7 @@ check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_populate_helper)
+RTE_EXPORT_SYMBOL(rte_mempool_op_populate_helper);
 int
 rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int flags,
 			unsigned int max_objs, void *vaddr, rte_iova_t iova,
@@ -138,7 +138,7 @@ rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int flags,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_populate_default)
+RTE_EXPORT_SYMBOL(rte_mempool_op_populate_default);
 int
 rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 				void *vaddr, rte_iova_t iova, size_t len,
diff --git a/lib/meter/rte_meter.c b/lib/meter/rte_meter.c
index ec76bec4cb..b78c2abe34 100644
--- a/lib/meter/rte_meter.c
+++ b/lib/meter/rte_meter.c
@@ -37,7 +37,7 @@ rte_meter_get_tb_params(uint64_t hz, uint64_t rate, uint64_t *tb_period, uint64_
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_srtcm_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_srtcm_profile_config);
 int
 rte_meter_srtcm_profile_config(struct rte_meter_srtcm_profile *p,
 	struct rte_meter_srtcm_params *params)
@@ -60,7 +60,7 @@ rte_meter_srtcm_profile_config(struct rte_meter_srtcm_profile *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_srtcm_config)
+RTE_EXPORT_SYMBOL(rte_meter_srtcm_config);
 int
 rte_meter_srtcm_config(struct rte_meter_srtcm *m,
 	struct rte_meter_srtcm_profile *p)
@@ -77,7 +77,7 @@ rte_meter_srtcm_config(struct rte_meter_srtcm *m,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_profile_config);
 int
 rte_meter_trtcm_profile_config(struct rte_meter_trtcm_profile *p,
 	struct rte_meter_trtcm_params *params)
@@ -105,7 +105,7 @@ rte_meter_trtcm_profile_config(struct rte_meter_trtcm_profile *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_config);
 int
 rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	struct rte_meter_trtcm_profile *p)
@@ -122,7 +122,7 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config);
 int
 rte_meter_trtcm_rfc4115_profile_config(
 	struct rte_meter_trtcm_rfc4115_profile *p,
@@ -148,7 +148,7 @@ rte_meter_trtcm_rfc4115_profile_config(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_config);
 int
 rte_meter_trtcm_rfc4115_config(
 	struct rte_meter_trtcm_rfc4115 *m,
diff --git a/lib/metrics/rte_metrics.c b/lib/metrics/rte_metrics.c
index 4cd4623b7a..5065a7d4af 100644
--- a/lib/metrics/rte_metrics.c
+++ b/lib/metrics/rte_metrics.c
@@ -56,7 +56,7 @@ struct rte_metrics_data_s {
 	rte_spinlock_t lock;
 };
 
-RTE_EXPORT_SYMBOL(rte_metrics_init)
+RTE_EXPORT_SYMBOL(rte_metrics_init);
 int
 rte_metrics_init(int socket_id)
 {
@@ -82,7 +82,7 @@ rte_metrics_init(int socket_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_deinit)
+RTE_EXPORT_SYMBOL(rte_metrics_deinit);
 int
 rte_metrics_deinit(void)
 {
@@ -106,7 +106,7 @@ rte_metrics_deinit(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_reg_name)
+RTE_EXPORT_SYMBOL(rte_metrics_reg_name);
 int
 rte_metrics_reg_name(const char *name)
 {
@@ -115,7 +115,7 @@ rte_metrics_reg_name(const char *name)
 	return rte_metrics_reg_names(list_names, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_reg_names)
+RTE_EXPORT_SYMBOL(rte_metrics_reg_names);
 int
 rte_metrics_reg_names(const char * const *names, uint16_t cnt_names)
 {
@@ -162,14 +162,14 @@ rte_metrics_reg_names(const char * const *names, uint16_t cnt_names)
 	return idx_base;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_update_value)
+RTE_EXPORT_SYMBOL(rte_metrics_update_value);
 int
 rte_metrics_update_value(int port_id, uint16_t key, const uint64_t value)
 {
 	return rte_metrics_update_values(port_id, key, &value, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_update_values)
+RTE_EXPORT_SYMBOL(rte_metrics_update_values);
 int
 rte_metrics_update_values(int port_id,
 	uint16_t key,
@@ -232,7 +232,7 @@ rte_metrics_update_values(int port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_get_names)
+RTE_EXPORT_SYMBOL(rte_metrics_get_names);
 int
 rte_metrics_get_names(struct rte_metric_name *names,
 	uint16_t capacity)
@@ -264,7 +264,7 @@ rte_metrics_get_names(struct rte_metric_name *names,
 	return return_value;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_get_values)
+RTE_EXPORT_SYMBOL(rte_metrics_get_values);
 int
 rte_metrics_get_values(int port_id,
 	struct rte_metric_value *values,
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index f9ec556595..3061d6d15f 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -72,7 +72,7 @@ rte_metrics_tel_reg_port_ethdev_to_metrics(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05);
 int32_t
 rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 {
@@ -227,7 +227,7 @@ rte_metrics_tel_format_port(uint32_t pid, json_t *ports,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05);
 int32_t
 rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 		char **json_buffer)
@@ -281,7 +281,7 @@ rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05);
 int32_t
 rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 		int *reg_index, char **json_buffer)
@@ -312,7 +312,7 @@ rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05);
 int32_t
 rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 {
@@ -379,7 +379,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05);
 int32_t
 rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 {
@@ -550,7 +550,7 @@ RTE_INIT(metrics_ctor)
 
 #else /* !RTE_HAS_JANSSON */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05);
 int32_t
 rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 {
@@ -560,7 +560,7 @@ rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05);
 int32_t
 rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	char **json_buffer)
@@ -571,7 +571,7 @@ rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05);
 int32_t
 rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	int *reg_index, char **json_buffer)
@@ -583,7 +583,7 @@ rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05);
 int32_t
 rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 {
@@ -592,7 +592,7 @@ rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05);
 int32_t
 rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 {
@@ -602,7 +602,7 @@ rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_global_stats, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_global_stats, 20.05);
 int32_t
 rte_metrics_tel_get_global_stats(struct telemetry_encode_param *ep)
 {
diff --git a/lib/mldev/mldev_utils.c b/lib/mldev/mldev_utils.c
index b15f825158..dc60af306e 100644
--- a/lib/mldev/mldev_utils.c
+++ b/lib/mldev/mldev_utils.c
@@ -15,7 +15,7 @@
  * This file implements Machine Learning utility routines, except type conversion routines.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_size_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_size_get);
 int
 rte_ml_io_type_size_get(enum rte_ml_io_type type)
 {
@@ -51,7 +51,7 @@ rte_ml_io_type_size_get(enum rte_ml_io_type type)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_to_str)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_to_str);
 void
 rte_ml_io_type_to_str(enum rte_ml_io_type type, char *str, int len)
 {
diff --git a/lib/mldev/mldev_utils_neon.c b/lib/mldev/mldev_utils_neon.c
index 0222bd7e15..03c9236b3a 100644
--- a/lib/mldev/mldev_utils_neon.c
+++ b/lib/mldev/mldev_utils_neon.c
@@ -77,7 +77,7 @@ __float32_to_int8_neon_s8x1(const float *input, int8_t *output, float scale, int
 	*output = vqmovnh_s16(s16);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11);
 int
 rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -152,7 +152,7 @@ __int8_to_float32_neon_f32x1(const int8_t *input, float *output, float scale, in
 	*output = scale * (vcvts_f32_s32((int32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11);
 int
 rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -246,7 +246,7 @@ __float32_to_uint8_neon_u8x1(const float *input, uint8_t *output, float scale, u
 	*output = vqmovnh_u16(u16);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11);
 int
 rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -321,7 +321,7 @@ __uint8_to_float32_neon_f32x1(const uint8_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_u32((uint32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11);
 int
 rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -401,7 +401,7 @@ __float32_to_int16_neon_s16x1(const float *input, int16_t *output, float scale,
 	*output = vqmovns_s32(vget_lane_s32(s32x2, 0));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11);
 int
 rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -470,7 +470,7 @@ __int16_to_float32_neon_f32x1(const int16_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_s32((int32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11);
 int
 rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -547,7 +547,7 @@ __float32_to_uint16_neon_u16x1(const float *input, uint16_t *output, float scale
 	*output = vqmovns_u32(u32) + zero_point;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11);
 int
 rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint16_t zero_point)
@@ -618,7 +618,7 @@ __uint16_to_float32_neon_f32x1(const uint16_t *input, float *output, float scale
 	*output = scale * (vcvts_f32_u32((uint32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11);
 int
 rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint16_t zero_point)
@@ -697,7 +697,7 @@ __float32_to_int32_neon_s32x1(const float *input, int32_t *output, float scale,
 	vst1_lane_s32(output, s32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11);
 int
 rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -762,7 +762,7 @@ __int32_to_float32_neon_f32x1(const int32_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_s32(*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11);
 int
 rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -830,7 +830,7 @@ __float32_to_uint32_neon_u32x1(const float *input, uint32_t *output, float scale
 	*output = vcvtas_u32_f32((*input) / scale + (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11);
 int
 rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint32_t zero_point)
@@ -897,7 +897,7 @@ __uint32_to_float32_neon_f32x1(const uint32_t *input, float *output, float scale
 	*output = scale * (vcvts_f32_u32(*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11);
 int
 rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint32_t zero_point)
@@ -992,7 +992,7 @@ __float32_to_int64_neon_s64x1(const float *input, int64_t *output, float scale,
 	*output = s64;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11);
 int
 rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -1081,7 +1081,7 @@ __int64_to_float32_neon_f32x1(const int64_t *input, float *output, float scale,
 	vst1_lane_f32(output, f32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11);
 int
 rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -1172,7 +1172,7 @@ __float32_to_uint64_neon_u64x1(const float *input, uint64_t *output, float scale
 	vst1q_lane_u64(output, u64x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11);
 int
 rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint64_t zero_point)
@@ -1263,7 +1263,7 @@ __uint64_to_float32_neon_f32x1(const uint64_t *input, float *output, float scale
 	vst1_lane_f32(output, f32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11);
 int
 rte_ml_io_uint64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint64_t zero_point)
@@ -1332,7 +1332,7 @@ __float32_to_float16_neon_f16x1(const float32_t *input, float16_t *output)
 	vst1_lane_f16(output, f16x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11);
 int
 rte_ml_io_float32_to_float16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -1400,7 +1400,7 @@ __float16_to_float32_neon_f32x1(const float16_t *input, float32_t *output)
 	vst1q_lane_f32(output, f32x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11);
 int
 rte_ml_io_float16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_neon_bfloat16.c b/lib/mldev/mldev_utils_neon_bfloat16.c
index 65cd73f880..0456528514 100644
--- a/lib/mldev/mldev_utils_neon_bfloat16.c
+++ b/lib/mldev/mldev_utils_neon_bfloat16.c
@@ -51,7 +51,7 @@ __float32_to_bfloat16_neon_f16x1(const float32_t *input, bfloat16_t *output)
 	vst1_lane_bf16(output, bf16x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11);
 int
 rte_ml_io_float32_to_bfloat16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -119,7 +119,7 @@ __bfloat16_to_float32_neon_f32x1(const bfloat16_t *input, float32_t *output)
 	vst1q_lane_f32(output, f32x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11);
 int
 rte_ml_io_bfloat16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_scalar.c b/lib/mldev/mldev_utils_scalar.c
index a3aac3f92e..db01e5f68b 100644
--- a/lib/mldev/mldev_utils_scalar.c
+++ b/lib/mldev/mldev_utils_scalar.c
@@ -11,7 +11,7 @@
  * types from higher precision to lower precision and vice-versa, except bfloat16.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11);
 int
 rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -45,7 +45,7 @@ rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11);
 int
 rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -70,7 +70,7 @@ rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11);
 int
 rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -104,7 +104,7 @@ rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11);
 int
 rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -129,7 +129,7 @@ rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11);
 int
 rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -163,7 +163,7 @@ rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11);
 int
 rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -188,7 +188,7 @@ rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11);
 int
 rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint16_t zero_point)
@@ -222,7 +222,7 @@ rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11);
 int
 rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint16_t zero_point)
@@ -247,7 +247,7 @@ rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11);
 int
 rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -272,7 +272,7 @@ rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11);
 int
 rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -297,7 +297,7 @@ rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11);
 int
 rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint32_t zero_point)
@@ -328,7 +328,7 @@ rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11);
 int
 rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint32_t zero_point)
@@ -353,7 +353,7 @@ rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11);
 int
 rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -378,7 +378,7 @@ rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11);
 int
 rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -403,7 +403,7 @@ rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11);
 int
 rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint64_t zero_point)
@@ -434,7 +434,7 @@ rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11);
 int
 rte_ml_io_uint64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint64_t zero_point)
@@ -581,7 +581,7 @@ __float32_to_float16_scalar_rtn(float x)
 	return u16;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11);
 int
 rte_ml_io_float32_to_float16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -666,7 +666,7 @@ __float16_to_float32_scalar_rtx(uint16_t f16)
 	return f32.f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11);
 int
 rte_ml_io_float16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_scalar_bfloat16.c b/lib/mldev/mldev_utils_scalar_bfloat16.c
index a098d31526..757d92b963 100644
--- a/lib/mldev/mldev_utils_scalar_bfloat16.c
+++ b/lib/mldev/mldev_utils_scalar_bfloat16.c
@@ -93,7 +93,7 @@ __float32_to_bfloat16_scalar_rtn(float x)
 	return u16;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11);
 int
 rte_ml_io_float32_to_bfloat16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -176,7 +176,7 @@ __bfloat16_to_float32_scalar_rtx(uint16_t f16)
 	return f32.f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11);
 int
 rte_ml_io_bfloat16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c
index b61e4be45c..e1abb52e90 100644
--- a/lib/mldev/rte_mldev.c
+++ b/lib/mldev/rte_mldev.c
@@ -24,14 +24,14 @@ struct rte_ml_op_pool_private {
 	/*< Size of private user data with each operation. */
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_dev);
 struct rte_ml_dev *
 rte_ml_dev_pmd_get_dev(int16_t dev_id)
 {
 	return &ml_dev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_named_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_named_dev);
 struct rte_ml_dev *
 rte_ml_dev_pmd_get_named_dev(const char *name)
 {
@@ -50,7 +50,7 @@ rte_ml_dev_pmd_get_named_dev(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_allocate);
 struct rte_ml_dev *
 rte_ml_dev_pmd_allocate(const char *name, uint8_t socket_id)
 {
@@ -124,7 +124,7 @@ rte_ml_dev_pmd_allocate(const char *name, uint8_t socket_id)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_release);
 int
 rte_ml_dev_pmd_release(struct rte_ml_dev *dev)
 {
@@ -160,7 +160,7 @@ rte_ml_dev_pmd_release(struct rte_ml_dev *dev)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_init, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_init, 22.11);
 int
 rte_ml_dev_init(size_t dev_max)
 {
@@ -196,14 +196,14 @@ rte_ml_dev_init(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_count, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_count, 22.11);
 uint16_t
 rte_ml_dev_count(void)
 {
 	return ml_dev_globals.nb_devs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_is_valid_dev, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_is_valid_dev, 22.11);
 int
 rte_ml_dev_is_valid_dev(int16_t dev_id)
 {
@@ -219,7 +219,7 @@ rte_ml_dev_is_valid_dev(int16_t dev_id)
 		return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_socket_id, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_socket_id, 22.11);
 int
 rte_ml_dev_socket_id(int16_t dev_id)
 {
@@ -235,7 +235,7 @@ rte_ml_dev_socket_id(int16_t dev_id)
 	return dev->data->socket_id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_info_get, 22.11);
 int
 rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info)
 {
@@ -259,7 +259,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info)
 	return dev->dev_ops->dev_info_get(dev, dev_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_configure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_configure, 22.11);
 int
 rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config)
 {
@@ -299,7 +299,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_close, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_close, 22.11);
 int
 rte_ml_dev_close(int16_t dev_id)
 {
@@ -323,7 +323,7 @@ rte_ml_dev_close(int16_t dev_id)
 	return dev->dev_ops->dev_close(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_start, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_start, 22.11);
 int
 rte_ml_dev_start(int16_t dev_id)
 {
@@ -351,7 +351,7 @@ rte_ml_dev_start(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stop, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stop, 22.11);
 int
 rte_ml_dev_stop(int16_t dev_id)
 {
@@ -379,7 +379,7 @@ rte_ml_dev_stop(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_count, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_count, 22.11);
 uint16_t
 rte_ml_dev_queue_pair_count(int16_t dev_id)
 {
@@ -395,7 +395,7 @@ rte_ml_dev_queue_pair_count(int16_t dev_id)
 	return dev->data->nb_queue_pairs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_setup, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_setup, 22.11);
 int
 rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id,
 			    const struct rte_ml_dev_qp_conf *qp_conf, int socket_id)
@@ -429,7 +429,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->dev_queue_pair_setup(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_get, 22.11);
 int
 rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats)
 {
@@ -453,7 +453,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats)
 	return dev->dev_ops->dev_stats_get(dev, stats);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_reset, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_reset, 22.11);
 void
 rte_ml_dev_stats_reset(int16_t dev_id)
 {
@@ -471,7 +471,7 @@ rte_ml_dev_stats_reset(int16_t dev_id)
 	dev->dev_ops->dev_stats_reset(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_names_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_names_get, 22.11);
 int
 rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 			    struct rte_ml_dev_xstats_map *xstats_map, uint32_t size)
@@ -490,7 +490,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in
 	return dev->dev_ops->dev_xstats_names_get(dev, mode, model_id, xstats_map, size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_by_name_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_by_name_get, 22.11);
 int
 rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_id, uint64_t *value)
 {
@@ -518,7 +518,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i
 	return dev->dev_ops->dev_xstats_by_name_get(dev, name, stat_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_get, 22.11);
 int
 rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 		      const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids)
@@ -547,7 +547,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t
 	return dev->dev_ops->dev_xstats_get(dev, mode, model_id, stat_ids, values, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_reset, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_reset, 22.11);
 int
 rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 			const uint16_t stat_ids[], uint16_t nb_ids)
@@ -566,7 +566,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_
 	return dev->dev_ops->dev_xstats_reset(dev, mode, model_id, stat_ids, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_dump, 22.11);
 int
 rte_ml_dev_dump(int16_t dev_id, FILE *fd)
 {
@@ -589,7 +589,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd)
 	return dev->dev_ops->dev_dump(dev, fd);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_selftest, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_selftest, 22.11);
 int
 rte_ml_dev_selftest(int16_t dev_id)
 {
@@ -607,7 +607,7 @@ rte_ml_dev_selftest(int16_t dev_id)
 	return dev->dev_ops->dev_selftest(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_load, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_load, 22.11);
 int
 rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t *model_id)
 {
@@ -635,7 +635,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t *
 	return dev->dev_ops->model_load(dev, params, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_unload, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_unload, 22.11);
 int
 rte_ml_model_unload(int16_t dev_id, uint16_t model_id)
 {
@@ -653,7 +653,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_unload(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_start, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_start, 22.11);
 int
 rte_ml_model_start(int16_t dev_id, uint16_t model_id)
 {
@@ -671,7 +671,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_start(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_stop, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_stop, 22.11);
 int
 rte_ml_model_stop(int16_t dev_id, uint16_t model_id)
 {
@@ -689,7 +689,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_stop(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_info_get, 22.11);
 int
 rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_info *model_info)
 {
@@ -713,7 +713,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf
 	return dev->dev_ops->model_info_get(dev, model_id, model_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_params_update, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_params_update, 22.11);
 int
 rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer)
 {
@@ -736,7 +736,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer)
 	return dev->dev_ops->model_params_update(dev, model_id, buffer);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_quantize, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_quantize, 22.11);
 int
 rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **dbuffer,
 		   struct rte_ml_buff_seg **qbuffer)
@@ -765,7 +765,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d
 	return dev->dev_ops->io_quantize(dev, model_id, dbuffer, qbuffer);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_dequantize, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_dequantize, 22.11);
 int
 rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **qbuffer,
 		     struct rte_ml_buff_seg **dbuffer)
@@ -806,7 +806,7 @@ ml_op_init(struct rte_mempool *mempool, __rte_unused void *opaque_arg, void *_op
 	op->mempool = mempool;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_create, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_create, 22.11);
 struct rte_mempool *
 rte_ml_op_pool_create(const char *name, unsigned int nb_elts, unsigned int cache_size,
 		      uint16_t user_size, int socket_id)
@@ -846,14 +846,14 @@ rte_ml_op_pool_create(const char *name, unsigned int nb_elts, unsigned int cache
 	return mp;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_free, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_free, 22.11);
 void
 rte_ml_op_pool_free(struct rte_mempool *mempool)
 {
 	rte_mempool_free(mempool);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_enqueue_burst, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_enqueue_burst, 22.11);
 uint16_t
 rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
 {
@@ -890,7 +890,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin
 	return dev->enqueue_burst(dev, qp_id, ops, nb_ops);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dequeue_burst, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dequeue_burst, 22.11);
 uint16_t
 rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
 {
@@ -927,7 +927,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin
 	return dev->dequeue_burst(dev, qp_id, ops, nb_ops);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_error_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_error_get, 22.11);
 int
 rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error *error)
 {
@@ -959,5 +959,5 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error
 	return dev->op_error_get(dev, op, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_logtype, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_logtype, 22.11);
 RTE_LOG_REGISTER_DEFAULT(rte_ml_dev_logtype, INFO);
diff --git a/lib/mldev/rte_mldev_pmd.c b/lib/mldev/rte_mldev_pmd.c
index 434360f2d3..53129a05d7 100644
--- a/lib/mldev/rte_mldev_pmd.c
+++ b/lib/mldev/rte_mldev_pmd.c
@@ -9,7 +9,7 @@
 
 #include "rte_mldev_pmd.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_create);
 struct rte_ml_dev *
 rte_ml_dev_pmd_create(const char *name, struct rte_device *device,
 		      struct rte_ml_dev_pmd_init_params *params)
@@ -44,7 +44,7 @@ rte_ml_dev_pmd_create(const char *name, struct rte_device *device,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_destroy);
 int
 rte_ml_dev_pmd_destroy(struct rte_ml_dev *dev)
 {
diff --git a/lib/net/rte_arp.c b/lib/net/rte_arp.c
index 3f8c69f69d..e2d78217e5 100644
--- a/lib/net/rte_arp.c
+++ b/lib/net/rte_arp.c
@@ -6,7 +6,7 @@
 #include <rte_arp.h>
 
 #define RARP_PKT_SIZE	64
-RTE_EXPORT_SYMBOL(rte_net_make_rarp_packet)
+RTE_EXPORT_SYMBOL(rte_net_make_rarp_packet);
 struct rte_mbuf *
 rte_net_make_rarp_packet(struct rte_mempool *mpool,
 		const struct rte_ether_addr *mac)
diff --git a/lib/net/rte_ether.c b/lib/net/rte_ether.c
index 6703145fc5..68369edd3d 100644
--- a/lib/net/rte_ether.c
+++ b/lib/net/rte_ether.c
@@ -8,7 +8,7 @@
 #include <rte_ether.h>
 #include <rte_errno.h>
 
-RTE_EXPORT_SYMBOL(rte_eth_random_addr)
+RTE_EXPORT_SYMBOL(rte_eth_random_addr);
 void
 rte_eth_random_addr(uint8_t *addr)
 {
@@ -20,7 +20,7 @@ rte_eth_random_addr(uint8_t *addr)
 	addr[0] |= RTE_ETHER_LOCAL_ADMIN_ADDR;	/* set local assignment bit */
 }
 
-RTE_EXPORT_SYMBOL(rte_ether_format_addr)
+RTE_EXPORT_SYMBOL(rte_ether_format_addr);
 void
 rte_ether_format_addr(char *buf, uint16_t size,
 		      const struct rte_ether_addr *eth_addr)
@@ -133,7 +133,7 @@ static unsigned int get_ether_sep(const char *s, char *sep)
  *  - Windows format six groups separated by hyphen
  *  - two groups hexadecimal digits
  */
-RTE_EXPORT_SYMBOL(rte_ether_unformat_addr)
+RTE_EXPORT_SYMBOL(rte_ether_unformat_addr);
 int
 rte_ether_unformat_addr(const char *s, struct rte_ether_addr *ea)
 {
diff --git a/lib/net/rte_net.c b/lib/net/rte_net.c
index 44fb6c0f51..a328d1f3cf 100644
--- a/lib/net/rte_net.c
+++ b/lib/net/rte_net.c
@@ -274,7 +274,7 @@ ptype_tunnel_with_udp(uint16_t *proto, const struct rte_mbuf *m,
 }
 
 /* parse ipv6 extended headers, update offset and return next proto */
-RTE_EXPORT_SYMBOL(rte_net_skip_ip6_ext)
+RTE_EXPORT_SYMBOL(rte_net_skip_ip6_ext);
 int
 rte_net_skip_ip6_ext(uint16_t proto, const struct rte_mbuf *m, uint32_t *off,
 	int *frag)
@@ -321,7 +321,7 @@ rte_net_skip_ip6_ext(uint16_t proto, const struct rte_mbuf *m, uint32_t *off,
 }
 
 /* parse mbuf data to get packet type */
-RTE_EXPORT_SYMBOL(rte_net_get_ptype)
+RTE_EXPORT_SYMBOL(rte_net_get_ptype);
 uint32_t rte_net_get_ptype(const struct rte_mbuf *m,
 	struct rte_net_hdr_lens *hdr_lens, uint32_t layers)
 {
diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c
index 3a589bdd6d..c21955d2d5 100644
--- a/lib/net/rte_net_crc.c
+++ b/lib/net/rte_net_crc.c
@@ -216,7 +216,7 @@ handlers_init(enum rte_net_crc_alg alg)
 
 /* Public API */
 
-RTE_EXPORT_SYMBOL(rte_net_crc_set_alg)
+RTE_EXPORT_SYMBOL(rte_net_crc_set_alg);
 struct rte_net_crc *rte_net_crc_set_alg(enum rte_net_crc_alg alg, enum rte_net_crc_type type)
 {
 	uint16_t max_simd_bitwidth;
@@ -256,13 +256,13 @@ struct rte_net_crc *rte_net_crc_set_alg(enum rte_net_crc_alg alg, enum rte_net_c
 	return crc;
 }
 
-RTE_EXPORT_SYMBOL(rte_net_crc_free)
+RTE_EXPORT_SYMBOL(rte_net_crc_free);
 void rte_net_crc_free(struct rte_net_crc *crc)
 {
 	rte_free(crc);
 }
 
-RTE_EXPORT_SYMBOL(rte_net_crc_calc)
+RTE_EXPORT_SYMBOL(rte_net_crc_calc);
 uint32_t rte_net_crc_calc(const struct rte_net_crc *ctx, const void *data, const uint32_t data_len)
 {
 	return handlers[ctx->alg].f[ctx->type](data, data_len);
diff --git a/lib/node/ethdev_ctrl.c b/lib/node/ethdev_ctrl.c
index f717903731..92207b74fb 100644
--- a/lib/node/ethdev_ctrl.c
+++ b/lib/node/ethdev_ctrl.c
@@ -22,7 +22,7 @@ static struct ethdev_ctrl {
 	uint16_t nb_graphs;
 } ctrl;
 
-RTE_EXPORT_SYMBOL(rte_node_eth_config)
+RTE_EXPORT_SYMBOL(rte_node_eth_config);
 int
 rte_node_eth_config(struct rte_node_ethdev_config *conf, uint16_t nb_confs,
 		    uint16_t nb_graphs)
@@ -141,7 +141,7 @@ rte_node_eth_config(struct rte_node_ethdev_config *conf, uint16_t nb_confs,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ethdev_rx_next_update, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ethdev_rx_next_update, 24.03);
 int
 rte_node_ethdev_rx_next_update(rte_node_t id, const char *edge_name)
 {
diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c
index f6db3219f0..dc6f7060b3 100644
--- a/lib/node/ip4_lookup.c
+++ b/lib/node/ip4_lookup.c
@@ -118,7 +118,7 @@ ip4_lookup_node_process_scalar(struct rte_graph *graph, struct rte_node *node,
 	return nb_objs;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_ip4_route_add)
+RTE_EXPORT_SYMBOL(rte_node_ip4_route_add);
 int
 rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop,
 		       enum rte_node_ip4_lookup_next next_node)
diff --git a/lib/node/ip4_lookup_fib.c b/lib/node/ip4_lookup_fib.c
index 0857d889fc..6b2a60dabc 100644
--- a/lib/node/ip4_lookup_fib.c
+++ b/lib/node/ip4_lookup_fib.c
@@ -193,7 +193,7 @@ ip4_lookup_fib_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_create, 25.07);
 int
 rte_node_ip4_fib_create(int socket, struct rte_fib_conf *conf)
 {
@@ -213,7 +213,7 @@ rte_node_ip4_fib_create(int socket, struct rte_fib_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_route_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_route_add, 25.07);
 int
 rte_node_ip4_fib_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop,
 			   enum rte_node_ip4_lookup_next next_node)
diff --git a/lib/node/ip4_reassembly.c b/lib/node/ip4_reassembly.c
index b61ddfd7d1..cc61eb3ada 100644
--- a/lib/node/ip4_reassembly.c
+++ b/lib/node/ip4_reassembly.c
@@ -128,7 +128,7 @@ ip4_reassembly_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return idx;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_reassembly_configure, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_reassembly_configure, 23.11);
 int
 rte_node_ip4_reassembly_configure(struct rte_node_ip4_reassembly_cfg *cfg, uint16_t cnt)
 {
diff --git a/lib/node/ip4_rewrite.c b/lib/node/ip4_rewrite.c
index 37bc3a511f..1e1eaa10b3 100644
--- a/lib/node/ip4_rewrite.c
+++ b/lib/node/ip4_rewrite.c
@@ -548,7 +548,7 @@ ip4_rewrite_set_next(uint16_t port_id, uint16_t next_index)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_ip4_rewrite_add)
+RTE_EXPORT_SYMBOL(rte_node_ip4_rewrite_add);
 int
 rte_node_ip4_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
 			 uint8_t rewrite_len, uint16_t dst_port)
diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c
index 83c0500c76..29eb2d6d12 100644
--- a/lib/node/ip6_lookup.c
+++ b/lib/node/ip6_lookup.c
@@ -258,7 +258,7 @@ ip6_lookup_node_process_scalar(struct rte_graph *graph, struct rte_node *node,
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_route_add, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_route_add, 23.07);
 int
 rte_node_ip6_route_add(const struct rte_ipv6_addr *ip, uint8_t depth, uint16_t next_hop,
 		       enum rte_node_ip6_lookup_next next_node)
diff --git a/lib/node/ip6_lookup_fib.c b/lib/node/ip6_lookup_fib.c
index 40c5c753df..2d990b6ec1 100644
--- a/lib/node/ip6_lookup_fib.c
+++ b/lib/node/ip6_lookup_fib.c
@@ -187,7 +187,7 @@ ip6_lookup_fib_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_create, 25.07);
 int
 rte_node_ip6_fib_create(int socket, struct rte_fib6_conf *conf)
 {
@@ -207,7 +207,7 @@ rte_node_ip6_fib_create(int socket, struct rte_fib6_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_route_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_route_add, 25.07);
 int
 rte_node_ip6_fib_route_add(const struct rte_ipv6_addr *ip, uint8_t depth, uint16_t next_hop,
 			   enum rte_node_ip6_lookup_next next_node)
diff --git a/lib/node/ip6_rewrite.c b/lib/node/ip6_rewrite.c
index d5488e7fa3..fd7501a803 100644
--- a/lib/node/ip6_rewrite.c
+++ b/lib/node/ip6_rewrite.c
@@ -273,7 +273,7 @@ ip6_rewrite_set_next(uint16_t port_id, uint16_t next_index)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_rewrite_add, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_rewrite_add, 23.07);
 int
 rte_node_ip6_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
 			 uint8_t rewrite_len, uint16_t dst_port)
diff --git a/lib/node/node_mbuf_dynfield.c b/lib/node/node_mbuf_dynfield.c
index 9dbc80f7e5..f632209511 100644
--- a/lib/node/node_mbuf_dynfield.c
+++ b/lib/node/node_mbuf_dynfield.c
@@ -20,7 +20,7 @@ static const struct rte_mbuf_dynfield node_mbuf_dynfield_desc = {
 	.align = alignof(rte_node_mbuf_dynfield_t),
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_mbuf_dynfield_register, 25.07);
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_mbuf_dynfield_register, 25.07);
 int rte_node_mbuf_dynfield_register(void)
 {
 	struct node_mbuf_dynfield_mz *f = NULL;
diff --git a/lib/node/udp4_input.c b/lib/node/udp4_input.c
index 5a74e28c85..c13934489c 100644
--- a/lib/node/udp4_input.c
+++ b/lib/node/udp4_input.c
@@ -56,7 +56,7 @@ static struct rte_hash_parameters udp4_params = {
 	.socket_id = 0,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_dst_port_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_dst_port_add, 23.11);
 int
 rte_node_udp4_dst_port_add(uint32_t dst_port, rte_edge_t next_node)
 {
@@ -78,7 +78,7 @@ rte_node_udp4_dst_port_add(uint32_t dst_port, rte_edge_t next_node)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_usr_node_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_usr_node_add, 23.11);
 int
 rte_node_udp4_usr_node_add(const char *usr_node)
 {
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 2a07b4c1f5..0df40185d2 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -200,7 +200,7 @@ pcapng_section_block(rte_pcapng_t *self,
 }
 
 /* Write an interface block for a DPDK port */
-RTE_EXPORT_SYMBOL(rte_pcapng_add_interface)
+RTE_EXPORT_SYMBOL(rte_pcapng_add_interface);
 int
 rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 			 const char *ifname, const char *ifdescr,
@@ -322,7 +322,7 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 /*
  * Write an Interface statistics block at the end of capture.
  */
-RTE_EXPORT_SYMBOL(rte_pcapng_write_stats)
+RTE_EXPORT_SYMBOL(rte_pcapng_write_stats);
 ssize_t
 rte_pcapng_write_stats(rte_pcapng_t *self, uint16_t port_id,
 		       uint64_t ifrecv, uint64_t ifdrop,
@@ -388,7 +388,7 @@ rte_pcapng_write_stats(rte_pcapng_t *self, uint16_t port_id,
 	return write(self->outfd, buf, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_pcapng_mbuf_size)
+RTE_EXPORT_SYMBOL(rte_pcapng_mbuf_size);
 uint32_t
 rte_pcapng_mbuf_size(uint32_t length)
 {
@@ -470,7 +470,7 @@ pcapng_vlan_insert(struct rte_mbuf *m, uint16_t ether_type, uint16_t tci)
  */
 
 /* Make a copy of original mbuf with pcapng header and options */
-RTE_EXPORT_SYMBOL(rte_pcapng_copy)
+RTE_EXPORT_SYMBOL(rte_pcapng_copy);
 struct rte_mbuf *
 rte_pcapng_copy(uint16_t port_id, uint32_t queue,
 		const struct rte_mbuf *md,
@@ -612,7 +612,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
 }
 
 /* Write pre-formatted packets to file. */
-RTE_EXPORT_SYMBOL(rte_pcapng_write_packets)
+RTE_EXPORT_SYMBOL(rte_pcapng_write_packets);
 ssize_t
 rte_pcapng_write_packets(rte_pcapng_t *self,
 			 struct rte_mbuf *pkts[], uint16_t nb_pkts)
@@ -682,7 +682,7 @@ rte_pcapng_write_packets(rte_pcapng_t *self,
 }
 
 /* Create new pcapng writer handle */
-RTE_EXPORT_SYMBOL(rte_pcapng_fdopen)
+RTE_EXPORT_SYMBOL(rte_pcapng_fdopen);
 rte_pcapng_t *
 rte_pcapng_fdopen(int fd,
 		  const char *osname, const char *hardware,
@@ -720,7 +720,7 @@ rte_pcapng_fdopen(int fd,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pcapng_close)
+RTE_EXPORT_SYMBOL(rte_pcapng_close);
 void
 rte_pcapng_close(rte_pcapng_t *self)
 {
diff --git a/lib/pci/rte_pci.c b/lib/pci/rte_pci.c
index e2f89a7f21..1bbdce250c 100644
--- a/lib/pci/rte_pci.c
+++ b/lib/pci/rte_pci.c
@@ -93,7 +93,7 @@ pci_dbdf_parse(const char *input, struct rte_pci_addr *dev_addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_device_name)
+RTE_EXPORT_SYMBOL(rte_pci_device_name);
 void
 rte_pci_device_name(const struct rte_pci_addr *addr,
 		char *output, size_t size)
@@ -104,7 +104,7 @@ rte_pci_device_name(const struct rte_pci_addr *addr,
 			    addr->devid, addr->function) >= 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_addr_cmp)
+RTE_EXPORT_SYMBOL(rte_pci_addr_cmp);
 int
 rte_pci_addr_cmp(const struct rte_pci_addr *addr,
 	     const struct rte_pci_addr *addr2)
@@ -127,7 +127,7 @@ rte_pci_addr_cmp(const struct rte_pci_addr *addr,
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_addr_parse)
+RTE_EXPORT_SYMBOL(rte_pci_addr_parse);
 int
 rte_pci_addr_parse(const char *str, struct rte_pci_addr *addr)
 {
diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c
index d21df8ab43..614b3f65c7 100644
--- a/lib/pdcp/rte_pdcp.c
+++ b/lib/pdcp/rte_pdcp.c
@@ -98,7 +98,7 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_establish, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_establish, 23.07);
 struct rte_pdcp_entity *
 rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf)
 {
@@ -199,7 +199,7 @@ pdcp_dl_release(struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[])
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_release, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_release, 23.07);
 int
 rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[])
 {
@@ -222,7 +222,7 @@ rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *ou
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_suspend, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_suspend, 23.07);
 int
 rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity,
 			struct rte_mbuf *out_mb[])
@@ -250,7 +250,7 @@ rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity,
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_control_pdu_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_control_pdu_create, 23.07);
 struct rte_mbuf *
 rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity,
 			    enum rte_pdcp_ctrl_pdu_type type)
@@ -291,7 +291,7 @@ rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity,
 	return m;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_t_reordering_expiry_handle, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_t_reordering_expiry_handle, 23.07);
 uint16_t
 rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[])
 {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index ba75b828f2..5559d7f7b9 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -418,7 +418,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_init)
+RTE_EXPORT_SYMBOL(rte_pdump_init);
 int
 rte_pdump_init(void)
 {
@@ -441,7 +441,7 @@ rte_pdump_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_uninit)
+RTE_EXPORT_SYMBOL(rte_pdump_uninit);
 int
 rte_pdump_uninit(void)
 {
@@ -612,7 +612,7 @@ pdump_enable(uint16_t port, uint16_t queue,
 					    ENABLE, ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable)
+RTE_EXPORT_SYMBOL(rte_pdump_enable);
 int
 rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		 struct rte_ring *ring,
@@ -623,7 +623,7 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 			    ring, mp, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf);
 int
 rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
 		     uint32_t flags, uint32_t snaplen,
@@ -658,7 +658,7 @@ pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
 					    ENABLE, ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_by_deviceid);
 int
 rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 			     uint32_t flags,
@@ -670,7 +670,7 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 					ring, mp, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf_by_deviceid);
 int
 rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
 				 uint32_t flags, uint32_t snaplen,
@@ -682,7 +682,7 @@ rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
 					ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_disable)
+RTE_EXPORT_SYMBOL(rte_pdump_disable);
 int
 rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 {
@@ -702,7 +702,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_disable_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_disable_by_deviceid);
 int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags)
@@ -739,7 +739,7 @@ pdump_sum_stats(uint16_t port, uint16_t nq,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_stats)
+RTE_EXPORT_SYMBOL(rte_pdump_stats);
 int
 rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
 {
diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c
index fa3c8b77ee..a77efa47d4 100644
--- a/lib/pipeline/rte_pipeline.c
+++ b/lib/pipeline/rte_pipeline.c
@@ -190,7 +190,7 @@ rte_pipeline_check_params(struct rte_pipeline_params *params)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_create);
 struct rte_pipeline *
 rte_pipeline_create(struct rte_pipeline_params *params)
 {
@@ -233,7 +233,7 @@ rte_pipeline_create(struct rte_pipeline_params *params)
 	return p;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_free)
+RTE_EXPORT_SYMBOL(rte_pipeline_free);
 int
 rte_pipeline_free(struct rte_pipeline *p)
 {
@@ -327,7 +327,7 @@ rte_table_check_params(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_create);
 int
 rte_pipeline_table_create(struct rte_pipeline *p,
 		struct rte_pipeline_table_params *params,
@@ -399,7 +399,7 @@ rte_pipeline_table_free(struct rte_table *table)
 	rte_free(table->default_entry);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_add)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_add);
 int
 rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
 	uint32_t table_id,
@@ -450,7 +450,7 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_delete)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_delete);
 int
 rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -484,7 +484,7 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add);
 int
 rte_pipeline_table_entry_add(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -546,7 +546,7 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p,
 		key_found, (void **) entry_ptr);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete);
 int
 rte_pipeline_table_entry_delete(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -586,7 +586,7 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p,
 	return (table->ops.f_delete)(table->h_table, key, key_found, entry);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add_bulk)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add_bulk);
 int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
 	uint32_t table_id,
 	void **keys,
@@ -653,7 +653,7 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
 		n_keys, key_found, (void **) entries_ptr);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete_bulk)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete_bulk);
 int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p,
 	uint32_t table_id,
 	void **keys,
@@ -811,7 +811,7 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_create);
 int
 rte_pipeline_port_in_create(struct rte_pipeline *p,
 		struct rte_pipeline_port_in_params *params,
@@ -862,7 +862,7 @@ rte_pipeline_port_in_free(struct rte_port_in *port)
 		port->ops.f_free(port->h_port);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_create);
 int
 rte_pipeline_port_out_create(struct rte_pipeline *p,
 		struct rte_pipeline_port_out_params *params,
@@ -910,7 +910,7 @@ rte_pipeline_port_out_free(struct rte_port_out *port)
 		port->ops.f_free(port->h_port);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_connect_to_table)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_connect_to_table);
 int
 rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
 		uint32_t port_id,
@@ -945,7 +945,7 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_enable)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_enable);
 int
 rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
 {
@@ -993,7 +993,7 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_disable)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_disable);
 int
 rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
 {
@@ -1049,7 +1049,7 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
 /*
  * Pipeline run-time
  */
-RTE_EXPORT_SYMBOL(rte_pipeline_check)
+RTE_EXPORT_SYMBOL(rte_pipeline_check);
 int
 rte_pipeline_check(struct rte_pipeline *p)
 {
@@ -1323,7 +1323,7 @@ rte_pipeline_action_handler_drop(struct rte_pipeline *p, uint64_t pkts_mask)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_run)
+RTE_EXPORT_SYMBOL(rte_pipeline_run);
 int
 rte_pipeline_run(struct rte_pipeline *p)
 {
@@ -1463,7 +1463,7 @@ rte_pipeline_run(struct rte_pipeline *p)
 	return (int) n_pkts;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_flush)
+RTE_EXPORT_SYMBOL(rte_pipeline_flush);
 int
 rte_pipeline_flush(struct rte_pipeline *p)
 {
@@ -1486,7 +1486,7 @@ rte_pipeline_flush(struct rte_pipeline *p)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_packet_insert)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_packet_insert);
 int
 rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
 	uint32_t port_id, struct rte_mbuf *pkt)
@@ -1498,7 +1498,7 @@ rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_hijack)
+RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_hijack);
 int rte_pipeline_ah_packet_hijack(struct rte_pipeline *p,
 	uint64_t pkts_mask)
 {
@@ -1508,7 +1508,7 @@ int rte_pipeline_ah_packet_hijack(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_drop)
+RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_drop);
 int rte_pipeline_ah_packet_drop(struct rte_pipeline *p,
 	uint64_t pkts_mask)
 {
@@ -1520,7 +1520,7 @@ int rte_pipeline_ah_packet_drop(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_stats_read);
 int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	struct rte_pipeline_port_in_stats *stats, int clear)
 {
@@ -1558,7 +1558,7 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_stats_read);
 int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	struct rte_pipeline_port_out_stats *stats, int clear)
 {
@@ -1593,7 +1593,7 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_stats_read);
 int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id,
 	struct rte_pipeline_table_stats *stats, int clear)
 {
diff --git a/lib/pipeline/rte_port_in_action.c b/lib/pipeline/rte_port_in_action.c
index 2378e64de9..e52b0f24d1 100644
--- a/lib/pipeline/rte_port_in_action.c
+++ b/lib/pipeline/rte_port_in_action.c
@@ -201,7 +201,7 @@ struct rte_port_in_action_profile {
 	int frozen;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_create, 18.05);
 struct rte_port_in_action_profile *
 rte_port_in_action_profile_create(uint32_t socket_id)
 {
@@ -218,7 +218,7 @@ rte_port_in_action_profile_create(uint32_t socket_id)
 	return ap;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_action_register, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_action_register, 18.05);
 int
 rte_port_in_action_profile_action_register(struct rte_port_in_action_profile *profile,
 	enum rte_port_in_action_type type,
@@ -258,7 +258,7 @@ rte_port_in_action_profile_action_register(struct rte_port_in_action_profile *pr
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_freeze, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_freeze, 18.05);
 int
 rte_port_in_action_profile_freeze(struct rte_port_in_action_profile *profile)
 {
@@ -271,7 +271,7 @@ rte_port_in_action_profile_freeze(struct rte_port_in_action_profile *profile)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_free, 18.05);
 int
 rte_port_in_action_profile_free(struct rte_port_in_action_profile *profile)
 {
@@ -320,7 +320,7 @@ action_data_init(struct rte_port_in_action *action,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_create, 18.05);
 struct rte_port_in_action *
 rte_port_in_action_create(struct rte_port_in_action_profile *profile,
 	uint32_t socket_id)
@@ -357,7 +357,7 @@ rte_port_in_action_create(struct rte_port_in_action_profile *profile,
 	return action;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_apply, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_apply, 18.05);
 int
 rte_port_in_action_apply(struct rte_port_in_action *action,
 	enum rte_port_in_action_type type,
@@ -505,7 +505,7 @@ ah_selector(struct rte_port_in_action *action)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_params_get, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_params_get, 18.05);
 int
 rte_port_in_action_params_get(struct rte_port_in_action *action,
 	struct rte_pipeline_port_in_params *params)
@@ -526,7 +526,7 @@ rte_port_in_action_params_get(struct rte_port_in_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_free, 18.05);
 int
 rte_port_in_action_free(struct rte_port_in_action *action)
 {
diff --git a/lib/pipeline/rte_swx_ctl.c b/lib/pipeline/rte_swx_ctl.c
index 4e9bb842a1..ea969e61a9 100644
--- a/lib/pipeline/rte_swx_ctl.c
+++ b/lib/pipeline/rte_swx_ctl.c
@@ -1171,7 +1171,7 @@ static struct rte_tailq_elem rte_swx_ctl_pipeline_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_ctl_pipeline_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_find, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_find, 22.11);
 struct rte_swx_ctl_pipeline *
 rte_swx_ctl_pipeline_find(const char *name)
 {
@@ -1251,7 +1251,7 @@ ctl_unregister(struct rte_swx_ctl_pipeline *ctl)
 	rte_mcfg_tailq_write_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_free, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_free, 20.11);
 void
 rte_swx_ctl_pipeline_free(struct rte_swx_ctl_pipeline *ctl)
 {
@@ -1274,7 +1274,7 @@ rte_swx_ctl_pipeline_free(struct rte_swx_ctl_pipeline *ctl)
 	free(ctl);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_create, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_create, 20.11);
 struct rte_swx_ctl_pipeline *
 rte_swx_ctl_pipeline_create(struct rte_swx_pipeline *p)
 {
@@ -1553,7 +1553,7 @@ rte_swx_ctl_pipeline_create(struct rte_swx_pipeline *p)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_add, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_add, 20.11);
 int
 rte_swx_ctl_pipeline_table_entry_add(struct rte_swx_ctl_pipeline *ctl,
 				     const char *table_name,
@@ -1668,7 +1668,7 @@ rte_swx_ctl_pipeline_table_entry_add(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_delete, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_delete, 20.11);
 int
 rte_swx_ctl_pipeline_table_entry_delete(struct rte_swx_ctl_pipeline *ctl,
 					const char *table_name,
@@ -1759,7 +1759,7 @@ rte_swx_ctl_pipeline_table_entry_delete(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_default_entry_add, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_default_entry_add, 20.11);
 int
 rte_swx_ctl_pipeline_table_default_entry_add(struct rte_swx_ctl_pipeline *ctl,
 					     const char *table_name,
@@ -2097,7 +2097,7 @@ table_abort(struct rte_swx_ctl_pipeline *ctl, uint32_t table_id)
 	table_pending_default_free(table);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_add, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_add, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_add(struct rte_swx_ctl_pipeline *ctl,
 					const char *selector_name,
@@ -2125,7 +2125,7 @@ rte_swx_ctl_pipeline_selector_group_add(struct rte_swx_ctl_pipeline *ctl,
 	return -ENOSPC;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_delete, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_delete, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_delete(struct rte_swx_ctl_pipeline *ctl,
 					   const char *selector_name,
@@ -2177,7 +2177,7 @@ rte_swx_ctl_pipeline_selector_group_delete(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_add, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_add, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_member_add(struct rte_swx_ctl_pipeline *ctl,
 					       const char *selector_name,
@@ -2237,7 +2237,7 @@ rte_swx_ctl_pipeline_selector_group_member_add(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_delete, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_delete, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_member_delete(struct rte_swx_ctl_pipeline *ctl,
 						  const char *selector_name,
@@ -2491,7 +2491,7 @@ learner_default_entry_duplicate(struct rte_swx_ctl_pipeline *ctl,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_add, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_add, 21.11);
 int
 rte_swx_ctl_pipeline_learner_default_entry_add(struct rte_swx_ctl_pipeline *ctl,
 					       const char *learner_name,
@@ -2565,7 +2565,7 @@ learner_abort(struct rte_swx_ctl_pipeline *ctl, uint32_t learner_id)
 	learner_pending_default_free(l);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_commit, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_commit, 20.11);
 int
 rte_swx_ctl_pipeline_commit(struct rte_swx_ctl_pipeline *ctl, int abort_on_fail)
 {
@@ -2652,7 +2652,7 @@ rte_swx_ctl_pipeline_commit(struct rte_swx_ctl_pipeline *ctl, int abort_on_fail)
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_abort, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_abort, 20.11);
 void
 rte_swx_ctl_pipeline_abort(struct rte_swx_ctl_pipeline *ctl)
 {
@@ -2987,7 +2987,7 @@ token_is_comment(const char *token)
 
 #define RTE_SWX_CTL_ENTRY_TOKENS_MAX 256
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_read, 20.11);
 struct rte_swx_table_entry *
 rte_swx_ctl_pipeline_table_entry_read(struct rte_swx_ctl_pipeline *ctl,
 				      const char *table_name,
@@ -3187,7 +3187,7 @@ rte_swx_ctl_pipeline_table_entry_read(struct rte_swx_ctl_pipeline *ctl,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_read, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_read, 21.11);
 struct rte_swx_table_entry *
 rte_swx_ctl_pipeline_learner_default_entry_read(struct rte_swx_ctl_pipeline *ctl,
 						const char *learner_name,
@@ -3340,7 +3340,7 @@ table_entry_printf(FILE *f,
 	fprintf(f, "\n");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_fprintf, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_fprintf, 20.11);
 int
 rte_swx_ctl_pipeline_table_fprintf(FILE *f,
 				   struct rte_swx_ctl_pipeline *ctl,
@@ -3391,7 +3391,7 @@ rte_swx_ctl_pipeline_table_fprintf(FILE *f,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_fprintf, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_fprintf, 21.08);
 int
 rte_swx_ctl_pipeline_selector_fprintf(FILE *f,
 				      struct rte_swx_ctl_pipeline *ctl,
diff --git a/lib/pipeline/rte_swx_ipsec.c b/lib/pipeline/rte_swx_ipsec.c
index 553056fad2..2b7d767105 100644
--- a/lib/pipeline/rte_swx_ipsec.c
+++ b/lib/pipeline/rte_swx_ipsec.c
@@ -178,7 +178,7 @@ static struct rte_tailq_elem rte_swx_ipsec_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_ipsec_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_find, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_find, 23.03);
 struct rte_swx_ipsec *
 rte_swx_ipsec_find(const char *name)
 {
@@ -263,7 +263,7 @@ ipsec_unregister(struct rte_swx_ipsec *ipsec)
 static void
 ipsec_session_free(struct rte_swx_ipsec *ipsec, struct rte_ipsec_session *s);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_free, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_free, 23.03);
 void
 rte_swx_ipsec_free(struct rte_swx_ipsec *ipsec)
 {
@@ -294,7 +294,7 @@ rte_swx_ipsec_free(struct rte_swx_ipsec *ipsec)
 	env_free(ipsec, ipsec->total_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_create, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_create, 23.03);
 int
 rte_swx_ipsec_create(struct rte_swx_ipsec **ipsec_out,
 		     const char *name,
@@ -722,7 +722,7 @@ rte_swx_ipsec_post_crypto(struct rte_swx_ipsec *ipsec)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_run, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_run, 23.03);
 void
 rte_swx_ipsec_run(struct rte_swx_ipsec *ipsec)
 {
@@ -1134,7 +1134,7 @@ do {                                   \
 	}                              \
 } while (0)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_read, 23.03);
 struct rte_swx_ipsec_sa_params *
 rte_swx_ipsec_sa_read(struct rte_swx_ipsec *ipsec __rte_unused,
 		      const char *string,
@@ -1768,7 +1768,7 @@ ipsec_session_free(struct rte_swx_ipsec *ipsec,
 	memset(s, 0, sizeof(*s));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_add, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_add, 23.03);
 int
 rte_swx_ipsec_sa_add(struct rte_swx_ipsec *ipsec,
 		     struct rte_swx_ipsec_sa_params *sa_params,
@@ -1808,7 +1808,7 @@ rte_swx_ipsec_sa_add(struct rte_swx_ipsec *ipsec,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_delete, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_delete, 23.03);
 void
 rte_swx_ipsec_sa_delete(struct rte_swx_ipsec *ipsec,
 			uint32_t sa_id)
diff --git a/lib/pipeline/rte_swx_pipeline.c b/lib/pipeline/rte_swx_pipeline.c
index 2193bc4ebf..d2d8730d2e 100644
--- a/lib/pipeline/rte_swx_pipeline.c
+++ b/lib/pipeline/rte_swx_pipeline.c
@@ -122,7 +122,7 @@ struct_type_field_find(struct struct_type *st, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_struct_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_struct_type_register, 20.11);
 int
 rte_swx_pipeline_struct_type_register(struct rte_swx_pipeline *p,
 				      const char *name,
@@ -254,7 +254,7 @@ port_in_type_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_type_register, 20.11);
 int
 rte_swx_pipeline_port_in_type_register(struct rte_swx_pipeline *p,
 				       const char *name,
@@ -298,7 +298,7 @@ port_in_find(struct rte_swx_pipeline *p, uint32_t port_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_config, 20.11);
 int
 rte_swx_pipeline_port_in_config(struct rte_swx_pipeline *p,
 				uint32_t port_id,
@@ -417,7 +417,7 @@ port_out_type_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_type_register, 20.11);
 int
 rte_swx_pipeline_port_out_type_register(struct rte_swx_pipeline *p,
 					const char *name,
@@ -463,7 +463,7 @@ port_out_find(struct rte_swx_pipeline *p, uint32_t port_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_config, 20.11);
 int
 rte_swx_pipeline_port_out_config(struct rte_swx_pipeline *p,
 				 uint32_t port_id,
@@ -570,7 +570,7 @@ port_out_free(struct rte_swx_pipeline *p)
 /*
  * Packet mirroring.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_mirroring_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_mirroring_config, 20.11);
 int
 rte_swx_pipeline_mirroring_config(struct rte_swx_pipeline *p,
 				  struct rte_swx_pipeline_mirroring_params *params)
@@ -767,7 +767,7 @@ extern_obj_mailbox_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_register, 20.11);
 int
 rte_swx_pipeline_extern_type_register(struct rte_swx_pipeline *p,
 	const char *name,
@@ -808,7 +808,7 @@ rte_swx_pipeline_extern_type_register(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_member_func_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_member_func_register, 20.11);
 int
 rte_swx_pipeline_extern_type_member_func_register(struct rte_swx_pipeline *p,
 	const char *extern_type_name,
@@ -846,7 +846,7 @@ rte_swx_pipeline_extern_type_member_func_register(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_object_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_object_config, 20.11);
 int
 rte_swx_pipeline_extern_object_config(struct rte_swx_pipeline *p,
 				      const char *extern_type_name,
@@ -1063,7 +1063,7 @@ extern_func_mailbox_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_func_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_func_register, 20.11);
 int
 rte_swx_pipeline_extern_func_register(struct rte_swx_pipeline *p,
 				      const char *name,
@@ -1192,7 +1192,7 @@ hash_func_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_hash_func_register, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_hash_func_register, 22.07);
 int
 rte_swx_pipeline_hash_func_register(struct rte_swx_pipeline *p,
 				    const char *name,
@@ -1293,7 +1293,7 @@ rss_find_by_id(struct rte_swx_pipeline *p, uint32_t rss_obj_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_rss_config, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_rss_config, 23.03);
 int
 rte_swx_pipeline_rss_config(struct rte_swx_pipeline *p, const char *name)
 {
@@ -1471,7 +1471,7 @@ header_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_header_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_header_register, 20.11);
 int
 rte_swx_pipeline_packet_header_register(struct rte_swx_pipeline *p,
 					const char *name,
@@ -1610,7 +1610,7 @@ metadata_field_parse(struct rte_swx_pipeline *p, const char *name)
 	return struct_type_field_find(p->metadata_st, &name[2]);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_metadata_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_metadata_register, 20.11);
 int
 rte_swx_pipeline_packet_metadata_register(struct rte_swx_pipeline *p,
 					  const char *struct_type_name)
@@ -7870,7 +7870,7 @@ action_does_learning(struct action *a)
 	return 0; /* FALSE */
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_action_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_action_config, 20.11);
 int
 rte_swx_pipeline_action_config(struct rte_swx_pipeline *p,
 			       const char *name,
@@ -8235,7 +8235,7 @@ table_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_type_register, 20.11);
 int
 rte_swx_pipeline_table_type_register(struct rte_swx_pipeline *p,
 				     const char *name,
@@ -8405,7 +8405,7 @@ table_match_fields_check(struct rte_swx_pipeline *p,
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_config, 20.11);
 int
 rte_swx_pipeline_table_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -8909,7 +8909,7 @@ selector_fields_check(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_selector_config, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_selector_config, 21.08);
 int
 rte_swx_pipeline_selector_config(struct rte_swx_pipeline *p,
 				 const char *name,
@@ -9382,7 +9382,7 @@ learner_action_learning_check(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_learner_config, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_learner_config, 21.11);
 int
 rte_swx_pipeline_learner_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -9956,7 +9956,7 @@ regarray_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_regarray_config, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_regarray_config, 21.05);
 int
 rte_swx_pipeline_regarray_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -10095,7 +10095,7 @@ metarray_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_metarray_config, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_metarray_config, 21.05);
 int
 rte_swx_pipeline_metarray_config(struct rte_swx_pipeline *p,
 				 const char *name,
@@ -10246,7 +10246,7 @@ static struct rte_tailq_elem rte_swx_pipeline_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_pipeline_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_find, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_find, 22.11);
 struct rte_swx_pipeline *
 rte_swx_pipeline_find(const char *name)
 {
@@ -10326,7 +10326,7 @@ pipeline_unregister(struct rte_swx_pipeline *p)
 	rte_mcfg_tailq_write_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_free, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_free, 20.11);
 void
 rte_swx_pipeline_free(struct rte_swx_pipeline *p)
 {
@@ -10472,7 +10472,7 @@ hash_funcs_register(struct rte_swx_pipeline *p)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_config, 20.11);
 int
 rte_swx_pipeline_config(struct rte_swx_pipeline **p, const char *name, int numa_node)
 {
@@ -10549,7 +10549,7 @@ rte_swx_pipeline_config(struct rte_swx_pipeline **p, const char *name, int numa_
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_instructions_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_instructions_config, 20.11);
 int
 rte_swx_pipeline_instructions_config(struct rte_swx_pipeline *p,
 				     const char **instructions,
@@ -10572,7 +10572,7 @@ rte_swx_pipeline_instructions_config(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build, 20.11);
 int
 rte_swx_pipeline_build(struct rte_swx_pipeline *p)
 {
@@ -10691,7 +10691,7 @@ rte_swx_pipeline_build(struct rte_swx_pipeline *p)
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_run, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_run, 20.11);
 void
 rte_swx_pipeline_run(struct rte_swx_pipeline *p, uint32_t n_instructions)
 {
@@ -10701,7 +10701,7 @@ rte_swx_pipeline_run(struct rte_swx_pipeline *p, uint32_t n_instructions)
 		instr_exec(p);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_flush, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_flush, 20.11);
 void
 rte_swx_pipeline_flush(struct rte_swx_pipeline *p)
 {
@@ -10718,7 +10718,7 @@ rte_swx_pipeline_flush(struct rte_swx_pipeline *p)
 /*
  * Control.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_info_get, 20.11);
 int
 rte_swx_ctl_pipeline_info_get(struct rte_swx_pipeline *p,
 			      struct rte_swx_ctl_pipeline_info *pipeline)
@@ -10752,7 +10752,7 @@ rte_swx_ctl_pipeline_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_numa_node_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_numa_node_get, 20.11);
 int
 rte_swx_ctl_pipeline_numa_node_get(struct rte_swx_pipeline *p, int *numa_node)
 {
@@ -10763,7 +10763,7 @@ rte_swx_ctl_pipeline_numa_node_get(struct rte_swx_pipeline *p, int *numa_node)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_info_get, 20.11);
 int
 rte_swx_ctl_action_info_get(struct rte_swx_pipeline *p,
 			    uint32_t action_id,
@@ -10783,7 +10783,7 @@ rte_swx_ctl_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_arg_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_arg_info_get, 20.11);
 int
 rte_swx_ctl_action_arg_info_get(struct rte_swx_pipeline *p,
 				uint32_t action_id,
@@ -10808,7 +10808,7 @@ rte_swx_ctl_action_arg_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_info_get, 20.11);
 int
 rte_swx_ctl_table_info_get(struct rte_swx_pipeline *p,
 			   uint32_t table_id,
@@ -10833,7 +10833,7 @@ rte_swx_ctl_table_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_match_field_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_match_field_info_get, 20.11);
 int
 rte_swx_ctl_table_match_field_info_get(struct rte_swx_pipeline *p,
 	uint32_t table_id,
@@ -10859,7 +10859,7 @@ rte_swx_ctl_table_match_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_action_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_action_info_get, 20.11);
 int
 rte_swx_ctl_table_action_info_get(struct rte_swx_pipeline *p,
 	uint32_t table_id,
@@ -10883,7 +10883,7 @@ rte_swx_ctl_table_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_ops_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_ops_get, 20.11);
 int
 rte_swx_ctl_table_ops_get(struct rte_swx_pipeline *p,
 			  uint32_t table_id,
@@ -10910,7 +10910,7 @@ rte_swx_ctl_table_ops_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_info_get, 21.08);
 int
 rte_swx_ctl_selector_info_get(struct rte_swx_pipeline *p,
 			      uint32_t selector_id,
@@ -10934,7 +10934,7 @@ rte_swx_ctl_selector_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_group_id_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_group_id_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_group_id_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -10957,7 +10957,7 @@ rte_swx_ctl_selector_group_id_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -10983,7 +10983,7 @@ rte_swx_ctl_selector_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_member_id_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_member_id_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_member_id_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -11006,7 +11006,7 @@ rte_swx_ctl_selector_member_id_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_info_get, 21.11);
 int
 rte_swx_ctl_learner_info_get(struct rte_swx_pipeline *p,
 			     uint32_t learner_id,
@@ -11032,7 +11032,7 @@ rte_swx_ctl_learner_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_match_field_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_match_field_info_get, 21.11);
 int
 rte_swx_ctl_learner_match_field_info_get(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11058,7 +11058,7 @@ rte_swx_ctl_learner_match_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_action_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_action_info_get, 21.11);
 int
 rte_swx_ctl_learner_action_info_get(struct rte_swx_pipeline *p,
 				    uint32_t learner_id,
@@ -11085,7 +11085,7 @@ rte_swx_ctl_learner_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_get, 22.07);
 int
 rte_swx_ctl_pipeline_learner_timeout_get(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11105,7 +11105,7 @@ rte_swx_ctl_pipeline_learner_timeout_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_set, 22.07);
 int
 rte_swx_ctl_pipeline_learner_timeout_set(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11137,7 +11137,7 @@ rte_swx_ctl_pipeline_learner_timeout_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_get, 20.11);
 int
 rte_swx_pipeline_table_state_get(struct rte_swx_pipeline *p,
 				 struct rte_swx_table_state **table_state)
@@ -11149,7 +11149,7 @@ rte_swx_pipeline_table_state_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_set, 20.11);
 int
 rte_swx_pipeline_table_state_set(struct rte_swx_pipeline *p,
 				 struct rte_swx_table_state *table_state)
@@ -11161,7 +11161,7 @@ rte_swx_pipeline_table_state_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_in_stats_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_in_stats_read, 20.11);
 int
 rte_swx_ctl_pipeline_port_in_stats_read(struct rte_swx_pipeline *p,
 					uint32_t port_id,
@@ -11180,7 +11180,7 @@ rte_swx_ctl_pipeline_port_in_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_out_stats_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_out_stats_read, 20.11);
 int
 rte_swx_ctl_pipeline_port_out_stats_read(struct rte_swx_pipeline *p,
 					 uint32_t port_id,
@@ -11199,7 +11199,7 @@ rte_swx_ctl_pipeline_port_out_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_stats_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_stats_read, 21.05);
 int
 rte_swx_ctl_pipeline_table_stats_read(struct rte_swx_pipeline *p,
 				      const char *table_name,
@@ -11227,7 +11227,7 @@ rte_swx_ctl_pipeline_table_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_stats_read, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_stats_read, 21.08);
 int
 rte_swx_ctl_pipeline_selector_stats_read(struct rte_swx_pipeline *p,
 	const char *selector_name,
@@ -11247,7 +11247,7 @@ rte_swx_ctl_pipeline_selector_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_stats_read, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_stats_read, 21.11);
 int
 rte_swx_ctl_pipeline_learner_stats_read(struct rte_swx_pipeline *p,
 					const char *learner_name,
@@ -11281,7 +11281,7 @@ rte_swx_ctl_pipeline_learner_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_regarray_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_regarray_info_get, 21.05);
 int
 rte_swx_ctl_regarray_info_get(struct rte_swx_pipeline *p,
 			      uint32_t regarray_id,
@@ -11301,7 +11301,7 @@ rte_swx_ctl_regarray_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read, 21.05);
 int
 rte_swx_ctl_pipeline_regarray_read(struct rte_swx_pipeline *p,
 				   const char *regarray_name,
@@ -11323,7 +11323,7 @@ rte_swx_ctl_pipeline_regarray_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write, 21.05);
 int
 rte_swx_ctl_pipeline_regarray_write(struct rte_swx_pipeline *p,
 				   const char *regarray_name,
@@ -11345,7 +11345,7 @@ rte_swx_ctl_pipeline_regarray_write(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_metarray_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_metarray_info_get, 21.05);
 int
 rte_swx_ctl_metarray_info_get(struct rte_swx_pipeline *p,
 			      uint32_t metarray_id,
@@ -11365,7 +11365,7 @@ rte_swx_ctl_metarray_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_add, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_add, 21.05);
 int
 rte_swx_ctl_meter_profile_add(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -11398,7 +11398,7 @@ rte_swx_ctl_meter_profile_add(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_delete, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_delete, 21.05);
 int
 rte_swx_ctl_meter_profile_delete(struct rte_swx_pipeline *p,
 				 const char *name)
@@ -11419,7 +11419,7 @@ rte_swx_ctl_meter_profile_delete(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset, 21.05);
 int
 rte_swx_ctl_meter_reset(struct rte_swx_pipeline *p,
 			const char *metarray_name,
@@ -11448,7 +11448,7 @@ rte_swx_ctl_meter_reset(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set, 21.05);
 int
 rte_swx_ctl_meter_set(struct rte_swx_pipeline *p,
 		      const char *metarray_name,
@@ -11485,7 +11485,7 @@ rte_swx_ctl_meter_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read, 21.05);
 int
 rte_swx_ctl_meter_stats_read(struct rte_swx_pipeline *p,
 			     const char *metarray_name,
@@ -11514,7 +11514,7 @@ rte_swx_ctl_meter_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_mirroring_session_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_mirroring_session_set, 20.11);
 int
 rte_swx_ctl_pipeline_mirroring_session_set(struct rte_swx_pipeline *p,
 					   uint32_t session_id,
@@ -11721,7 +11721,7 @@ rte_swx_ctl_pipeline_table_entry_id_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read_with_key, 22.11);
 int
 rte_swx_ctl_pipeline_regarray_read_with_key(struct rte_swx_pipeline *p,
 					    const char *regarray_name,
@@ -11739,7 +11739,7 @@ rte_swx_ctl_pipeline_regarray_read_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_pipeline_regarray_read(p, regarray_name, entry_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write_with_key, 22.11);
 int
 rte_swx_ctl_pipeline_regarray_write_with_key(struct rte_swx_pipeline *p,
 					     const char *regarray_name,
@@ -11757,7 +11757,7 @@ rte_swx_ctl_pipeline_regarray_write_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_pipeline_regarray_write(p, regarray_name, entry_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset_with_key, 22.11);
 int
 rte_swx_ctl_meter_reset_with_key(struct rte_swx_pipeline *p,
 				 const char *metarray_name,
@@ -11774,7 +11774,7 @@ rte_swx_ctl_meter_reset_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_reset(p, metarray_name, entry_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set_with_key, 22.11);
 int
 rte_swx_ctl_meter_set_with_key(struct rte_swx_pipeline *p,
 			       const char *metarray_name,
@@ -11792,7 +11792,7 @@ rte_swx_ctl_meter_set_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_set(p, metarray_name, entry_id, profile_name);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read_with_key, 22.11);
 int
 rte_swx_ctl_meter_stats_read_with_key(struct rte_swx_pipeline *p,
 				      const char *metarray_name,
@@ -11810,7 +11810,7 @@ rte_swx_ctl_meter_stats_read_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_stats_read(p, metarray_name, entry_id, stats);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_rss_info_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_rss_info_get, 23.03);
 int
 rte_swx_ctl_rss_info_get(struct rte_swx_pipeline *p,
 			 uint32_t rss_obj_id,
@@ -11831,7 +11831,7 @@ rte_swx_ctl_rss_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_size_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_size_read, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_size_read(struct rte_swx_pipeline *p,
 				       const char *rss_name,
@@ -11856,7 +11856,7 @@ rte_swx_ctl_pipeline_rss_key_size_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_read, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_read(struct rte_swx_pipeline *p,
 				  const char *rss_name,
@@ -11881,7 +11881,7 @@ rte_swx_ctl_pipeline_rss_key_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_write, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_write, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_write(struct rte_swx_pipeline *p,
 				   const char *rss_name,
@@ -14584,7 +14584,7 @@ pipeline_adjust(struct rte_swx_pipeline *p, struct instruction_group_list *igl)
 	instr_jmp_resolve(p->instructions, p->instruction_data, p->n_instructions);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_codegen, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_codegen, 22.11);
 int
 rte_swx_pipeline_codegen(FILE *spec_file,
 			 FILE *code_file,
@@ -14678,7 +14678,7 @@ rte_swx_pipeline_codegen(FILE *spec_file,
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build_from_lib, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build_from_lib, 22.11);
 int
 rte_swx_pipeline_build_from_lib(struct rte_swx_pipeline **pipeline,
 				const char *name,
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index c990d7eb56..f05e046c46 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2363,7 +2363,7 @@ struct rte_table_action_profile {
 	int frozen;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_create, 18.05);
 struct rte_table_action_profile *
 rte_table_action_profile_create(struct rte_table_action_common_config *common)
 {
@@ -2385,7 +2385,7 @@ rte_table_action_profile_create(struct rte_table_action_common_config *common)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_action_register, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_action_register, 18.05);
 int
 rte_table_action_profile_action_register(struct rte_table_action_profile *profile,
 	enum rte_table_action_type type,
@@ -2449,7 +2449,7 @@ rte_table_action_profile_action_register(struct rte_table_action_profile *profil
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_freeze, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_freeze, 18.05);
 int
 rte_table_action_profile_freeze(struct rte_table_action_profile *profile)
 {
@@ -2463,7 +2463,7 @@ rte_table_action_profile_freeze(struct rte_table_action_profile *profile)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_free, 18.05);
 int
 rte_table_action_profile_free(struct rte_table_action_profile *profile)
 {
@@ -2486,7 +2486,7 @@ struct rte_table_action {
 	struct meter_profile_data mp[METER_PROFILES_MAX];
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_create, 18.05);
 struct rte_table_action *
 rte_table_action_create(struct rte_table_action_profile *profile,
 	uint32_t socket_id)
@@ -2524,7 +2524,7 @@ action_data_get(void *data,
 	return &data_bytes[offset];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_apply, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_apply, 18.05);
 int
 rte_table_action_apply(struct rte_table_action *action,
 	void *data,
@@ -2606,7 +2606,7 @@ rte_table_action_apply(struct rte_table_action *action,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_dscp_table_update, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_dscp_table_update, 18.05);
 int
 rte_table_action_dscp_table_update(struct rte_table_action *action,
 	uint64_t dscp_mask,
@@ -2639,7 +2639,7 @@ rte_table_action_dscp_table_update(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_add, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_add, 18.05);
 int
 rte_table_action_meter_profile_add(struct rte_table_action *action,
 	uint32_t meter_profile_id,
@@ -2680,7 +2680,7 @@ rte_table_action_meter_profile_add(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_delete, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_delete, 18.05);
 int
 rte_table_action_meter_profile_delete(struct rte_table_action *action,
 	uint32_t meter_profile_id)
@@ -2704,7 +2704,7 @@ rte_table_action_meter_profile_delete(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_read, 18.05);
 int
 rte_table_action_meter_read(struct rte_table_action *action,
 	void *data,
@@ -2767,7 +2767,7 @@ rte_table_action_meter_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_ttl_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_ttl_read, 18.05);
 int
 rte_table_action_ttl_read(struct rte_table_action *action,
 	void *data,
@@ -2796,7 +2796,7 @@ rte_table_action_ttl_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_stats_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_stats_read, 18.05);
 int
 rte_table_action_stats_read(struct rte_table_action *action,
 	void *data,
@@ -2832,7 +2832,7 @@ rte_table_action_stats_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_time_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_time_read, 18.05);
 int
 rte_table_action_time_read(struct rte_table_action *action,
 	void *data,
@@ -2856,7 +2856,7 @@ rte_table_action_time_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_crypto_sym_session_get, 18.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_crypto_sym_session_get, 18.11);
 struct rte_cryptodev_sym_session *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data)
@@ -3444,7 +3444,7 @@ ah_selector(struct rte_table_action *action)
 	return ah_default;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_table_params_get, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_table_params_get, 18.05);
 int
 rte_table_action_table_params_get(struct rte_table_action *action,
 	struct rte_pipeline_table_params *params)
@@ -3470,7 +3470,7 @@ rte_table_action_table_params_get(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_free, 18.05);
 int
 rte_table_action_free(struct rte_table_action *action)
 {
diff --git a/lib/pmu/pmu.c b/lib/pmu/pmu.c
index 4c7271522a..b169e957ec 100644
--- a/lib/pmu/pmu.c
+++ b/lib/pmu/pmu.c
@@ -37,7 +37,7 @@ struct rte_pmu_event {
 	TAILQ_ENTRY(rte_pmu_event) next;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmu)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmu);
 struct rte_pmu rte_pmu;
 
 /* Stubs for arch-specific functions */
@@ -291,7 +291,7 @@ cleanup_events(struct rte_pmu_event_group *group)
 	group->enabled = false;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_pmu_enable_group, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_pmu_enable_group, 25.07);
 int
 __rte_pmu_enable_group(struct rte_pmu_event_group *group)
 {
@@ -393,7 +393,7 @@ free_event(struct rte_pmu_event *event)
 	free(event);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_add_event, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_add_event, 25.07);
 int
 rte_pmu_add_event(const char *name)
 {
@@ -436,7 +436,7 @@ rte_pmu_add_event(const char *name)
 	return event->index;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_init, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_init, 25.07);
 int
 rte_pmu_init(void)
 {
@@ -468,7 +468,7 @@ rte_pmu_init(void)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_fini, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_fini, 25.07);
 void
 rte_pmu_fini(void)
 {
diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c
index bdab2fbf6c..970214b17b 100644
--- a/lib/port/rte_port_ethdev.c
+++ b/lib/port/rte_port_ethdev.c
@@ -501,7 +501,7 @@ static int rte_port_ethdev_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ethdev_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_reader_ops);
 struct rte_port_in_ops rte_port_ethdev_reader_ops = {
 	.f_create = rte_port_ethdev_reader_create,
 	.f_free = rte_port_ethdev_reader_free,
@@ -509,7 +509,7 @@ struct rte_port_in_ops rte_port_ethdev_reader_ops = {
 	.f_stats = rte_port_ethdev_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_ops);
 struct rte_port_out_ops rte_port_ethdev_writer_ops = {
 	.f_create = rte_port_ethdev_writer_create,
 	.f_free = rte_port_ethdev_writer_free,
@@ -519,7 +519,7 @@ struct rte_port_out_ops rte_port_ethdev_writer_ops = {
 	.f_stats = rte_port_ethdev_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ethdev_writer_nodrop_ops = {
 	.f_create = rte_port_ethdev_writer_nodrop_create,
 	.f_free = rte_port_ethdev_writer_nodrop_free,
diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c
index c3a287b834..fac71da321 100644
--- a/lib/port/rte_port_eventdev.c
+++ b/lib/port/rte_port_eventdev.c
@@ -561,7 +561,7 @@ static int rte_port_eventdev_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_reader_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_reader_ops, 19.11);
 struct rte_port_in_ops rte_port_eventdev_reader_ops = {
 	.f_create = rte_port_eventdev_reader_create,
 	.f_free = rte_port_eventdev_reader_free,
@@ -569,7 +569,7 @@ struct rte_port_in_ops rte_port_eventdev_reader_ops = {
 	.f_stats = rte_port_eventdev_reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_ops, 19.11);
 struct rte_port_out_ops rte_port_eventdev_writer_ops = {
 	.f_create = rte_port_eventdev_writer_create,
 	.f_free = rte_port_eventdev_writer_free,
@@ -579,7 +579,7 @@ struct rte_port_out_ops rte_port_eventdev_writer_ops = {
 	.f_stats = rte_port_eventdev_writer_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_nodrop_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_nodrop_ops, 19.11);
 struct rte_port_out_ops rte_port_eventdev_writer_nodrop_ops = {
 	.f_create = rte_port_eventdev_writer_nodrop_create,
 	.f_free = rte_port_eventdev_writer_nodrop_free,
diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c
index dbc9efef1b..1f210986bd 100644
--- a/lib/port/rte_port_fd.c
+++ b/lib/port/rte_port_fd.c
@@ -495,7 +495,7 @@ static int rte_port_fd_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_fd_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_reader_ops);
 struct rte_port_in_ops rte_port_fd_reader_ops = {
 	.f_create = rte_port_fd_reader_create,
 	.f_free = rte_port_fd_reader_free,
@@ -503,7 +503,7 @@ struct rte_port_in_ops rte_port_fd_reader_ops = {
 	.f_stats = rte_port_fd_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_fd_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_writer_ops);
 struct rte_port_out_ops rte_port_fd_writer_ops = {
 	.f_create = rte_port_fd_writer_create,
 	.f_free = rte_port_fd_writer_free,
@@ -513,7 +513,7 @@ struct rte_port_out_ops rte_port_fd_writer_ops = {
 	.f_stats = rte_port_fd_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_fd_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_fd_writer_nodrop_ops = {
 	.f_create = rte_port_fd_writer_nodrop_create,
 	.f_free = rte_port_fd_writer_nodrop_free,
diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c
index 9444f5939c..914b276031 100644
--- a/lib/port/rte_port_frag.c
+++ b/lib/port/rte_port_frag.c
@@ -263,7 +263,7 @@ rte_port_frag_reader_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv4_frag_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv4_frag_ops);
 struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops = {
 	.f_create = rte_port_ring_reader_ipv4_frag_create,
 	.f_free = rte_port_ring_reader_frag_free,
@@ -271,7 +271,7 @@ struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops = {
 	.f_stats = rte_port_frag_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv6_frag_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv6_frag_ops);
 struct rte_port_in_ops rte_port_ring_reader_ipv6_frag_ops = {
 	.f_create = rte_port_ring_reader_ipv6_frag_create,
 	.f_free = rte_port_ring_reader_frag_free,
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 58ab7a1c5b..1bffbce8ee 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -315,7 +315,7 @@ rte_port_ras_writer_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv4_ras_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv4_ras_ops);
 struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops = {
 	.f_create = rte_port_ring_writer_ipv4_ras_create,
 	.f_free = rte_port_ring_writer_ras_free,
@@ -325,7 +325,7 @@ struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops = {
 	.f_stats = rte_port_ras_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv6_ras_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv6_ras_ops);
 struct rte_port_out_ops rte_port_ring_writer_ipv6_ras_ops = {
 	.f_create = rte_port_ring_writer_ipv6_ras_create,
 	.f_free = rte_port_ring_writer_ras_free,
diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c
index 307a576d65..dc61b20aa6 100644
--- a/lib/port/rte_port_ring.c
+++ b/lib/port/rte_port_ring.c
@@ -739,7 +739,7 @@ rte_port_ring_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ops);
 struct rte_port_in_ops rte_port_ring_reader_ops = {
 	.f_create = rte_port_ring_reader_create,
 	.f_free = rte_port_ring_reader_free,
@@ -747,7 +747,7 @@ struct rte_port_in_ops rte_port_ring_reader_ops = {
 	.f_stats = rte_port_ring_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ops);
 struct rte_port_out_ops rte_port_ring_writer_ops = {
 	.f_create = rte_port_ring_writer_create,
 	.f_free = rte_port_ring_writer_free,
@@ -757,7 +757,7 @@ struct rte_port_out_ops rte_port_ring_writer_ops = {
 	.f_stats = rte_port_ring_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ring_writer_nodrop_ops = {
 	.f_create = rte_port_ring_writer_nodrop_create,
 	.f_free = rte_port_ring_writer_nodrop_free,
@@ -767,7 +767,7 @@ struct rte_port_out_ops rte_port_ring_writer_nodrop_ops = {
 	.f_stats = rte_port_ring_writer_nodrop_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_reader_ops);
 struct rte_port_in_ops rte_port_ring_multi_reader_ops = {
 	.f_create = rte_port_ring_multi_reader_create,
 	.f_free = rte_port_ring_reader_free,
@@ -775,7 +775,7 @@ struct rte_port_in_ops rte_port_ring_multi_reader_ops = {
 	.f_stats = rte_port_ring_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_ops);
 struct rte_port_out_ops rte_port_ring_multi_writer_ops = {
 	.f_create = rte_port_ring_multi_writer_create,
 	.f_free = rte_port_ring_writer_free,
@@ -785,7 +785,7 @@ struct rte_port_out_ops rte_port_ring_multi_writer_ops = {
 	.f_stats = rte_port_ring_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ring_multi_writer_nodrop_ops = {
 	.f_create = rte_port_ring_multi_writer_nodrop_create,
 	.f_free = rte_port_ring_writer_nodrop_free,
diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c
index 3091078aa1..ab46e8dec6 100644
--- a/lib/port/rte_port_sched.c
+++ b/lib/port/rte_port_sched.c
@@ -279,7 +279,7 @@ rte_port_sched_writer_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_sched_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_sched_reader_ops);
 struct rte_port_in_ops rte_port_sched_reader_ops = {
 	.f_create = rte_port_sched_reader_create,
 	.f_free = rte_port_sched_reader_free,
@@ -287,7 +287,7 @@ struct rte_port_in_ops rte_port_sched_reader_ops = {
 	.f_stats = rte_port_sched_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sched_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_sched_writer_ops);
 struct rte_port_out_ops rte_port_sched_writer_ops = {
 	.f_create = rte_port_sched_writer_create,
 	.f_free = rte_port_sched_writer_free,
diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c
index 0557e12506..a492fa55ec 100644
--- a/lib/port/rte_port_source_sink.c
+++ b/lib/port/rte_port_source_sink.c
@@ -597,7 +597,7 @@ rte_port_sink_stats_read(void *port, struct rte_port_out_stats *stats,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_source_ops)
+RTE_EXPORT_SYMBOL(rte_port_source_ops);
 struct rte_port_in_ops rte_port_source_ops = {
 	.f_create = rte_port_source_create,
 	.f_free = rte_port_source_free,
@@ -605,7 +605,7 @@ struct rte_port_in_ops rte_port_source_ops = {
 	.f_stats = rte_port_source_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sink_ops)
+RTE_EXPORT_SYMBOL(rte_port_sink_ops);
 struct rte_port_out_ops rte_port_sink_ops = {
 	.f_create = rte_port_sink_create,
 	.f_free = rte_port_sink_free,
diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c
index 30c9d1283e..bfd6a82b56 100644
--- a/lib/port/rte_port_sym_crypto.c
+++ b/lib/port/rte_port_sym_crypto.c
@@ -529,7 +529,7 @@ static int rte_port_sym_crypto_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_reader_ops);
 struct rte_port_in_ops rte_port_sym_crypto_reader_ops = {
 	.f_create = rte_port_sym_crypto_reader_create,
 	.f_free = rte_port_sym_crypto_reader_free,
@@ -537,7 +537,7 @@ struct rte_port_in_ops rte_port_sym_crypto_reader_ops = {
 	.f_stats = rte_port_sym_crypto_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_ops);
 struct rte_port_out_ops rte_port_sym_crypto_writer_ops = {
 	.f_create = rte_port_sym_crypto_writer_create,
 	.f_free = rte_port_sym_crypto_writer_free,
@@ -547,7 +547,7 @@ struct rte_port_out_ops rte_port_sym_crypto_writer_ops = {
 	.f_stats = rte_port_sym_crypto_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_sym_crypto_writer_nodrop_ops = {
 	.f_create = rte_port_sym_crypto_writer_nodrop_create,
 	.f_free = rte_port_sym_crypto_writer_nodrop_free,
diff --git a/lib/port/rte_swx_port_ethdev.c b/lib/port/rte_swx_port_ethdev.c
index de6d0e5bb3..8c26794aa3 100644
--- a/lib/port/rte_swx_port_ethdev.c
+++ b/lib/port/rte_swx_port_ethdev.c
@@ -402,7 +402,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_reader_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_reader_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_ethdev_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -410,7 +410,7 @@ struct rte_swx_port_in_ops rte_swx_port_ethdev_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_writer_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_writer_ops, 20.11);
 struct rte_swx_port_out_ops rte_swx_port_ethdev_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_fd.c b/lib/port/rte_swx_port_fd.c
index 72783d2b0f..dfddf69ccc 100644
--- a/lib/port/rte_swx_port_fd.c
+++ b/lib/port/rte_swx_port_fd.c
@@ -345,7 +345,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_reader_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_reader_ops, 21.05);
 struct rte_swx_port_in_ops rte_swx_port_fd_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -353,7 +353,7 @@ struct rte_swx_port_in_ops rte_swx_port_fd_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_writer_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_writer_ops, 21.05);
 struct rte_swx_port_out_ops rte_swx_port_fd_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_ring.c b/lib/port/rte_swx_port_ring.c
index 3ac652ac09..f8d6b77e48 100644
--- a/lib/port/rte_swx_port_ring.c
+++ b/lib/port/rte_swx_port_ring.c
@@ -407,7 +407,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_reader_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_reader_ops, 21.05);
 struct rte_swx_port_in_ops rte_swx_port_ring_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -415,7 +415,7 @@ struct rte_swx_port_in_ops rte_swx_port_ring_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_writer_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_writer_ops, 21.05);
 struct rte_swx_port_out_ops rte_swx_port_ring_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_source_sink.c b/lib/port/rte_swx_port_source_sink.c
index af8b9ec68d..bcfcb8091e 100644
--- a/lib/port/rte_swx_port_source_sink.c
+++ b/lib/port/rte_swx_port_source_sink.c
@@ -202,7 +202,7 @@ source_stats_read(void *port, struct rte_swx_port_in_stats *stats)
 	memcpy(stats, &p->stats, sizeof(p->stats));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 	.create = source_create,
 	.free = source_free,
@@ -212,7 +212,7 @@ struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 
 #else
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 	.create = NULL,
 	.free = NULL,
@@ -383,7 +383,7 @@ sink_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_sink_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_sink_ops, 20.11);
 struct rte_swx_port_out_ops rte_swx_port_sink_ops = {
 	.create = sink_create,
 	.free = sink_free,
diff --git a/lib/power/power_common.c b/lib/power/power_common.c
index 2da034e9d0..3fae203e69 100644
--- a/lib/power/power_common.c
+++ b/lib/power/power_common.c
@@ -14,7 +14,7 @@
 
 #include "power_common.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_power_logtype, INFO);
 
 #define POWER_SYSFILE_SCALING_DRIVER   \
@@ -23,7 +23,7 @@ RTE_LOG_REGISTER_DEFAULT(rte_power_logtype, INFO);
 		"/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor"
 #define POWER_CONVERT_TO_DECIMAL 10
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpufreq_check_scaling_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(cpufreq_check_scaling_driver);
 int
 cpufreq_check_scaling_driver(const char *driver_name)
 {
@@ -69,7 +69,7 @@ cpufreq_check_scaling_driver(const char *driver_name)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(open_core_sysfs_file)
+RTE_EXPORT_INTERNAL_SYMBOL(open_core_sysfs_file);
 int
 open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
 {
@@ -88,7 +88,7 @@ open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_u32)
+RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_u32);
 int
 read_core_sysfs_u32(FILE *f, uint32_t *val)
 {
@@ -114,7 +114,7 @@ read_core_sysfs_u32(FILE *f, uint32_t *val)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_s)
+RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_s);
 int
 read_core_sysfs_s(FILE *f, char *buf, unsigned int len)
 {
@@ -133,7 +133,7 @@ read_core_sysfs_s(FILE *f, char *buf, unsigned int len)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(write_core_sysfs_s)
+RTE_EXPORT_INTERNAL_SYMBOL(write_core_sysfs_s);
 int
 write_core_sysfs_s(FILE *f, const char *str)
 {
@@ -160,7 +160,7 @@ write_core_sysfs_s(FILE *f, const char *str)
  * set it into 'performance' if it is not by writing the sys file. The original
  * governor will be saved for rolling back.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(power_set_governor)
+RTE_EXPORT_INTERNAL_SYMBOL(power_set_governor);
 int
 power_set_governor(unsigned int lcore_id, const char *new_governor,
 		char *orig_governor, size_t orig_governor_len)
@@ -214,7 +214,7 @@ power_set_governor(unsigned int lcore_id, const char *new_governor,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(power_get_lcore_mapped_cpu_id)
+RTE_EXPORT_INTERNAL_SYMBOL(power_get_lcore_mapped_cpu_id);
 int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id)
 {
 	rte_cpuset_t lcore_cpus;
diff --git a/lib/power/rte_power_cpufreq.c b/lib/power/rte_power_cpufreq.c
index d4db03a4e5..c5964ee0e6 100644
--- a/lib/power/rte_power_cpufreq.c
+++ b/lib/power/rte_power_cpufreq.c
@@ -26,7 +26,7 @@ const char *power_env_str[] = {
 };
 
 /* register the ops struct in rte_power_cpufreq_ops, return 0 on success. */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_cpufreq_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_cpufreq_ops);
 int
 rte_power_register_cpufreq_ops(struct rte_power_cpufreq_ops *driver_ops)
 {
@@ -46,7 +46,7 @@ rte_power_register_cpufreq_ops(struct rte_power_cpufreq_ops *driver_ops)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_check_env_supported)
+RTE_EXPORT_SYMBOL(rte_power_check_env_supported);
 int
 rte_power_check_env_supported(enum power_management_env env)
 {
@@ -63,7 +63,7 @@ rte_power_check_env_supported(enum power_management_env env)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_env)
+RTE_EXPORT_SYMBOL(rte_power_set_env);
 int
 rte_power_set_env(enum power_management_env env)
 {
@@ -93,7 +93,7 @@ rte_power_set_env(enum power_management_env env)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_unset_env)
+RTE_EXPORT_SYMBOL(rte_power_unset_env);
 void
 rte_power_unset_env(void)
 {
@@ -103,13 +103,13 @@ rte_power_unset_env(void)
 	rte_spinlock_unlock(&global_env_cfg_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_env)
+RTE_EXPORT_SYMBOL(rte_power_get_env);
 enum power_management_env
 rte_power_get_env(void) {
 	return global_default_env;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_init)
+RTE_EXPORT_SYMBOL(rte_power_init);
 int
 rte_power_init(unsigned int lcore_id)
 {
@@ -143,7 +143,7 @@ rte_power_init(unsigned int lcore_id)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_exit)
+RTE_EXPORT_SYMBOL(rte_power_exit);
 int
 rte_power_exit(unsigned int lcore_id)
 {
@@ -156,7 +156,7 @@ rte_power_exit(unsigned int lcore_id)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freqs)
+RTE_EXPORT_SYMBOL(rte_power_freqs);
 uint32_t
 rte_power_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t n)
 {
@@ -164,7 +164,7 @@ rte_power_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t n)
 	return global_cpufreq_ops->get_avail_freqs(lcore_id, freqs, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_freq)
+RTE_EXPORT_SYMBOL(rte_power_get_freq);
 uint32_t
 rte_power_get_freq(unsigned int lcore_id)
 {
@@ -172,7 +172,7 @@ rte_power_get_freq(unsigned int lcore_id)
 	return global_cpufreq_ops->get_freq(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_freq)
+RTE_EXPORT_SYMBOL(rte_power_set_freq);
 uint32_t
 rte_power_set_freq(unsigned int lcore_id, uint32_t index)
 {
@@ -180,7 +180,7 @@ rte_power_set_freq(unsigned int lcore_id, uint32_t index)
 	return global_cpufreq_ops->set_freq(lcore_id, index);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_up)
+RTE_EXPORT_SYMBOL(rte_power_freq_up);
 int
 rte_power_freq_up(unsigned int lcore_id)
 {
@@ -188,7 +188,7 @@ rte_power_freq_up(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_up(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_down)
+RTE_EXPORT_SYMBOL(rte_power_freq_down);
 int
 rte_power_freq_down(unsigned int lcore_id)
 {
@@ -196,7 +196,7 @@ rte_power_freq_down(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_down(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_freq_max);
 int
 rte_power_freq_max(unsigned int lcore_id)
 {
@@ -204,7 +204,7 @@ rte_power_freq_max(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_max(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_freq_min);
 int
 rte_power_freq_min(unsigned int lcore_id)
 {
@@ -212,7 +212,7 @@ rte_power_freq_min(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_min(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_turbo_status)
+RTE_EXPORT_SYMBOL(rte_power_turbo_status);
 int
 rte_power_turbo_status(unsigned int lcore_id)
 {
@@ -220,7 +220,7 @@ rte_power_turbo_status(unsigned int lcore_id)
 	return global_cpufreq_ops->turbo_status(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_enable_turbo)
+RTE_EXPORT_SYMBOL(rte_power_freq_enable_turbo);
 int
 rte_power_freq_enable_turbo(unsigned int lcore_id)
 {
@@ -228,7 +228,7 @@ rte_power_freq_enable_turbo(unsigned int lcore_id)
 	return global_cpufreq_ops->enable_turbo(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_disable_turbo)
+RTE_EXPORT_SYMBOL(rte_power_freq_disable_turbo);
 int
 rte_power_freq_disable_turbo(unsigned int lcore_id)
 {
@@ -236,7 +236,7 @@ rte_power_freq_disable_turbo(unsigned int lcore_id)
 	return global_cpufreq_ops->disable_turbo(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_capabilities)
+RTE_EXPORT_SYMBOL(rte_power_get_capabilities);
 int
 rte_power_get_capabilities(unsigned int lcore_id,
 		struct rte_power_core_capabilities *caps)
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 6cacc562c2..77b940f493 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -497,7 +497,7 @@ get_monitor_callback(void)
 		clb_multiwait : clb_umwait;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_enable)
+RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_enable);
 int
 rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		uint16_t queue_id, enum rte_power_pmd_mgmt_type mode)
@@ -615,7 +615,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_disable)
+RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_disable);
 int
 rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 		uint16_t port_id, uint16_t queue_id)
@@ -691,21 +691,21 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_emptypoll_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_emptypoll_max);
 void
 rte_power_pmd_mgmt_set_emptypoll_max(unsigned int max)
 {
 	emptypoll_max = max;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_emptypoll_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_emptypoll_max);
 unsigned int
 rte_power_pmd_mgmt_get_emptypoll_max(void)
 {
 	return emptypoll_max;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_pause_duration)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_pause_duration);
 int
 rte_power_pmd_mgmt_set_pause_duration(unsigned int duration)
 {
@@ -718,14 +718,14 @@ rte_power_pmd_mgmt_set_pause_duration(unsigned int duration)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_pause_duration)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_pause_duration);
 unsigned int
 rte_power_pmd_mgmt_get_pause_duration(void)
 {
 	return pause_duration;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_min);
 int
 rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min)
 {
@@ -743,7 +743,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_max);
 int
 rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
 {
@@ -765,7 +765,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_min);
 int
 rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore)
 {
@@ -780,7 +780,7 @@ rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore)
 	return scale_freq_min[lcore];
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_max);
 int
 rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 {
diff --git a/lib/power/rte_power_qos.c b/lib/power/rte_power_qos.c
index be230d1c50..f7cd085819 100644
--- a/lib/power/rte_power_qos.c
+++ b/lib/power/rte_power_qos.c
@@ -18,7 +18,7 @@
 
 #define PM_QOS_CPU_RESUME_LATENCY_BUF_LEN	32
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_set_cpu_resume_latency, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_set_cpu_resume_latency, 24.11);
 int
 rte_power_qos_set_cpu_resume_latency(uint16_t lcore_id, int latency)
 {
@@ -72,7 +72,7 @@ rte_power_qos_set_cpu_resume_latency(uint16_t lcore_id, int latency)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_get_cpu_resume_latency, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_get_cpu_resume_latency, 24.11);
 int
 rte_power_qos_get_cpu_resume_latency(uint16_t lcore_id)
 {
diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c
index 30cd374127..c827d8bada 100644
--- a/lib/power/rte_power_uncore.c
+++ b/lib/power/rte_power_uncore.c
@@ -25,7 +25,7 @@ const char *uncore_env_str[] = {
 };
 
 /* register the ops struct in rte_power_uncore_ops, return 0 on success. */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_uncore_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_uncore_ops);
 int
 rte_power_register_uncore_ops(struct rte_power_uncore_ops *driver_ops)
 {
@@ -46,7 +46,7 @@ rte_power_register_uncore_ops(struct rte_power_uncore_ops *driver_ops)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_set_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_set_uncore_env, 23.11);
 int
 rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
 {
@@ -86,7 +86,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_unset_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_unset_uncore_env, 23.11);
 void
 rte_power_unset_uncore_env(void)
 {
@@ -95,14 +95,14 @@ rte_power_unset_uncore_env(void)
 	rte_spinlock_unlock(&global_env_cfg_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_get_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_get_uncore_env, 23.11);
 enum rte_uncore_power_mgmt_env
 rte_power_get_uncore_env(void)
 {
 	return global_uncore_env;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_init)
+RTE_EXPORT_SYMBOL(rte_power_uncore_init);
 int
 rte_power_uncore_init(unsigned int pkg, unsigned int die)
 {
@@ -134,7 +134,7 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_exit)
+RTE_EXPORT_SYMBOL(rte_power_uncore_exit);
 int
 rte_power_uncore_exit(unsigned int pkg, unsigned int die)
 {
@@ -148,7 +148,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_uncore_freq)
+RTE_EXPORT_SYMBOL(rte_power_get_uncore_freq);
 uint32_t
 rte_power_get_uncore_freq(unsigned int pkg, unsigned int die)
 {
@@ -156,7 +156,7 @@ rte_power_get_uncore_freq(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->get_freq(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_uncore_freq)
+RTE_EXPORT_SYMBOL(rte_power_set_uncore_freq);
 int
 rte_power_set_uncore_freq(unsigned int pkg, unsigned int die, uint32_t index)
 {
@@ -164,7 +164,7 @@ rte_power_set_uncore_freq(unsigned int pkg, unsigned int die, uint32_t index)
 	return global_uncore_ops->set_freq(pkg, die, index);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_uncore_freq_max);
 int
 rte_power_uncore_freq_max(unsigned int pkg, unsigned int die)
 {
@@ -172,7 +172,7 @@ rte_power_uncore_freq_max(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->freq_max(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_uncore_freq_min);
 int
 rte_power_uncore_freq_min(unsigned int pkg, unsigned int die)
 {
@@ -180,7 +180,7 @@ rte_power_uncore_freq_min(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->freq_min(pkg, die);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_uncore_freqs, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_uncore_freqs, 23.11);
 int
 rte_power_uncore_freqs(unsigned int pkg, unsigned int die,
 			uint32_t *freqs, uint32_t num)
@@ -189,7 +189,7 @@ rte_power_uncore_freqs(unsigned int pkg, unsigned int die,
 	return global_uncore_ops->get_avail_freqs(pkg, die, freqs, num);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_freqs)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_freqs);
 int
 rte_power_uncore_get_num_freqs(unsigned int pkg, unsigned int die)
 {
@@ -197,7 +197,7 @@ rte_power_uncore_get_num_freqs(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->get_num_freqs(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_pkgs)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_pkgs);
 unsigned int
 rte_power_uncore_get_num_pkgs(void)
 {
@@ -205,7 +205,7 @@ rte_power_uncore_get_num_pkgs(void)
 	return global_uncore_ops->get_num_pkgs();
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_dies)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_dies);
 unsigned int
 rte_power_uncore_get_num_dies(unsigned int pkg)
 {
diff --git a/lib/rawdev/rte_rawdev.c b/lib/rawdev/rte_rawdev.c
index 4da7956d5a..e1ea7667dc 100644
--- a/lib/rawdev/rte_rawdev.c
+++ b/lib/rawdev/rte_rawdev.c
@@ -23,7 +23,7 @@
 
 static struct rte_rawdev rte_rawdevices[RTE_RAWDEV_MAX_DEVS];
 
-RTE_EXPORT_SYMBOL(rte_rawdevs)
+RTE_EXPORT_SYMBOL(rte_rawdevs);
 struct rte_rawdev *rte_rawdevs = rte_rawdevices;
 
 static struct rte_rawdev_global rawdev_globals = {
@@ -31,14 +31,14 @@ static struct rte_rawdev_global rawdev_globals = {
 };
 
 /* Raw device, northbound API implementation */
-RTE_EXPORT_SYMBOL(rte_rawdev_count)
+RTE_EXPORT_SYMBOL(rte_rawdev_count);
 uint8_t
 rte_rawdev_count(void)
 {
 	return rawdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_rawdev_get_dev_id);
 uint16_t
 rte_rawdev_get_dev_id(const char *name)
 {
@@ -56,7 +56,7 @@ rte_rawdev_get_dev_id(const char *name)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_socket_id)
+RTE_EXPORT_SYMBOL(rte_rawdev_socket_id);
 int
 rte_rawdev_socket_id(uint16_t dev_id)
 {
@@ -68,7 +68,7 @@ rte_rawdev_socket_id(uint16_t dev_id)
 	return dev->socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_info_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_info_get);
 int
 rte_rawdev_info_get(uint16_t dev_id, struct rte_rawdev_info *dev_info,
 		size_t dev_private_size)
@@ -97,7 +97,7 @@ rte_rawdev_info_get(uint16_t dev_id, struct rte_rawdev_info *dev_info,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_configure)
+RTE_EXPORT_SYMBOL(rte_rawdev_configure);
 int
 rte_rawdev_configure(uint16_t dev_id, struct rte_rawdev_info *dev_conf,
 		size_t dev_private_size)
@@ -130,7 +130,7 @@ rte_rawdev_configure(uint16_t dev_id, struct rte_rawdev_info *dev_conf,
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_conf_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_conf_get);
 int
 rte_rawdev_queue_conf_get(uint16_t dev_id,
 			  uint16_t queue_id,
@@ -147,7 +147,7 @@ rte_rawdev_queue_conf_get(uint16_t dev_id,
 	return dev->dev_ops->queue_def_conf(dev, queue_id, queue_conf, queue_conf_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_setup)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_setup);
 int
 rte_rawdev_queue_setup(uint16_t dev_id,
 		       uint16_t queue_id,
@@ -164,7 +164,7 @@ rte_rawdev_queue_setup(uint16_t dev_id,
 	return dev->dev_ops->queue_setup(dev, queue_id, queue_conf, queue_conf_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_release)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_release);
 int
 rte_rawdev_queue_release(uint16_t dev_id, uint16_t queue_id)
 {
@@ -178,7 +178,7 @@ rte_rawdev_queue_release(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_release(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_count)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_count);
 uint16_t
 rte_rawdev_queue_count(uint16_t dev_id)
 {
@@ -192,7 +192,7 @@ rte_rawdev_queue_count(uint16_t dev_id)
 	return dev->dev_ops->queue_count(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_get_attr)
+RTE_EXPORT_SYMBOL(rte_rawdev_get_attr);
 int
 rte_rawdev_get_attr(uint16_t dev_id,
 		    const char *attr_name,
@@ -208,7 +208,7 @@ rte_rawdev_get_attr(uint16_t dev_id,
 	return dev->dev_ops->attr_get(dev, attr_name, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_set_attr)
+RTE_EXPORT_SYMBOL(rte_rawdev_set_attr);
 int
 rte_rawdev_set_attr(uint16_t dev_id,
 		    const char *attr_name,
@@ -224,7 +224,7 @@ rte_rawdev_set_attr(uint16_t dev_id,
 	return dev->dev_ops->attr_set(dev, attr_name, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_enqueue_buffers)
+RTE_EXPORT_SYMBOL(rte_rawdev_enqueue_buffers);
 int
 rte_rawdev_enqueue_buffers(uint16_t dev_id,
 			   struct rte_rawdev_buf **buffers,
@@ -241,7 +241,7 @@ rte_rawdev_enqueue_buffers(uint16_t dev_id,
 	return dev->dev_ops->enqueue_bufs(dev, buffers, count, context);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_dequeue_buffers)
+RTE_EXPORT_SYMBOL(rte_rawdev_dequeue_buffers);
 int
 rte_rawdev_dequeue_buffers(uint16_t dev_id,
 			   struct rte_rawdev_buf **buffers,
@@ -258,7 +258,7 @@ rte_rawdev_dequeue_buffers(uint16_t dev_id,
 	return dev->dev_ops->dequeue_bufs(dev, buffers, count, context);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_dump)
+RTE_EXPORT_SYMBOL(rte_rawdev_dump);
 int
 rte_rawdev_dump(uint16_t dev_id, FILE *f)
 {
@@ -282,7 +282,7 @@ xstats_get_count(uint16_t dev_id)
 	return dev->dev_ops->xstats_get_names(dev, NULL, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_names_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_names_get);
 int
 rte_rawdev_xstats_names_get(uint16_t dev_id,
 		struct rte_rawdev_xstats_name *xstats_names,
@@ -307,7 +307,7 @@ rte_rawdev_xstats_names_get(uint16_t dev_id,
 }
 
 /* retrieve rawdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_get);
 int
 rte_rawdev_xstats_get(uint16_t dev_id,
 		      const unsigned int ids[],
@@ -322,7 +322,7 @@ rte_rawdev_xstats_get(uint16_t dev_id,
 	return dev->dev_ops->xstats_get(dev, ids, values, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_by_name_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_by_name_get);
 uint64_t
 rte_rawdev_xstats_by_name_get(uint16_t dev_id,
 			      const char *name,
@@ -343,7 +343,7 @@ rte_rawdev_xstats_by_name_get(uint16_t dev_id,
 	return dev->dev_ops->xstats_get_by_name(dev, name, id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_reset);
 int
 rte_rawdev_xstats_reset(uint16_t dev_id,
 			const uint32_t ids[], uint32_t nb_ids)
@@ -356,7 +356,7 @@ rte_rawdev_xstats_reset(uint16_t dev_id,
 	return dev->dev_ops->xstats_reset(dev, ids, nb_ids);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_status_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_status_get);
 int
 rte_rawdev_firmware_status_get(uint16_t dev_id, rte_rawdev_obj_t status_info)
 {
@@ -368,7 +368,7 @@ rte_rawdev_firmware_status_get(uint16_t dev_id, rte_rawdev_obj_t status_info)
 	return dev->dev_ops->firmware_status_get(dev, status_info);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_version_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_version_get);
 int
 rte_rawdev_firmware_version_get(uint16_t dev_id, rte_rawdev_obj_t version_info)
 {
@@ -380,7 +380,7 @@ rte_rawdev_firmware_version_get(uint16_t dev_id, rte_rawdev_obj_t version_info)
 	return dev->dev_ops->firmware_version_get(dev, version_info);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_load)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_load);
 int
 rte_rawdev_firmware_load(uint16_t dev_id, rte_rawdev_obj_t firmware_image)
 {
@@ -395,7 +395,7 @@ rte_rawdev_firmware_load(uint16_t dev_id, rte_rawdev_obj_t firmware_image)
 	return dev->dev_ops->firmware_load(dev, firmware_image);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_unload)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_unload);
 int
 rte_rawdev_firmware_unload(uint16_t dev_id)
 {
@@ -407,7 +407,7 @@ rte_rawdev_firmware_unload(uint16_t dev_id)
 	return dev->dev_ops->firmware_unload(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_selftest)
+RTE_EXPORT_SYMBOL(rte_rawdev_selftest);
 int
 rte_rawdev_selftest(uint16_t dev_id)
 {
@@ -419,7 +419,7 @@ rte_rawdev_selftest(uint16_t dev_id)
 	return dev->dev_ops->dev_selftest(dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_start)
+RTE_EXPORT_SYMBOL(rte_rawdev_start);
 int
 rte_rawdev_start(uint16_t dev_id)
 {
@@ -448,7 +448,7 @@ rte_rawdev_start(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_stop)
+RTE_EXPORT_SYMBOL(rte_rawdev_stop);
 void
 rte_rawdev_stop(uint16_t dev_id)
 {
@@ -474,7 +474,7 @@ rte_rawdev_stop(uint16_t dev_id)
 	dev->started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_close)
+RTE_EXPORT_SYMBOL(rte_rawdev_close);
 int
 rte_rawdev_close(uint16_t dev_id)
 {
@@ -495,7 +495,7 @@ rte_rawdev_close(uint16_t dev_id)
 	return dev->dev_ops->dev_close(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_reset)
+RTE_EXPORT_SYMBOL(rte_rawdev_reset);
 int
 rte_rawdev_reset(uint16_t dev_id)
 {
@@ -524,7 +524,7 @@ rte_rawdev_find_free_device_index(void)
 	return RTE_RAWDEV_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_pmd_allocate)
+RTE_EXPORT_SYMBOL(rte_rawdev_pmd_allocate);
 struct rte_rawdev *
 rte_rawdev_pmd_allocate(const char *name, size_t dev_priv_size, int socket_id)
 {
@@ -566,7 +566,7 @@ rte_rawdev_pmd_allocate(const char *name, size_t dev_priv_size, int socket_id)
 	return rawdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_pmd_release)
+RTE_EXPORT_SYMBOL(rte_rawdev_pmd_release);
 int
 rte_rawdev_pmd_release(struct rte_rawdev *rawdev)
 {
diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c
index ac6d464b7f..b9c4e8b2e1 100644
--- a/lib/rcu/rte_rcu_qsbr.c
+++ b/lib/rcu/rte_rcu_qsbr.c
@@ -24,7 +24,7 @@
 	RTE_LOG_LINE_PREFIX(level, RCU, "%s(): ", __func__, __VA_ARGS__)
 
 /* Get the memory size of QSBR variable */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_get_memsize)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_get_memsize);
 size_t
 rte_rcu_qsbr_get_memsize(uint32_t max_threads)
 {
@@ -49,7 +49,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads)
 }
 
 /* Initialize a quiescent state variable */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_init)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_init);
 int
 rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
 {
@@ -81,7 +81,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
 /* Register a reader thread to report its quiescent state
  * on a QS variable.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_register)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_register);
 int
 rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -117,7 +117,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
 /* Remove a reader thread, from the list of threads reporting their
  * quiescent state on a QS variable.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_unregister)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_unregister);
 int
 rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -154,7 +154,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
 }
 
 /* Wait till the reader threads have entered quiescent state. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_synchronize)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_synchronize);
 void
 rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -175,7 +175,7 @@ rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
 }
 
 /* Dump the details of a single quiescent state variable to a file. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dump)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dump);
 int
 rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
 {
@@ -242,7 +242,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
 /* Create a queue used to store the data structure elements that can
  * be freed later. This queue is referred to as 'defer queue'.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_create)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_create);
 struct rte_rcu_qsbr_dq *
 rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params)
 {
@@ -319,7 +319,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params)
 /* Enqueue one resource to the defer queue to free after the grace
  * period is over.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_enqueue)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_enqueue);
 int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e)
 {
 	__rte_rcu_qsbr_dq_elem_t *dq_elem;
@@ -378,7 +378,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e)
 }
 
 /* Reclaim resources from the defer queue. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_reclaim)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_reclaim);
 int
 rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n,
 			unsigned int *freed, unsigned int *pending,
@@ -428,7 +428,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n,
 }
 
 /* Delete a defer queue. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_delete)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_delete);
 int
 rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq)
 {
@@ -454,5 +454,5 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rcu_log_type)
+RTE_EXPORT_SYMBOL(rte_rcu_log_type);
 RTE_LOG_REGISTER_DEFAULT(rte_rcu_log_type, ERR);
diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c
index 8ba797b278..d824c381b0 100644
--- a/lib/regexdev/rte_regexdev.c
+++ b/lib/regexdev/rte_regexdev.c
@@ -14,14 +14,14 @@
 #include "rte_regexdev_driver.h"
 
 static const char *MZ_RTE_REGEXDEV_DATA = "rte_regexdev_data";
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regex_devices, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regex_devices, 22.03);
 struct rte_regexdev rte_regex_devices[RTE_MAX_REGEXDEV_DEVS];
 /* Shared memory between primary and secondary processes. */
 static struct {
 	struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS];
 } *rte_regexdev_shared_data;
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_logtype, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_logtype, 22.03);
 RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO);
 
 static uint16_t
@@ -92,7 +92,7 @@ regexdev_check_name(const char *name)
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_register);
 struct rte_regexdev *
 rte_regexdev_register(const char *name)
 {
@@ -130,14 +130,14 @@ rte_regexdev_register(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_unregister);
 void
 rte_regexdev_unregister(struct rte_regexdev *dev)
 {
 	dev->state = RTE_REGEXDEV_UNUSED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_get_device_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_get_device_by_name);
 struct rte_regexdev *
 rte_regexdev_get_device_by_name(const char *name)
 {
@@ -146,7 +146,7 @@ rte_regexdev_get_device_by_name(const char *name)
 	return regexdev_allocated(name);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_count, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_count, 20.08);
 uint8_t
 rte_regexdev_count(void)
 {
@@ -160,7 +160,7 @@ rte_regexdev_count(void)
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_get_dev_id, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_get_dev_id, 20.08);
 int
 rte_regexdev_get_dev_id(const char *name)
 {
@@ -179,7 +179,7 @@ rte_regexdev_get_dev_id(const char *name)
 	return id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_is_valid_dev, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_is_valid_dev, 22.03);
 int
 rte_regexdev_is_valid_dev(uint16_t dev_id)
 {
@@ -204,14 +204,14 @@ regexdev_info_get(uint8_t dev_id, struct rte_regexdev_info *dev_info)
 
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_info_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_info_get, 20.08);
 int
 rte_regexdev_info_get(uint8_t dev_id, struct rte_regexdev_info *dev_info)
 {
 	return regexdev_info_get(dev_id, dev_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_configure, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_configure, 20.08);
 int
 rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg)
 {
@@ -306,7 +306,7 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_queue_pair_setup, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_queue_pair_setup, 20.08);
 int
 rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			   const struct rte_regexdev_qp_conf *qp_conf)
@@ -339,7 +339,7 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->dev_qp_setup(dev, queue_pair_id, qp_conf);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_start, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_start, 20.08);
 int
 rte_regexdev_start(uint8_t dev_id)
 {
@@ -356,7 +356,7 @@ rte_regexdev_start(uint8_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_stop, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_stop, 20.08);
 int
 rte_regexdev_stop(uint8_t dev_id)
 {
@@ -371,7 +371,7 @@ rte_regexdev_stop(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_close, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_close, 20.08);
 int
 rte_regexdev_close(uint8_t dev_id)
 {
@@ -387,7 +387,7 @@ rte_regexdev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_get, 20.08);
 int
 rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 		      void *attr_value)
@@ -406,7 +406,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 	return dev->dev_ops->dev_attr_get(dev, attr_id, attr_value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_set, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_set, 20.08);
 int
 rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 		      const void *attr_value)
@@ -425,7 +425,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 	return dev->dev_ops->dev_attr_set(dev, attr_id, attr_value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_update, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_update, 20.08);
 int
 rte_regexdev_rule_db_update(uint8_t dev_id,
 			    const struct rte_regexdev_rule *rules,
@@ -445,7 +445,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id,
 	return dev->dev_ops->dev_rule_db_update(dev, rules, nb_rules);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_compile_activate, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_compile_activate, 20.08);
 int
 rte_regexdev_rule_db_compile_activate(uint8_t dev_id)
 {
@@ -458,7 +458,7 @@ rte_regexdev_rule_db_compile_activate(uint8_t dev_id)
 	return dev->dev_ops->dev_rule_db_compile_activate(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_import, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_import, 20.08);
 int
 rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db,
 			    uint32_t rule_db_len)
@@ -477,7 +477,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db,
 	return dev->dev_ops->dev_db_import(dev, rule_db, rule_db_len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_export, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_export, 20.08);
 int
 rte_regexdev_rule_db_export(uint8_t dev_id, char *rule_db)
 {
@@ -490,7 +490,7 @@ rte_regexdev_rule_db_export(uint8_t dev_id, char *rule_db)
 	return dev->dev_ops->dev_db_export(dev, rule_db);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_names_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_names_get, 20.08);
 int
 rte_regexdev_xstats_names_get(uint8_t dev_id,
 			      struct rte_regexdev_xstats_map *xstats_map)
@@ -509,7 +509,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id,
 	return dev->dev_ops->dev_xstats_names_get(dev, xstats_map);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_get, 20.08);
 int
 rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids,
 			uint64_t *values, uint16_t n)
@@ -531,7 +531,7 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids,
 	return dev->dev_ops->dev_xstats_get(dev, ids, values, n);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_by_name_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_by_name_get, 20.08);
 int
 rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name,
 				uint16_t *id, uint64_t *value)
@@ -557,7 +557,7 @@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name,
 	return dev->dev_ops->dev_xstats_by_name_get(dev, name, id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_reset, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_reset, 20.08);
 int
 rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids,
 			  uint16_t nb_ids)
@@ -575,7 +575,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids,
 	return dev->dev_ops->dev_xstats_reset(dev, ids, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_selftest, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_selftest, 20.08);
 int
 rte_regexdev_selftest(uint8_t dev_id)
 {
@@ -588,7 +588,7 @@ rte_regexdev_selftest(uint8_t dev_id)
 	return dev->dev_ops->dev_selftest(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_dump, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_dump, 20.08);
 int
 rte_regexdev_dump(uint8_t dev_id, FILE *f)
 {
diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index be06530860..e2d8114c2f 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -35,7 +35,7 @@ EAL_REGISTER_TAILQ(rte_reorder_tailq)
 #define RTE_REORDER_NAMESIZE 32
 
 #define RTE_REORDER_SEQN_DYNFIELD_NAME "rte_reorder_seqn_dynfield"
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_seqn_dynfield_offset, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_seqn_dynfield_offset, 20.11);
 int rte_reorder_seqn_dynfield_offset = -1;
 
 /* A generic circular buffer */
@@ -61,14 +61,14 @@ struct __rte_cache_aligned rte_reorder_buffer {
 static void
 rte_reorder_free_mbufs(struct rte_reorder_buffer *b);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_memory_footprint_get, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_memory_footprint_get, 23.07);
 unsigned int
 rte_reorder_memory_footprint_get(unsigned int size)
 {
 	return sizeof(struct rte_reorder_buffer) + (2 * size * sizeof(struct rte_mbuf *));
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_init)
+RTE_EXPORT_SYMBOL(rte_reorder_init);
 struct rte_reorder_buffer *
 rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize,
 		const char *name, unsigned int size)
@@ -158,7 +158,7 @@ rte_reorder_entry_insert(struct rte_tailq_entry *new_te)
 	return te;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_create)
+RTE_EXPORT_SYMBOL(rte_reorder_create);
 struct rte_reorder_buffer*
 rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
 {
@@ -215,7 +215,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
 	return b;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_reset)
+RTE_EXPORT_SYMBOL(rte_reorder_reset);
 void
 rte_reorder_reset(struct rte_reorder_buffer *b)
 {
@@ -239,7 +239,7 @@ rte_reorder_free_mbufs(struct rte_reorder_buffer *b)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_free)
+RTE_EXPORT_SYMBOL(rte_reorder_free);
 void
 rte_reorder_free(struct rte_reorder_buffer *b)
 {
@@ -274,7 +274,7 @@ rte_reorder_free(struct rte_reorder_buffer *b)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_find_existing)
+RTE_EXPORT_SYMBOL(rte_reorder_find_existing);
 struct rte_reorder_buffer *
 rte_reorder_find_existing(const char *name)
 {
@@ -356,7 +356,7 @@ rte_reorder_fill_overflow(struct rte_reorder_buffer *b, unsigned n)
 	return order_head_adv;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_insert)
+RTE_EXPORT_SYMBOL(rte_reorder_insert);
 int
 rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf)
 {
@@ -423,7 +423,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_drain)
+RTE_EXPORT_SYMBOL(rte_reorder_drain);
 unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned max_mbufs)
@@ -482,7 +482,7 @@ ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
 	return low;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_drain_up_to_seqn, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_drain_up_to_seqn, 23.03);
 unsigned int
 rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
@@ -553,7 +553,7 @@ rte_reorder_is_empty(const struct rte_reorder_buffer *b)
 	return true;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_min_seqn_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_min_seqn_set, 23.03);
 unsigned int
 rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, rte_reorder_seqn_t min_seqn)
 {
diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c
index 046db131ca..216ac4180c 100644
--- a/lib/rib/rte_rib.c
+++ b/lib/rib/rte_rib.c
@@ -102,7 +102,7 @@ node_free(struct rte_rib *rib, struct rte_rib_node *ent)
 	rte_mempool_put(rib->node_pool, ent);
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup)
+RTE_EXPORT_SYMBOL(rte_rib_lookup);
 struct rte_rib_node *
 rte_rib_lookup(struct rte_rib *rib, uint32_t ip)
 {
@@ -122,7 +122,7 @@ rte_rib_lookup(struct rte_rib *rib, uint32_t ip)
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup_parent)
+RTE_EXPORT_SYMBOL(rte_rib_lookup_parent);
 struct rte_rib_node *
 rte_rib_lookup_parent(struct rte_rib_node *ent)
 {
@@ -154,7 +154,7 @@ __rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup_exact)
+RTE_EXPORT_SYMBOL(rte_rib_lookup_exact);
 struct rte_rib_node *
 rte_rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -172,7 +172,7 @@ rte_rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
  *  for a given in args ip/depth prefix
  *  last = NULL means the first invocation
  */
-RTE_EXPORT_SYMBOL(rte_rib_get_nxt)
+RTE_EXPORT_SYMBOL(rte_rib_get_nxt);
 struct rte_rib_node *
 rte_rib_get_nxt(struct rte_rib *rib, uint32_t ip,
 	uint8_t depth, struct rte_rib_node *last, int flag)
@@ -213,7 +213,7 @@ rte_rib_get_nxt(struct rte_rib *rib, uint32_t ip,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_remove)
+RTE_EXPORT_SYMBOL(rte_rib_remove);
 void
 rte_rib_remove(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -246,7 +246,7 @@ rte_rib_remove(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_insert)
+RTE_EXPORT_SYMBOL(rte_rib_insert);
 struct rte_rib_node *
 rte_rib_insert(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -353,7 +353,7 @@ rte_rib_insert(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	return new_node;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_ip)
+RTE_EXPORT_SYMBOL(rte_rib_get_ip);
 int
 rte_rib_get_ip(const struct rte_rib_node *node, uint32_t *ip)
 {
@@ -365,7 +365,7 @@ rte_rib_get_ip(const struct rte_rib_node *node, uint32_t *ip)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_depth)
+RTE_EXPORT_SYMBOL(rte_rib_get_depth);
 int
 rte_rib_get_depth(const struct rte_rib_node *node, uint8_t *depth)
 {
@@ -377,14 +377,14 @@ rte_rib_get_depth(const struct rte_rib_node *node, uint8_t *depth)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_ext)
+RTE_EXPORT_SYMBOL(rte_rib_get_ext);
 void *
 rte_rib_get_ext(struct rte_rib_node *node)
 {
 	return (node == NULL) ? NULL : &node->ext[0];
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_nh)
+RTE_EXPORT_SYMBOL(rte_rib_get_nh);
 int
 rte_rib_get_nh(const struct rte_rib_node *node, uint64_t *nh)
 {
@@ -396,7 +396,7 @@ rte_rib_get_nh(const struct rte_rib_node *node, uint64_t *nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_set_nh)
+RTE_EXPORT_SYMBOL(rte_rib_set_nh);
 int
 rte_rib_set_nh(struct rte_rib_node *node, uint64_t nh)
 {
@@ -408,7 +408,7 @@ rte_rib_set_nh(struct rte_rib_node *node, uint64_t nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_create)
+RTE_EXPORT_SYMBOL(rte_rib_create);
 struct rte_rib *
 rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
 {
@@ -490,7 +490,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_find_existing)
+RTE_EXPORT_SYMBOL(rte_rib_find_existing);
 struct rte_rib *
 rte_rib_find_existing(const char *name)
 {
@@ -516,7 +516,7 @@ rte_rib_find_existing(const char *name)
 	return rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_free)
+RTE_EXPORT_SYMBOL(rte_rib_free);
 void
 rte_rib_free(struct rte_rib *rib)
 {
diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c
index ded5fd044f..86e1d8f1cc 100644
--- a/lib/rib/rte_rib6.c
+++ b/lib/rib/rte_rib6.c
@@ -115,7 +115,7 @@ node_free(struct rte_rib6 *rib, struct rte_rib6_node *ent)
 	rte_mempool_put(rib->node_pool, ent);
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup);
 struct rte_rib6_node *
 rte_rib6_lookup(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip)
@@ -137,7 +137,7 @@ rte_rib6_lookup(struct rte_rib6 *rib,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup_parent)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup_parent);
 struct rte_rib6_node *
 rte_rib6_lookup_parent(struct rte_rib6_node *ent)
 {
@@ -153,7 +153,7 @@ rte_rib6_lookup_parent(struct rte_rib6_node *ent)
 	return tmp;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup_exact)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup_exact);
 struct rte_rib6_node *
 rte_rib6_lookup_exact(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -191,7 +191,7 @@ rte_rib6_lookup_exact(struct rte_rib6 *rib,
  *  for a given in args ip/depth prefix
  *  last = NULL means the first invocation
  */
-RTE_EXPORT_SYMBOL(rte_rib6_get_nxt)
+RTE_EXPORT_SYMBOL(rte_rib6_get_nxt);
 struct rte_rib6_node *
 rte_rib6_get_nxt(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip,
@@ -237,7 +237,7 @@ rte_rib6_get_nxt(struct rte_rib6 *rib,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_remove)
+RTE_EXPORT_SYMBOL(rte_rib6_remove);
 void
 rte_rib6_remove(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -271,7 +271,7 @@ rte_rib6_remove(struct rte_rib6 *rib,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_insert)
+RTE_EXPORT_SYMBOL(rte_rib6_insert);
 struct rte_rib6_node *
 rte_rib6_insert(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -399,7 +399,7 @@ rte_rib6_insert(struct rte_rib6 *rib,
 	return new_node;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_ip)
+RTE_EXPORT_SYMBOL(rte_rib6_get_ip);
 int
 rte_rib6_get_ip(const struct rte_rib6_node *node,
 		struct rte_ipv6_addr *ip)
@@ -412,7 +412,7 @@ rte_rib6_get_ip(const struct rte_rib6_node *node,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_depth)
+RTE_EXPORT_SYMBOL(rte_rib6_get_depth);
 int
 rte_rib6_get_depth(const struct rte_rib6_node *node, uint8_t *depth)
 {
@@ -424,14 +424,14 @@ rte_rib6_get_depth(const struct rte_rib6_node *node, uint8_t *depth)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_ext)
+RTE_EXPORT_SYMBOL(rte_rib6_get_ext);
 void *
 rte_rib6_get_ext(struct rte_rib6_node *node)
 {
 	return (node == NULL) ? NULL : &node->ext[0];
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_nh)
+RTE_EXPORT_SYMBOL(rte_rib6_get_nh);
 int
 rte_rib6_get_nh(const struct rte_rib6_node *node, uint64_t *nh)
 {
@@ -443,7 +443,7 @@ rte_rib6_get_nh(const struct rte_rib6_node *node, uint64_t *nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_set_nh)
+RTE_EXPORT_SYMBOL(rte_rib6_set_nh);
 int
 rte_rib6_set_nh(struct rte_rib6_node *node, uint64_t nh)
 {
@@ -455,7 +455,7 @@ rte_rib6_set_nh(struct rte_rib6_node *node, uint64_t nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_create)
+RTE_EXPORT_SYMBOL(rte_rib6_create);
 struct rte_rib6 *
 rte_rib6_create(const char *name, int socket_id,
 		const struct rte_rib6_conf *conf)
@@ -539,7 +539,7 @@ rte_rib6_create(const char *name, int socket_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_find_existing)
+RTE_EXPORT_SYMBOL(rte_rib6_find_existing);
 struct rte_rib6 *
 rte_rib6_find_existing(const char *name)
 {
@@ -570,7 +570,7 @@ rte_rib6_find_existing(const char *name)
 	return rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_free)
+RTE_EXPORT_SYMBOL(rte_rib6_free);
 void
 rte_rib6_free(struct rte_rib6 *rib)
 {
diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index edd63aa535..548ba059fa 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -53,7 +53,7 @@ EAL_REGISTER_TAILQ(rte_ring_tailq)
 #define HTD_MAX_DEF	8
 
 /* return the size of memory occupied by a ring */
-RTE_EXPORT_SYMBOL(rte_ring_get_memsize_elem)
+RTE_EXPORT_SYMBOL(rte_ring_get_memsize_elem);
 ssize_t
 rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
 {
@@ -81,7 +81,7 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
 }
 
 /* return the size of memory occupied by a ring */
-RTE_EXPORT_SYMBOL(rte_ring_get_memsize)
+RTE_EXPORT_SYMBOL(rte_ring_get_memsize);
 ssize_t
 rte_ring_get_memsize(unsigned int count)
 {
@@ -121,7 +121,7 @@ reset_headtail(void *p)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ring_reset)
+RTE_EXPORT_SYMBOL(rte_ring_reset);
 void
 rte_ring_reset(struct rte_ring *r)
 {
@@ -180,7 +180,7 @@ get_sync_type(uint32_t flags, enum rte_ring_sync_type *prod_st,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ring_init)
+RTE_EXPORT_SYMBOL(rte_ring_init);
 int
 rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
 	unsigned int flags)
@@ -248,7 +248,7 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
 }
 
 /* create the ring for a given element size */
-RTE_EXPORT_SYMBOL(rte_ring_create_elem)
+RTE_EXPORT_SYMBOL(rte_ring_create_elem);
 struct rte_ring *
 rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
 		int socket_id, unsigned int flags)
@@ -318,7 +318,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
 }
 
 /* create the ring */
-RTE_EXPORT_SYMBOL(rte_ring_create)
+RTE_EXPORT_SYMBOL(rte_ring_create);
 struct rte_ring *
 rte_ring_create(const char *name, unsigned int count, int socket_id,
 		unsigned int flags)
@@ -328,7 +328,7 @@ rte_ring_create(const char *name, unsigned int count, int socket_id,
 }
 
 /* free the ring */
-RTE_EXPORT_SYMBOL(rte_ring_free)
+RTE_EXPORT_SYMBOL(rte_ring_free);
 void
 rte_ring_free(struct rte_ring *r)
 {
@@ -422,7 +422,7 @@ ring_dump_hts_headtail(FILE *f, const char *prefix,
 	fprintf(f, "%stail=%"PRIu32"\n", prefix, hts->ht.pos.tail);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ring_headtail_dump, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ring_headtail_dump, 25.03);
 void
 rte_ring_headtail_dump(FILE *f, const char *prefix,
 		const struct rte_ring_headtail *r)
@@ -451,7 +451,7 @@ rte_ring_headtail_dump(FILE *f, const char *prefix,
 }
 
 /* dump the status of the ring on the console */
-RTE_EXPORT_SYMBOL(rte_ring_dump)
+RTE_EXPORT_SYMBOL(rte_ring_dump);
 void
 rte_ring_dump(FILE *f, const struct rte_ring *r)
 {
@@ -470,7 +470,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 }
 
 /* dump the status of all rings on the console */
-RTE_EXPORT_SYMBOL(rte_ring_list_dump)
+RTE_EXPORT_SYMBOL(rte_ring_list_dump);
 void
 rte_ring_list_dump(FILE *f)
 {
@@ -489,7 +489,7 @@ rte_ring_list_dump(FILE *f)
 }
 
 /* search a ring from its name */
-RTE_EXPORT_SYMBOL(rte_ring_lookup)
+RTE_EXPORT_SYMBOL(rte_ring_lookup);
 struct rte_ring *
 rte_ring_lookup(const char *name)
 {
diff --git a/lib/ring/rte_soring.c b/lib/ring/rte_soring.c
index 0d8abba69c..88dc808362 100644
--- a/lib/ring/rte_soring.c
+++ b/lib/ring/rte_soring.c
@@ -92,7 +92,7 @@ soring_dump_stage_headtail(FILE *f, const char *prefix,
 	fprintf(f, "%shead=%"PRIu32"\n", prefix, st->sht.head);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dump, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dump, 25.03);
 void
 rte_soring_dump(FILE *f, const struct rte_soring *r)
 {
@@ -120,7 +120,7 @@ rte_soring_dump(FILE *f, const struct rte_soring *r)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_get_memsize, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_get_memsize, 25.03);
 ssize_t
 rte_soring_get_memsize(const struct rte_soring_param *prm)
 {
@@ -154,7 +154,7 @@ soring_compilation_checks(void)
 		offsetof(struct soring_stage_headtail, unused));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_init, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_init, 25.03);
 int
 rte_soring_init(struct rte_soring *r, const struct rte_soring_param *prm)
 {
diff --git a/lib/ring/soring.c b/lib/ring/soring.c
index 797484d6bf..f8a901c3e9 100644
--- a/lib/ring/soring.c
+++ b/lib/ring/soring.c
@@ -491,7 +491,7 @@ soring_release(struct rte_soring *r, const void *objs,
  * Public functions (data-path) start here.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_release, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_release, 25.03);
 void
 rte_soring_release(struct rte_soring *r, const void *objs,
 	uint32_t stage, uint32_t n, uint32_t ftoken)
@@ -500,7 +500,7 @@ rte_soring_release(struct rte_soring *r, const void *objs,
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_releasx, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_releasx, 25.03);
 void
 rte_soring_releasx(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t stage, uint32_t n, uint32_t ftoken)
@@ -508,7 +508,7 @@ rte_soring_releasx(struct rte_soring *r, const void *objs,
 	soring_release(r, objs, meta, stage, n, ftoken);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_bulk, 25.03);
 uint32_t
 rte_soring_enqueue_bulk(struct rte_soring *r, const void *objs, uint32_t n,
 	uint32_t *free_space)
@@ -517,7 +517,7 @@ rte_soring_enqueue_bulk(struct rte_soring *r, const void *objs, uint32_t n,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_bulk, 25.03);
 uint32_t
 rte_soring_enqueux_bulk(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t n, uint32_t *free_space)
@@ -526,7 +526,7 @@ rte_soring_enqueux_bulk(struct rte_soring *r, const void *objs,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_burst, 25.03);
 uint32_t
 rte_soring_enqueue_burst(struct rte_soring *r, const void *objs, uint32_t n,
 	uint32_t *free_space)
@@ -535,7 +535,7 @@ rte_soring_enqueue_burst(struct rte_soring *r, const void *objs, uint32_t n,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_burst, 25.03);
 uint32_t
 rte_soring_enqueux_burst(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t n, uint32_t *free_space)
@@ -544,7 +544,7 @@ rte_soring_enqueux_burst(struct rte_soring *r, const void *objs,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_bulk, 25.03);
 uint32_t
 rte_soring_dequeue_bulk(struct rte_soring *r, void *objs, uint32_t num,
 	uint32_t *available)
@@ -553,7 +553,7 @@ rte_soring_dequeue_bulk(struct rte_soring *r, void *objs, uint32_t num,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_bulk, 25.03);
 uint32_t
 rte_soring_dequeux_bulk(struct rte_soring *r, void *objs, void *meta,
 	uint32_t num, uint32_t *available)
@@ -562,7 +562,7 @@ rte_soring_dequeux_bulk(struct rte_soring *r, void *objs, void *meta,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_burst, 25.03);
 uint32_t
 rte_soring_dequeue_burst(struct rte_soring *r, void *objs, uint32_t num,
 	uint32_t *available)
@@ -571,7 +571,7 @@ rte_soring_dequeue_burst(struct rte_soring *r, void *objs, uint32_t num,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_burst, 25.03);
 uint32_t
 rte_soring_dequeux_burst(struct rte_soring *r, void *objs, void *meta,
 	uint32_t num, uint32_t *available)
@@ -580,7 +580,7 @@ rte_soring_dequeux_burst(struct rte_soring *r, void *objs, void *meta,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_bulk, 25.03);
 uint32_t
 rte_soring_acquire_bulk(struct rte_soring *r, void *objs,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -589,7 +589,7 @@ rte_soring_acquire_bulk(struct rte_soring *r, void *objs,
 			RTE_RING_QUEUE_FIXED, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_bulk, 25.03);
 uint32_t
 rte_soring_acquirx_bulk(struct rte_soring *r, void *objs, void *meta,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -598,7 +598,7 @@ rte_soring_acquirx_bulk(struct rte_soring *r, void *objs, void *meta,
 			RTE_RING_QUEUE_FIXED, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_burst, 25.03);
 uint32_t
 rte_soring_acquire_burst(struct rte_soring *r, void *objs,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -607,7 +607,7 @@ rte_soring_acquire_burst(struct rte_soring *r, void *objs,
 			RTE_RING_QUEUE_VARIABLE, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_burst, 25.03);
 uint32_t
 rte_soring_acquirx_burst(struct rte_soring *r, void *objs, void *meta,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -616,7 +616,7 @@ rte_soring_acquirx_burst(struct rte_soring *r, void *objs, void *meta,
 			RTE_RING_QUEUE_VARIABLE, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_count, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_count, 25.03);
 unsigned int
 rte_soring_count(const struct rte_soring *r)
 {
@@ -626,7 +626,7 @@ rte_soring_count(const struct rte_soring *r)
 	return (count > r->capacity) ? r->capacity : count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_free_count, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_free_count, 25.03);
 unsigned int
 rte_soring_free_count(const struct rte_soring *r)
 {
diff --git a/lib/sched/rte_approx.c b/lib/sched/rte_approx.c
index 86c7d1d3fb..bd935a7e36 100644
--- a/lib/sched/rte_approx.c
+++ b/lib/sched/rte_approx.c
@@ -140,7 +140,7 @@ find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num, uint32_t de
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_approx)
+RTE_EXPORT_SYMBOL(rte_approx);
 int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
 {
 	uint32_t alpha_num, d_num, denum;
diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c
index b5d8988894..f483797907 100644
--- a/lib/sched/rte_pie.c
+++ b/lib/sched/rte_pie.c
@@ -10,7 +10,7 @@
 #include "rte_sched_log.h"
 #include "rte_pie.h"
 
-RTE_EXPORT_SYMBOL(rte_pie_rt_data_init)
+RTE_EXPORT_SYMBOL(rte_pie_rt_data_init);
 int
 rte_pie_rt_data_init(struct rte_pie *pie)
 {
@@ -24,7 +24,7 @@ rte_pie_rt_data_init(struct rte_pie *pie)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pie_config_init)
+RTE_EXPORT_SYMBOL(rte_pie_config_init);
 int
 rte_pie_config_init(struct rte_pie_config *pie_cfg,
 	const uint16_t qdelay_ref,
diff --git a/lib/sched/rte_red.c b/lib/sched/rte_red.c
index d7534d0bee..f8d1074695 100644
--- a/lib/sched/rte_red.c
+++ b/lib/sched/rte_red.c
@@ -9,22 +9,22 @@
 #include <rte_common.h>
 
 static int rte_red_init_done = 0;     /**< Flag to indicate that global initialisation is done */
-RTE_EXPORT_SYMBOL(rte_red_rand_val)
+RTE_EXPORT_SYMBOL(rte_red_rand_val);
 uint32_t rte_red_rand_val = 0;        /**< Random value cache */
-RTE_EXPORT_SYMBOL(rte_red_rand_seed)
+RTE_EXPORT_SYMBOL(rte_red_rand_seed);
 uint32_t rte_red_rand_seed = 0;       /**< Seed for random number generation */
 
 /**
  * table[i] = log2(1-Wq) * Scale * -1
  *       Wq = 1/(2^i)
  */
-RTE_EXPORT_SYMBOL(rte_red_log2_1_minus_Wq)
+RTE_EXPORT_SYMBOL(rte_red_log2_1_minus_Wq);
 uint16_t rte_red_log2_1_minus_Wq[RTE_RED_WQ_LOG2_NUM];
 
 /**
  * table[i] = 2^(i/16) * Scale
  */
-RTE_EXPORT_SYMBOL(rte_red_pow2_frac_inv)
+RTE_EXPORT_SYMBOL(rte_red_pow2_frac_inv);
 uint16_t rte_red_pow2_frac_inv[16];
 
 /**
@@ -69,7 +69,7 @@ __rte_red_init_tables(void)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_red_rt_data_init)
+RTE_EXPORT_SYMBOL(rte_red_rt_data_init);
 int
 rte_red_rt_data_init(struct rte_red *red)
 {
@@ -82,7 +82,7 @@ rte_red_rt_data_init(struct rte_red *red)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_red_config_init)
+RTE_EXPORT_SYMBOL(rte_red_config_init);
 int
 rte_red_config_init(struct rte_red_config *red_cfg,
 	const uint16_t wq_log2,
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
index 453f935ac8..9f53bed557 100644
--- a/lib/sched/rte_sched.c
+++ b/lib/sched/rte_sched.c
@@ -884,7 +884,7 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_get_memory_footprint)
+RTE_EXPORT_SYMBOL(rte_sched_port_get_memory_footprint);
 uint32_t
 rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
 	struct rte_sched_subport_params **subport_params)
@@ -928,7 +928,7 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
 	return size0 + size1;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_config)
+RTE_EXPORT_SYMBOL(rte_sched_port_config);
 struct rte_sched_port *
 rte_sched_port_config(struct rte_sched_port_params *params)
 {
@@ -1049,7 +1049,7 @@ rte_sched_subport_free(struct rte_sched_port *port,
 	rte_free(subport);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_free)
+RTE_EXPORT_SYMBOL(rte_sched_port_free);
 void
 rte_sched_port_free(struct rte_sched_port *port)
 {
@@ -1163,7 +1163,7 @@ rte_sched_cman_config(struct rte_sched_port *port,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_tc_ov_config)
+RTE_EXPORT_SYMBOL(rte_sched_subport_tc_ov_config);
 int
 rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1189,7 +1189,7 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_config)
+RTE_EXPORT_SYMBOL(rte_sched_subport_config);
 int
 rte_sched_subport_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1383,7 +1383,7 @@ rte_sched_subport_config(struct rte_sched_port *port,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_pipe_config)
+RTE_EXPORT_SYMBOL(rte_sched_pipe_config);
 int
 rte_sched_pipe_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1508,7 +1508,7 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_pipe_profile_add)
+RTE_EXPORT_SYMBOL(rte_sched_subport_pipe_profile_add);
 int
 rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1574,7 +1574,7 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_subport_profile_add)
+RTE_EXPORT_SYMBOL(rte_sched_port_subport_profile_add);
 int
 rte_sched_port_subport_profile_add(struct rte_sched_port *port,
 	struct rte_sched_subport_profile_params *params,
@@ -1656,7 +1656,7 @@ rte_sched_port_qindex(struct rte_sched_port *port,
 		(RTE_SCHED_QUEUES_PER_PIPE - 1));
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_write)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_write);
 void
 rte_sched_port_pkt_write(struct rte_sched_port *port,
 			 struct rte_mbuf *pkt,
@@ -1670,7 +1670,7 @@ rte_sched_port_pkt_write(struct rte_sched_port *port,
 	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_tree_path)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_tree_path);
 void
 rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
 				  const struct rte_mbuf *pkt,
@@ -1686,14 +1686,14 @@ rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
 	*queue = rte_sched_port_tc_queue(port, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_color)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_color);
 enum rte_color
 rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
 {
 	return (enum rte_color)rte_mbuf_sched_color_get(pkt);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_read_stats)
+RTE_EXPORT_SYMBOL(rte_sched_subport_read_stats);
 int
 rte_sched_subport_read_stats(struct rte_sched_port *port,
 			     uint32_t subport_id,
@@ -1739,7 +1739,7 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_queue_read_stats)
+RTE_EXPORT_SYMBOL(rte_sched_queue_read_stats);
 int
 rte_sched_queue_read_stats(struct rte_sched_port *port,
 	uint32_t queue_id,
@@ -2055,7 +2055,7 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port,
  * ----->|_______|----->|_______|----->|_______|----->|_______|----->
  *   p01            p11            p21            p31
  */
-RTE_EXPORT_SYMBOL(rte_sched_port_enqueue)
+RTE_EXPORT_SYMBOL(rte_sched_port_enqueue);
 int
 rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts,
 		       uint32_t n_pkts)
@@ -2967,7 +2967,7 @@ rte_sched_port_exceptions(struct rte_sched_subport *subport, int second_pass)
 	return exceptions;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_dequeue)
+RTE_EXPORT_SYMBOL(rte_sched_port_dequeue);
 int
 rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts)
 {
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index c47fe44da0..dbb6773758 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -31,12 +31,12 @@
 #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
 #define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
 
-RTE_EXPORT_SYMBOL(rte_security_dynfield_offset)
+RTE_EXPORT_SYMBOL(rte_security_dynfield_offset);
 int rte_security_dynfield_offset = -1;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_oop_dynfield_offset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_oop_dynfield_offset, 23.11);
 int rte_security_oop_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_security_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_security_dynfield_register);
 int
 rte_security_dynfield_register(void)
 {
@@ -50,7 +50,7 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_security_oop_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_security_oop_dynfield_register);
 int
 rte_security_oop_dynfield_register(void)
 {
@@ -65,7 +65,7 @@ rte_security_oop_dynfield_register(void)
 	return rte_security_oop_dynfield_offset;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_create)
+RTE_EXPORT_SYMBOL(rte_security_session_create);
 void *
 rte_security_session_create(void *ctx,
 			    struct rte_security_session_conf *conf,
@@ -100,7 +100,7 @@ rte_security_session_create(void *ctx,
 	return (void *)sess;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_update)
+RTE_EXPORT_SYMBOL(rte_security_session_update);
 int
 rte_security_session_update(void *ctx, void *sess, struct rte_security_session_conf *conf)
 {
@@ -114,7 +114,7 @@ rte_security_session_update(void *ctx, void *sess, struct rte_security_session_c
 	return instance->ops->session_update(instance->device, sess, conf);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_get_size)
+RTE_EXPORT_SYMBOL(rte_security_session_get_size);
 unsigned int
 rte_security_session_get_size(void *ctx)
 {
@@ -126,7 +126,7 @@ rte_security_session_get_size(void *ctx)
 			instance->ops->session_get_size(instance->device));
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_session_stats_get);
 int
 rte_security_session_stats_get(void *ctx, void *sess, struct rte_security_stats *stats)
 {
@@ -140,7 +140,7 @@ rte_security_session_stats_get(void *ctx, void *sess, struct rte_security_stats
 	return instance->ops->session_stats_get(instance->device, sess, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_destroy)
+RTE_EXPORT_SYMBOL(rte_security_session_destroy);
 int
 rte_security_session_destroy(void *ctx, void *sess)
 {
@@ -163,7 +163,7 @@ rte_security_session_destroy(void *ctx, void *sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_create)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_create);
 int
 rte_security_macsec_sc_create(void *ctx, struct rte_security_macsec_sc *conf)
 {
@@ -180,7 +180,7 @@ rte_security_macsec_sc_create(void *ctx, struct rte_security_macsec_sc *conf)
 	return sc_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_create)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_create);
 int
 rte_security_macsec_sa_create(void *ctx, struct rte_security_macsec_sa *conf)
 {
@@ -197,7 +197,7 @@ rte_security_macsec_sa_create(void *ctx, struct rte_security_macsec_sa *conf)
 	return sa_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_destroy)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_destroy);
 int
 rte_security_macsec_sc_destroy(void *ctx, uint16_t sc_id,
 			       enum rte_security_macsec_direction dir)
@@ -217,7 +217,7 @@ rte_security_macsec_sc_destroy(void *ctx, uint16_t sc_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_destroy)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_destroy);
 int
 rte_security_macsec_sa_destroy(void *ctx, uint16_t sa_id,
 			       enum rte_security_macsec_direction dir)
@@ -237,7 +237,7 @@ rte_security_macsec_sa_destroy(void *ctx, uint16_t sa_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_stats_get);
 int
 rte_security_macsec_sc_stats_get(void *ctx, uint16_t sc_id,
 				 enum rte_security_macsec_direction dir,
@@ -251,7 +251,7 @@ rte_security_macsec_sc_stats_get(void *ctx, uint16_t sc_id,
 	return instance->ops->macsec_sc_stats_get(instance->device, sc_id, dir, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_stats_get);
 int
 rte_security_macsec_sa_stats_get(void *ctx, uint16_t sa_id,
 				 enum rte_security_macsec_direction dir,
@@ -265,7 +265,7 @@ rte_security_macsec_sa_stats_get(void *ctx, uint16_t sa_id,
 	return instance->ops->macsec_sa_stats_get(instance->device, sa_id, dir, stats);
 }
 
-RTE_EXPORT_SYMBOL(__rte_security_set_pkt_metadata)
+RTE_EXPORT_SYMBOL(__rte_security_set_pkt_metadata);
 int
 __rte_security_set_pkt_metadata(void *ctx, void *sess, struct rte_mbuf *m, void *params)
 {
@@ -280,7 +280,7 @@ __rte_security_set_pkt_metadata(void *ctx, void *sess, struct rte_mbuf *m, void
 	return instance->ops->set_pkt_metadata(instance->device, sess, m, params);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_security_capabilities_get);
 const struct rte_security_capability *
 rte_security_capabilities_get(void *ctx)
 {
@@ -291,7 +291,7 @@ rte_security_capabilities_get(void *ctx)
 	return instance->ops->capabilities_get(instance->device);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_capability_get)
+RTE_EXPORT_SYMBOL(rte_security_capability_get);
 const struct rte_security_capability *
 rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 {
@@ -344,7 +344,7 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_rx_inject_configure, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_rx_inject_configure, 23.11);
 int
 rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
 {
@@ -357,7 +357,7 @@ rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
 	return instance->ops->rx_inject_configure(instance->device, port_id, enable);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_inb_pkt_rx_inject, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_inb_pkt_rx_inject, 23.11);
 uint16_t
 rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
 			       uint16_t nb_pkts)
diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c
index 4c78fe4b4b..2fcfd57204 100644
--- a/lib/stack/rte_stack.c
+++ b/lib/stack/rte_stack.c
@@ -45,7 +45,7 @@ rte_stack_get_memsize(unsigned int count, uint32_t flags)
 		return rte_stack_std_get_memsize(count);
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_create)
+RTE_EXPORT_SYMBOL(rte_stack_create);
 struct rte_stack *
 rte_stack_create(const char *name, unsigned int count, int socket_id,
 		 uint32_t flags)
@@ -131,7 +131,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
 	return s;
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_free)
+RTE_EXPORT_SYMBOL(rte_stack_free);
 void
 rte_stack_free(struct rte_stack *s)
 {
@@ -164,7 +164,7 @@ rte_stack_free(struct rte_stack *s)
 	rte_memzone_free(s->memzone);
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_lookup)
+RTE_EXPORT_SYMBOL(rte_stack_lookup);
 struct rte_stack *
 rte_stack_lookup(const char *name)
 {
diff --git a/lib/table/rte_swx_table_em.c b/lib/table/rte_swx_table_em.c
index 4ec54cb635..a8a5ee1b75 100644
--- a/lib/table/rte_swx_table_em.c
+++ b/lib/table/rte_swx_table_em.c
@@ -648,7 +648,7 @@ table_footprint(struct rte_swx_table_params *params,
 	return memory_footprint;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_unoptimized_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_unoptimized_ops, 20.11);
 struct rte_swx_table_ops rte_swx_table_exact_match_unoptimized_ops = {
 	.footprint_get = table_footprint,
 	.mailbox_size_get = table_mailbox_size_get_unoptimized,
@@ -659,7 +659,7 @@ struct rte_swx_table_ops rte_swx_table_exact_match_unoptimized_ops = {
 	.free = table_free,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_ops, 20.11);
 struct rte_swx_table_ops rte_swx_table_exact_match_ops = {
 	.footprint_get = table_footprint,
 	.mailbox_size_get = table_mailbox_size_get,
diff --git a/lib/table/rte_swx_table_learner.c b/lib/table/rte_swx_table_learner.c
index 2d61bceeaf..03ba4173a4 100644
--- a/lib/table/rte_swx_table_learner.c
+++ b/lib/table/rte_swx_table_learner.c
@@ -273,7 +273,7 @@ table_entry_id_get(struct table *t, struct table_bucket *b, size_t bucket_key_po
 	return (bucket_id << TABLE_KEYS_PER_BUCKET_LOG2) + bucket_key_pos;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_footprint_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_footprint_get, 21.11);
 uint64_t
 rte_swx_table_learner_footprint_get(struct rte_swx_table_learner_params *params)
 {
@@ -285,7 +285,7 @@ rte_swx_table_learner_footprint_get(struct rte_swx_table_learner_params *params)
 	return status ? 0 : p.total_size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_create, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_create, 21.11);
 void *
 rte_swx_table_learner_create(struct rte_swx_table_learner_params *params, int numa_node)
 {
@@ -309,7 +309,7 @@ rte_swx_table_learner_create(struct rte_swx_table_learner_params *params, int nu
 	return t;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_free, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_free, 21.11);
 void
 rte_swx_table_learner_free(void *table)
 {
@@ -321,7 +321,7 @@ rte_swx_table_learner_free(void *table)
 	env_free(t, t->params.total_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_timeout_update, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_timeout_update, 22.07);
 int
 rte_swx_table_learner_timeout_update(void *table,
 				     uint32_t key_timeout_id,
@@ -359,14 +359,14 @@ struct mailbox {
 	int state;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_mailbox_size_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_mailbox_size_get, 21.11);
 uint64_t
 rte_swx_table_learner_mailbox_size_get(void)
 {
 	return sizeof(struct mailbox);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_lookup, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_lookup, 21.11);
 int
 rte_swx_table_learner_lookup(void *table,
 			     void *mailbox,
@@ -453,7 +453,7 @@ rte_swx_table_learner_lookup(void *table,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm, 22.07);
 void
 rte_swx_table_learner_rearm(void *table,
 			    void *mailbox,
@@ -477,7 +477,7 @@ rte_swx_table_learner_rearm(void *table,
 	b->time[bucket_key_pos] = (input_time + key_timeout) >> 32;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm_new, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm_new, 22.07);
 void
 rte_swx_table_learner_rearm_new(void *table,
 				void *mailbox,
@@ -502,7 +502,7 @@ rte_swx_table_learner_rearm_new(void *table,
 	b->key_timeout_id[bucket_key_pos] = (uint8_t)key_timeout_id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_add, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_add, 21.11);
 uint32_t
 rte_swx_table_learner_add(void *table,
 			  void *mailbox,
@@ -579,7 +579,7 @@ rte_swx_table_learner_add(void *table,
 	return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_delete, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_delete, 21.11);
 void
 rte_swx_table_learner_delete(void *table __rte_unused,
 			     void *mailbox)
diff --git a/lib/table/rte_swx_table_selector.c b/lib/table/rte_swx_table_selector.c
index d42f67f157..060ee4a4b6 100644
--- a/lib/table/rte_swx_table_selector.c
+++ b/lib/table/rte_swx_table_selector.c
@@ -171,7 +171,7 @@ struct table {
 	uint32_t n_members_per_group_max_log2;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_footprint_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_footprint_get, 21.08);
 uint64_t
 rte_swx_table_selector_footprint_get(uint32_t n_groups_max, uint32_t n_members_per_group_max)
 {
@@ -184,7 +184,7 @@ rte_swx_table_selector_footprint_get(uint32_t n_groups_max, uint32_t n_members_p
 	return sizeof(struct table) + group_table_size + members_size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_free, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_free, 21.08);
 void
 rte_swx_table_selector_free(void *table)
 {
@@ -262,7 +262,7 @@ group_set(struct table *t,
 	  uint32_t group_id,
 	  struct rte_swx_table_selector_group *group);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_create, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_create, 21.08);
 void *
 rte_swx_table_selector_create(struct rte_swx_table_selector_params *params,
 			      struct rte_swx_table_selector_group **groups,
@@ -532,7 +532,7 @@ group_set(struct table *t,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_group_set, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_group_set, 21.08);
 int
 rte_swx_table_selector_group_set(void *table,
 				 uint32_t group_id,
@@ -547,14 +547,14 @@ struct mailbox {
 
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_mailbox_size_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_mailbox_size_get, 21.08);
 uint64_t
 rte_swx_table_selector_mailbox_size_get(void)
 {
 	return sizeof(struct mailbox);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_select, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_select, 21.08);
 int
 rte_swx_table_selector_select(void *table,
 			      void *mailbox __rte_unused,
diff --git a/lib/table/rte_swx_table_wm.c b/lib/table/rte_swx_table_wm.c
index c57738dda3..1b7fa514f5 100644
--- a/lib/table/rte_swx_table_wm.c
+++ b/lib/table/rte_swx_table_wm.c
@@ -458,7 +458,7 @@ table_lookup(void *table,
 	return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_wildcard_match_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_wildcard_match_ops, 21.05);
 struct rte_swx_table_ops rte_swx_table_wildcard_match_ops = {
 	.footprint_get = NULL,
 	.mailbox_size_get = table_mailbox_size_get,
diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c
index 74fa0145d8..24601a35ca 100644
--- a/lib/table/rte_table_acl.c
+++ b/lib/table/rte_table_acl.c
@@ -782,7 +782,7 @@ rte_table_acl_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_acl_ops)
+RTE_EXPORT_SYMBOL(rte_table_acl_ops);
 struct rte_table_ops rte_table_acl_ops = {
 	.f_create = rte_table_acl_create,
 	.f_free = rte_table_acl_free,
diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c
index 55356e5999..08646bc103 100644
--- a/lib/table/rte_table_array.c
+++ b/lib/table/rte_table_array.c
@@ -197,7 +197,7 @@ rte_table_array_stats_read(void *table, struct rte_table_stats *stats, int clear
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_array_ops)
+RTE_EXPORT_SYMBOL(rte_table_array_ops);
 struct rte_table_ops rte_table_array_ops = {
 	.f_create = rte_table_array_create,
 	.f_free = rte_table_array_free,
diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c
index a2b920fa92..5b55754cbe 100644
--- a/lib/table/rte_table_hash_cuckoo.c
+++ b/lib/table/rte_table_hash_cuckoo.c
@@ -314,7 +314,7 @@ rte_table_hash_cuckoo_stats_read(void *table, struct rte_table_stats *stats,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_cuckoo_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_cuckoo_ops);
 struct rte_table_ops rte_table_hash_cuckoo_ops = {
 	.f_create = rte_table_hash_cuckoo_create,
 	.f_free = rte_table_hash_cuckoo_free,
diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c
index 86e8eeb4c8..6c220ad971 100644
--- a/lib/table/rte_table_hash_ext.c
+++ b/lib/table/rte_table_hash_ext.c
@@ -998,7 +998,7 @@ rte_table_hash_ext_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_ext_ops);
 struct rte_table_ops rte_table_hash_ext_ops	 = {
 	.f_create = rte_table_hash_ext_create,
 	.f_free = rte_table_hash_ext_free,
diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c
index da24a7985d..e05d7bf99a 100644
--- a/lib/table/rte_table_hash_key16.c
+++ b/lib/table/rte_table_hash_key16.c
@@ -1167,7 +1167,7 @@ rte_table_hash_key16_stats_read(void *table, struct rte_table_stats *stats, int
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key16_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key16_lru_ops);
 struct rte_table_ops rte_table_hash_key16_lru_ops = {
 	.f_create = rte_table_hash_create_key16_lru,
 	.f_free = rte_table_hash_free_key16_lru,
@@ -1179,7 +1179,7 @@ struct rte_table_ops rte_table_hash_key16_lru_ops = {
 	.f_stats = rte_table_hash_key16_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key16_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key16_ext_ops);
 struct rte_table_ops rte_table_hash_key16_ext_ops = {
 	.f_create = rte_table_hash_create_key16_ext,
 	.f_free = rte_table_hash_free_key16_ext,
diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c
index 297931a2a5..c2200c09b0 100644
--- a/lib/table/rte_table_hash_key32.c
+++ b/lib/table/rte_table_hash_key32.c
@@ -1200,7 +1200,7 @@ rte_table_hash_key32_stats_read(void *table, struct rte_table_stats *stats, int
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key32_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key32_lru_ops);
 struct rte_table_ops rte_table_hash_key32_lru_ops = {
 	.f_create = rte_table_hash_create_key32_lru,
 	.f_free = rte_table_hash_free_key32_lru,
@@ -1212,7 +1212,7 @@ struct rte_table_ops rte_table_hash_key32_lru_ops = {
 	.f_stats = rte_table_hash_key32_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key32_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key32_ext_ops);
 struct rte_table_ops rte_table_hash_key32_ext_ops = {
 	.f_create = rte_table_hash_create_key32_ext,
 	.f_free = rte_table_hash_free_key32_ext,
diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c
index 746863082f..08d3e53743 100644
--- a/lib/table/rte_table_hash_key8.c
+++ b/lib/table/rte_table_hash_key8.c
@@ -1134,7 +1134,7 @@ rte_table_hash_key8_stats_read(void *table, struct rte_table_stats *stats, int c
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key8_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key8_lru_ops);
 struct rte_table_ops rte_table_hash_key8_lru_ops = {
 	.f_create = rte_table_hash_create_key8_lru,
 	.f_free = rte_table_hash_free_key8_lru,
@@ -1146,7 +1146,7 @@ struct rte_table_ops rte_table_hash_key8_lru_ops = {
 	.f_stats = rte_table_hash_key8_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key8_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key8_ext_ops);
 struct rte_table_ops rte_table_hash_key8_ext_ops = {
 	.f_create = rte_table_hash_create_key8_ext,
 	.f_free = rte_table_hash_free_key8_ext,
diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c
index 548f5eebf2..d6cd928a96 100644
--- a/lib/table/rte_table_hash_lru.c
+++ b/lib/table/rte_table_hash_lru.c
@@ -946,7 +946,7 @@ rte_table_hash_lru_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_lru_ops);
 struct rte_table_ops rte_table_hash_lru_ops = {
 	.f_create = rte_table_hash_lru_create,
 	.f_free = rte_table_hash_lru_free,
diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c
index 6fd0c30f85..3afa1b4c95 100644
--- a/lib/table/rte_table_lpm.c
+++ b/lib/table/rte_table_lpm.c
@@ -356,7 +356,7 @@ rte_table_lpm_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_lpm_ops)
+RTE_EXPORT_SYMBOL(rte_table_lpm_ops);
 struct rte_table_ops rte_table_lpm_ops = {
 	.f_create = rte_table_lpm_create,
 	.f_free = rte_table_lpm_free,
diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c
index 9159784dfa..a81195e88b 100644
--- a/lib/table/rte_table_lpm_ipv6.c
+++ b/lib/table/rte_table_lpm_ipv6.c
@@ -357,7 +357,7 @@ rte_table_lpm_ipv6_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_lpm_ipv6_ops)
+RTE_EXPORT_SYMBOL(rte_table_lpm_ipv6_ops);
 struct rte_table_ops rte_table_lpm_ipv6_ops = {
 	.f_create = rte_table_lpm_ipv6_create,
 	.f_free = rte_table_lpm_ipv6_free,
diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c
index 3d2ac55c49..2d70e0761f 100644
--- a/lib/table/rte_table_stub.c
+++ b/lib/table/rte_table_stub.c
@@ -82,7 +82,7 @@ rte_table_stub_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_stub_ops)
+RTE_EXPORT_SYMBOL(rte_table_stub_ops);
 struct rte_table_ops rte_table_stub_ops = {
 	.f_create = rte_table_stub_create,
 	.f_free = NULL,
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c
index 1cbbffbf3f..d40057e197 100644
--- a/lib/telemetry/telemetry.c
+++ b/lib/telemetry/telemetry.c
@@ -115,14 +115,14 @@ register_cmd(const char *cmd, const char *help,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_telemetry_register_cmd)
+RTE_EXPORT_SYMBOL(rte_telemetry_register_cmd);
 int
 rte_telemetry_register_cmd(const char *cmd, telemetry_cb fn, const char *help)
 {
 	return register_cmd(cmd, help, fn, NULL, NULL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_telemetry_register_cmd_arg, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_telemetry_register_cmd_arg, 24.11);
 int
 rte_telemetry_register_cmd_arg(const char *cmd, telemetry_arg_cb fn, void *arg, const char *help)
 {
@@ -655,7 +655,7 @@ telemetry_v2_init(void)
 
 #endif /* !RTE_EXEC_ENV_WINDOWS */
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_init);
 int32_t
 rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_t *cpuset)
 {
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index c120600622..fb014fe389 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -17,7 +17,7 @@
 
 #define RTE_TEL_UINT_HEX_STR_BUF_LEN 64
 
-RTE_EXPORT_SYMBOL(rte_tel_data_start_array)
+RTE_EXPORT_SYMBOL(rte_tel_data_start_array);
 int
 rte_tel_data_start_array(struct rte_tel_data *d, enum rte_tel_value_type type)
 {
@@ -32,7 +32,7 @@ rte_tel_data_start_array(struct rte_tel_data *d, enum rte_tel_value_type type)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_start_dict)
+RTE_EXPORT_SYMBOL(rte_tel_data_start_dict);
 int
 rte_tel_data_start_dict(struct rte_tel_data *d)
 {
@@ -41,7 +41,7 @@ rte_tel_data_start_dict(struct rte_tel_data *d)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_string);
 int
 rte_tel_data_string(struct rte_tel_data *d, const char *str)
 {
@@ -54,7 +54,7 @@ rte_tel_data_string(struct rte_tel_data *d, const char *str)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_string);
 int
 rte_tel_data_add_array_string(struct rte_tel_data *d, const char *str)
 {
@@ -67,7 +67,7 @@ rte_tel_data_add_array_string(struct rte_tel_data *d, const char *str)
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_int)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_int);
 int
 rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
 {
@@ -79,7 +79,7 @@ rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_uint)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_uint);
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
 {
@@ -91,14 +91,14 @@ rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_u64)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_u64);
 int
 rte_tel_data_add_array_u64(struct rte_tel_data *d, uint64_t x)
 {
 	return rte_tel_data_add_array_uint(d, x);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_container)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_container);
 int
 rte_tel_data_add_array_container(struct rte_tel_data *d,
 		struct rte_tel_data *val, int keep)
@@ -131,7 +131,7 @@ rte_tel_uint_to_hex_encoded_str(char *buf, size_t buf_len, uint64_t val,
 	return len < (int)buf_len ? 0 : -EINVAL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_array_uint_hex, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_array_uint_hex, 23.03);
 int
 rte_tel_data_add_array_uint_hex(struct rte_tel_data *d, uint64_t val,
 				uint8_t display_bitwidth)
@@ -162,7 +162,7 @@ valid_name(const char *name)
 	return true;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_string);
 int
 rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name,
 		const char *val)
@@ -188,7 +188,7 @@ rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_int)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_int);
 int
 rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
 {
@@ -208,7 +208,7 @@ rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_uint)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_uint);
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
 		const char *name, uint64_t val)
@@ -229,14 +229,14 @@ rte_tel_data_add_dict_uint(struct rte_tel_data *d,
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_u64)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_u64);
 int
 rte_tel_data_add_dict_u64(struct rte_tel_data *d, const char *name, uint64_t val)
 {
 	return rte_tel_data_add_dict_uint(d, name, val);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_container)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_container);
 int
 rte_tel_data_add_dict_container(struct rte_tel_data *d, const char *name,
 		struct rte_tel_data *val, int keep)
@@ -262,7 +262,7 @@ rte_tel_data_add_dict_container(struct rte_tel_data *d, const char *name,
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_dict_uint_hex, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_dict_uint_hex, 23.03);
 int
 rte_tel_data_add_dict_uint_hex(struct rte_tel_data *d, const char *name,
 			       uint64_t val, uint8_t display_bitwidth)
@@ -279,14 +279,14 @@ rte_tel_data_add_dict_uint_hex(struct rte_tel_data *d, const char *name,
 	return rte_tel_data_add_dict_string(d, name, hex_str);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_alloc)
+RTE_EXPORT_SYMBOL(rte_tel_data_alloc);
 struct rte_tel_data *
 rte_tel_data_alloc(void)
 {
 	return malloc(sizeof(struct rte_tel_data));
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_free)
+RTE_EXPORT_SYMBOL(rte_tel_data_free);
 void
 rte_tel_data_free(struct rte_tel_data *data)
 {
diff --git a/lib/telemetry/telemetry_legacy.c b/lib/telemetry/telemetry_legacy.c
index 89ec750c09..f832bd9ac5 100644
--- a/lib/telemetry/telemetry_legacy.c
+++ b/lib/telemetry/telemetry_legacy.c
@@ -53,7 +53,7 @@ struct json_command callbacks[TELEMETRY_LEGACY_MAX_CALLBACKS] = {
 int num_legacy_callbacks = 1;
 static rte_spinlock_t callback_sl = RTE_SPINLOCK_INITIALIZER;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_legacy_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_legacy_register);
 int
 rte_telemetry_legacy_register(const char *cmd,
 		enum rte_telemetry_legacy_data_req data_req,
diff --git a/lib/timer/rte_timer.c b/lib/timer/rte_timer.c
index b349c2abbc..f76079e8ce 100644
--- a/lib/timer/rte_timer.c
+++ b/lib/timer/rte_timer.c
@@ -85,7 +85,7 @@ timer_data_valid(uint32_t id)
 	timer_data = &rte_timer_data_arr[id];				\
 } while (0)
 
-RTE_EXPORT_SYMBOL(rte_timer_data_alloc)
+RTE_EXPORT_SYMBOL(rte_timer_data_alloc);
 int
 rte_timer_data_alloc(uint32_t *id_ptr)
 {
@@ -110,7 +110,7 @@ rte_timer_data_alloc(uint32_t *id_ptr)
 	return -ENOSPC;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_data_dealloc)
+RTE_EXPORT_SYMBOL(rte_timer_data_dealloc);
 int
 rte_timer_data_dealloc(uint32_t id)
 {
@@ -128,7 +128,7 @@ rte_timer_data_dealloc(uint32_t id)
  * secondary processes should be empty, the zeroth entry can be shared by
  * multiple processes.
  */
-RTE_EXPORT_SYMBOL(rte_timer_subsystem_init)
+RTE_EXPORT_SYMBOL(rte_timer_subsystem_init);
 int
 rte_timer_subsystem_init(void)
 {
@@ -188,7 +188,7 @@ rte_timer_subsystem_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_subsystem_finalize)
+RTE_EXPORT_SYMBOL(rte_timer_subsystem_finalize);
 void
 rte_timer_subsystem_finalize(void)
 {
@@ -208,7 +208,7 @@ rte_timer_subsystem_finalize(void)
 }
 
 /* Initialize the timer handle tim for use */
-RTE_EXPORT_SYMBOL(rte_timer_init)
+RTE_EXPORT_SYMBOL(rte_timer_init);
 void
 rte_timer_init(struct rte_timer *tim)
 {
@@ -545,7 +545,7 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
 }
 
 /* Reset and start the timer associated with the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_reset)
+RTE_EXPORT_SYMBOL(rte_timer_reset);
 int
 rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
 		      enum rte_timer_type type, unsigned int tim_lcore,
@@ -555,7 +555,7 @@ rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
 				   tim_lcore, fct, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_reset)
+RTE_EXPORT_SYMBOL(rte_timer_alt_reset);
 int
 rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
 		    uint64_t ticks, enum rte_timer_type type,
@@ -577,7 +577,7 @@ rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
 }
 
 /* loop until rte_timer_reset() succeed */
-RTE_EXPORT_SYMBOL(rte_timer_reset_sync)
+RTE_EXPORT_SYMBOL(rte_timer_reset_sync);
 void
 rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
 		     enum rte_timer_type type, unsigned tim_lcore,
@@ -627,14 +627,14 @@ __rte_timer_stop(struct rte_timer *tim,
 }
 
 /* Stop the timer associated with the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_stop)
+RTE_EXPORT_SYMBOL(rte_timer_stop);
 int
 rte_timer_stop(struct rte_timer *tim)
 {
 	return rte_timer_alt_stop(default_data_id, tim);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_stop)
+RTE_EXPORT_SYMBOL(rte_timer_alt_stop);
 int
 rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 {
@@ -646,7 +646,7 @@ rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 }
 
 /* loop until rte_timer_stop() succeed */
-RTE_EXPORT_SYMBOL(rte_timer_stop_sync)
+RTE_EXPORT_SYMBOL(rte_timer_stop_sync);
 void
 rte_timer_stop_sync(struct rte_timer *tim)
 {
@@ -655,7 +655,7 @@ rte_timer_stop_sync(struct rte_timer *tim)
 }
 
 /* Test the PENDING status of the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_pending)
+RTE_EXPORT_SYMBOL(rte_timer_pending);
 int
 rte_timer_pending(struct rte_timer *tim)
 {
@@ -790,7 +790,7 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
 	priv_timer[lcore_id].running_tim = NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_manage)
+RTE_EXPORT_SYMBOL(rte_timer_manage);
 int
 rte_timer_manage(void)
 {
@@ -803,7 +803,7 @@ rte_timer_manage(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_manage)
+RTE_EXPORT_SYMBOL(rte_timer_alt_manage);
 int
 rte_timer_alt_manage(uint32_t timer_data_id,
 		     unsigned int *poll_lcores,
@@ -985,7 +985,7 @@ rte_timer_alt_manage(uint32_t timer_data_id,
 }
 
 /* Walk pending lists, stopping timers and calling user-specified function */
-RTE_EXPORT_SYMBOL(rte_timer_stop_all)
+RTE_EXPORT_SYMBOL(rte_timer_stop_all);
 int
 rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
 		   int nb_walk_lcores,
@@ -1018,7 +1018,7 @@ rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_next_ticks)
+RTE_EXPORT_SYMBOL(rte_timer_next_ticks);
 int64_t
 rte_timer_next_ticks(void)
 {
@@ -1072,14 +1072,14 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_dump_stats)
+RTE_EXPORT_SYMBOL(rte_timer_dump_stats);
 int
 rte_timer_dump_stats(FILE *f)
 {
 	return rte_timer_alt_dump_stats(default_data_id, f);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_dump_stats)
+RTE_EXPORT_SYMBOL(rte_timer_alt_dump_stats);
 int
 rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
 {
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 9b4f332f94..1111ecbe0b 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -572,7 +572,7 @@ find_vhost_user_socket(const char *path)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_attach_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_attach_vdpa_device);
 int
 rte_vhost_driver_attach_vdpa_device(const char *path,
 		struct rte_vdpa_device *dev)
@@ -591,7 +591,7 @@ rte_vhost_driver_attach_vdpa_device(const char *path,
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_detach_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_detach_vdpa_device);
 int
 rte_vhost_driver_detach_vdpa_device(const char *path)
 {
@@ -606,7 +606,7 @@ rte_vhost_driver_detach_vdpa_device(const char *path)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_device);
 struct rte_vdpa_device *
 rte_vhost_driver_get_vdpa_device(const char *path)
 {
@@ -622,7 +622,7 @@ rte_vhost_driver_get_vdpa_device(const char *path)
 	return dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_dev_type)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_dev_type);
 int
 rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type)
 {
@@ -651,7 +651,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_disable_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_disable_features);
 int
 rte_vhost_driver_disable_features(const char *path, uint64_t features)
 {
@@ -672,7 +672,7 @@ rte_vhost_driver_disable_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_enable_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_enable_features);
 int
 rte_vhost_driver_enable_features(const char *path, uint64_t features)
 {
@@ -696,7 +696,7 @@ rte_vhost_driver_enable_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_features);
 int
 rte_vhost_driver_set_features(const char *path, uint64_t features)
 {
@@ -718,7 +718,7 @@ rte_vhost_driver_set_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_features);
 int
 rte_vhost_driver_get_features(const char *path, uint64_t *features)
 {
@@ -754,7 +754,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_protocol_features);
 int
 rte_vhost_driver_set_protocol_features(const char *path,
 		uint64_t protocol_features)
@@ -769,7 +769,7 @@ rte_vhost_driver_set_protocol_features(const char *path,
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_protocol_features);
 int
 rte_vhost_driver_get_protocol_features(const char *path,
 		uint64_t *protocol_features)
@@ -808,7 +808,7 @@ rte_vhost_driver_get_protocol_features(const char *path,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_queue_num)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_queue_num);
 int
 rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num)
 {
@@ -844,7 +844,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_max_queue_num)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_max_queue_num);
 int
 rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
 {
@@ -902,7 +902,7 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
  * (the default case), or client (when RTE_VHOST_USER_CLIENT) flag
  * is set.
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_register)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_register);
 int
 rte_vhost_driver_register(const char *path, uint64_t flags)
 {
@@ -1068,7 +1068,7 @@ vhost_user_remove_reconnect(struct vhost_user_socket *vsocket)
 /**
  * Unregister the specified vhost socket
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_unregister)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_unregister);
 int
 rte_vhost_driver_unregister(const char *path)
 {
@@ -1152,7 +1152,7 @@ rte_vhost_driver_unregister(const char *path)
 /*
  * Register ops so that we can add/remove device to data core.
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_callback_register)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_callback_register);
 int
 rte_vhost_driver_callback_register(const char *path,
 	struct rte_vhost_device_ops const * const ops)
@@ -1180,7 +1180,7 @@ vhost_driver_callback_get(const char *path)
 	return vsocket ? vsocket->notify_ops : NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_start)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_start);
 int
 rte_vhost_driver_start(const char *path)
 {
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index bc2dd8d2e1..2ddcc49a35 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -50,7 +50,7 @@ __vdpa_find_device_by_name(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_find_device_by_name)
+RTE_EXPORT_SYMBOL(rte_vdpa_find_device_by_name);
 struct rte_vdpa_device *
 rte_vdpa_find_device_by_name(const char *name)
 {
@@ -63,7 +63,7 @@ rte_vdpa_find_device_by_name(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_rte_device)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_rte_device);
 struct rte_device *
 rte_vdpa_get_rte_device(struct rte_vdpa_device *vdpa_dev)
 {
@@ -73,7 +73,7 @@ rte_vdpa_get_rte_device(struct rte_vdpa_device *vdpa_dev)
 	return vdpa_dev->device;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_register_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_register_device);
 struct rte_vdpa_device *
 rte_vdpa_register_device(struct rte_device *rte_dev,
 		struct rte_vdpa_dev_ops *ops)
@@ -129,7 +129,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_unregister_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_unregister_device);
 int
 rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 {
@@ -151,7 +151,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_relay_vring_used)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_relay_vring_used);
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
 {
@@ -263,7 +263,7 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_queue_num)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_queue_num);
 int
 rte_vdpa_get_queue_num(struct rte_vdpa_device *dev, uint32_t *queue_num)
 {
@@ -273,7 +273,7 @@ rte_vdpa_get_queue_num(struct rte_vdpa_device *dev, uint32_t *queue_num)
 	return dev->ops->get_queue_num(dev, queue_num);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_features)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_features);
 int
 rte_vdpa_get_features(struct rte_vdpa_device *dev, uint64_t *features)
 {
@@ -283,7 +283,7 @@ rte_vdpa_get_features(struct rte_vdpa_device *dev, uint64_t *features)
 	return dev->ops->get_features(dev, features);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_protocol_features);
 int
 rte_vdpa_get_protocol_features(struct rte_vdpa_device *dev, uint64_t *features)
 {
@@ -294,7 +294,7 @@ rte_vdpa_get_protocol_features(struct rte_vdpa_device *dev, uint64_t *features)
 	return dev->ops->get_protocol_features(dev, features);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_stats_names)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_stats_names);
 int
 rte_vdpa_get_stats_names(struct rte_vdpa_device *dev,
 		struct rte_vdpa_stat_name *stats_names,
@@ -309,7 +309,7 @@ rte_vdpa_get_stats_names(struct rte_vdpa_device *dev,
 	return dev->ops->get_stats_names(dev, stats_names, size);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_stats)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_stats);
 int
 rte_vdpa_get_stats(struct rte_vdpa_device *dev, uint16_t qid,
 		struct rte_vdpa_stat *stats, unsigned int n)
@@ -323,7 +323,7 @@ rte_vdpa_get_stats(struct rte_vdpa_device *dev, uint16_t qid,
 	return dev->ops->get_stats(dev, qid, stats, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_reset_stats)
+RTE_EXPORT_SYMBOL(rte_vdpa_reset_stats);
 int
 rte_vdpa_reset_stats(struct rte_vdpa_device *dev, uint16_t qid)
 {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index a2e3e2635d..a928abbe99 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -861,7 +861,7 @@ vhost_enable_linearbuf(int vid)
 	dev->linearbuf = 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_mtu)
+RTE_EXPORT_SYMBOL(rte_vhost_get_mtu);
 int
 rte_vhost_get_mtu(int vid, uint16_t *mtu)
 {
@@ -881,7 +881,7 @@ rte_vhost_get_mtu(int vid, uint16_t *mtu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_numa_node)
+RTE_EXPORT_SYMBOL(rte_vhost_get_numa_node);
 int
 rte_vhost_get_numa_node(int vid)
 {
@@ -908,7 +908,7 @@ rte_vhost_get_numa_node(int vid)
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_num)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_num);
 uint16_t
 rte_vhost_get_vring_num(int vid)
 {
@@ -920,7 +920,7 @@ rte_vhost_get_vring_num(int vid)
 	return dev->nr_vring;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_ifname)
+RTE_EXPORT_SYMBOL(rte_vhost_get_ifname);
 int
 rte_vhost_get_ifname(int vid, char *buf, size_t len)
 {
@@ -937,7 +937,7 @@ rte_vhost_get_ifname(int vid, char *buf, size_t len)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_features)
+RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_features);
 int
 rte_vhost_get_negotiated_features(int vid, uint64_t *features)
 {
@@ -951,7 +951,7 @@ rte_vhost_get_negotiated_features(int vid, uint64_t *features)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_protocol_features);
 int
 rte_vhost_get_negotiated_protocol_features(int vid,
 					   uint64_t *protocol_features)
@@ -966,7 +966,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_mem_table)
+RTE_EXPORT_SYMBOL(rte_vhost_get_mem_table);
 int
 rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem)
 {
@@ -990,7 +990,7 @@ rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_vring)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_vring);
 int
 rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
 			  struct rte_vhost_vring *vring)
@@ -1027,7 +1027,7 @@ rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_ring_inflight)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_ring_inflight);
 int
 rte_vhost_get_vhost_ring_inflight(int vid, uint16_t vring_idx,
 				  struct rte_vhost_ring_inflight *vring)
@@ -1063,7 +1063,7 @@ rte_vhost_get_vhost_ring_inflight(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_split)
+RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_split);
 int
 rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
 				  uint16_t idx)
@@ -1100,7 +1100,7 @@ rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_packed);
 int
 rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
 				   uint16_t head, uint16_t last,
@@ -1169,7 +1169,7 @@ rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_split)
+RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_split);
 int
 rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
 				  uint16_t last_used_idx, uint16_t idx)
@@ -1211,7 +1211,7 @@ rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_packed);
 int
 rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
 				   uint16_t head)
@@ -1258,7 +1258,7 @@ rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_split)
+RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_split);
 int
 rte_vhost_set_last_inflight_io_split(int vid, uint16_t vring_idx,
 				     uint16_t idx)
@@ -1294,7 +1294,7 @@ rte_vhost_set_last_inflight_io_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_packed);
 int
 rte_vhost_set_last_inflight_io_packed(int vid, uint16_t vring_idx,
 				      uint16_t head)
@@ -1345,7 +1345,7 @@ rte_vhost_set_last_inflight_io_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_call)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_call);
 int
 rte_vhost_vring_call(int vid, uint16_t vring_idx)
 {
@@ -1382,7 +1382,7 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_call_nonblock)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_call_nonblock);
 int
 rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
 {
@@ -1420,7 +1420,7 @@ rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_avail_entries)
+RTE_EXPORT_SYMBOL(rte_vhost_avail_entries);
 uint16_t
 rte_vhost_avail_entries(int vid, uint16_t queue_id)
 {
@@ -1517,7 +1517,7 @@ vhost_enable_guest_notification(struct virtio_net *dev,
 		return vhost_enable_notify_split(dev, vq, enable);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_enable_guest_notification)
+RTE_EXPORT_SYMBOL(rte_vhost_enable_guest_notification);
 int
 rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
@@ -1551,7 +1551,7 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_notify_guest, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_notify_guest, 23.07);
 void
 rte_vhost_notify_guest(int vid, uint16_t queue_id)
 {
@@ -1588,7 +1588,7 @@ rte_vhost_notify_guest(int vid, uint16_t queue_id)
 	rte_rwlock_read_unlock(&vq->access_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_log_write)
+RTE_EXPORT_SYMBOL(rte_vhost_log_write);
 void
 rte_vhost_log_write(int vid, uint64_t addr, uint64_t len)
 {
@@ -1600,7 +1600,7 @@ rte_vhost_log_write(int vid, uint64_t addr, uint64_t len)
 	vhost_log_write(dev, addr, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_log_used_vring)
+RTE_EXPORT_SYMBOL(rte_vhost_log_used_vring);
 void
 rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
 			 uint64_t offset, uint64_t len)
@@ -1621,7 +1621,7 @@ rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
 	vhost_log_used_vring(dev, vq, offset, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_rx_queue_count)
+RTE_EXPORT_SYMBOL(rte_vhost_rx_queue_count);
 uint32_t
 rte_vhost_rx_queue_count(int vid, uint16_t qid)
 {
@@ -1659,7 +1659,7 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vdpa_device);
 struct rte_vdpa_device *
 rte_vhost_get_vdpa_device(int vid)
 {
@@ -1671,7 +1671,7 @@ rte_vhost_get_vdpa_device(int vid)
 	return dev->vdpa_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_log_base)
+RTE_EXPORT_SYMBOL(rte_vhost_get_log_base);
 int
 rte_vhost_get_log_base(int vid, uint64_t *log_base,
 		uint64_t *log_size)
@@ -1687,7 +1687,7 @@ rte_vhost_get_log_base(int vid, uint64_t *log_base,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base);
 int
 rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 		uint16_t *last_avail_idx, uint16_t *last_used_idx)
@@ -1718,7 +1718,7 @@ rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_vring_base)
+RTE_EXPORT_SYMBOL(rte_vhost_set_vring_base);
 int
 rte_vhost_set_vring_base(int vid, uint16_t queue_id,
 		uint16_t last_avail_idx, uint16_t last_used_idx)
@@ -1751,7 +1751,7 @@ rte_vhost_set_vring_base(int vid, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base_from_inflight)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base_from_inflight);
 int
 rte_vhost_get_vring_base_from_inflight(int vid,
 				       uint16_t queue_id,
@@ -1786,7 +1786,7 @@ rte_vhost_get_vring_base_from_inflight(int vid,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_extern_callback_register)
+RTE_EXPORT_SYMBOL(rte_vhost_extern_callback_register);
 int
 rte_vhost_extern_callback_register(int vid,
 		struct rte_vhost_user_extern_ops const * const ops, void *ctx)
@@ -1874,7 +1874,7 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register, 20.08);
 int
 rte_vhost_async_channel_register(int vid, uint16_t queue_id)
 {
@@ -1908,7 +1908,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register_thread_unsafe, 21.08);
 int
 rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -1931,7 +1931,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
 	return async_channel_register(dev, vq);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister, 20.08);
 int
 rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
 {
@@ -1978,7 +1978,7 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister_thread_unsafe, 21.08);
 int
 rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -2013,7 +2013,7 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_configure, 22.03);
 int
 rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 {
@@ -2090,7 +2090,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight, 21.08);
 int
 rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 {
@@ -2129,7 +2129,7 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight_thread_unsafe, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight_thread_unsafe, 22.07);
 int
 rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -2158,7 +2158,7 @@ rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_monitor_addr)
+RTE_EXPORT_SYMBOL(rte_vhost_get_monitor_addr);
 int
 rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 		struct rte_vhost_power_monitor_cond *pmc)
@@ -2209,7 +2209,7 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get_names)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get_names);
 int
 rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
 		struct rte_vhost_stat_name *name, unsigned int size)
@@ -2237,7 +2237,7 @@ rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
 	return VHOST_NB_VQ_STATS;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get);
 int
 rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
 		struct rte_vhost_stat *stats, unsigned int n)
@@ -2284,7 +2284,7 @@ rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_reset)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_reset);
 int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -2320,7 +2320,7 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_unconfigure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_unconfigure, 22.11);
 int
 rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
 {
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 648e2d731b..ed5b164846 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -1782,7 +1782,7 @@ vhost_crypto_complete_one_vm_requests(struct rte_crypto_op **ops,
 	return processed;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_driver_start)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_driver_start);
 int
 rte_vhost_crypto_driver_start(const char *path)
 {
@@ -1804,7 +1804,7 @@ rte_vhost_crypto_driver_start(const char *path)
 	return rte_vhost_driver_start(path);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_create)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_create);
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
@@ -1888,7 +1888,7 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_free)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_free);
 int
 rte_vhost_crypto_free(int vid)
 {
@@ -1918,7 +1918,7 @@ rte_vhost_crypto_free(int vid)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_set_zero_copy)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_set_zero_copy);
 int
 rte_vhost_crypto_set_zero_copy(int vid, enum rte_vhost_crypto_zero_copy option)
 {
@@ -1974,7 +1974,7 @@ rte_vhost_crypto_set_zero_copy(int vid, enum rte_vhost_crypto_zero_copy option)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_fetch_requests)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_fetch_requests);
 uint16_t
 rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
@@ -2104,7 +2104,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_finalize_requests)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_finalize_requests);
 uint16_t
 rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops,
 		uint16_t nb_ops, int *callfds, uint16_t *nb_callfds)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index b73dec6a22..f5578df43e 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -3360,7 +3360,7 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_backend_config_change)
+RTE_EXPORT_SYMBOL(rte_vhost_backend_config_change);
 int
 rte_vhost_backend_config_change(int vid, bool need_reply)
 {
@@ -3423,7 +3423,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vhost_host_notifier_ctrl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vhost_host_notifier_ctrl);
 int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable)
 {
 	struct virtio_net *dev;
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 77545d0a4d..699bac781b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1740,7 +1740,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nb_tx;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_vhost_enqueue_burst);
 uint16_t
 rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf **__rte_restrict pkts, uint16_t count)
@@ -2342,7 +2342,7 @@ vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nr_cpl_pkts;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_poll_enqueue_completed, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_poll_enqueue_completed, 20.08);
 uint16_t
 rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -2398,7 +2398,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue_thread_unsafe, 21.08);
 uint16_t
 rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -2456,7 +2456,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue, 22.07);
 uint16_t
 rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
 		uint16_t count, int16_t dma_id, uint16_t vchan_id)
@@ -2572,7 +2572,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nb_tx;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_submit_enqueue_burst, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_submit_enqueue_burst, 20.08);
 uint16_t
 rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -3594,7 +3594,7 @@ virtio_dev_tx_packed_compliant(struct virtio_net *dev,
 	return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_vhost_dequeue_burst);
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -4204,7 +4204,7 @@ virtio_dev_tx_async_packed_compliant(struct virtio_net *dev, struct vhost_virtqu
 				pkts, count, dma_id, vchan_id, false);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_try_dequeue_burst, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_try_dequeue_burst, 22.07);
 uint16_t
 rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [PATCH] dpdk: support quick jump to API definition
@ 2025-08-28  2:59  1% Chengwen Feng
  2025-08-29  2:34  1% ` [PATCH v2 0/3] " Chengwen Feng
                   ` (3 more replies)
  0 siblings, 4 replies; 77+ results
From: Chengwen Feng @ 2025-08-28  2:59 UTC (permalink / raw)
  To: thomas, david.marchand; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL markers are placed right before the API
definitions, but they do not end with a semicolon. As a result, some
IDEs cannot identify the APIs and cannot quickly jump to their
definition.

This commit adds a semicolon to the end of the above
RTE_EXPORT_XXX_SYMBOL markers.

It also changes gen-version-map.py so that it only matches
RTE_EXPORT_XXX_SYMBOL markers that end with a semicolon.
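
For reference, a minimal before/after sketch of the change at one call
site, borrowing rte_tel_data_alloc() from lib/telemetry/telemetry_data.c
as it appears in the diff (the required headers are the ones that file
already includes); it only illustrates the trailing semicolon and does
not replace any hunk of this patch:

/* Before: without the trailing semicolon, some IDE indexers treat the
 * macro invocation and the function that follows as one statement and
 * do not record rte_tel_data_alloc() as a definition. */
RTE_EXPORT_SYMBOL(rte_tel_data_alloc)
struct rte_tel_data *
rte_tel_data_alloc(void)
{
	return malloc(sizeof(struct rte_tel_data));
}

/* After: the semicolon terminates the macro invocation, so the
 * function definition is indexed and "jump to definition" works. */
RTE_EXPORT_SYMBOL(rte_tel_data_alloc);
struct rte_tel_data *
rte_tel_data_alloc(void)
{
	return malloc(sizeof(struct rte_tel_data));
}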

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 buildtools/gen-version-map.py                 |    6 +-
 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_glue.c         |    2 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mlx5/windows/mlx5_glue.c       |    2 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/node_mbuf_dynfield.c                 |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 401 files changed, 4177 insertions(+), 4177 deletions(-)

diff --git a/buildtools/gen-version-map.py b/buildtools/gen-version-map.py
index 57e08a8c0f..fb7f7f2c59 100755
--- a/buildtools/gen-version-map.py
+++ b/buildtools/gen-version-map.py
@@ -9,10 +9,10 @@
 
 # From eal_export.h
 export_exp_sym_regexp = re.compile(
-    r"^RTE_EXPORT_EXPERIMENTAL_SYMBOL\(([^,]+), ([0-9]+.[0-9]+)\)"
+    r"^RTE_EXPORT_EXPERIMENTAL_SYMBOL\(([^,]+), ([0-9]+.[0-9]+)\);"
 )
-export_int_sym_regexp = re.compile(r"^RTE_EXPORT_INTERNAL_SYMBOL\(([^)]+)\)")
-export_sym_regexp = re.compile(r"^RTE_EXPORT_SYMBOL\(([^)]+)\)")
+export_int_sym_regexp = re.compile(r"^RTE_EXPORT_INTERNAL_SYMBOL\(([^)]+)\);")
+export_sym_regexp = re.compile(r"^RTE_EXPORT_SYMBOL\(([^)]+)\);")
 ver_sym_regexp = re.compile(r"^RTE_VERSION_SYMBOL\(([^,]+), [^,]+, ([^,]+),")
 ver_exp_sym_regexp = re.compile(r"^RTE_VERSION_EXPERIMENTAL_SYMBOL\([^,]+, ([^,]+),")
 default_sym_regexp = re.compile(r"^RTE_DEFAULT_SYMBOL\(([^,]+), [^,]+, ([^,]+),")
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index b7f02f56e1..7160a5dc96 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -4636,7 +4636,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acc_configure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acc_configure, 22.11);
 int
 rte_acc_configure(const char *dev_name, struct rte_acc_conf *conf)
 {
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 82cf98da5d..4bc6acfd9f 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -3367,7 +3367,7 @@ static int agx100_configure(const char *dev_name, const struct rte_fpga_5gnr_fec
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_5gnr_fec_configure, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_5gnr_fec_configure, 20.11);
 int rte_fpga_5gnr_fec_configure(const char *dev_name, const struct rte_fpga_5gnr_fec_conf *conf)
 {
 	struct rte_bbdev *bbdev = rte_bbdev_get_named_dev(dev_name);
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 4723a51dcf..73c98afd9a 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -2453,7 +2453,7 @@ set_default_fpga_conf(struct rte_fpga_lte_fec_conf *def_conf)
 }
 
 /* Initial configuration of FPGA LTE FEC device */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_lte_fec_configure, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fpga_lte_fec_configure, 20.11);
 int
 rte_fpga_lte_fec_configure(const char *dev_name,
 		const struct rte_fpga_lte_fec_conf *conf)
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index ac766e283e..15f4440061 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -261,7 +261,7 @@ auxiliary_parse(const char *name, void *addr)
 }
 
 /* Register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_register);
 void
 rte_auxiliary_register(struct rte_auxiliary_driver *driver)
 {
@@ -269,7 +269,7 @@ rte_auxiliary_register(struct rte_auxiliary_driver *driver)
 }
 
 /* Unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_auxiliary_unregister);
 void
 rte_auxiliary_unregister(struct rte_auxiliary_driver *driver)
 {
diff --git a/drivers/bus/cdx/cdx.c b/drivers/bus/cdx/cdx.c
index 729d54337c..d492e08931 100644
--- a/drivers/bus/cdx/cdx.c
+++ b/drivers/bus/cdx/cdx.c
@@ -140,13 +140,13 @@ cdx_get_kernel_driver_by_path(const char *filename, char *driver_name,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_map_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_map_device);
 int rte_cdx_map_device(struct rte_cdx_device *dev)
 {
 	return cdx_vfio_map_resource(dev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unmap_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unmap_device);
 void rte_cdx_unmap_device(struct rte_cdx_device *dev)
 {
 	cdx_vfio_unmap_resource(dev);
@@ -481,7 +481,7 @@ cdx_parse(const char *name, void *addr)
 }
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_register);
 void
 rte_cdx_register(struct rte_cdx_driver *driver)
 {
@@ -490,7 +490,7 @@ rte_cdx_register(struct rte_cdx_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_unregister);
 void
 rte_cdx_unregister(struct rte_cdx_driver *driver)
 {
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 37e0c424d4..ef7e33145d 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -551,7 +551,7 @@ cdx_vfio_map_resource(struct rte_cdx_device *dev)
 		return cdx_vfio_map_resource_secondary(dev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_enable);
 int
 rte_cdx_vfio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -586,7 +586,7 @@ rte_cdx_vfio_intr_enable(const struct rte_intr_handle *intr_handle)
 }
 
 /* disable MSI interrupts */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_intr_disable);
 int
 rte_cdx_vfio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -614,7 +614,7 @@ rte_cdx_vfio_intr_disable(const struct rte_intr_handle *intr_handle)
 }
 
 /* Enable Bus Mastering */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_enable);
 int
 rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 {
@@ -660,7 +660,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 }
 
 /* Disable Bus Mastering */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cdx_vfio_bm_disable);
 int
 rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
 {
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 5420733019..fd391dbb8e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -60,19 +60,19 @@ struct netcfg_info *dpaa_netcfg;
 /* define a variable to hold the portal_key, once created.*/
 static pthread_key_t dpaa_portal_key;
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_svr_family)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_svr_family);
 unsigned int dpaa_svr_family;
 
 #define FSL_DPAA_BUS_NAME	dpaa_bus
 
-RTE_EXPORT_INTERNAL_SYMBOL(per_lcore_dpaa_io)
+RTE_EXPORT_INTERNAL_SYMBOL(per_lcore_dpaa_io);
 RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io);
 
 #define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield"
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_seqn_dynfield_offset);
 int dpaa_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_eth_port_cfg)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_eth_port_cfg);
 struct fm_eth_port_cfg *
 dpaa_get_eth_port_cfg(int dev_id)
 {
@@ -320,7 +320,7 @@ dpaa_clean_device_list(void)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_init);
 int rte_dpaa_portal_init(void *arg)
 {
 	static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = {
@@ -399,7 +399,7 @@ int rte_dpaa_portal_init(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_init);
 int
 rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
 {
@@ -428,7 +428,7 @@ rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_close)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_portal_fq_close);
 int rte_dpaa_portal_fq_close(struct qman_fq *fq)
 {
 	return fsl_qman_fq_portal_destroy(fq->qp);
@@ -556,7 +556,7 @@ rte_dpaa_bus_scan(void)
 }
 
 /* register a dpaa bus based dpaa driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_register);
 void
 rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 {
@@ -568,7 +568,7 @@ rte_dpaa_driver_register(struct rte_dpaa_driver *driver)
 }
 
 /* un-register a dpaa bus based dpaa driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_driver_unregister);
 void
 rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver)
 {
diff --git a/drivers/bus/dpaa/dpaa_bus_base_symbols.c b/drivers/bus/dpaa/dpaa_bus_base_symbols.c
index 522cdca27e..d829d48381 100644
--- a/drivers/bus/dpaa/dpaa_bus_base_symbols.c
+++ b/drivers/bus/dpaa/dpaa_bus_base_symbols.c
@@ -5,96 +5,96 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(fman_ip_rev)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_hi)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_lo)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_mcast_filter_table)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_reset_mcast_filter_table)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_clear_mac_addr)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_add_mac_addr)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get_all)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_get_all)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_enable_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_disable_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_rx_status)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_bp)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_threshold)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_threshold)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_quanta)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_quanta)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fdoff)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_err_fqid)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_ic_params)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fdoff)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_maxfrm)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_maxfrm)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_sg_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_sg)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_discard_rx_errors)
-RTE_EXPORT_INTERNAL_SYMBOL(fman_if_receive_rx_errors)
-RTE_EXPORT_INTERNAL_SYMBOL(netcfg_acquire)
-RTE_EXPORT_INTERNAL_SYMBOL(netcfg_release)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_new_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_free_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_get_params)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_release)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_acquire)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_query_free_buffers)
-RTE_EXPORT_INTERNAL_SYMBOL(bman_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_fqid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_reserve_fqid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_pool_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_cgrid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_release_cgrid_range)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_ioctl_version_number)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_speed)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_restart_link_autoneg)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_set_fq_lookup_table)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_register_cb)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_poll_free)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_poll_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_clear_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dqrr_consume)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_static_dequeue_add)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_dca_index)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_create_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_fqid)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_state)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_init_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_retire_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_oos_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_np)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_frm_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_set_vdq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_volatile_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi_fq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_modify_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_create_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_delete_cgr)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_caam)
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_fd)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_thread_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(fsl_qman_fq_portal_create)
+RTE_EXPORT_INTERNAL_SYMBOL(fman_ip_rev);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_hi);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_dealloc_bufs_mask_lo);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_mcast_filter_table);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_reset_mcast_filter_table);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_clear_mac_addr);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_add_mac_addr);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_get_all);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_get_all);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_bmi_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_promiscuous_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_enable_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_disable_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_rx_status);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_loopback_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_bp);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_threshold);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_threshold);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fc_quanta);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fc_quanta);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_fdoff);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_err_fqid);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_ic_params);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_fdoff);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_maxfrm);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_maxfrm);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_get_sg_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_set_sg);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_discard_rx_errors);
+RTE_EXPORT_INTERNAL_SYMBOL(fman_if_receive_rx_errors);
+RTE_EXPORT_INTERNAL_SYMBOL(netcfg_acquire);
+RTE_EXPORT_INTERNAL_SYMBOL(netcfg_release);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_new_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_free_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_get_params);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_release);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_acquire);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_query_free_buffers);
+RTE_EXPORT_INTERNAL_SYMBOL(bman_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_fqid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_reserve_fqid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_pool_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_alloc_cgrid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_release_cgrid_range);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_ioctl_version_number);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_update_link_speed);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_restart_link_autoneg);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_set_fq_lookup_table);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_register_cb);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_ern_poll_free);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_irqsource_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_irqsource_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_poll_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_clear_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_portal_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dqrr_consume);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_static_dequeue_add);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_dca_index);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_create_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_fqid);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_state);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_init_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_retire_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_oos_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_np);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_query_fq_frm_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_set_vdq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_volatile_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_enqueue_multi_fq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_modify_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_create_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_delete_cgr);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_caam);
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_get_qm_channel_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_fd);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(qman_fq_portal_thread_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(fsl_qman_fq_portal_create);
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index ebc0c1fb4f..490193b535 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -30,10 +30,10 @@
 struct rte_fslmc_bus rte_fslmc_bus;
 
 #define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield"
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_seqn_dynfield_offset);
 int dpaa2_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_get_device_count)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_get_device_count);
 uint32_t
 rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type)
 {
@@ -528,7 +528,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
 }
 
 /*register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_register);
 void
 rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 {
@@ -538,7 +538,7 @@ rte_fslmc_driver_register(struct rte_dpaa2_driver *driver)
 }
 
 /*un-register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_driver_unregister);
 void
 rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver)
 {
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 68439cbd8c..63c490cb4e 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -84,7 +84,7 @@ enum {
 	FSLMC_VFIO_SOCKET_REQ_MEM
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_get_mcp_ptr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_get_mcp_ptr);
 void *
 dpaa2_get_mcp_ptr(int portal_idx)
 {
@@ -156,7 +156,7 @@ fslmc_io_virt2phy(const void *virtaddr)
 }
 
 /*register a fslmc bus based dpaa2 driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_object_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_object_register);
 void
 rte_fslmc_object_register(struct rte_dpaa2_object *object)
 {
@@ -987,7 +987,7 @@ fslmc_unmap_dma(uint64_t vaddr, uint64_t iovaddr, size_t len)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_vaddr_to_iova);
 uint64_t
 rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
 	uint64_t size)
@@ -1006,7 +1006,7 @@ rte_fslmc_cold_mem_vaddr_to_iova(void *vaddr,
 	return RTE_BAD_IOVA;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_cold_mem_iova_to_vaddr);
 void *
 rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
 	uint64_t size)
@@ -1023,7 +1023,7 @@ rte_fslmc_cold_mem_iova_to_vaddr(uint64_t iova,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_vaddr_to_iova);
 __rte_hot uint64_t
 rte_fslmc_mem_vaddr_to_iova(void *vaddr)
 {
@@ -1033,7 +1033,7 @@ rte_fslmc_mem_vaddr_to_iova(void *vaddr)
 	return rte_fslmc_cold_mem_vaddr_to_iova(vaddr, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_mem_iova_to_vaddr);
 __rte_hot void *
 rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
 {
@@ -1043,7 +1043,7 @@ rte_fslmc_mem_iova_to_vaddr(uint64_t iova)
 	return rte_fslmc_cold_mem_iova_to_vaddr(iova, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_vaddr_to_iova)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_vaddr_to_iova);
 uint64_t
 rte_fslmc_io_vaddr_to_iova(void *vaddr)
 {
@@ -1059,7 +1059,7 @@ rte_fslmc_io_vaddr_to_iova(void *vaddr)
 	return RTE_BAD_IOVA;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_iova_to_vaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_io_iova_to_vaddr);
 void *
 rte_fslmc_io_iova_to_vaddr(uint64_t iova)
 {
@@ -1150,14 +1150,14 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fslmc_vfio_mem_dmamap)
+RTE_EXPORT_SYMBOL(rte_fslmc_vfio_mem_dmamap);
 int
 rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 {
 	return fslmc_map_dma(vaddr, iova, size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_vfio_mem_dmaunmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_fslmc_vfio_mem_dmaunmap);
 int
 rte_fslmc_vfio_mem_dmaunmap(uint64_t iova, uint64_t size)
 {
@@ -1275,7 +1275,7 @@ static intptr_t vfio_map_mcp_obj(const char *mcp_obj)
 
 #define IRQ_SET_BUF_LEN  (sizeof(struct vfio_irq_set) + sizeof(int))
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_enable);
 int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 {
 	int len, ret;
@@ -1307,7 +1307,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_intr_disable);
 int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
 {
 	struct vfio_irq_set *irq_set;
diff --git a/drivers/bus/fslmc/mc/dpbp.c b/drivers/bus/fslmc/mc/dpbp.c
index 08f24d33e8..57f05958d3 100644
--- a/drivers/bus/fslmc/mc/dpbp.c
+++ b/drivers/bus/fslmc/mc/dpbp.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_open);
 int dpbp_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpbp_id,
@@ -160,7 +160,7 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_enable);
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -183,7 +183,7 @@ int dpbp_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_disable);
 int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -240,7 +240,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_reset);
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -264,7 +264,7 @@ int dpbp_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_attributes);
 int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -336,7 +336,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
  * Return:  '0' on Success; Error code otherwise.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_num_free_bufs)
+RTE_EXPORT_INTERNAL_SYMBOL(dpbp_get_num_free_bufs);
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpci.c b/drivers/bus/fslmc/mc/dpci.c
index 9df3827f92..288deb82bc 100644
--- a/drivers/bus/fslmc/mc/dpci.c
+++ b/drivers/bus/fslmc/mc/dpci.c
@@ -317,7 +317,7 @@ int dpci_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_rx_queue);
 int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
@@ -480,7 +480,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_opr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_set_opr);
 int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -519,7 +519,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpci_get_opr)
+RTE_EXPORT_INTERNAL_SYMBOL(dpci_get_opr);
 int dpci_get_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpcon.c b/drivers/bus/fslmc/mc/dpcon.c
index b9f2f50e12..e9441a5dc9 100644
--- a/drivers/bus/fslmc/mc/dpcon.c
+++ b/drivers/bus/fslmc/mc/dpcon.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_open);
 int dpcon_open(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       int dpcon_id,
@@ -67,7 +67,7 @@ int dpcon_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_close);
 int dpcon_close(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -168,7 +168,7 @@ int dpcon_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_enable);
 int dpcon_enable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -192,7 +192,7 @@ int dpcon_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_disable);
 int dpcon_disable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token)
@@ -251,7 +251,7 @@ int dpcon_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_reset);
 int dpcon_reset(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -275,7 +275,7 @@ int dpcon_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpcon_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpcon_get_attributes);
 int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint32_t cmd_flags,
 			 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpdmai.c b/drivers/bus/fslmc/mc/dpdmai.c
index 97e90b09f1..24b7e55064 100644
--- a/drivers/bus/fslmc/mc/dpdmai.c
+++ b/drivers/bus/fslmc/mc/dpdmai.c
@@ -26,7 +26,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_open);
 int dpdmai_open(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		int dpdmai_id,
@@ -65,7 +65,7 @@ int dpdmai_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_close);
 int dpdmai_close(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -175,7 +175,7 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_enable);
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token)
@@ -199,7 +199,7 @@ int dpdmai_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_disable);
 int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token)
@@ -282,7 +282,7 @@ int dpdmai_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_attributes);
 int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
 			  uint32_t cmd_flags,
 			  uint16_t token,
@@ -327,7 +327,7 @@ int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_set_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_set_rx_queue);
 int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -370,7 +370,7 @@ int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_rx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_rx_queue);
 int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -421,7 +421,7 @@ int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_tx_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(dpdmai_get_tx_queue);
 int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpio.c b/drivers/bus/fslmc/mc/dpio.c
index 8cdf8f432a..3937805dcf 100644
--- a/drivers/bus/fslmc/mc/dpio.c
+++ b/drivers/bus/fslmc/mc/dpio.c
@@ -28,7 +28,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_open)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_open);
 int dpio_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpio_id,
@@ -64,7 +64,7 @@ int dpio_open(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_close)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_close);
 int dpio_close(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -177,7 +177,7 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_enable);
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token)
@@ -201,7 +201,7 @@ int dpio_enable(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_disable);
 int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token)
@@ -259,7 +259,7 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_reset);
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token)
@@ -284,7 +284,7 @@ int dpio_reset(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_attributes)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_attributes);
 int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -330,7 +330,7 @@ int dpio_get_attributes(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination);
 int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -359,7 +359,7 @@ int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination);
 int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -396,7 +396,7 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_by_core_id)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_by_core_id);
 int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
 					uint32_t cmd_flags,
 					uint16_t token,
@@ -425,7 +425,7 @@ int dpio_set_stashing_destination_by_core_id(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_source)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_set_stashing_destination_source);
 int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -454,7 +454,7 @@ int dpio_set_stashing_destination_source(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination_source)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_get_stashing_destination_source);
 int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -491,7 +491,7 @@ int dpio_get_stashing_destination_source(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_add_static_dequeue_channel)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_add_static_dequeue_channel);
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
 				    uint16_t token,
@@ -531,7 +531,7 @@ int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpio_remove_static_dequeue_channel)
+RTE_EXPORT_INTERNAL_SYMBOL(dpio_remove_static_dequeue_channel);
 int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				       uint32_t cmd_flags,
 				       uint16_t token,
diff --git a/drivers/bus/fslmc/mc/dpmng.c b/drivers/bus/fslmc/mc/dpmng.c
index 47c85cd80d..1a468df32f 100644
--- a/drivers/bus/fslmc/mc/dpmng.c
+++ b/drivers/bus/fslmc/mc/dpmng.c
@@ -20,7 +20,7 @@
  *
  * Return:	'0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mc_get_version)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_get_version);
 int mc_get_version(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   struct mc_version *mc_ver_info)
@@ -60,7 +60,7 @@ int mc_get_version(struct fsl_mc_io *mc_io,
  *
  * Return:     '0' on Success; Error code otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mc_get_soc_version)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_get_soc_version);
 int mc_get_soc_version(struct fsl_mc_io *mc_io,
 		       uint32_t cmd_flags,
 		       struct mc_soc_version *mc_platform_info)
diff --git a/drivers/bus/fslmc/mc/mc_sys.c b/drivers/bus/fslmc/mc/mc_sys.c
index ef4c8dd3b8..0facfbf1de 100644
--- a/drivers/bus/fslmc/mc/mc_sys.c
+++ b/drivers/bus/fslmc/mc/mc_sys.c
@@ -53,7 +53,7 @@ static int mc_status_to_error(enum mc_cmd_status status)
 	return -EINVAL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mc_send_command)
+RTE_EXPORT_INTERNAL_SYMBOL(mc_send_command);
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd)
 {
 	enum mc_cmd_status status;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
index 925e83e97d..c641709016 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c
@@ -96,7 +96,7 @@ dpaa2_create_dpbp_device(int vdev_fd __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dpbp_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dpbp_dev);
 struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void)
 {
 	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
@@ -110,7 +110,7 @@ struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void)
 	return dpbp_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dpbp_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dpbp_dev);
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
 {
 	struct dpaa2_dpbp_dev *dpbp_dev = NULL;
@@ -124,7 +124,7 @@ void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dpbp_supported)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dpbp_supported);
 int dpaa2_dpbp_supported(void)
 {
 	if (TAILQ_EMPTY(&dpbp_dev_list))
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index b546da82f6..f99a7a2afa 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -152,7 +152,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_alloc_dpci_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_alloc_dpci_dev);
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void)
 {
 	struct dpaa2_dpci_dev *dpci_dev = NULL;
@@ -166,7 +166,7 @@ struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void)
 	return dpci_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_free_dpci_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_free_dpci_dev);
 void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci)
 {
 	struct dpaa2_dpci_dev *dpci_dev = NULL;
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index e32471d8b5..c777a66e35 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -48,12 +48,12 @@
 
 #define NUM_HOST_CPUS RTE_MAX_LCORE
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_io_portal)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_io_portal);
 struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
-RTE_EXPORT_INTERNAL_SYMBOL(per_lcore__dpaa2_io)
+RTE_EXPORT_INTERNAL_SYMBOL(per_lcore__dpaa2_io);
 RTE_DEFINE_PER_LCORE(struct dpaa2_io_portal_t, _dpaa2_io);
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_global_active_dqs_list)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_global_active_dqs_list);
 struct swp_active_dqs rte_global_active_dqs_list[NUM_MAX_SWP];
 
 TAILQ_HEAD(dpio_dev_list, dpaa2_dpio_dev);
@@ -62,14 +62,14 @@ static struct dpio_dev_list dpio_dev_list
 static uint32_t io_space_count;
 
 /* Variable to store DPAA2 platform type */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_svr_family)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_svr_family);
 uint32_t dpaa2_svr_family;
 
 /* Variable to store DPAA2 DQRR size */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dqrr_size)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dqrr_size);
 uint8_t dpaa2_dqrr_size;
 /* Variable to store DPAA2 EQCR size */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eqcr_size)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eqcr_size);
 uint8_t dpaa2_eqcr_size;
 
 /* Variable to hold the portal_key, once created.*/
@@ -339,7 +339,7 @@ static struct dpaa2_dpio_dev *dpaa2_get_qbman_swp(void)
 	return dpio_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_swp)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_swp);
 int
 dpaa2_affine_qbman_swp(void)
 {
@@ -361,7 +361,7 @@ dpaa2_affine_qbman_swp(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_ethrx_swp)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_affine_qbman_ethrx_swp);
 int
 dpaa2_affine_qbman_ethrx_swp(void)
 {
@@ -623,7 +623,7 @@ dpaa2_create_dpio_device(int vdev_fd,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dq_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_dq_storage);
 void
 dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage)
 {
@@ -635,7 +635,7 @@ dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dq_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_alloc_dq_storage);
 int
 dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage)
 {
@@ -658,7 +658,7 @@ dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_eq_descriptors)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_free_eq_descriptors);
 uint32_t
 dpaa2_free_eq_descriptors(void)
 {
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index f13168dce3..f41a165faa 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -327,7 +327,7 @@ uint16_t qbman_fq_attr_get_opridsz(struct qbman_fq_query_rslt *r)
 	return r->opridsz;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_query_state)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_query_state);
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r)
 {
@@ -385,7 +385,7 @@ int qbman_fq_state_overflow_error(const struct qbman_fq_query_np_rslt *r)
 	return (int)((r->st1 & 0x40) >> 6);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_state_frame_count)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_fq_state_frame_count);
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r)
 {
 	return (r->frm_cnt & 0x00FFFFFF);
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 84853924e7..a203f02bfb 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -407,7 +407,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
 	return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_interrupt_clear_status)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_interrupt_clear_status);
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
 {
 	qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
@@ -609,13 +609,13 @@ enum qb_enqueue_commands {
 #define QB_ENQUEUE_CMD_NLIS_SHIFT            14
 #define QB_ENQUEUE_CMD_IS_NESN_SHIFT         15
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_clear);
 void qbman_eq_desc_clear(struct qbman_eq_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_no_orp)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_no_orp);
 void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
 {
 	d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_ORP_ENABLE_SHIFT);
@@ -625,7 +625,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
 		d->eq.verb |= enqueue_rejects_to_fq;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_orp)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_orp);
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete)
 {
@@ -665,7 +665,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
 	d->eq.seqnum |= 1 << QB_ENQUEUE_CMD_IS_NESN_SHIFT;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_response)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_response);
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				dma_addr_t storage_phys,
 				int stash)
@@ -674,20 +674,20 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 	d->eq.wae = stash;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_token)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_token);
 void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token)
 {
 	d->eq.rspid = token;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_fq)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_fq);
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
 {
 	d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_TARGET_TYPE_SHIFT);
 	d->eq.tgtid = fqid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_qd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_qd);
 void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
 			  uint16_t qd_bin, uint8_t qd_prio)
 {
@@ -705,7 +705,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
 		d->eq.verb &= ~(1 << QB_ENQUEUE_CMD_IRQ_ON_DISPATCH_SHIFT);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_dca)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_eq_desc_set_dca);
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park)
 {
@@ -1227,7 +1227,7 @@ static int qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple);
 int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 				      const struct qbman_eq_desc *d,
 				      const struct qbman_fd *fd,
@@ -1502,7 +1502,7 @@ static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
 	return num_enqueued;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_fd);
 int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 					 const struct qbman_eq_desc *d,
 					 struct qbman_fd **fd,
@@ -1758,7 +1758,7 @@ static int qbman_swp_enqueue_multiple_desc_mem_back(struct qbman_swp *s,
 
 	return num_enqueued;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_desc)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_enqueue_multiple_desc);
 int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 					   const struct qbman_eq_desc *d,
 					   const struct qbman_fd *fd,
@@ -1785,7 +1785,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
 	*enabled = src | (1 << channel_idx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_push_set)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_push_set);
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
 {
 	uint16_t dqsrc;
@@ -1823,13 +1823,13 @@ enum qb_pull_dt_e {
 	qb_pull_dt_framequeue
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_clear);
 void qbman_pull_desc_clear(struct qbman_pull_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_storage)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_storage);
 void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 				 struct qbman_result *storage,
 				 dma_addr_t storage_phys,
@@ -1850,7 +1850,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 	d->pull.rsp_addr = storage_phys;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_numframes)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_numframes);
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes)
 {
@@ -1862,7 +1862,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
 	d->pull.tok = token;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_fq)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_pull_desc_set_fq);
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
 {
 	d->pull.verb |= 1 << QB_VDQCR_VERB_DCT_SHIFT;
@@ -1978,7 +1978,7 @@ static int qbman_swp_pull_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_pull)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_pull);
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 {
 	if (!s->stash_off)
@@ -2006,7 +2006,7 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
 
 #include <rte_prefetch.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_prefetch_dqrr_next)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_prefetch_dqrr_next);
 void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
 {
 	const struct qbman_result *p;
@@ -2020,7 +2020,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s)
  * only once, so repeated calls can return a sequence of DQRR entries, without
  * requiring they be consumed immediately or in any particular order.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_next)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_next);
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
 {
 	if (!s->stash_off)
@@ -2224,7 +2224,7 @@ const struct qbman_result *qbman_swp_dqrr_next_mem_back(struct qbman_swp *s)
 }
 
 /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_consume)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_consume);
 void qbman_swp_dqrr_consume(struct qbman_swp *s,
 			    const struct qbman_result *dq)
 {
@@ -2233,7 +2233,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s,
 }
 
 /* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_idx_consume)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_dqrr_idx_consume);
 void qbman_swp_dqrr_idx_consume(struct qbman_swp *s,
 			    uint8_t dqrr_index)
 {
@@ -2244,7 +2244,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s,
 /* Polling user-provided storage */
 /*********************************/
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_has_new_result)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_has_new_result);
 int qbman_result_has_new_result(struct qbman_swp *s,
 				struct qbman_result *dq)
 {
@@ -2273,7 +2273,7 @@ int qbman_result_has_new_result(struct qbman_swp *s,
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_new_result)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_new_result);
 int qbman_check_new_result(struct qbman_result *dq)
 {
 	if (dq->dq.tok == 0)
@@ -2289,7 +2289,7 @@ int qbman_check_new_result(struct qbman_result *dq)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_command_complete)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_check_command_complete);
 int qbman_check_command_complete(struct qbman_result *dq)
 {
 	struct qbman_swp *s;
@@ -2377,19 +2377,19 @@ int qbman_result_is_FQPN(const struct qbman_result *dq)
 
 /* These APIs assume qbman_result_is_DQ() is TRUE */
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_flags)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_flags);
 uint8_t qbman_result_DQ_flags(const struct qbman_result *dq)
 {
 	return dq->dq.stat;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_seqnum)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_seqnum);
 uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq)
 {
 	return dq->dq.seqnum;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_odpid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_odpid);
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq)
 {
 	return dq->dq.oprid;
@@ -2410,13 +2410,13 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq)
 	return dq->dq.fq_frm_cnt;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fqd_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fqd_ctx);
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq)
 {
 	return dq->dq.fqd_ctx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_DQ_fd);
 const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq)
 {
 	return (const struct qbman_fd *)&dq->dq.fd[0];
@@ -2425,7 +2425,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq)
 /**************************************/
 /* Parsing state-change notifications */
 /**************************************/
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_SCN_state)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_SCN_state);
 uint8_t qbman_result_SCN_state(const struct qbman_result *scn)
 {
 	return scn->scn.state;
@@ -2485,25 +2485,25 @@ uint64_t qbman_result_cgcu_icnt(const struct qbman_result *scn)
 /********************/
 /* Parsing EQ RESP  */
 /********************/
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_fd);
 struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp)
 {
 	return (struct qbman_fd *)&eqresp->eq_resp.fd[0];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_set_rspid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_set_rspid);
 void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val)
 {
 	eqresp->eq_resp.rspid = val;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rspid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rspid);
 uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp)
 {
 	return eqresp->eq_resp.rspid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rc)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_result_eqresp_rc);
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp)
 {
 	if (eqresp->eq_resp.rc == 0xE)
@@ -2518,14 +2518,14 @@ uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp)
 #define QB_BR_RC_VALID_SHIFT  5
 #define QB_BR_RCDI_SHIFT      6
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_clear);
 void qbman_release_desc_clear(struct qbman_release_desc *d)
 {
 	memset(d, 0, sizeof(*d));
 	d->br.verb = 1 << QB_BR_RC_VALID_SHIFT;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_set_bpid)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_release_desc_set_bpid);
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid)
 {
 	d->br.bpid = bpid;
@@ -2640,7 +2640,7 @@ static int qbman_swp_release_mem_back(struct qbman_swp *s,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_release)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_release);
 int qbman_swp_release(struct qbman_swp *s,
 			     const struct qbman_release_desc *d,
 			     const uint64_t *buffers,
@@ -2767,7 +2767,7 @@ static int qbman_swp_acquire_cinh_direct(struct qbman_swp *s, uint16_t bpid,
 	return num;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_acquire)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_swp_acquire);
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers)
 {
@@ -2951,13 +2951,13 @@ int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
 				  1, ctx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_idx);
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr)
 {
 	return QBMAN_IDX_FROM_DQRR(dqrr);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_from_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(qbman_get_dqrr_from_idx);
 struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx)
 {
 	struct qbman_result *dq;
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index ca9e49f548..cd1375af96 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -45,7 +45,7 @@ static TAILQ_HEAD(, rte_afu_driver) ifpga_afu_drv_list =
 
 
 /* register a ifpga bus based driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_register);
 void rte_ifpga_driver_register(struct rte_afu_driver *driver)
 {
 	RTE_VERIFY(driver);
@@ -54,7 +54,7 @@ void rte_ifpga_driver_register(struct rte_afu_driver *driver)
 }
 
 /* un-register a fpga bus based driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_driver_unregister);
 void rte_ifpga_driver_unregister(struct rte_afu_driver *driver)
 {
 	TAILQ_REMOVE(&ifpga_afu_drv_list, driver, next);
@@ -74,7 +74,7 @@ ifpga_find_afu_dev(const struct rte_rawdev *rdev,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_find_afu_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ifpga_find_afu_by_name);
 struct rte_afu_device *
 rte_ifpga_find_afu_by_name(const char *name)
 {
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index 3f13e1d6ac..de48704948 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -49,7 +49,7 @@
  */
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -71,7 +71,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev)
 {
@@ -413,7 +413,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv __rte_unused,
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int rte_pci_read_config(const struct rte_pci_device *dev,
 		void *buf, size_t len, off_t offset)
 {
@@ -460,7 +460,7 @@ int rte_pci_read_config(const struct rte_pci_device *dev,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int rte_pci_write_config(const struct rte_pci_device *dev,
 		const void *buf, size_t len, off_t offset)
 {
@@ -503,7 +503,7 @@ int rte_pci_write_config(const struct rte_pci_device *dev,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 		      void *buf, size_t len, off_t offset)
 {
@@ -515,7 +515,7 @@ int rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 		       const void *buf, size_t len, off_t offset)
 {
@@ -526,7 +526,7 @@ int rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 	return len;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 		struct rte_pci_ioport *p)
@@ -588,7 +588,7 @@ pci_uio_ioport_read(struct rte_pci_ioport *p,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p,
 		void *data, size_t len, off_t offset)
@@ -631,7 +631,7 @@ pci_uio_ioport_write(struct rte_pci_ioport *p,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p,
 		const void *data, size_t len, off_t offset)
@@ -645,7 +645,7 @@ rte_pci_ioport_write(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p)
 {
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index c20d159218..1eb87c8fe6 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -55,7 +55,7 @@ pci_get_kernel_driver_by_path(const char *filename, char *dri_name,
 }
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -86,7 +86,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev)
 {
@@ -630,7 +630,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv,
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int rte_pci_read_config(const struct rte_pci_device *device,
 		void *buf, size_t len, off_t offset)
 {
@@ -654,7 +654,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int rte_pci_write_config(const struct rte_pci_device *device,
 		const void *buf, size_t len, off_t offset)
 {
@@ -678,7 +678,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int rte_pci_mmio_read(const struct rte_pci_device *device, int bar,
 		void *buf, size_t len, off_t offset)
 {
@@ -701,7 +701,7 @@ int rte_pci_mmio_read(const struct rte_pci_device *device, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int rte_pci_mmio_write(const struct rte_pci_device *device, int bar,
 		const void *buf, size_t len, off_t offset)
 {
@@ -723,7 +723,7 @@ int rte_pci_mmio_write(const struct rte_pci_device *device, int bar,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 		struct rte_pci_ioport *p)
@@ -751,7 +751,7 @@ rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p,
 		void *data, size_t len, off_t offset)
@@ -771,7 +771,7 @@ rte_pci_ioport_read(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p,
 		const void *data, size_t len, off_t offset)
@@ -791,7 +791,7 @@ rte_pci_ioport_write(struct rte_pci_ioport *p,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p)
 {
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index c88634f790..39e564c2e9 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -33,7 +33,7 @@
 
 #define SYSFS_PCI_DEVICES "/sys/bus/pci/devices"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_get_sysfs_path)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_get_sysfs_path);
 const char *rte_pci_get_sysfs_path(void)
 {
 	const char *path = NULL;
@@ -479,7 +479,7 @@ pci_dump_one_device(FILE *f, struct rte_pci_device *dev)
 }
 
 /* dump devices on the bus */
-RTE_EXPORT_SYMBOL(rte_pci_dump)
+RTE_EXPORT_SYMBOL(rte_pci_dump);
 void
 rte_pci_dump(FILE *f)
 {
@@ -504,7 +504,7 @@ pci_parse(const char *name, void *addr)
 }
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_register);
 void
 rte_pci_register(struct rte_pci_driver *driver)
 {
@@ -512,7 +512,7 @@ rte_pci_register(struct rte_pci_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_unregister);
 void
 rte_pci_unregister(struct rte_pci_driver *driver)
 {
@@ -800,7 +800,7 @@ rte_pci_get_iommu_class(void)
 	return iova_mode;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_has_capability_list, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_has_capability_list, 23.11);
 bool
 rte_pci_has_capability_list(const struct rte_pci_device *dev)
 {
@@ -812,14 +812,14 @@ rte_pci_has_capability_list(const struct rte_pci_device *dev)
 	return (status & RTE_PCI_STATUS_CAP_LIST) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_capability, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_capability, 23.11);
 off_t
 rte_pci_find_capability(const struct rte_pci_device *dev, uint8_t cap)
 {
 	return rte_pci_find_next_capability(dev, cap, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_next_capability, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_next_capability, 23.11);
 off_t
 rte_pci_find_next_capability(const struct rte_pci_device *dev, uint8_t cap,
 	off_t offset)
@@ -857,7 +857,7 @@ rte_pci_find_next_capability(const struct rte_pci_device *dev, uint8_t cap,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_ext_capability, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_find_ext_capability, 20.11);
 off_t
 rte_pci_find_ext_capability(const struct rte_pci_device *dev, uint32_t cap)
 {
@@ -900,7 +900,7 @@ rte_pci_find_ext_capability(const struct rte_pci_device *dev, uint32_t cap)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_set_bus_master, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_set_bus_master, 21.08);
 int
 rte_pci_set_bus_master(const struct rte_pci_device *dev, bool enable)
 {
@@ -929,7 +929,7 @@ rte_pci_set_bus_master(const struct rte_pci_device *dev, bool enable)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_pasid_set_state)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pci_pasid_set_state);
 int
 rte_pci_pasid_set_state(const struct rte_pci_device *dev,
 		off_t offset, bool enable)
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index e7e449306e..fc899efd3b 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -37,7 +37,7 @@ DEFINE_DEVPROPKEY(DEVPKEY_Device_Numa_Node, 0x540b947e, 0x8b40, 0x45bc,
  */
 
 /* Map pci device */
-RTE_EXPORT_SYMBOL(rte_pci_map_device)
+RTE_EXPORT_SYMBOL(rte_pci_map_device);
 int
 rte_pci_map_device(struct rte_pci_device *dev)
 {
@@ -52,7 +52,7 @@ rte_pci_map_device(struct rte_pci_device *dev)
 }
 
 /* Unmap pci device */
-RTE_EXPORT_SYMBOL(rte_pci_unmap_device)
+RTE_EXPORT_SYMBOL(rte_pci_unmap_device);
 void
 rte_pci_unmap_device(struct rte_pci_device *dev __rte_unused)
 {
@@ -64,7 +64,7 @@ rte_pci_unmap_device(struct rte_pci_device *dev __rte_unused)
 }
 
 /* Read PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_read_config)
+RTE_EXPORT_SYMBOL(rte_pci_read_config);
 int
 rte_pci_read_config(const struct rte_pci_device *dev __rte_unused,
 	void *buf __rte_unused, size_t len __rte_unused,
@@ -79,7 +79,7 @@ rte_pci_read_config(const struct rte_pci_device *dev __rte_unused,
 }
 
 /* Write PCI config space. */
-RTE_EXPORT_SYMBOL(rte_pci_write_config)
+RTE_EXPORT_SYMBOL(rte_pci_write_config);
 int
 rte_pci_write_config(const struct rte_pci_device *dev __rte_unused,
 	const void *buf __rte_unused, size_t len __rte_unused,
@@ -94,7 +94,7 @@ rte_pci_write_config(const struct rte_pci_device *dev __rte_unused,
 }
 
 /* Read PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_read, 23.07);
 int
 rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 		      void *buf, size_t len, off_t offset)
@@ -107,7 +107,7 @@ rte_pci_mmio_read(const struct rte_pci_device *dev, int bar,
 }
 
 /* Write PCI MMIO space. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pci_mmio_write, 23.07);
 int
 rte_pci_mmio_write(const struct rte_pci_device *dev, int bar,
 		       const void *buf, size_t len, off_t offset)
@@ -131,7 +131,7 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv __rte_unused,
 	return RTE_IOVA_DC;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_map)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_map);
 int
 rte_pci_ioport_map(struct rte_pci_device *dev __rte_unused,
 	int bar __rte_unused, struct rte_pci_ioport *p __rte_unused)
@@ -145,7 +145,7 @@ rte_pci_ioport_map(struct rte_pci_device *dev __rte_unused,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_read)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_read);
 void
 rte_pci_ioport_read(struct rte_pci_ioport *p __rte_unused,
 	void *data __rte_unused, size_t len __rte_unused,
@@ -158,7 +158,7 @@ rte_pci_ioport_read(struct rte_pci_ioport *p __rte_unused,
 	 */
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_unmap);
 int
 rte_pci_ioport_unmap(struct rte_pci_ioport *p __rte_unused)
 {
@@ -181,7 +181,7 @@ pci_device_iommu_support_va(const struct rte_pci_device *dev __rte_unused)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_ioport_write)
+RTE_EXPORT_SYMBOL(rte_pci_ioport_write);
 void
 rte_pci_ioport_write(struct rte_pci_ioport *p __rte_unused,
 		const void *data __rte_unused, size_t len __rte_unused,
diff --git a/drivers/bus/platform/platform.c b/drivers/bus/platform/platform.c
index 0f50027236..9fdbb29e19 100644
--- a/drivers/bus/platform/platform.c
+++ b/drivers/bus/platform/platform.c
@@ -29,14 +29,14 @@
 
 #define PLATFORM_BUS_DEVICES_PATH "/sys/bus/platform/devices"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_register);
 void
 rte_platform_register(struct rte_platform_driver *pdrv)
 {
 	TAILQ_INSERT_TAIL(&platform_bus.driver_list, pdrv, next);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_platform_unregister);
 void
 rte_platform_unregister(struct rte_platform_driver *pdrv)
 {
diff --git a/drivers/bus/uacce/uacce.c b/drivers/bus/uacce/uacce.c
index 87e68b3dbf..679738c665 100644
--- a/drivers/bus/uacce/uacce.c
+++ b/drivers/bus/uacce/uacce.c
@@ -583,7 +583,7 @@ uacce_dev_iterate(const void *start, const char *str,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_avail_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_avail_queues);
 int
 rte_uacce_avail_queues(struct rte_uacce_device *dev)
 {
@@ -597,7 +597,7 @@ rte_uacce_avail_queues(struct rte_uacce_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_alloc);
 int
 rte_uacce_queue_alloc(struct rte_uacce_device *dev, struct rte_uacce_qcontex *qctx)
 {
@@ -612,7 +612,7 @@ rte_uacce_queue_alloc(struct rte_uacce_device *dev, struct rte_uacce_qcontex *qc
 	return -EIO;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_free);
 void
 rte_uacce_queue_free(struct rte_uacce_qcontex *qctx)
 {
@@ -622,7 +622,7 @@ rte_uacce_queue_free(struct rte_uacce_qcontex *qctx)
 	qctx->fd = -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_start)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_start);
 int
 rte_uacce_queue_start(struct rte_uacce_qcontex *qctx)
 {
@@ -630,7 +630,7 @@ rte_uacce_queue_start(struct rte_uacce_qcontex *qctx)
 	return ioctl(qctx->fd, UACCE_CMD_START_Q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_ioctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_ioctl);
 int
 rte_uacce_queue_ioctl(struct rte_uacce_qcontex *qctx, unsigned long cmd, void *arg)
 {
@@ -640,7 +640,7 @@ rte_uacce_queue_ioctl(struct rte_uacce_qcontex *qctx, unsigned long cmd, void *a
 	return ioctl(qctx->fd, cmd, arg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_mmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_mmap);
 void *
 rte_uacce_queue_mmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 {
@@ -666,7 +666,7 @@ rte_uacce_queue_mmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_queue_unmap);
 void
 rte_uacce_queue_unmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 {
@@ -676,7 +676,7 @@ rte_uacce_queue_unmap(struct rte_uacce_qcontex *qctx, enum rte_uacce_qfrt qfrt)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_register);
 void
 rte_uacce_register(struct rte_uacce_driver *driver)
 {
@@ -684,7 +684,7 @@ rte_uacce_register(struct rte_uacce_driver *driver)
 	driver->bus = &uacce_bus;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_uacce_unregister);
 void
 rte_uacce_unregister(struct rte_uacce_driver *driver)
 {
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index be375f63dc..c1c510c448 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -52,7 +52,7 @@ static struct vdev_custom_scans vdev_custom_scans =
 static rte_spinlock_t vdev_custom_scan_lock = RTE_SPINLOCK_INITIALIZER;
 
 /* register a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_register);
 void
 rte_vdev_register(struct rte_vdev_driver *driver)
 {
@@ -60,14 +60,14 @@ rte_vdev_register(struct rte_vdev_driver *driver)
 }
 
 /* unregister a driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdev_unregister);
 void
 rte_vdev_unregister(struct rte_vdev_driver *driver)
 {
 	TAILQ_REMOVE(&vdev_driver_list, driver, next);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_add_custom_scan)
+RTE_EXPORT_SYMBOL(rte_vdev_add_custom_scan);
 int
 rte_vdev_add_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 {
@@ -96,7 +96,7 @@ rte_vdev_add_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	return (custom_scan == NULL) ? -1 : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_remove_custom_scan)
+RTE_EXPORT_SYMBOL(rte_vdev_remove_custom_scan);
 int
 rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 {
@@ -321,7 +321,7 @@ insert_vdev(const char *name, const char *args,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_init)
+RTE_EXPORT_SYMBOL(rte_vdev_init);
 int
 rte_vdev_init(const char *name, const char *args)
 {
@@ -361,7 +361,7 @@ vdev_remove_driver(struct rte_vdev_device *dev)
 	return driver->remove(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdev_uninit)
+RTE_EXPORT_SYMBOL(rte_vdev_uninit);
 int
 rte_vdev_uninit(const char *name)
 {
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index ed18d4da96..67c17b9286 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -165,7 +165,7 @@ static const char *map_names[VMBUS_MAX_RESOURCE] = {
 
 
 /* map the resources of a vmbus device in virtual memory */
-RTE_EXPORT_SYMBOL(rte_vmbus_map_device)
+RTE_EXPORT_SYMBOL(rte_vmbus_map_device);
 int
 rte_vmbus_map_device(struct rte_vmbus_device *dev)
 {
@@ -224,7 +224,7 @@ rte_vmbus_map_device(struct rte_vmbus_device *dev)
 	return vmbus_uio_map_resource(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_unmap_device)
+RTE_EXPORT_SYMBOL(rte_vmbus_unmap_device);
 void
 rte_vmbus_unmap_device(struct rte_vmbus_device *dev)
 {
@@ -341,7 +341,7 @@ vmbus_scan_one(const char *name)
 /*
  * Scan the content of the vmbus, and the devices in the devices list
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_scan)
+RTE_EXPORT_SYMBOL(rte_vmbus_scan);
 int
 rte_vmbus_scan(void)
 {
@@ -373,19 +373,19 @@ rte_vmbus_scan(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_mask)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_mask);
 void rte_vmbus_irq_mask(struct rte_vmbus_device *device)
 {
 	vmbus_uio_irq_control(device, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_unmask)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_unmask);
 void rte_vmbus_irq_unmask(struct rte_vmbus_device *device)
 {
 	vmbus_uio_irq_control(device, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_irq_read)
+RTE_EXPORT_SYMBOL(rte_vmbus_irq_read);
 int rte_vmbus_irq_read(struct rte_vmbus_device *device)
 {
 	return vmbus_uio_irq_read(device);
diff --git a/drivers/bus/vmbus/vmbus_channel.c b/drivers/bus/vmbus/vmbus_channel.c
index a876c909dd..03820015ae 100644
--- a/drivers/bus/vmbus/vmbus_channel.c
+++ b/drivers/bus/vmbus/vmbus_channel.c
@@ -48,7 +48,7 @@ vmbus_set_event(const struct vmbus_channel *chan)
 /*
  * Set the wait between when hypervisor examines the trigger.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_set_latency)
+RTE_EXPORT_SYMBOL(rte_vmbus_set_latency);
 void
 rte_vmbus_set_latency(const struct rte_vmbus_device *dev,
 		      const struct vmbus_channel *chan,
@@ -78,7 +78,7 @@ rte_vmbus_set_latency(const struct rte_vmbus_device *dev,
  * Since this in userspace, rely on the monitor page.
  * Can't do a hypercall from userspace.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_tx)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_tx);
 void
 rte_vmbus_chan_signal_tx(const struct vmbus_channel *chan)
 {
@@ -96,7 +96,7 @@ rte_vmbus_chan_signal_tx(const struct vmbus_channel *chan)
 
 
 /* Do a simple send directly using transmit ring. */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_send)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_send);
 int rte_vmbus_chan_send(struct vmbus_channel *chan, uint16_t type,
 			void *data, uint32_t dlen,
 			uint64_t xactid, uint32_t flags, bool *need_sig)
@@ -140,7 +140,7 @@ int rte_vmbus_chan_send(struct vmbus_channel *chan, uint16_t type,
 }
 
 /* Do a scatter/gather send where the descriptor points to data. */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_send_sglist)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_send_sglist);
 int rte_vmbus_chan_send_sglist(struct vmbus_channel *chan,
 			       struct vmbus_gpa sg[], uint32_t sglen,
 			       void *data, uint32_t dlen,
@@ -184,7 +184,7 @@ int rte_vmbus_chan_send_sglist(struct vmbus_channel *chan,
 	return error;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_rx_empty)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_rx_empty);
 bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 {
 	const struct vmbus_br *br = &channel->rxbr;
@@ -194,7 +194,7 @@ bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 }
 
 /* Signal host after reading N bytes */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_read)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_signal_read);
 void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
 {
 	struct vmbus_br *rbr = &chan->rxbr;
@@ -225,7 +225,7 @@ void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
 	vmbus_set_event(chan);
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv);
 int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
 			uint64_t *request_id)
 {
@@ -273,7 +273,7 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
 }
 
 /* TODO: replace this with inplace ring buffer (no copy) */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv_raw)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_recv_raw);
 int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
 			    void *data, uint32_t *len)
 {
@@ -344,7 +344,7 @@ int vmbus_chan_create(const struct rte_vmbus_device *device,
 }
 
 /* Setup the primary channel */
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_open)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_open);
 int rte_vmbus_chan_open(struct rte_vmbus_device *device,
 			struct vmbus_channel **new_chan)
 {
@@ -365,7 +365,7 @@ int rte_vmbus_chan_open(struct rte_vmbus_device *device,
 	return err;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_max_channels)
+RTE_EXPORT_SYMBOL(rte_vmbus_max_channels);
 int rte_vmbus_max_channels(const struct rte_vmbus_device *device)
 {
 	if (vmbus_uio_subchannels_supported(device, device->primary))
@@ -375,7 +375,7 @@ int rte_vmbus_max_channels(const struct rte_vmbus_device *device)
 }
 
 /* Setup secondary channel */
-RTE_EXPORT_SYMBOL(rte_vmbus_subchan_open)
+RTE_EXPORT_SYMBOL(rte_vmbus_subchan_open);
 int rte_vmbus_subchan_open(struct vmbus_channel *primary,
 			   struct vmbus_channel **new_chan)
 {
@@ -391,13 +391,13 @@ int rte_vmbus_subchan_open(struct vmbus_channel *primary,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_sub_channel_index)
+RTE_EXPORT_SYMBOL(rte_vmbus_sub_channel_index);
 uint16_t rte_vmbus_sub_channel_index(const struct vmbus_channel *chan)
 {
 	return chan->subchannel_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_vmbus_chan_close)
+RTE_EXPORT_SYMBOL(rte_vmbus_chan_close);
 void rte_vmbus_chan_close(struct vmbus_channel *chan)
 {
 	const struct rte_vmbus_device *device = chan->device;
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index a787d8b18d..a567b0755b 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -192,7 +192,7 @@ vmbus_ignore_device(struct rte_vmbus_device *dev)
  * all registered drivers that have a matching entry in its id_table
  * for discovered devices.
  */
-RTE_EXPORT_SYMBOL(rte_vmbus_probe)
+RTE_EXPORT_SYMBOL(rte_vmbus_probe);
 int
 rte_vmbus_probe(void)
 {
@@ -282,7 +282,7 @@ vmbus_devargs_lookup(struct rte_vmbus_device *dev)
 }
 
 /* register vmbus driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_register);
 void
 rte_vmbus_register(struct rte_vmbus_driver *driver)
 {
@@ -293,7 +293,7 @@ rte_vmbus_register(struct rte_vmbus_driver *driver)
 }
 
 /* unregister vmbus driver */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vmbus_unregister);
 void
 rte_vmbus_unregister(struct rte_vmbus_driver *driver)
 {
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 0e6777e6ca..17048c1a7e 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -305,7 +305,7 @@ ot_ipsec_inb_tunnel_hdr_fill(struct roc_ot_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_fill);
 int
 cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -415,7 +415,7 @@ cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_fill);
 int
 cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -580,21 +580,21 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_valid)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_inb_sa_valid);
 bool
 cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa)
 {
 	return !!sa->w2.s.valid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_valid)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ot_ipsec_outb_sa_valid);
 bool
 cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
 {
 	return !!sa->w2.s.valid;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_ivlen_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_ivlen_get);
 uint8_t
 cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		     enum rte_crypto_auth_algorithm a_algo,
@@ -631,7 +631,7 @@ cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 	return ivlen;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_icvlen_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_icvlen_get);
 uint8_t
 cnxk_ipsec_icvlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		      enum rte_crypto_auth_algorithm a_algo,
@@ -678,7 +678,7 @@ cnxk_ipsec_icvlen_get(enum rte_crypto_cipher_algorithm c_algo,
 	return icv;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_roundup_byte)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_roundup_byte);
 uint8_t
 cnxk_ipsec_outb_roundup_byte(enum rte_crypto_cipher_algorithm c_algo,
 			     enum rte_crypto_aead_algorithm aead_algo)
@@ -709,7 +709,7 @@ cnxk_ipsec_outb_roundup_byte(enum rte_crypto_cipher_algorithm c_algo,
 	return roundup_byte;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_rlens_get)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ipsec_outb_rlens_get);
 int
 cnxk_ipsec_outb_rlens_get(struct cnxk_ipsec_outb_rlens *rlens,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -984,7 +984,7 @@ on_fill_ipsec_common_sa(struct rte_security_ipsec_xform *ipsec,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_outb_sa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_outb_sa_create);
 int
 cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
 			     struct rte_crypto_sym_xform *crypto_xform,
@@ -1130,7 +1130,7 @@ cnxk_on_ipsec_outb_sa_create(struct rte_security_ipsec_xform *ipsec,
 	return ctx_len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_inb_sa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_on_ipsec_inb_sa_create);
 int
 cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
 			    struct rte_crypto_sym_xform *crypto_xform,
@@ -1484,7 +1484,7 @@ ow_ipsec_inb_tunnel_hdr_fill(struct roc_ow_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_inb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_inb_sa_fill);
 int
 cnxk_ow_ipsec_inb_sa_fill(struct roc_ow_ipsec_inb_sa *sa,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
@@ -1591,7 +1591,7 @@ cnxk_ow_ipsec_inb_sa_fill(struct roc_ow_ipsec_inb_sa *sa,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_outb_sa_fill)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ow_ipsec_outb_sa_fill);
 int
 cnxk_ow_ipsec_outb_sa_fill(struct roc_ow_ipsec_outb_sa *sa,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
diff --git a/drivers/common/cnxk/cnxk_utils.c b/drivers/common/cnxk/cnxk_utils.c
index 8ca4664d25..cbd8779ce4 100644
--- a/drivers/common/cnxk/cnxk_utils.c
+++ b/drivers/common/cnxk/cnxk_utils.c
@@ -10,7 +10,7 @@
 
 #include "cnxk_utils.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_err_to_rte_err)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_err_to_rte_err);
 int
 roc_nix_tm_err_to_rte_err(int errorcode)
 {
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 88f229163a..b511e2d17e 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -243,7 +243,7 @@ plt_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
 static int plt_init_cb_num;
 static roc_plt_init_cb_t plt_init_cbs[PLT_INIT_CB_MAX];
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init_cb_register)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init_cb_register);
 int
 roc_plt_init_cb_register(roc_plt_init_cb_t cb)
 {
@@ -254,7 +254,7 @@ roc_plt_init_cb_register(roc_plt_init_cb_t cb)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_control_lmt_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_control_lmt_id_get);
 uint16_t
 roc_plt_control_lmt_id_get(void)
 {
@@ -266,7 +266,7 @@ roc_plt_control_lmt_id_get(void)
 		return ROC_NUM_LMT_LINES - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_lmt_validate)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_lmt_validate);
 uint16_t
 roc_plt_lmt_validate(void)
 {
@@ -281,7 +281,7 @@ roc_plt_lmt_validate(void)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_plt_init);
 int
 roc_plt_init(void)
 {
@@ -321,31 +321,31 @@ roc_plt_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_base)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_base);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, INFO);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_mbox)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_mbox);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_mbox, mbox, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_cpt)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_cpt);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_cpt, crypto, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ml)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ml);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ml, ml, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npa)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npa);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_npa, mempool, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_nix)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_nix);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_nix, nix, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npc)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_npc);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_npc, flow, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_sso)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_sso);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_sso, event, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tim)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tim);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_tim, timer, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tm)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_tm);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_tm, tm, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_dpi)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_dpi);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_dpi, dpi, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_rep)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_rep);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_rep, rep, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_esw)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_esw);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_esw, esw, NOTICE);
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ree)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_logtype_ree);
 RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ree, ree, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform_base_symbols.c b/drivers/common/cnxk/roc_platform_base_symbols.c
index 7f0fe601ad..b8d2026dd5 100644
--- a/drivers/common/cnxk/roc_platform_base_symbols.c
+++ b/drivers/common/cnxk/roc_platform_base_symbols.c
@@ -5,545 +5,545 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_xcbc_key_derive)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_hash_key_derive)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_npa_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_start_rxtx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_stop_rxtx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_state)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_get_linkinfo)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_mode)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_supported_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_change)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_tx_control)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_misc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_handler)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_available)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_max_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_rxc_time_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_flush)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_reload)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_eng_grp_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lmtline_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_ctx_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_parse_hdr_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_afs_print)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lfs_print)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_wait_queue_idle)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure_v2)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_tx_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_delete_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_rx_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_rss_action_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_vlan_tpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_repte_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_is_repte_pfs_vf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_md5_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha1_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha256_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha512_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_lmt_base_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_num_lmtlines_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_ring_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_list_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_nix_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_meta_aura_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_dev_pffunc_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_outb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_reass_inb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_outb_sa_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_is_supported)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_hw_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_active_lmac_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_lmac_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_threshold_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_custom_tag_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_intr_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_recovery)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_port_map_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sc_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_stats_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read64)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write64)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read32)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write32)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_save)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_ap2mlip)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_mlip2ap)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_pa_to_offset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_offset_to_pa)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_write_job)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_valid_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_done_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_enqueue)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_dequeue)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_queue_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_lf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_sl)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_on)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_off)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_on)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_off)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_model)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_lbk)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_esw)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_base_chan)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_rx_chan_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vwqe_interval)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_sdp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_vf_or_sdp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf_func)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_inl_ipsec_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cpt_ctx_cache_sync)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_pkt_len)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_rep_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_level_to_idx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_to_idx)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_timeunit_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_count_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free_all)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_pre_color_tbl_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_connect)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_get_reg_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_reg_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_queues_ctx_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cqe_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_cpt_lfs_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_desc_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_npa_bp_cfg)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_count_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_bpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_aura_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_lf_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_inj_lf_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sa_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_rx_inject_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_spi_range)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_sz)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_reassembly_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_probed)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_multi_channel)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_is_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_put)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_rq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_soft_exp_poll_switch)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_is_with_inl_dev)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sso_pffunc_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_tag_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_sa_sync)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ctx_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cpt_lf_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ts_pkind_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_lock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_unlock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_pool_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_eng_caps_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_custom_meta_pool_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_xaq_realloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_qptr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_release)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_err_intr_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ras_intr_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_queue_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_queue_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_cq_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_cq_irqs)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_rxtx_start_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_event_start_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_loopback_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_entries_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_del)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_promisc_mode_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_state_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_mtu_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_rx_len_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_promisc_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_rx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mcast_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_custom_fmt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_switch_hdr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_eeprom_info_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_drop_re_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_rx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_tx_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_clock_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_sync_time_adjust)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_is_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_is_sso_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_modify)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_cman_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_head_tail_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_head_tail_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_default_fill)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_flowkey_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_default_setup)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_num_xstats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_names_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_flush_spin)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prepare_rate_limited_tree)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_prepare_tree)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_format_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_aura_fc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_free_resources)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_add)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_pkt_mode_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_name_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_smq_flush)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_xmit_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_suspend_resume)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prealloc_res)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_shaper_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_parent_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_rlimit_sq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rlimit_sq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_count)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_max)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_root_has_sp)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_egress_link_cfg_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_leaf_cnt)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_lvl)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_next)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_next)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_is_user_hierarchy_enabled)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_tree_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_max_prio)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_is_leaf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_default_red_algo)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_cnt_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_have_link_access)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_alloc_and_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_strip_vtag_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_insert_ena_dis)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_tpid_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_lf_init_cb_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_pc_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_drop_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_limit_modify)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_range_update_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_zero_aura_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_bp_configure)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_lock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_unlock)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_ctx_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_update)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_limit_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_sub_return)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_sub_return)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_read_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_clear_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_free_mcam_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_read_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_get_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_clear_counter)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_move)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_all_resources)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entries)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_enable_all_entries)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_ena_dis_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_write_entry)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_low_priority_mcam)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_profile_name_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_capa_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_key_type_config_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_validate_portid_action)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_parse)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_sdp_channel_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_create)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_merge_base_steering_rule)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_aged_flow_ctx_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_defrag_mcam_banks)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_key_type)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_mcam_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_attach)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_detach)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_msix_offsets_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_config_lf)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_read)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_write)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_len_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_prog)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_qp_get_base)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_check)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_process)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_register)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_unregister)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_hmac_opad_ipad_gen)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_auth_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ciph_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ctx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_pf_func_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_ns_to_gw)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_link)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_unlink)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_gwc_invalidate)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_release)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_from_tag)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stats_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_hws_link_status)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_qos_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_init_xaq_aura)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_free_xaq_aura)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_alloc_xaq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_release_xaq)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_set_priority)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stash_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dump)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_base_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config_hwwqe)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_interval)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_free)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_init)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_error_msg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(roc_clk_freq_get)
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_ec_grp_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ae_fpm_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_xcbc_key_derive);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_aes_hash_key_derive);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_npa_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_start_rxtx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_stop_rxtx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_state);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_get_linkinfo);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_set_link_mode);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_intlbk_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_ptp_rx_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_fec_supported_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_change);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_tx_control);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_cgx_cpri_mode_misc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_handler);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_available);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_max_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_bphy_intr_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_inline_ipsec_inb_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_rxc_time_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_flush);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_ctx_reload);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lf_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_dev_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_eng_grp_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_iq_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lmtline_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_ctx_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_int_misc_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_parse_hdr_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_afs_print);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_cpt_lfs_print);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_wait_queue_idle);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_configure_v2);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_dpi_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_tx_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_delete_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_mcam_rx_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_npc_rss_action_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_vlan_tpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_process_repte_notify_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_nix_repte_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_eswitch_is_repte_pfs_vf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_md5_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha1_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha256_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_hash_sha512_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_maxpools_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_lmt_base_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_num_lmtlines_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_rvu_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_mcs_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_ring_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_list_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_cpt_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_npa_nix_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_meta_aura_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_inject_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_rx_chan_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_idev_nix_inl_dev_pffunc_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ot_ipsec_outb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_reass_inb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ow_ipsec_outb_sa_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_is_supported);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_hw_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_active_lmac_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_lmac_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_threshold_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_ctrl_pkt_rule_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_custom_tag_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_intr_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_recovery);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_event_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rsrc_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_policy_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_pn_table_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_cam_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_policy_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_rx_sc_sa_map_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_tx_sc_sa_map_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_entry_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sa_port_map_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_flowid_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_secy_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_sc_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_port_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_mcs_stats_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read64);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write64);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_read32);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_write32);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_reg_save);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_ap2mlip);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_mlip2ap);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_pa_to_offset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_addr_offset_to_pa);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_write_job);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_valid_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_is_done_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_enqueue);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_dequeue);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_scratch_queue_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_lf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_jcmdq_enqueue_sl);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_on);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_clk_force_off);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_on);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dma_stall_off);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_mlip_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_blk_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ml_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_model);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_lbk);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_esw);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_base_chan);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_rx_chan_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vwqe_interval);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_sdp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_vf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_is_vf_or_sdp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_get_pf_func);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_inl_ipsec_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cpt_ctx_cache_sync);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_pkt_len);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_max_rep_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_level_to_idx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_to_idx);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_timeunit_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_count_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_free_all);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_pre_color_tbl_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_connect);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpf_lf_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_get_reg_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lf_reg_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_queues_ctx_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cqe_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_cpt_lfs_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_desc_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_config_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_fc_npa_bp_cfg);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_pfc_mode_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_count_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_bpids_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_chan_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_chan_bpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_aura_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_lf_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_inj_lf_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sa_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_rx_inject_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_spi_range);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_sz);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_sa_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_reassembly_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_probed);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_is_multi_channel);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_is_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq_put);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_rq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_soft_exp_poll_switch);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inb_is_with_inl_dev);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_rq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_outb_sso_pffunc_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_inb_tag_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_sa_sync);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ctx_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_cpt_lf_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_ts_pkind_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_lock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_unlock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_meta_pool_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_eng_caps_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_custom_meta_pool_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_xaq_realloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_qptr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_inl_dev_cpt_release);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_queue_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_err_intr_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ras_intr_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_queue_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_queue_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_register_cq_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_unregister_cq_irqs);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_rxtx_start_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_event_start_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_loopback_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_entries_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_addr_del);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_promisc_mode_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_state_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_mtu_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_max_rx_len_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mac_link_info_get_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_mcam_entry_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_mcast_list_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_promisc_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mac_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_rx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_npc_mcast_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_custom_fmt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_lso_fmt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_switch_hdr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_eeprom_info_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rx_drop_re_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_rx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_tx_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_clock_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_sync_time_adjust);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_info_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_ptp_is_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_is_sso_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_modify);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_cman_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_cq_head_tail_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_sq_head_tail_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_q_err_cb_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_default_fill);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_key_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_reta_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_flowkey_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_rss_default_setup);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_num_xstats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_stats_queue_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_xstats_names_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_flush_spin);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prepare_rate_limited_tree);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_prepare_tree);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_mark_format_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_sq_aura_fc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_free_resources);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_add);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_pkt_mode_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_name_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_smq_flush);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_xmit_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_hierarchy_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_suspend_resume);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_prealloc_res);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_shaper_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_parent_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_pfc_rlimit_sq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rlimit_sq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_count);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_rsrc_max);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_root_has_sp);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_egress_link_cfg_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_leaf_cnt);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_lvl);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_next);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_profile_next);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_node_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_is_user_hierarchy_enabled);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_tree_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_max_prio);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_is_leaf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_shaper_default_red_algo);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_cnt_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_tm_lvl_have_link_access);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_alloc_and_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_mcam_entry_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_strip_vtag_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_insert_ena_dis);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_nix_vlan_tpid_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_lf_init_cb_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_op_range_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_op_pc_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_drop_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_limit_modify);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_pool_range_update_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_zero_aura_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_aura_bp_configure);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_lock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dev_unlock);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_ctx_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_update);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npa_buf_type_limit_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mark_actions_sub_return);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_vtag_actions_sub_return);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_read_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_inl_mcam_clear_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_free_mcam_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_read_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_get_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_clear_counter);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_move);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_free_all_resources);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entries);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_enable_all_entries);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_alloc_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_ena_dis_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_write_entry);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_low_priority_mcam);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_profile_name_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_capa_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_kex_key_type_config_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_validate_portid_action);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_parse);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_sdp_channel_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_create);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_mcam_merge_base_steering_rule);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_aged_flow_ctx_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_defrag_mcam_banks);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_get_key_type);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_npc_flow_mcam_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_attach);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_queues_detach);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_msix_offsets_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_config_lf);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_read);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_af_reg_write);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_len_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_rule_db_prog);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_qp_get_base);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_err_intr_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_iq_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_ree_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_id_range_check);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_process);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_irq_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_register);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_rvu_lf_msg_handler_unregister);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_hmac_opad_ipad_gen);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_auth_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ciph_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_se_ctx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_pf_func_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_ns_to_gw);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_link);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_unlink);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hws_gwc_invalidate);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_release);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_agq_from_tag);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stats_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_hws_link_status);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_qos_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_init_xaq_aura);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_free_xaq_aura);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_alloc_xaq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_release_xaq);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_set_priority);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_hwgrp_stash_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_rsrc_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_sso_dump);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_base_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_config_hwwqe);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_interval);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_lf_free);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_init);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_tim_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_error_msg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(roc_clk_freq_get);
diff --git a/drivers/common/cpt/cpt_fpm_tables.c b/drivers/common/cpt/cpt_fpm_tables.c
index 0cb14733d9..9216a5de6c 100644
--- a/drivers/common/cpt/cpt_fpm_tables.c
+++ b/drivers/common/cpt/cpt_fpm_tables.c
@@ -1082,7 +1082,7 @@ static rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;
 static uint8_t *fpm_table;
 static int nb_devs;
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_init)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_init);
 int cpt_fpm_init(uint64_t *fpm_table_iova)
 {
 	int i, len = 0;
@@ -1127,7 +1127,7 @@ int cpt_fpm_init(uint64_t *fpm_table_iova)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_clear)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_fpm_clear);
 void cpt_fpm_clear(void)
 {
 	rte_spinlock_lock(&lock);
diff --git a/drivers/common/cpt/cpt_pmd_ops_helper.c b/drivers/common/cpt/cpt_pmd_ops_helper.c
index c7e6f37026..c5d29205f1 100644
--- a/drivers/common/cpt/cpt_pmd_ops_helper.c
+++ b/drivers/common/cpt/cpt_pmd_ops_helper.c
@@ -15,7 +15,7 @@
 #define CPT_MAX_ASYM_OP_NUM_PARAMS 5
 #define CPT_MAX_ASYM_OP_MOD_LEN 1024
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_direct_mode)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_direct_mode);
 int32_t
 cpt_pmd_ops_helper_get_mlen_direct_mode(void)
 {
@@ -30,7 +30,7 @@ cpt_pmd_ops_helper_get_mlen_direct_mode(void)
 	return len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_sg_mode)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_get_mlen_sg_mode);
 int
 cpt_pmd_ops_helper_get_mlen_sg_mode(void)
 {
@@ -46,7 +46,7 @@ cpt_pmd_ops_helper_get_mlen_sg_mode(void)
 	return len;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_asym_get_mlen)
+RTE_EXPORT_INTERNAL_SYMBOL(cpt_pmd_ops_helper_asym_get_mlen);
 int
 cpt_pmd_ops_helper_asym_get_mlen(void)
 {
diff --git a/drivers/common/dpaax/caamflib.c b/drivers/common/dpaax/caamflib.c
index 82a7413b5f..b5bf48704c 100644
--- a/drivers/common/dpaax/caamflib.c
+++ b/drivers/common/dpaax/caamflib.c
@@ -15,5 +15,5 @@
  * - SEC HW block revision format is "v"
  * - SEC revision format is "x.y"
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rta_sec_era)
+RTE_EXPORT_INTERNAL_SYMBOL(rta_sec_era);
 enum rta_sec_era rta_sec_era;
diff --git a/drivers/common/dpaax/dpaa_of.c b/drivers/common/dpaax/dpaa_of.c
index 23035f530d..b58370dfca 100644
--- a/drivers/common/dpaax/dpaa_of.c
+++ b/drivers/common/dpaax/dpaa_of.c
@@ -214,7 +214,7 @@ linear_dir(struct dt_dir *d)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_init_path)
+RTE_EXPORT_INTERNAL_SYMBOL(of_init_path);
 int
 of_init_path(const char *dt_path)
 {
@@ -299,7 +299,7 @@ check_compatible(const struct dt_file *f, const char *compatible)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_find_compatible_node)
+RTE_EXPORT_INTERNAL_SYMBOL(of_find_compatible_node);
 const struct device_node *
 of_find_compatible_node(const struct device_node *from,
 			const char *type __rte_unused,
@@ -325,7 +325,7 @@ of_find_compatible_node(const struct device_node *from,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_property)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_property);
 const void *
 of_get_property(const struct device_node *from, const char *name,
 		size_t *lenp)
@@ -345,7 +345,7 @@ of_get_property(const struct device_node *from, const char *name,
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_available)
+RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_available);
 bool
 of_device_is_available(const struct device_node *dev_node)
 {
@@ -362,7 +362,7 @@ of_device_is_available(const struct device_node *dev_node)
 	return false;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_find_node_by_phandle)
+RTE_EXPORT_INTERNAL_SYMBOL(of_find_node_by_phandle);
 const struct device_node *
 of_find_node_by_phandle(uint64_t ph)
 {
@@ -376,7 +376,7 @@ of_find_node_by_phandle(uint64_t ph)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_parent)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_parent);
 const struct device_node *
 of_get_parent(const struct device_node *dev_node)
 {
@@ -392,7 +392,7 @@ of_get_parent(const struct device_node *dev_node)
 	return &d->parent->node.node;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_next_child)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_next_child);
 const struct device_node *
 of_get_next_child(const struct device_node *dev_node,
 		  const struct device_node *prev)
@@ -422,7 +422,7 @@ of_get_next_child(const struct device_node *dev_node,
 	return &c->node.node;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_n_addr_cells)
+RTE_EXPORT_INTERNAL_SYMBOL(of_n_addr_cells);
 uint32_t
 of_n_addr_cells(const struct device_node *dev_node)
 {
@@ -467,7 +467,7 @@ of_n_size_cells(const struct device_node *dev_node)
 	return OF_DEFAULT_NS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_address);
 const uint32_t *
 of_get_address(const struct device_node *dev_node, size_t idx,
 	       uint64_t *size, uint32_t *flags __rte_unused)
@@ -497,7 +497,7 @@ of_get_address(const struct device_node *dev_node, size_t idx,
 	return (const uint32_t *)buf;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_translate_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_translate_address);
 uint64_t
 of_translate_address(const struct device_node *dev_node,
 		     const uint32_t *addr)
@@ -544,7 +544,7 @@ of_translate_address(const struct device_node *dev_node,
 	return phys_addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_compatible)
+RTE_EXPORT_INTERNAL_SYMBOL(of_device_is_compatible);
 bool
 of_device_is_compatible(const struct device_node *dev_node,
 			const char *compatible)
@@ -585,7 +585,7 @@ static const void *of_get_mac_addr(const struct device_node *np,
  * this case, the real MAC is in 'local-mac-address', and 'mac-address' exists
  * but is all zeros.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(of_get_mac_address)
+RTE_EXPORT_INTERNAL_SYMBOL(of_get_mac_address);
 const void *of_get_mac_address(const struct device_node *np)
 {
 	const void *addr;
diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c
index 1220d9654b..59cc65e9d4 100644
--- a/drivers/common/dpaax/dpaax_iova_table.c
+++ b/drivers/common/dpaax/dpaax_iova_table.c
@@ -9,7 +9,7 @@
 #include "dpaax_logs.h"
 
 /* Global table reference */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_p)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_p);
 struct dpaax_iova_table *dpaax_iova_table_p;
 
 static int dpaax_handle_memevents(void);
@@ -155,7 +155,7 @@ read_memory_node(unsigned int *count)
 	return nodes;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_populate)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_populate);
 int
 dpaax_iova_table_populate(void)
 {
@@ -257,7 +257,7 @@ dpaax_iova_table_populate(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_depopulate)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_depopulate);
 void
 dpaax_iova_table_depopulate(void)
 {
@@ -267,7 +267,7 @@ dpaax_iova_table_depopulate(void)
 	DPAAX_DEBUG("IOVA Table cleaned");
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_update)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_update);
 int
 dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
 {
@@ -354,7 +354,7 @@ dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length)
  * Dump the table, with its entries, on screen. Only works in Debug Mode
  * Not for weak hearted - the tables can get quite large
  */
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_iova_table_dump);
 void
 dpaax_iova_table_dump(void)
 {
@@ -467,5 +467,5 @@ dpaax_handle_memevents(void)
 					       dpaax_memevent_cb, NULL);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaax_logger)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaax_logger);
 RTE_LOG_REGISTER_DEFAULT(dpaax_logger, ERR);
diff --git a/drivers/common/ionic/ionic_common_uio.c b/drivers/common/ionic/ionic_common_uio.c
index aaefab918c..b21e24573e 100644
--- a/drivers/common/ionic/ionic_common_uio.c
+++ b/drivers/common/ionic/ionic_common_uio.c
@@ -104,7 +104,7 @@ uio_get_idx_for_devname(struct uio_name *name_cache, char *devname)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mnet_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mnet_devices);
 void
 ionic_uio_scan_mnet_devices(void)
 {
@@ -148,7 +148,7 @@ ionic_uio_scan_mnet_devices(void)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mcrypt_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_scan_mcrypt_devices);
 void
 ionic_uio_scan_mcrypt_devices(void)
 {
@@ -304,7 +304,7 @@ uio_get_map_res_addr(int uio_idx, int size, int res_idx)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_get_rsrc)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_get_rsrc);
 void
 ionic_uio_get_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 {
@@ -323,7 +323,7 @@ ionic_uio_get_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 	bar->vaddr = ((char *)bar->vaddr) + offs;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_rel_rsrc)
+RTE_EXPORT_INTERNAL_SYMBOL(ionic_uio_rel_rsrc);
 void
 ionic_uio_rel_rsrc(const char *name, int idx, struct ionic_dev_bar *bar)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_common_auxiliary.c b/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
index 3ee2f4638a..81ff1ded67 100644
--- a/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
+++ b/drivers/common/mlx5/linux/mlx5_common_auxiliary.c
@@ -19,7 +19,7 @@
 #define AUXILIARY_SYSFS_PATH "/sys/bus/auxiliary/devices"
 #define MLX5_AUXILIARY_PREFIX "mlx5_core.sf."
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_auxiliary_get_child_name)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_auxiliary_get_child_name);
 int
 mlx5_auxiliary_get_child_name(const char *dev, const char *node,
 			      char *child, size_t size)
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.c b/drivers/common/mlx5/linux/mlx5_common_os.c
index 2867e21618..d045f77d33 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.c
+++ b/drivers/common/mlx5/linux/mlx5_common_os.c
@@ -28,11 +28,11 @@
 #include "mlx5_glue.h"
 
 #ifdef MLX5_GLUE
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 const struct mlx5_glue *mlx5_glue;
 #endif
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_pci_addr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_pci_addr);
 int
 mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr)
 {
@@ -92,7 +92,7 @@ mlx5_get_pci_addr(const char *dev_path, struct rte_pci_addr *pci_addr)
  * @return
  *   port_name field set according to recognized name format.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_translate_port_name)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_translate_port_name);
 void
 mlx5_translate_port_name(const char *port_name_in,
 			 struct mlx5_switch_info *port_info_out)
@@ -159,7 +159,7 @@ mlx5_translate_port_name(const char *port_name_in,
 	port_info_out->name_type = MLX5_PHYS_PORT_NAME_TYPE_UNKNOWN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_ifname_sysfs)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_ifname_sysfs);
 int
 mlx5_get_ifname_sysfs(const char *ibdev_path, char *ifname)
 {
@@ -893,7 +893,7 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
  * @return
  *   Pointer to an `ibv_context` on success, or NULL on failure, with `rte_errno` set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx);
 void *
 mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 {
@@ -931,7 +931,7 @@ mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 	return (void *)ctx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_device_guid)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_get_device_guid);
 int
 mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
 {
@@ -977,7 +977,7 @@ mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len)
  * indirect mkey created by the DevX API.
  * This mkey should be used for DevX commands requesting mkey as a parameter.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create);
 int
 mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 			    size_t length, struct mlx5_pmd_wrapped_mr *pmd_mr)
@@ -1017,7 +1017,7 @@ mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy);
 void
 mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
 {
@@ -1049,7 +1049,7 @@ mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *pmd_mr)
  *  - Interrupt handle on success.
  *  - NULL on failure, with rte_errno set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_create);
 struct rte_intr_handle *
 mlx5_os_interrupt_handler_create(int mode, bool set_fd_nonblock, int fd,
 				 rte_intr_callback_fn cb, void *cb_arg)
@@ -1151,7 +1151,7 @@ mlx5_intr_callback_unregister(const struct rte_intr_handle *handle,
  *   Callback argument for cb.
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_interrupt_handler_destroy);
 void
 mlx5_os_interrupt_handler_destroy(struct rte_intr_handle *intr_handle,
 				  rte_intr_callback_fn cb, void *cb_arg)
diff --git a/drivers/common/mlx5/linux/mlx5_common_verbs.c b/drivers/common/mlx5/linux/mlx5_common_verbs.c
index 98260df470..aba729a80a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_verbs.c
+++ b/drivers/common/mlx5/linux/mlx5_common_verbs.c
@@ -106,7 +106,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx)
  * @return
  *   0 on successful registration, -1 otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_reg_mr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_reg_mr);
 int
 mlx5_common_verbs_reg_mr(void *pd, void *addr, size_t length,
 			 struct mlx5_pmd_mr *pmd_mr)
@@ -136,7 +136,7 @@ mlx5_common_verbs_reg_mr(void *pd, void *addr, size_t length,
  *   pmd_mr struct set with lkey, address, length and pointer to mr object
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_dereg_mr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_verbs_dereg_mr);
 void
 mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
 {
@@ -154,7 +154,7 @@ mlx5_common_verbs_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
  * @param[out] dereg_mr_cb
  *   Pointer to dereg_mr func
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb);
 void
 mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
 {
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index a91eaa429d..0e35fd91c7 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -1580,7 +1580,7 @@ mlx5_glue_dv_destroy_steering_anchor(struct mlx5dv_steering_anchor *sa)
 #endif
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) {
 	.version = MLX5_GLUE_VERSION,
diff --git a/drivers/common/mlx5/linux/mlx5_nl.c b/drivers/common/mlx5/linux/mlx5_nl.c
index 86166e92d0..5810161631 100644
--- a/drivers/common/mlx5/linux/mlx5_nl.c
+++ b/drivers/common/mlx5/linux/mlx5_nl.c
@@ -196,7 +196,7 @@ RTE_ATOMIC(uint32_t) atomic_sn;
  *   A file descriptor on success, a negative errno value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_init);
 int
 mlx5_nl_init(int protocol, int groups)
 {
@@ -643,7 +643,7 @@ mlx5_nl_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
  * @return
  *    0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vf_mac_addr_modify)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vf_mac_addr_modify);
 int
 mlx5_nl_vf_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
 			   struct rte_ether_addr *mac, int vf_index)
@@ -731,7 +731,7 @@ mlx5_nl_vf_mac_addr_modify(int nlsk_fd, unsigned int iface_idx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_add)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_add);
 int
 mlx5_nl_mac_addr_add(int nlsk_fd, unsigned int iface_idx,
 		     uint64_t *mac_own, struct rte_ether_addr *mac,
@@ -769,7 +769,7 @@ mlx5_nl_mac_addr_add(int nlsk_fd, unsigned int iface_idx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_remove)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_remove);
 int
 mlx5_nl_mac_addr_remove(int nlsk_fd, unsigned int iface_idx, uint64_t *mac_own,
 			struct rte_ether_addr *mac, uint32_t index)
@@ -794,7 +794,7 @@ mlx5_nl_mac_addr_remove(int nlsk_fd, unsigned int iface_idx, uint64_t *mac_own,
  * @param n
  *   @p mac_addrs array size.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_sync)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_sync);
 void
 mlx5_nl_mac_addr_sync(int nlsk_fd, unsigned int iface_idx,
 		      struct rte_ether_addr *mac_addrs, int n)
@@ -851,7 +851,7 @@ mlx5_nl_mac_addr_sync(int nlsk_fd, unsigned int iface_idx,
  * @param mac_own
  *   BITFIELD_DECLARE array to store the mac.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_flush)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_mac_addr_flush);
 void
 mlx5_nl_mac_addr_flush(int nlsk_fd, unsigned int iface_idx,
 		       struct rte_ether_addr *mac_addrs, int n,
@@ -930,7 +930,7 @@ mlx5_nl_device_flags(int nlsk_fd, unsigned int iface_idx, uint32_t flags,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_promisc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_promisc);
 int
 mlx5_nl_promisc(int nlsk_fd, unsigned int iface_idx, int enable)
 {
@@ -957,7 +957,7 @@ mlx5_nl_promisc(int nlsk_fd, unsigned int iface_idx, int enable)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_allmulti)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_allmulti);
 int
 mlx5_nl_allmulti(int nlsk_fd, unsigned int iface_idx, int enable)
 {
@@ -1147,7 +1147,7 @@ mlx5_nl_port_info(int nl, uint32_t pindex, struct mlx5_nl_port_info *data)
  *   A valid (nonzero) interface index on success, 0 otherwise and rte_errno
  *   is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_ifindex)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_ifindex);
 unsigned int
 mlx5_nl_ifindex(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info *dev_info)
 {
@@ -1204,7 +1204,7 @@ mlx5_nl_ifindex(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info
  *   Port state (ibv_port_state) on success, negative on error
  *   and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_port_state)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_port_state);
 int
 mlx5_nl_port_state(int nl, const char *name, uint32_t pindex, struct mlx5_dev_info *dev_info)
 {
@@ -1240,7 +1240,7 @@ mlx5_nl_port_state(int nl, const char *name, uint32_t pindex, struct mlx5_dev_in
  *   A valid (nonzero) number of ports on success, 0 otherwise
  *   and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_portnum)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_portnum);
 unsigned int
 mlx5_nl_portnum(int nl, const char *name, struct mlx5_dev_info *dev_info)
 {
@@ -1447,7 +1447,7 @@ mlx5_nl_switch_info_cb(struct nlmsghdr *nh, void *arg)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_switch_info)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_switch_info);
 int
 mlx5_nl_switch_info(int nl, unsigned int ifindex,
 		    struct mlx5_switch_info *info)
@@ -1498,7 +1498,7 @@ mlx5_nl_switch_info(int nl, unsigned int ifindex,
  * @param[in] ifindex
  *   Interface index of network device to delete.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_delete)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_delete);
 void
 mlx5_nl_vlan_vmwa_delete(struct mlx5_nl_vlan_vmwa_context *vmwa,
 		      uint32_t ifindex)
@@ -1576,7 +1576,7 @@ nl_attr_nest_end(struct nlmsghdr *nlh, struct nlattr *nest)
  * @param[in] tag
  *   VLAN tag for VLAN network device to create.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_vlan_vmwa_create);
 uint32_t
 mlx5_nl_vlan_vmwa_create(struct mlx5_nl_vlan_vmwa_context *vmwa,
 			 uint32_t ifindex, uint16_t tag)
@@ -1729,7 +1729,7 @@ mlx5_nl_generic_family_id_get(int nlsk_fd, const char *name)
  *   otherwise and rte_errno is set.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_family_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_family_id_get);
 int
 mlx5_nl_devlink_family_id_get(int nlsk_fd)
 {
@@ -1956,7 +1956,7 @@ mlx5_nl_enable_roce_set(int nlsk_fd, int family_id, const char *pci_addr,
  * @return
  *  0 on success, negative on failure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_parse_link_status_update)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_parse_link_status_update);
 int
 mlx5_nl_parse_link_status_update(struct nlmsghdr *hdr, uint32_t *ifindex)
 {
@@ -1988,7 +1988,7 @@ mlx5_nl_parse_link_status_update(struct nlmsghdr *hdr, uint32_t *ifindex)
  *  0 on success, including the case when there are no events.
  *  Negative on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_read_events)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_read_events);
 int
 mlx5_nl_read_events(int nlsk_fd, mlx5_nl_event_cb *cb, void *cb_arg)
 {
@@ -2076,7 +2076,7 @@ mlx5_nl_esw_multiport_cb(struct nlmsghdr *nh, void *arg)
 
 #define NL_ESW_MULTIPORT_PARAM "esw_multiport"
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_esw_multiport_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_devlink_esw_multiport_get);
 int
 mlx5_nl_devlink_esw_multiport_get(int nlsk_fd, int family_id, const char *pci_addr, int *enable)
 {
@@ -2115,14 +2115,14 @@ mlx5_nl_devlink_esw_multiport_get(int nlsk_fd, int family_id, const char *pci_ad
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_init);
 int
 mlx5_nl_rdma_monitor_init(void)
 {
 	return mlx5_nl_init(NETLINK_RDMA, RDMA_NL_GROUP_NOTIFICATION);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_info_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_info_get);
 void
 mlx5_nl_rdma_monitor_info_get(struct nlmsghdr *hdr, struct mlx5_nl_port_info *data)
 {
@@ -2217,7 +2217,7 @@ mlx5_nl_rdma_monitor_cap_get_cb(struct nlmsghdr *hdr, void *arg)
  * @return
  *   0 on success, negative on error and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_cap_get)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_nl_rdma_monitor_cap_get);
 int
 mlx5_nl_rdma_monitor_cap_get(int nl, uint8_t *cap)
 {
diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 84a93e7dbd..98249c2c9e 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -21,7 +21,7 @@
 #include "mlx5_common_defs.h"
 #include "mlx5_common_private.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(haswell_broadwell_cpu)
+RTE_EXPORT_INTERNAL_SYMBOL(haswell_broadwell_cpu);
 uint8_t haswell_broadwell_cpu;
 
 /* Driver type key for new device global syntax. */
@@ -138,7 +138,7 @@ driver_get(uint32_t class)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_kvargs_process)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_kvargs_process);
 int
 mlx5_kvargs_process(struct mlx5_kvargs_ctrl *mkvlist, const char *const keys[],
 		    arg_handler_t handler, void *opaque_arg)
@@ -475,7 +475,7 @@ to_mlx5_device(const struct rte_device *rte_dev)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_to_pci_str)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_to_pci_str);
 int
 mlx5_dev_to_pci_str(const struct rte_device *dev, char *addr, size_t size)
 {
@@ -525,7 +525,7 @@ mlx5_dev_mempool_register(struct mlx5_common_device *cdev,
  * @param mp
  *   Mempool being unregistered.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_unregister);
 void
 mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev,
 			    struct rte_mempool *mp)
@@ -605,7 +605,7 @@ mlx5_dev_mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
  * Callbacks addresses are local in each process.
  * Therefore, each process can register private callbacks.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_subscribe)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_mempool_subscribe);
 int
 mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev)
 {
@@ -1235,7 +1235,7 @@ mlx5_common_dev_dma_unmap(struct rte_device *rte_dev, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_class_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_class_driver_register);
 void
 mlx5_class_driver_register(struct mlx5_class_driver *driver)
 {
@@ -1258,7 +1258,7 @@ static bool mlx5_common_initialized;
  * for multiple PMDs. Each mlx5 PMD that depends on mlx5_common module,
  * must invoke in its constructor.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_common_init);
 void
 mlx5_common_init(void)
 {
@@ -1417,7 +1417,7 @@ mlx5_devx_alloc_uar(struct mlx5_common_device *cdev)
 	return uar;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_release)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_release);
 void
 mlx5_devx_uar_release(struct mlx5_uar *uar)
 {
@@ -1426,7 +1426,7 @@ mlx5_devx_uar_release(struct mlx5_uar *uar)
 	memset(uar, 0, sizeof(*uar));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_prepare)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_uar_prepare);
 int
 mlx5_devx_uar_prepare(struct mlx5_common_device *cdev, struct mlx5_uar *uar)
 {
diff --git a/drivers/common/mlx5/mlx5_common_devx.c b/drivers/common/mlx5/mlx5_common_devx.c
index 18a53769c9..929b794ba7 100644
--- a/drivers/common/mlx5/mlx5_common_devx.c
+++ b/drivers/common/mlx5/mlx5_common_devx.c
@@ -24,7 +24,7 @@
  * @param[in] cq
  *   DevX CQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_destroy);
 void
 mlx5_devx_cq_destroy(struct mlx5_devx_cq *cq)
 {
@@ -81,7 +81,7 @@ mlx5_cq_init(struct mlx5_devx_cq *cq_obj, uint16_t cq_size)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cq_create);
 int
 mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
 		    struct mlx5_devx_cq_attr *attr, int socket)
@@ -197,7 +197,7 @@ mlx5_devx_cq_create(void *ctx, struct mlx5_devx_cq *cq_obj, uint16_t log_desc_n,
  * @param[in] sq
  *   DevX SQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_destroy);
 void
 mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
 {
@@ -242,7 +242,7 @@ mlx5_devx_sq_destroy(struct mlx5_devx_sq *sq)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_sq_create);
 int
 mlx5_devx_sq_create(void *ctx, struct mlx5_devx_sq *sq_obj, uint16_t log_wqbb_n,
 		    struct mlx5_devx_create_sq_attr *attr, int socket)
@@ -380,7 +380,7 @@ mlx5_devx_rmp_destroy(struct mlx5_devx_rmp *rmp)
  * @param[in] qp
  *   DevX QP to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_destroy);
 void
 mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
 {
@@ -419,7 +419,7 @@ mlx5_devx_qp_destroy(struct mlx5_devx_qp *qp)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp_create);
 int
 mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint32_t queue_size,
 		    struct mlx5_devx_qp_attr *attr, int socket)
@@ -490,7 +490,7 @@ mlx5_devx_qp_create(void *ctx, struct mlx5_devx_qp *qp_obj, uint32_t queue_size,
  * @param[in] rq
  *   DevX RQ to destroy.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_destroy);
 void
 mlx5_devx_rq_destroy(struct mlx5_devx_rq *rq)
 {
@@ -766,7 +766,7 @@ mlx5_devx_rq_shared_create(void *ctx, struct mlx5_devx_rq *rq_obj,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_rq_create);
 int
 mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
 		    uint32_t wqe_size, uint16_t log_wqbb_n,
@@ -790,7 +790,7 @@ mlx5_devx_rq_create(void *ctx, struct mlx5_devx_rq *rq_obj,
  * @return
  *	 0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp2rts)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_qp2rts);
 int
 mlx5_devx_qp2rts(struct mlx5_devx_qp *qp, uint32_t remote_qp_id)
 {
diff --git a/drivers/common/mlx5/mlx5_common_mp.c b/drivers/common/mlx5/mlx5_common_mp.c
index 1ff268f348..44ccee4cfa 100644
--- a/drivers/common/mlx5/mlx5_common_mp.c
+++ b/drivers/common/mlx5/mlx5_common_mp.c
@@ -25,7 +25,7 @@
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mr_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mr_create);
 int
 mlx5_mp_req_mr_create(struct mlx5_common_device *cdev, uintptr_t addr)
 {
@@ -65,7 +65,7 @@ mlx5_mp_req_mr_create(struct mlx5_common_device *cdev, uintptr_t addr)
  * @param reg
  *   True to register the mempool, False to unregister.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mempool_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_mempool_reg);
 int
 mlx5_mp_req_mempool_reg(struct mlx5_common_device *cdev,
 			struct rte_mempool *mempool, bool reg,
@@ -116,7 +116,7 @@ mlx5_mp_req_mempool_reg(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_queue_state_modify)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_queue_state_modify);
 int
 mlx5_mp_req_queue_state_modify(struct mlx5_mp_id *mp_id,
 			       struct mlx5_mp_arg_queue_state_modify *sm)
@@ -155,7 +155,7 @@ mlx5_mp_req_queue_state_modify(struct mlx5_mp_id *mp_id,
  * @return
  *   fd on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_verbs_cmd_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_req_verbs_cmd_fd);
 int
 mlx5_mp_req_verbs_cmd_fd(struct mlx5_mp_id *mp_id)
 {
@@ -197,7 +197,7 @@ mlx5_mp_req_verbs_cmd_fd(struct mlx5_mp_id *mp_id)
 /**
  * Initialize by primary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_primary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_primary);
 int
 mlx5_mp_init_primary(const char *name, const rte_mp_t primary_action)
 {
@@ -215,7 +215,7 @@ mlx5_mp_init_primary(const char *name, const rte_mp_t primary_action)
 /**
  * Un-initialize by primary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_primary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_primary);
 void
 mlx5_mp_uninit_primary(const char *name)
 {
@@ -226,7 +226,7 @@ mlx5_mp_uninit_primary(const char *name)
 /**
  * Initialize by secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_init_secondary);
 int
 mlx5_mp_init_secondary(const char *name, const rte_mp_t secondary_action)
 {
@@ -237,7 +237,7 @@ mlx5_mp_init_secondary(const char *name, const rte_mp_t secondary_action)
 /**
  * Un-initialize by secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mp_uninit_secondary);
 void
 mlx5_mp_uninit_secondary(const char *name)
 {
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index c41ffff2d5..a928515728 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -52,7 +52,7 @@ struct mlx5_mempool_reg {
 	bool is_extmem;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mprq_buf_free_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mprq_buf_free_cb);
 void
 mlx5_mprq_buf_free_cb(void *addr __rte_unused, void *opaque)
 {
@@ -251,7 +251,7 @@ mlx5_mr_btree_init(struct mlx5_mr_btree *bt, int n, int socket)
  * @param bt
  *   Pointer to B-tree structure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_btree_free)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_btree_free);
 void
 mlx5_mr_btree_free(struct mlx5_mr_btree *bt)
 {
@@ -302,7 +302,7 @@ mlx5_mr_btree_dump(struct mlx5_mr_btree *bt __rte_unused)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_ctrl_init)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_ctrl_init);
 int
 mlx5_mr_ctrl_init(struct mlx5_mr_ctrl *mr_ctrl, uint32_t *dev_gen_ptr,
 		  int socket)
@@ -969,7 +969,7 @@ mlx5_mr_create_primary(void *pd,
  * @return
  *   Searched LKey on success, UINT32_MAX on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_create);
 uint32_t
 mlx5_mr_create(struct mlx5_common_device *cdev,
 	       struct mlx5_mr_share_cache *share_cache,
@@ -1064,7 +1064,7 @@ mr_lookup_caches(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *   Searched LKey on success, UINT32_MAX on no match.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_addr2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_addr2mr_bh);
 uint32_t
 mlx5_mr_addr2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, uintptr_t addr)
 {
@@ -1155,7 +1155,7 @@ mlx5_mr_create_cache(struct mlx5_mr_share_cache *share_cache, int socket)
  * @param mr_ctrl
  *   Pointer to per-queue MR local cache.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_flush_local_cache)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_flush_local_cache);
 void
 mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl)
 {
@@ -1810,7 +1810,7 @@ mlx5_mr_mempool_register_secondary(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_register);
 int
 mlx5_mr_mempool_register(struct mlx5_common_device *cdev,
 			 struct rte_mempool *mp, bool is_extmem)
@@ -1876,7 +1876,7 @@ mlx5_mr_mempool_unregister_secondary(struct mlx5_common_device *cdev,
  * @return
  *   0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_unregister);
 int
 mlx5_mr_mempool_unregister(struct mlx5_common_device *cdev,
 			   struct rte_mempool *mp)
@@ -1988,7 +1988,7 @@ mlx5_lookup_mempool_regs(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *  0 on success, (-1) on failure and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_populate_cache)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool_populate_cache);
 int
 mlx5_mr_mempool_populate_cache(struct mlx5_mr_ctrl *mr_ctrl,
 			       struct rte_mempool *mp)
@@ -2048,7 +2048,7 @@ mlx5_mr_mempool_populate_cache(struct mlx5_mr_ctrl *mr_ctrl,
  * @return
  *   MR lkey on success, UINT32_MAX on failure.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mempool2mr_bh);
 uint32_t
 mlx5_mr_mempool2mr_bh(struct mlx5_mr_ctrl *mr_ctrl,
 		      struct rte_mempool *mp, uintptr_t addr)
@@ -2075,7 +2075,7 @@ mlx5_mr_mempool2mr_bh(struct mlx5_mr_ctrl *mr_ctrl,
 	return lkey;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mb2mr_bh)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_mr_mb2mr_bh);
 uint32_t
 mlx5_mr_mb2mr_bh(struct mlx5_mr_ctrl *mr_ctrl, struct rte_mbuf *mb)
 {
diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c
index 8bd43bc166..10b1c90fa9 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -103,14 +103,14 @@ pci_ids_table_update(const struct rte_pci_id *driver_id_table)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_pci)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_pci);
 bool
 mlx5_dev_is_pci(const struct rte_device *dev)
 {
 	return strcmp(dev->bus->name, "pci") == 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_vf_pci)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_dev_is_vf_pci);
 bool
 mlx5_dev_is_vf_pci(const struct rte_pci_device *pci_dev)
 {
diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 14056cebcb..88f2d48c2e 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -27,7 +27,7 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_create);
 struct mlx5_list *
 mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 		 mlx5_list_create_cb cb_create,
@@ -122,7 +122,7 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
 	return entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_lookup)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_lookup);
 struct mlx5_list_entry *
 mlx5_list_lookup(struct mlx5_list *list, void *ctx)
 {
@@ -263,7 +263,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 	return local_entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_register);
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
@@ -323,7 +323,7 @@ _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_unregister);
 int
 mlx5_list_unregister(struct mlx5_list *list,
 		      struct mlx5_list_entry *entry)
@@ -371,7 +371,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_destroy);
 void
 mlx5_list_destroy(struct mlx5_list *list)
 {
@@ -379,7 +379,7 @@ mlx5_list_destroy(struct mlx5_list *list)
 	mlx5_free(list);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_get_entry_num)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_list_get_entry_num);
 uint32_t
 mlx5_list_get_entry_num(struct mlx5_list *list)
 {
@@ -389,7 +389,7 @@ mlx5_list_get_entry_num(struct mlx5_list *list)
 
 /********************* Hash List **********************/
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_create);
 struct mlx5_hlist *
 mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 		  bool lcores_share, void *ctx, mlx5_list_create_cb cb_create,
@@ -455,7 +455,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
 }
 
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_lookup)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_lookup);
 struct mlx5_list_entry *
 mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
@@ -468,7 +468,7 @@ mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx)
 	return _mlx5_list_lookup(&h->buckets[idx].l, &h->l_const, ctx);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_register)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_register);
 struct mlx5_list_entry*
 mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
@@ -497,7 +497,7 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 	return entry;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_unregister);
 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
@@ -516,7 +516,7 @@ mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_hlist_destroy);
 void
 mlx5_hlist_destroy(struct mlx5_hlist *h)
 {
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index cf601254ab..82ba2106a8 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -87,7 +87,7 @@ mlx5_devx_get_hca_cap(void *ctx, uint32_t *in, uint32_t *out,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_read)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_read);
 int
 mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id, uint32_t arg,
 			    uint32_t *data, uint32_t dw_cnt)
@@ -138,7 +138,7 @@ mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id, uint32_t arg,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_write)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_register_write);
 int
 mlx5_devx_cmd_register_write(void *ctx, uint16_t reg_id, uint32_t arg,
 			     uint32_t *data, uint32_t dw_cnt)
@@ -179,7 +179,7 @@ mlx5_devx_cmd_register_write(void *ctx, uint16_t reg_id, uint32_t arg,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc_general)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc_general);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc_general(void *ctx,
 		struct mlx5_devx_counter_attr *attr)
@@ -229,7 +229,7 @@ mlx5_devx_cmd_flow_counter_alloc_general(void *ctx,
  *   Pointer to counter object on success, a negative value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_alloc);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
 {
@@ -281,7 +281,7 @@ mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_counter_query);
 int
 mlx5_devx_cmd_flow_counter_query(struct mlx5_devx_obj *dcs,
 				 int clear, uint32_t n_counters,
@@ -343,7 +343,7 @@ mlx5_devx_cmd_flow_counter_query(struct mlx5_devx_obj *dcs,
  *   Pointer to Devx mkey on success, a negative value otherwise and rte_errno
  *   is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_mkey_create);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_mkey_create(void *ctx,
 			  struct mlx5_devx_mkey_attr *attr)
@@ -447,7 +447,7 @@ mlx5_devx_cmd_mkey_create(void *ctx,
  * @return
  *   0 on success, non-zero value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_get_out_command_status)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_get_out_command_status);
 int
 mlx5_devx_get_out_command_status(void *out)
 {
@@ -474,7 +474,7 @@ mlx5_devx_get_out_command_status(void *out)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_destroy);
 int
 mlx5_devx_cmd_destroy(struct mlx5_devx_obj *obj)
 {
@@ -634,7 +634,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
  * @return
  *   0 on success, a negative errno otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_match_sample_info_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_match_sample_info_query);
 int
 mlx5_devx_cmd_match_sample_info_query(void *ctx, uint32_t sample_field_id,
 				      struct mlx5_devx_match_sample_info_query_attr *attr)
@@ -672,7 +672,7 @@ mlx5_devx_cmd_match_sample_info_query(void *ctx, uint32_t sample_field_id,
 #endif
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_parse_samples)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_parse_samples);
 int
 mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 				  uint32_t *ids,
@@ -727,7 +727,7 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flex_parser)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flex_parser);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flex_parser(void *ctx,
 				 struct mlx5_devx_graph_node_attr *data)
@@ -928,7 +928,7 @@ mlx5_devx_query_pkt_integrity_match(void *hcattr)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_hca_attr)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_hca_attr);
 int
 mlx5_devx_cmd_query_hca_attr(void *ctx,
 			     struct mlx5_hca_attr *attr)
@@ -1438,7 +1438,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_qp_query_tis_td)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_qp_query_tis_td);
 int
 mlx5_devx_cmd_qp_query_tis_td(void *qp, uint32_t tis_num,
 			      uint32_t *tis_td)
@@ -1525,7 +1525,7 @@ devx_cmd_fill_wq_data(void *wq_ctx, struct mlx5_devx_wq_attr *wq_attr)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rq(void *ctx,
 			struct mlx5_devx_create_rq_attr *rq_attr,
@@ -1584,7 +1584,7 @@ mlx5_devx_cmd_create_rq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rq);
 int
 mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
 			struct mlx5_devx_modify_rq_attr *rq_attr)
@@ -1638,7 +1638,7 @@ mlx5_devx_cmd_modify_rq(struct mlx5_devx_obj *rq,
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_rq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_rq);
 int
 mlx5_devx_cmd_query_rq(struct mlx5_devx_obj *rq_obj, void *out, size_t outlen)
 {
@@ -1668,7 +1668,7 @@ mlx5_devx_cmd_query_rq(struct mlx5_devx_obj *rq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rmp)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rmp);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rmp(void *ctx,
 			 struct mlx5_devx_create_rmp_attr *rmp_attr,
@@ -1716,7 +1716,7 @@ mlx5_devx_cmd_create_rmp(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tir)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tir);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_tir(void *ctx,
 			 struct mlx5_devx_tir_attr *tir_attr)
@@ -1785,7 +1785,7 @@ mlx5_devx_cmd_create_tir(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_tir)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_tir);
 int
 mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
 			 struct mlx5_devx_modify_tir_attr *modify_tir_attr)
@@ -1870,7 +1870,7 @@ mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rqt)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_rqt);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_rqt(void *ctx,
 			 struct mlx5_devx_rqt_attr *rqt_attr)
@@ -1925,7 +1925,7 @@ mlx5_devx_cmd_create_rqt(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rqt)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_rqt);
 int
 mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			 struct mlx5_devx_rqt_attr *rqt_attr)
@@ -1974,7 +1974,7 @@ mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  **/
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_sq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_sq(void *ctx,
 			struct mlx5_devx_create_sq_attr *sq_attr)
@@ -2041,7 +2041,7 @@ mlx5_devx_cmd_create_sq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_sq);
 int
 mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
 			struct mlx5_devx_modify_sq_attr *sq_attr)
@@ -2081,7 +2081,7 @@ mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_sq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_sq);
 int
 mlx5_devx_cmd_query_sq(struct mlx5_devx_obj *sq_obj, void *out, size_t outlen)
 {
@@ -2109,7 +2109,7 @@ mlx5_devx_cmd_query_sq(struct mlx5_devx_obj *sq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tis)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_tis);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_tis(void *ctx,
 			 struct mlx5_devx_tis_attr *tis_attr)
@@ -2153,7 +2153,7 @@ mlx5_devx_cmd_create_tis(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_td)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_td);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_td(void *ctx)
 {
@@ -2196,7 +2196,7 @@ mlx5_devx_cmd_create_td(void *ctx)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_dump);
 int
 mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused,
 			void *rx_domain __rte_unused,
@@ -2222,7 +2222,7 @@ mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused,
 	return -ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_single_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_flow_single_dump);
 int
 mlx5_devx_cmd_flow_single_dump(void *rule_info __rte_unused,
 			FILE *file __rte_unused)
@@ -2248,7 +2248,7 @@ mlx5_devx_cmd_flow_single_dump(void *rule_info __rte_unused,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_cq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_cq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_cq(void *ctx, struct mlx5_devx_cq_attr *attr)
 {
@@ -2317,7 +2317,7 @@ mlx5_devx_cmd_create_cq(void *ctx, struct mlx5_devx_cq_attr *attr)
  * @return
  *   0 if Query successful, else non-zero return value from devx_obj_query API
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_cq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_cq);
 int
 mlx5_devx_cmd_query_cq(struct mlx5_devx_obj *cq_obj, void *out, size_t outlen)
 {
@@ -2345,7 +2345,7 @@ mlx5_devx_cmd_query_cq(struct mlx5_devx_obj *cq_obj, void *out, size_t outlen)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtq);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_virtq(void *ctx,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2422,7 +2422,7 @@ mlx5_devx_cmd_create_virtq(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_virtq);
 int
 mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2521,7 +2521,7 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtq)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtq);
 int
 mlx5_devx_cmd_query_virtq(struct mlx5_devx_obj *virtq_obj,
 			   struct mlx5_devx_virtq_attr *attr)
@@ -2564,7 +2564,7 @@ mlx5_devx_cmd_query_virtq(struct mlx5_devx_obj *virtq_obj,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_qp)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_qp);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_qp(void *ctx,
 			struct mlx5_devx_qp_attr *attr)
@@ -2667,7 +2667,7 @@ mlx5_devx_cmd_create_qp(void *ctx,
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_qp_state)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_modify_qp_state);
 int
 mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 			      uint32_t remote_qp_id)
@@ -2745,7 +2745,7 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtio_q_counters)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_virtio_q_counters);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
 {
@@ -2777,7 +2777,7 @@ mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
 	return couners_obj;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtio_q_counters)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_virtio_q_counters);
 int
 mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
 				   struct mlx5_devx_virtio_q_couners_attr *attr)
@@ -2827,7 +2827,7 @@ mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_hit_aso_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_hit_aso_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flow_hit_aso_obj(void *ctx, uint32_t pd)
 {
@@ -2870,7 +2870,7 @@ mlx5_devx_cmd_create_flow_hit_aso_obj(void *ctx, uint32_t pd)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_alloc_pd)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_alloc_pd);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_alloc_pd(void *ctx)
 {
@@ -2911,7 +2911,7 @@ mlx5_devx_cmd_alloc_pd(void *ctx)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_meter_aso_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_flow_meter_aso_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_flow_meter_aso_obj(void *ctx, uint32_t pd,
 						uint32_t log_obj_size)
@@ -2965,7 +2965,7 @@ mlx5_devx_cmd_create_flow_meter_aso_obj(void *ctx, uint32_t pd,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_conn_track_offload_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_conn_track_offload_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_conn_track_offload_obj(void *ctx, uint32_t pd,
 					    uint32_t log_obj_size)
@@ -3012,7 +3012,7 @@ mlx5_devx_cmd_create_conn_track_offload_obj(void *ctx, uint32_t pd,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_geneve_tlv_option)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_geneve_tlv_option);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_geneve_tlv_option(void *ctx,
 				  struct mlx5_devx_geneve_tlv_option_attr *attr)
@@ -3075,7 +3075,7 @@ mlx5_devx_cmd_create_geneve_tlv_option(void *ctx,
  * @return
  *   0 on success, a negative errno otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_geneve_tlv_option)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_geneve_tlv_option);
 int
 mlx5_devx_cmd_query_geneve_tlv_option(void *ctx,
 				      struct mlx5_devx_obj *geneve_tlv_opt_obj,
@@ -3113,7 +3113,7 @@ mlx5_devx_cmd_query_geneve_tlv_option(void *ctx,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_wq_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_wq_query);
 int
 mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id)
 {
@@ -3154,7 +3154,7 @@ mlx5_devx_cmd_wq_query(void *wq, uint32_t *counter_set_id)
  *   Pointer to counter object on success, a NULL value otherwise and
  *   rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_alloc);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome)
 {
@@ -3196,7 +3196,7 @@ mlx5_devx_cmd_queue_counter_alloc(void *ctx, int *syndrome)
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_query)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_queue_counter_query);
 int
 mlx5_devx_cmd_queue_counter_query(struct mlx5_devx_obj *dcs, int clear,
 				  uint32_t *out_of_buffers)
@@ -3232,7 +3232,7 @@ mlx5_devx_cmd_queue_counter_query(struct mlx5_devx_obj *dcs, int clear,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_dek_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_dek_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_dek_obj(void *ctx, struct mlx5_devx_dek_attr *attr)
 {
@@ -3283,7 +3283,7 @@ mlx5_devx_cmd_create_dek_obj(void *ctx, struct mlx5_devx_dek_attr *attr)
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_import_kek_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_import_kek_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_import_kek_obj(void *ctx,
 				    struct mlx5_devx_import_kek_attr *attr)
@@ -3331,7 +3331,7 @@ mlx5_devx_cmd_create_import_kek_obj(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_credential_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_credential_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_credential_obj(void *ctx,
 				    struct mlx5_devx_credential_attr *attr)
@@ -3380,7 +3380,7 @@ mlx5_devx_cmd_create_credential_obj(void *ctx,
  * @return
  *   The DevX object created, NULL otherwise and rte_errno is set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_crypto_login_obj)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_create_crypto_login_obj);
 struct mlx5_devx_obj *
 mlx5_devx_cmd_create_crypto_login_obj(void *ctx,
 				      struct mlx5_devx_crypto_login_attr *attr)
@@ -3432,7 +3432,7 @@ mlx5_devx_cmd_create_crypto_login_obj(void *ctx,
  * @return
  *   0 on success, a negative value otherwise.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_lag)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_devx_cmd_query_lag);
 int
 mlx5_devx_cmd_query_lag(void *ctx,
 			struct mlx5_devx_lag_context *lag_ctx)
diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c
index 28fb19b285..a1077f59d4 100644
--- a/drivers/common/mlx5/mlx5_malloc.c
+++ b/drivers/common/mlx5/mlx5_malloc.c
@@ -169,7 +169,7 @@ mlx5_malloc_socket_internal(size_t size, unsigned int align, int socket, bool ze
 		      rte_malloc_socket(NULL, size, align, socket);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_malloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_malloc);
 void *
 mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
 {
@@ -220,7 +220,7 @@ mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
 	return addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_realloc)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_realloc);
 void *
 mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
 	     int socket)
@@ -268,7 +268,7 @@ mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
 	return new_addr;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_free)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_free);
 void
 mlx5_free(void *addr)
 {
@@ -289,7 +289,7 @@ mlx5_free(void *addr)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_memory_stat_dump)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_memory_stat_dump);
 void
 mlx5_memory_stat_dump(void)
 {
diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c
index 7fac361460..3212f13369 100644
--- a/drivers/common/mlx5/windows/mlx5_common_os.c
+++ b/drivers/common/mlx5/windows/mlx5_common_os.c
@@ -282,7 +282,7 @@ mlx5_os_open_device(struct mlx5_common_device *cdev, uint32_t classes)
  * @return
  *   Pointer to an `ibv_context` on success, or NULL on failure, with `rte_errno` set.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_get_physical_device_ctx);
 void *
 mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
 {
@@ -314,7 +314,7 @@ mlx5_os_get_physical_device_ctx(struct mlx5_common_device *cdev)
  * @return
  *   umem on successful registration, NULL and errno otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_reg);
 void *
 mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
 {
@@ -345,7 +345,7 @@ mlx5_os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
  * @return
  *   0 on successful release, negative number otherwise
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_dereg)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_umem_dereg);
 int
 mlx5_os_umem_dereg(void *pumem)
 {
@@ -446,7 +446,7 @@ mlx5_os_dereg_mr(struct mlx5_pmd_mr *pmd_mr)
  *   Pointer to dereg_mr func
  *
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_set_reg_mr_cb);
 void
 mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
 {
@@ -458,7 +458,7 @@ mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb, mlx5_dereg_mr_t *dereg_mr_cb)
  * In Windows, no need to wrap the MR, no known issue for it in kernel.
  * Use the regular function to create direct MR.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_create);
 int
 mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 			    size_t length, struct mlx5_pmd_wrapped_mr *wpmd_mr)
@@ -478,7 +478,7 @@ mlx5_os_wrapped_mkey_create(void *ctx, void *pd, uint32_t pdn, void *addr,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_os_wrapped_mkey_destroy);
 void
 mlx5_os_wrapped_mkey_destroy(struct mlx5_pmd_wrapped_mr *wpmd_mr)
 {
diff --git a/drivers/common/mlx5/windows/mlx5_glue.c b/drivers/common/mlx5/windows/mlx5_glue.c
index 066e2fdce3..9c24d1c941 100644
--- a/drivers/common/mlx5/windows/mlx5_glue.c
+++ b/drivers/common/mlx5/windows/mlx5_glue.c
@@ -410,7 +410,7 @@ mlx5_glue_devx_set_mtu(void *ctx, uint32_t mtu)
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue)
+RTE_EXPORT_INTERNAL_SYMBOL(mlx5_glue);
 alignas(RTE_CACHE_LINE_SIZE)
 const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue){
 	.version = MLX5_GLUE_VERSION,
diff --git a/drivers/common/mvep/mvep_common.c b/drivers/common/mvep/mvep_common.c
index 2035300cce..cede7b9004 100644
--- a/drivers/common/mvep/mvep_common.c
+++ b/drivers/common/mvep/mvep_common.c
@@ -19,7 +19,7 @@ struct mvep {
 
 static struct mvep mvep;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_init);
 int rte_mvep_init(enum mvep_module_type module __rte_unused,
 		  struct rte_kvargs *kvlist __rte_unused)
 {
@@ -36,7 +36,7 @@ int rte_mvep_init(enum mvep_module_type module __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mvep_deinit);
 int rte_mvep_deinit(enum mvep_module_type module __rte_unused)
 {
 	mvep.ref_count--;
diff --git a/drivers/common/nfp/nfp_common.c b/drivers/common/nfp/nfp_common.c
index 475f64daab..46254499b9 100644
--- a/drivers/common/nfp/nfp_common.c
+++ b/drivers/common/nfp/nfp_common.c
@@ -15,7 +15,7 @@
  */
 #define NFP_NET_POLL_TIMEOUT    5000
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig_real)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig_real);
 int
 nfp_reconfig_real(struct nfp_hw *hw,
 		uint32_t update)
@@ -80,7 +80,7 @@ nfp_reconfig_real(struct nfp_hw *hw,
  *   - (0) if OK to reconfigure the device.
  *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_reconfig);
 int
 nfp_reconfig(struct nfp_hw *hw,
 		uint32_t ctrl,
@@ -125,7 +125,7 @@ nfp_reconfig(struct nfp_hw *hw,
  *   - (0) if OK to reconfigure the device.
  *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_ext_reconfig)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_ext_reconfig);
 int
 nfp_ext_reconfig(struct nfp_hw *hw,
 		uint32_t ctrl_ext,
@@ -153,7 +153,7 @@ nfp_ext_reconfig(struct nfp_hw *hw,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_read_mac)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_read_mac);
 void
 nfp_read_mac(struct nfp_hw *hw)
 {
@@ -166,7 +166,7 @@ nfp_read_mac(struct nfp_hw *hw)
 	memcpy(&hw->mac_addr.addr_bytes[4], &tmp, 2);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_write_mac)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_write_mac);
 void
 nfp_write_mac(struct nfp_hw *hw,
 		uint8_t *mac)
@@ -183,7 +183,7 @@ nfp_write_mac(struct nfp_hw *hw,
 			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_enable_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_enable_queues);
 void
 nfp_enable_queues(struct nfp_hw *hw,
 		uint16_t nb_rx_queues,
@@ -207,7 +207,7 @@ nfp_enable_queues(struct nfp_hw *hw,
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, enabled_queues);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_disable_queues)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_disable_queues);
 void
 nfp_disable_queues(struct nfp_hw *hw)
 {
diff --git a/drivers/common/nfp/nfp_common_pci.c b/drivers/common/nfp/nfp_common_pci.c
index 4a2fb5e82d..12c17b09b2 100644
--- a/drivers/common/nfp/nfp_common_pci.c
+++ b/drivers/common/nfp/nfp_common_pci.c
@@ -258,7 +258,7 @@ nfp_common_init(void)
 	nfp_common_initialized = true;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_class_driver_register)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_class_driver_register);
 void
 nfp_class_driver_register(struct nfp_class_driver *driver)
 {
diff --git a/drivers/common/nfp/nfp_dev.c b/drivers/common/nfp/nfp_dev.c
index 486ed2cdfe..a8eb213e5a 100644
--- a/drivers/common/nfp/nfp_dev.c
+++ b/drivers/common/nfp/nfp_dev.c
@@ -50,7 +50,7 @@ const struct nfp_dev_info nfp_dev_info[NFP_DEV_CNT] = {
 	},
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(nfp_dev_info_get)
+RTE_EXPORT_INTERNAL_SYMBOL(nfp_dev_info_get);
 const struct nfp_dev_info *
 nfp_dev_info_get(uint16_t device_id)
 {
diff --git a/drivers/common/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c
index 74c7a859a4..f1b39deea7 100644
--- a/drivers/common/nitrox/nitrox_device.c
+++ b/drivers/common/nitrox/nitrox_device.c
@@ -65,7 +65,7 @@ ndev_release(struct nitrox_device *ndev)
 TAILQ_HEAD(ndrv_list, nitrox_driver);
 static struct ndrv_list ndrv_list = TAILQ_HEAD_INITIALIZER(ndrv_list);
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_register_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_register_driver);
 void
 nitrox_register_driver(struct nitrox_driver *ndrv)
 {
diff --git a/drivers/common/nitrox/nitrox_logs.c b/drivers/common/nitrox/nitrox_logs.c
index e4ebb39ff1..6187452cda 100644
--- a/drivers/common/nitrox/nitrox_logs.c
+++ b/drivers/common/nitrox/nitrox_logs.c
@@ -5,5 +5,5 @@
 #include <eal_export.h>
 #include <rte_log.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_logtype);
 RTE_LOG_REGISTER_DEFAULT(nitrox_logtype, NOTICE);
diff --git a/drivers/common/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c
index 8f481e6876..8084b1421f 100644
--- a/drivers/common/nitrox/nitrox_qp.c
+++ b/drivers/common/nitrox/nitrox_qp.c
@@ -104,7 +104,7 @@ nitrox_release_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr)
 	return rte_memzone_free(qp->cmdq.mz);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_setup);
 int
 nitrox_qp_setup(struct nitrox_qp *qp, uint8_t *bar_addr, const char *dev_name,
 		uint32_t nb_descriptors, uint8_t instr_size, int socket_id)
@@ -147,7 +147,7 @@ nitrox_release_ridq(struct nitrox_qp *qp)
 	rte_free(qp->ridq);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_release)
+RTE_EXPORT_INTERNAL_SYMBOL(nitrox_qp_release);
 int
 nitrox_qp_release(struct nitrox_qp *qp, uint8_t *bar_addr)
 {
diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 9e0bbf453f..d0018673f8 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -70,7 +70,7 @@ struct mbox_intf_ver {
 	uint32_t minor:10;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_logtype_mbox)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_logtype_mbox);
 RTE_LOG_REGISTER(octeontx_logtype_mbox, pmd.octeontx.mbox, NOTICE);
 
 static inline void
@@ -194,7 +194,7 @@ mbox_send(struct mbox *m, struct octeontx_mbox_hdr *hdr, const void *txmsg,
 	return res;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_ram_mbox_base)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_ram_mbox_base);
 int
 octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain)
 {
@@ -219,7 +219,7 @@ octeontx_mbox_set_ram_mbox_base(uint8_t *ram_mbox_base, uint16_t domain)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_reg)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_set_reg);
 int
 octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain)
 {
@@ -244,7 +244,7 @@ octeontx_mbox_set_reg(uint8_t *reg, uint16_t domain)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_send)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_send);
 int
 octeontx_mbox_send(struct octeontx_mbox_hdr *hdr, void *txdata,
 				 uint16_t txlen, void *rxdata, uint16_t rxlen)
@@ -309,7 +309,7 @@ octeontx_check_mbox_version(struct mbox_intf_ver *app_intf_ver,
 	return result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_init)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_mbox_init);
 int
 octeontx_mbox_init(void)
 {
@@ -349,7 +349,7 @@ octeontx_mbox_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(octeontx_get_global_domain)
+RTE_EXPORT_INTERNAL_SYMBOL(octeontx_get_global_domain);
 uint16_t
 octeontx_get_global_domain(void)
 {
diff --git a/drivers/common/sfc_efx/sfc_base_symbols.c b/drivers/common/sfc_efx/sfc_base_symbols.c
index bbb6f39924..1f62696c3b 100644
--- a/drivers/common/sfc_efx/sfc_base_symbols.c
+++ b/drivers/common/sfc_efx/sfc_base_symbols.c
@@ -5,274 +5,274 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(efx_crc32_calculate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_irq)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qprime)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpending)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_check_init_done)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpoll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_usecs_to_ticks)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qmoderate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_mac_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_vlan_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_stats)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_restore)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_supported_filters)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_rx)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_tx)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_local)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_full)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_eth_local)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ether_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_uc_def)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_mc_def)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_encap_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_vxlan)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_geneve)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_nvgre)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_rss_context)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_dwords)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable_unlocked)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_trigger)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_line)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_message)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fatal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_from_sdu)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_get_all_ucast_mcast)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_drain)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_up)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_multicast_list_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_include_fcs_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stat_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_get_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_upload)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_periodic)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_update)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_get_limits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_invalid)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_phy_port)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_function)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_mh_function)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_id_by_selector)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_recirc_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_ct_mark_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_bit_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_mport_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_clone)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_equal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_is_valid)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decap)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_pop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_dst_mac)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_src_mac)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decr_ip_ttl)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_nat)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_push)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_encap)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_count)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_flag)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_deliver)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_drop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_specs_equal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_class_cmp)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_recirc_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_do_ct_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_outer_rule_id_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_dst_mac_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_src_mac_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_update)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_eh_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_get_nb_count)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_counter_id)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_clear_fw_rsrc_ids)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free_type)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_start)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_stop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_give_credits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_mport_alloc_alias)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_read_mport_journal)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_replay)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_new_epoch)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_start)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_poll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_abort)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_client_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_own_client_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_timeout)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_proxy_handle)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_reboot)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_family)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_family_probe_bar)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_probe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_limits)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_version)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_bar_region)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_vi_pool)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_unprobe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_destroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_reset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_cfg_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_version)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_board_info)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_hw_unavailable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_hw_unavailable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_mask)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_calculate_pcie_link_bandwidth)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_subvariant)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_fw_subvariant)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_check_pcie_link_speed)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_config_add)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_reconfigure)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_map)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_verify)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lane_count_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lp_cap_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_oui_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_media_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_module_get_info)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_fec_type_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_link_state_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_poll)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_loopback_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_type_name)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_vlan_strip_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_port_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_hash_flags_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_hash_default_support_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_default_support_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc_v2)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_free)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_mode_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_key_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_tbl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qflush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qenable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate_es_super_buffer)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_pkt_length_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_hash_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_get_layout)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_layout_check)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_set)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_list)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_supported_num_get)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_is_supported)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_describe)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_insert)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_delete)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_add)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_remove)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_clear)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_reconfigure)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_size)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_nbufs)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpost)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpace)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qflush)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qenable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_enable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_disable)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_write)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_post)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_post)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_dma_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso2_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_vlantci_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_checksum_create)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_init)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_fini)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qcreate)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstart)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstop)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qdestroy)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_doorbell_offset)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_features)
-RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_verify_features)
+RTE_EXPORT_INTERNAL_SYMBOL(efx_crc32_calculate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_irq);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qprime);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpending);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qcreate_check_init_done);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpoll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_usecs_to_ticks);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_ev_qmoderate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_mac_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_vlan_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vswitch_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_evb_vport_stats);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_restore);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_supported_filters);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_rx);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_init_tx);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_local);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ipv4_full);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_eth_local);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_ether_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_uc_def);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_mc_def);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_encap_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_vxlan);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_geneve);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_nvgre);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_filter_spec_set_rss_context);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_dwords);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_hash_bytes);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_disable_unlocked);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_trigger);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_line);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_status_message);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_intr_fatal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_from_sdu);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_pdu_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_get_all_ucast_mcast);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_drain);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_up);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_fcntl_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_multicast_list_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_filter_default_rxq_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_include_fcs_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stat_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_get_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_upload);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_periodic);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mac_stats_update);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_get_limits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_invalid);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_phy_port);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_function);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_pcie_mh_function);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_id_by_selector);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_recirc_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_ct_mark_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_by_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_field_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_bit_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_mport_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_clone);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_equal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_is_valid);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_spec_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decap);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_pop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_dst_mac);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_set_src_mac);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_decr_ip_ttl);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_nat);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_vlan_push);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_encap);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_count);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_flag);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_mark_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_deliver);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_populate_drop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_specs_equal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_specs_class_cmp);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_recirc_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_do_ct_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_outer_rule_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_match_spec_outer_rule_id_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mac_addr_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_dst_mac_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_src_mac_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_update);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_encap_header_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_eh_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_get_nb_count);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_fill_in_counter_id);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_clear_fw_rsrc_ids);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free_type);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_start);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_stop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_counters_stream_give_credits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_rule_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_mport_alloc_alias);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_mport_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_read_mport_journal);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_replay);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mae_action_set_list_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_new_epoch);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_start);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_poll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_request_abort);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_client_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_own_client_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_client_mac_addr_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_timeout);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_get_proxy_handle);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mcdi_reboot);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_mon_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_family);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_family_probe_bar);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_probe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_limits);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_drv_version);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_bar_region);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_vi_pool);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_unprobe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_destroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_reset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_cfg_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_version);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_board_info);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_hw_unavailable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_hw_unavailable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_mask);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_calculate_pcie_link_bandwidth);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_get_fw_subvariant);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_set_fw_subvariant);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_check_pcie_link_speed);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_config_add);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_reconfigure);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_nic_dma_map);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_verify);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_adv_cap_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lane_count_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_lp_cap_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_oui_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_media_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_module_get_info);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_fec_type_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_phy_link_state_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_poll);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_loopback_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_loopback_type_name);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_vlan_strip_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_port_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_hash_flags_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_hash_default_support_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_default_support_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_alloc_v2);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_context_free);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_mode_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_key_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_scale_tbl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qpush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qflush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rxq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qenable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qcreate_es_super_buffer);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_pkt_length_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_pseudo_hdr_hash_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_get_layout);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_rx_prefix_layout_check);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_set);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_sram_buf_tbl_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_list);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_supported_num_get);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_is_supported);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_describe);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_insert);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_table_entry_delete);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_add);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_udp_remove);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_config_clear);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tunnel_reconfigure);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_size);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_txq_nbufs);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpost);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpace);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qflush);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qenable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_enable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_disable);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_write);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qpio_post);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_post);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_dma_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_tso2_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_vlantci_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_tx_qdesc_checksum_create);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_init);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_fini);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qcreate);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstart);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qstop);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_qdestroy);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_doorbell_offset);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_get_features);
+RTE_EXPORT_INTERNAL_SYMBOL(efx_virtio_verify_features);
diff --git a/drivers/common/sfc_efx/sfc_efx.c b/drivers/common/sfc_efx/sfc_efx.c
index 60f20ef262..0cde581485 100644
--- a/drivers/common/sfc_efx/sfc_efx.c
+++ b/drivers/common/sfc_efx/sfc_efx.c
@@ -36,7 +36,7 @@ sfc_efx_kvarg_dev_class_handler(__rte_unused const char *key,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_dev_class_get)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_dev_class_get);
 enum sfc_efx_dev_class
 sfc_efx_dev_class_get(struct rte_devargs *devargs)
 {
@@ -95,7 +95,7 @@ sfc_efx_pci_config_readd(efsys_pci_config_t *configp, uint32_t offset,
 	return (rc < 0 || rc != sizeof(*edp)) ? EIO : 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_family)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_family);
 int
 sfc_efx_family(struct rte_pci_device *pci_dev,
 	       efx_bar_region_t *mem_ebrp, efx_family_t *family)
diff --git a/drivers/common/sfc_efx/sfc_efx_mcdi.c b/drivers/common/sfc_efx/sfc_efx_mcdi.c
index 1fe3515d2d..647108cb45 100644
--- a/drivers/common/sfc_efx/sfc_efx_mcdi.c
+++ b/drivers/common/sfc_efx/sfc_efx_mcdi.c
@@ -265,7 +265,7 @@ sfc_efx_mcdi_ev_proxy_response(void *arg, uint32_t handle, efx_rc_t result)
 	mcdi->proxy_result = result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_init)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_init);
 int
 sfc_efx_mcdi_init(struct sfc_efx_mcdi *mcdi,
 		  uint32_t logtype, const char *log_prefix, efx_nic_t *nic,
@@ -322,7 +322,7 @@ sfc_efx_mcdi_init(struct sfc_efx_mcdi *mcdi,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_fini)
+RTE_EXPORT_INTERNAL_SYMBOL(sfc_efx_mcdi_fini);
 void
 sfc_efx_mcdi_fini(struct sfc_efx_mcdi *mcdi)
 {
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 31ec88c7d6..c0b312ed75 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -875,14 +875,14 @@ cn10k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_ev
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver1_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver1_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn10k_cpt_sg_ver1_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
 	return cn10k_cpt_crypto_adapter_enqueue(ws, ev, nb_events, false);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver2_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_sg_ver2_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn10k_cpt_sg_ver2_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -1216,7 +1216,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
@@ -1241,7 +1241,7 @@ cn10k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	return (uintptr_t)cop;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_vector_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cpt_crypto_adapter_vector_dequeue);
 uintptr_t
 cn10k_cpt_crypto_adapter_vector_dequeue(uintptr_t get_work1)
 {
@@ -1345,7 +1345,7 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 }
 
 #if defined(RTE_ARCH_ARM64)
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1489,7 +1489,7 @@ cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 	return count + i;
 }
 #else
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn10k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1969,7 +1969,7 @@ cn10k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_rx_inject_configure)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_cryptodev_sec_rx_inject_configure);
 int
 cn10k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
 {
diff --git a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
index 6ef7c5bb22..40ff647b29 100644
--- a/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn20k_cryptodev_ops.c
@@ -776,7 +776,7 @@ ca_lmtst_burst_submit(struct ops_burst *burst)
 	return i;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_enqueue);
 uint16_t __rte_hot
 cn20k_cpt_crypto_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -1167,7 +1167,7 @@ cn20k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn20k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
@@ -1192,7 +1192,7 @@ cn20k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 	return (uintptr_t)cop;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_vector_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cpt_crypto_adapter_vector_dequeue);
 uintptr_t
 cn20k_cpt_crypto_adapter_vector_dequeue(uintptr_t get_work1)
 {
@@ -1707,7 +1707,7 @@ cn20k_sym_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 }
 
 #if defined(RTE_ARCH_ARM64)
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1851,7 +1851,7 @@ cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 	return count + i;
 }
 #else
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_inb_rx_inject);
 uint16_t __rte_hot
 cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 				  struct rte_security_session **sess, uint16_t nb_pkts)
@@ -1864,7 +1864,7 @@ cn20k_cryptodev_sec_inb_rx_inject(void *dev, struct rte_mbuf **pkts,
 }
 #endif
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_rx_inject_configure)
+RTE_EXPORT_INTERNAL_SYMBOL(cn20k_cryptodev_sec_rx_inject_configure);
 int
 cn20k_cryptodev_sec_rx_inject_configure(void *device, uint16_t port_id, bool enable)
 {
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index c94e9e0f92..82e6121954 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -407,7 +407,7 @@ cn9k_ca_meta_info_extract(struct rte_crypto_op *op,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_enqueue);
 uint16_t
 cn9k_cpt_crypto_adapter_enqueue(uintptr_t base, struct rte_crypto_op *op)
 {
@@ -665,7 +665,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_cpt_crypto_adapter_dequeue);
 uintptr_t
 cn9k_cpt_crypto_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 261e14b418..9894cb51ce 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -979,7 +979,7 @@ cnxk_cpt_queue_pair_event_error_query(struct rte_cryptodev *dev, uint16_t qp_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qptr_get, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qptr_get, 24.03);
 struct rte_pmd_cnxk_crypto_qptr *
 rte_pmd_cnxk_crypto_qptr_get(uint8_t dev_id, uint16_t qp_id)
 {
@@ -1042,7 +1042,7 @@ cnxk_crypto_cn9k_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, uint1
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_submit, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_submit, 24.03);
 void
 rte_pmd_cnxk_crypto_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, uint16_t nb_inst)
 {
@@ -1054,7 +1054,7 @@ rte_pmd_cnxk_crypto_submit(struct rte_pmd_cnxk_crypto_qptr *qptr, void *inst, ui
 	plt_err("Invalid cnxk model");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_flush, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_flush, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_flush(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			       struct rte_pmd_cnxk_crypto_cptr *cptr, bool invalidate)
@@ -1079,7 +1079,7 @@ rte_pmd_cnxk_crypto_cptr_flush(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return roc_cpt_lf_ctx_flush(&qp->lf, cptr, invalidate);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_get, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_get, 24.07);
 struct rte_pmd_cnxk_crypto_cptr *
 rte_pmd_cnxk_crypto_cptr_get(struct rte_pmd_cnxk_crypto_sess *rte_sess)
 {
@@ -1133,7 +1133,7 @@ rte_pmd_cnxk_crypto_cptr_get(struct rte_pmd_cnxk_crypto_sess *rte_sess)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_read, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_read, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_read(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			      struct rte_pmd_cnxk_crypto_cptr *cptr, void *data, uint32_t len)
@@ -1167,7 +1167,7 @@ rte_pmd_cnxk_crypto_cptr_read(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_write, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_cptr_write, 24.07);
 int
 rte_pmd_cnxk_crypto_cptr_write(struct rte_pmd_cnxk_crypto_qptr *qptr,
 			       struct rte_pmd_cnxk_crypto_cptr *cptr, void *data, uint32_t len)
@@ -1205,7 +1205,7 @@ rte_pmd_cnxk_crypto_cptr_write(struct rte_pmd_cnxk_crypto_qptr *qptr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qp_stats_get, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_crypto_qp_stats_get, 24.07);
 int
 rte_pmd_cnxk_crypto_qp_stats_get(struct rte_pmd_cnxk_crypto_qptr *qptr,
 				 struct rte_pmd_cnxk_crypto_qp_stats *stats)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index ca10d88da7..12ff985e09 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -4161,7 +4161,7 @@ dpaa2_sec_process_ordered_event(struct qbman_swp *swp,
 	ev->event_ptr = crypto_op;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_attach);
 int
 dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
@@ -4242,7 +4242,7 @@ dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_sec_eventq_detach);
 int
 dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 			int qp_id)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 65bbd38b17..921652900a 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -3511,7 +3511,7 @@ dpaa_sec_process_atomic_event(void *event,
 	return qman_cb_dqrr_defer;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_attach);
 int
 dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
@@ -3556,7 +3556,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_sec_eventq_detach);
 int
 dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
 			int qp_id)
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 88657f49cc..9a11f5e985 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -657,7 +657,7 @@ submit_request_to_sso(struct ssows *ws, uintptr_t req,
 	ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_enqueue);
 uint16_t __rte_hot
 otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op)
 {
@@ -948,7 +948,7 @@ otx_cpt_dequeue_sym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 	return otx_cpt_pkt_dequeue(qptr, ops, nb_ops, OP_TYPE_SYM);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(otx_crypto_adapter_dequeue);
 uintptr_t __rte_hot
 otx_crypto_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 1ca8443431..770ef03650 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -358,7 +358,7 @@ update_max_nb_qp(struct scheduler_ctx *sched_ctx)
 }
 
 /** Attach a device to the scheduler. */
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_attach)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_attach);
 int
 rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t worker_id)
 {
@@ -421,7 +421,7 @@ rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t worker_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_detach)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_worker_detach);
 int
 rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t worker_id)
 {
@@ -480,7 +480,7 @@ rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t worker_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_set);
 int
 rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
 		enum rte_cryptodev_scheduler_mode mode)
@@ -545,7 +545,7 @@ rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_mode_get);
 enum rte_cryptodev_scheduler_mode
 rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
 {
@@ -567,7 +567,7 @@ rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
 	return sched_ctx->mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_set);
 int
 rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
 		uint32_t enable_reorder)
@@ -597,7 +597,7 @@ rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_ordering_get);
 int
 rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
 {
@@ -619,7 +619,7 @@ rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
 	return (int)sched_ctx->reordering_enabled;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_load_user_scheduler)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_load_user_scheduler);
 int
 rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
 		struct rte_cryptodev_scheduler *scheduler) {
@@ -692,7 +692,7 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_workers_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_workers_get);
 int
 rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers)
 {
@@ -724,7 +724,7 @@ rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers)
 	return (int)nb_workers;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_set);
 int
 rte_cryptodev_scheduler_option_set(uint8_t scheduler_id,
 		enum rte_cryptodev_schedule_option_type option_type,
@@ -757,7 +757,7 @@ rte_cryptodev_scheduler_option_set(uint8_t scheduler_id,
 	return sched_ctx->ops.option_set(dev, option_type, option);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_scheduler_option_get);
 int
 rte_cryptodev_scheduler_option_get(uint8_t scheduler_id,
 		enum rte_cryptodev_schedule_option_type option_type,
diff --git a/drivers/dma/cnxk/cnxk_dmadev_fp.c b/drivers/dma/cnxk/cnxk_dmadev_fp.c
index dea73c5b41..887fc70628 100644
--- a/drivers/dma/cnxk/cnxk_dmadev_fp.c
+++ b/drivers/dma/cnxk/cnxk_dmadev_fp.c
@@ -446,7 +446,7 @@ cnxk_dma_adapter_format_event(uint64_t event)
 	return w0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn10k_dma_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn10k_dma_adapter_enqueue);
 uint16_t
 cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -506,7 +506,7 @@ cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_dual_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_dual_enqueue);
 uint16_t
 cn9k_dma_adapter_dual_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -577,7 +577,7 @@ cn9k_dma_adapter_dual_enqueue(void *ws, struct rte_event ev[], uint16_t nb_event
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_enqueue)
+RTE_EXPORT_INTERNAL_SYMBOL(cn9k_dma_adapter_enqueue);
 uint16_t
 cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 {
@@ -645,7 +645,7 @@ cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
 	return count;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_dma_adapter_dequeue)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_dma_adapter_dequeue);
 uintptr_t
 cnxk_dma_adapter_dequeue(uintptr_t get_work1)
 {
diff --git a/drivers/event/cnxk/cnxk_worker.c b/drivers/event/cnxk/cnxk_worker.c
index 5e5beb6aac..008f4277c1 100644
--- a/drivers/event/cnxk/cnxk_worker.c
+++ b/drivers/event/cnxk/cnxk_worker.c
@@ -13,7 +13,7 @@ struct pwords {
 	uint64_t u[5];
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_wait_head, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_wait_head, 23.11);
 void
 rte_pmd_cnxk_eventdev_wait_head(uint8_t dev, uint8_t port)
 {
@@ -30,7 +30,7 @@ rte_pmd_cnxk_eventdev_wait_head(uint8_t dev, uint8_t port)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_is_head, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_eventdev_is_head, 23.11);
 uint8_t
 rte_pmd_cnxk_eventdev_is_head(uint8_t dev, uint8_t port)
 {
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.c b/drivers/event/dlb2/rte_pmd_dlb2.c
index 80186dd07d..e77a30ff7d 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.c
+++ b/drivers/event/dlb2/rte_pmd_dlb2.c
@@ -10,7 +10,7 @@
 #include "dlb2_priv.h"
 #include "dlb2_inline_fns.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_token_pop_mode, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_token_pop_mode, 20.11);
 int
 rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
 				uint8_t port_id,
@@ -40,7 +40,7 @@ rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_port_param, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dlb2_set_port_param, 25.07);
 int
 rte_pmd_dlb2_set_port_param(uint8_t dev_id,
 			    uint8_t port_id,
diff --git a/drivers/mempool/cnxk/cn10k_hwpool_ops.c b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
index e83e872f40..855c60944e 100644
--- a/drivers/mempool/cnxk/cn10k_hwpool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_hwpool_ops.c
@@ -201,7 +201,7 @@ cn10k_hwpool_populate(struct rte_mempool *hp, unsigned int max_objs,
 	return hp->size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_mbuf_exchange, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_mbuf_exchange, 23.07);
 int
 rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
 {
@@ -229,14 +229,14 @@ rte_pmd_cnxk_mempool_mbuf_exchange(struct rte_mbuf *m1, struct rte_mbuf *m2)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_is_hwpool, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_is_hwpool, 23.07);
 int
 rte_pmd_cnxk_mempool_is_hwpool(struct rte_mempool *mp)
 {
 	return !!(CNXK_MEMPOOL_FLAGS(mp) & CNXK_MEMPOOL_F_IS_HWPOOL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_range_check_disable, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_mempool_range_check_disable, 23.07);
 int
 rte_pmd_cnxk_mempool_range_check_disable(struct rte_mempool *mp)
 {
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 7dacaa9513..3b80d2b2a7 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -33,11 +33,11 @@
  * is to optimize the PA_to_VA searches until a better mechanism (algo) is
  * available.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_memsegs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_memsegs);
 struct dpaa_memseg_list rte_dpaa_memsegs
 	= TAILQ_HEAD_INITIALIZER(rte_dpaa_memsegs);
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_bpid_info)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa_bpid_info);
 struct dpaa_bp_info *rte_dpaa_bpid_info;
 
 RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 118eb76db7..4fea1bfd37 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -34,13 +34,13 @@
 
 #include <dpaax_iova_table.h>
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info);
 struct dpaa2_bp_info *rte_dpaa2_bpid_info;
 static struct dpaa2_bp_list *h_bp_list;
 
 static int16_t s_dpaa2_pool_ops_idx = RTE_MEMPOOL_MAX_OPS_IDX;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mpool_get_ops_idx)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mpool_get_ops_idx);
 int rte_dpaa2_mpool_get_ops_idx(void)
 {
 	return s_dpaa2_pool_ops_idx;
@@ -298,7 +298,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_bpid_info_init);
 int rte_dpaa2_bpid_info_init(struct rte_mempool *mp)
 {
 	struct dpaa2_bp_info *bp_info = mempool_to_bpinfo(mp);
@@ -322,7 +322,7 @@ int rte_dpaa2_bpid_info_init(struct rte_mempool *mp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_pool_bpid)
+RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_pool_bpid);
 uint16_t
 rte_dpaa2_mbuf_pool_bpid(struct rte_mempool *mp)
 {
@@ -337,7 +337,7 @@ rte_dpaa2_mbuf_pool_bpid(struct rte_mempool *mp)
 	return bp_info->bpid;
 }
 
-RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_from_buf_addr)
+RTE_EXPORT_SYMBOL(rte_dpaa2_mbuf_from_buf_addr);
 struct rte_mbuf *
 rte_dpaa2_mbuf_from_buf_addr(struct rte_mempool *mp, void *buf_addr)
 {
@@ -353,7 +353,7 @@ rte_dpaa2_mbuf_from_buf_addr(struct rte_mempool *mp, void *buf_addr)
 			bp_info->meta_data_size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mbuf_alloc_bulk)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dpaa2_mbuf_alloc_bulk);
 int
 rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 			  void **obj_table, unsigned int count)
diff --git a/drivers/net/atlantic/rte_pmd_atlantic.c b/drivers/net/atlantic/rte_pmd_atlantic.c
index b5b6ab7d4b..c306bf02d2 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic.c
+++ b/drivers/net/atlantic/rte_pmd_atlantic.c
@@ -9,7 +9,7 @@
 #include "atl_ethdev.h"
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_enable, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_enable, 19.05);
 int
 rte_pmd_atl_macsec_enable(uint16_t port,
 			  uint8_t encr, uint8_t repl_prot)
@@ -26,7 +26,7 @@ rte_pmd_atl_macsec_enable(uint16_t port,
 	return atl_macsec_enable(dev, encr, repl_prot);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_disable, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_disable, 19.05);
 int
 rte_pmd_atl_macsec_disable(uint16_t port)
 {
@@ -42,7 +42,7 @@ rte_pmd_atl_macsec_disable(uint16_t port)
 	return atl_macsec_disable(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_txsc, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_txsc, 19.05);
 int
 rte_pmd_atl_macsec_config_txsc(uint16_t port, uint8_t *mac)
 {
@@ -58,7 +58,7 @@ rte_pmd_atl_macsec_config_txsc(uint16_t port, uint8_t *mac)
 	return atl_macsec_config_txsc(dev, mac);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_rxsc, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_config_rxsc, 19.05);
 int
 rte_pmd_atl_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 {
@@ -74,7 +74,7 @@ rte_pmd_atl_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 	return atl_macsec_config_rxsc(dev, mac, pi);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_txsa, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_txsa, 19.05);
 int
 rte_pmd_atl_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -91,7 +91,7 @@ rte_pmd_atl_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 	return atl_macsec_select_txsa(dev, idx, an, pn, key);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_rxsa, 19.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_atl_macsec_select_rxsa, 19.05);
 int
 rte_pmd_atl_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index 4974e390e7..8691c8769d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -40,7 +40,7 @@ int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg)
 		true : false;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_tx_loopback);
 int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on)
 {
 	struct rte_eth_dev *eth_dev;
@@ -82,7 +82,7 @@ rte_pmd_bnxt_set_all_queues_drop_en_cb(struct bnxt_vnic_info *vnic, void *onptr)
 	vnic->bd_stall = !(*on);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_all_queues_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_all_queues_drop_en);
 int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 {
 	struct rte_eth_dev *eth_dev;
@@ -134,7 +134,7 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_addr);
 int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 				struct rte_ether_addr *mac_addr)
 {
@@ -175,7 +175,7 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rate_limit)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rate_limit);
 int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
 				uint32_t tx_rate, uint64_t q_msk)
 {
@@ -233,7 +233,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_mac_anti_spoof);
 int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
@@ -294,7 +294,7 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_anti_spoof);
 int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
@@ -354,7 +354,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq_cb(struct bnxt_vnic_info *vnic, void *onptr)
 	vnic->vlan_strip = *on;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_stripq);
 int
 rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -398,7 +398,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rxmode)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_rxmode);
 int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 				uint16_t rx_mask, uint8_t on)
 {
@@ -497,7 +497,7 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_filter);
 int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 				    uint64_t vf_mask, uint8_t vlan_on)
 {
@@ -593,7 +593,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_stats);
 int rte_pmd_bnxt_get_vf_stats(uint16_t port,
 			      uint16_t vf_id,
 			      struct rte_eth_stats *stats)
@@ -631,7 +631,7 @@ int rte_pmd_bnxt_get_vf_stats(uint16_t port,
 				     NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_reset_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_reset_vf_stats);
 int rte_pmd_bnxt_reset_vf_stats(uint16_t port,
 				uint16_t vf_id)
 {
@@ -667,7 +667,7 @@ int rte_pmd_bnxt_reset_vf_stats(uint16_t port,
 	return bnxt_hwrm_func_clr_stats(bp, bp->pf->first_vf_id + vf_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_rx_status)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_rx_status);
 int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id)
 {
 	struct rte_eth_dev *dev;
@@ -702,7 +702,7 @@ int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id)
 	return bnxt_vf_vnic_count(bp, vf_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_tx_drop_count)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_get_vf_tx_drop_count);
 int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
 				      uint64_t *count)
 {
@@ -739,7 +739,7 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
 					     count);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_mac_addr_add)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_mac_addr_add);
 int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct rte_ether_addr *addr,
 				uint32_t vf_id)
 {
@@ -823,7 +823,7 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct rte_ether_addr *addr,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_vlan_insert);
 int
 rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
 		uint16_t vlan_id)
@@ -869,7 +869,7 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_persist_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_bnxt_set_vf_persist_stats);
 int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on)
 {
 	struct rte_eth_dev_info dev_info;
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 1677615435..6454805f6e 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1404,7 +1404,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 	rte_pktmbuf_free(pkt);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_conf_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_conf_get);
 int
 rte_eth_bond_8023ad_conf_get(uint16_t port_id,
 		struct rte_eth_bond_8023ad_conf *conf)
@@ -1422,7 +1422,7 @@ rte_eth_bond_8023ad_conf_get(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_set);
 int
 rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id,
 		enum rte_bond_8023ad_agg_selection agg_selection)
@@ -1447,7 +1447,7 @@ rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_agg_selection_get);
 int rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id)
 {
 	struct rte_eth_dev *bond_dev;
@@ -1495,7 +1495,7 @@ bond_8023ad_setup_validate(uint16_t port_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_setup)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_setup);
 int
 rte_eth_bond_8023ad_setup(uint16_t port_id,
 		struct rte_eth_bond_8023ad_conf *conf)
@@ -1517,7 +1517,7 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_member_info)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_member_info);
 int
 rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
 		struct rte_eth_bond_8023ad_member_info *info)
@@ -1579,7 +1579,7 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect);
 int
 rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled)
@@ -1601,7 +1601,7 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib);
 int
 rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled)
@@ -1623,7 +1623,7 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_distrib_get);
 int
 rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 {
@@ -1638,7 +1638,7 @@ rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_collect_get);
 int
 rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 {
@@ -1653,7 +1653,7 @@ rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 	return ACTOR_STATE(port, COLLECTING);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_slowtx)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_ext_slowtx);
 int
 rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt)
@@ -1715,7 +1715,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			bond_mode_8023ad_ext_periodic_cb, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_enable)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_enable);
 int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port)
 {
@@ -1742,7 +1742,7 @@ rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port)
 	return retval;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_disable)
+RTE_EXPORT_SYMBOL(rte_eth_bond_8023ad_dedicated_queues_disable);
 int
 rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port)
 {
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 9e5df67c18..25ceb82ce7 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -150,7 +150,7 @@ deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_create)
+RTE_EXPORT_SYMBOL(rte_eth_bond_create);
 int
 rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id)
 {
@@ -189,7 +189,7 @@ rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id)
 	return bond_dev->data->port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_free)
+RTE_EXPORT_SYMBOL(rte_eth_bond_free);
 int
 rte_eth_bond_free(const char *name)
 {
@@ -634,7 +634,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
 
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_member_add)
+RTE_EXPORT_SYMBOL(rte_eth_bond_member_add);
 int
 rte_eth_bond_member_add(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -773,7 +773,7 @@ __eth_bond_member_remove_lock_free(uint16_t bonding_port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_member_remove)
+RTE_EXPORT_SYMBOL(rte_eth_bond_member_remove);
 int
 rte_eth_bond_member_remove(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -796,7 +796,7 @@ rte_eth_bond_member_remove(uint16_t bonding_port_id, uint16_t member_port_id)
 	return retval;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mode_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mode_set);
 int
 rte_eth_bond_mode_set(uint16_t bonding_port_id, uint8_t mode)
 {
@@ -814,7 +814,7 @@ rte_eth_bond_mode_set(uint16_t bonding_port_id, uint8_t mode)
 	return bond_ethdev_mode_set(bonding_eth_dev, mode);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mode_get);
 int
 rte_eth_bond_mode_get(uint16_t bonding_port_id)
 {
@@ -828,7 +828,7 @@ rte_eth_bond_mode_get(uint16_t bonding_port_id)
 	return internals->mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_primary_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_primary_set);
 int
 rte_eth_bond_primary_set(uint16_t bonding_port_id, uint16_t member_port_id)
 {
@@ -850,7 +850,7 @@ rte_eth_bond_primary_set(uint16_t bonding_port_id, uint16_t member_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_primary_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_primary_get);
 int
 rte_eth_bond_primary_get(uint16_t bonding_port_id)
 {
@@ -867,7 +867,7 @@ rte_eth_bond_primary_get(uint16_t bonding_port_id)
 	return internals->current_primary_port;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_members_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_members_get);
 int
 rte_eth_bond_members_get(uint16_t bonding_port_id, uint16_t members[],
 			uint16_t len)
@@ -892,7 +892,7 @@ rte_eth_bond_members_get(uint16_t bonding_port_id, uint16_t members[],
 	return internals->member_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_active_members_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_active_members_get);
 int
 rte_eth_bond_active_members_get(uint16_t bonding_port_id, uint16_t members[],
 		uint16_t len)
@@ -916,7 +916,7 @@ rte_eth_bond_active_members_get(uint16_t bonding_port_id, uint16_t members[],
 	return internals->active_member_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_set);
 int
 rte_eth_bond_mac_address_set(uint16_t bonding_port_id,
 		struct rte_ether_addr *mac_addr)
@@ -943,7 +943,7 @@ rte_eth_bond_mac_address_set(uint16_t bonding_port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_reset)
+RTE_EXPORT_SYMBOL(rte_eth_bond_mac_address_reset);
 int
 rte_eth_bond_mac_address_reset(uint16_t bonding_port_id)
 {
@@ -985,7 +985,7 @@ rte_eth_bond_mac_address_reset(uint16_t bonding_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_set);
 int
 rte_eth_bond_xmit_policy_set(uint16_t bonding_port_id, uint8_t policy)
 {
@@ -1016,7 +1016,7 @@ rte_eth_bond_xmit_policy_set(uint16_t bonding_port_id, uint8_t policy)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_get)
+RTE_EXPORT_SYMBOL(rte_eth_bond_xmit_policy_get);
 int
 rte_eth_bond_xmit_policy_get(uint16_t bonding_port_id)
 {
@@ -1030,7 +1030,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonding_port_id)
 	return internals->balance_xmit_policy;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_bond_link_monitoring_set)
+RTE_EXPORT_SYMBOL(rte_eth_bond_link_monitoring_set);
 int
 rte_eth_bond_link_monitoring_set(uint16_t bonding_port_id, uint32_t internal_ms)
 {
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 6c723c9cec..c87a020adb 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -13,7 +13,7 @@ cnxk_ethdev_rx_offload_cb_t cnxk_ethdev_rx_offload_cb;
 
 #define NIX_TM_DFLT_RR_WT 71
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_model_str_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_model_str_get, 23.11);
 const char *
 rte_pmd_cnxk_model_str_get(void)
 {
@@ -89,14 +89,14 @@ nix_inl_cq_sz_clamp_up(struct roc_nix *nix, struct rte_mempool *mp,
 	return nb_desc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ethdev_rx_offload_cb_register)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_ethdev_rx_offload_cb_register);
 void
 cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb)
 {
 	cnxk_ethdev_rx_offload_cb = cb;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cnxk_nix_inb_mode_set)
+RTE_EXPORT_INTERNAL_SYMBOL(cnxk_nix_inb_mode_set);
 int
 cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
 {
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index ac6ee79f78..8af31c74f2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -306,21 +306,21 @@ cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_submit, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_submit, 23.11);
 uint16_t
 rte_pmd_cnxk_inl_dev_submit(struct rte_pmd_cnxk_inl_dev_q *qptr, void *inst, uint16_t nb_inst)
 {
 	return cnxk_pmd_ops.inl_dev_submit((struct roc_nix_inl_dev_q *)qptr, inst, nb_inst);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_qptr_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_dev_qptr_get, 23.11);
 struct rte_pmd_cnxk_inl_dev_q *
 rte_pmd_cnxk_inl_dev_qptr_get(void)
 {
 	return roc_nix_inl_dev_qptr_get(0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_cpt_q_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_cpt_q_stats_get, 23.11);
 int
 rte_pmd_cnxk_cpt_q_stats_get(uint16_t portid, enum rte_pmd_cnxk_cpt_q_stats_type type,
 			     struct rte_pmd_cnxk_cpt_q_stats *stats, uint16_t idx)
@@ -332,7 +332,7 @@ rte_pmd_cnxk_cpt_q_stats_get(uint16_t portid, enum rte_pmd_cnxk_cpt_q_stats_type
 					    (struct roc_nix_cpt_lf_stats *)stats, idx);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_session_base_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_session_base_get, 23.11);
 union rte_pmd_cnxk_ipsec_hw_sa *
 rte_pmd_cnxk_hw_session_base_get(uint16_t portid, bool inb)
 {
@@ -348,7 +348,7 @@ rte_pmd_cnxk_hw_session_base_get(uint16_t portid, bool inb)
 	return (union rte_pmd_cnxk_ipsec_hw_sa *)sa_base;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_sa_flush, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_sa_flush, 23.11);
 int
 rte_pmd_cnxk_sa_flush(uint16_t portid, union rte_pmd_cnxk_ipsec_hw_sa *sess, bool inb)
 {
@@ -375,7 +375,7 @@ rte_pmd_cnxk_sa_flush(uint16_t portid, union rte_pmd_cnxk_ipsec_hw_sa *sess, boo
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_read, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_read, 22.07);
 int
 rte_pmd_cnxk_hw_sa_read(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw_sa *data,
 			uint32_t len, bool inb)
@@ -421,7 +421,7 @@ rte_pmd_cnxk_hw_sa_read(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_write, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_sa_write, 22.07);
 int
 rte_pmd_cnxk_hw_sa_write(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_hw_sa *data,
 			 uint32_t len, bool inb)
@@ -462,7 +462,7 @@ rte_pmd_cnxk_hw_sa_write(uint16_t portid, void *sess, union rte_pmd_cnxk_ipsec_h
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_ipsec_res, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_inl_ipsec_res, 23.11);
 union rte_pmd_cnxk_cpt_res_s *
 rte_pmd_cnxk_inl_ipsec_res(struct rte_mbuf *mbuf)
 {
@@ -481,7 +481,7 @@ rte_pmd_cnxk_inl_ipsec_res(struct rte_mbuf *mbuf)
 	return (void *)(wqe + 64 + desc_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_inline_inb_cfg_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_cnxk_hw_inline_inb_cfg_set, 23.11);
 void
 rte_pmd_cnxk_hw_inline_inb_cfg_set(uint16_t portid, struct rte_pmd_cnxk_ipsec_inb_cfg *cfg)
 {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 00b57cb715..32e34eb272 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1295,7 +1295,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_attach);
 int
 dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
@@ -1361,7 +1361,7 @@ dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa_eth_eventq_detach);
 int
 dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id)
@@ -1803,7 +1803,7 @@ is_dpaa_supported(struct rte_eth_dev *dev)
 	return is_device_supported(dev, &rte_dpaa_pmd);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa_set_tx_loopback);
 int
 rte_pmd_dpaa_set_tx_loopback(uint16_t port, uint8_t on)
 {
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index b1d473429a..6cb811597c 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -29,7 +29,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		uint64_t req_dist_set,
 		struct dpkg_profile_cfg *kg_cfg);
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_set_custom_hash)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_set_custom_hash);
 int
 rte_pmd_dpaa2_set_custom_hash(uint16_t port_id,
 	uint16_t offset, uint8_t size)
diff --git a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
index 8c685120bd..f8ca9a3874 100644
--- a/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
+++ b/drivers/net/dpaa2/base/dpaa2_tlu_hash.c
@@ -144,7 +144,7 @@ static void hash_init(void)
 		}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_tlu_hash, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_tlu_hash, 21.11);
 uint32_t rte_pmd_dpaa2_get_tlu_hash(uint8_t *data, int size)
 {
 	static int init;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 998d1e7c53..3e5e8fe407 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -2240,7 +2240,7 @@ dpaa2_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_attach);
 int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		struct dpaa2_dpcon_dev *dpcon,
@@ -2327,7 +2327,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_detach)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_eth_eventq_detach);
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id)
 {
@@ -2413,7 +2413,7 @@ dpaa2_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *ops)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_thread_init, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_thread_init, 21.08);
 void
 rte_pmd_dpaa2_thread_init(void)
 {
@@ -2853,7 +2853,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_dev_is_dpaa2, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_dev_is_dpaa2, 24.11);
 int
 rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 {
@@ -2869,7 +2869,7 @@ rte_pmd_dpaa2_dev_is_dpaa2(uint32_t eth_id)
 	return dev->device->driver == &rte_dpaa2_pmd.driver;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_ep_name, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_ep_name, 24.11);
 const char *
 rte_pmd_dpaa2_ep_name(uint32_t eth_id)
 {
@@ -2895,7 +2895,7 @@ rte_pmd_dpaa2_ep_name(uint32_t eth_id)
 }
 
 #if defined(RTE_LIBRTE_IEEE1588)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_one_step_ts, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_get_one_step_ts, 24.11);
 int
 rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
 {
@@ -2924,7 +2924,7 @@ rte_pmd_dpaa2_get_one_step_ts(uint16_t port_id, bool mc_query)
 	return priv->ptp_correction_offset;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_set_one_step_ts, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_set_one_step_ts, 24.11);
 int
 rte_pmd_dpaa2_set_one_step_ts(uint16_t port_id, uint16_t offset, uint8_t ch_update)
 {
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 1908d1e865..95bd99fe80 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -55,7 +55,7 @@ static struct dpaa2_dpdmux_dev *get_dpdmux_from_id(uint32_t dpdmux_id)
 	return dpdmux_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_mux_flow_create)
+RTE_EXPORT_SYMBOL(rte_pmd_dpaa2_mux_flow_create);
 int
 rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 	struct rte_flow_item pattern[],
@@ -366,7 +366,7 @@ rte_pmd_dpaa2_mux_flow_l2(uint32_t dpdmux_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_rx_frame_len, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_rx_frame_len, 21.05);
 int
 rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 {
@@ -394,7 +394,7 @@ rte_pmd_dpaa2_mux_rx_frame_len(uint32_t dpdmux_id, uint16_t max_rx_frame_len)
 }
 
 /* dump the status of the dpaa2_mux counters on the console */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_dump_counter, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_dpaa2_mux_dump_counter, 24.11);
 void
 rte_pmd_dpaa2_mux_dump_counter(FILE *f, uint32_t dpdmux_id, int num_if)
 {
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 67d065bb7c..3c76df4c6f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1581,7 +1581,7 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
 	*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dev_tx_multi_txq_ordered)
+RTE_EXPORT_INTERNAL_SYMBOL(dpaa2_dev_tx_multi_txq_ordered);
 uint16_t
 dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/intel/i40e/rte_pmd_i40e.c b/drivers/net/intel/i40e/rte_pmd_i40e.c
index a358f68bc5..19b29b8576 100644
--- a/drivers/net/intel/i40e/rte_pmd_i40e.c
+++ b/drivers/net/intel/i40e/rte_pmd_i40e.c
@@ -14,7 +14,7 @@
 #include "i40e_rxtx.h"
 #include "rte_pmd_i40e.h"
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ping_vfs)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ping_vfs);
 int
 rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf)
 {
@@ -40,7 +40,7 @@ rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_anti_spoof);
 int
 rte_pmd_i40e_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -145,7 +145,7 @@ i40e_add_rm_all_vlan_filter(struct i40e_vsi *vsi, uint8_t add)
 	return I40E_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_anti_spoof);
 int
 rte_pmd_i40e_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -406,7 +406,7 @@ i40e_vsi_set_tx_loopback(struct i40e_vsi *vsi, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tx_loopback);
 int
 rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on)
 {
@@ -450,7 +450,7 @@ rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_unicast_promisc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_unicast_promisc);
 int
 rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -492,7 +492,7 @@ rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_multicast_promisc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_multicast_promisc);
 int
 rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -534,7 +534,7 @@ rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_mac_addr);
 int
 rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 			     struct rte_ether_addr *mac_addr)
@@ -625,7 +625,7 @@ rte_pmd_i40e_remove_vf_mac_addr(uint16_t port, uint16_t vf_id,
 }
 
 /* Set vlan strip on/off for specific VF from host */
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_stripq);
 int
 rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf_id, uint8_t on)
 {
@@ -662,7 +662,7 @@ rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_insert);
 int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id,
 				    uint16_t vlan_id)
 {
@@ -728,7 +728,7 @@ int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_broadcast)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_broadcast);
 int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id,
 				  uint8_t on)
 {
@@ -795,7 +795,7 @@ int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_tag)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_tag);
 int rte_pmd_i40e_set_vf_vlan_tag(uint16_t port, uint16_t vf_id, uint8_t on)
 {
 	struct rte_eth_dev *dev;
@@ -890,7 +890,7 @@ i40e_vlan_filter_count(struct i40e_vsi *vsi)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_vlan_filter);
 int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id,
 				    uint64_t vf_mask, uint8_t on)
 {
@@ -973,7 +973,7 @@ int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_vf_stats);
 int
 rte_pmd_i40e_get_vf_stats(uint16_t port,
 			  uint16_t vf_id,
@@ -1019,7 +1019,7 @@ rte_pmd_i40e_get_vf_stats(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_reset_vf_stats)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_reset_vf_stats);
 int
 rte_pmd_i40e_reset_vf_stats(uint16_t port,
 			    uint16_t vf_id)
@@ -1054,7 +1054,7 @@ rte_pmd_i40e_reset_vf_stats(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_max_bw)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_max_bw);
 int
 rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw)
 {
@@ -1144,7 +1144,7 @@ rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_bw_alloc)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_bw_alloc);
 int
 rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id,
 				uint8_t tc_num, uint8_t *bw_weight)
@@ -1259,7 +1259,7 @@ rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_max_bw)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_vf_tc_max_bw);
 int
 rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id,
 			      uint8_t tc_no, uint32_t bw)
@@ -1378,7 +1378,7 @@ rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tc_strict_prio)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_set_tc_strict_prio);
 int
 rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map)
 {
@@ -1624,7 +1624,7 @@ i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_process_ddp_package)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_process_ddp_package);
 int
 rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff,
 				 uint32_t size,
@@ -1809,7 +1809,7 @@ i40e_get_tlv_section_size(struct i40e_profile_section_header *sec)
 	return nb_tlv;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_info)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_info);
 int rte_pmd_i40e_get_ddp_info(uint8_t *pkg_buff, uint32_t pkg_size,
 	uint8_t *info_buff, uint32_t info_size,
 	enum rte_pmd_i40e_package_info type)
@@ -2118,7 +2118,7 @@ int rte_pmd_i40e_get_ddp_info(uint8_t *pkg_buff, uint32_t pkg_size,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_list)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_get_ddp_list);
 int
 rte_pmd_i40e_get_ddp_list(uint16_t port, uint8_t *buff, uint32_t size)
 {
@@ -2250,7 +2250,7 @@ static int check_invalid_ptype_mapping(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_update)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_update);
 int
 rte_pmd_i40e_ptype_mapping_update(
 			uint16_t port,
@@ -2289,7 +2289,7 @@ rte_pmd_i40e_ptype_mapping_update(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_reset);
 int rte_pmd_i40e_ptype_mapping_reset(uint16_t port)
 {
 	struct rte_eth_dev *dev;
@@ -2306,7 +2306,7 @@ int rte_pmd_i40e_ptype_mapping_reset(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_get);
 int rte_pmd_i40e_ptype_mapping_get(
 			uint16_t port,
 			struct rte_pmd_i40e_ptype_mapping *mapping_items,
@@ -2342,7 +2342,7 @@ int rte_pmd_i40e_ptype_mapping_get(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_replace)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_ptype_mapping_replace);
 int rte_pmd_i40e_ptype_mapping_replace(uint16_t port,
 				       uint32_t target,
 				       uint8_t mask,
@@ -2381,7 +2381,7 @@ int rte_pmd_i40e_ptype_mapping_replace(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_add_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_add_vf_mac_addr);
 int
 rte_pmd_i40e_add_vf_mac_addr(uint16_t port, uint16_t vf_id,
 			     struct rte_ether_addr *mac_addr)
@@ -2429,7 +2429,7 @@ rte_pmd_i40e_add_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_reset);
 int rte_pmd_i40e_flow_type_mapping_reset(uint16_t port)
 {
 	struct rte_eth_dev *dev;
@@ -2446,7 +2446,7 @@ int rte_pmd_i40e_flow_type_mapping_reset(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_get);
 int rte_pmd_i40e_flow_type_mapping_get(
 			uint16_t port,
 			struct rte_pmd_i40e_flow_type_mapping *mapping_items)
@@ -2472,7 +2472,7 @@ int rte_pmd_i40e_flow_type_mapping_get(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_update)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_type_mapping_update);
 int
 rte_pmd_i40e_flow_type_mapping_update(
 			uint16_t port,
@@ -2526,7 +2526,7 @@ rte_pmd_i40e_flow_type_mapping_update(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_query_vfid_by_mac)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_query_vfid_by_mac);
 int
 rte_pmd_i40e_query_vfid_by_mac(uint16_t port,
 			const struct rte_ether_addr *vf_mac)
@@ -2997,7 +2997,7 @@ i40e_queue_region_get_all_info(struct i40e_pf *pf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_rss_queue_region_conf)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_rss_queue_region_conf);
 int rte_pmd_i40e_rss_queue_region_conf(uint16_t port_id,
 		enum rte_pmd_i40e_queue_region_op op_type, void *arg)
 {
@@ -3063,7 +3063,7 @@ int rte_pmd_i40e_rss_queue_region_conf(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_add_del_packet_template)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_flow_add_del_packet_template);
 int rte_pmd_i40e_flow_add_del_packet_template(
 			uint16_t port,
 			const struct rte_pmd_i40e_pkt_template_conf *conf,
@@ -3097,7 +3097,7 @@ int rte_pmd_i40e_flow_add_del_packet_template(
 	return i40e_flow_add_del_fdir_filter(dev, &filter_conf, add);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_get)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_get);
 int
 rte_pmd_i40e_inset_get(uint16_t port, uint8_t pctype,
 		       struct rte_pmd_i40e_inset *inset,
@@ -3170,7 +3170,7 @@ rte_pmd_i40e_inset_get(uint16_t port, uint8_t pctype,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_set)
+RTE_EXPORT_SYMBOL(rte_pmd_i40e_inset_set);
 int
 rte_pmd_i40e_inset_set(uint16_t port, uint8_t pctype,
 		       struct rte_pmd_i40e_inset *inset,
@@ -3245,7 +3245,7 @@ rte_pmd_i40e_inset_set(uint16_t port, uint8_t pctype,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_info, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_info, 20.08);
 int
 rte_pmd_i40e_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 {
@@ -3262,7 +3262,7 @@ rte_pmd_i40e_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_stats, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_get_fdir_stats, 20.08);
 int
 rte_pmd_i40e_get_fdir_stats(uint16_t port, struct rte_eth_fdir_stats *fdir_stat)
 {
@@ -3279,7 +3279,7 @@ rte_pmd_i40e_get_fdir_stats(uint16_t port, struct rte_eth_fdir_stats *fdir_stat)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_gre_key_len, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_gre_key_len, 20.08);
 int
 rte_pmd_i40e_set_gre_key_len(uint16_t port, uint8_t len)
 {
@@ -3299,7 +3299,7 @@ rte_pmd_i40e_set_gre_key_len(uint16_t port, uint8_t len)
 	return i40e_dev_set_gre_key_len(hw, len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_switch_dev, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_switch_dev, 19.11);
 int
 rte_pmd_i40e_set_switch_dev(uint16_t port_id, struct rte_eth_dev *switch_dev)
 {
@@ -3321,7 +3321,7 @@ rte_pmd_i40e_set_switch_dev(uint16_t port_id, struct rte_eth_dev *switch_dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_pf_src_prune, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_i40e_set_pf_src_prune, 23.07);
 int
 rte_pmd_i40e_set_pf_src_prune(uint16_t port, uint8_t on)
 {
diff --git a/drivers/net/intel/iavf/iavf_base_symbols.c b/drivers/net/intel/iavf/iavf_base_symbols.c
index 2111b14aa8..706aa36a92 100644
--- a/drivers/net/intel/iavf/iavf_base_symbols.c
+++ b/drivers/net/intel/iavf/iavf_base_symbols.c
@@ -5,10 +5,10 @@
 #include <eal_export.h>
 
 /* Symbols from the base driver are exported separately below. */
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_init_adminq)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_shutdown_adminq)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_clean_arq_element)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_set_mac_type)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_aq_send_msg_to_pf)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_parse_hw_config)
-RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_init_adminq);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_shutdown_adminq);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_clean_arq_element);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_set_mac_type);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_aq_send_msg_to_pf);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_parse_hw_config);
+RTE_EXPORT_INTERNAL_SYMBOL(iavf_vf_reset);
diff --git a/drivers/net/intel/iavf/iavf_rxtx.c b/drivers/net/intel/iavf/iavf_rxtx.c
index 7033a74610..ff298e164b 100644
--- a/drivers/net/intel/iavf/iavf_rxtx.c
+++ b/drivers/net/intel/iavf/iavf_rxtx.c
@@ -75,23 +75,23 @@ struct offload_info {
 };
 
 /* Offset of mbuf dynamic field for protocol extraction's metadata */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, 20.11);
 int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
 
 /* Mask of mbuf dynamic flags for protocol extraction's type */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_vlan_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_vlan_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_tcp_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_tcp_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask, 20.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask, 21.11);
 uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
 
 uint8_t
diff --git a/drivers/net/intel/ice/ice_diagnose.c b/drivers/net/intel/ice/ice_diagnose.c
index 298d1eda5c..db89b793ed 100644
--- a/drivers/net/intel/ice/ice_diagnose.c
+++ b/drivers/net/intel/ice/ice_diagnose.c
@@ -410,7 +410,7 @@ ice_dump_pkg(struct rte_eth_dev *dev, uint8_t **buff, uint32_t *size)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_package, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_package, 19.11);
 int rte_pmd_ice_dump_package(uint16_t port, uint8_t **buff, uint32_t *size)
 {
 	struct rte_eth_dev *dev;
@@ -499,7 +499,7 @@ ice_dump_switch(struct rte_eth_dev *dev, uint8_t **buff2, uint32_t *size)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_switch, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_switch, 22.11);
 int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size)
 {
 	struct rte_eth_dev *dev;
@@ -801,7 +801,7 @@ query_node_recursive(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_txsched, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ice_dump_txsched, 24.03);
 int
 rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_device.c b/drivers/net/intel/idpf/idpf_common_device.c
index ff1fbcd2b4..cdf804e119 100644
--- a/drivers/net/intel/idpf/idpf_common_device.c
+++ b/drivers/net/intel/idpf/idpf_common_device.c
@@ -382,7 +382,7 @@ idpf_get_pkt_type(struct idpf_adapter *adapter)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_init);
 int
 idpf_adapter_init(struct idpf_adapter *adapter)
 {
@@ -443,7 +443,7 @@ idpf_adapter_init(struct idpf_adapter *adapter)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_adapter_deinit);
 int
 idpf_adapter_deinit(struct idpf_adapter *adapter)
 {
@@ -456,7 +456,7 @@ idpf_adapter_deinit(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_init);
 int
 idpf_vport_init(struct idpf_vport *vport,
 		struct virtchnl2_create_vport *create_vport_info,
@@ -570,7 +570,7 @@ idpf_vport_init(struct idpf_vport *vport,
 err_create_vport:
 	return ret;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_deinit)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_deinit);
 int
 idpf_vport_deinit(struct idpf_vport *vport)
 {
@@ -588,7 +588,7 @@ idpf_vport_deinit(struct idpf_vport *vport)
 
 	return 0;
 }
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_rss_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_rss_config);
 int
 idpf_vport_rss_config(struct idpf_vport *vport)
 {
@@ -615,7 +615,7 @@ idpf_vport_rss_config(struct idpf_vport *vport)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config);
 int
 idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
@@ -691,7 +691,7 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config_by_qids)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_map_config_by_qids);
 int
 idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues)
 {
@@ -767,7 +767,7 @@ idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_unmap_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_irq_unmap_config);
 int
 idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 {
@@ -779,7 +779,7 @@ idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_info_init)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_info_init);
 int
 idpf_vport_info_init(struct idpf_vport *vport,
 			    struct virtchnl2_create_vport *vport_info)
@@ -816,7 +816,7 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_stats_update)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vport_stats_update);
 void
 idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx.c b/drivers/net/intel/idpf/idpf_common_rxtx.c
index eb25b091d8..4e6fa28ac2 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx.c
@@ -11,7 +11,7 @@
 int idpf_timestamp_dynfield_offset = -1;
 uint64_t idpf_timestamp_dynflag;
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_thresh_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_thresh_check);
 int
 idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 {
@@ -27,7 +27,7 @@ idpf_qc_rx_thresh_check(uint16_t nb_desc, uint16_t thresh)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_thresh_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_thresh_check);
 int
 idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
 			uint16_t tx_free_thresh)
@@ -76,7 +76,7 @@ idpf_qc_tx_thresh_check(uint16_t nb_desc, uint16_t tx_rs_thresh,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rxq_mbufs_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rxq_mbufs_release);
 void
 idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 {
@@ -93,7 +93,7 @@ idpf_qc_rxq_mbufs_release(struct idpf_rx_queue *rxq)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_descq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_descq_reset);
 void
 idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 {
@@ -113,7 +113,7 @@ idpf_qc_split_rx_descq_reset(struct idpf_rx_queue *rxq)
 	rxq->expected_gen_id = 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_bufq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_bufq_reset);
 void
 idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 {
@@ -149,7 +149,7 @@ idpf_qc_split_rx_bufq_reset(struct idpf_rx_queue *rxq)
 	rxq->bufq2 = NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rx_queue_reset);
 void
 idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
@@ -158,7 +158,7 @@ idpf_qc_split_rx_queue_reset(struct idpf_rx_queue *rxq)
 	idpf_qc_split_rx_bufq_reset(rxq->bufq2);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rx_queue_reset);
 void
 idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 {
@@ -190,7 +190,7 @@ idpf_qc_single_rx_queue_reset(struct idpf_rx_queue *rxq)
 	rxq->rxrearm_nb = 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_descq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_descq_reset);
 void
 idpf_qc_split_tx_descq_reset(struct ci_tx_queue *txq)
 {
@@ -229,7 +229,7 @@ idpf_qc_split_tx_descq_reset(struct ci_tx_queue *txq)
 	txq->tx_next_rs = txq->tx_rs_thresh - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_complq_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_tx_complq_reset);
 void
 idpf_qc_split_tx_complq_reset(struct ci_tx_queue *cq)
 {
@@ -248,7 +248,7 @@ idpf_qc_split_tx_complq_reset(struct ci_tx_queue *cq)
 	cq->expected_gen_id = 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_tx_queue_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_tx_queue_reset);
 void
 idpf_qc_single_tx_queue_reset(struct ci_tx_queue *txq)
 {
@@ -286,7 +286,7 @@ idpf_qc_single_tx_queue_reset(struct ci_tx_queue *txq)
 	txq->tx_next_rs = txq->tx_rs_thresh - 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_queue_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_rx_queue_release);
 void
 idpf_qc_rx_queue_release(void *rxq)
 {
@@ -317,7 +317,7 @@ idpf_qc_rx_queue_release(void *rxq)
 	rte_free(q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_queue_release)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_queue_release);
 void
 idpf_qc_tx_queue_release(void *txq)
 {
@@ -337,7 +337,7 @@ idpf_qc_tx_queue_release(void *txq)
 	rte_free(q);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_ts_mbuf_register)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_ts_mbuf_register);
 int
 idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 {
@@ -355,7 +355,7 @@ idpf_qc_ts_mbuf_register(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rxq_mbufs_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_single_rxq_mbufs_alloc);
 int
 idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
@@ -391,7 +391,7 @@ idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rxq_mbufs_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_split_rxq_mbufs_alloc);
 int
 idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq)
 {
@@ -615,7 +615,7 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 	rx_bufq->rx_tail = next_avail;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts);
 uint16_t
 idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			 uint16_t nb_pkts)
@@ -848,7 +848,7 @@ idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
 				 IDPF_TXD_FLEX_CTX_MSS_RT_M);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts);
 uint16_t
 idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 uint16_t nb_pkts)
@@ -1040,7 +1040,7 @@ idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts);
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1159,7 +1159,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_scatter_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_scatter_pkts);
 uint16_t
 idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts)
@@ -1337,7 +1337,7 @@ idpf_xmit_cleanup(struct ci_tx_queue *txq)
 }
 
 /* TX function */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts);
 uint16_t
 idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			  uint16_t nb_pkts)
@@ -1505,7 +1505,7 @@ idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 }
 
 /* TX prep functions */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_prep_pkts)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_prep_pkts);
 uint16_t
 idpf_dp_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		  uint16_t nb_pkts)
@@ -1607,7 +1607,7 @@ idpf_rxq_vec_setup_default(struct idpf_rx_queue *rxq)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_singleq_rx_vec_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_singleq_rx_vec_setup);
 int __rte_cold
 idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
@@ -1615,7 +1615,7 @@ idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq)
 	return idpf_rxq_vec_setup_default(rxq);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_splitq_rx_vec_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_splitq_rx_vec_setup);
 int __rte_cold
 idpf_qc_splitq_rx_vec_setup(struct idpf_rx_queue *rxq)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
index 1babc5114b..aedee7b046 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx_avx2.c
@@ -475,7 +475,7 @@ _idpf_singleq_recv_raw_pkts_vec_avx2(struct idpf_rx_queue *rxq, struct rte_mbuf
  * Notice:
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx2)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx2);
 uint16_t
 idpf_dp_singleq_recv_pkts_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
@@ -618,7 +618,7 @@ idpf_singleq_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts
 	return nb_pkts;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx2)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx2);
 uint16_t
 idpf_dp_singleq_xmit_pkts_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts)
diff --git a/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c b/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
index 06e73c8725..c9e7b39de2 100644
--- a/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/net/intel/idpf/idpf_common_rxtx_avx512.c
@@ -532,7 +532,7 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
  * Notice:
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_recv_pkts_avx512);
 uint16_t
 idpf_dp_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 				 uint16_t nb_pkts)
@@ -990,7 +990,7 @@ _idpf_splitq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
 }
 
 /* only bufq2 can receive pkts */
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_recv_pkts_avx512);
 uint16_t
 idpf_dp_splitq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
 			     uint16_t nb_pkts)
@@ -1159,7 +1159,7 @@ idpf_singleq_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_singleq_xmit_pkts_avx512);
 uint16_t
 idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				 uint16_t nb_pkts)
@@ -1361,7 +1361,7 @@ idpf_splitq_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts_avx512)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_dp_splitq_xmit_pkts_avx512);
 uint16_t
 idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts)
@@ -1369,7 +1369,7 @@ idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return idpf_splitq_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_vec_avx512_setup)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_qc_tx_vec_avx512_setup);
 int __rte_cold
 idpf_qc_tx_vec_avx512_setup(struct ci_tx_queue *txq)
 {
diff --git a/drivers/net/intel/idpf/idpf_common_virtchnl.c b/drivers/net/intel/idpf/idpf_common_virtchnl.c
index bab854e191..871893a9ed 100644
--- a/drivers/net/intel/idpf/idpf_common_virtchnl.c
+++ b/drivers/net/intel/idpf/idpf_common_virtchnl.c
@@ -160,7 +160,7 @@ idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
 #define MAX_TRY_TIMES 200
 #define ASQ_DELAY_MS  10
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_one_msg_read)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_one_msg_read);
 int
 idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
 		     uint8_t *buf)
@@ -185,7 +185,7 @@ idpf_vc_one_msg_read(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_le
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_cmd_execute)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_cmd_execute);
 int
 idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 {
@@ -235,7 +235,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_api_version_check)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_api_version_check);
 int
 idpf_vc_api_version_check(struct idpf_adapter *adapter)
 {
@@ -276,7 +276,7 @@ idpf_vc_api_version_check(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_caps_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_caps_get);
 int
 idpf_vc_caps_get(struct idpf_adapter *adapter)
 {
@@ -301,7 +301,7 @@ idpf_vc_caps_get(struct idpf_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_create)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_create);
 int
 idpf_vc_vport_create(struct idpf_vport *vport,
 		     struct virtchnl2_create_vport *create_vport_info)
@@ -338,7 +338,7 @@ idpf_vc_vport_create(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_destroy);
 int
 idpf_vc_vport_destroy(struct idpf_vport *vport)
 {
@@ -363,7 +363,7 @@ idpf_vc_vport_destroy(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_add)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_add);
 int
 idpf_vc_queue_grps_add(struct idpf_vport *vport,
 		       struct virtchnl2_add_queue_groups *p2p_queue_grps_info,
@@ -396,7 +396,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_del)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_grps_del);
 int idpf_vc_queue_grps_del(struct idpf_vport *vport,
 			  uint16_t num_q_grps,
 			  struct virtchnl2_queue_group_id *qg_ids)
@@ -431,7 +431,7 @@ int idpf_vc_queue_grps_del(struct idpf_vport *vport,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_set);
 int
 idpf_vc_rss_key_set(struct idpf_vport *vport)
 {
@@ -466,7 +466,7 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_key_get);
 int idpf_vc_rss_key_get(struct idpf_vport *vport)
 {
 	struct idpf_adapter *adapter = vport->adapter;
@@ -509,7 +509,7 @@ int idpf_vc_rss_key_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_set);
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -544,7 +544,7 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_lut_get);
 int
 idpf_vc_rss_lut_get(struct idpf_vport *vport)
 {
@@ -587,7 +587,7 @@ idpf_vc_rss_lut_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_get)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_get);
 int
 idpf_vc_rss_hash_get(struct idpf_vport *vport)
 {
@@ -620,7 +620,7 @@ idpf_vc_rss_hash_get(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_set)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rss_hash_set);
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
@@ -647,7 +647,7 @@ idpf_vc_rss_hash_set(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_irq_map_unmap_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_irq_map_unmap_config);
 int
 idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
@@ -689,7 +689,7 @@ idpf_vc_irq_map_unmap_config(struct idpf_vport *vport, uint16_t nb_rxq, bool map
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_alloc);
 int
 idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 {
@@ -720,7 +720,7 @@ idpf_vc_vectors_alloc(struct idpf_vport *vport, uint16_t num_vectors)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_dealloc)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vectors_dealloc);
 int
 idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 {
@@ -748,7 +748,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ena_dis_one_queue);
 int
 idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 			  uint32_t type, bool on)
@@ -787,7 +787,7 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queue_switch);
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 		     bool rx, bool on, uint32_t type)
@@ -828,7 +828,7 @@ idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
 }
 
 #define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queues_ena_dis)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_queues_ena_dis);
 int
 idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 {
@@ -897,7 +897,7 @@ idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_ena_dis)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_vport_ena_dis);
 int
 idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 {
@@ -923,7 +923,7 @@ idpf_vc_vport_ena_dis(struct idpf_vport *vport, bool enable)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ptype_info_query)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ptype_info_query);
 int
 idpf_vc_ptype_info_query(struct idpf_adapter *adapter,
 			 struct virtchnl2_get_ptype_info *req_ptype_info,
@@ -946,7 +946,7 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter,
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_stats_query)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_stats_query);
 int
 idpf_vc_stats_query(struct idpf_vport *vport,
 		struct virtchnl2_vport_stats **pstats)
@@ -974,7 +974,7 @@ idpf_vc_stats_query(struct idpf_vport *vport,
 }
 
 #define IDPF_RX_BUF_STRIDE		64
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config);
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 {
@@ -1064,7 +1064,7 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config_by_info)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_rxq_config_by_info);
 int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info,
 			       uint16_t num_qs)
 {
@@ -1100,7 +1100,7 @@ int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_in
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config);
 int
 idpf_vc_txq_config(struct idpf_vport *vport, struct ci_tx_queue *txq)
 {
@@ -1172,7 +1172,7 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct ci_tx_queue *txq)
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config_by_info)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_txq_config_by_info);
 int
 idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info,
 		       uint16_t num_qs)
@@ -1208,7 +1208,7 @@ idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *
 	return err;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_recv)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_recv);
 int
 idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 		  struct idpf_ctlq_msg *q_msg)
@@ -1216,7 +1216,7 @@ idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_post_rx_buffs)
+RTE_EXPORT_INTERNAL_SYMBOL(idpf_vc_ctlq_post_rx_buffs);
 int
 idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			   u16 *buff_count, struct idpf_dma_mem **buffs)
diff --git a/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c b/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
index 2ee87c94c2..a0b4a2b6a9 100644
--- a/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/intel/ipn3ke/ipn3ke_ethdev.c
@@ -35,7 +35,7 @@ static const struct rte_afu_uuid afu_uuid_ipn3ke_map[] = {
 	{ 0, 0 /* sentinel */ },
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(ipn3ke_bridge_func)
+RTE_EXPORT_INTERNAL_SYMBOL(ipn3ke_bridge_func);
 struct ipn3ke_pub_func ipn3ke_bridge_func;
 
 static int
diff --git a/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c b/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
index c2300a8955..c4ffb3d100 100644
--- a/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/intel/ixgbe/rte_pmd_ixgbe.c
@@ -10,7 +10,7 @@
 #include <eal_export.h>
 #include "rte_pmd_ixgbe.h"
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_addr)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_addr);
 int
 rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 			      struct rte_ether_addr *mac_addr)
@@ -47,7 +47,7 @@ rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_ping_vf)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_ping_vf);
 int
 rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf)
 {
@@ -80,7 +80,7 @@ rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_anti_spoof);
 int
 rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -111,7 +111,7 @@ rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_anti_spoof)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_mac_anti_spoof);
 int
 rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -141,7 +141,7 @@ rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_insert)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_insert);
 int
 rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id)
 {
@@ -178,7 +178,7 @@ rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tx_loopback)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tx_loopback);
 int
 rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on)
 {
@@ -209,7 +209,7 @@ rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_all_queues_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_all_queues_drop_en);
 int
 rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on)
 {
@@ -240,7 +240,7 @@ rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_split_drop_en)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_split_drop_en);
 int
 rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -276,7 +276,7 @@ rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_stripq)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_stripq);
 int
 rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -324,7 +324,7 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rxmode)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rxmode);
 int
 rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf,
 			    uint16_t rx_mask, uint8_t on)
@@ -372,7 +372,7 @@ rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rx)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rx);
 int
 rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -423,7 +423,7 @@ rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_tx)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_tx);
 int
 rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on)
 {
@@ -474,7 +474,7 @@ rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_vlan_filter);
 int
 rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 				 uint64_t vf_mask, uint8_t vlan_on)
@@ -510,7 +510,7 @@ rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rate_limit)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_vf_rate_limit);
 int
 rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
 				uint32_t tx_rate, uint64_t q_msk)
@@ -527,7 +527,7 @@ rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf,
 	return ixgbe_set_vf_rate_limit(dev, vf, tx_rate, q_msk);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_enable)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_enable);
 int
 rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp)
 {
@@ -552,7 +552,7 @@ rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_disable)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_disable);
 int
 rte_pmd_ixgbe_macsec_disable(uint16_t port)
 {
@@ -572,7 +572,7 @@ rte_pmd_ixgbe_macsec_disable(uint16_t port)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_txsc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_txsc);
 int
 rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac)
 {
@@ -598,7 +598,7 @@ rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_rxsc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_config_rxsc);
 int
 rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 {
@@ -625,7 +625,7 @@ rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_txsa)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_txsa);
 int
 rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -682,7 +682,7 @@ rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_rxsa)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_macsec_select_rxsa);
 int
 rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 				 uint32_t pn, uint8_t *key)
@@ -726,7 +726,7 @@ rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tc_bw_alloc)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_set_tc_bw_alloc);
 int
 rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 			      uint8_t tc_num,
@@ -800,7 +800,7 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_upd_fctrl_sbp)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_upd_fctrl_sbp);
 int
 rte_pmd_ixgbe_upd_fctrl_sbp(uint16_t port, int enable)
 {
@@ -830,7 +830,7 @@ rte_pmd_ixgbe_upd_fctrl_sbp(uint16_t port, int enable)
 }
 
 #ifdef RTE_LIBRTE_IXGBE_BYPASS
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_init)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_init);
 int
 rte_pmd_ixgbe_bypass_init(uint16_t port_id)
 {
@@ -846,7 +846,7 @@ rte_pmd_ixgbe_bypass_init(uint16_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_show);
 int
 rte_pmd_ixgbe_bypass_state_show(uint16_t port_id, uint32_t *state)
 {
@@ -861,7 +861,7 @@ rte_pmd_ixgbe_bypass_state_show(uint16_t port_id, uint32_t *state)
 	return ixgbe_bypass_state_show(dev, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_set)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_state_set);
 int
 rte_pmd_ixgbe_bypass_state_set(uint16_t port_id, uint32_t *new_state)
 {
@@ -876,7 +876,7 @@ rte_pmd_ixgbe_bypass_state_set(uint16_t port_id, uint32_t *new_state)
 	return ixgbe_bypass_state_store(dev, new_state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_show);
 int
 rte_pmd_ixgbe_bypass_event_show(uint16_t port_id,
 				uint32_t event,
@@ -893,7 +893,7 @@ rte_pmd_ixgbe_bypass_event_show(uint16_t port_id,
 	return ixgbe_bypass_event_show(dev, event, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_store)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_event_store);
 int
 rte_pmd_ixgbe_bypass_event_store(uint16_t port_id,
 				 uint32_t event,
@@ -910,7 +910,7 @@ rte_pmd_ixgbe_bypass_event_store(uint16_t port_id,
 	return ixgbe_bypass_event_store(dev, event, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_store)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_store);
 int
 rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port_id, uint32_t timeout)
 {
@@ -925,7 +925,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port_id, uint32_t timeout)
 	return ixgbe_bypass_wd_timeout_store(dev, timeout);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_ver_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_ver_show);
 int
 rte_pmd_ixgbe_bypass_ver_show(uint16_t port_id, uint32_t *ver)
 {
@@ -940,7 +940,7 @@ rte_pmd_ixgbe_bypass_ver_show(uint16_t port_id, uint32_t *ver)
 	return ixgbe_bypass_ver_show(dev, ver);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_show)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_timeout_show);
 int
 rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port_id, uint32_t *wd_timeout)
 {
@@ -955,7 +955,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port_id, uint32_t *wd_timeout)
 	return ixgbe_bypass_wd_timeout_show(dev, wd_timeout);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_reset)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_bypass_wd_reset);
 int
 rte_pmd_ixgbe_bypass_wd_reset(uint16_t port_id)
 {
@@ -1024,7 +1024,7 @@ STATIC void rte_pmd_ixgbe_release_swfw(struct ixgbe_hw *hw, u32 mask)
 	ixgbe_release_swfw_semaphore(hw, mask);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_lock)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_lock);
 int
 rte_pmd_ixgbe_mdio_lock(uint16_t port)
 {
@@ -1052,7 +1052,7 @@ rte_pmd_ixgbe_mdio_lock(uint16_t port)
 	return IXGBE_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlock)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlock);
 int
 rte_pmd_ixgbe_mdio_unlock(uint16_t port)
 {
@@ -1080,7 +1080,7 @@ rte_pmd_ixgbe_mdio_unlock(uint16_t port)
 	return IXGBE_SUCCESS;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_read)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_read);
 int
 rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
 				 uint32_t dev_type, uint16_t *phy_data)
@@ -1128,7 +1128,7 @@ rte_pmd_ixgbe_mdio_unlocked_read(uint16_t port, uint32_t reg_addr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_write)
+RTE_EXPORT_SYMBOL(rte_pmd_ixgbe_mdio_unlocked_write);
 int
 rte_pmd_ixgbe_mdio_unlocked_write(uint16_t port, uint32_t reg_addr,
 				  uint32_t dev_type, uint16_t phy_data)
@@ -1176,7 +1176,7 @@ rte_pmd_ixgbe_mdio_unlocked_write(uint16_t port, uint32_t reg_addr,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_info, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_info, 20.08);
 int
 rte_pmd_ixgbe_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 {
@@ -1193,7 +1193,7 @@ rte_pmd_ixgbe_get_fdir_info(uint16_t port, struct rte_eth_fdir_info *fdir_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_stats, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_ixgbe_get_fdir_stats, 20.08);
 int
 rte_pmd_ixgbe_get_fdir_stats(uint16_t port,
 			     struct rte_eth_fdir_stats *fdir_stats)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1321be779b..d79bc3d745 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -3379,7 +3379,7 @@ mlx5_set_metadata_mask(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "metadata reg_c0 mask %08X", sh->dv_regc0_mask);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_get_dyn_flag_names, 20.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_get_dyn_flag_names, 20.02);
 int
 rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n)
 {
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8db372123c..ce4d2246a6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7880,7 +7880,7 @@ mlx5_flow_cache_flow_toggle(struct rte_eth_dev *dev, bool orig_prio)
  * @return
  *   Negative value on error, positive on success.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_flow_engine_set_mode, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_flow_engine_set_mode, 23.03);
 int
 rte_pmd_mlx5_flow_engine_set_mode(enum rte_pmd_mlx5_flow_engine_mode mode, uint32_t flags)
 {
@@ -10986,7 +10986,7 @@ mlx5_action_handle_detach(struct rte_eth_dev *dev)
 	(MLX5DV_DR_DOMAIN_SYNC_FLAGS_SW | MLX5DV_DR_DOMAIN_SYNC_FLAGS_HW)
 #endif
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_sync_flow, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_sync_flow, 20.11);
 int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -12263,7 +12263,7 @@ mlx5_flow_discover_ipv6_tc_support(struct rte_eth_dev *dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_create_geneve_tlv_parser, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_create_geneve_tlv_parser, 24.03);
 void *
 rte_pmd_mlx5_create_geneve_tlv_parser(uint16_t port_id,
 				      const struct rte_pmd_mlx5_geneve_tlv tlv_list[],
@@ -12281,7 +12281,7 @@ rte_pmd_mlx5_create_geneve_tlv_parser(uint16_t port_id,
 #endif
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_destroy_geneve_tlv_parser, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_destroy_geneve_tlv_parser, 24.03);
 int
 rte_pmd_mlx5_destroy_geneve_tlv_parser(void *handle)
 {
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 5e8c312d00..cc26c785c0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -1832,7 +1832,7 @@ mlxreg_host_shaper_config(struct rte_eth_dev *dev,
 #endif
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_host_shaper_config, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_host_shaper_config, 22.07);
 int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate,
 				    uint32_t flags)
 {
@@ -1874,7 +1874,7 @@ int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate,
  * @return
  *   0 for Success, non-zero value depending on failure type
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_rxq_dump_contexts, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_rxq_dump_contexts, 24.07);
 int rte_pmd_mlx5_rxq_dump_contexts(uint16_t port_id, uint16_t queue_id, const char *filename)
 {
 	struct rte_eth_dev *dev;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 77c5848c37..9bfef96b5f 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -3311,7 +3311,7 @@ mlx5_external_rx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 	return &priv->ext_rxqs[dpdk_idx - RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_map, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_map, 22.03);
 int
 rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 				      uint32_t hw_idx)
@@ -3345,7 +3345,7 @@ rte_pmd_mlx5_external_rx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_unmap, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_rx_queue_id_unmap, 22.03);
 int
 rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
 {
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fe9da7f8c1..41d427d8c4 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -777,7 +777,7 @@ mlx5_tx_burst_mode_get(struct rte_eth_dev *dev,
  *   0 for success, non-zero value depending on failure.
  *
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_txq_dump_contexts, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_txq_dump_contexts, 24.07);
 int rte_pmd_mlx5_txq_dump_contexts(uint16_t port_id, uint16_t queue_id, const char *filename)
 {
 	struct rte_eth_dev *dev;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index b090d8274d..565dcf804d 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -1415,7 +1415,7 @@ mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq)
 	return txq->is_hairpin ? txq->obj->sq->id : txq->obj->sq_obj.sq->id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_sq_enable, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_sq_enable, 22.07);
 int
 rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
 {
@@ -1597,7 +1597,7 @@ mlx5_external_tx_queue_get_validate(uint16_t port_id, uint16_t dpdk_idx)
 	return &priv->ext_txqs[dpdk_idx - MLX5_EXTERNAL_TX_QUEUE_ID_MIN];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_map, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_map, 24.07);
 int
 rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 				      uint32_t hw_idx)
@@ -1631,7 +1631,7 @@ rte_pmd_mlx5_external_tx_queue_id_map(uint16_t port_id, uint16_t dpdk_idx,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_unmap, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmd_mlx5_external_tx_queue_id_unmap, 24.07);
 int
 rte_pmd_mlx5_external_tx_queue_id_unmap(uint16_t port_id, uint16_t dpdk_idx)
 {
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9451431144..f11ad9251a 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -45,7 +45,7 @@ struct octeontx_vdev_init_params {
 	uint8_t	nr_port;
 };
 
-RTE_EXPORT_SYMBOL(rte_octeontx_pchan_map)
+RTE_EXPORT_SYMBOL(rte_octeontx_pchan_map);
 uint16_t
 rte_octeontx_pchan_map[OCTEONTX_MAX_BGX_PORTS][OCTEONTX_MAX_LMAC_PER_BGX];
 
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index b1085bf390..962106fa2c 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -457,7 +457,7 @@ do_eth_dev_ring_create(const char *name,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_from_rings)
+RTE_EXPORT_SYMBOL(rte_eth_from_rings);
 int
 rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[],
 		const unsigned int nb_rx_queues,
@@ -516,7 +516,7 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[],
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_from_ring)
+RTE_EXPORT_SYMBOL(rte_eth_from_ring);
 int
 rte_eth_from_ring(struct rte_ring *r)
 {
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index 91a1c3a98e..40d6e768bc 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -517,7 +517,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_softnic,
 	PMD_PARAM_CPU_ID "=<uint32> "
 );
 
-RTE_EXPORT_SYMBOL(rte_pmd_softnic_manage)
+RTE_EXPORT_SYMBOL(rte_pmd_softnic_manage);
 int
 rte_pmd_softnic_manage(uint16_t port_id)
 {
diff --git a/drivers/net/softnic/rte_eth_softnic_thread.c b/drivers/net/softnic/rte_eth_softnic_thread.c
index f72c836199..d18d7cf9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic_thread.c
+++ b/drivers/net/softnic/rte_eth_softnic_thread.c
@@ -555,7 +555,7 @@ rte_pmd_softnic_run_internal(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_softnic_run)
+RTE_EXPORT_SYMBOL(rte_pmd_softnic_run);
 int
 rte_pmd_softnic_run(uint16_t port_id)
 {
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 44bf2e3241..cd6698f353 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1046,7 +1046,7 @@ vhost_driver_setup(struct rte_eth_dev *eth_dev)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_vhost_get_queue_event)
+RTE_EXPORT_SYMBOL(rte_eth_vhost_get_queue_event);
 int
 rte_eth_vhost_get_queue_event(uint16_t port_id,
 		struct rte_eth_vhost_queue_event *event)
@@ -1084,7 +1084,7 @@ rte_eth_vhost_get_queue_event(uint16_t port_id,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_vhost_get_vid_from_port_id)
+RTE_EXPORT_SYMBOL(rte_eth_vhost_get_vid_from_port_id);
 int
 rte_eth_vhost_get_vid_from_port_id(uint16_t port_id)
 {
diff --git a/drivers/power/kvm_vm/guest_channel.c b/drivers/power/kvm_vm/guest_channel.c
index 42bfcedb56..7abffc2e3c 100644
--- a/drivers/power/kvm_vm/guest_channel.c
+++ b/drivers/power/kvm_vm/guest_channel.c
@@ -152,7 +152,7 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_guest_channel_send_msg)
+RTE_EXPORT_SYMBOL(rte_power_guest_channel_send_msg);
 int rte_power_guest_channel_send_msg(struct rte_power_channel_packet *pkt,
 			unsigned int lcore_id)
 {
@@ -214,7 +214,7 @@ int power_guest_channel_read_msg(void *pkt,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_guest_channel_receive_msg)
+RTE_EXPORT_SYMBOL(rte_power_guest_channel_receive_msg);
 int rte_power_guest_channel_receive_msg(void *pkt,
 		size_t pkt_len,
 		unsigned int lcore_id)
diff --git a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
index 60c2080740..bcb4373ec7 100644
--- a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
+++ b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
@@ -17,7 +17,7 @@
 #include "cnxk_rvu_lf.h"
 #include "cnxk_rvu_lf_driver.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_id_range_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_id_range_set);
 int
 rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to)
 {
@@ -32,7 +32,7 @@ rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to)
 	return roc_rvu_lf_msg_id_range_set(roc_rvu_lf, from, to);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_process);
 int
 rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
 			void *req, uint16_t req_len, void *rsp, uint16_t rsp_len)
@@ -48,7 +48,7 @@ rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
 	return roc_rvu_lf_msg_process(roc_rvu_lf, vf, msg_id, req, req_len, rsp, rsp_len);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_register);
 int
 rte_pmd_rvu_lf_msg_handler_register(uint8_t dev_id, rte_pmd_rvu_lf_msg_handler_cb_fn cb)
 {
@@ -63,7 +63,7 @@ rte_pmd_rvu_lf_msg_handler_register(uint8_t dev_id, rte_pmd_rvu_lf_msg_handler_c
 	return roc_rvu_lf_msg_handler_register(roc_rvu_lf, (roc_rvu_lf_msg_handler_cb_fn)cb);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_msg_handler_unregister);
 int
 rte_pmd_rvu_lf_msg_handler_unregister(uint8_t dev_id)
 {
@@ -78,7 +78,7 @@ rte_pmd_rvu_lf_msg_handler_unregister(uint8_t dev_id)
 	return roc_rvu_lf_msg_handler_unregister(roc_rvu_lf);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_register);
 int
 rte_pmd_rvu_lf_irq_register(uint8_t dev_id, unsigned int irq,
 			    rte_pmd_rvu_lf_intr_callback_fn cb, void *data)
@@ -94,7 +94,7 @@ rte_pmd_rvu_lf_irq_register(uint8_t dev_id, unsigned int irq,
 	return roc_rvu_lf_irq_register(roc_rvu_lf, irq, (roc_rvu_lf_intr_cb_fn)cb, data);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_irq_unregister);
 int
 rte_pmd_rvu_lf_irq_unregister(uint8_t dev_id, unsigned int irq,
 			      rte_pmd_rvu_lf_intr_callback_fn cb, void *data)
@@ -110,7 +110,7 @@ rte_pmd_rvu_lf_irq_unregister(uint8_t dev_id, unsigned int irq,
 	return roc_rvu_lf_irq_unregister(roc_rvu_lf, irq, (roc_rvu_lf_intr_cb_fn)cb, data);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_bar_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_bar_get);
 int
 rte_pmd_rvu_lf_bar_get(uint8_t dev_id, uint8_t bar_num, size_t *va, size_t *mask)
 {
@@ -135,21 +135,21 @@ rte_pmd_rvu_lf_bar_get(uint8_t dev_id, uint8_t bar_num, size_t *va, size_t *mask
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_npa_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_npa_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_npa_pf_func_get(void)
 {
 	return roc_npa_pf_func_get();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_sso_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_sso_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_sso_pf_func_get(void)
 {
 	return roc_sso_pf_func_get();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_pf_func_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmd_rvu_lf_pf_func_get);
 uint16_t
 rte_pmd_rvu_lf_pf_func_get(uint8_t dev_id)
 {
diff --git a/drivers/raw/ifpga/rte_pmd_ifpga.c b/drivers/raw/ifpga/rte_pmd_ifpga.c
index 620b35624b..5b2b634da2 100644
--- a/drivers/raw/ifpga/rte_pmd_ifpga.c
+++ b/drivers/raw/ifpga/rte_pmd_ifpga.c
@@ -13,7 +13,7 @@
 #include "base/ifpga_sec_mgr.h"
 
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_dev_id);
 int
 rte_pmd_ifpga_get_dev_id(const char *pci_addr, uint16_t *dev_id)
 {
@@ -102,7 +102,7 @@ get_share_data(struct opae_adapter *adapter)
 	return sd;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_rsu_status)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_rsu_status);
 int
 rte_pmd_ifpga_get_rsu_status(uint16_t dev_id, uint32_t *stat, uint32_t *prog)
 {
@@ -125,7 +125,7 @@ rte_pmd_ifpga_get_rsu_status(uint16_t dev_id, uint32_t *stat, uint32_t *prog)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_set_rsu_status)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_set_rsu_status);
 int
 rte_pmd_ifpga_set_rsu_status(uint16_t dev_id, uint32_t stat, uint32_t prog)
 {
@@ -267,7 +267,7 @@ get_port_property(struct opae_adapter *adapter, uint16_t port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_property)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_property);
 int
 rte_pmd_ifpga_get_property(uint16_t dev_id, rte_pmd_ifpga_prop *prop)
 {
@@ -304,7 +304,7 @@ rte_pmd_ifpga_get_property(uint16_t dev_id, rte_pmd_ifpga_prop *prop)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_phy_info)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_get_phy_info);
 int
 rte_pmd_ifpga_get_phy_info(uint16_t dev_id, rte_pmd_ifpga_phy_info *info)
 {
@@ -345,7 +345,7 @@ rte_pmd_ifpga_get_phy_info(uint16_t dev_id, rte_pmd_ifpga_phy_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_update_flash)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_update_flash);
 int
 rte_pmd_ifpga_update_flash(uint16_t dev_id, const char *image,
 	uint64_t *status)
@@ -359,7 +359,7 @@ rte_pmd_ifpga_update_flash(uint16_t dev_id, const char *image,
 	return opae_mgr_update_flash(adapter->mgr, image, status);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_stop_update)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_stop_update);
 int
 rte_pmd_ifpga_stop_update(uint16_t dev_id, int force)
 {
@@ -372,7 +372,7 @@ rte_pmd_ifpga_stop_update(uint16_t dev_id, int force)
 	return opae_mgr_stop_flash_update(adapter->mgr, force);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reboot_try)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reboot_try);
 int
 rte_pmd_ifpga_reboot_try(uint16_t dev_id)
 {
@@ -399,7 +399,7 @@ rte_pmd_ifpga_reboot_try(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reload)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_reload);
 int
 rte_pmd_ifpga_reload(uint16_t dev_id, int type, int page)
 {
@@ -412,7 +412,7 @@ rte_pmd_ifpga_reload(uint16_t dev_id, int type, int page)
 	return opae_mgr_reload(adapter->mgr, type, page);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_partial_reconfigure)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_partial_reconfigure);
 int
 rte_pmd_ifpga_partial_reconfigure(uint16_t dev_id, int port, const char *file)
 {
@@ -427,7 +427,7 @@ rte_pmd_ifpga_partial_reconfigure(uint16_t dev_id, int port, const char *file)
 	return ifpga_rawdev_partial_reconfigure(dev, port, file);
 }
 
-RTE_EXPORT_SYMBOL(rte_pmd_ifpga_cleanup)
+RTE_EXPORT_SYMBOL(rte_pmd_ifpga_cleanup);
 void
 rte_pmd_ifpga_cleanup(void)
 {
diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c
index 7056b1c117..5ec1202bd4 100644
--- a/lib/acl/acl_bld.c
+++ b/lib/acl/acl_bld.c
@@ -1622,7 +1622,7 @@ get_first_load_size(const struct rte_acl_config *cfg)
 	return (ofs < max_ofs) ? sizeof(uint32_t) : sizeof(uint8_t);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_build)
+RTE_EXPORT_SYMBOL(rte_acl_build);
 int
 rte_acl_build(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg)
 {
diff --git a/lib/acl/acl_run_scalar.c b/lib/acl/acl_run_scalar.c
index 32ebe3119b..24d160bf8c 100644
--- a/lib/acl/acl_run_scalar.c
+++ b/lib/acl/acl_run_scalar.c
@@ -108,7 +108,7 @@ scalar_transition(const uint64_t *trans_table, uint64_t transition,
 	return transition;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify_scalar)
+RTE_EXPORT_SYMBOL(rte_acl_classify_scalar);
 int
 rte_acl_classify_scalar(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories)
diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c
index 8c0ca29618..60e9d7d336 100644
--- a/lib/acl/rte_acl.c
+++ b/lib/acl/rte_acl.c
@@ -264,7 +264,7 @@ acl_get_best_alg(void)
 	return alg[i];
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_set_ctx_classify)
+RTE_EXPORT_SYMBOL(rte_acl_set_ctx_classify);
 extern int
 rte_acl_set_ctx_classify(struct rte_acl_ctx *ctx, enum rte_acl_classify_alg alg)
 {
@@ -287,7 +287,7 @@ rte_acl_set_ctx_classify(struct rte_acl_ctx *ctx, enum rte_acl_classify_alg alg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify_alg)
+RTE_EXPORT_SYMBOL(rte_acl_classify_alg);
 int
 rte_acl_classify_alg(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories,
@@ -300,7 +300,7 @@ rte_acl_classify_alg(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	return classify_fns[alg](ctx, data, results, num, categories);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_classify)
+RTE_EXPORT_SYMBOL(rte_acl_classify);
 int
 rte_acl_classify(const struct rte_acl_ctx *ctx, const uint8_t **data,
 	uint32_t *results, uint32_t num, uint32_t categories)
@@ -309,7 +309,7 @@ rte_acl_classify(const struct rte_acl_ctx *ctx, const uint8_t **data,
 		ctx->alg);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_find_existing)
+RTE_EXPORT_SYMBOL(rte_acl_find_existing);
 struct rte_acl_ctx *
 rte_acl_find_existing(const char *name)
 {
@@ -334,7 +334,7 @@ rte_acl_find_existing(const char *name)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_free)
+RTE_EXPORT_SYMBOL(rte_acl_free);
 void
 rte_acl_free(struct rte_acl_ctx *ctx)
 {
@@ -367,7 +367,7 @@ rte_acl_free(struct rte_acl_ctx *ctx)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_create)
+RTE_EXPORT_SYMBOL(rte_acl_create);
 struct rte_acl_ctx *
 rte_acl_create(const struct rte_acl_param *param)
 {
@@ -464,7 +464,7 @@ acl_check_rule(const struct rte_acl_rule_data *rd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_acl_add_rules)
+RTE_EXPORT_SYMBOL(rte_acl_add_rules);
 int
 rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules,
 	uint32_t num)
@@ -494,7 +494,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules,
  * Reset all rules.
  * Note that RT structures are not affected.
  */
-RTE_EXPORT_SYMBOL(rte_acl_reset_rules)
+RTE_EXPORT_SYMBOL(rte_acl_reset_rules);
 void
 rte_acl_reset_rules(struct rte_acl_ctx *ctx)
 {
@@ -505,7 +505,7 @@ rte_acl_reset_rules(struct rte_acl_ctx *ctx)
 /*
  * Reset all rules and destroys RT structures.
  */
-RTE_EXPORT_SYMBOL(rte_acl_reset)
+RTE_EXPORT_SYMBOL(rte_acl_reset);
 void
 rte_acl_reset(struct rte_acl_ctx *ctx)
 {
@@ -518,7 +518,7 @@ rte_acl_reset(struct rte_acl_ctx *ctx)
 /*
  * Dump ACL context to the stdout.
  */
-RTE_EXPORT_SYMBOL(rte_acl_dump)
+RTE_EXPORT_SYMBOL(rte_acl_dump);
 void
 rte_acl_dump(const struct rte_acl_ctx *ctx)
 {
@@ -538,7 +538,7 @@ rte_acl_dump(const struct rte_acl_ctx *ctx)
 /*
  * Dump all ACL contexts to the stdout.
  */
-RTE_EXPORT_SYMBOL(rte_acl_list_dump)
+RTE_EXPORT_SYMBOL(rte_acl_list_dump);
 void
 rte_acl_list_dump(void)
 {
diff --git a/lib/argparse/rte_argparse.c b/lib/argparse/rte_argparse.c
index 331f05f01d..1ddec956e9 100644
--- a/lib/argparse/rte_argparse.c
+++ b/lib/argparse/rte_argparse.c
@@ -793,7 +793,7 @@ show_args_help(const struct rte_argparse *obj)
 		printf("\n");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse, 24.03);
 int
 rte_argparse_parse(const struct rte_argparse *obj, int argc, char **argv)
 {
@@ -832,7 +832,7 @@ rte_argparse_parse(const struct rte_argparse *obj, int argc, char **argv)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse_type, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_argparse_parse_type, 24.03);
 int
 rte_argparse_parse_type(const char *str, enum rte_argparse_value_type val_type, void *val)
 {
diff --git a/lib/bbdev/bbdev_trace_points.c b/lib/bbdev/bbdev_trace_points.c
index 942c7be819..ac7ab2d553 100644
--- a/lib/bbdev/bbdev_trace_points.c
+++ b/lib/bbdev/bbdev_trace_points.c
@@ -22,9 +22,9 @@ RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_queue_start,
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_queue_stop,
 	lib.bbdev.queue.stop)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_enqueue, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_enqueue, 25.03);
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_enqueue,
 	lib.bbdev.enq)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_dequeue, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_bbdev_trace_dequeue, 25.03);
 RTE_TRACE_POINT_REGISTER(rte_bbdev_trace_dequeue,
 	lib.bbdev.deq)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index e0f8c8eb0d..eecaae2396 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -93,7 +93,7 @@ static rte_spinlock_t rte_bbdev_cb_lock = RTE_SPINLOCK_INITIALIZER;
  * Global array of all devices. This is not static because it's used by the
  * inline enqueue and dequeue functions
  */
-RTE_EXPORT_SYMBOL(rte_bbdev_devices)
+RTE_EXPORT_SYMBOL(rte_bbdev_devices);
 struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
 
 /* Global array with rte_bbdev_data structures */
@@ -175,7 +175,7 @@ find_free_dev_id(void)
 	return RTE_BBDEV_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_allocate)
+RTE_EXPORT_SYMBOL(rte_bbdev_allocate);
 struct rte_bbdev *
 rte_bbdev_allocate(const char *name)
 {
@@ -235,7 +235,7 @@ rte_bbdev_allocate(const char *name)
 	return bbdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_release)
+RTE_EXPORT_SYMBOL(rte_bbdev_release);
 int
 rte_bbdev_release(struct rte_bbdev *bbdev)
 {
@@ -271,7 +271,7 @@ rte_bbdev_release(struct rte_bbdev *bbdev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_get_named_dev)
+RTE_EXPORT_SYMBOL(rte_bbdev_get_named_dev);
 struct rte_bbdev *
 rte_bbdev_get_named_dev(const char *name)
 {
@@ -292,14 +292,14 @@ rte_bbdev_get_named_dev(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_count)
+RTE_EXPORT_SYMBOL(rte_bbdev_count);
 uint16_t
 rte_bbdev_count(void)
 {
 	return num_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_is_valid)
+RTE_EXPORT_SYMBOL(rte_bbdev_is_valid);
 bool
 rte_bbdev_is_valid(uint16_t dev_id)
 {
@@ -309,7 +309,7 @@ rte_bbdev_is_valid(uint16_t dev_id)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_find_next)
+RTE_EXPORT_SYMBOL(rte_bbdev_find_next);
 uint16_t
 rte_bbdev_find_next(uint16_t dev_id)
 {
@@ -320,7 +320,7 @@ rte_bbdev_find_next(uint16_t dev_id)
 	return dev_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_setup_queues)
+RTE_EXPORT_SYMBOL(rte_bbdev_setup_queues);
 int
 rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
 {
@@ -413,7 +413,7 @@ rte_bbdev_setup_queues(uint16_t dev_id, uint16_t num_queues, int socket_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_intr_enable)
+RTE_EXPORT_SYMBOL(rte_bbdev_intr_enable);
 int
 rte_bbdev_intr_enable(uint16_t dev_id)
 {
@@ -446,7 +446,7 @@ rte_bbdev_intr_enable(uint16_t dev_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_configure)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_configure);
 int
 rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
 		const struct rte_bbdev_queue_conf *conf)
@@ -568,7 +568,7 @@ rte_bbdev_queue_configure(uint16_t dev_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_start)
+RTE_EXPORT_SYMBOL(rte_bbdev_start);
 int
 rte_bbdev_start(uint16_t dev_id)
 {
@@ -603,7 +603,7 @@ rte_bbdev_start(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stop)
+RTE_EXPORT_SYMBOL(rte_bbdev_stop);
 int
 rte_bbdev_stop(uint16_t dev_id)
 {
@@ -627,7 +627,7 @@ rte_bbdev_stop(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_close)
+RTE_EXPORT_SYMBOL(rte_bbdev_close);
 int
 rte_bbdev_close(uint16_t dev_id)
 {
@@ -675,7 +675,7 @@ rte_bbdev_close(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_start)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_start);
 int
 rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
 {
@@ -708,7 +708,7 @@ rte_bbdev_queue_start(uint16_t dev_id, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_stop)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_stop);
 int
 rte_bbdev_queue_stop(uint16_t dev_id, uint16_t queue_id)
 {
@@ -773,7 +773,7 @@ reset_stats_in_queues(struct rte_bbdev *dev)
 	rte_bbdev_log_debug("Reset stats on %u", dev->data->dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stats_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_stats_get);
 int
 rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
 {
@@ -797,7 +797,7 @@ rte_bbdev_stats_get(uint16_t dev_id, struct rte_bbdev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_bbdev_stats_reset);
 int
 rte_bbdev_stats_reset(uint16_t dev_id)
 {
@@ -815,7 +815,7 @@ rte_bbdev_stats_reset(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_info_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_info_get);
 int
 rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
 {
@@ -844,7 +844,7 @@ rte_bbdev_info_get(uint16_t dev_id, struct rte_bbdev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_info_get);
 int
 rte_bbdev_queue_info_get(uint16_t dev_id, uint16_t queue_id,
 		struct rte_bbdev_queue_info *queue_info)
@@ -931,7 +931,7 @@ bbdev_op_init(struct rte_mempool *mempool, void *arg, void *element,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_bbdev_op_pool_create);
 struct rte_mempool *
 rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
 		unsigned int num_elements, unsigned int cache_size,
@@ -979,7 +979,7 @@ rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_callback_register)
+RTE_EXPORT_SYMBOL(rte_bbdev_callback_register);
 int
 rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
 		rte_bbdev_cb_fn cb_fn, void *cb_arg)
@@ -1025,7 +1025,7 @@ rte_bbdev_callback_register(uint16_t dev_id, enum rte_bbdev_event_type event,
 	return (user_cb == NULL) ? -ENOMEM : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_bbdev_callback_unregister);
 int
 rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
 		rte_bbdev_cb_fn cb_fn, void *cb_arg)
@@ -1071,7 +1071,7 @@ rte_bbdev_callback_unregister(uint16_t dev_id, enum rte_bbdev_event_type event,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_pmd_callback_process)
+RTE_EXPORT_SYMBOL(rte_bbdev_pmd_callback_process);
 void
 rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
 	enum rte_bbdev_event_type event, void *ret_param)
@@ -1114,7 +1114,7 @@ rte_bbdev_pmd_callback_process(struct rte_bbdev *dev,
 	rte_spinlock_unlock(&rte_bbdev_cb_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_enable)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_enable);
 int
 rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
 {
@@ -1126,7 +1126,7 @@ rte_bbdev_queue_intr_enable(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_intr_enable(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_disable)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_disable);
 int
 rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
 {
@@ -1138,7 +1138,7 @@ rte_bbdev_queue_intr_disable(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_intr_disable(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_ctl)
+RTE_EXPORT_SYMBOL(rte_bbdev_queue_intr_ctl);
 int
 rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 		void *data)
@@ -1176,7 +1176,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_bbdev_op_type_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_op_type_str);
 const char *
 rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
 {
@@ -1197,7 +1197,7 @@ rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_device_status_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_device_status_str);
 const char *
 rte_bbdev_device_status_str(enum rte_bbdev_device_status status)
 {
@@ -1221,7 +1221,7 @@ rte_bbdev_device_status_str(enum rte_bbdev_device_status status)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bbdev_enqueue_status_str)
+RTE_EXPORT_SYMBOL(rte_bbdev_enqueue_status_str);
 const char *
 rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status)
 {
@@ -1241,7 +1241,7 @@ rte_bbdev_enqueue_status_str(enum rte_bbdev_enqueue_status status)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_queue_ops_dump, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_queue_ops_dump, 24.11);
 int
 rte_bbdev_queue_ops_dump(uint16_t dev_id, uint16_t queue_id, FILE *f)
 {
@@ -1281,7 +1281,7 @@ rte_bbdev_queue_ops_dump(uint16_t dev_id, uint16_t queue_id, FILE *f)
 	return dev->dev_ops->queue_ops_dump(dev, queue_id, f);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_ops_param_string, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bbdev_ops_param_string, 24.11);
 char *
 rte_bbdev_ops_param_string(void *op, enum rte_bbdev_op_type op_type, char *str, uint32_t len)
 {
diff --git a/lib/bitratestats/rte_bitrate.c b/lib/bitratestats/rte_bitrate.c
index 592e478e06..4fe8ce452c 100644
--- a/lib/bitratestats/rte_bitrate.c
+++ b/lib/bitratestats/rte_bitrate.c
@@ -29,7 +29,7 @@ struct rte_stats_bitrates {
 	uint16_t id_stats_set;
 };
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_create)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_create);
 struct rte_stats_bitrates *
 rte_stats_bitrate_create(void)
 {
@@ -37,14 +37,14 @@ rte_stats_bitrate_create(void)
 		RTE_CACHE_LINE_SIZE);
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_free)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_free);
 void
 rte_stats_bitrate_free(struct rte_stats_bitrates *bitrate_data)
 {
 	rte_free(bitrate_data);
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_reg)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_reg);
 int
 rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data)
 {
@@ -66,7 +66,7 @@ rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data)
 	return return_value;
 }
 
-RTE_EXPORT_SYMBOL(rte_stats_bitrate_calc)
+RTE_EXPORT_SYMBOL(rte_stats_bitrate_calc);
 int
 rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data,
 			uint16_t port_id)
diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c
index 5239b3e11e..2a21934a22 100644
--- a/lib/bpf/bpf.c
+++ b/lib/bpf/bpf.c
@@ -11,7 +11,7 @@
 
 #include "bpf_impl.h"
 
-RTE_EXPORT_SYMBOL(rte_bpf_destroy)
+RTE_EXPORT_SYMBOL(rte_bpf_destroy);
 void
 rte_bpf_destroy(struct rte_bpf *bpf)
 {
@@ -22,7 +22,7 @@ rte_bpf_destroy(struct rte_bpf *bpf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_get_jit)
+RTE_EXPORT_SYMBOL(rte_bpf_get_jit);
 int
 rte_bpf_get_jit(const struct rte_bpf *bpf, struct rte_bpf_jit *jit)
 {
diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index 86e703299d..129457741d 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -518,7 +518,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
+RTE_EXPORT_SYMBOL(rte_bpf_convert);
 struct rte_bpf_prm *
 rte_bpf_convert(const struct bpf_program *prog)
 {
diff --git a/lib/bpf/bpf_dump.c b/lib/bpf/bpf_dump.c
index 6ee0e32b43..e2a4f48a2d 100644
--- a/lib/bpf/bpf_dump.c
+++ b/lib/bpf/bpf_dump.c
@@ -44,7 +44,7 @@ static const char *const jump_tbl[16] = {
 	[EBPF_CALL >> 4] = "call", [EBPF_EXIT >> 4] = "exit",
 };
 
-RTE_EXPORT_SYMBOL(rte_bpf_dump)
+RTE_EXPORT_SYMBOL(rte_bpf_dump);
 void rte_bpf_dump(FILE *f, const struct ebpf_insn *buf, uint32_t len)
 {
 	uint32_t i;
diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c
index 4b5ea9f1a4..7090be62e1 100644
--- a/lib/bpf/bpf_exec.c
+++ b/lib/bpf/bpf_exec.c
@@ -476,7 +476,7 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM])
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_exec_burst)
+RTE_EXPORT_SYMBOL(rte_bpf_exec_burst);
 uint32_t
 rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	uint32_t num)
@@ -496,7 +496,7 @@ rte_bpf_exec_burst(const struct rte_bpf *bpf, void *ctx[], uint64_t rc[],
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_exec)
+RTE_EXPORT_SYMBOL(rte_bpf_exec);
 uint64_t
 rte_bpf_exec(const struct rte_bpf *bpf, void *ctx)
 {
diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c
index 556e613762..5050cbf34d 100644
--- a/lib/bpf/bpf_load.c
+++ b/lib/bpf/bpf_load.c
@@ -80,7 +80,7 @@ bpf_check_xsym(const struct rte_bpf_xsym *xsym)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_load);
 struct rte_bpf *
 rte_bpf_load(const struct rte_bpf_prm *prm)
 {
diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c
index 1d30ba17e2..26cf263ba2 100644
--- a/lib/bpf/bpf_load_elf.c
+++ b/lib/bpf/bpf_load_elf.c
@@ -295,7 +295,7 @@ bpf_load_elf(const struct rte_bpf_prm *prm, int32_t fd, const char *section)
 	return bpf;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load);
 struct rte_bpf *
 rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	const char *sname)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index 01f813c56b..7167603bf0 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -466,7 +466,7 @@ bpf_eth_unload(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_unload)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_unload);
 void
 rte_bpf_eth_rx_unload(uint16_t port, uint16_t queue)
 {
@@ -478,7 +478,7 @@ rte_bpf_eth_rx_unload(uint16_t port, uint16_t queue)
 	rte_spinlock_unlock(&cbh->lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_unload)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_unload);
 void
 rte_bpf_eth_tx_unload(uint16_t port, uint16_t queue)
 {
@@ -560,7 +560,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_rx_elf_load);
 int
 rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
@@ -577,7 +577,7 @@ rte_bpf_eth_rx_elf_load(uint16_t port, uint16_t queue,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_eth_tx_elf_load);
 int
 rte_bpf_eth_tx_elf_load(uint16_t port, uint16_t queue,
 	const struct rte_bpf_prm *prm, const char *fname, const char *sname,
diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c
index dea0d703ca..fdefa70e91 100644
--- a/lib/bpf/bpf_stub.c
+++ b/lib/bpf/bpf_stub.c
@@ -11,7 +11,7 @@
  */
 
 #ifndef RTE_LIBRTE_BPF_ELF
-RTE_EXPORT_SYMBOL(rte_bpf_elf_load)
+RTE_EXPORT_SYMBOL(rte_bpf_elf_load);
 struct rte_bpf *
 rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 	const char *sname)
@@ -29,7 +29,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname,
 #endif
 
 #ifndef RTE_HAS_LIBPCAP
-RTE_EXPORT_SYMBOL(rte_bpf_convert)
+RTE_EXPORT_SYMBOL(rte_bpf_convert);
 struct rte_bpf_prm *
 rte_bpf_convert(const struct bpf_program *prog)
 {
diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c
index 8bbdcf146e..fcf6e31924 100644
--- a/lib/cfgfile/rte_cfgfile.c
+++ b/lib/cfgfile/rte_cfgfile.c
@@ -159,7 +159,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_load)
+RTE_EXPORT_SYMBOL(rte_cfgfile_load);
 struct rte_cfgfile *
 rte_cfgfile_load(const char *filename, int flags)
 {
@@ -167,7 +167,7 @@ rte_cfgfile_load(const char *filename, int flags)
 					    &default_cfgfile_params);
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_load_with_params)
+RTE_EXPORT_SYMBOL(rte_cfgfile_load_with_params);
 struct rte_cfgfile *
 rte_cfgfile_load_with_params(const char *filename, int flags,
 			     const struct rte_cfgfile_parameters *params)
@@ -272,7 +272,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_create)
+RTE_EXPORT_SYMBOL(rte_cfgfile_create);
 struct rte_cfgfile *
 rte_cfgfile_create(int flags)
 {
@@ -329,7 +329,7 @@ rte_cfgfile_create(int flags)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_add_section)
+RTE_EXPORT_SYMBOL(rte_cfgfile_add_section);
 int
 rte_cfgfile_add_section(struct rte_cfgfile *cfg, const char *sectionname)
 {
@@ -371,7 +371,7 @@ rte_cfgfile_add_section(struct rte_cfgfile *cfg, const char *sectionname)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_add_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_add_entry);
 int rte_cfgfile_add_entry(struct rte_cfgfile *cfg,
 		const char *sectionname, const char *entryname,
 		const char *entryvalue)
@@ -396,7 +396,7 @@ int rte_cfgfile_add_entry(struct rte_cfgfile *cfg,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_set_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_set_entry);
 int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname, const char *entryvalue)
 {
@@ -425,7 +425,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_save)
+RTE_EXPORT_SYMBOL(rte_cfgfile_save);
 int rte_cfgfile_save(struct rte_cfgfile *cfg, const char *filename)
 {
 	int i, j;
@@ -450,7 +450,7 @@ int rte_cfgfile_save(struct rte_cfgfile *cfg, const char *filename)
 	return fclose(f);
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_close)
+RTE_EXPORT_SYMBOL(rte_cfgfile_close);
 int rte_cfgfile_close(struct rte_cfgfile *cfg)
 {
 	int i;
@@ -474,7 +474,7 @@ int rte_cfgfile_close(struct rte_cfgfile *cfg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_num_sections)
+RTE_EXPORT_SYMBOL(rte_cfgfile_num_sections);
 int
 rte_cfgfile_num_sections(struct rte_cfgfile *cfg, const char *sectionname,
 size_t length)
@@ -488,7 +488,7 @@ size_t length)
 	return num_sections;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_sections)
+RTE_EXPORT_SYMBOL(rte_cfgfile_sections);
 int
 rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
 	int max_sections)
@@ -501,14 +501,14 @@ rte_cfgfile_sections(struct rte_cfgfile *cfg, char *sections[],
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_has_section)
+RTE_EXPORT_SYMBOL(rte_cfgfile_has_section);
 int
 rte_cfgfile_has_section(struct rte_cfgfile *cfg, const char *sectionname)
 {
 	return _get_section(cfg, sectionname) != NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries);
 int
 rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
 	const char *sectionname)
@@ -519,7 +519,7 @@ rte_cfgfile_section_num_entries(struct rte_cfgfile *cfg,
 	return s->num_entries;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries_by_index)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_num_entries_by_index);
 int
 rte_cfgfile_section_num_entries_by_index(struct rte_cfgfile *cfg,
 	char *sectionname, int index)
@@ -532,7 +532,7 @@ rte_cfgfile_section_num_entries_by_index(struct rte_cfgfile *cfg,
 	strlcpy(sectionname, sect->name, CFG_NAME_LEN);
 	return sect->num_entries;
 }
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries);
 int
 rte_cfgfile_section_entries(struct rte_cfgfile *cfg, const char *sectionname,
 		struct rte_cfgfile_entry *entries, int max_entries)
@@ -546,7 +546,7 @@ rte_cfgfile_section_entries(struct rte_cfgfile *cfg, const char *sectionname,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries_by_index)
+RTE_EXPORT_SYMBOL(rte_cfgfile_section_entries_by_index);
 int
 rte_cfgfile_section_entries_by_index(struct rte_cfgfile *cfg, int index,
 		char *sectionname,
@@ -564,7 +564,7 @@ rte_cfgfile_section_entries_by_index(struct rte_cfgfile *cfg, int index,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_get_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_get_entry);
 const char *
 rte_cfgfile_get_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname)
@@ -580,7 +580,7 @@ rte_cfgfile_get_entry(struct rte_cfgfile *cfg, const char *sectionname,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cfgfile_has_entry)
+RTE_EXPORT_SYMBOL(rte_cfgfile_has_entry);
 int
 rte_cfgfile_has_entry(struct rte_cfgfile *cfg, const char *sectionname,
 		const char *entryname)
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index d1003f0b8e..eae053b184 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -40,7 +40,7 @@ cmdline_complete_buffer(struct rdline *rdl, const char *buf,
 	return cmdline_complete(cl, buf, state, dstbuf, dstsize);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_write_char)
+RTE_EXPORT_SYMBOL(cmdline_write_char);
 int
 cmdline_write_char(struct rdline *rdl, char c)
 {
@@ -59,7 +59,7 @@ cmdline_write_char(struct rdline *rdl, char c)
 }
 
 
-RTE_EXPORT_SYMBOL(cmdline_set_prompt)
+RTE_EXPORT_SYMBOL(cmdline_set_prompt);
 void
 cmdline_set_prompt(struct cmdline *cl, const char *prompt)
 {
@@ -68,7 +68,7 @@ cmdline_set_prompt(struct cmdline *cl, const char *prompt)
 	strlcpy(cl->prompt, prompt, sizeof(cl->prompt));
 }
 
-RTE_EXPORT_SYMBOL(cmdline_new)
+RTE_EXPORT_SYMBOL(cmdline_new);
 struct cmdline *
 cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 {
@@ -99,14 +99,14 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 	return cl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_rdline)
+RTE_EXPORT_SYMBOL(cmdline_get_rdline);
 struct rdline*
 cmdline_get_rdline(struct cmdline *cl)
 {
 	return &cl->rdl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_free)
+RTE_EXPORT_SYMBOL(cmdline_free);
 void
 cmdline_free(struct cmdline *cl)
 {
@@ -122,7 +122,7 @@ cmdline_free(struct cmdline *cl)
 	free(cl);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_printf)
+RTE_EXPORT_SYMBOL(cmdline_printf);
 void
 cmdline_printf(const struct cmdline *cl, const char *fmt, ...)
 {
@@ -138,7 +138,7 @@ cmdline_printf(const struct cmdline *cl, const char *fmt, ...)
 	va_end(ap);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_in)
+RTE_EXPORT_SYMBOL(cmdline_in);
 int
 cmdline_in(struct cmdline *cl, const char *buf, int size)
 {
@@ -176,7 +176,7 @@ cmdline_in(struct cmdline *cl, const char *buf, int size)
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_quit)
+RTE_EXPORT_SYMBOL(cmdline_quit);
 void
 cmdline_quit(struct cmdline *cl)
 {
@@ -186,7 +186,7 @@ cmdline_quit(struct cmdline *cl)
 	rdline_quit(&cl->rdl);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_interact)
+RTE_EXPORT_SYMBOL(cmdline_interact);
 void
 cmdline_interact(struct cmdline *cl)
 {
diff --git a/lib/cmdline/cmdline_cirbuf.c b/lib/cmdline/cmdline_cirbuf.c
index 07d9fc6b90..b74d61bb52 100644
--- a/lib/cmdline/cmdline_cirbuf.c
+++ b/lib/cmdline/cmdline_cirbuf.c
@@ -13,7 +13,7 @@
 #include <eal_export.h>
 
 
-RTE_EXPORT_SYMBOL(cirbuf_init)
+RTE_EXPORT_SYMBOL(cirbuf_init);
 int
 cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int maxlen)
 {
@@ -29,7 +29,7 @@ cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int max
 
 /* multiple add */
 
-RTE_EXPORT_SYMBOL(cirbuf_add_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_add_buf_head);
 int
 cirbuf_add_buf_head(struct cirbuf *cbuf, const char *c, unsigned int n)
 {
@@ -61,7 +61,7 @@ cirbuf_add_buf_head(struct cirbuf *cbuf, const char *c, unsigned int n)
 
 /* multiple add */
 
-RTE_EXPORT_SYMBOL(cirbuf_add_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_add_buf_tail);
 int
 cirbuf_add_buf_tail(struct cirbuf *cbuf, const char *c, unsigned int n)
 {
@@ -105,7 +105,7 @@ __cirbuf_add_head(struct cirbuf * cbuf, char c)
 	cbuf->len ++;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_head_safe)
+RTE_EXPORT_SYMBOL(cirbuf_add_head_safe);
 int
 cirbuf_add_head_safe(struct cirbuf * cbuf, char c)
 {
@@ -116,7 +116,7 @@ cirbuf_add_head_safe(struct cirbuf * cbuf, char c)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_head)
+RTE_EXPORT_SYMBOL(cirbuf_add_head);
 void
 cirbuf_add_head(struct cirbuf * cbuf, char c)
 {
@@ -136,7 +136,7 @@ __cirbuf_add_tail(struct cirbuf * cbuf, char c)
 	cbuf->len ++;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_tail_safe)
+RTE_EXPORT_SYMBOL(cirbuf_add_tail_safe);
 int
 cirbuf_add_tail_safe(struct cirbuf * cbuf, char c)
 {
@@ -147,7 +147,7 @@ cirbuf_add_tail_safe(struct cirbuf * cbuf, char c)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_add_tail)
+RTE_EXPORT_SYMBOL(cirbuf_add_tail);
 void
 cirbuf_add_tail(struct cirbuf * cbuf, char c)
 {
@@ -190,7 +190,7 @@ __cirbuf_shift_right(struct cirbuf *cbuf)
 }
 
 /* XXX we could do a better algorithm here... */
-RTE_EXPORT_SYMBOL(cirbuf_align_left)
+RTE_EXPORT_SYMBOL(cirbuf_align_left);
 int
 cirbuf_align_left(struct cirbuf * cbuf)
 {
@@ -212,7 +212,7 @@ cirbuf_align_left(struct cirbuf * cbuf)
 }
 
 /* XXX we could do a better algorithm here... */
-RTE_EXPORT_SYMBOL(cirbuf_align_right)
+RTE_EXPORT_SYMBOL(cirbuf_align_right);
 int
 cirbuf_align_right(struct cirbuf * cbuf)
 {
@@ -235,7 +235,7 @@ cirbuf_align_right(struct cirbuf * cbuf)
 
 /* buffer del */
 
-RTE_EXPORT_SYMBOL(cirbuf_del_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_del_buf_head);
 int
 cirbuf_del_buf_head(struct cirbuf *cbuf, unsigned int size)
 {
@@ -256,7 +256,7 @@ cirbuf_del_buf_head(struct cirbuf *cbuf, unsigned int size)
 
 /* buffer del */
 
-RTE_EXPORT_SYMBOL(cirbuf_del_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_del_buf_tail);
 int
 cirbuf_del_buf_tail(struct cirbuf *cbuf, unsigned int size)
 {
@@ -287,7 +287,7 @@ __cirbuf_del_head(struct cirbuf * cbuf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_head_safe)
+RTE_EXPORT_SYMBOL(cirbuf_del_head_safe);
 int
 cirbuf_del_head_safe(struct cirbuf * cbuf)
 {
@@ -298,7 +298,7 @@ cirbuf_del_head_safe(struct cirbuf * cbuf)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_head)
+RTE_EXPORT_SYMBOL(cirbuf_del_head);
 void
 cirbuf_del_head(struct cirbuf * cbuf)
 {
@@ -317,7 +317,7 @@ __cirbuf_del_tail(struct cirbuf * cbuf)
 	}
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_tail_safe)
+RTE_EXPORT_SYMBOL(cirbuf_del_tail_safe);
 int
 cirbuf_del_tail_safe(struct cirbuf * cbuf)
 {
@@ -328,7 +328,7 @@ cirbuf_del_tail_safe(struct cirbuf * cbuf)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(cirbuf_del_tail)
+RTE_EXPORT_SYMBOL(cirbuf_del_tail);
 void
 cirbuf_del_tail(struct cirbuf * cbuf)
 {
@@ -337,7 +337,7 @@ cirbuf_del_tail(struct cirbuf * cbuf)
 
 /* convert to buffer */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_buf_head)
+RTE_EXPORT_SYMBOL(cirbuf_get_buf_head);
 int
 cirbuf_get_buf_head(struct cirbuf *cbuf, char *c, unsigned int size)
 {
@@ -376,7 +376,7 @@ cirbuf_get_buf_head(struct cirbuf *cbuf, char *c, unsigned int size)
 
 /* convert to buffer */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_buf_tail)
+RTE_EXPORT_SYMBOL(cirbuf_get_buf_tail);
 int
 cirbuf_get_buf_tail(struct cirbuf *cbuf, char *c, unsigned int size)
 {
@@ -416,7 +416,7 @@ cirbuf_get_buf_tail(struct cirbuf *cbuf, char *c, unsigned int size)
 
 /* get head or get tail */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_head)
+RTE_EXPORT_SYMBOL(cirbuf_get_head);
 char
 cirbuf_get_head(struct cirbuf * cbuf)
 {
@@ -425,7 +425,7 @@ cirbuf_get_head(struct cirbuf * cbuf)
 
 /* get head or get tail */
 
-RTE_EXPORT_SYMBOL(cirbuf_get_tail)
+RTE_EXPORT_SYMBOL(cirbuf_get_tail);
 char
 cirbuf_get_tail(struct cirbuf * cbuf)
 {
diff --git a/lib/cmdline/cmdline_parse.c b/lib/cmdline/cmdline_parse.c
index 201fddb8c3..cfaba5f83b 100644
--- a/lib/cmdline/cmdline_parse.c
+++ b/lib/cmdline/cmdline_parse.c
@@ -50,7 +50,7 @@ iscomment(char c)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_isendoftoken)
+RTE_EXPORT_SYMBOL(cmdline_isendoftoken);
 int
 cmdline_isendoftoken(char c)
 {
@@ -298,21 +298,21 @@ __cmdline_parse(struct cmdline *cl, const char *buf, bool call_fn)
 	return linelen;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse)
+RTE_EXPORT_SYMBOL(cmdline_parse);
 int
 cmdline_parse(struct cmdline *cl, const char *buf)
 {
 	return __cmdline_parse(cl, buf, true);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_check)
+RTE_EXPORT_SYMBOL(cmdline_parse_check);
 int
 cmdline_parse_check(struct cmdline *cl, const char *buf)
 {
 	return __cmdline_parse(cl, buf, false);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete)
+RTE_EXPORT_SYMBOL(cmdline_complete);
 int
 cmdline_complete(struct cmdline *cl, const char *buf, int *state,
 		 char *dst, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_bool.c b/lib/cmdline/cmdline_parse_bool.c
index e03cc3d545..4ef6b8ac68 100644
--- a/lib/cmdline/cmdline_parse_bool.c
+++ b/lib/cmdline/cmdline_parse_bool.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse_bool.h"
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(cmdline_token_bool_ops, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(cmdline_token_bool_ops, 25.03);
 struct cmdline_token_ops cmdline_token_bool_ops = {
 	.parse = cmdline_parse_bool,
 	.complete_get_nb = NULL,
diff --git a/lib/cmdline/cmdline_parse_etheraddr.c b/lib/cmdline/cmdline_parse_etheraddr.c
index 7358572ba1..eec5a71b9d 100644
--- a/lib/cmdline/cmdline_parse_etheraddr.c
+++ b/lib/cmdline/cmdline_parse_etheraddr.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_etheraddr.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_etheraddr_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_etheraddr_ops);
 struct cmdline_token_ops cmdline_token_etheraddr_ops = {
 	.parse = cmdline_parse_etheraddr,
 	.complete_get_nb = NULL,
@@ -22,7 +22,7 @@ struct cmdline_token_ops cmdline_token_etheraddr_ops = {
 	.get_help = cmdline_get_help_etheraddr,
 };
 
-RTE_EXPORT_SYMBOL(cmdline_parse_etheraddr)
+RTE_EXPORT_SYMBOL(cmdline_parse_etheraddr);
 int
 cmdline_parse_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 	const char *buf, void *res, unsigned ressize)
@@ -54,7 +54,7 @@ cmdline_parse_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_etheraddr)
+RTE_EXPORT_SYMBOL(cmdline_get_help_etheraddr);
 int
 cmdline_get_help_etheraddr(__rte_unused cmdline_parse_token_hdr_t *tk,
 			       char *dstbuf, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_ipaddr.c b/lib/cmdline/cmdline_parse_ipaddr.c
index 55522016c8..c44275fd42 100644
--- a/lib/cmdline/cmdline_parse_ipaddr.c
+++ b/lib/cmdline/cmdline_parse_ipaddr.c
@@ -15,7 +15,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_ipaddr.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_ipaddr_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_ipaddr_ops);
 struct cmdline_token_ops cmdline_token_ipaddr_ops = {
 	.parse = cmdline_parse_ipaddr,
 	.complete_get_nb = NULL,
@@ -26,7 +26,7 @@ struct cmdline_token_ops cmdline_token_ipaddr_ops = {
 #define PREFIXMAX 128
 #define V4PREFIXMAX 32
 
-RTE_EXPORT_SYMBOL(cmdline_parse_ipaddr)
+RTE_EXPORT_SYMBOL(cmdline_parse_ipaddr);
 int
 cmdline_parse_ipaddr(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	unsigned ressize)
@@ -93,7 +93,7 @@ cmdline_parse_ipaddr(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_ipaddr)
+RTE_EXPORT_SYMBOL(cmdline_get_help_ipaddr);
 int cmdline_get_help_ipaddr(cmdline_parse_token_hdr_t *tk, char *dstbuf,
 			    unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_parse_num.c b/lib/cmdline/cmdline_parse_num.c
index f21796bedb..a4be661ed5 100644
--- a/lib/cmdline/cmdline_parse_num.c
+++ b/lib/cmdline/cmdline_parse_num.c
@@ -21,7 +21,7 @@
 #define debug_printf(...) do {} while (0)
 #endif
 
-RTE_EXPORT_SYMBOL(cmdline_token_num_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_num_ops);
 struct cmdline_token_ops cmdline_token_num_ops = {
 	.parse = cmdline_parse_num,
 	.complete_get_nb = NULL,
@@ -94,7 +94,7 @@ check_res_size(struct cmdline_token_num_data *nd, unsigned ressize)
 }
 
 /* parse an int */
-RTE_EXPORT_SYMBOL(cmdline_parse_num)
+RTE_EXPORT_SYMBOL(cmdline_parse_num);
 int
 cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
 	unsigned ressize)
@@ -316,7 +316,7 @@ cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
 
 
 /* parse an int */
-RTE_EXPORT_SYMBOL(cmdline_get_help_num)
+RTE_EXPORT_SYMBOL(cmdline_get_help_num);
 int
 cmdline_get_help_num(cmdline_parse_token_hdr_t *tk, char *dstbuf, unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_parse_portlist.c b/lib/cmdline/cmdline_parse_portlist.c
index ef6ce223b5..e1a35c0385 100644
--- a/lib/cmdline/cmdline_parse_portlist.c
+++ b/lib/cmdline/cmdline_parse_portlist.c
@@ -14,7 +14,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_portlist.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_portlist_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_portlist_ops);
 struct cmdline_token_ops cmdline_token_portlist_ops = {
 	.parse = cmdline_parse_portlist,
 	.complete_get_nb = NULL,
@@ -70,7 +70,7 @@ parse_ports(cmdline_portlist_t *pl, const char *str)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_portlist)
+RTE_EXPORT_SYMBOL(cmdline_parse_portlist);
 int
 cmdline_parse_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 	const char *buf, void *res, unsigned ressize)
@@ -107,7 +107,7 @@ cmdline_parse_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_portlist)
+RTE_EXPORT_SYMBOL(cmdline_get_help_portlist);
 int
 cmdline_get_help_portlist(__rte_unused cmdline_parse_token_hdr_t *tk,
 		char *dstbuf, unsigned int size)
diff --git a/lib/cmdline/cmdline_parse_string.c b/lib/cmdline/cmdline_parse_string.c
index 731947159f..e6a68656a6 100644
--- a/lib/cmdline/cmdline_parse_string.c
+++ b/lib/cmdline/cmdline_parse_string.c
@@ -12,7 +12,7 @@
 #include "cmdline_parse.h"
 #include "cmdline_parse_string.h"
 
-RTE_EXPORT_SYMBOL(cmdline_token_string_ops)
+RTE_EXPORT_SYMBOL(cmdline_token_string_ops);
 struct cmdline_token_ops cmdline_token_string_ops = {
 	.parse = cmdline_parse_string,
 	.complete_get_nb = cmdline_complete_get_nb_string,
@@ -49,7 +49,7 @@ get_next_token(const char *s)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_parse_string)
+RTE_EXPORT_SYMBOL(cmdline_parse_string);
 int
 cmdline_parse_string(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	unsigned ressize)
@@ -135,7 +135,7 @@ cmdline_parse_string(cmdline_parse_token_hdr_t *tk, const char *buf, void *res,
 	return token_len;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete_get_nb_string)
+RTE_EXPORT_SYMBOL(cmdline_complete_get_nb_string);
 int cmdline_complete_get_nb_string(cmdline_parse_token_hdr_t *tk)
 {
 	struct cmdline_token_string *tk2;
@@ -159,7 +159,7 @@ int cmdline_complete_get_nb_string(cmdline_parse_token_hdr_t *tk)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_complete_get_elt_string)
+RTE_EXPORT_SYMBOL(cmdline_complete_get_elt_string);
 int cmdline_complete_get_elt_string(cmdline_parse_token_hdr_t *tk, int idx,
 				    char *dstbuf, unsigned int size)
 {
@@ -192,7 +192,7 @@ int cmdline_complete_get_elt_string(cmdline_parse_token_hdr_t *tk, int idx,
 }
 
 
-RTE_EXPORT_SYMBOL(cmdline_get_help_string)
+RTE_EXPORT_SYMBOL(cmdline_get_help_string);
 int cmdline_get_help_string(cmdline_parse_token_hdr_t *tk, char *dstbuf,
 			    unsigned int size)
 {
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 3b8d435e98..f9b9959331 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -54,7 +54,7 @@ rdline_init(struct rdline *rdl,
 	return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
 }
 
-RTE_EXPORT_SYMBOL(rdline_new)
+RTE_EXPORT_SYMBOL(rdline_new);
 struct rdline *
 rdline_new(rdline_write_char_t *write_char,
 	   rdline_validate_t *validate,
@@ -71,14 +71,14 @@ rdline_new(rdline_write_char_t *write_char,
 	return rdl;
 }
 
-RTE_EXPORT_SYMBOL(rdline_free)
+RTE_EXPORT_SYMBOL(rdline_free);
 void
 rdline_free(struct rdline *rdl)
 {
 	free(rdl);
 }
 
-RTE_EXPORT_SYMBOL(rdline_newline)
+RTE_EXPORT_SYMBOL(rdline_newline);
 void
 rdline_newline(struct rdline *rdl, const char *prompt)
 {
@@ -103,7 +103,7 @@ rdline_newline(struct rdline *rdl, const char *prompt)
 	rdl->history_cur_line = -1;
 }
 
-RTE_EXPORT_SYMBOL(rdline_stop)
+RTE_EXPORT_SYMBOL(rdline_stop);
 void
 rdline_stop(struct rdline *rdl)
 {
@@ -112,7 +112,7 @@ rdline_stop(struct rdline *rdl)
 	rdl->status = RDLINE_INIT;
 }
 
-RTE_EXPORT_SYMBOL(rdline_quit)
+RTE_EXPORT_SYMBOL(rdline_quit);
 void
 rdline_quit(struct rdline *rdl)
 {
@@ -121,7 +121,7 @@ rdline_quit(struct rdline *rdl)
 	rdl->status = RDLINE_EXITED;
 }
 
-RTE_EXPORT_SYMBOL(rdline_restart)
+RTE_EXPORT_SYMBOL(rdline_restart);
 void
 rdline_restart(struct rdline *rdl)
 {
@@ -130,7 +130,7 @@ rdline_restart(struct rdline *rdl)
 	rdl->status = RDLINE_RUNNING;
 }
 
-RTE_EXPORT_SYMBOL(rdline_reset)
+RTE_EXPORT_SYMBOL(rdline_reset);
 void
 rdline_reset(struct rdline *rdl)
 {
@@ -145,7 +145,7 @@ rdline_reset(struct rdline *rdl)
 	rdl->history_cur_line = -1;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_buffer)
+RTE_EXPORT_SYMBOL(rdline_get_buffer);
 const char *
 rdline_get_buffer(struct rdline *rdl)
 {
@@ -182,7 +182,7 @@ display_right_buffer(struct rdline *rdl, int force)
 				  CIRBUF_GET_LEN(&rdl->right));
 }
 
-RTE_EXPORT_SYMBOL(rdline_redisplay)
+RTE_EXPORT_SYMBOL(rdline_redisplay);
 void
 rdline_redisplay(struct rdline *rdl)
 {
@@ -201,7 +201,7 @@ rdline_redisplay(struct rdline *rdl)
 	display_right_buffer(rdl, 1);
 }
 
-RTE_EXPORT_SYMBOL(rdline_char_in)
+RTE_EXPORT_SYMBOL(rdline_char_in);
 int
 rdline_char_in(struct rdline *rdl, char c)
 {
@@ -573,7 +573,7 @@ rdline_get_history_size(struct rdline * rdl)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_history_item)
+RTE_EXPORT_SYMBOL(rdline_get_history_item);
 char *
 rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 {
@@ -600,21 +600,21 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_history_buffer_size)
+RTE_EXPORT_SYMBOL(rdline_get_history_buffer_size);
 size_t
 rdline_get_history_buffer_size(struct rdline *rdl)
 {
 	return sizeof(rdl->history_buf);
 }
 
-RTE_EXPORT_SYMBOL(rdline_get_opaque)
+RTE_EXPORT_SYMBOL(rdline_get_opaque);
 void *
 rdline_get_opaque(struct rdline *rdl)
 {
 	return rdl != NULL ? rdl->opaque : NULL;
 }
 
-RTE_EXPORT_SYMBOL(rdline_add_history)
+RTE_EXPORT_SYMBOL(rdline_add_history);
 int
 rdline_add_history(struct rdline * rdl, const char * buf)
 {
@@ -644,7 +644,7 @@ rdline_add_history(struct rdline * rdl, const char * buf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rdline_clear_history)
+RTE_EXPORT_SYMBOL(rdline_clear_history);
 void
 rdline_clear_history(struct rdline * rdl)
 {
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index f3d62acdae..53131e17c8 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -14,7 +14,7 @@
 
 #include <eal_export.h>
 
-RTE_EXPORT_SYMBOL(cmdline_file_new)
+RTE_EXPORT_SYMBOL(cmdline_file_new);
 struct cmdline *
 cmdline_file_new(cmdline_parse_ctx_t *ctx, const char *prompt, const char *path)
 {
@@ -32,7 +32,7 @@ cmdline_file_new(cmdline_parse_ctx_t *ctx, const char *prompt, const char *path)
 	return cmdline_new(ctx, prompt, fd, -1);
 }
 
-RTE_EXPORT_SYMBOL(cmdline_stdin_new)
+RTE_EXPORT_SYMBOL(cmdline_stdin_new);
 struct cmdline *
 cmdline_stdin_new(cmdline_parse_ctx_t *ctx, const char *prompt)
 {
@@ -46,7 +46,7 @@ cmdline_stdin_new(cmdline_parse_ctx_t *ctx, const char *prompt)
 	return cl;
 }
 
-RTE_EXPORT_SYMBOL(cmdline_stdin_exit)
+RTE_EXPORT_SYMBOL(cmdline_stdin_exit);
 void
 cmdline_stdin_exit(struct cmdline *cl)
 {
diff --git a/lib/cmdline/cmdline_vt100.c b/lib/cmdline/cmdline_vt100.c
index 272088a0c6..8eaa3efb36 100644
--- a/lib/cmdline/cmdline_vt100.c
+++ b/lib/cmdline/cmdline_vt100.c
@@ -42,7 +42,7 @@ const char *cmdline_vt100_commands[] = {
 	vt100_bs,
 };
 
-RTE_EXPORT_SYMBOL(vt100_init)
+RTE_EXPORT_SYMBOL(vt100_init);
 void
 vt100_init(struct cmdline_vt100 *vt)
 {
@@ -72,7 +72,7 @@ match_command(char *buf, unsigned int size)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(vt100_parser)
+RTE_EXPORT_SYMBOL(vt100_parser);
 int
 vt100_parser(struct cmdline_vt100 *vt, char ch)
 {
diff --git a/lib/compressdev/rte_comp.c b/lib/compressdev/rte_comp.c
index 662691796a..c22e25c762 100644
--- a/lib/compressdev/rte_comp.c
+++ b/lib/compressdev/rte_comp.c
@@ -6,7 +6,7 @@
 #include "rte_comp.h"
 #include "rte_compressdev_internal.h"
 
-RTE_EXPORT_SYMBOL(rte_comp_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_comp_get_feature_name);
 const char *
 rte_comp_get_feature_name(uint64_t flag)
 {
@@ -125,7 +125,7 @@ rte_comp_op_init(struct rte_mempool *mempool,
 	op->mempool = mempool;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_comp_op_pool_create);
 struct rte_mempool *
 rte_comp_op_pool_create(const char *name,
 		unsigned int nb_elts, unsigned int cache_size,
@@ -181,7 +181,7 @@ rte_comp_op_pool_create(const char *name,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_alloc)
+RTE_EXPORT_SYMBOL(rte_comp_op_alloc);
 struct rte_comp_op *
 rte_comp_op_alloc(struct rte_mempool *mempool)
 {
@@ -197,7 +197,7 @@ rte_comp_op_alloc(struct rte_mempool *mempool)
 	return op;
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_bulk_alloc)
+RTE_EXPORT_SYMBOL(rte_comp_op_bulk_alloc);
 int
 rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -223,7 +223,7 @@ rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
  * @param op
  *   Compress operation
  */
-RTE_EXPORT_SYMBOL(rte_comp_op_free)
+RTE_EXPORT_SYMBOL(rte_comp_op_free);
 void
 rte_comp_op_free(struct rte_comp_op *op)
 {
@@ -231,7 +231,7 @@ rte_comp_op_free(struct rte_comp_op *op)
 		rte_mempool_put(op->mempool, op);
 }
 
-RTE_EXPORT_SYMBOL(rte_comp_op_bulk_free)
+RTE_EXPORT_SYMBOL(rte_comp_op_bulk_free);
 void
 rte_comp_op_bulk_free(struct rte_comp_op **ops, uint16_t nb_ops)
 {
diff --git a/lib/compressdev/rte_compressdev.c b/lib/compressdev/rte_compressdev.c
index 33de3f511b..cbb7c812f4 100644
--- a/lib/compressdev/rte_compressdev.c
+++ b/lib/compressdev/rte_compressdev.c
@@ -29,7 +29,7 @@ static struct rte_compressdev_global compressdev_globals = {
 		.max_devs		= RTE_COMPRESS_MAX_DEVS
 };
 
-RTE_EXPORT_SYMBOL(rte_compressdev_capability_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_capability_get);
 const struct rte_compressdev_capabilities *
 rte_compressdev_capability_get(uint8_t dev_id,
 			enum rte_comp_algorithm algo)
@@ -53,7 +53,7 @@ rte_compressdev_capability_get(uint8_t dev_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_compressdev_get_feature_name);
 const char *
 rte_compressdev_get_feature_name(uint64_t flag)
 {
@@ -83,7 +83,7 @@ rte_compressdev_get_dev(uint8_t dev_id)
 	return &compressdev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_get_named_dev)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_get_named_dev);
 struct rte_compressdev *
 rte_compressdev_pmd_get_named_dev(const char *name)
 {
@@ -120,7 +120,7 @@ rte_compressdev_is_valid_dev(uint8_t dev_id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_compressdev_get_dev_id);
 int
 rte_compressdev_get_dev_id(const char *name)
 {
@@ -139,14 +139,14 @@ rte_compressdev_get_dev_id(const char *name)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_count)
+RTE_EXPORT_SYMBOL(rte_compressdev_count);
 uint8_t
 rte_compressdev_count(void)
 {
 	return compressdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_devices_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_devices_get);
 uint8_t
 rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
 	uint8_t nb_devices)
@@ -172,7 +172,7 @@ rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_socket_id)
+RTE_EXPORT_SYMBOL(rte_compressdev_socket_id);
 int
 rte_compressdev_socket_id(uint8_t dev_id)
 {
@@ -230,7 +230,7 @@ rte_compressdev_find_free_device_index(void)
 	return RTE_COMPRESS_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_allocate)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_allocate);
 struct rte_compressdev *
 rte_compressdev_pmd_allocate(const char *name, int socket_id)
 {
@@ -277,7 +277,7 @@ rte_compressdev_pmd_allocate(const char *name, int socket_id)
 	return compressdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_release_device)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_release_device);
 int
 rte_compressdev_pmd_release_device(struct rte_compressdev *compressdev)
 {
@@ -298,7 +298,7 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *compressdev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_count)
+RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_count);
 uint16_t
 rte_compressdev_queue_pair_count(uint8_t dev_id)
 {
@@ -424,7 +424,7 @@ rte_compressdev_queue_pairs_release(struct rte_compressdev *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_configure)
+RTE_EXPORT_SYMBOL(rte_compressdev_configure);
 int
 rte_compressdev_configure(uint8_t dev_id, struct rte_compressdev_config *config)
 {
@@ -460,7 +460,7 @@ rte_compressdev_configure(uint8_t dev_id, struct rte_compressdev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_start)
+RTE_EXPORT_SYMBOL(rte_compressdev_start);
 int
 rte_compressdev_start(uint8_t dev_id)
 {
@@ -494,7 +494,7 @@ rte_compressdev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stop)
+RTE_EXPORT_SYMBOL(rte_compressdev_stop);
 void
 rte_compressdev_stop(uint8_t dev_id)
 {
@@ -520,7 +520,7 @@ rte_compressdev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_close)
+RTE_EXPORT_SYMBOL(rte_compressdev_close);
 int
 rte_compressdev_close(uint8_t dev_id)
 {
@@ -557,7 +557,7 @@ rte_compressdev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_setup)
+RTE_EXPORT_SYMBOL(rte_compressdev_queue_pair_setup);
 int
 rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		uint32_t max_inflight_ops, int socket_id)
@@ -593,7 +593,7 @@ rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_setup(dev, queue_pair_id, max_inflight_ops, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_compressdev_dequeue_burst);
 uint16_t
 rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -603,7 +603,7 @@ rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 	return dev->dequeue_burst(dev->data->queue_pairs[qp_id], ops, nb_ops);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_compressdev_enqueue_burst);
 uint16_t
 rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_comp_op **ops, uint16_t nb_ops)
@@ -613,7 +613,7 @@ rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 	return dev->enqueue_burst(dev->data->queue_pairs[qp_id], ops, nb_ops);
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stats_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_stats_get);
 int
 rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats)
 {
@@ -638,7 +638,7 @@ rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_compressdev_stats_reset);
 void
 rte_compressdev_stats_reset(uint8_t dev_id)
 {
@@ -657,7 +657,7 @@ rte_compressdev_stats_reset(uint8_t dev_id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_info_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_info_get);
 void
 rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info)
 {
@@ -679,7 +679,7 @@ rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info)
 	dev_info->driver_name = dev->device->driver->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_create);
 int
 rte_compressdev_private_xform_create(uint8_t dev_id,
 		const struct rte_comp_xform *xform,
@@ -706,7 +706,7 @@ rte_compressdev_private_xform_create(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_free)
+RTE_EXPORT_SYMBOL(rte_compressdev_private_xform_free);
 int
 rte_compressdev_private_xform_free(uint8_t dev_id, void *priv_xform)
 {
@@ -731,7 +731,7 @@ rte_compressdev_private_xform_free(uint8_t dev_id, void *priv_xform)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stream_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_stream_create);
 int
 rte_compressdev_stream_create(uint8_t dev_id,
 		const struct rte_comp_xform *xform,
@@ -759,7 +759,7 @@ rte_compressdev_stream_create(uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_compressdev_stream_free)
+RTE_EXPORT_SYMBOL(rte_compressdev_stream_free);
 int
 rte_compressdev_stream_free(uint8_t dev_id, void *stream)
 {
@@ -784,7 +784,7 @@ rte_compressdev_stream_free(uint8_t dev_id, void *stream)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_name_get)
+RTE_EXPORT_SYMBOL(rte_compressdev_name_get);
 const char *
 rte_compressdev_name_get(uint8_t dev_id)
 {
diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c
index 7e11ad7148..5fad809337 100644
--- a/lib/compressdev/rte_compressdev_pmd.c
+++ b/lib/compressdev/rte_compressdev_pmd.c
@@ -56,7 +56,7 @@ rte_compressdev_pmd_parse_uint_arg(const char *key __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_parse_input_args)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_parse_input_args);
 int
 rte_compressdev_pmd_parse_input_args(
 		struct rte_compressdev_pmd_init_params *params,
@@ -93,7 +93,7 @@ rte_compressdev_pmd_parse_input_args(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_create)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_create);
 struct rte_compressdev *
 rte_compressdev_pmd_create(const char *name,
 		struct rte_device *device,
@@ -143,7 +143,7 @@ rte_compressdev_pmd_create(const char *name,
 	return compressdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_compressdev_pmd_destroy)
+RTE_EXPORT_SYMBOL(rte_compressdev_pmd_destroy);
 int
 rte_compressdev_pmd_destroy(struct rte_compressdev *compressdev)
 {
diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index d79d561bf6..ce43a9fde7 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -56,7 +56,7 @@ rte_cryptodev_pmd_parse_uint_arg(const char *key __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_parse_input_args)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_parse_input_args);
 int
 rte_cryptodev_pmd_parse_input_args(
 		struct rte_cryptodev_pmd_init_params *params,
@@ -100,7 +100,7 @@ rte_cryptodev_pmd_parse_input_args(
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create);
 struct rte_cryptodev *
 rte_cryptodev_pmd_create(const char *name,
 		struct rte_device *device,
@@ -151,7 +151,7 @@ rte_cryptodev_pmd_create(const char *name,
 	return cryptodev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_destroy);
 int
 rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 {
@@ -175,7 +175,7 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_probing_finish);
 void
 rte_cryptodev_pmd_probing_finish(struct rte_cryptodev *cryptodev)
 {
@@ -214,7 +214,7 @@ dummy_crypto_dequeue_burst(__rte_unused void *qp,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_reset);
 void
 cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
 {
@@ -233,7 +233,7 @@ cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
 	*fp_ops = dummy;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_set)
+RTE_EXPORT_INTERNAL_SYMBOL(cryptodev_fp_ops_set);
 void
 cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
 		     const struct rte_cryptodev *dev)
@@ -246,7 +246,7 @@ cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
 	fp_ops->qp_depth_used = dev->qp_depth_used;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_session_event_mdata_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_session_event_mdata_get);
 void *
 rte_cryptodev_session_event_mdata_get(struct rte_crypto_op *op)
 {
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 69737adcbe..e890026e69 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -43,11 +43,11 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_free,
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
 	lib.cryptodev.asym.free)
 
-RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_enqueue_burst)
+RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_enqueue_burst);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
 	lib.cryptodev.enq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_dequeue_burst)
+RTE_EXPORT_SYMBOL(__rte_cryptodev_trace_dequeue_burst);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_dequeue_burst,
 	lib.cryptodev.deq.burst)
 
@@ -201,6 +201,6 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_op_pool_create,
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_count,
 	lib.cryptodev.count)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_cryptodev_trace_qp_depth_used, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_cryptodev_trace_qp_depth_used, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_qp_depth_used,
 	lib.cryptodev.qp_depth_used)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..8e45370391 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -36,7 +36,7 @@ static uint8_t nb_drivers;
 
 static struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodevs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodevs);
 struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices;
 
 static struct rte_cryptodev_global cryptodev_globals = {
@@ -46,13 +46,13 @@ static struct rte_cryptodev_global cryptodev_globals = {
 };
 
 /* Public fastpath APIs. */
-RTE_EXPORT_SYMBOL(rte_crypto_fp_ops)
+RTE_EXPORT_SYMBOL(rte_crypto_fp_ops);
 struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
 
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_logtype)
+RTE_EXPORT_SYMBOL(rte_cryptodev_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_cryptodev_logtype, INFO);
 
 /**
@@ -109,7 +109,7 @@ crypto_cipher_algorithm_strings[] = {
  * The crypto cipher operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_cipher_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_cipher_operation_strings);
 const char *
 rte_crypto_cipher_operation_strings[] = {
 		[RTE_CRYPTO_CIPHER_OP_ENCRYPT]	= "encrypt",
@@ -182,7 +182,7 @@ crypto_aead_algorithm_strings[] = {
  * The crypto AEAD operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_aead_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_aead_operation_strings);
 const char *
 rte_crypto_aead_operation_strings[] = {
 	[RTE_CRYPTO_AEAD_OP_ENCRYPT]	= "encrypt",
@@ -210,7 +210,7 @@ crypto_asym_xform_strings[] = {
 /**
  * Asymmetric crypto operation strings identifiers.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_asym_op_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_asym_op_strings);
 const char *rte_crypto_asym_op_strings[] = {
 	[RTE_CRYPTO_ASYM_OP_ENCRYPT]	= "encrypt",
 	[RTE_CRYPTO_ASYM_OP_DECRYPT]	= "decrypt",
@@ -221,7 +221,7 @@ const char *rte_crypto_asym_op_strings[] = {
 /**
  * Asymmetric crypto key exchange operation strings identifiers.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_asym_ke_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_asym_ke_strings);
 const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE] = "priv_key_generate",
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE] = "pub_key_generate",
@@ -246,7 +246,7 @@ struct rte_cryptodev_asym_session_pool_private_data {
 	/**< Session user data will be placed after sess_private_data */
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_cipher_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_cipher_algo_enum);
 int
 rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
 		const char *algo_string)
@@ -267,7 +267,7 @@ rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_auth_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_auth_algo_enum);
 int
 rte_cryptodev_get_auth_algo_enum(enum rte_crypto_auth_algorithm *algo_enum,
 		const char *algo_string)
@@ -288,7 +288,7 @@ rte_cryptodev_get_auth_algo_enum(enum rte_crypto_auth_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_aead_algo_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_aead_algo_enum);
 int
 rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
 		const char *algo_string)
@@ -309,7 +309,7 @@ rte_cryptodev_get_aead_algo_enum(enum rte_crypto_aead_algorithm *algo_enum,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_xform_enum)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_xform_enum);
 int
 rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 		const char *xform_string)
@@ -331,7 +331,7 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_cipher_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_cipher_algo_string, 23.03);
 const char *
 rte_cryptodev_get_cipher_algo_string(enum rte_crypto_cipher_algorithm algo_enum)
 {
@@ -345,7 +345,7 @@ rte_cryptodev_get_cipher_algo_string(enum rte_crypto_cipher_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_auth_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_auth_algo_string, 23.03);
 const char *
 rte_cryptodev_get_auth_algo_string(enum rte_crypto_auth_algorithm algo_enum)
 {
@@ -359,7 +359,7 @@ rte_cryptodev_get_auth_algo_string(enum rte_crypto_auth_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_aead_algo_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_get_aead_algo_string, 23.03);
 const char *
 rte_cryptodev_get_aead_algo_string(enum rte_crypto_aead_algorithm algo_enum)
 {
@@ -373,7 +373,7 @@ rte_cryptodev_get_aead_algo_string(enum rte_crypto_aead_algorithm algo_enum)
 	return alg_str;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_get_xform_string, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_get_xform_string, 23.03);
 const char *
 rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum)
 {
@@ -391,14 +391,14 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum)
  * The crypto auth operation strings identifiers.
  * It could be used in application command line.
  */
-RTE_EXPORT_SYMBOL(rte_crypto_auth_operation_strings)
+RTE_EXPORT_SYMBOL(rte_crypto_auth_operation_strings);
 const char *
 rte_crypto_auth_operation_strings[] = {
 		[RTE_CRYPTO_AUTH_OP_VERIFY]	= "verify",
 		[RTE_CRYPTO_AUTH_OP_GENERATE]	= "generate"
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_get);
 const struct rte_cryptodev_symmetric_capability *
 rte_cryptodev_sym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_sym_capability_idx *idx)
@@ -468,7 +468,7 @@ param_range_check(uint16_t size, const struct rte_crypto_param_range *range)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_capability_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_capability_get);
 const struct rte_cryptodev_asymmetric_xform_capability *
 rte_cryptodev_asym_capability_get(uint8_t dev_id,
 		const struct rte_cryptodev_asym_capability_idx *idx)
@@ -498,7 +498,7 @@ rte_cryptodev_asym_capability_get(uint8_t dev_id,
 	return asym_cap;
 };
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_cipher)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_cipher);
 int
 rte_cryptodev_sym_capability_check_cipher(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -521,7 +521,7 @@ rte_cryptodev_sym_capability_check_cipher(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_auth)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_auth);
 int
 rte_cryptodev_sym_capability_check_auth(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -550,7 +550,7 @@ rte_cryptodev_sym_capability_check_auth(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_aead)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_capability_check_aead);
 int
 rte_cryptodev_sym_capability_check_aead(
 		const struct rte_cryptodev_symmetric_capability *capability,
@@ -585,7 +585,7 @@ rte_cryptodev_sym_capability_check_aead(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_optype)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_optype);
 int
 rte_cryptodev_asym_xform_capability_check_optype(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -602,7 +602,7 @@ rte_cryptodev_asym_xform_capability_check_optype(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_modlen)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_modlen);
 int
 rte_cryptodev_asym_xform_capability_check_modlen(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -638,7 +638,7 @@ rte_cryptodev_asym_xform_capability_check_modlen(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_hash)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_xform_capability_check_hash);
 bool
 rte_cryptodev_asym_xform_capability_check_hash(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -655,7 +655,7 @@ rte_cryptodev_asym_xform_capability_check_hash(
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_xform_capability_check_opcap, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_asym_xform_capability_check_opcap, 24.11);
 int
 rte_cryptodev_asym_xform_capability_check_opcap(
 	const struct rte_cryptodev_asymmetric_xform_capability *capability,
@@ -789,7 +789,7 @@ cryptodev_cb_init(struct rte_cryptodev *dev)
 	return -ENOMEM;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_feature_name)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_feature_name);
 const char *
 rte_cryptodev_get_feature_name(uint64_t flag)
 {
@@ -853,14 +853,14 @@ rte_cryptodev_get_feature_name(uint64_t flag)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_dev);
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_dev(uint8_t dev_id)
 {
 	return &cryptodev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_named_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_get_named_dev);
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_named_dev(const char *name)
 {
@@ -891,7 +891,7 @@ rte_cryptodev_is_valid_device_data(uint8_t dev_id)
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_is_valid_dev)
+RTE_EXPORT_SYMBOL(rte_cryptodev_is_valid_dev);
 unsigned int
 rte_cryptodev_is_valid_dev(uint8_t dev_id)
 {
@@ -913,7 +913,7 @@ rte_cryptodev_is_valid_dev(uint8_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_dev_id);
 int
 rte_cryptodev_get_dev_id(const char *name)
 {
@@ -940,7 +940,7 @@ rte_cryptodev_get_dev_id(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_count)
+RTE_EXPORT_SYMBOL(rte_cryptodev_count);
 uint8_t
 rte_cryptodev_count(void)
 {
@@ -949,7 +949,7 @@ rte_cryptodev_count(void)
 	return cryptodev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_device_count_by_driver)
+RTE_EXPORT_SYMBOL(rte_cryptodev_device_count_by_driver);
 uint8_t
 rte_cryptodev_device_count_by_driver(uint8_t driver_id)
 {
@@ -966,7 +966,7 @@ rte_cryptodev_device_count_by_driver(uint8_t driver_id)
 	return dev_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_devices_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_devices_get);
 uint8_t
 rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
 	uint8_t nb_devices)
@@ -995,7 +995,7 @@ rte_cryptodev_devices_get(const char *driver_name, uint8_t *devices,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_sec_ctx)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_sec_ctx);
 void *
 rte_cryptodev_get_sec_ctx(uint8_t dev_id)
 {
@@ -1011,7 +1011,7 @@ rte_cryptodev_get_sec_ctx(uint8_t dev_id)
 	return sec_ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_socket_id)
+RTE_EXPORT_SYMBOL(rte_cryptodev_socket_id);
 int
 rte_cryptodev_socket_id(uint8_t dev_id)
 {
@@ -1106,7 +1106,7 @@ rte_cryptodev_find_free_device_index(void)
 	return RTE_CRYPTO_MAX_DEVS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_allocate);
 struct rte_cryptodev *
 rte_cryptodev_pmd_allocate(const char *name, int socket_id)
 {
@@ -1166,7 +1166,7 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id)
 	return cryptodev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_release_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_release_device);
 int
 rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 {
@@ -1196,7 +1196,7 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_count)
+RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_count);
 uint16_t
 rte_cryptodev_queue_pair_count(uint8_t dev_id)
 {
@@ -1279,7 +1279,7 @@ rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_reset, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_reset, 24.11);
 int
 rte_cryptodev_queue_pair_reset(uint8_t dev_id, uint16_t queue_pair_id,
 		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
@@ -1304,7 +1304,7 @@ rte_cryptodev_queue_pair_reset(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_reset(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_configure)
+RTE_EXPORT_SYMBOL(rte_cryptodev_configure);
 int
 rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 {
@@ -1352,7 +1352,7 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_start)
+RTE_EXPORT_SYMBOL(rte_cryptodev_start);
 int
 rte_cryptodev_start(uint8_t dev_id)
 {
@@ -1390,7 +1390,7 @@ rte_cryptodev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stop)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stop);
 void
 rte_cryptodev_stop(uint8_t dev_id)
 {
@@ -1420,7 +1420,7 @@ rte_cryptodev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_close)
+RTE_EXPORT_SYMBOL(rte_cryptodev_close);
 int
 rte_cryptodev_close(uint8_t dev_id)
 {
@@ -1463,7 +1463,7 @@ rte_cryptodev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_qp_status)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_qp_status);
 int
 rte_cryptodev_get_qp_status(uint8_t dev_id, uint16_t queue_pair_id)
 {
@@ -1518,7 +1518,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct rte_mempool *mp,
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_setup)
+RTE_EXPORT_SYMBOL(rte_cryptodev_queue_pair_setup);
 int
 rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
@@ -1572,7 +1572,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->queue_pair_setup(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_add_enq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_add_enq_callback);
 struct rte_cryptodev_cb *
 rte_cryptodev_add_enq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
@@ -1643,7 +1643,7 @@ rte_cryptodev_add_enq_callback(uint8_t dev_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_remove_enq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_remove_enq_callback);
 int
 rte_cryptodev_remove_enq_callback(uint8_t dev_id,
 				  uint16_t qp_id,
@@ -1720,7 +1720,7 @@ rte_cryptodev_remove_enq_callback(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_add_deq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_add_deq_callback);
 struct rte_cryptodev_cb *
 rte_cryptodev_add_deq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
@@ -1792,7 +1792,7 @@ rte_cryptodev_add_deq_callback(uint8_t dev_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_remove_deq_callback)
+RTE_EXPORT_SYMBOL(rte_cryptodev_remove_deq_callback);
 int
 rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 				  uint16_t qp_id,
@@ -1869,7 +1869,7 @@ rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stats_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stats_get);
 int
 rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
 {
@@ -1896,7 +1896,7 @@ rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_stats_reset)
+RTE_EXPORT_SYMBOL(rte_cryptodev_stats_reset);
 void
 rte_cryptodev_stats_reset(uint8_t dev_id)
 {
@@ -1916,7 +1916,7 @@ rte_cryptodev_stats_reset(uint8_t dev_id)
 	dev->dev_ops->stats_reset(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_info_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_info_get);
 void
 rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 {
@@ -1942,7 +1942,7 @@ rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)
 
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_callback_register)
+RTE_EXPORT_SYMBOL(rte_cryptodev_callback_register);
 int
 rte_cryptodev_callback_register(uint8_t dev_id,
 			enum rte_cryptodev_event_type event,
@@ -1988,7 +1988,7 @@ rte_cryptodev_callback_register(uint8_t dev_id,
 	return (user_cb == NULL) ? -ENOMEM : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_cryptodev_callback_unregister);
 int
 rte_cryptodev_callback_unregister(uint8_t dev_id,
 			enum rte_cryptodev_event_type event,
@@ -2037,7 +2037,7 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_callback_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_callback_process);
 void
 rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 	enum rte_cryptodev_event_type event)
@@ -2060,7 +2060,7 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 	rte_spinlock_unlock(&rte_cryptodev_cb_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_event_error_query, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_cryptodev_queue_pair_event_error_query, 23.03);
 int
 rte_cryptodev_queue_pair_event_error_query(uint8_t dev_id, uint16_t qp_id)
 {
@@ -2080,7 +2080,7 @@ rte_cryptodev_queue_pair_event_error_query(uint8_t dev_id, uint16_t qp_id)
 	return dev->dev_ops->queue_pair_event_error_query(dev, qp_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_pool_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_pool_create);
 struct rte_mempool *
 rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -2119,7 +2119,7 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_pool_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_pool_create);
 struct rte_mempool *
 rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
 	uint32_t cache_size, uint16_t user_data_size, int socket_id)
@@ -2170,7 +2170,7 @@ rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_create);
 void *
 rte_cryptodev_sym_session_create(uint8_t dev_id,
 		struct rte_crypto_sym_xform *xforms,
@@ -2238,7 +2238,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_create)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_create);
 int
 rte_cryptodev_asym_session_create(uint8_t dev_id,
 		struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
@@ -2315,7 +2315,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_free)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_free);
 int
 rte_cryptodev_sym_session_free(uint8_t dev_id, void *_sess)
 {
@@ -2362,7 +2362,7 @@ rte_cryptodev_sym_session_free(uint8_t dev_id, void *_sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_free)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_free);
 int
 rte_cryptodev_asym_session_free(uint8_t dev_id, void *sess)
 {
@@ -2394,14 +2394,14 @@ rte_cryptodev_asym_session_free(uint8_t dev_id, void *sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_header_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_header_session_size);
 unsigned int
 rte_cryptodev_asym_get_header_session_size(void)
 {
 	return sizeof(struct rte_cryptodev_asym_session);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_get_private_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_get_private_session_size);
 unsigned int
 rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
 {
@@ -2424,7 +2424,7 @@ rte_cryptodev_sym_get_private_session_size(uint8_t dev_id)
 	return priv_sess_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_private_session_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_get_private_session_size);
 unsigned int
 rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
 {
@@ -2447,7 +2447,7 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
 	return priv_sess_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_set_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_set_user_data);
 int
 rte_cryptodev_sym_session_set_user_data(void *_sess, void *data,
 		uint16_t size)
@@ -2467,7 +2467,7 @@ rte_cryptodev_sym_session_set_user_data(void *_sess, void *data,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_get_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_session_get_user_data);
 void *
 rte_cryptodev_sym_session_get_user_data(void *_sess)
 {
@@ -2484,7 +2484,7 @@ rte_cryptodev_sym_session_get_user_data(void *_sess)
 	return data;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_set_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_set_user_data);
 int
 rte_cryptodev_asym_session_set_user_data(void *session, void *data, uint16_t size)
 {
@@ -2504,7 +2504,7 @@ rte_cryptodev_asym_session_set_user_data(void *session, void *data, uint16_t siz
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_get_user_data)
+RTE_EXPORT_SYMBOL(rte_cryptodev_asym_session_get_user_data);
 void *
 rte_cryptodev_asym_session_get_user_data(void *session)
 {
@@ -2529,7 +2529,7 @@ sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum)
 		vec->status[i] = errnum;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_sym_cpu_crypto_process)
+RTE_EXPORT_SYMBOL(rte_cryptodev_sym_cpu_crypto_process);
 uint32_t
 rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
 	void *_sess, union rte_crypto_sym_ofs ofs,
@@ -2556,7 +2556,7 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id,
 	return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_get_raw_dp_ctx_size)
+RTE_EXPORT_SYMBOL(rte_cryptodev_get_raw_dp_ctx_size);
 int
 rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
 {
@@ -2583,7 +2583,7 @@ rte_cryptodev_get_raw_dp_ctx_size(uint8_t dev_id)
 	return RTE_ALIGN_CEIL((size + priv_size), 8);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_configure_raw_dp_ctx)
+RTE_EXPORT_SYMBOL(rte_cryptodev_configure_raw_dp_ctx);
 int
 rte_cryptodev_configure_raw_dp_ctx(uint8_t dev_id, uint16_t qp_id,
 	struct rte_crypto_raw_dp_ctx *ctx,
@@ -2607,7 +2607,7 @@ rte_cryptodev_configure_raw_dp_ctx(uint8_t dev_id, uint16_t qp_id,
 			sess_type, session_ctx, is_update);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_session_event_mdata_set)
+RTE_EXPORT_SYMBOL(rte_cryptodev_session_event_mdata_set);
 int
 rte_cryptodev_session_event_mdata_set(uint8_t dev_id, void *sess,
 	enum rte_crypto_op_type op_type,
@@ -2651,7 +2651,7 @@ rte_cryptodev_session_event_mdata_set(uint8_t dev_id, void *sess,
 		return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_burst);
 uint32_t
 rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
@@ -2661,7 +2661,7 @@ rte_cryptodev_raw_enqueue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 			ofs, user_data, enqueue_status);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_done)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_enqueue_done);
 int
 rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n)
@@ -2669,7 +2669,7 @@ rte_cryptodev_raw_enqueue_done(struct rte_crypto_raw_dp_ctx *ctx,
 	return ctx->enqueue_done(ctx->qp_data, ctx->drv_ctx_data, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_burst);
 uint32_t
 rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 	rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
@@ -2683,7 +2683,7 @@ rte_cryptodev_raw_dequeue_burst(struct rte_crypto_raw_dp_ctx *ctx,
 		out_user_data, is_user_data_array, n_success_jobs, status);
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_done)
+RTE_EXPORT_SYMBOL(rte_cryptodev_raw_dequeue_done);
 int
 rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n)
@@ -2710,7 +2710,7 @@ rte_crypto_op_init(struct rte_mempool *mempool,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_crypto_op_pool_create)
+RTE_EXPORT_SYMBOL(rte_crypto_op_pool_create);
 struct rte_mempool *
 rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
 		unsigned nb_elts, unsigned cache_size, uint16_t priv_size,
@@ -2780,7 +2780,7 @@ rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
 	return mp;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create_dev_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_pmd_create_dev_name);
 int
 rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix)
 {
@@ -2810,7 +2810,7 @@ TAILQ_HEAD(cryptodev_driver_list, cryptodev_driver);
 static struct cryptodev_driver_list cryptodev_driver_list =
 	TAILQ_HEAD_INITIALIZER(cryptodev_driver_list);
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_driver_id_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_driver_id_get);
 int
 rte_cryptodev_driver_id_get(const char *name)
 {
@@ -2836,7 +2836,7 @@ rte_cryptodev_driver_id_get(const char *name)
 	return driver_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_name_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_name_get);
 const char *
 rte_cryptodev_name_get(uint8_t dev_id)
 {
@@ -2856,7 +2856,7 @@ rte_cryptodev_name_get(uint8_t dev_id)
 	return dev->data->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cryptodev_driver_name_get)
+RTE_EXPORT_SYMBOL(rte_cryptodev_driver_name_get);
 const char *
 rte_cryptodev_driver_name_get(uint8_t driver_id)
 {
@@ -2872,7 +2872,7 @@ rte_cryptodev_driver_name_get(uint8_t driver_id)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_allocate_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_cryptodev_allocate_driver);
 uint8_t
 rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
 		const struct rte_driver *drv)
diff --git a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c
index a35967f7b7..10374d8d72 100644
--- a/lib/dispatcher/rte_dispatcher.c
+++ b/lib/dispatcher/rte_dispatcher.c
@@ -267,7 +267,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher)
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_create, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_create, 23.11);
 struct rte_dispatcher *
 rte_dispatcher_create(uint8_t event_dev_id)
 {
@@ -302,7 +302,7 @@ rte_dispatcher_create(uint8_t event_dev_id)
 	return dispatcher;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_free, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_free, 23.11);
 int
 rte_dispatcher_free(struct rte_dispatcher *dispatcher)
 {
@@ -320,7 +320,7 @@ rte_dispatcher_free(struct rte_dispatcher *dispatcher)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_service_id_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_service_id_get, 23.11);
 uint32_t
 rte_dispatcher_service_id_get(const struct rte_dispatcher *dispatcher)
 {
@@ -344,7 +344,7 @@ lcore_port_index(struct rte_dispatcher_lcore *lcore,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_bind_port_to_lcore, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_bind_port_to_lcore, 23.11);
 int
 rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher,
 	uint8_t event_port_id, uint16_t batch_size, uint64_t timeout,
@@ -374,7 +374,7 @@ rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unbind_port_from_lcore, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unbind_port_from_lcore, 23.11);
 int
 rte_dispatcher_unbind_port_from_lcore(struct rte_dispatcher *dispatcher,
 	uint8_t event_port_id, unsigned int lcore_id)
@@ -457,7 +457,7 @@ evd_install_handler(struct rte_dispatcher *dispatcher,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_register, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_register, 23.11);
 int
 rte_dispatcher_register(struct rte_dispatcher *dispatcher,
 	rte_dispatcher_match_t match_fun, void *match_data,
@@ -529,7 +529,7 @@ evd_uninstall_handler(struct rte_dispatcher *dispatcher, int handler_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unregister, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_unregister, 23.11);
 int
 rte_dispatcher_unregister(struct rte_dispatcher *dispatcher, int handler_id)
 {
@@ -583,7 +583,7 @@ evd_alloc_finalizer(struct rte_dispatcher *dispatcher)
 	return finalizer;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_register, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_register, 23.11);
 int
 rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher,
 	rte_dispatcher_finalize_t finalize_fun, void *finalize_data)
@@ -601,7 +601,7 @@ rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher,
 	return finalizer->id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_unregister, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_finalize_unregister, 23.11);
 int
 rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher,
 	int finalizer_id)
@@ -653,14 +653,14 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state)
 	RTE_VERIFY(rc == 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_start, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_start, 23.11);
 void
 rte_dispatcher_start(struct rte_dispatcher *dispatcher)
 {
 	evd_set_service_runstate(dispatcher, 1);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stop, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stop, 23.11);
 void
 rte_dispatcher_stop(struct rte_dispatcher *dispatcher)
 {
@@ -677,7 +677,7 @@ evd_aggregate_stats(struct rte_dispatcher_stats *result,
 	result->ev_drop_count += part->ev_drop_count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_get, 23.11);
 void
 rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher,
 	struct rte_dispatcher_stats *stats)
@@ -694,7 +694,7 @@ rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_reset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_dispatcher_stats_reset, 23.11);
 void
 rte_dispatcher_stats_reset(struct rte_dispatcher *dispatcher)
 {
diff --git a/lib/distributor/rte_distributor.c b/lib/distributor/rte_distributor.c
index dde7ce2677..ca35ad97d9 100644
--- a/lib/distributor/rte_distributor.c
+++ b/lib/distributor/rte_distributor.c
@@ -32,7 +32,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
 
 /**** Burst Packet APIs called by workers ****/
 
-RTE_EXPORT_SYMBOL(rte_distributor_request_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_request_pkt);
 void
 rte_distributor_request_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **oldpkt,
@@ -85,7 +85,7 @@ rte_distributor_request_pkt(struct rte_distributor *d,
 			rte_memory_order_release);
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_poll_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_poll_pkt);
 int
 rte_distributor_poll_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **pkts)
@@ -130,7 +130,7 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_get_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_get_pkt);
 int
 rte_distributor_get_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **pkts,
@@ -161,7 +161,7 @@ rte_distributor_get_pkt(struct rte_distributor *d,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_distributor_return_pkt)
+RTE_EXPORT_SYMBOL(rte_distributor_return_pkt);
 int
 rte_distributor_return_pkt(struct rte_distributor *d,
 		unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
@@ -444,7 +444,7 @@ release(struct rte_distributor *d, unsigned int wkr)
 
 
 /* process a set of packets to distribute them to workers */
-RTE_EXPORT_SYMBOL(rte_distributor_process)
+RTE_EXPORT_SYMBOL(rte_distributor_process);
 int
 rte_distributor_process(struct rte_distributor *d,
 		struct rte_mbuf **mbufs, unsigned int num_mbufs)
@@ -615,7 +615,7 @@ rte_distributor_process(struct rte_distributor *d,
 }
 
 /* return to the caller, packets returned from workers */
-RTE_EXPORT_SYMBOL(rte_distributor_returned_pkts)
+RTE_EXPORT_SYMBOL(rte_distributor_returned_pkts);
 int
 rte_distributor_returned_pkts(struct rte_distributor *d,
 		struct rte_mbuf **mbufs, unsigned int max_mbufs)
@@ -662,7 +662,7 @@ total_outstanding(const struct rte_distributor *d)
  * Flush the distributor, so that there are no outstanding packets in flight or
  * queued up.
  */
-RTE_EXPORT_SYMBOL(rte_distributor_flush)
+RTE_EXPORT_SYMBOL(rte_distributor_flush);
 int
 rte_distributor_flush(struct rte_distributor *d)
 {
@@ -695,7 +695,7 @@ rte_distributor_flush(struct rte_distributor *d)
 }
 
 /* clears the internal returns array in the distributor */
-RTE_EXPORT_SYMBOL(rte_distributor_clear_returns)
+RTE_EXPORT_SYMBOL(rte_distributor_clear_returns);
 void
 rte_distributor_clear_returns(struct rte_distributor *d)
 {
@@ -717,7 +717,7 @@ rte_distributor_clear_returns(struct rte_distributor *d)
 }
 
 /* creates a distributor instance */
-RTE_EXPORT_SYMBOL(rte_distributor_create)
+RTE_EXPORT_SYMBOL(rte_distributor_create);
 struct rte_distributor *
 rte_distributor_create(const char *name,
 		unsigned int socket_id,
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 17ee0808a9..65cb34d3e1 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -22,7 +22,7 @@
 
 static int16_t dma_devices_max;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_fp_objs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_fp_objs);
 struct rte_dma_fp_object *rte_dma_fp_objs;
 static struct rte_dma_dev *rte_dma_devices;
 static struct {
@@ -39,7 +39,7 @@ RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO);
 #define RTE_DMA_LOG(level, ...) \
 	RTE_LOG_LINE(level, DMADEV, "" __VA_ARGS__)
 
-RTE_EXPORT_SYMBOL(rte_dma_dev_max)
+RTE_EXPORT_SYMBOL(rte_dma_dev_max);
 int
 rte_dma_dev_max(size_t dev_max)
 {
@@ -57,7 +57,7 @@ rte_dma_dev_max(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_next_dev)
+RTE_EXPORT_SYMBOL(rte_dma_next_dev);
 int16_t
 rte_dma_next_dev(int16_t start_dev_id)
 {
@@ -352,7 +352,7 @@ dma_release(struct rte_dma_dev *dev)
 	memset(dev, 0, sizeof(struct rte_dma_dev));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_allocate);
 struct rte_dma_dev *
 rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
 {
@@ -370,7 +370,7 @@ rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_release);
 int
 rte_dma_pmd_release(const char *name)
 {
@@ -390,7 +390,7 @@ rte_dma_pmd_release(const char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_get_dev_id_by_name)
+RTE_EXPORT_SYMBOL(rte_dma_get_dev_id_by_name);
 int
 rte_dma_get_dev_id_by_name(const char *name)
 {
@@ -406,7 +406,7 @@ rte_dma_get_dev_id_by_name(const char *name)
 	return dev->data->dev_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_is_valid)
+RTE_EXPORT_SYMBOL(rte_dma_is_valid);
 bool
 rte_dma_is_valid(int16_t dev_id)
 {
@@ -415,7 +415,7 @@ rte_dma_is_valid(int16_t dev_id)
 		rte_dma_devices[dev_id].state != RTE_DMA_DEV_UNUSED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_get_dev_by_id)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_dma_pmd_get_dev_by_id);
 struct rte_dma_dev *
 rte_dma_pmd_get_dev_by_id(int16_t dev_id)
 {
@@ -425,7 +425,7 @@ rte_dma_pmd_get_dev_by_id(int16_t dev_id)
 	return &rte_dma_devices[dev_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_count_avail)
+RTE_EXPORT_SYMBOL(rte_dma_count_avail);
 uint16_t
 rte_dma_count_avail(void)
 {
@@ -443,7 +443,7 @@ rte_dma_count_avail(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_info_get)
+RTE_EXPORT_SYMBOL(rte_dma_info_get);
 int
 rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info)
 {
@@ -475,7 +475,7 @@ rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_configure)
+RTE_EXPORT_SYMBOL(rte_dma_configure);
 int
 rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf)
 {
@@ -533,7 +533,7 @@ rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_start)
+RTE_EXPORT_SYMBOL(rte_dma_start);
 int
 rte_dma_start(int16_t dev_id)
 {
@@ -567,7 +567,7 @@ rte_dma_start(int16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stop)
+RTE_EXPORT_SYMBOL(rte_dma_stop);
 int
 rte_dma_stop(int16_t dev_id)
 {
@@ -596,7 +596,7 @@ rte_dma_stop(int16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_close)
+RTE_EXPORT_SYMBOL(rte_dma_close);
 int
 rte_dma_close(int16_t dev_id)
 {
@@ -625,7 +625,7 @@ rte_dma_close(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_vchan_setup)
+RTE_EXPORT_SYMBOL(rte_dma_vchan_setup);
 int
 rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan,
 		    const struct rte_dma_vchan_conf *conf)
@@ -720,7 +720,7 @@ rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stats_get)
+RTE_EXPORT_SYMBOL(rte_dma_stats_get);
 int
 rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats)
 {
@@ -743,7 +743,7 @@ rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats)
 	return dev->dev_ops->stats_get(dev, vchan, stats, sizeof(struct rte_dma_stats));
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_stats_reset)
+RTE_EXPORT_SYMBOL(rte_dma_stats_reset);
 int
 rte_dma_stats_reset(int16_t dev_id, uint16_t vchan)
 {
@@ -769,7 +769,7 @@ rte_dma_stats_reset(int16_t dev_id, uint16_t vchan)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_vchan_status)
+RTE_EXPORT_SYMBOL(rte_dma_vchan_status);
 int
 rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *status)
 {
@@ -837,7 +837,7 @@ dma_dump_capability(FILE *f, uint64_t dev_capa)
 	(void)fprintf(f, "\n");
 }
 
-RTE_EXPORT_SYMBOL(rte_dma_dump)
+RTE_EXPORT_SYMBOL(rte_dma_dump);
 int
 rte_dma_dump(int16_t dev_id, FILE *f)
 {
diff --git a/lib/dmadev/rte_dmadev_trace_points.c b/lib/dmadev/rte_dmadev_trace_points.c
index 1c8998fb98..f5103d27da 100644
--- a/lib/dmadev/rte_dmadev_trace_points.c
+++ b/lib/dmadev/rte_dmadev_trace_points.c
@@ -37,30 +37,30 @@ RTE_TRACE_POINT_REGISTER(rte_dma_trace_vchan_status,
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_dump,
 	lib.dmadev.dump)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy,
 	lib.dmadev.copy)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy_sg, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_copy_sg, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy_sg,
 	lib.dmadev.copy_sg)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_fill, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_fill, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_fill,
 	lib.dmadev.fill)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_submit, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_submit, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_submit,
 	lib.dmadev.submit)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed,
 	lib.dmadev.completed)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed_status, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_completed_status, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed_status,
 	lib.dmadev.completed_status)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_burst_capacity, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_dma_trace_burst_capacity, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_dma_trace_burst_capacity,
 	lib.dmadev.burst_capacity)
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 32243f293a..2a644d720c 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -136,7 +136,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -154,7 +154,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -163,7 +163,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/arm/rte_hypervisor.c b/lib/eal/arm/rte_hypervisor.c
index 51b224fb94..45e6ef667b 100644
--- a/lib/eal/arm/rte_hypervisor.c
+++ b/lib/eal/arm/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 4826b370ea..b9c3ab30a6 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -27,7 +27,7 @@ RTE_INIT(rte_power_intrinsics_init)
  * This function uses WFE/WFET instruction to make lcore suspend
  * execution on ARM.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -80,7 +80,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on ARM.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -94,7 +94,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
  * on ARM.
  * Note that lcore_id is not used here.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -108,7 +108,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 #endif /* RTE_ARCH_64 */
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index 0a2311a342..6df7a6ab4d 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -17,14 +17,14 @@
 static struct rte_bus_list rte_bus_list =
 	TAILQ_HEAD_INITIALIZER(rte_bus_list);
 
-RTE_EXPORT_SYMBOL(rte_bus_name)
+RTE_EXPORT_SYMBOL(rte_bus_name);
 const char *
 rte_bus_name(const struct rte_bus *bus)
 {
 	return bus->name;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_register);
 void
 rte_bus_register(struct rte_bus *bus)
 {
@@ -41,7 +41,7 @@ rte_bus_register(struct rte_bus *bus)
 	EAL_LOG(DEBUG, "Registered [%s] bus.", rte_bus_name(bus));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_unregister);
 void
 rte_bus_unregister(struct rte_bus *bus)
 {
@@ -50,7 +50,7 @@ rte_bus_unregister(struct rte_bus *bus)
 }
 
 /* Scan all the buses for registered devices */
-RTE_EXPORT_SYMBOL(rte_bus_scan)
+RTE_EXPORT_SYMBOL(rte_bus_scan);
 int
 rte_bus_scan(void)
 {
@@ -68,7 +68,7 @@ rte_bus_scan(void)
 }
 
 /* Probe all devices of all buses */
-RTE_EXPORT_SYMBOL(rte_bus_probe)
+RTE_EXPORT_SYMBOL(rte_bus_probe);
 int
 rte_bus_probe(void)
 {
@@ -130,7 +130,7 @@ bus_dump_one(FILE *f, struct rte_bus *bus)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_dump)
+RTE_EXPORT_SYMBOL(rte_bus_dump);
 void
 rte_bus_dump(FILE *f)
 {
@@ -147,7 +147,7 @@ rte_bus_dump(FILE *f)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find)
+RTE_EXPORT_SYMBOL(rte_bus_find);
 struct rte_bus *
 rte_bus_find(const struct rte_bus *start, rte_bus_cmp_t cmp,
 	     const void *data)
@@ -183,7 +183,7 @@ bus_find_device(const struct rte_bus *bus, const void *_dev)
 	return dev == NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find_by_device)
+RTE_EXPORT_SYMBOL(rte_bus_find_by_device);
 struct rte_bus *
 rte_bus_find_by_device(const struct rte_device *dev)
 {
@@ -198,7 +198,7 @@ cmp_bus_name(const struct rte_bus *bus, const void *_name)
 	return strcmp(rte_bus_name(bus), name);
 }
 
-RTE_EXPORT_SYMBOL(rte_bus_find_by_name)
+RTE_EXPORT_SYMBOL(rte_bus_find_by_name);
 struct rte_bus *
 rte_bus_find_by_name(const char *busname)
 {
@@ -230,7 +230,7 @@ rte_bus_find_by_device_name(const char *str)
 /*
  * Get iommu class of devices on the bus.
  */
-RTE_EXPORT_SYMBOL(rte_bus_get_iommu_class)
+RTE_EXPORT_SYMBOL(rte_bus_get_iommu_class);
 enum rte_iova_mode
 rte_bus_get_iommu_class(void)
 {
diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c
index 0f10c6894b..787f3d53f5 100644
--- a/lib/eal/common/eal_common_class.c
+++ b/lib/eal/common/eal_common_class.c
@@ -15,7 +15,7 @@
 static struct rte_class_list rte_class_list =
 	TAILQ_HEAD_INITIALIZER(rte_class_list);
 
-RTE_EXPORT_SYMBOL(rte_class_register)
+RTE_EXPORT_SYMBOL(rte_class_register);
 void
 rte_class_register(struct rte_class *class)
 {
@@ -26,7 +26,7 @@ rte_class_register(struct rte_class *class)
 	EAL_LOG(DEBUG, "Registered [%s] device class.", class->name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_unregister)
+RTE_EXPORT_SYMBOL(rte_class_unregister);
 void
 rte_class_unregister(struct rte_class *class)
 {
@@ -34,7 +34,7 @@ rte_class_unregister(struct rte_class *class)
 	EAL_LOG(DEBUG, "Unregistered [%s] device class.", class->name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_find)
+RTE_EXPORT_SYMBOL(rte_class_find);
 struct rte_class *
 rte_class_find(const struct rte_class *start, rte_class_cmp_t cmp,
 	       const void *data)
@@ -61,7 +61,7 @@ cmp_class_name(const struct rte_class *class, const void *_name)
 	return strcmp(class->name, name);
 }
 
-RTE_EXPORT_SYMBOL(rte_class_find_by_name)
+RTE_EXPORT_SYMBOL(rte_class_find_by_name);
 struct rte_class *
 rte_class_find_by_name(const char *name)
 {
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 7fc7611a07..8804b9f171 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -29,7 +29,7 @@ static char runtime_dir[PATH_MAX];
 /* internal configuration */
 static struct internal_config internal_config;
 
-RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir)
+RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir);
 const char *
 rte_eal_get_runtime_dir(void)
 {
@@ -61,7 +61,7 @@ eal_get_internal_configuration(void)
 	return &internal_config;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_iova_mode)
+RTE_EXPORT_SYMBOL(rte_eal_iova_mode);
 enum rte_iova_mode
 rte_eal_iova_mode(void)
 {
@@ -69,7 +69,7 @@ rte_eal_iova_mode(void)
 }
 
 /* Get the EAL base address */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_baseaddr)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_baseaddr);
 uint64_t
 rte_eal_get_baseaddr(void)
 {
@@ -78,7 +78,7 @@ rte_eal_get_baseaddr(void)
 		       eal_get_baseaddr();
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_process_type)
+RTE_EXPORT_SYMBOL(rte_eal_process_type);
 enum rte_proc_type_t
 rte_eal_process_type(void)
 {
@@ -86,7 +86,7 @@ rte_eal_process_type(void)
 }
 
 /* Return user provided mbuf pool ops name */
-RTE_EXPORT_SYMBOL(rte_eal_mbuf_user_pool_ops)
+RTE_EXPORT_SYMBOL(rte_eal_mbuf_user_pool_ops);
 const char *
 rte_eal_mbuf_user_pool_ops(void)
 {
@@ -94,14 +94,14 @@ rte_eal_mbuf_user_pool_ops(void)
 }
 
 /* return non-zero if hugepages are enabled. */
-RTE_EXPORT_SYMBOL(rte_eal_has_hugepages)
+RTE_EXPORT_SYMBOL(rte_eal_has_hugepages);
 int
 rte_eal_has_hugepages(void)
 {
 	return !internal_config.no_hugetlbfs;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_has_pci)
+RTE_EXPORT_SYMBOL(rte_eal_has_pci);
 int
 rte_eal_has_pci(void)
 {
diff --git a/lib/eal/common/eal_common_cpuflags.c b/lib/eal/common/eal_common_cpuflags.c
index cbd49a151b..b86fd01b89 100644
--- a/lib/eal/common/eal_common_cpuflags.c
+++ b/lib/eal/common/eal_common_cpuflags.c
@@ -8,7 +8,7 @@
 #include <rte_common.h>
 #include <rte_cpuflags.h>
 
-RTE_EXPORT_SYMBOL(rte_cpu_is_supported)
+RTE_EXPORT_SYMBOL(rte_cpu_is_supported);
 int
 rte_cpu_is_supported(void)
 {
diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c
index 7a42546da2..af1d7353df 100644
--- a/lib/eal/common/eal_common_debug.c
+++ b/lib/eal/common/eal_common_debug.c
@@ -14,7 +14,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(__rte_panic)
+RTE_EXPORT_SYMBOL(__rte_panic);
 void
 __rte_panic(const char *funcname, const char *format, ...)
 {
@@ -32,7 +32,7 @@ __rte_panic(const char *funcname, const char *format, ...)
  * Like rte_panic this terminates the application. However, no traceback is
  * provided and no core-dump is generated.
  */
-RTE_EXPORT_SYMBOL(rte_exit)
+RTE_EXPORT_SYMBOL(rte_exit);
 void
 rte_exit(int exit_code, const char *format, ...)
 {
diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c
index 7185de0cb9..5937db32ca 100644
--- a/lib/eal/common/eal_common_dev.c
+++ b/lib/eal/common/eal_common_dev.c
@@ -21,49 +21,49 @@
 #include "eal_private.h"
 #include "hotplug_mp.h"
 
-RTE_EXPORT_SYMBOL(rte_driver_name)
+RTE_EXPORT_SYMBOL(rte_driver_name);
 const char *
 rte_driver_name(const struct rte_driver *driver)
 {
 	return driver->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_bus)
+RTE_EXPORT_SYMBOL(rte_dev_bus);
 const struct rte_bus *
 rte_dev_bus(const struct rte_device *dev)
 {
 	return dev->bus;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_bus_info)
+RTE_EXPORT_SYMBOL(rte_dev_bus_info);
 const char *
 rte_dev_bus_info(const struct rte_device *dev)
 {
 	return dev->bus_info;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_devargs)
+RTE_EXPORT_SYMBOL(rte_dev_devargs);
 const struct rte_devargs *
 rte_dev_devargs(const struct rte_device *dev)
 {
 	return dev->devargs;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_driver)
+RTE_EXPORT_SYMBOL(rte_dev_driver);
 const struct rte_driver *
 rte_dev_driver(const struct rte_device *dev)
 {
 	return dev->driver;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_name)
+RTE_EXPORT_SYMBOL(rte_dev_name);
 const char *
 rte_dev_name(const struct rte_device *dev)
 {
 	return dev->name;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_numa_node)
+RTE_EXPORT_SYMBOL(rte_dev_numa_node);
 int
 rte_dev_numa_node(const struct rte_device *dev)
 {
@@ -122,7 +122,7 @@ static int cmp_dev_name(const struct rte_device *dev, const void *_name)
 	return strcmp(dev->name, name);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_is_probed)
+RTE_EXPORT_SYMBOL(rte_dev_is_probed);
 int
 rte_dev_is_probed(const struct rte_device *dev)
 {
@@ -155,7 +155,7 @@ build_devargs(const char *busname, const char *devname,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_hotplug_add)
+RTE_EXPORT_SYMBOL(rte_eal_hotplug_add);
 int
 rte_eal_hotplug_add(const char *busname, const char *devname,
 		    const char *drvargs)
@@ -240,7 +240,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_probe)
+RTE_EXPORT_SYMBOL(rte_dev_probe);
 int
 rte_dev_probe(const char *devargs)
 {
@@ -334,7 +334,7 @@ rte_dev_probe(const char *devargs)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_hotplug_remove)
+RTE_EXPORT_SYMBOL(rte_eal_hotplug_remove);
 int
 rte_eal_hotplug_remove(const char *busname, const char *devname)
 {
@@ -378,7 +378,7 @@ local_dev_remove(struct rte_device *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_remove)
+RTE_EXPORT_SYMBOL(rte_dev_remove);
 int
 rte_dev_remove(struct rte_device *dev)
 {
@@ -476,7 +476,7 @@ rte_dev_remove(struct rte_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_register)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_register);
 int
 rte_dev_event_callback_register(const char *device_name,
 				rte_dev_event_cb_fn cb_fn,
@@ -545,7 +545,7 @@ rte_dev_event_callback_register(const char *device_name,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_unregister);
 int
 rte_dev_event_callback_unregister(const char *device_name,
 				  rte_dev_event_cb_fn cb_fn,
@@ -599,7 +599,7 @@ rte_dev_event_callback_unregister(const char *device_name,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_callback_process)
+RTE_EXPORT_SYMBOL(rte_dev_event_callback_process);
 void
 rte_dev_event_callback_process(const char *device_name,
 			       enum rte_dev_event_type event)
@@ -626,7 +626,7 @@ rte_dev_event_callback_process(const char *device_name,
 	rte_spinlock_unlock(&dev_event_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_iterator_init)
+RTE_EXPORT_SYMBOL(rte_dev_iterator_init);
 int
 rte_dev_iterator_init(struct rte_dev_iterator *it,
 		      const char *dev_str)
@@ -779,7 +779,7 @@ bus_next_dev_cmp(const struct rte_bus *bus,
 	it->device = dev;
 	return dev == NULL;
 }
-RTE_EXPORT_SYMBOL(rte_dev_iterator_next)
+RTE_EXPORT_SYMBOL(rte_dev_iterator_next);
 struct rte_device *
 rte_dev_iterator_next(struct rte_dev_iterator *it)
 {
@@ -824,7 +824,7 @@ rte_dev_iterator_next(struct rte_dev_iterator *it)
 	return it->device;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_dma_map)
+RTE_EXPORT_SYMBOL(rte_dev_dma_map);
 int
 rte_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova,
 		size_t len)
@@ -842,7 +842,7 @@ rte_dev_dma_map(struct rte_device *dev, void *addr, uint64_t iova,
 	return dev->bus->dma_map(dev, addr, iova, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_dev_dma_unmap);
 int
 rte_dev_dma_unmap(struct rte_device *dev, void *addr, uint64_t iova,
 		  size_t len)
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index c523429d67..c72c60ff4b 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -181,7 +181,7 @@ bus_name_cmp(const struct rte_bus *bus, const void *name)
 	return strncmp(bus->name, name, strlen(bus->name));
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_parse)
+RTE_EXPORT_SYMBOL(rte_devargs_parse);
 int
 rte_devargs_parse(struct rte_devargs *da, const char *dev)
 {
@@ -248,7 +248,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_parsef)
+RTE_EXPORT_SYMBOL(rte_devargs_parsef);
 int
 rte_devargs_parsef(struct rte_devargs *da, const char *format, ...)
 {
@@ -283,7 +283,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_reset)
+RTE_EXPORT_SYMBOL(rte_devargs_reset);
 void
 rte_devargs_reset(struct rte_devargs *da)
 {
@@ -293,7 +293,7 @@ rte_devargs_reset(struct rte_devargs *da)
 	da->data = NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_insert)
+RTE_EXPORT_SYMBOL(rte_devargs_insert);
 int
 rte_devargs_insert(struct rte_devargs **da)
 {
@@ -325,7 +325,7 @@ rte_devargs_insert(struct rte_devargs **da)
 }
 
 /* store in allowed list parameter for later parsing */
-RTE_EXPORT_SYMBOL(rte_devargs_add)
+RTE_EXPORT_SYMBOL(rte_devargs_add);
 int
 rte_devargs_add(enum rte_devtype devtype, const char *devargs_str)
 {
@@ -362,7 +362,7 @@ rte_devargs_add(enum rte_devtype devtype, const char *devargs_str)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_devargs_remove)
+RTE_EXPORT_SYMBOL(rte_devargs_remove);
 int
 rte_devargs_remove(struct rte_devargs *devargs)
 {
@@ -385,7 +385,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 }
 
 /* count the number of devices of a specified type */
-RTE_EXPORT_SYMBOL(rte_devargs_type_count)
+RTE_EXPORT_SYMBOL(rte_devargs_type_count);
 unsigned int
 rte_devargs_type_count(enum rte_devtype devtype)
 {
@@ -401,7 +401,7 @@ rte_devargs_type_count(enum rte_devtype devtype)
 }
 
 /* dump the user devices on the console */
-RTE_EXPORT_SYMBOL(rte_devargs_dump)
+RTE_EXPORT_SYMBOL(rte_devargs_dump);
 void
 rte_devargs_dump(FILE *f)
 {
@@ -416,7 +416,7 @@ rte_devargs_dump(FILE *f)
 }
 
 /* bus-aware rte_devargs iterator. */
-RTE_EXPORT_SYMBOL(rte_devargs_next)
+RTE_EXPORT_SYMBOL(rte_devargs_next);
 struct rte_devargs *
 rte_devargs_next(const char *busname, const struct rte_devargs *start)
 {
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index 3f933c3f7b..256a041789 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -17,10 +17,10 @@
 #define strerror_r(errnum, buf, buflen) strerror_s(buf, buflen, errnum)
 #endif
 
-RTE_EXPORT_SYMBOL(per_lcore__rte_errno)
+RTE_EXPORT_SYMBOL(per_lcore__rte_errno);
 RTE_DEFINE_PER_LCORE(int, _rte_errno);
 
-RTE_EXPORT_SYMBOL(rte_strerror)
+RTE_EXPORT_SYMBOL(rte_strerror);
 const char *
 rte_strerror(int errnum)
 {
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 8bdcefb717..4fe80cee1e 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -686,7 +686,7 @@ fully_validate(const char *name, unsigned int elt_sz, unsigned int len)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_init)
+RTE_EXPORT_SYMBOL(rte_fbarray_init);
 int
 rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
 		unsigned int elt_sz)
@@ -813,7 +813,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_attach)
+RTE_EXPORT_SYMBOL(rte_fbarray_attach);
 int
 rte_fbarray_attach(struct rte_fbarray *arr)
 {
@@ -902,7 +902,7 @@ rte_fbarray_attach(struct rte_fbarray *arr)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_detach)
+RTE_EXPORT_SYMBOL(rte_fbarray_detach);
 int
 rte_fbarray_detach(struct rte_fbarray *arr)
 {
@@ -956,7 +956,7 @@ rte_fbarray_detach(struct rte_fbarray *arr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_destroy)
+RTE_EXPORT_SYMBOL(rte_fbarray_destroy);
 int
 rte_fbarray_destroy(struct rte_fbarray *arr)
 {
@@ -1043,7 +1043,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_get)
+RTE_EXPORT_SYMBOL(rte_fbarray_get);
 void *
 rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx)
 {
@@ -1063,21 +1063,21 @@ rte_fbarray_get(const struct rte_fbarray *arr, unsigned int idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_set_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_set_used);
 int
 rte_fbarray_set_used(struct rte_fbarray *arr, unsigned int idx)
 {
 	return set_used(arr, idx, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_set_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_set_free);
 int
 rte_fbarray_set_free(struct rte_fbarray *arr, unsigned int idx)
 {
 	return set_used(arr, idx, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_is_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_is_used);
 int
 rte_fbarray_is_used(struct rte_fbarray *arr, unsigned int idx)
 {
@@ -1147,28 +1147,28 @@ fbarray_find(struct rte_fbarray *arr, unsigned int start, bool next, bool used)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_free);
 int
 rte_fbarray_find_next_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_used);
 int
 rte_fbarray_find_next_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_free);
 int
 rte_fbarray_find_prev_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_used);
 int
 rte_fbarray_find_prev_used(struct rte_fbarray *arr, unsigned int start)
 {
@@ -1227,7 +1227,7 @@ fbarray_find_n(struct rte_fbarray *arr, unsigned int start, unsigned int n,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_free);
 int
 rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1235,7 +1235,7 @@ rte_fbarray_find_next_n_free(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_next_n_used);
 int
 rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1243,7 +1243,7 @@ rte_fbarray_find_next_n_used(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_free);
 int
 rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1251,7 +1251,7 @@ rte_fbarray_find_prev_n_free(struct rte_fbarray *arr, unsigned int start,
 	return fbarray_find_n(arr, start, n, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_prev_n_used);
 int
 rte_fbarray_find_prev_n_used(struct rte_fbarray *arr, unsigned int start,
 		unsigned int n)
@@ -1395,28 +1395,28 @@ fbarray_find_biggest(struct rte_fbarray *arr, unsigned int start, bool used,
 	return biggest_idx;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_free);
 int
 rte_fbarray_find_biggest_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_biggest_used);
 int
 rte_fbarray_find_biggest_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_free);
 int
 rte_fbarray_find_rev_biggest_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_biggest(arr, start, false, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_biggest_used);
 int
 rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start)
 {
@@ -1424,35 +1424,35 @@ rte_fbarray_find_rev_biggest_used(struct rte_fbarray *arr, unsigned int start)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_free);
 int
 rte_fbarray_find_contig_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, true, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_contig_used);
 int
 rte_fbarray_find_contig_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, true, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_free)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_free);
 int
 rte_fbarray_find_rev_contig_free(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, false, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_used)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_rev_contig_used);
 int
 rte_fbarray_find_rev_contig_used(struct rte_fbarray *arr, unsigned int start)
 {
 	return fbarray_find_contig(arr, start, false, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_find_idx)
+RTE_EXPORT_SYMBOL(rte_fbarray_find_idx);
 int
 rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt)
 {
@@ -1479,7 +1479,7 @@ rte_fbarray_find_idx(const struct rte_fbarray *arr, const void *elt)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_fbarray_dump_metadata)
+RTE_EXPORT_SYMBOL(rte_fbarray_dump_metadata);
 void
 rte_fbarray_dump_metadata(struct rte_fbarray *arr, FILE *f)
 {
diff --git a/lib/eal/common/eal_common_hexdump.c b/lib/eal/common/eal_common_hexdump.c
index 28159f298a..e560aad214 100644
--- a/lib/eal/common/eal_common_hexdump.c
+++ b/lib/eal/common/eal_common_hexdump.c
@@ -8,7 +8,7 @@
 
 #define LINE_LEN 128
 
-RTE_EXPORT_SYMBOL(rte_hexdump)
+RTE_EXPORT_SYMBOL(rte_hexdump);
 void
 rte_hexdump(FILE *f, const char *title, const void *buf, unsigned int len)
 {
@@ -47,7 +47,7 @@ rte_hexdump(FILE *f, const char *title, const void *buf, unsigned int len)
 	fflush(f);
 }
 
-RTE_EXPORT_SYMBOL(rte_memdump)
+RTE_EXPORT_SYMBOL(rte_memdump);
 void
 rte_memdump(FILE *f, const char *title, const void *buf, unsigned int len)
 {
diff --git a/lib/eal/common/eal_common_hypervisor.c b/lib/eal/common/eal_common_hypervisor.c
index 7158fd25de..6231294eab 100644
--- a/lib/eal/common/eal_common_hypervisor.c
+++ b/lib/eal/common/eal_common_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get_name)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get_name);
 const char *
 rte_hypervisor_get_name(enum rte_hypervisor id)
 {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index b42fa862f3..4775d894c3 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -30,7 +30,7 @@
 #define RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags) \
 	(!!(flags & RTE_INTR_INSTANCE_F_SHARED))
 
-RTE_EXPORT_SYMBOL(rte_intr_instance_alloc)
+RTE_EXPORT_SYMBOL(rte_intr_instance_alloc);
 struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
 {
 	struct rte_intr_handle *intr_handle;
@@ -98,7 +98,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_dup)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_dup);
 struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
 {
 	struct rte_intr_handle *intr_handle;
@@ -124,7 +124,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
 	return intr_handle;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_event_list_update)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_event_list_update);
 int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
 {
 	struct rte_epoll_event *tmp_elist;
@@ -175,7 +175,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_instance_free)
+RTE_EXPORT_SYMBOL(rte_intr_instance_free);
 void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
 {
 	if (intr_handle == NULL)
@@ -191,7 +191,7 @@ void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_fd_set)
+RTE_EXPORT_SYMBOL(rte_intr_fd_set);
 int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -203,7 +203,7 @@ int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_fd_get)
+RTE_EXPORT_SYMBOL(rte_intr_fd_get);
 int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -213,7 +213,7 @@ int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_type_set)
+RTE_EXPORT_SYMBOL(rte_intr_type_set);
 int rte_intr_type_set(struct rte_intr_handle *intr_handle,
 	enum rte_intr_handle_type type)
 {
@@ -226,7 +226,7 @@ int rte_intr_type_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_type_get)
+RTE_EXPORT_SYMBOL(rte_intr_type_get);
 enum rte_intr_handle_type rte_intr_type_get(
 	const struct rte_intr_handle *intr_handle)
 {
@@ -237,7 +237,7 @@ enum rte_intr_handle_type rte_intr_type_get(
 	return RTE_INTR_HANDLE_UNKNOWN;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_set);
 int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -249,7 +249,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dev_fd_get);
 int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -259,7 +259,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_set);
 int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
 				 int max_intr)
 {
@@ -280,7 +280,7 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_max_intr_get);
 int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -290,7 +290,7 @@ int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_set);
 int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -302,7 +302,7 @@ int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_efd_get);
 int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -312,7 +312,7 @@ int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_intr_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_nb_intr_get);
 int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -322,7 +322,7 @@ int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_set);
 int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
 	uint8_t efd_counter_size)
 {
@@ -335,7 +335,7 @@ int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_counter_size_get);
 int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -345,7 +345,7 @@ int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_get);
 int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
 	int index)
 {
@@ -363,7 +363,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efds_index_set);
 int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
 	int index, int fd)
 {
@@ -383,7 +383,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_get);
 struct rte_epoll_event *rte_intr_elist_index_get(
 	struct rte_intr_handle *intr_handle, int index)
 {
@@ -401,7 +401,7 @@ struct rte_epoll_event *rte_intr_elist_index_get(
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_elist_index_set);
 int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
 	int index, struct rte_epoll_event elist)
 {
@@ -421,7 +421,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_alloc);
 int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
 	const char *name, int size)
 {
@@ -455,7 +455,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_get);
 int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
 				int index)
 {
@@ -473,7 +473,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_index_set);
 int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
 				int index, int vec)
 {
@@ -493,7 +493,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
 	return -rte_errno;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_vec_list_free);
 void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
 {
 	if (intr_handle == NULL)
@@ -506,7 +506,7 @@ void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
 	intr_handle->vec_list_size = 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_get);
 void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
 {
 	CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -516,7 +516,7 @@ void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_set)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_instance_windows_handle_set);
 int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
 	void *windows_handle)
 {
diff --git a/lib/eal/common/eal_common_launch.c b/lib/eal/common/eal_common_launch.c
index a7deac6ecd..a408e44bbd 100644
--- a/lib/eal/common/eal_common_launch.c
+++ b/lib/eal/common/eal_common_launch.c
@@ -16,7 +16,7 @@
 /*
  * Wait until a lcore finished its job.
  */
-RTE_EXPORT_SYMBOL(rte_eal_wait_lcore)
+RTE_EXPORT_SYMBOL(rte_eal_wait_lcore);
 int
 rte_eal_wait_lcore(unsigned worker_id)
 {
@@ -32,7 +32,7 @@ rte_eal_wait_lcore(unsigned worker_id)
  * function f with argument arg. Once the execution is done, the
  * remote lcore switches to WAIT state.
  */
-RTE_EXPORT_SYMBOL(rte_eal_remote_launch)
+RTE_EXPORT_SYMBOL(rte_eal_remote_launch);
 int
 rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
 {
@@ -64,7 +64,7 @@ rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
  * rte_eal_remote_launch() for all of them. If call_main is true
  * (set to CALL_MAIN), also call the function on the main lcore.
  */
-RTE_EXPORT_SYMBOL(rte_eal_mp_remote_launch)
+RTE_EXPORT_SYMBOL(rte_eal_mp_remote_launch);
 int
 rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
 			 enum rte_rmt_call_main_t call_main)
@@ -94,7 +94,7 @@ rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
 /*
  * Return the state of the lcore identified by worker_id.
  */
-RTE_EXPORT_SYMBOL(rte_eal_get_lcore_state)
+RTE_EXPORT_SYMBOL(rte_eal_get_lcore_state);
 enum rte_lcore_state_t
 rte_eal_get_lcore_state(unsigned lcore_id)
 {
@@ -105,7 +105,7 @@ rte_eal_get_lcore_state(unsigned lcore_id)
  * Do a rte_eal_wait_lcore() for every lcore. The return values are
  * ignored.
  */
-RTE_EXPORT_SYMBOL(rte_eal_mp_wait_lcore)
+RTE_EXPORT_SYMBOL(rte_eal_mp_wait_lcore);
 void
 rte_eal_mp_wait_lcore(void)
 {
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 5c8b0f9aa2..b031c37caf 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -19,19 +19,19 @@
 #include "eal_private.h"
 #include "eal_thread.h"
 
-RTE_EXPORT_SYMBOL(rte_get_main_lcore)
+RTE_EXPORT_SYMBOL(rte_get_main_lcore);
 unsigned int rte_get_main_lcore(void)
 {
 	return rte_eal_get_configuration()->main_lcore;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_count)
+RTE_EXPORT_SYMBOL(rte_lcore_count);
 unsigned int rte_lcore_count(void)
 {
 	return rte_eal_get_configuration()->lcore_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_index)
+RTE_EXPORT_SYMBOL(rte_lcore_index);
 int rte_lcore_index(int lcore_id)
 {
 	if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -47,7 +47,7 @@ int rte_lcore_index(int lcore_id)
 	return lcore_config[lcore_id].core_index;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id)
+RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id);
 int rte_lcore_to_cpu_id(int lcore_id)
 {
 	if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -63,13 +63,13 @@ int rte_lcore_to_cpu_id(int lcore_id)
 	return lcore_config[lcore_id].core_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_cpuset)
+RTE_EXPORT_SYMBOL(rte_lcore_cpuset);
 rte_cpuset_t rte_lcore_cpuset(unsigned int lcore_id)
 {
 	return lcore_config[lcore_id].cpuset;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_lcore_role)
+RTE_EXPORT_SYMBOL(rte_eal_lcore_role);
 enum rte_lcore_role_t
 rte_eal_lcore_role(unsigned int lcore_id)
 {
@@ -80,7 +80,7 @@ rte_eal_lcore_role(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id];
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_has_role)
+RTE_EXPORT_SYMBOL(rte_lcore_has_role);
 int
 rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
 {
@@ -92,7 +92,7 @@ rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
 	return cfg->lcore_role[lcore_id] == role;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_is_enabled)
+RTE_EXPORT_SYMBOL(rte_lcore_is_enabled);
 int rte_lcore_is_enabled(unsigned int lcore_id)
 {
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -102,7 +102,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
 	return cfg->lcore_role[lcore_id] == ROLE_RTE;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_next_lcore)
+RTE_EXPORT_SYMBOL(rte_get_next_lcore);
 unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 {
 	i++;
@@ -122,7 +122,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id)
+RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id);
 unsigned int
 rte_lcore_to_socket_id(unsigned int lcore_id)
 {
@@ -231,7 +231,7 @@ rte_eal_cpu_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_socket_count)
+RTE_EXPORT_SYMBOL(rte_socket_count);
 unsigned int
 rte_socket_count(void)
 {
@@ -239,7 +239,7 @@ rte_socket_count(void)
 	return config->numa_node_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_socket_id_by_idx)
+RTE_EXPORT_SYMBOL(rte_socket_id_by_idx);
 int
 rte_socket_id_by_idx(unsigned int idx)
 {
@@ -289,7 +289,7 @@ free_callback(struct lcore_callback *callback)
 	free(callback);
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_callback_register)
+RTE_EXPORT_SYMBOL(rte_lcore_callback_register);
 void *
 rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
 	rte_lcore_uninit_cb uninit, void *arg)
@@ -340,7 +340,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
 	return callback;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_lcore_callback_unregister);
 void
 rte_lcore_callback_unregister(void *handle)
 {
@@ -426,7 +426,7 @@ eal_lcore_non_eal_release(unsigned int lcore_id)
 	rte_rwlock_write_unlock(&lcore_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_iterate)
+RTE_EXPORT_SYMBOL(rte_lcore_iterate);
 int
 rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
 {
@@ -463,7 +463,7 @@ lcore_role_str(enum rte_lcore_role_t role)
 
 static rte_lcore_usage_cb lcore_usage_cb;
 
-RTE_EXPORT_SYMBOL(rte_lcore_register_usage_cb)
+RTE_EXPORT_SYMBOL(rte_lcore_register_usage_cb);
 void
 rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
 {
@@ -510,7 +510,7 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_lcore_dump)
+RTE_EXPORT_SYMBOL(rte_lcore_dump);
 void
 rte_lcore_dump(FILE *f)
 {
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
index 8a7920ed0f..bcf88ac661 100644
--- a/lib/eal/common/eal_common_lcore_var.c
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -76,7 +76,7 @@ lcore_var_alloc(size_t size, size_t align)
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_lcore_var_alloc, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_lcore_var_alloc, 24.11);
 void *
 rte_lcore_var_alloc(size_t size, size_t align)
 {
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index 84ee3f3959..f82aca83b5 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -70,140 +70,140 @@ eal_mcfg_update_from_internal(void)
 	mcfg->version = RTE_VERSION;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mem_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mem_get_lock);
 rte_rwlock_t *
 rte_mcfg_mem_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->memory_hotplug_lock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_lock);
 void
 rte_mcfg_mem_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_unlock);
 void
 rte_mcfg_mem_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_lock);
 void
 rte_mcfg_mem_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mem_write_unlock);
 void
 rte_mcfg_mem_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_mem_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_tailq_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_tailq_get_lock);
 rte_rwlock_t *
 rte_mcfg_tailq_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->qlock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_lock);
 void
 rte_mcfg_tailq_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_unlock);
 void
 rte_mcfg_tailq_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_lock);
 void
 rte_mcfg_tailq_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_tailq_write_unlock);
 void
 rte_mcfg_tailq_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_tailq_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mempool_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mempool_get_lock);
 rte_rwlock_t *
 rte_mcfg_mempool_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->mplock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_lock);
 void
 rte_mcfg_mempool_read_lock(void)
 {
 	rte_rwlock_read_lock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_unlock);
 void
 rte_mcfg_mempool_read_unlock(void)
 {
 	rte_rwlock_read_unlock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_lock);
 void
 rte_mcfg_mempool_write_lock(void)
 {
 	rte_rwlock_write_lock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_mempool_write_unlock);
 void
 rte_mcfg_mempool_write_unlock(void)
 {
 	rte_rwlock_write_unlock(rte_mcfg_mempool_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_timer_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_timer_get_lock);
 rte_spinlock_t *
 rte_mcfg_timer_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->tlock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_timer_lock)
+RTE_EXPORT_SYMBOL(rte_mcfg_timer_lock);
 void
 rte_mcfg_timer_lock(void)
 {
 	rte_spinlock_lock(rte_mcfg_timer_get_lock());
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_timer_unlock)
+RTE_EXPORT_SYMBOL(rte_mcfg_timer_unlock);
 void
 rte_mcfg_timer_unlock(void)
 {
 	rte_spinlock_unlock(rte_mcfg_timer_get_lock());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_ethdev_get_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_ethdev_get_lock);
 rte_spinlock_t *
 rte_mcfg_ethdev_get_lock(void)
 {
 	return &rte_eal_get_configuration()->mem_config->ethdev_lock;
 }
 
-RTE_EXPORT_SYMBOL(rte_mcfg_get_single_file_segments)
+RTE_EXPORT_SYMBOL(rte_mcfg_get_single_file_segments);
 bool
 rte_mcfg_get_single_file_segments(void)
 {
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index 38ccc734e8..1e55c75570 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -343,7 +343,7 @@ virt2memseg_list(const void *addr)
 	return msl;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2memseg_list)
+RTE_EXPORT_SYMBOL(rte_mem_virt2memseg_list);
 struct rte_memseg_list *
 rte_mem_virt2memseg_list(const void *addr)
 {
@@ -381,7 +381,7 @@ find_virt_legacy(const struct rte_memseg_list *msl __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_iova2virt)
+RTE_EXPORT_SYMBOL(rte_mem_iova2virt);
 void *
 rte_mem_iova2virt(rte_iova_t iova)
 {
@@ -403,7 +403,7 @@ rte_mem_iova2virt(rte_iova_t iova)
 	return vi.virt;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2memseg)
+RTE_EXPORT_SYMBOL(rte_mem_virt2memseg);
 struct rte_memseg *
 rte_mem_virt2memseg(const void *addr, const struct rte_memseg_list *msl)
 {
@@ -425,7 +425,7 @@ physmem_size(const struct rte_memseg_list *msl, void *arg)
 }
 
 /* get the total size of memory */
-RTE_EXPORT_SYMBOL(rte_eal_get_physmem_size)
+RTE_EXPORT_SYMBOL(rte_eal_get_physmem_size);
 uint64_t
 rte_eal_get_physmem_size(void)
 {
@@ -474,7 +474,7 @@ dump_memseg(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
  * Defining here because declared in rte_memory.h, but the actual implementation
  * is in eal_common_memalloc.c, like all other memalloc internals.
  */
-RTE_EXPORT_SYMBOL(rte_mem_event_callback_register)
+RTE_EXPORT_SYMBOL(rte_mem_event_callback_register);
 int
 rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
 		void *arg)
@@ -491,7 +491,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
 	return eal_memalloc_mem_event_callback_register(name, clb, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_event_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_mem_event_callback_unregister);
 int
 rte_mem_event_callback_unregister(const char *name, void *arg)
 {
@@ -507,7 +507,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg)
 	return eal_memalloc_mem_event_callback_unregister(name, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_register)
+RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_register);
 int
 rte_mem_alloc_validator_register(const char *name,
 		rte_mem_alloc_validator_t clb, int socket_id, size_t limit)
@@ -525,7 +525,7 @@ rte_mem_alloc_validator_register(const char *name,
 			limit);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_unregister)
+RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_unregister);
 int
 rte_mem_alloc_validator_unregister(const char *name, int socket_id)
 {
@@ -542,7 +542,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id)
 }
 
 /* Dump the physical memory layout on console */
-RTE_EXPORT_SYMBOL(rte_dump_physmem_layout)
+RTE_EXPORT_SYMBOL(rte_dump_physmem_layout);
 void
 rte_dump_physmem_layout(FILE *f)
 {
@@ -614,14 +614,14 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask)
+RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask);
 int
 rte_mem_check_dma_mask(uint8_t maskbits)
 {
 	return check_dma_mask(maskbits, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_mem_check_dma_mask_thread_unsafe);
 int
 rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
 {
@@ -635,7 +635,7 @@ rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
  * initialization. PMDs should use rte_mem_check_dma_mask if addressing
  * limitations by the device.
  */
-RTE_EXPORT_SYMBOL(rte_mem_set_dma_mask)
+RTE_EXPORT_SYMBOL(rte_mem_set_dma_mask);
 void
 rte_mem_set_dma_mask(uint8_t maskbits)
 {
@@ -646,14 +646,14 @@ rte_mem_set_dma_mask(uint8_t maskbits)
 }
 
 /* return the number of memory channels */
-RTE_EXPORT_SYMBOL(rte_memory_get_nchannel)
+RTE_EXPORT_SYMBOL(rte_memory_get_nchannel);
 unsigned rte_memory_get_nchannel(void)
 {
 	return rte_eal_get_configuration()->mem_config->nchannel;
 }
 
 /* return the number of memory rank */
-RTE_EXPORT_SYMBOL(rte_memory_get_nrank)
+RTE_EXPORT_SYMBOL(rte_memory_get_nrank);
 unsigned rte_memory_get_nrank(void)
 {
 	return rte_eal_get_configuration()->mem_config->nrank;
@@ -677,7 +677,7 @@ rte_eal_memdevice_init(void)
 }
 
 /* Lock page in physical memory and prevent from swapping. */
-RTE_EXPORT_SYMBOL(rte_mem_lock_page)
+RTE_EXPORT_SYMBOL(rte_mem_lock_page);
 int
 rte_mem_lock_page(const void *virt)
 {
@@ -687,7 +687,7 @@ rte_mem_lock_page(const void *virt)
 	return rte_mem_lock((void *)aligned, page_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_contig_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_contig_walk_thread_unsafe);
 int
 rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
 {
@@ -727,7 +727,7 @@ rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_contig_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_contig_walk);
 int
 rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
 {
@@ -741,7 +741,7 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_walk_thread_unsafe);
 int
 rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
 {
@@ -770,7 +770,7 @@ rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_walk);
 int
 rte_memseg_walk(rte_memseg_walk_t func, void *arg)
 {
@@ -784,7 +784,7 @@ rte_memseg_walk(rte_memseg_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_list_walk_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_list_walk_thread_unsafe);
 int
 rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
 {
@@ -804,7 +804,7 @@ rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_list_walk)
+RTE_EXPORT_SYMBOL(rte_memseg_list_walk);
 int
 rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
 {
@@ -818,7 +818,7 @@ rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_thread_unsafe);
 int
 rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
 {
@@ -861,7 +861,7 @@ rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd);
 int
 rte_memseg_get_fd(const struct rte_memseg *ms)
 {
@@ -874,7 +874,7 @@ rte_memseg_get_fd(const struct rte_memseg *ms)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset_thread_unsafe)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset_thread_unsafe);
 int
 rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
 		size_t *offset)
@@ -918,7 +918,7 @@ rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset)
+RTE_EXPORT_SYMBOL(rte_memseg_get_fd_offset);
 int
 rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
 {
@@ -931,7 +931,7 @@ rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_register)
+RTE_EXPORT_SYMBOL(rte_extmem_register);
 int
 rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
 		unsigned int n_pages, size_t page_sz)
@@ -981,7 +981,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_unregister)
+RTE_EXPORT_SYMBOL(rte_extmem_unregister);
 int
 rte_extmem_unregister(void *va_addr, size_t len)
 {
@@ -1037,14 +1037,14 @@ sync_memory(void *va_addr, size_t len, bool attach)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_attach)
+RTE_EXPORT_SYMBOL(rte_extmem_attach);
 int
 rte_extmem_attach(void *va_addr, size_t len)
 {
 	return sync_memory(va_addr, len, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_extmem_detach)
+RTE_EXPORT_SYMBOL(rte_extmem_detach);
 int
 rte_extmem_detach(void *va_addr, size_t len)
 {
@@ -1702,7 +1702,7 @@ RTE_INIT(memory_telemetry)
 
 #endif /* telemetry !RTE_EXEC_ENV_WINDOWS */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_memzero_explicit, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_memzero_explicit, 25.07);
 void
 rte_memzero_explicit(void *dst, size_t sz)
 {
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index db43af13a8..77ab3a61cc 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -26,7 +26,7 @@
 /* Default count used until rte_memzone_max_set() is called */
 #define DEFAULT_MAX_MEMZONE_COUNT 2560
 
-RTE_EXPORT_SYMBOL(rte_memzone_max_set)
+RTE_EXPORT_SYMBOL(rte_memzone_max_set);
 int
 rte_memzone_max_set(size_t max)
 {
@@ -48,7 +48,7 @@ rte_memzone_max_set(size_t max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_memzone_max_get)
+RTE_EXPORT_SYMBOL(rte_memzone_max_get);
 size_t
 rte_memzone_max_get(void)
 {
@@ -266,7 +266,7 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
  * specified alignment and boundary). If the allocation cannot be done,
  * return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve_bounded)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve_bounded);
 const struct rte_memzone *
 rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
 			    unsigned flags, unsigned align, unsigned bound)
@@ -279,7 +279,7 @@ rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
  * Return a pointer to a correctly filled memzone descriptor (with a
  * specified alignment). If the allocation cannot be done, return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve_aligned)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve_aligned);
 const struct rte_memzone *
 rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
 			    unsigned flags, unsigned align)
@@ -292,7 +292,7 @@ rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
  * Return a pointer to a correctly filled memzone descriptor. If the
  * allocation cannot be done, return NULL.
  */
-RTE_EXPORT_SYMBOL(rte_memzone_reserve)
+RTE_EXPORT_SYMBOL(rte_memzone_reserve);
 const struct rte_memzone *
 rte_memzone_reserve(const char *name, size_t len, int socket_id,
 		    unsigned flags)
@@ -301,7 +301,7 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,
 					       flags, RTE_CACHE_LINE_SIZE, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_memzone_free)
+RTE_EXPORT_SYMBOL(rte_memzone_free);
 int
 rte_memzone_free(const struct rte_memzone *mz)
 {
@@ -348,7 +348,7 @@ rte_memzone_free(const struct rte_memzone *mz)
 /*
  * Lookup for the memzone identified by the given name
  */
-RTE_EXPORT_SYMBOL(rte_memzone_lookup)
+RTE_EXPORT_SYMBOL(rte_memzone_lookup);
 const struct rte_memzone *
 rte_memzone_lookup(const char *name)
 {
@@ -425,7 +425,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg)
 }
 
 /* Dump all reserved memory zones on console */
-RTE_EXPORT_SYMBOL(rte_memzone_dump)
+RTE_EXPORT_SYMBOL(rte_memzone_dump);
 void
 rte_memzone_dump(FILE *f)
 {
@@ -467,7 +467,7 @@ rte_eal_memzone_init(void)
 }
 
 /* Walk all reserved memory zones */
-RTE_EXPORT_SYMBOL(rte_memzone_walk)
+RTE_EXPORT_SYMBOL(rte_memzone_walk);
 void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *),
 		      void *arg)
 {
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 3169dd069f..bd4d03db30 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -167,7 +167,7 @@ eal_get_application_usage_hook(void)
 }
 
 /* Set a per-application usage message */
-RTE_EXPORT_SYMBOL(rte_set_application_usage_hook)
+RTE_EXPORT_SYMBOL(rte_set_application_usage_hook);
 rte_usage_hook_t
 rte_set_application_usage_hook(rte_usage_hook_t usage_func)
 {
@@ -767,7 +767,7 @@ check_core_list(int *lcores, unsigned int count)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_parse_coremask)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_parse_coremask);
 int
 rte_eal_parse_coremask(const char *coremask, int *cores)
 {
@@ -2080,7 +2080,7 @@ eal_check_common_options(struct internal_config *internal_cfg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth)
+RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth);
 uint16_t
 rte_vect_get_max_simd_bitwidth(void)
 {
@@ -2089,7 +2089,7 @@ rte_vect_get_max_simd_bitwidth(void)
 	return internal_conf->max_simd_bitwidth.bitwidth;
 }
 
-RTE_EXPORT_SYMBOL(rte_vect_set_max_simd_bitwidth)
+RTE_EXPORT_SYMBOL(rte_vect_set_max_simd_bitwidth);
 int
 rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)
 {
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 0dea787e38..edc1b8bfe9 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -143,7 +143,7 @@ create_socket_path(const char *name, char *buf, int len)
 		strlcpy(buf, prefix, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_primary_proc_alive)
+RTE_EXPORT_SYMBOL(rte_eal_primary_proc_alive);
 int
 rte_eal_primary_proc_alive(const char *config_file_path)
 {
@@ -199,7 +199,7 @@ validate_action_name(const char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_register)
+RTE_EXPORT_SYMBOL(rte_mp_action_register);
 int
 rte_mp_action_register(const char *name, rte_mp_t action)
 {
@@ -236,7 +236,7 @@ rte_mp_action_register(const char *name, rte_mp_t action)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_unregister)
+RTE_EXPORT_SYMBOL(rte_mp_action_unregister);
 void
 rte_mp_action_unregister(const char *name)
 {
@@ -840,7 +840,7 @@ check_input(const struct rte_mp_msg *msg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_sendmsg)
+RTE_EXPORT_SYMBOL(rte_mp_sendmsg);
 int
 rte_mp_sendmsg(struct rte_mp_msg *msg)
 {
@@ -994,7 +994,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_sync)
+RTE_EXPORT_SYMBOL(rte_mp_request_sync);
 int
 rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 		const struct timespec *ts)
@@ -1092,7 +1092,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_async)
+RTE_EXPORT_SYMBOL(rte_mp_request_async);
 int
 rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 		rte_mp_async_reply_t clb)
@@ -1245,7 +1245,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_reply)
+RTE_EXPORT_SYMBOL(rte_mp_reply);
 int
 rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
 {
@@ -1298,7 +1298,7 @@ set_mp_status(enum mp_status status)
 	return rte_atomic_load_explicit(&mcfg->mp_status, rte_memory_order_relaxed) == desired;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_disable)
+RTE_EXPORT_SYMBOL(rte_mp_disable);
 bool
 rte_mp_disable(void)
 {
diff --git a/lib/eal/common/eal_common_string_fns.c b/lib/eal/common/eal_common_string_fns.c
index fa87831c3a..0b4f814951 100644
--- a/lib/eal/common/eal_common_string_fns.c
+++ b/lib/eal/common/eal_common_string_fns.c
@@ -13,7 +13,7 @@
 #include <rte_errno.h>
 
 /* split string into tokens */
-RTE_EXPORT_SYMBOL(rte_strsplit)
+RTE_EXPORT_SYMBOL(rte_strsplit);
 int
 rte_strsplit(char *string, int stringlen,
 	     char **tokens, int maxtokens, char delim)
@@ -48,7 +48,7 @@ rte_strsplit(char *string, int stringlen,
  * Return negative value and NUL-terminate if dst is too short,
  * Otherwise return number of bytes copied.
  */
-RTE_EXPORT_SYMBOL(rte_strscpy)
+RTE_EXPORT_SYMBOL(rte_strscpy);
 ssize_t
 rte_strscpy(char *dst, const char *src, size_t dsize)
 {
@@ -71,7 +71,7 @@ rte_strscpy(char *dst, const char *src, size_t dsize)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_str_to_size)
+RTE_EXPORT_SYMBOL(rte_str_to_size);
 uint64_t
 rte_str_to_size(const char *str)
 {
@@ -110,7 +110,7 @@ rte_str_to_size(const char *str)
 	return size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_size_to_str, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_size_to_str, 25.07);
 char *
 rte_size_to_str(char *buf, int buf_size, uint64_t count, bool use_iec, const char *unit)
 {
diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c
index 47080d75ac..9f8adaf97f 100644
--- a/lib/eal/common/eal_common_tailqs.c
+++ b/lib/eal/common/eal_common_tailqs.c
@@ -23,7 +23,7 @@ static struct rte_tailq_elem_head rte_tailq_elem_head =
 /* number of tailqs registered, -1 before call to rte_eal_tailqs_init */
 static int rte_tailqs_count = -1;
 
-RTE_EXPORT_SYMBOL(rte_eal_tailq_lookup)
+RTE_EXPORT_SYMBOL(rte_eal_tailq_lookup);
 struct rte_tailq_head *
 rte_eal_tailq_lookup(const char *name)
 {
@@ -42,7 +42,7 @@ rte_eal_tailq_lookup(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_dump_tailq)
+RTE_EXPORT_SYMBOL(rte_dump_tailq);
 void
 rte_dump_tailq(FILE *f)
 {
@@ -108,7 +108,7 @@ rte_eal_tailq_update(struct rte_tailq_elem *t)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_tailq_register)
+RTE_EXPORT_SYMBOL(rte_eal_tailq_register);
 int
 rte_eal_tailq_register(struct rte_tailq_elem *t)
 {
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index c0622c5c23..3e37ab1742 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -23,15 +23,15 @@
 #include "eal_thread.h"
 #include "eal_trace.h"
 
-RTE_EXPORT_SYMBOL(per_lcore__lcore_id)
+RTE_EXPORT_SYMBOL(per_lcore__lcore_id);
 RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
-RTE_EXPORT_SYMBOL(per_lcore__thread_id)
+RTE_EXPORT_SYMBOL(per_lcore__thread_id);
 RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 static RTE_DEFINE_PER_LCORE(unsigned int, _numa_id) =
 	(unsigned int)SOCKET_ID_ANY;
 static RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
 
-RTE_EXPORT_SYMBOL(rte_socket_id)
+RTE_EXPORT_SYMBOL(rte_socket_id);
 unsigned rte_socket_id(void)
 {
 	return RTE_PER_LCORE(_numa_id);
@@ -86,7 +86,7 @@ thread_update_affinity(rte_cpuset_t *cpusetp)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity);
 int
 rte_thread_set_affinity(rte_cpuset_t *cpusetp)
 {
@@ -99,7 +99,7 @@ rte_thread_set_affinity(rte_cpuset_t *cpusetp)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity);
 void
 rte_thread_get_affinity(rte_cpuset_t *cpusetp)
 {
@@ -288,7 +288,7 @@ static uint32_t control_thread_start(void *arg)
 	return start_routine(start_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_create_control)
+RTE_EXPORT_SYMBOL(rte_thread_create_control);
 int
 rte_thread_create_control(rte_thread_t *thread, const char *name,
 		rte_thread_func start_routine, void *arg)
@@ -348,7 +348,7 @@ add_internal_prefix(char *prefixed_name, const char *name, size_t size)
 	strlcpy(prefixed_name + prefixlen, name, size - prefixlen);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_create_internal_control)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_create_internal_control);
 int
 rte_thread_create_internal_control(rte_thread_t *id, const char *name,
 		rte_thread_func func, void *arg)
@@ -359,7 +359,7 @@ rte_thread_create_internal_control(rte_thread_t *id, const char *name,
 	return rte_thread_create_control(id, prefixed_name, func, arg);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_set_prefixed_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thread_set_prefixed_name);
 void
 rte_thread_set_prefixed_name(rte_thread_t id, const char *name)
 {
@@ -369,7 +369,7 @@ rte_thread_set_prefixed_name(rte_thread_t id, const char *name)
 	rte_thread_set_name(id, prefixed_name);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_register)
+RTE_EXPORT_SYMBOL(rte_thread_register);
 int
 rte_thread_register(void)
 {
@@ -402,7 +402,7 @@ rte_thread_register(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_unregister)
+RTE_EXPORT_SYMBOL(rte_thread_unregister);
 void
 rte_thread_unregister(void)
 {
@@ -416,7 +416,7 @@ rte_thread_unregister(void)
 			lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_init)
+RTE_EXPORT_SYMBOL(rte_thread_attr_init);
 int
 rte_thread_attr_init(rte_thread_attr_t *attr)
 {
@@ -429,7 +429,7 @@ rte_thread_attr_init(rte_thread_attr_t *attr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_attr_set_priority);
 int
 rte_thread_attr_set_priority(rte_thread_attr_t *thread_attr,
 		enum rte_thread_priority priority)
@@ -442,7 +442,7 @@ rte_thread_attr_set_priority(rte_thread_attr_t *thread_attr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_set_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_attr_set_affinity);
 int
 rte_thread_attr_set_affinity(rte_thread_attr_t *thread_attr,
 		rte_cpuset_t *cpuset)
@@ -458,7 +458,7 @@ rte_thread_attr_set_affinity(rte_thread_attr_t *thread_attr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_attr_get_affinity)
+RTE_EXPORT_SYMBOL(rte_thread_attr_get_affinity);
 int
 rte_thread_attr_get_affinity(rte_thread_attr_t *thread_attr,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c
index bbf8b8b11b..121b850c27 100644
--- a/lib/eal/common/eal_common_timer.c
+++ b/lib/eal/common/eal_common_timer.c
@@ -19,10 +19,10 @@
 static uint64_t eal_tsc_resolution_hz;
 
 /* Pointer to user delay function */
-RTE_EXPORT_SYMBOL(rte_delay_us)
+RTE_EXPORT_SYMBOL(rte_delay_us);
 void (*rte_delay_us)(unsigned int) = NULL;
 
-RTE_EXPORT_SYMBOL(rte_delay_us_block)
+RTE_EXPORT_SYMBOL(rte_delay_us_block);
 void
 rte_delay_us_block(unsigned int us)
 {
@@ -32,7 +32,7 @@ rte_delay_us_block(unsigned int us)
 		rte_pause();
 }
 
-RTE_EXPORT_SYMBOL(rte_get_tsc_hz)
+RTE_EXPORT_SYMBOL(rte_get_tsc_hz);
 uint64_t
 rte_get_tsc_hz(void)
 {
@@ -79,7 +79,7 @@ set_tsc_freq(void)
 	mcfg->tsc_hz = freq;
 }
 
-RTE_EXPORT_SYMBOL(rte_delay_us_callback_register)
+RTE_EXPORT_SYMBOL(rte_delay_us_callback_register);
 void rte_delay_us_callback_register(void (*userfunc)(unsigned int))
 {
 	rte_delay_us = userfunc;
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index be1f78a68d..d5e8aaedfa 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -17,9 +17,9 @@
 #include <eal_export.h>
 #include "eal_trace.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05);
 RTE_DEFINE_PER_LCORE(volatile int, trace_point_sz);
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05);
 RTE_DEFINE_PER_LCORE(void *, trace_mem);
 static RTE_DEFINE_PER_LCORE(char *, ctf_field);
 
@@ -97,7 +97,7 @@ eal_trace_fini(void)
 	eal_trace_args_free();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_is_enabled, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_is_enabled, 20.05);
 bool
 rte_trace_is_enabled(void)
 {
@@ -115,7 +115,7 @@ trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
 			rte_memory_order_release);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_set, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_set, 20.05);
 void
 rte_trace_mode_set(enum rte_trace_mode mode)
 {
@@ -127,7 +127,7 @@ rte_trace_mode_set(enum rte_trace_mode mode)
 	trace.mode = mode;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_get, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_mode_get, 20.05);
 enum
 rte_trace_mode rte_trace_mode_get(void)
 {
@@ -140,7 +140,7 @@ trace_point_is_invalid(rte_trace_point_t *t)
 	return (t == NULL) || (trace_id_get(t) >= trace.nb_trace_points);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_is_enabled, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_is_enabled, 20.05);
 bool
 rte_trace_point_is_enabled(rte_trace_point_t *t)
 {
@@ -153,7 +153,7 @@ rte_trace_point_is_enabled(rte_trace_point_t *t)
 	return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_enable, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_enable, 20.05);
 int
 rte_trace_point_enable(rte_trace_point_t *t)
 {
@@ -169,7 +169,7 @@ rte_trace_point_enable(rte_trace_point_t *t)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_disable, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_disable, 20.05);
 int
 rte_trace_point_disable(rte_trace_point_t *t)
 {
@@ -185,7 +185,7 @@ rte_trace_point_disable(rte_trace_point_t *t)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_pattern, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_pattern, 20.05);
 int
 rte_trace_pattern(const char *pattern, bool enable)
 {
@@ -210,7 +210,7 @@ rte_trace_pattern(const char *pattern, bool enable)
 	return rc | found;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_regexp, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_regexp, 20.05);
 int
 rte_trace_regexp(const char *regex, bool enable)
 {
@@ -240,7 +240,7 @@ rte_trace_regexp(const char *regex, bool enable)
 	return rc | found;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_lookup, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_point_lookup, 20.05);
 rte_trace_point_t *
 rte_trace_point_lookup(const char *name)
 {
@@ -291,7 +291,7 @@ trace_lcore_mem_dump(FILE *f)
 	rte_spinlock_unlock(&trace->lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_dump, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_dump, 20.05);
 void
 rte_trace_dump(FILE *f)
 {
@@ -327,7 +327,7 @@ thread_get_name(rte_thread_t id, char *name, size_t len)
 	RTE_SET_USED(len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05);
 void
 __rte_trace_mem_per_thread_alloc(void)
 {
@@ -449,7 +449,7 @@ trace_mem_free(void)
 	rte_spinlock_unlock(&trace->lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05);
 void
 __rte_trace_point_emit_field(size_t sz, const char *in, const char *datatype)
 {
@@ -476,7 +476,7 @@ __rte_trace_point_emit_field(size_t sz, const char *in, const char *datatype)
 	RTE_PER_LCORE(ctf_field) = field;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05);
 int
 __rte_trace_point_register(rte_trace_point_t *handle, const char *name,
 		void (*register_fn)(void))
diff --git a/lib/eal/common/eal_common_trace_ctf.c b/lib/eal/common/eal_common_trace_ctf.c
index aa60a705d1..72177e097c 100644
--- a/lib/eal/common/eal_common_trace_ctf.c
+++ b/lib/eal/common/eal_common_trace_ctf.c
@@ -357,7 +357,7 @@ meta_fixup(struct trace *trace, char *meta)
 	meta_fix_freq_offset(trace, meta);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_metadata_dump, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_metadata_dump, 20.05);
 int
 rte_trace_metadata_dump(FILE *f)
 {
diff --git a/lib/eal/common/eal_common_trace_points.c b/lib/eal/common/eal_common_trace_points.c
index 0903f3c639..790df83098 100644
--- a/lib/eal/common/eal_common_trace_points.c
+++ b/lib/eal/common/eal_common_trace_points.c
@@ -9,58 +9,58 @@
 #include <eal_export.h>
 #include <eal_trace_internal.h>
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_void, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_void, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_void,
 	lib.eal.generic.void)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u64, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u64, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u64,
 	lib.eal.generic.u64)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u32, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u32, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u32,
 	lib.eal.generic.u32)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u16, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u16, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u16,
 	lib.eal.generic.u16)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u8, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_u8, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_u8,
 	lib.eal.generic.u8)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i64, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i64, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i64,
 	lib.eal.generic.i64)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i32, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i32, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i32,
 	lib.eal.generic.i32)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i16, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i16, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i16,
 	lib.eal.generic.i16)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i8, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_i8, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_i8,
 	lib.eal.generic.i8)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_int, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_int, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_int,
 	lib.eal.generic.int)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_long, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_long, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_long,
 	lib.eal.generic.long)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_float, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_float, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_float,
 	lib.eal.generic.float)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_double, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_double, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_double,
 	lib.eal.generic.double)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_ptr, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_ptr, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_ptr,
 	lib.eal.generic.ptr)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_str, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_str, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_str,
 	lib.eal.generic.string)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_size_t, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_size_t, 20.11);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_size_t,
 	lib.eal.generic.size_t)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_func, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_func, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_func,
 	lib.eal.generic.func)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_blob, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eal_trace_generic_blob, 23.03);
 RTE_TRACE_POINT_REGISTER(rte_eal_trace_generic_blob,
 	lib.eal.generic.blob)
 
diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c
index e1996433b7..bde8111af2 100644
--- a/lib/eal/common/eal_common_trace_utils.c
+++ b/lib/eal/common/eal_common_trace_utils.c
@@ -410,7 +410,7 @@ trace_mem_save(struct trace *trace, struct __rte_trace_header *hdr,
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_save, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_save, 20.05);
 int
 rte_trace_save(void)
 {
diff --git a/lib/eal/common/eal_common_uuid.c b/lib/eal/common/eal_common_uuid.c
index 0e0924d08d..941ae246b8 100644
--- a/lib/eal/common/eal_common_uuid.c
+++ b/lib/eal/common/eal_common_uuid.c
@@ -78,7 +78,7 @@ static void uuid_unpack(const rte_uuid_t in, struct uuid *uu)
 	memcpy(uu->node, ptr, 6);
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_is_null)
+RTE_EXPORT_SYMBOL(rte_uuid_is_null);
 bool rte_uuid_is_null(const rte_uuid_t uu)
 {
 	const uint8_t *cp = uu;
@@ -93,7 +93,7 @@ bool rte_uuid_is_null(const rte_uuid_t uu)
 /*
  * rte_uuid_compare() - compare two UUIDs.
  */
-RTE_EXPORT_SYMBOL(rte_uuid_compare)
+RTE_EXPORT_SYMBOL(rte_uuid_compare);
 int rte_uuid_compare(const rte_uuid_t uu1, const rte_uuid_t uu2)
 {
 	struct uuid	uuid1, uuid2;
@@ -113,7 +113,7 @@ int rte_uuid_compare(const rte_uuid_t uu1, const rte_uuid_t uu2)
 	return memcmp(uuid1.node, uuid2.node, 6);
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_parse)
+RTE_EXPORT_SYMBOL(rte_uuid_parse);
 int rte_uuid_parse(const char *in, rte_uuid_t uu)
 {
 	struct uuid	uuid;
@@ -156,7 +156,7 @@ int rte_uuid_parse(const char *in, rte_uuid_t uu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_uuid_unparse)
+RTE_EXPORT_SYMBOL(rte_uuid_unparse);
 void rte_uuid_unparse(const rte_uuid_t uu, char *out, size_t len)
 {
 	struct uuid uuid;
diff --git a/lib/eal/common/rte_bitset.c b/lib/eal/common/rte_bitset.c
index 78001b1ee8..4fe0a1b61a 100644
--- a/lib/eal/common/rte_bitset.c
+++ b/lib/eal/common/rte_bitset.c
@@ -10,7 +10,7 @@
 #include <eal_export.h>
 #include "rte_bitset.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bitset_to_str, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_bitset_to_str, 24.11);
 ssize_t
 rte_bitset_to_str(const uint64_t *bitset, size_t num_bits, char *buf, size_t capacity)
 {
diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c
index 08a4d595da..f6f2a7a93f 100644
--- a/lib/eal/common/rte_keepalive.c
+++ b/lib/eal/common/rte_keepalive.c
@@ -64,7 +64,7 @@ print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core)
 	      );
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_dispatch_pings)
+RTE_EXPORT_SYMBOL(rte_keepalive_dispatch_pings);
 void
 rte_keepalive_dispatch_pings(__rte_unused void *ptr_timer,
 	void *ptr_data)
@@ -119,7 +119,7 @@ rte_keepalive_dispatch_pings(__rte_unused void *ptr_timer,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_create)
+RTE_EXPORT_SYMBOL(rte_keepalive_create);
 struct rte_keepalive *
 rte_keepalive_create(rte_keepalive_failure_callback_t callback,
 	void *data)
@@ -138,7 +138,7 @@ rte_keepalive_create(rte_keepalive_failure_callback_t callback,
 	return keepcfg;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_register_relay_callback)
+RTE_EXPORT_SYMBOL(rte_keepalive_register_relay_callback);
 void rte_keepalive_register_relay_callback(struct rte_keepalive *keepcfg,
 	rte_keepalive_relay_callback_t callback,
 	void *data)
@@ -147,7 +147,7 @@ void rte_keepalive_register_relay_callback(struct rte_keepalive *keepcfg,
 	keepcfg->relay_callback_data = data;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_register_core)
+RTE_EXPORT_SYMBOL(rte_keepalive_register_core);
 void
 rte_keepalive_register_core(struct rte_keepalive *keepcfg, const int id_core)
 {
@@ -157,14 +157,14 @@ rte_keepalive_register_core(struct rte_keepalive *keepcfg, const int id_core)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_mark_alive)
+RTE_EXPORT_SYMBOL(rte_keepalive_mark_alive);
 void
 rte_keepalive_mark_alive(struct rte_keepalive *keepcfg)
 {
 	keepcfg->live_data[rte_lcore_id()].core_state = RTE_KA_STATE_ALIVE;
 }
 
-RTE_EXPORT_SYMBOL(rte_keepalive_mark_sleep)
+RTE_EXPORT_SYMBOL(rte_keepalive_mark_sleep);
 void
 rte_keepalive_mark_sleep(struct rte_keepalive *keepcfg)
 {
diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c
index 3a86c19490..297604f6b6 100644
--- a/lib/eal/common/rte_malloc.c
+++ b/lib/eal/common/rte_malloc.c
@@ -49,14 +49,14 @@ mem_free(void *addr, const bool trace_ena, bool zero)
 		EAL_LOG(ERR, "Error: Invalid memory");
 }
 
-RTE_EXPORT_SYMBOL(rte_free)
+RTE_EXPORT_SYMBOL(rte_free);
 void
 rte_free(void *addr)
 {
 	mem_free(addr, true, false);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_free_sensitive, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_free_sensitive, 25.07);
 void
 rte_free_sensitive(void *addr)
 {
@@ -99,7 +99,7 @@ malloc_socket(const char *type, size_t size, unsigned int align,
 /*
  * Allocate memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_malloc_socket)
+RTE_EXPORT_SYMBOL(rte_malloc_socket);
 void *
 rte_malloc_socket(const char *type, size_t size, unsigned int align,
 		int socket_arg)
@@ -116,7 +116,7 @@ eal_malloc_no_trace(const char *type, size_t size, unsigned int align)
 /*
  * Allocate memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_malloc)
+RTE_EXPORT_SYMBOL(rte_malloc);
 void *
 rte_malloc(const char *type, size_t size, unsigned align)
 {
@@ -126,7 +126,7 @@ rte_malloc(const char *type, size_t size, unsigned align)
 /*
  * Allocate zero'd memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_zmalloc_socket)
+RTE_EXPORT_SYMBOL(rte_zmalloc_socket);
 void *
 rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
 {
@@ -156,7 +156,7 @@ rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
 /*
  * Allocate zero'd memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_zmalloc)
+RTE_EXPORT_SYMBOL(rte_zmalloc);
 void *
 rte_zmalloc(const char *type, size_t size, unsigned align)
 {
@@ -166,7 +166,7 @@ rte_zmalloc(const char *type, size_t size, unsigned align)
 /*
  * Allocate zero'd memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_calloc_socket)
+RTE_EXPORT_SYMBOL(rte_calloc_socket);
 void *
 rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket)
 {
@@ -176,7 +176,7 @@ rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int
 /*
  * Allocate zero'd memory on default heap.
  */
-RTE_EXPORT_SYMBOL(rte_calloc)
+RTE_EXPORT_SYMBOL(rte_calloc);
 void *
 rte_calloc(const char *type, size_t num, size_t size, unsigned align)
 {
@@ -186,7 +186,7 @@ rte_calloc(const char *type, size_t num, size_t size, unsigned align)
 /*
  * Resize allocated memory on specified heap.
  */
-RTE_EXPORT_SYMBOL(rte_realloc_socket)
+RTE_EXPORT_SYMBOL(rte_realloc_socket);
 void *
 rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
 {
@@ -238,14 +238,14 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
 /*
  * Resize allocated memory.
  */
-RTE_EXPORT_SYMBOL(rte_realloc)
+RTE_EXPORT_SYMBOL(rte_realloc);
 void *
 rte_realloc(void *ptr, size_t size, unsigned int align)
 {
 	return rte_realloc_socket(ptr, size, align, SOCKET_ID_ANY);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_validate)
+RTE_EXPORT_SYMBOL(rte_malloc_validate);
 int
 rte_malloc_validate(const void *ptr, size_t *size)
 {
@@ -260,7 +260,7 @@ rte_malloc_validate(const void *ptr, size_t *size)
 /*
  * Function to retrieve data for heap on given socket
  */
-RTE_EXPORT_SYMBOL(rte_malloc_get_socket_stats)
+RTE_EXPORT_SYMBOL(rte_malloc_get_socket_stats);
 int
 rte_malloc_get_socket_stats(int socket,
 		struct rte_malloc_socket_stats *socket_stats)
@@ -279,7 +279,7 @@ rte_malloc_get_socket_stats(int socket,
 /*
  * Function to dump contents of all heaps
  */
-RTE_EXPORT_SYMBOL(rte_malloc_dump_heaps)
+RTE_EXPORT_SYMBOL(rte_malloc_dump_heaps);
 void
 rte_malloc_dump_heaps(FILE *f)
 {
@@ -292,7 +292,7 @@ rte_malloc_dump_heaps(FILE *f)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_get_socket)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_get_socket);
 int
 rte_malloc_heap_get_socket(const char *name)
 {
@@ -329,7 +329,7 @@ rte_malloc_heap_get_socket(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_socket_is_external)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_socket_is_external);
 int
 rte_malloc_heap_socket_is_external(int socket_id)
 {
@@ -358,7 +358,7 @@ rte_malloc_heap_socket_is_external(int socket_id)
 /*
  * Print stats on memory type. If type is NULL, info on all types is printed
  */
-RTE_EXPORT_SYMBOL(rte_malloc_dump_stats)
+RTE_EXPORT_SYMBOL(rte_malloc_dump_stats);
 void
 rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
 {
@@ -388,7 +388,7 @@ rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
 /*
  * Return the IO address of a virtual address obtained through rte_malloc
  */
-RTE_EXPORT_SYMBOL(rte_malloc_virt2iova)
+RTE_EXPORT_SYMBOL(rte_malloc_virt2iova);
 rte_iova_t
 rte_malloc_virt2iova(const void *addr)
 {
@@ -426,7 +426,7 @@ find_named_heap(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_add)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_add);
 int
 rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz)
@@ -482,7 +482,7 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_remove)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_remove);
 int
 rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
 {
@@ -598,21 +598,21 @@ sync_memory(const char *heap_name, void *va_addr, size_t len, bool attach)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_attach)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_attach);
 int
 rte_malloc_heap_memory_attach(const char *heap_name, void *va_addr, size_t len)
 {
 	return sync_memory(heap_name, va_addr, len, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_detach)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_memory_detach);
 int
 rte_malloc_heap_memory_detach(const char *heap_name, void *va_addr, size_t len)
 {
 	return sync_memory(heap_name, va_addr, len, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_create)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_create);
 int
 rte_malloc_heap_create(const char *heap_name)
 {
@@ -664,7 +664,7 @@ rte_malloc_heap_create(const char *heap_name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_malloc_heap_destroy)
+RTE_EXPORT_SYMBOL(rte_malloc_heap_destroy);
 int
 rte_malloc_heap_destroy(const char *heap_name)
 {
diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 576a32a46c..d995113793 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -83,7 +83,7 @@ __rte_srand_lfsr258(uint64_t seed, struct rte_rand_state *state)
 	state->z5 = __rte_rand_lfsr258_gen_seed(&lcg_seed, 8388608UL);
 }
 
-RTE_EXPORT_SYMBOL(rte_srand)
+RTE_EXPORT_SYMBOL(rte_srand);
 void
 rte_srand(uint64_t seed)
 {
@@ -144,7 +144,7 @@ struct rte_rand_state *__rte_rand_get_state(void)
 	return RTE_LCORE_VAR(rand_state);
 }
 
-RTE_EXPORT_SYMBOL(rte_rand)
+RTE_EXPORT_SYMBOL(rte_rand);
 uint64_t
 rte_rand(void)
 {
@@ -155,7 +155,7 @@ rte_rand(void)
 	return __rte_rand_lfsr258(state);
 }
 
-RTE_EXPORT_SYMBOL(rte_rand_max)
+RTE_EXPORT_SYMBOL(rte_rand_max);
 uint64_t
 rte_rand_max(uint64_t upper_bound)
 {
@@ -195,7 +195,7 @@ rte_rand_max(uint64_t upper_bound)
 	return res;
 }
 
-RTE_EXPORT_SYMBOL(rte_drand)
+RTE_EXPORT_SYMBOL(rte_drand);
 double
 rte_drand(void)
 {
diff --git a/lib/eal/common/rte_reciprocal.c b/lib/eal/common/rte_reciprocal.c
index 99c54df141..12b329484c 100644
--- a/lib/eal/common/rte_reciprocal.c
+++ b/lib/eal/common/rte_reciprocal.c
@@ -13,7 +13,7 @@
 
 #include "rte_reciprocal.h"
 
-RTE_EXPORT_SYMBOL(rte_reciprocal_value)
+RTE_EXPORT_SYMBOL(rte_reciprocal_value);
 struct rte_reciprocal rte_reciprocal_value(uint32_t d)
 {
 	struct rte_reciprocal R;
@@ -101,7 +101,7 @@ divide_128_div_64_to_64(uint64_t u1, uint64_t u0, uint64_t v, uint64_t *r)
 	return q1*b + q0;
 }
 
-RTE_EXPORT_SYMBOL(rte_reciprocal_value_u64)
+RTE_EXPORT_SYMBOL(rte_reciprocal_value_u64);
 struct rte_reciprocal_u64
 rte_reciprocal_value_u64(uint64_t d)
 {
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d2ac9d3f14..83cf5d3e12 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -121,7 +121,7 @@ rte_service_init(void)
 	return -ENOMEM;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_finalize)
+RTE_EXPORT_SYMBOL(rte_service_finalize);
 void
 rte_service_finalize(void)
 {
@@ -176,7 +176,7 @@ service_mt_safe(struct rte_service_spec_impl *s)
 	return !!(s->spec.capabilities & RTE_SERVICE_CAP_MT_SAFE);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_set_stats_enable)
+RTE_EXPORT_SYMBOL(rte_service_set_stats_enable);
 int32_t
 rte_service_set_stats_enable(uint32_t id, int32_t enabled)
 {
@@ -191,7 +191,7 @@ rte_service_set_stats_enable(uint32_t id, int32_t enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_set_runstate_mapped_check)
+RTE_EXPORT_SYMBOL(rte_service_set_runstate_mapped_check);
 int32_t
 rte_service_set_runstate_mapped_check(uint32_t id, int32_t enabled)
 {
@@ -206,14 +206,14 @@ rte_service_set_runstate_mapped_check(uint32_t id, int32_t enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_count)
+RTE_EXPORT_SYMBOL(rte_service_get_count);
 uint32_t
 rte_service_get_count(void)
 {
 	return rte_service_count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_by_name)
+RTE_EXPORT_SYMBOL(rte_service_get_by_name);
 int32_t
 rte_service_get_by_name(const char *name, uint32_t *service_id)
 {
@@ -232,7 +232,7 @@ rte_service_get_by_name(const char *name, uint32_t *service_id)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_get_name)
+RTE_EXPORT_SYMBOL(rte_service_get_name);
 const char *
 rte_service_get_name(uint32_t id)
 {
@@ -241,7 +241,7 @@ rte_service_get_name(uint32_t id)
 	return s->spec.name;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_probe_capability)
+RTE_EXPORT_SYMBOL(rte_service_probe_capability);
 int32_t
 rte_service_probe_capability(uint32_t id, uint32_t capability)
 {
@@ -250,7 +250,7 @@ rte_service_probe_capability(uint32_t id, uint32_t capability)
 	return !!(s->spec.capabilities & capability);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_register)
+RTE_EXPORT_SYMBOL(rte_service_component_register);
 int32_t
 rte_service_component_register(const struct rte_service_spec *spec,
 			       uint32_t *id_ptr)
@@ -285,7 +285,7 @@ rte_service_component_register(const struct rte_service_spec *spec,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_unregister)
+RTE_EXPORT_SYMBOL(rte_service_component_unregister);
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
@@ -307,7 +307,7 @@ rte_service_component_unregister(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_component_runstate_set)
+RTE_EXPORT_SYMBOL(rte_service_component_runstate_set);
 int32_t
 rte_service_component_runstate_set(uint32_t id, uint32_t runstate)
 {
@@ -328,7 +328,7 @@ rte_service_component_runstate_set(uint32_t id, uint32_t runstate)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_runstate_set)
+RTE_EXPORT_SYMBOL(rte_service_runstate_set);
 int32_t
 rte_service_runstate_set(uint32_t id, uint32_t runstate)
 {
@@ -350,7 +350,7 @@ rte_service_runstate_set(uint32_t id, uint32_t runstate)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_runstate_get)
+RTE_EXPORT_SYMBOL(rte_service_runstate_get);
 int32_t
 rte_service_runstate_get(uint32_t id)
 {
@@ -461,7 +461,7 @@ service_run(uint32_t i, struct core_state *cs, const uint64_t *mapped_services,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_may_be_active)
+RTE_EXPORT_SYMBOL(rte_service_may_be_active);
 int32_t
 rte_service_may_be_active(uint32_t id)
 {
@@ -483,7 +483,7 @@ rte_service_may_be_active(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_run_iter_on_app_lcore)
+RTE_EXPORT_SYMBOL(rte_service_run_iter_on_app_lcore);
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
@@ -543,7 +543,7 @@ service_runner_func(void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_may_be_active)
+RTE_EXPORT_SYMBOL(rte_service_lcore_may_be_active);
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
@@ -559,7 +559,7 @@ rte_service_lcore_may_be_active(uint32_t lcore)
 			       rte_memory_order_acquire);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_count)
+RTE_EXPORT_SYMBOL(rte_service_lcore_count);
 int32_t
 rte_service_lcore_count(void)
 {
@@ -573,7 +573,7 @@ rte_service_lcore_count(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_list)
+RTE_EXPORT_SYMBOL(rte_service_lcore_list);
 int32_t
 rte_service_lcore_list(uint32_t array[], uint32_t n)
 {
@@ -598,7 +598,7 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_count_services)
+RTE_EXPORT_SYMBOL(rte_service_lcore_count_services);
 int32_t
 rte_service_lcore_count_services(uint32_t lcore)
 {
@@ -612,7 +612,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	return rte_bitset_count_set(cs->mapped_services, RTE_SERVICE_NUM_MAX);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_start_with_defaults)
+RTE_EXPORT_SYMBOL(rte_service_start_with_defaults);
 int32_t
 rte_service_start_with_defaults(void)
 {
@@ -686,7 +686,7 @@ service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_map_lcore_set)
+RTE_EXPORT_SYMBOL(rte_service_map_lcore_set);
 int32_t
 rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
 {
@@ -695,7 +695,7 @@ rte_service_map_lcore_set(uint32_t id, uint32_t lcore, uint32_t enabled)
 	return service_update(id, lcore, &on, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_map_lcore_get)
+RTE_EXPORT_SYMBOL(rte_service_map_lcore_get);
 int32_t
 rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
 {
@@ -723,7 +723,7 @@ set_lcore_state(uint32_t lcore, int32_t state)
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_lcore_reset_all);
 int32_t
 rte_service_lcore_reset_all(void)
 {
@@ -750,7 +750,7 @@ rte_service_lcore_reset_all(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_add)
+RTE_EXPORT_SYMBOL(rte_service_lcore_add);
 int32_t
 rte_service_lcore_add(uint32_t lcore)
 {
@@ -774,7 +774,7 @@ rte_service_lcore_add(uint32_t lcore)
 	return rte_eal_wait_lcore(lcore);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_del)
+RTE_EXPORT_SYMBOL(rte_service_lcore_del);
 int32_t
 rte_service_lcore_del(uint32_t lcore)
 {
@@ -799,7 +799,7 @@ rte_service_lcore_del(uint32_t lcore)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_start)
+RTE_EXPORT_SYMBOL(rte_service_lcore_start);
 int32_t
 rte_service_lcore_start(uint32_t lcore)
 {
@@ -833,7 +833,7 @@ rte_service_lcore_start(uint32_t lcore)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_stop)
+RTE_EXPORT_SYMBOL(rte_service_lcore_stop);
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
@@ -974,7 +974,7 @@ attr_get_service_cycles(uint32_t service_id)
 	return attr_get(service_id, lcore_attr_get_service_cycles);
 }
 
-RTE_EXPORT_SYMBOL(rte_service_attr_get)
+RTE_EXPORT_SYMBOL(rte_service_attr_get);
 int32_t
 rte_service_attr_get(uint32_t id, uint32_t attr_id, uint64_t *attr_value)
 {
@@ -1002,7 +1002,7 @@ rte_service_attr_get(uint32_t id, uint32_t attr_id, uint64_t *attr_value)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_attr_get)
+RTE_EXPORT_SYMBOL(rte_service_lcore_attr_get);
 int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
@@ -1027,7 +1027,7 @@ rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_service_attr_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_attr_reset_all);
 int32_t
 rte_service_attr_reset_all(uint32_t id)
 {
@@ -1046,7 +1046,7 @@ rte_service_attr_reset_all(uint32_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_service_lcore_attr_reset_all)
+RTE_EXPORT_SYMBOL(rte_service_lcore_attr_reset_all);
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
@@ -1100,7 +1100,7 @@ service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 	fprintf(f, "\n");
 }
 
-RTE_EXPORT_SYMBOL(rte_service_dump)
+RTE_EXPORT_SYMBOL(rte_service_dump);
 int32_t
 rte_service_dump(FILE *f, uint32_t id)
 {
diff --git a/lib/eal/common/rte_version.c b/lib/eal/common/rte_version.c
index 627b89d4a8..529aedfa71 100644
--- a/lib/eal/common/rte_version.c
+++ b/lib/eal/common/rte_version.c
@@ -5,31 +5,31 @@
 #include <eal_export.h>
 #include <rte_version.h>
 
-RTE_EXPORT_SYMBOL(rte_version_prefix)
+RTE_EXPORT_SYMBOL(rte_version_prefix);
 const char *
 rte_version_prefix(void) { return RTE_VER_PREFIX; }
 
-RTE_EXPORT_SYMBOL(rte_version_year)
+RTE_EXPORT_SYMBOL(rte_version_year);
 unsigned int
 rte_version_year(void) { return RTE_VER_YEAR; }
 
-RTE_EXPORT_SYMBOL(rte_version_month)
+RTE_EXPORT_SYMBOL(rte_version_month);
 unsigned int
 rte_version_month(void) { return RTE_VER_MONTH; }
 
-RTE_EXPORT_SYMBOL(rte_version_minor)
+RTE_EXPORT_SYMBOL(rte_version_minor);
 unsigned int
 rte_version_minor(void) { return RTE_VER_MINOR; }
 
-RTE_EXPORT_SYMBOL(rte_version_suffix)
+RTE_EXPORT_SYMBOL(rte_version_suffix);
 const char *
 rte_version_suffix(void) { return RTE_VER_SUFFIX; }
 
-RTE_EXPORT_SYMBOL(rte_version_release)
+RTE_EXPORT_SYMBOL(rte_version_release);
 unsigned int
 rte_version_release(void) { return RTE_VER_RELEASE; }
 
-RTE_EXPORT_SYMBOL(rte_version)
+RTE_EXPORT_SYMBOL(rte_version);
 const char *
 rte_version(void)
 {
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index c1ab8d86d2..7da0e2914c 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -74,7 +74,7 @@ static struct flock wr_lock = {
 struct lcore_config lcore_config[RTE_MAX_LCORE];
 
 /* used by rte_rdtsc() */
-RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
+RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map);
 int rte_cycles_vmware_tsc_map;
 
 
@@ -517,7 +517,7 @@ sync_func(__rte_unused void *arg)
 	return 0;
 }
 /* Abstraction for port I/O privilege */
-RTE_EXPORT_SYMBOL(rte_eal_iopl_init)
+RTE_EXPORT_SYMBOL(rte_eal_iopl_init);
 int
 rte_eal_iopl_init(void)
 {
@@ -538,7 +538,7 @@ static void rte_eal_init_alert(const char *msg)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -888,7 +888,7 @@ rte_eal_init(int argc, char **argv)
 	return fctret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -917,7 +917,7 @@ rte_eal_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
+RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev);
 int rte_eal_create_uio_dev(void)
 {
 	const struct internal_config *internal_conf =
@@ -925,20 +925,20 @@ int rte_eal_create_uio_dev(void)
 	return internal_conf->create_uio_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode);
 enum rte_intr_mode
 rte_eal_vfio_intr_mode(void)
 {
 	return RTE_INTR_MODE_NONE;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token);
 void
 rte_eal_vfio_get_vf_token(__rte_unused rte_uuid_t vf_token)
 {
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_setup_device)
+RTE_EXPORT_SYMBOL(rte_vfio_setup_device);
 int rte_vfio_setup_device(__rte_unused const char *sysfs_base,
 		      __rte_unused const char *dev_addr,
 		      __rte_unused int *vfio_dev_fd,
@@ -948,7 +948,7 @@ int rte_vfio_setup_device(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_release_device)
+RTE_EXPORT_SYMBOL(rte_vfio_release_device);
 int rte_vfio_release_device(__rte_unused const char *sysfs_base,
 			__rte_unused const char *dev_addr,
 			__rte_unused int fd)
@@ -957,33 +957,33 @@ int rte_vfio_release_device(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_enable)
+RTE_EXPORT_SYMBOL(rte_vfio_enable);
 int rte_vfio_enable(__rte_unused const char *modname)
 {
 	rte_errno = ENOTSUP;
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_is_enabled);
 int rte_vfio_is_enabled(__rte_unused const char *modname)
 {
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled);
 int rte_vfio_noiommu_is_enabled(void)
 {
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_clear_group)
+RTE_EXPORT_SYMBOL(rte_vfio_clear_group);
 int rte_vfio_clear_group(__rte_unused int vfio_group_fd)
 {
 	rte_errno = ENOTSUP;
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_num)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_num);
 int
 rte_vfio_get_group_num(__rte_unused const char *sysfs_base,
 		       __rte_unused const char *dev_addr,
@@ -993,7 +993,7 @@ rte_vfio_get_group_num(__rte_unused const char *sysfs_base,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd);
 int
 rte_vfio_get_container_fd(void)
 {
@@ -1001,7 +1001,7 @@ rte_vfio_get_container_fd(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd);
 int
 rte_vfio_get_group_fd(__rte_unused int iommu_group_num)
 {
@@ -1009,7 +1009,7 @@ rte_vfio_get_group_fd(__rte_unused int iommu_group_num)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_create)
+RTE_EXPORT_SYMBOL(rte_vfio_container_create);
 int
 rte_vfio_container_create(void)
 {
@@ -1017,7 +1017,7 @@ rte_vfio_container_create(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_destroy)
+RTE_EXPORT_SYMBOL(rte_vfio_container_destroy);
 int
 rte_vfio_container_destroy(__rte_unused int container_fd)
 {
@@ -1025,7 +1025,7 @@ rte_vfio_container_destroy(__rte_unused int container_fd)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind);
 int
 rte_vfio_container_group_bind(__rte_unused int container_fd,
 		__rte_unused int iommu_group_num)
@@ -1034,7 +1034,7 @@ rte_vfio_container_group_bind(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind);
 int
 rte_vfio_container_group_unbind(__rte_unused int container_fd,
 		__rte_unused int iommu_group_num)
@@ -1043,7 +1043,7 @@ rte_vfio_container_group_unbind(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -1054,7 +1054,7 @@ rte_vfio_container_dma_map(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c03e281e67..ae318313de 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -207,7 +207,7 @@ eal_alarm_callback(void *arg __rte_unused)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -260,7 +260,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c
index 737d1040ea..ca2b721d09 100644
--- a/lib/eal/freebsd/eal_dev.c
+++ b/lib/eal/freebsd/eal_dev.c
@@ -8,7 +8,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -16,7 +16,7 @@ rte_dev_event_monitor_start(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -24,7 +24,7 @@ rte_dev_event_monitor_stop(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -32,7 +32,7 @@ rte_dev_hotplug_handle_enable(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 5c3ab6699e..72865b7be5 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -81,7 +81,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb, void *cb_arg)
@@ -213,7 +213,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 				rte_intr_callback_fn cb_fn, void *cb_arg,
@@ -270,7 +270,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -358,7 +358,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 		rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -371,7 +371,7 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -413,7 +413,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -454,7 +454,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
@@ -656,7 +656,7 @@ rte_eal_intr_init(void)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
 		int epfd, int op, unsigned int vec, void *data)
@@ -670,7 +670,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -680,14 +680,14 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
@@ -695,7 +695,7 @@ rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -703,7 +703,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -711,7 +711,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 		int maxevents, int timeout)
@@ -724,7 +724,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -737,7 +737,7 @@ rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 {
@@ -749,21 +749,21 @@ rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index 6d3d46a390..37b7852430 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -36,7 +36,7 @@ uint64_t eal_get_baseaddr(void)
 /*
  * Get physical address of any mapped virtual address in the current process.
  */
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virtaddr)
 {
@@ -45,7 +45,7 @@ rte_mem_virt2phy(const void *virtaddr)
 	(void)virtaddr;
 	return RTE_BAD_IOVA;
 }
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virtaddr)
 {
@@ -297,7 +297,7 @@ rte_eal_hugepage_attach(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c
index 7ed76ed796..53755f6b54 100644
--- a/lib/eal/freebsd/eal_thread.c
+++ b/lib/eal/freebsd/eal_thread.c
@@ -26,7 +26,7 @@
 #include "eal_thread.h"
 
 /* require calling thread tid by gettid() */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int rte_sys_gettid(void)
 {
 	long lwpid;
@@ -34,7 +34,7 @@ int rte_sys_gettid(void)
 	return (int)lwpid;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
 	char truncated[RTE_THREAD_NAME_SIZE];
diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c
index d21ffa2694..46c90e3b03 100644
--- a/lib/eal/freebsd/eal_timer.c
+++ b/lib/eal/freebsd/eal_timer.c
@@ -24,7 +24,7 @@
 #warning HPET is not supported in FreeBSD
 #endif
 
-RTE_EXPORT_SYMBOL(eal_timer_source)
+RTE_EXPORT_SYMBOL(eal_timer_source);
 enum timer_source eal_timer_source = EAL_TIMER_TSC;
 
 uint64_t
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 52efb8626b..e3b3f99830 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -79,7 +79,7 @@ static struct flock wr_lock = {
 struct lcore_config lcore_config[RTE_MAX_LCORE];
 
 /* used by rte_rdtsc() */
-RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
+RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map);
 int rte_cycles_vmware_tsc_map;
 
 
@@ -828,7 +828,7 @@ sync_func(__rte_unused void *arg)
  * iopl() call is mostly for the i386 architecture. For other architectures,
  * return -1 to indicate IO privilege can't be changed in this way.
  */
-RTE_EXPORT_SYMBOL(rte_eal_iopl_init)
+RTE_EXPORT_SYMBOL(rte_eal_iopl_init);
 int
 rte_eal_iopl_init(void)
 {
@@ -924,7 +924,7 @@ eal_worker_thread_create(unsigned int lcore_id)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -1305,7 +1305,7 @@ mark_freeable(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -1348,7 +1348,7 @@ rte_eal_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
+RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev);
 int rte_eal_create_uio_dev(void)
 {
 	const struct internal_config *internal_conf =
@@ -1357,7 +1357,7 @@ int rte_eal_create_uio_dev(void)
 	return internal_conf->create_uio_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode);
 enum rte_intr_mode
 rte_eal_vfio_intr_mode(void)
 {
@@ -1367,7 +1367,7 @@ rte_eal_vfio_intr_mode(void)
 	return internal_conf->vfio_intr_mode;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token)
+RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token);
 void
 rte_eal_vfio_get_vf_token(rte_uuid_t vf_token)
 {
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index eb6a21d4f0..4bb5117cdc 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -135,7 +135,7 @@ eal_alarm_callback(void *arg __rte_unused)
 	rte_spinlock_unlock(&alarm_list_lk);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -200,7 +200,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 33b78464d5..c1801cd520 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -304,7 +304,7 @@ dev_uev_handler(__rte_unused void *param)
 	free(uevent.devname);
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -355,7 +355,7 @@ rte_dev_event_monitor_start(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -424,7 +424,7 @@ dev_sigbus_handler_unregister(void)
 	return rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -440,7 +440,7 @@ rte_dev_hotplug_handle_enable(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 4ec78de82c..c705b2617e 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -483,7 +483,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb, void *cb_arg)
@@ -568,7 +568,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 				rte_intr_callback_fn cb_fn, void *cb_arg,
@@ -620,7 +620,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -687,7 +687,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb_fn, void *cb_arg)
@@ -700,7 +700,7 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
@@ -781,7 +781,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
  * auto-masked. In fact, for interrupt handle types VFIO_MSIX and VFIO_MSI,
  * this function is no-op.
  */
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
@@ -834,7 +834,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
@@ -1313,7 +1313,7 @@ eal_init_tls_epfd(void)
 	return pfd;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
@@ -1386,7 +1386,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	       int maxevents, int timeout)
@@ -1394,7 +1394,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return eal_epoll_wait(epfd, events, maxevents, timeout, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -1419,7 +1419,7 @@ eal_epoll_data_safe_free(struct rte_epoll_event *ev)
 	ev->epfd = -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd,
 	      struct rte_epoll_event *event)
@@ -1461,7 +1461,7 @@ rte_epoll_ctl(int epfd, int op, int fd,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		int op, unsigned int vec, void *data)
@@ -1527,7 +1527,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
@@ -1546,7 +1546,7 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -1594,7 +1594,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
@@ -1609,14 +1609,14 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	rte_intr_max_intr_set(intr_handle, 0);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
 	return !(!rte_intr_nb_efd_get(intr_handle));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -1627,7 +1627,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 				rte_intr_nb_efd_get(intr_handle));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -1640,7 +1640,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index e433c1afee..0c6fd8799d 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -89,7 +89,7 @@ uint64_t eal_get_baseaddr(void)
 /*
  * Get physical address of any mapped virtual address in the current process.
  */
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virtaddr)
 {
@@ -147,7 +147,7 @@ rte_mem_virt2phy(const void *virtaddr)
 	return physaddr;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virtaddr)
 {
@@ -1688,7 +1688,7 @@ rte_eal_hugepage_attach(void)
 			eal_hugepage_attach();
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c
index c0056f825d..530fb265ba 100644
--- a/lib/eal/linux/eal_thread.c
+++ b/lib/eal/linux/eal_thread.c
@@ -17,13 +17,13 @@
 #include "eal_private.h"
 
 /* require calling thread tid by gettid() */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int rte_sys_gettid(void)
 {
 	return (int)syscall(SYS_gettid);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
 	int ret = ENOSYS;
diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c
index 0e670a0af6..3bb91c682a 100644
--- a/lib/eal/linux/eal_timer.c
+++ b/lib/eal/linux/eal_timer.c
@@ -19,7 +19,7 @@
 #include <eal_export.h>
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(eal_timer_source)
+RTE_EXPORT_SYMBOL(eal_timer_source);
 enum timer_source eal_timer_source = EAL_TIMER_HPET;
 
 #ifdef RTE_LIBEAL_USE_HPET
@@ -95,7 +95,7 @@ hpet_msb_inc(__rte_unused void *arg)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_hpet_hz)
+RTE_EXPORT_SYMBOL(rte_get_hpet_hz);
 uint64_t
 rte_get_hpet_hz(void)
 {
@@ -108,7 +108,7 @@ rte_get_hpet_hz(void)
 	return eal_hpet_resolution_hz;
 }
 
-RTE_EXPORT_SYMBOL(rte_get_hpet_cycles)
+RTE_EXPORT_SYMBOL(rte_get_hpet_cycles);
 uint64_t
 rte_get_hpet_cycles(void)
 {
@@ -135,7 +135,7 @@ rte_get_hpet_cycles(void)
  * Open and mmap /dev/hpet (high precision event timer) that will
  * provide our time reference.
  */
-RTE_EXPORT_SYMBOL(rte_eal_hpet_init)
+RTE_EXPORT_SYMBOL(rte_eal_hpet_init);
 int
 rte_eal_hpet_init(int make_default)
 {
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 805f0ff92c..1cd6914bb2 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -517,7 +517,7 @@ get_vfio_cfg_by_container_fd(int container_fd)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_fd);
 int
 rte_vfio_get_group_fd(int iommu_group_num)
 {
@@ -716,7 +716,7 @@ vfio_sync_default_container(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_clear_group)
+RTE_EXPORT_SYMBOL(rte_vfio_clear_group);
 int
 rte_vfio_clear_group(int vfio_group_fd)
 {
@@ -740,7 +740,7 @@ rte_vfio_clear_group(int vfio_group_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_setup_device)
+RTE_EXPORT_SYMBOL(rte_vfio_setup_device);
 int
 rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -994,7 +994,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_release_device)
+RTE_EXPORT_SYMBOL(rte_vfio_release_device);
 int
 rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
 		    int vfio_dev_fd)
@@ -1083,7 +1083,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_enable)
+RTE_EXPORT_SYMBOL(rte_vfio_enable);
 int
 rte_vfio_enable(const char *modname)
 {
@@ -1160,7 +1160,7 @@ rte_vfio_enable(const char *modname)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_is_enabled);
 int
 rte_vfio_is_enabled(const char *modname)
 {
@@ -1243,7 +1243,7 @@ vfio_set_iommu_type(int vfio_container_fd)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vfio_get_device_info, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vfio_get_device_info, 24.03);
 int
 rte_vfio_get_device_info(const char *sysfs_base, const char *dev_addr,
 		int *vfio_dev_fd, struct vfio_device_info *device_info)
@@ -1303,7 +1303,7 @@ vfio_has_supported_extensions(int vfio_container_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd)
+RTE_EXPORT_SYMBOL(rte_vfio_get_container_fd);
 int
 rte_vfio_get_container_fd(void)
 {
@@ -1375,7 +1375,7 @@ rte_vfio_get_container_fd(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_get_group_num)
+RTE_EXPORT_SYMBOL(rte_vfio_get_group_num);
 int
 rte_vfio_get_group_num(const char *sysfs_base,
 		const char *dev_addr, int *iommu_group_num)
@@ -2045,7 +2045,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled)
+RTE_EXPORT_SYMBOL(rte_vfio_noiommu_is_enabled);
 int
 rte_vfio_noiommu_is_enabled(void)
 {
@@ -2078,7 +2078,7 @@ rte_vfio_noiommu_is_enabled(void)
 	return c == 'Y';
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_create)
+RTE_EXPORT_SYMBOL(rte_vfio_container_create);
 int
 rte_vfio_container_create(void)
 {
@@ -2104,7 +2104,7 @@ rte_vfio_container_create(void)
 	return vfio_cfgs[i].vfio_container_fd;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_destroy)
+RTE_EXPORT_SYMBOL(rte_vfio_container_destroy);
 int
 rte_vfio_container_destroy(int container_fd)
 {
@@ -2130,7 +2130,7 @@ rte_vfio_container_destroy(int container_fd)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_bind);
 int
 rte_vfio_container_group_bind(int container_fd, int iommu_group_num)
 {
@@ -2145,7 +2145,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num)
 	return vfio_get_group_fd(vfio_cfg, iommu_group_num);
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind)
+RTE_EXPORT_SYMBOL(rte_vfio_container_group_unbind);
 int
 rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
 {
@@ -2186,7 +2186,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova,
 		uint64_t len)
@@ -2207,7 +2207,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova,
 	return container_dma_map(vfio_cfg, vaddr, iova, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova,
 		uint64_t len)
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 19fbf37e3e..9ad981f8fe 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -62,7 +62,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -80,7 +80,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -89,7 +89,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/loongarch/rte_hypervisor.c b/lib/eal/loongarch/rte_hypervisor.c
index 7dd70fe90c..0a463e98b6 100644
--- a/lib/eal/loongarch/rte_hypervisor.c
+++ b/lib/eal/loongarch/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/loongarch/rte_power_intrinsics.c b/lib/eal/loongarch/rte_power_intrinsics.c
index e1a2b2d7ed..6c8e063609 100644
--- a/lib/eal/loongarch/rte_power_intrinsics.c
+++ b/lib/eal/loongarch/rte_power_intrinsics.c
@@ -10,7 +10,7 @@
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -24,7 +24,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -36,7 +36,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on LOONGARCH.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -45,7 +45,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index a78a7d1b53..8569fdb3f7 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -86,7 +86,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -104,7 +104,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -113,7 +113,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/ppc/rte_hypervisor.c b/lib/eal/ppc/rte_hypervisor.c
index 51b224fb94..45e6ef667b 100644
--- a/lib/eal/ppc/rte_hypervisor.c
+++ b/lib/eal/ppc/rte_hypervisor.c
@@ -5,7 +5,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/ppc/rte_power_intrinsics.c b/lib/eal/ppc/rte_power_intrinsics.c
index d9d8eb8d51..de1ebaad52 100644
--- a/lib/eal/ppc/rte_power_intrinsics.c
+++ b/lib/eal/ppc/rte_power_intrinsics.c
@@ -10,7 +10,7 @@
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -24,7 +24,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -36,7 +36,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on PPC64.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -45,7 +45,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
index 4dec491b0d..815028220c 100644
--- a/lib/eal/riscv/rte_cpuflags.c
+++ b/lib/eal/riscv/rte_cpuflags.c
@@ -91,7 +91,7 @@ rte_cpu_get_features(hwcap_registers_t out)
 /*
  * Checks if a particular flag is available on current machine.
  */
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -109,7 +109,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return (regs[feat->reg] >> feat->bit) & 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -118,7 +118,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/riscv/rte_hypervisor.c b/lib/eal/riscv/rte_hypervisor.c
index 73020f7753..acc698b8a4 100644
--- a/lib/eal/riscv/rte_hypervisor.c
+++ b/lib/eal/riscv/rte_hypervisor.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include "rte_hypervisor.h"
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/riscv/rte_power_intrinsics.c b/lib/eal/riscv/rte_power_intrinsics.c
index 11eff53ff2..9a84447a20 100644
--- a/lib/eal/riscv/rte_power_intrinsics.c
+++ b/lib/eal/riscv/rte_power_intrinsics.c
@@ -12,7 +12,7 @@
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		  const uint64_t tsc_timestamp)
@@ -26,7 +26,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -38,7 +38,7 @@ rte_power_pause(const uint64_t tsc_timestamp)
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -50,7 +50,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 /**
  * This function is not supported on RISC-V 64
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 			const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/unix/eal_debug.c b/lib/eal/unix/eal_debug.c
index e3689531e4..86e02b9665 100644
--- a/lib/eal/unix/eal_debug.c
+++ b/lib/eal/unix/eal_debug.c
@@ -47,7 +47,7 @@ static char *safe_itoa(long val, char *buf, size_t len, unsigned int radix)
  * Most of libc is therefore not safe, include RTE_LOG (calls syslog);
  * backtrace_symbols (calls malloc), etc.
  */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void rte_dump_stack(void)
 {
 	void *func[BACKTRACE_SIZE];
@@ -124,7 +124,7 @@ void rte_dump_stack(void)
 #else /* !RTE_BACKTRACE */
 
 /* stub if not enabled */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void rte_dump_stack(void) { }
 
 #endif /* RTE_BACKTRACE */
diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c
index 6b8451cd3e..b67cfc0b7b 100644
--- a/lib/eal/unix/eal_filesystem.c
+++ b/lib/eal/unix/eal_filesystem.c
@@ -78,7 +78,7 @@ int eal_create_runtime_dir(void)
 }
 
 /* parse a sysfs (or other) file containing one integer value */
-RTE_EXPORT_SYMBOL(eal_parse_sysfs_value)
+RTE_EXPORT_SYMBOL(eal_parse_sysfs_value);
 int eal_parse_sysfs_value(const char *filename, unsigned long *val)
 {
 	FILE *f;
diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c
index f2c16fb8a7..1627e62de9 100644
--- a/lib/eal/unix/eal_firmware.c
+++ b/lib/eal/unix/eal_firmware.c
@@ -147,7 +147,7 @@ firmware_read(const char *name, void **buf, size_t *bufsz)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read);
 int
 rte_firmware_read(const char *name, void **buf, size_t *bufsz)
 {
diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c
index 55b647c736..4ba28b714d 100644
--- a/lib/eal/unix/eal_unix_memory.c
+++ b/lib/eal/unix/eal_unix_memory.c
@@ -110,7 +110,7 @@ mem_rte_to_sys_prot(int prot)
 	return sys_prot;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map);
 void *
 rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	int fd, uint64_t offset)
@@ -134,14 +134,14 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	return mem_map(requested_addr, size, sys_prot, sys_flags, fd, offset);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap);
 int
 rte_mem_unmap(void *virt, size_t size)
 {
 	return mem_unmap(virt, size);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size);
 size_t
 rte_mem_page_size(void)
 {
@@ -165,7 +165,7 @@ rte_mem_page_size(void)
 	return page_size;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock);
 int
 rte_mem_lock(const void *virt, size_t size)
 {
diff --git a/lib/eal/unix/eal_unix_timer.c b/lib/eal/unix/eal_unix_timer.c
index 3dbcf61e90..27679601cf 100644
--- a/lib/eal/unix/eal_unix_timer.c
+++ b/lib/eal/unix/eal_unix_timer.c
@@ -8,7 +8,7 @@
 #include <eal_export.h>
 #include <rte_cycles.h>
 
-RTE_EXPORT_SYMBOL(rte_delay_us_sleep)
+RTE_EXPORT_SYMBOL(rte_delay_us_sleep);
 void
 rte_delay_us_sleep(unsigned int us)
 {
diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index 950c0848ba..c1bb4d7091 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -119,7 +119,7 @@ thread_start_wrapper(void *arg)
 }
 #endif
 
-RTE_EXPORT_SYMBOL(rte_thread_create)
+RTE_EXPORT_SYMBOL(rte_thread_create);
 int
 rte_thread_create(rte_thread_t *thread_id,
 		const rte_thread_attr_t *thread_attr,
@@ -228,7 +228,7 @@ rte_thread_create(rte_thread_t *thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_join)
+RTE_EXPORT_SYMBOL(rte_thread_join);
 int
 rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 {
@@ -251,21 +251,21 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_detach)
+RTE_EXPORT_SYMBOL(rte_thread_detach);
 int
 rte_thread_detach(rte_thread_t thread_id)
 {
 	return pthread_detach((pthread_t)thread_id.opaque_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_equal)
+RTE_EXPORT_SYMBOL(rte_thread_equal);
 int
 rte_thread_equal(rte_thread_t t1, rte_thread_t t2)
 {
 	return pthread_equal((pthread_t)t1.opaque_id, (pthread_t)t2.opaque_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_self)
+RTE_EXPORT_SYMBOL(rte_thread_self);
 rte_thread_t
 rte_thread_self(void)
 {
@@ -278,7 +278,7 @@ rte_thread_self(void)
 	return thread_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_priority)
+RTE_EXPORT_SYMBOL(rte_thread_get_priority);
 int
 rte_thread_get_priority(rte_thread_t thread_id,
 	enum rte_thread_priority *priority)
@@ -301,7 +301,7 @@ rte_thread_get_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_set_priority);
 int
 rte_thread_set_priority(rte_thread_t thread_id,
 	enum rte_thread_priority priority)
@@ -323,7 +323,7 @@ rte_thread_set_priority(rte_thread_t thread_id,
 		&param);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_create)
+RTE_EXPORT_SYMBOL(rte_thread_key_create);
 int
 rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *))
 {
@@ -346,7 +346,7 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *))
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_delete)
+RTE_EXPORT_SYMBOL(rte_thread_key_delete);
 int
 rte_thread_key_delete(rte_thread_key key)
 {
@@ -369,7 +369,7 @@ rte_thread_key_delete(rte_thread_key key)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_set)
+RTE_EXPORT_SYMBOL(rte_thread_value_set);
 int
 rte_thread_value_set(rte_thread_key key, const void *value)
 {
@@ -390,7 +390,7 @@ rte_thread_value_set(rte_thread_key key, const void *value)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_get)
+RTE_EXPORT_SYMBOL(rte_thread_value_get);
 void *
 rte_thread_value_get(rte_thread_key key)
 {
@@ -402,7 +402,7 @@ rte_thread_value_get(rte_thread_key key)
 	return pthread_getspecific(key->thread_index);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id);
 int
 rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		const rte_cpuset_t *cpuset)
@@ -411,7 +411,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		sizeof(*cpuset), cpuset);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id);
 int
 rte_thread_get_affinity_by_id(rte_thread_t thread_id,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 4f0a164d9b..a38c69ddfd 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -75,7 +75,7 @@ eal_proc_type_detect(void)
 	return ptype;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_disable)
+RTE_EXPORT_SYMBOL(rte_mp_disable);
 bool
 rte_mp_disable(void)
 {
@@ -191,12 +191,12 @@ rte_eal_init_alert(const char *msg)
  * until eal_common_trace.c can be compiled.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_point_sz, 20.05);
 RTE_DEFINE_PER_LCORE(volatile int, trace_point_sz);
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(per_lcore_trace_mem, 20.05);
 RTE_DEFINE_PER_LCORE(void *, trace_mem);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_mem_per_thread_alloc, 20.05);
 void
 __rte_trace_mem_per_thread_alloc(void)
 {
@@ -207,7 +207,7 @@ trace_mem_per_thread_free(void)
 {
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_emit_field, 20.05);
 void
 __rte_trace_point_emit_field(size_t sz, const char *field,
 	const char *type)
@@ -217,7 +217,7 @@ __rte_trace_point_emit_field(size_t sz, const char *field,
 	RTE_SET_USED(type);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_trace_point_register, 20.05);
 int
 __rte_trace_point_register(rte_trace_point_t *trace, const char *name,
 	void (*register_fn)(void))
@@ -228,7 +228,7 @@ __rte_trace_point_register(rte_trace_point_t *trace, const char *name,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_cleanup)
+RTE_EXPORT_SYMBOL(rte_eal_cleanup);
 int
 rte_eal_cleanup(void)
 {
@@ -246,7 +246,7 @@ rte_eal_cleanup(void)
 }
 
 /* Launch threads, called at application init(). */
-RTE_EXPORT_SYMBOL(rte_eal_init)
+RTE_EXPORT_SYMBOL(rte_eal_init);
 int
 rte_eal_init(int argc, char **argv)
 {
@@ -520,7 +520,7 @@ eal_asprintf(char **buffer, const char *format, ...)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_map);
 int
 rte_vfio_container_dma_map(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -531,7 +531,7 @@ rte_vfio_container_dma_map(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap)
+RTE_EXPORT_SYMBOL(rte_vfio_container_dma_unmap);
 int
 rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 			__rte_unused uint64_t vaddr,
@@ -542,7 +542,7 @@ rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_firmware_read);
 int
 rte_firmware_read(__rte_unused const char *name,
 			__rte_unused void **buf,
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index 0b11d331dc..11d35a7828 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -84,7 +84,7 @@ alarm_task_exec(void *arg)
 	task->ret = alarm_set(task->entry, task->deadline);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_set)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_set);
 int
 rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
@@ -186,7 +186,7 @@ alarm_matches(const struct alarm_entry *ap,
 	return (ap->cb_fn == cb_fn) && (any_arg || ap->cb_arg == cb_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel)
+RTE_EXPORT_SYMBOL(rte_eal_alarm_cancel);
 int
 rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
 {
diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c
index a4549e1179..7355826cb8 100644
--- a/lib/eal/windows/eal_debug.c
+++ b/lib/eal/windows/eal_debug.c
@@ -15,7 +15,7 @@
 #define BACKTRACE_SIZE 256
 
 /* dump the stack of the calling core */
-RTE_EXPORT_SYMBOL(rte_dump_stack)
+RTE_EXPORT_SYMBOL(rte_dump_stack);
 void
 rte_dump_stack(void)
 {
diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c
index 9c7463edf2..4c74162ca0 100644
--- a/lib/eal/windows/eal_dev.c
+++ b/lib/eal/windows/eal_dev.c
@@ -7,7 +7,7 @@
 
 #include "eal_private.h"
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_start);
 int
 rte_dev_event_monitor_start(void)
 {
@@ -15,7 +15,7 @@ rte_dev_event_monitor_start(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop)
+RTE_EXPORT_SYMBOL(rte_dev_event_monitor_stop);
 int
 rte_dev_event_monitor_stop(void)
 {
@@ -23,7 +23,7 @@ rte_dev_event_monitor_stop(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_enable);
 int
 rte_dev_hotplug_handle_enable(void)
 {
@@ -31,7 +31,7 @@ rte_dev_hotplug_handle_enable(void)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable)
+RTE_EXPORT_SYMBOL(rte_dev_hotplug_handle_disable);
 int
 rte_dev_hotplug_handle_disable(void)
 {
diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c
index 5ff30c7631..14b0cfeee8 100644
--- a/lib/eal/windows/eal_interrupts.c
+++ b/lib/eal/windows/eal_interrupts.c
@@ -109,14 +109,14 @@ rte_eal_intr_init(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_is_intr)
+RTE_EXPORT_SYMBOL(rte_thread_is_intr);
 int
 rte_thread_is_intr(void)
 {
 	return rte_thread_equal(intr_thread, rte_thread_self());
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_rx_ctl);
 int
 rte_intr_rx_ctl(__rte_unused struct rte_intr_handle *intr_handle,
 		__rte_unused int epfd, __rte_unused int op,
@@ -150,7 +150,7 @@ eal_intr_thread_cancel(void)
 	WaitForSingleObject(intr_thread_handle, INFINITE);
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_register)
+RTE_EXPORT_SYMBOL(rte_intr_callback_register);
 int
 rte_intr_callback_register(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -159,7 +159,7 @@ rte_intr_callback_register(
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_pending);
 int
 rte_intr_callback_unregister_pending(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -169,7 +169,7 @@ rte_intr_callback_unregister_pending(
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister);
 int
 rte_intr_callback_unregister(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -178,7 +178,7 @@ rte_intr_callback_unregister(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync)
+RTE_EXPORT_SYMBOL(rte_intr_callback_unregister_sync);
 int
 rte_intr_callback_unregister_sync(
 	__rte_unused const struct rte_intr_handle *intr_handle,
@@ -187,28 +187,28 @@ rte_intr_callback_unregister_sync(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_enable)
+RTE_EXPORT_SYMBOL(rte_intr_enable);
 int
 rte_intr_enable(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_ack)
+RTE_EXPORT_SYMBOL(rte_intr_ack);
 int
 rte_intr_ack(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_intr_disable)
+RTE_EXPORT_SYMBOL(rte_intr_disable);
 int
 rte_intr_disable(__rte_unused const struct rte_intr_handle *intr_handle)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_enable);
 int
 rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 {
@@ -218,14 +218,14 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_efd_disable);
 void
 rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 {
 	RTE_SET_USED(intr_handle);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_dp_is_en);
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
@@ -234,7 +234,7 @@ rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_allow_others);
 int
 rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 {
@@ -243,7 +243,7 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_cap_multiple);
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
@@ -252,7 +252,7 @@ rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait)
+RTE_EXPORT_SYMBOL(rte_epoll_wait);
 int
 rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 		int maxevents, int timeout)
@@ -265,7 +265,7 @@ rte_epoll_wait(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible)
+RTE_EXPORT_SYMBOL(rte_epoll_wait_interruptible);
 int
 rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 			     int maxevents, int timeout)
@@ -278,7 +278,7 @@ rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_epoll_ctl)
+RTE_EXPORT_SYMBOL(rte_epoll_ctl);
 int
 rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 {
@@ -290,14 +290,14 @@ rte_epoll_ctl(int epfd, int op, int fd, struct rte_epoll_event *event)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_tls_epfd);
 int
 rte_intr_tls_epfd(void)
 {
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_intr_free_epoll_fd);
 void
 rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 {
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 9f85191016..4bc251598e 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -213,7 +213,7 @@ eal_mem_virt2iova_cleanup(void)
 		CloseHandle(virt2phys_device);
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2phy)
+RTE_EXPORT_SYMBOL(rte_mem_virt2phy);
 phys_addr_t
 rte_mem_virt2phy(const void *virt)
 {
@@ -234,7 +234,7 @@ rte_mem_virt2phy(const void *virt)
 	return phys.QuadPart;
 }
 
-RTE_EXPORT_SYMBOL(rte_mem_virt2iova)
+RTE_EXPORT_SYMBOL(rte_mem_virt2iova);
 rte_iova_t
 rte_mem_virt2iova(const void *virt)
 {
@@ -250,7 +250,7 @@ rte_mem_virt2iova(const void *virt)
 }
 
 /* Always using physical addresses under Windows if they can be obtained. */
-RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs)
+RTE_EXPORT_SYMBOL(rte_eal_using_phys_addrs);
 int
 rte_eal_using_phys_addrs(void)
 {
@@ -522,7 +522,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_map);
 void *
 rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	int fd, uint64_t offset)
@@ -606,7 +606,7 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
 	return virt;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_unmap);
 int
 rte_mem_unmap(void *virt, size_t size)
 {
@@ -630,7 +630,7 @@ eal_get_baseaddr(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_page_size);
 size_t
 rte_mem_page_size(void)
 {
@@ -642,7 +642,7 @@ rte_mem_page_size(void)
 	return info.dwPageSize;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mem_lock);
 int
 rte_mem_lock(const void *virt, size_t size)
 {
diff --git a/lib/eal/windows/eal_mp.c b/lib/eal/windows/eal_mp.c
index 6703355318..48653ef02a 100644
--- a/lib/eal/windows/eal_mp.c
+++ b/lib/eal/windows/eal_mp.c
@@ -25,7 +25,7 @@ rte_mp_channel_cleanup(void)
 	EAL_LOG_NOT_IMPLEMENTED();
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_register)
+RTE_EXPORT_SYMBOL(rte_mp_action_register);
 int
 rte_mp_action_register(const char *name, rte_mp_t action)
 {
@@ -35,7 +35,7 @@ rte_mp_action_register(const char *name, rte_mp_t action)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_action_unregister)
+RTE_EXPORT_SYMBOL(rte_mp_action_unregister);
 void
 rte_mp_action_unregister(const char *name)
 {
@@ -43,7 +43,7 @@ rte_mp_action_unregister(const char *name)
 	EAL_LOG_NOT_IMPLEMENTED();
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_sendmsg)
+RTE_EXPORT_SYMBOL(rte_mp_sendmsg);
 int
 rte_mp_sendmsg(struct rte_mp_msg *msg)
 {
@@ -52,7 +52,7 @@ rte_mp_sendmsg(struct rte_mp_msg *msg)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_sync)
+RTE_EXPORT_SYMBOL(rte_mp_request_sync);
 int
 rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	const struct timespec *ts)
@@ -64,7 +64,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_request_async)
+RTE_EXPORT_SYMBOL(rte_mp_request_async);
 int
 rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 		rte_mp_async_reply_t clb)
@@ -76,7 +76,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_mp_reply)
+RTE_EXPORT_SYMBOL(rte_mp_reply);
 int
 rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
 {
diff --git a/lib/eal/windows/eal_thread.c b/lib/eal/windows/eal_thread.c
index 3eeb94a589..811ae007ba 100644
--- a/lib/eal/windows/eal_thread.c
+++ b/lib/eal/windows/eal_thread.c
@@ -72,7 +72,7 @@ eal_thread_ack_command(void)
 }
 
 /* get current thread ID */
-RTE_EXPORT_SYMBOL(rte_sys_gettid)
+RTE_EXPORT_SYMBOL(rte_sys_gettid);
 int
 rte_sys_gettid(void)
 {
diff --git a/lib/eal/windows/eal_timer.c b/lib/eal/windows/eal_timer.c
index 33cbac6a03..ccaa743b5b 100644
--- a/lib/eal/windows/eal_timer.c
+++ b/lib/eal/windows/eal_timer.c
@@ -15,7 +15,7 @@
 #define US_PER_SEC 1E6
 #define CYC_PER_100KHZ 1E5
 
-RTE_EXPORT_SYMBOL(rte_delay_us_sleep)
+RTE_EXPORT_SYMBOL(rte_delay_us_sleep);
 void
 rte_delay_us_sleep(unsigned int us)
 {
diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c
index 85e5a57346..e1bae54ec8 100644
--- a/lib/eal/windows/rte_thread.c
+++ b/lib/eal/windows/rte_thread.c
@@ -182,7 +182,7 @@ thread_func_wrapper(void *arg)
 	return (DWORD)ctx.thread_func(ctx.routine_args);
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_create)
+RTE_EXPORT_SYMBOL(rte_thread_create);
 int
 rte_thread_create(rte_thread_t *thread_id,
 		  const rte_thread_attr_t *thread_attr,
@@ -260,7 +260,7 @@ rte_thread_create(rte_thread_t *thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_join)
+RTE_EXPORT_SYMBOL(rte_thread_join);
 int
 rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 {
@@ -301,7 +301,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_detach)
+RTE_EXPORT_SYMBOL(rte_thread_detach);
 int
 rte_thread_detach(rte_thread_t thread_id)
 {
@@ -311,14 +311,14 @@ rte_thread_detach(rte_thread_t thread_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_equal)
+RTE_EXPORT_SYMBOL(rte_thread_equal);
 int
 rte_thread_equal(rte_thread_t t1, rte_thread_t t2)
 {
 	return t1.opaque_id == t2.opaque_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_self)
+RTE_EXPORT_SYMBOL(rte_thread_self);
 rte_thread_t
 rte_thread_self(void)
 {
@@ -329,7 +329,7 @@ rte_thread_self(void)
 	return thread_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_name)
+RTE_EXPORT_SYMBOL(rte_thread_set_name);
 void
 rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 {
@@ -371,7 +371,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
 		EAL_LOG(DEBUG, "Failed to set thread name");
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_priority)
+RTE_EXPORT_SYMBOL(rte_thread_get_priority);
 int
 rte_thread_get_priority(rte_thread_t thread_id,
 	enum rte_thread_priority *priority)
@@ -411,7 +411,7 @@ rte_thread_get_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_priority)
+RTE_EXPORT_SYMBOL(rte_thread_set_priority);
 int
 rte_thread_set_priority(rte_thread_t thread_id,
 			enum rte_thread_priority priority)
@@ -450,7 +450,7 @@ rte_thread_set_priority(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_create)
+RTE_EXPORT_SYMBOL(rte_thread_key_create);
 int
 rte_thread_key_create(rte_thread_key *key,
 		__rte_unused void (*destructor)(void *))
@@ -471,7 +471,7 @@ rte_thread_key_create(rte_thread_key *key,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_key_delete)
+RTE_EXPORT_SYMBOL(rte_thread_key_delete);
 int
 rte_thread_key_delete(rte_thread_key key)
 {
@@ -490,7 +490,7 @@ rte_thread_key_delete(rte_thread_key key)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_set)
+RTE_EXPORT_SYMBOL(rte_thread_value_set);
 int
 rte_thread_value_set(rte_thread_key key, const void *value)
 {
@@ -511,7 +511,7 @@ rte_thread_value_set(rte_thread_key key, const void *value)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_value_get)
+RTE_EXPORT_SYMBOL(rte_thread_value_get);
 void *
 rte_thread_value_get(rte_thread_key key)
 {
@@ -531,7 +531,7 @@ rte_thread_value_get(rte_thread_key key)
 	return output;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_set_affinity_by_id);
 int
 rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 		const rte_cpuset_t *cpuset)
@@ -572,7 +572,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id)
+RTE_EXPORT_SYMBOL(rte_thread_get_affinity_by_id);
 int
 rte_thread_get_affinity_by_id(rte_thread_t thread_id,
 		rte_cpuset_t *cpuset)
diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
index 5d1f352e04..90495b19a4 100644
--- a/lib/eal/x86/rte_cpuflags.c
+++ b/lib/eal/x86/rte_cpuflags.c
@@ -149,7 +149,7 @@ struct feature_entry rte_cpu_feature_table[] = {
 	FEAT_DEF(INVTSC, 0x80000007, 0, RTE_REG_EDX,  8)
 };
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_enabled);
 int
 rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 {
@@ -192,7 +192,7 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
 	return feat->value;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name)
+RTE_EXPORT_SYMBOL(rte_cpu_get_flag_name);
 const char *
 rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 {
@@ -201,7 +201,7 @@ rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
 	return rte_cpu_feature_table[feature].name;
 }
 
-RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support)
+RTE_EXPORT_SYMBOL(rte_cpu_get_intrinsics_support);
 void
 rte_cpu_get_intrinsics_support(struct rte_cpu_intrinsics *intrinsics)
 {
diff --git a/lib/eal/x86/rte_hypervisor.c b/lib/eal/x86/rte_hypervisor.c
index 0c649c1d41..6756cd10c0 100644
--- a/lib/eal/x86/rte_hypervisor.c
+++ b/lib/eal/x86/rte_hypervisor.c
@@ -14,7 +14,7 @@
 /* See http://lwn.net/Articles/301888/ */
 #define HYPERVISOR_INFO_LEAF 0x40000000
 
-RTE_EXPORT_SYMBOL(rte_hypervisor_get)
+RTE_EXPORT_SYMBOL(rte_hypervisor_get);
 enum rte_hypervisor
 rte_hypervisor_get(void)
 {
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index 1cb2e908c0..70fe5deb5b 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -159,7 +159,7 @@ __check_val_size(const uint8_t sz)
  * For more information about usage of these instructions, please refer to
  * Intel(R) 64 and IA-32 Architectures Software Developer's Manual.
  */
-RTE_EXPORT_SYMBOL(rte_power_monitor)
+RTE_EXPORT_SYMBOL(rte_power_monitor);
 int
 rte_power_monitor(const struct rte_power_monitor_cond *pmc,
 		const uint64_t tsc_timestamp)
@@ -221,7 +221,7 @@ rte_power_monitor(const struct rte_power_monitor_cond *pmc,
  * information about usage of this instruction, please refer to Intel(R) 64 and
  * IA-32 Architectures Software Developer's Manual.
  */
-RTE_EXPORT_SYMBOL(rte_power_pause)
+RTE_EXPORT_SYMBOL(rte_power_pause);
 int
 rte_power_pause(const uint64_t tsc_timestamp)
 {
@@ -266,7 +266,7 @@ RTE_INIT(rte_power_intrinsics_init) {
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup)
+RTE_EXPORT_SYMBOL(rte_power_monitor_wakeup);
 int
 rte_power_monitor_wakeup(const unsigned int lcore_id)
 {
@@ -316,7 +316,7 @@ rte_power_monitor_wakeup(const unsigned int lcore_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_monitor_multi)
+RTE_EXPORT_SYMBOL(rte_power_monitor_multi);
 int
 rte_power_monitor_multi(const struct rte_power_monitor_cond pmc[],
 		const uint32_t num, const uint64_t tsc_timestamp)
diff --git a/lib/eal/x86/rte_spinlock.c b/lib/eal/x86/rte_spinlock.c
index da783919e5..8f000366aa 100644
--- a/lib/eal/x86/rte_spinlock.c
+++ b/lib/eal/x86/rte_spinlock.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include "rte_cpuflags.h"
 
-RTE_EXPORT_SYMBOL(rte_rtm_supported)
+RTE_EXPORT_SYMBOL(rte_rtm_supported);
 uint8_t rte_rtm_supported; /* cache the flag to avoid the overhead
 			      of the rte_cpu_get_flag_enabled function */
 
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index b0e44e5c51..066e35ae4b 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -497,7 +497,7 @@ efd_search_hash(struct rte_efd_table * const table,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_create)
+RTE_EXPORT_SYMBOL(rte_efd_create);
 struct rte_efd_table *
 rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
 		uint64_t online_cpu_socket_bitmask, uint8_t offline_cpu_socket)
@@ -722,7 +722,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_find_existing)
+RTE_EXPORT_SYMBOL(rte_efd_find_existing);
 struct rte_efd_table *
 rte_efd_find_existing(const char *name)
 {
@@ -749,7 +749,7 @@ rte_efd_find_existing(const char *name)
 	return table;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_free)
+RTE_EXPORT_SYMBOL(rte_efd_free);
 void
 rte_efd_free(struct rte_efd_table *table)
 {
@@ -1166,7 +1166,7 @@ efd_compute_update(struct rte_efd_table * const table,
 	return RTE_EFD_UPDATE_FAILED;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_update)
+RTE_EXPORT_SYMBOL(rte_efd_update);
 int
 rte_efd_update(struct rte_efd_table * const table, const unsigned int socket_id,
 		const void *key, const efd_value_t value)
@@ -1190,7 +1190,7 @@ rte_efd_update(struct rte_efd_table * const table, const unsigned int socket_id,
 	return status;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_delete)
+RTE_EXPORT_SYMBOL(rte_efd_delete);
 int
 rte_efd_delete(struct rte_efd_table * const table, const unsigned int socket_id,
 		const void *key, efd_value_t * const prev_value)
@@ -1307,7 +1307,7 @@ efd_lookup_internal(const struct efd_online_group_entry * const group,
 	return value;
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_lookup)
+RTE_EXPORT_SYMBOL(rte_efd_lookup);
 efd_value_t
 rte_efd_lookup(const struct rte_efd_table * const table,
 		const unsigned int socket_id, const void *key)
@@ -1329,7 +1329,7 @@ rte_efd_lookup(const struct rte_efd_table * const table,
 			table->lookup_fn);
 }
 
-RTE_EXPORT_SYMBOL(rte_efd_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_efd_lookup_bulk);
 void rte_efd_lookup_bulk(const struct rte_efd_table * const table,
 		const unsigned int socket_id, const int num_keys,
 		const void **key_list, efd_value_t * const value_list)
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index ec0c1e1176..47a02da4a7 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -75,7 +75,7 @@ eth_dev_get(uint16_t port_id)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocate);
 struct rte_eth_dev *
 rte_eth_dev_allocate(const char *name)
 {
@@ -130,7 +130,7 @@ rte_eth_dev_allocate(const char *name)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocated)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_allocated);
 struct rte_eth_dev *
 rte_eth_dev_allocated(const char *name)
 {
@@ -153,7 +153,7 @@ rte_eth_dev_allocated(const char *name)
  * makes sure that the same device would have the same port ID both
  * in the primary and secondary process.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_attach_secondary)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_attach_secondary);
 struct rte_eth_dev *
 rte_eth_dev_attach_secondary(const char *name)
 {
@@ -184,7 +184,7 @@ rte_eth_dev_attach_secondary(const char *name)
 	return eth_dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_callback_process)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_callback_process);
 int
 rte_eth_dev_callback_process(struct rte_eth_dev *dev,
 	enum rte_eth_event_type event, void *ret_param)
@@ -212,7 +212,7 @@ rte_eth_dev_callback_process(struct rte_eth_dev *dev,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_probing_finish);
 void
 rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 {
@@ -232,7 +232,7 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	dev->state = RTE_ETH_DEV_ATTACHED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_release_port)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_release_port);
 int
 rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 {
@@ -291,7 +291,7 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_create);
 int
 rte_eth_dev_create(struct rte_device *device, const char *name,
 	size_t priv_data_size,
@@ -367,7 +367,7 @@ rte_eth_dev_create(struct rte_device *device, const char *name,
 	return retval;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_destroy);
 int
 rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
 	ethdev_uninit_t ethdev_uninit)
@@ -388,7 +388,7 @@ rte_eth_dev_destroy(struct rte_eth_dev *ethdev,
 	return rte_eth_dev_release_port(ethdev);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_get_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_get_by_name);
 struct rte_eth_dev *
 rte_eth_dev_get_by_name(const char *name)
 {
@@ -400,7 +400,7 @@ rte_eth_dev_get_by_name(const char *name)
 	return &rte_eth_devices[pid];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_rx_hairpin_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_rx_hairpin_queue);
 int
 rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 {
@@ -409,7 +409,7 @@ rte_eth_dev_is_rx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_tx_hairpin_queue)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_is_tx_hairpin_queue);
 int
 rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 {
@@ -418,7 +418,7 @@ rte_eth_dev_is_tx_hairpin_queue(struct rte_eth_dev *dev, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_internal_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dev_internal_reset);
 void
 rte_eth_dev_internal_reset(struct rte_eth_dev *dev)
 {
@@ -629,7 +629,7 @@ eth_dev_tokenise_representor_list(char *p_val, struct rte_eth_devargs *eth_devar
 	return result;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devargs_parse)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devargs_parse);
 int
 rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_devargs,
 		      unsigned int nb_da)
@@ -692,7 +692,7 @@ eth_dev_dma_mzone_name(char *name, size_t len, uint16_t port_id, uint16_t queue_
 			port_id, queue_id, ring_name);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_free);
 int
 rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
 		uint16_t queue_id)
@@ -717,7 +717,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name,
 	return rc;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_reserve)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_dma_zone_reserve);
 const struct rte_memzone *
 rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
 			 uint16_t queue_id, size_t size, unsigned int align,
@@ -753,7 +753,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name,
 			RTE_MEMZONE_IOVA_CONTIG, align);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_bind)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_bind);
 int
 rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
 				struct rte_hairpin_peer_info *peer_info,
@@ -772,7 +772,7 @@ rte_eth_hairpin_queue_peer_bind(uint16_t cur_port, uint16_t cur_queue,
 	return dev->dev_ops->hairpin_queue_peer_bind(dev, cur_queue, peer_info, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_unbind)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_unbind);
 int
 rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
 				  uint32_t direction)
@@ -787,7 +787,7 @@ rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
 	return dev->dev_ops->hairpin_queue_peer_unbind(dev, cur_queue, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_update)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_hairpin_queue_peer_update);
 int
 rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
 				  struct rte_hairpin_peer_info *cur_info,
@@ -809,7 +809,7 @@ rte_eth_hairpin_queue_peer_update(uint16_t peer_port, uint16_t peer_queue,
 						       cur_info, peer_info, direction);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_ip_reassembly_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_ip_reassembly_dynfield_register);
 int
 rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
 {
@@ -838,7 +838,7 @@ rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_pkt_burst_dummy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_pkt_burst_dummy);
 uint16_t
 rte_eth_pkt_burst_dummy(void *queue __rte_unused,
 		struct rte_mbuf **pkts __rte_unused,
@@ -847,7 +847,7 @@ rte_eth_pkt_burst_dummy(void *queue __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_representor_id_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_representor_id_get);
 int
 rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
@@ -943,7 +943,7 @@ rte_eth_representor_id_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_alloc)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_alloc);
 int
 rte_eth_switch_domain_alloc(uint16_t *domain_id)
 {
@@ -964,7 +964,7 @@ rte_eth_switch_domain_alloc(uint16_t *domain_id)
 	return -ENOSPC;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_free)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_switch_domain_free);
 int
 rte_eth_switch_domain_free(uint16_t domain_id)
 {
@@ -981,7 +981,7 @@ rte_eth_switch_domain_free(uint16_t domain_id)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_get_restore_flags)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_get_restore_flags);
 uint64_t
 rte_eth_get_restore_flags(struct rte_eth_dev *dev, enum rte_eth_dev_operation op)
 {
diff --git a/lib/ethdev/ethdev_linux_ethtool.c b/lib/ethdev/ethdev_linux_ethtool.c
index 5eddda1da3..0205181e80 100644
--- a/lib/ethdev/ethdev_linux_ethtool.c
+++ b/lib/ethdev/ethdev_linux_ethtool.c
@@ -133,7 +133,7 @@ static const uint32_t link_modes[] = {
 	[120] =  800000, /* ETHTOOL_LINK_MODE_800000baseVR4_Full_BIT */
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_ethtool)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_ethtool);
 uint32_t
 rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 {
@@ -157,7 +157,7 @@ rte_eth_link_speed_ethtool(enum ethtool_link_mode_bit_indices bit)
 	return rte_eth_speed_bitflag(speed, duplex);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_glink)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_glink);
 uint32_t
 rte_eth_link_speed_glink(const uint32_t *bitmap, int8_t nwords)
 {
@@ -178,7 +178,7 @@ rte_eth_link_speed_glink(const uint32_t *bitmap, int8_t nwords)
 	return ethdev_bitmap;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_gset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_link_speed_gset);
 uint32_t
 rte_eth_link_speed_gset(uint32_t legacy_bitmap)
 {
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 285d377d91..222b17d8ce 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -286,7 +286,7 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.clbk = (void * __rte_atomic *)(uintptr_t)dev->pre_tx_burst_cbs;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_call_rx_callbacks)
+RTE_EXPORT_SYMBOL(rte_eth_call_rx_callbacks);
 uint16_t
 rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
 	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
@@ -310,7 +310,7 @@ rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
 	return nb_rx;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_call_tx_callbacks)
+RTE_EXPORT_SYMBOL(rte_eth_call_tx_callbacks);
 uint16_t
 rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
 	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c
index 071c508327..444f82a723 100644
--- a/lib/ethdev/ethdev_trace_points.c
+++ b/lib/ethdev/ethdev_trace_points.c
@@ -26,30 +26,30 @@ RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_stop,
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_close,
 	lib.ethdev.close)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_empty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_empty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst_empty,
 	lib.ethdev.rx.burst.empty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_nonempty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_rx_burst_nonempty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_rx_burst_nonempty,
 	lib.ethdev.rx.burst.nonempty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_tx_burst, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_ethdev_trace_tx_burst, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_ethdev_trace_tx_burst,
 	lib.ethdev.tx.burst)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_empty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_empty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks_empty,
 	lib.ethdev.call_rx_callbacks.empty)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_nonempty, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_call_rx_callbacks_nonempty, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_rx_callbacks_nonempty,
 	lib.ethdev.call_rx_callbacks.nonempty)
 
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_call_tx_callbacks,
 	lib.ethdev.call_tx_callbacks)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_tx_queue_count, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eth_trace_tx_queue_count, 24.03);
 RTE_TRACE_POINT_REGISTER(rte_eth_trace_tx_queue_count,
 	lib.ethdev.tx_queue_count)
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index dd7c00bc94..92ba1e9b28 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -39,11 +39,11 @@
 
 #define ETH_XSTATS_ITER_NUM	0x100
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devices)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eth_devices);
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
 /* public fast-path API */
-RTE_EXPORT_SYMBOL(rte_eth_fp_ops)
+RTE_EXPORT_SYMBOL(rte_eth_fp_ops);
 struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
 /* spinlock for add/remove Rx callbacks */
@@ -176,7 +176,7 @@ static const struct {
 	{RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT, "symmetric_toeplitz_sort"},
 };
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_init)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_init);
 int
 rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str)
 {
@@ -293,7 +293,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_next)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_next);
 uint16_t
 rte_eth_iterator_next(struct rte_dev_iterator *iter)
 {
@@ -334,7 +334,7 @@ rte_eth_iterator_next(struct rte_dev_iterator *iter)
 	return RTE_MAX_ETHPORTS;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_iterator_cleanup)
+RTE_EXPORT_SYMBOL(rte_eth_iterator_cleanup);
 void
 rte_eth_iterator_cleanup(struct rte_dev_iterator *iter)
 {
@@ -353,7 +353,7 @@ rte_eth_iterator_cleanup(struct rte_dev_iterator *iter)
 	memset(iter, 0, sizeof(*iter));
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next)
+RTE_EXPORT_SYMBOL(rte_eth_find_next);
 uint16_t
 rte_eth_find_next(uint16_t port_id)
 {
@@ -378,7 +378,7 @@ rte_eth_find_next(uint16_t port_id)
 	     port_id < RTE_MAX_ETHPORTS; \
 	     port_id = rte_eth_find_next(port_id + 1))
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_of)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_of);
 uint16_t
 rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent)
 {
@@ -392,7 +392,7 @@ rte_eth_find_next_of(uint16_t port_id, const struct rte_device *parent)
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_sibling)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_sibling);
 uint16_t
 rte_eth_find_next_sibling(uint16_t port_id, uint16_t ref_port_id)
 {
@@ -413,7 +413,7 @@ eth_dev_is_allocated(const struct rte_eth_dev *ethdev)
 	return ethdev->data != NULL && ethdev->data->name[0] != '\0';
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_is_valid_port)
+RTE_EXPORT_SYMBOL(rte_eth_dev_is_valid_port);
 int
 rte_eth_dev_is_valid_port(uint16_t port_id)
 {
@@ -440,7 +440,7 @@ eth_is_valid_owner_id(uint64_t owner_id)
 	return 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_find_next_owned_by)
+RTE_EXPORT_SYMBOL(rte_eth_find_next_owned_by);
 uint64_t
 rte_eth_find_next_owned_by(uint16_t port_id, const uint64_t owner_id)
 {
@@ -454,7 +454,7 @@ rte_eth_find_next_owned_by(uint16_t port_id, const uint64_t owner_id)
 	return port_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_new)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_new);
 int
 rte_eth_dev_owner_new(uint64_t *owner_id)
 {
@@ -530,7 +530,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_set);
 int
 rte_eth_dev_owner_set(const uint16_t port_id,
 		      const struct rte_eth_dev_owner *owner)
@@ -551,7 +551,7 @@ rte_eth_dev_owner_set(const uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_unset)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_unset);
 int
 rte_eth_dev_owner_unset(const uint16_t port_id, const uint64_t owner_id)
 {
@@ -573,7 +573,7 @@ rte_eth_dev_owner_unset(const uint16_t port_id, const uint64_t owner_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_delete)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_delete);
 int
 rte_eth_dev_owner_delete(const uint64_t owner_id)
 {
@@ -611,7 +611,7 @@ rte_eth_dev_owner_delete(const uint64_t owner_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_owner_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_owner_get);
 int
 rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner)
 {
@@ -650,7 +650,7 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_socket_id)
+RTE_EXPORT_SYMBOL(rte_eth_dev_socket_id);
 int
 rte_eth_dev_socket_id(uint16_t port_id)
 {
@@ -676,7 +676,7 @@ rte_eth_dev_socket_id(uint16_t port_id)
 	return socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_sec_ctx)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_sec_ctx);
 void *
 rte_eth_dev_get_sec_ctx(uint16_t port_id)
 {
@@ -690,7 +690,7 @@ rte_eth_dev_get_sec_ctx(uint16_t port_id)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_count_avail)
+RTE_EXPORT_SYMBOL(rte_eth_dev_count_avail);
 uint16_t
 rte_eth_dev_count_avail(void)
 {
@@ -707,7 +707,7 @@ rte_eth_dev_count_avail(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_count_total)
+RTE_EXPORT_SYMBOL(rte_eth_dev_count_total);
 uint16_t
 rte_eth_dev_count_total(void)
 {
@@ -721,7 +721,7 @@ rte_eth_dev_count_total(void)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_name_by_port)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_name_by_port);
 int
 rte_eth_dev_get_name_by_port(uint16_t port_id, char *name)
 {
@@ -748,7 +748,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_port_by_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_port_by_name);
 int
 rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id)
 {
@@ -839,7 +839,7 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_queue_is_valid, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_queue_is_valid, 23.07);
 int
 rte_eth_rx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 {
@@ -851,7 +851,7 @@ rte_eth_rx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 	return eth_dev_validate_rx_queue(dev, queue_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_queue_is_valid, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_queue_is_valid, 23.07);
 int
 rte_eth_tx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 {
@@ -863,7 +863,7 @@ rte_eth_tx_queue_is_valid(uint16_t port_id, uint16_t queue_id)
 	return eth_dev_validate_tx_queue(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_start);
 int
 rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id)
 {
@@ -908,7 +908,7 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_queue_stop);
 int
 rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id)
 {
@@ -946,7 +946,7 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_start);
 int
 rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id)
 {
@@ -991,7 +991,7 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_queue_stop);
 int
 rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 {
@@ -1029,7 +1029,7 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_speed_bitflag)
+RTE_EXPORT_SYMBOL(rte_eth_speed_bitflag);
 uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
@@ -1087,7 +1087,7 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_offload_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_offload_name);
 const char *
 rte_eth_dev_rx_offload_name(uint64_t offload)
 {
@@ -1106,7 +1106,7 @@ rte_eth_dev_rx_offload_name(uint64_t offload)
 	return name;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_tx_offload_name)
+RTE_EXPORT_SYMBOL(rte_eth_dev_tx_offload_name);
 const char *
 rte_eth_dev_tx_offload_name(uint64_t offload)
 {
@@ -1168,7 +1168,7 @@ eth_dev_offload_names(uint64_t bitmask, char *buf, size_t size,
 	return buf;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_capability_name, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_capability_name, 21.11);
 const char *
 rte_eth_dev_capability_name(uint64_t capability)
 {
@@ -1318,7 +1318,7 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_configure)
+RTE_EXPORT_SYMBOL(rte_eth_dev_configure);
 int
 rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1782,7 +1782,7 @@ eth_dev_config_restore(struct rte_eth_dev *dev,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_start)
+RTE_EXPORT_SYMBOL(rte_eth_dev_start);
 int
 rte_eth_dev_start(uint16_t port_id)
 {
@@ -1857,7 +1857,7 @@ rte_eth_dev_start(uint16_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_stop)
+RTE_EXPORT_SYMBOL(rte_eth_dev_stop);
 int
 rte_eth_dev_stop(uint16_t port_id)
 {
@@ -1888,7 +1888,7 @@ rte_eth_dev_stop(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_up)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_up);
 int
 rte_eth_dev_set_link_up(uint16_t port_id)
 {
@@ -1907,7 +1907,7 @@ rte_eth_dev_set_link_up(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_down)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_link_down);
 int
 rte_eth_dev_set_link_down(uint16_t port_id)
 {
@@ -1926,7 +1926,7 @@ rte_eth_dev_set_link_down(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get, 24.11);
 int
 rte_eth_speed_lanes_get(uint16_t port_id, uint32_t *lane)
 {
@@ -1940,7 +1940,7 @@ rte_eth_speed_lanes_get(uint16_t port_id, uint32_t *lane)
 	return eth_err(port_id, dev->dev_ops->speed_lanes_get(dev, lane));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get_capability, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_get_capability, 24.11);
 int
 rte_eth_speed_lanes_get_capability(uint16_t port_id,
 				   struct rte_eth_speed_lanes_capa *speed_lanes_capa,
@@ -1967,7 +1967,7 @@ rte_eth_speed_lanes_get_capability(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_set, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_speed_lanes_set, 24.11);
 int
 rte_eth_speed_lanes_set(uint16_t port_id, uint32_t speed_lanes_capa)
 {
@@ -1981,7 +1981,7 @@ rte_eth_speed_lanes_set(uint16_t port_id, uint32_t speed_lanes_capa)
 	return eth_err(port_id, dev->dev_ops->speed_lanes_set(dev, speed_lanes_capa));
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_close)
+RTE_EXPORT_SYMBOL(rte_eth_dev_close);
 int
 rte_eth_dev_close(uint16_t port_id)
 {
@@ -2016,7 +2016,7 @@ rte_eth_dev_close(uint16_t port_id)
 	return firsterr;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_reset)
+RTE_EXPORT_SYMBOL(rte_eth_dev_reset);
 int
 rte_eth_dev_reset(uint16_t port_id)
 {
@@ -2042,7 +2042,7 @@ rte_eth_dev_reset(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_is_removed)
+RTE_EXPORT_SYMBOL(rte_eth_dev_is_removed);
 int
 rte_eth_dev_is_removed(uint16_t port_id)
 {
@@ -2270,7 +2270,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_queue_setup)
+RTE_EXPORT_SYMBOL(rte_eth_rx_queue_setup);
 int
 rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc, unsigned int socket_id,
@@ -2496,7 +2496,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	return eth_err(port_id, ret);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_hairpin_queue_setup, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_hairpin_queue_setup, 19.11);
 int
 rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			       uint16_t nb_rx_desc,
@@ -2602,7 +2602,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_queue_setup)
+RTE_EXPORT_SYMBOL(rte_eth_tx_queue_setup);
 int
 rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -2714,7 +2714,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 		       tx_queue_id, nb_tx_desc, socket_id, &local_conf));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_hairpin_queue_setup, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_hairpin_queue_setup, 19.11);
 int
 rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 			       uint16_t nb_tx_desc,
@@ -2814,7 +2814,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_bind, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_bind, 20.11);
 int
 rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port)
 {
@@ -2842,7 +2842,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_unbind, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_unbind, 20.11);
 int
 rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port)
 {
@@ -2870,7 +2870,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_get_peer_ports, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_hairpin_get_peer_ports, 20.11);
 int
 rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
 			       size_t len, uint32_t direction)
@@ -2909,7 +2909,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_drop_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_drop_callback);
 void
 rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent,
 		void *userdata __rte_unused)
@@ -2919,7 +2919,7 @@ rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent,
 	rte_eth_trace_tx_buffer_drop_callback((void **)pkts, unsent);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_count_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_count_callback);
 void
 rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent,
 		void *userdata)
@@ -2932,7 +2932,7 @@ rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent,
 	rte_eth_trace_tx_buffer_count_callback((void **)pkts, unsent, *count);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_set_err_callback)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_set_err_callback);
 int
 rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 		buffer_tx_error_fn cbfn, void *userdata)
@@ -2951,7 +2951,7 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_init)
+RTE_EXPORT_SYMBOL(rte_eth_tx_buffer_init);
 int
 rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 {
@@ -2973,7 +2973,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_done_cleanup)
+RTE_EXPORT_SYMBOL(rte_eth_tx_done_cleanup);
 int
 rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
 {
@@ -3001,7 +3001,7 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_enable)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_enable);
 int
 rte_eth_promiscuous_enable(uint16_t port_id)
 {
@@ -3028,7 +3028,7 @@ rte_eth_promiscuous_enable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_disable)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_disable);
 int
 rte_eth_promiscuous_disable(uint16_t port_id)
 {
@@ -3056,7 +3056,7 @@ rte_eth_promiscuous_disable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_promiscuous_get)
+RTE_EXPORT_SYMBOL(rte_eth_promiscuous_get);
 int
 rte_eth_promiscuous_get(uint16_t port_id)
 {
@@ -3070,7 +3070,7 @@ rte_eth_promiscuous_get(uint16_t port_id)
 	return dev->data->promiscuous;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_enable)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_enable);
 int
 rte_eth_allmulticast_enable(uint16_t port_id)
 {
@@ -3096,7 +3096,7 @@ rte_eth_allmulticast_enable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_disable)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_disable);
 int
 rte_eth_allmulticast_disable(uint16_t port_id)
 {
@@ -3124,7 +3124,7 @@ rte_eth_allmulticast_disable(uint16_t port_id)
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_allmulticast_get)
+RTE_EXPORT_SYMBOL(rte_eth_allmulticast_get);
 int
 rte_eth_allmulticast_get(uint16_t port_id)
 {
@@ -3138,7 +3138,7 @@ rte_eth_allmulticast_get(uint16_t port_id)
 	return dev->data->all_multicast;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_link_get)
+RTE_EXPORT_SYMBOL(rte_eth_link_get);
 int
 rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link)
 {
@@ -3167,7 +3167,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_link_get_nowait)
+RTE_EXPORT_SYMBOL(rte_eth_link_get_nowait);
 int
 rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 {
@@ -3196,7 +3196,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_speed_to_str, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_speed_to_str, 20.11);
 const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
@@ -3260,7 +3260,7 @@ rte_eth_link_speed_to_str(uint32_t link_speed)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_to_str, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_to_str, 20.11);
 int
 rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 {
@@ -3297,7 +3297,7 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_stats_get)
+RTE_EXPORT_SYMBOL(rte_eth_stats_get);
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
 {
@@ -3325,7 +3325,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_stats_reset)
+RTE_EXPORT_SYMBOL(rte_eth_stats_reset);
 int
 rte_eth_stats_reset(uint16_t port_id)
 {
@@ -3387,7 +3387,7 @@ eth_dev_get_xstats_count(uint16_t port_id)
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_id_by_name)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_id_by_name);
 int
 rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name,
 		uint64_t *id)
@@ -3523,7 +3523,7 @@ eth_xstats_get_by_name_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 
 
 /* retrieve ethdev extended statistics names */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names_by_id)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names_by_id);
 int
 rte_eth_xstats_get_names_by_id(uint16_t port_id,
 	struct rte_eth_xstat_name *xstats_names, unsigned int size,
@@ -3616,7 +3616,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id,
 	return size;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_names);
 int
 rte_eth_xstats_get_names(uint16_t port_id,
 	struct rte_eth_xstat_name *xstats_names,
@@ -3743,7 +3743,7 @@ eth_xtats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 /* retrieve ethdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get_by_id)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get_by_id);
 int
 rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
 			 uint64_t *values, unsigned int size)
@@ -3830,7 +3830,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids,
 	return (i == size) ? (int32_t)size : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_xstats_get)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_get);
 int
 rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 	unsigned int n)
@@ -3882,7 +3882,7 @@ rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 }
 
 /* reset ethdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_eth_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_eth_xstats_reset);
 int
 rte_eth_xstats_reset(uint16_t port_id)
 {
@@ -3904,7 +3904,7 @@ rte_eth_xstats_reset(uint16_t port_id)
 	return rte_eth_stats_reset(port_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_set_counter, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_set_counter, 25.03);
 int
 rte_eth_xstats_set_counter(uint16_t port_id, uint64_t id, int on_off)
 {
@@ -3934,7 +3934,7 @@ rte_eth_xstats_set_counter(uint16_t port_id, uint64_t id, int on_off)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_query_state, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_xstats_query_state, 25.03);
 int
 rte_eth_xstats_query_state(uint16_t port_id, uint64_t id)
 {
@@ -3978,7 +3978,7 @@ eth_dev_set_queue_stats_mapping(uint16_t port_id, uint16_t queue_id,
 	return dev->dev_ops->queue_stats_mapping_set(dev, queue_id, stat_idx, is_rx);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_tx_queue_stats_mapping)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_tx_queue_stats_mapping);
 int
 rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id,
 		uint8_t stat_idx)
@@ -3995,7 +3995,7 @@ rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_rx_queue_stats_mapping)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_rx_queue_stats_mapping);
 int
 rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id,
 		uint8_t stat_idx)
@@ -4012,7 +4012,7 @@ rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_fw_version_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_fw_version_get);
 int
 rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size)
 {
@@ -4038,7 +4038,7 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_info_get);
 int
 rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
 {
@@ -4103,7 +4103,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_conf_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_conf_get, 21.11);
 int
 rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
 {
@@ -4126,7 +4126,7 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_supported_ptypes)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_supported_ptypes);
 int
 rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
 				 uint32_t *ptypes, int num)
@@ -4168,7 +4168,7 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return j;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_ptypes)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_ptypes);
 int
 rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 				 uint32_t *set_ptypes, unsigned int num)
@@ -4264,7 +4264,7 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_macaddrs_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_macaddrs_get, 21.11);
 int
 rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
 	unsigned int num)
@@ -4292,7 +4292,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
 	return num;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_macaddr_get)
+RTE_EXPORT_SYMBOL(rte_eth_macaddr_get);
 int
 rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 {
@@ -4315,7 +4315,7 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_mtu)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_mtu);
 int
 rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
 {
@@ -4337,7 +4337,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_mtu)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_mtu);
 int
 rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 {
@@ -4384,7 +4384,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_vlan_filter)
+RTE_EXPORT_SYMBOL(rte_eth_dev_vlan_filter);
 int
 rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 {
@@ -4432,7 +4432,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_strip_on_queue)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_strip_on_queue);
 int
 rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
 				    int on)
@@ -4456,7 +4456,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_ether_type)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_ether_type);
 int
 rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
 				enum rte_vlan_type vlan_type,
@@ -4477,7 +4477,7 @@ rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_offload)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_offload);
 int
 rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 {
@@ -4574,7 +4574,7 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_vlan_offload)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_vlan_offload);
 int
 rte_eth_dev_get_vlan_offload(uint16_t port_id)
 {
@@ -4603,7 +4603,7 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_pvid)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_vlan_pvid);
 int
 rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on)
 {
@@ -4622,7 +4622,7 @@ rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_get);
 int
 rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
@@ -4649,7 +4649,7 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_flow_ctrl_set);
 int
 rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 {
@@ -4680,7 +4680,7 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_priority_flow_ctrl_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_priority_flow_ctrl_set);
 int
 rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 				   struct rte_eth_pfc_conf *pfc_conf)
@@ -4763,7 +4763,7 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_info_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_info_get, 22.03);
 int
 rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id,
 		struct rte_eth_pfc_queue_info *pfc_queue_info)
@@ -4791,7 +4791,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priority_flow_ctrl_queue_configure, 22.03);
 int
 rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id,
 		struct rte_eth_pfc_queue_conf *pfc_queue_conf)
@@ -4910,7 +4910,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_update)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_update);
 int
 rte_eth_dev_rss_reta_update(uint16_t port_id,
 			    struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -4963,7 +4963,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_query)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_reta_query);
 int
 rte_eth_dev_rss_reta_query(uint16_t port_id,
 			   struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -4996,7 +4996,7 @@ rte_eth_dev_rss_reta_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_update)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_update);
 int
 rte_eth_dev_rss_hash_update(uint16_t port_id,
 			    struct rte_eth_rss_conf *rss_conf)
@@ -5063,7 +5063,7 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_conf_get)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rss_hash_conf_get);
 int
 rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 			      struct rte_eth_rss_conf *rss_conf)
@@ -5105,7 +5105,7 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_rss_algo_name, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_rss_algo_name, 23.11);
 const char *
 rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
 {
@@ -5120,7 +5120,7 @@ rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
 	return name;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_find_rss_algo, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_find_rss_algo, 24.03);
 int
 rte_eth_find_rss_algo(const char *name, uint32_t *algo)
 {
@@ -5136,7 +5136,7 @@ rte_eth_find_rss_algo(const char *name, uint32_t *algo)
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_add)
+RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_add);
 int
 rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 				struct rte_eth_udp_tunnel *udp_tunnel)
@@ -5168,7 +5168,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_delete)
+RTE_EXPORT_SYMBOL(rte_eth_dev_udp_tunnel_port_delete);
 int
 rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 				   struct rte_eth_udp_tunnel *udp_tunnel)
@@ -5200,7 +5200,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_led_on)
+RTE_EXPORT_SYMBOL(rte_eth_led_on);
 int
 rte_eth_led_on(uint16_t port_id)
 {
@@ -5219,7 +5219,7 @@ rte_eth_led_on(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_led_off)
+RTE_EXPORT_SYMBOL(rte_eth_led_off);
 int
 rte_eth_led_off(uint16_t port_id)
 {
@@ -5238,7 +5238,7 @@ rte_eth_led_off(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get_capability, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get_capability, 20.11);
 int
 rte_eth_fec_get_capability(uint16_t port_id,
 			   struct rte_eth_fec_capa *speed_fec_capa,
@@ -5266,7 +5266,7 @@ rte_eth_fec_get_capability(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_get, 20.11);
 int
 rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
 {
@@ -5292,7 +5292,7 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_fec_set, 20.11);
 int
 rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa)
 {
@@ -5342,7 +5342,7 @@ eth_dev_get_mac_addr_index(uint16_t port_id, const struct rte_ether_addr *addr)
 
 static const struct rte_ether_addr null_mac_addr;
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_add)
+RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_add);
 int
 rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			uint32_t pool)
@@ -5409,7 +5409,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_remove)
+RTE_EXPORT_SYMBOL(rte_eth_dev_mac_addr_remove);
 int
 rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
 {
@@ -5452,7 +5452,7 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_default_mac_addr_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_default_mac_addr_set);
 int
 rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 {
@@ -5526,7 +5526,7 @@ eth_dev_get_hash_mac_addr_index(uint16_t port_id,
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_uc_hash_table_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_uc_hash_table_set);
 int
 rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
 				uint8_t on)
@@ -5592,7 +5592,7 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_uc_all_hash_table_set)
+RTE_EXPORT_SYMBOL(rte_eth_dev_uc_all_hash_table_set);
 int
 rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on)
 {
@@ -5611,7 +5611,7 @@ rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_set_queue_rate_limit)
+RTE_EXPORT_SYMBOL(rte_eth_set_queue_rate_limit);
 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 					uint32_t tx_rate)
 {
@@ -5652,7 +5652,7 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_set, 22.07);
 int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id,
 			       uint8_t avail_thresh)
 {
@@ -5685,7 +5685,7 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_query, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_avail_thresh_query, 22.07);
 int rte_eth_rx_avail_thresh_query(uint16_t port_id, uint16_t *queue_id,
 				 uint8_t *avail_thresh)
 {
@@ -5726,7 +5726,7 @@ RTE_INIT(eth_dev_init_cb_lists)
 		TAILQ_INIT(&rte_eth_devices[i].link_intr_cbs);
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_callback_register)
+RTE_EXPORT_SYMBOL(rte_eth_dev_callback_register);
 int
 rte_eth_dev_callback_register(uint16_t port_id,
 			enum rte_eth_event_type event,
@@ -5796,7 +5796,7 @@ rte_eth_dev_callback_register(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_callback_unregister)
+RTE_EXPORT_SYMBOL(rte_eth_dev_callback_unregister);
 int
 rte_eth_dev_callback_unregister(uint16_t port_id,
 			enum rte_eth_event_type event,
@@ -5862,7 +5862,7 @@ rte_eth_dev_callback_unregister(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl);
 int
 rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 {
@@ -5902,7 +5902,7 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q_get_fd)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q_get_fd);
 int
 rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 {
@@ -5941,7 +5941,7 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
 	return fd;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_ctl_q);
 int
 rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 			  int epfd, int op, void *data)
@@ -5985,7 +5985,7 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_enable)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_enable);
 int
 rte_eth_dev_rx_intr_enable(uint16_t port_id,
 			   uint16_t queue_id)
@@ -6009,7 +6009,7 @@ rte_eth_dev_rx_intr_enable(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_disable)
+RTE_EXPORT_SYMBOL(rte_eth_dev_rx_intr_disable);
 int
 rte_eth_dev_rx_intr_disable(uint16_t port_id,
 			    uint16_t queue_id)
@@ -6034,7 +6034,7 @@ rte_eth_dev_rx_intr_disable(uint16_t port_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_eth_add_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_rx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_rx_callback_fn fn, void *user_param)
@@ -6094,7 +6094,7 @@ rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_add_first_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_first_rx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_rx_callback_fn fn, void *user_param)
@@ -6137,7 +6137,7 @@ rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_add_tx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_add_tx_callback);
 const struct rte_eth_rxtx_callback *
 rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 		rte_tx_callback_fn fn, void *user_param)
@@ -6199,7 +6199,7 @@ rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
 	return cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_remove_rx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_remove_rx_callback);
 int
 rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 		const struct rte_eth_rxtx_callback *user_cb)
@@ -6236,7 +6236,7 @@ rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_remove_tx_callback)
+RTE_EXPORT_SYMBOL(rte_eth_remove_tx_callback);
 int
 rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 		const struct rte_eth_rxtx_callback *user_cb)
@@ -6273,7 +6273,7 @@ rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_rx_queue_info_get);
 int
 rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_rxq_info *qinfo)
@@ -6322,7 +6322,7 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_queue_info_get)
+RTE_EXPORT_SYMBOL(rte_eth_tx_queue_info_get);
 int
 rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	struct rte_eth_txq_info *qinfo)
@@ -6371,7 +6371,7 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_recycle_rx_queue_info_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_recycle_rx_queue_info_get, 23.11);
 int
 rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
@@ -6394,7 +6394,7 @@ rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_burst_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_rx_burst_mode_get);
 int
 rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode)
@@ -6428,7 +6428,7 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_tx_burst_mode_get)
+RTE_EXPORT_SYMBOL(rte_eth_tx_burst_mode_get);
 int
 rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 			  struct rte_eth_burst_mode *mode)
@@ -6462,7 +6462,7 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_get_monitor_addr, 21.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_get_monitor_addr, 21.02);
 int
 rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
 		struct rte_power_monitor_cond *pmc)
@@ -6495,7 +6495,7 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_mc_addr_list)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_mc_addr_list);
 int
 rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 			     struct rte_ether_addr *mc_addr_set,
@@ -6518,7 +6518,7 @@ rte_eth_dev_set_mc_addr_list(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_enable)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_enable);
 int
 rte_eth_timesync_enable(uint16_t port_id)
 {
@@ -6537,7 +6537,7 @@ rte_eth_timesync_enable(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_disable)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_disable);
 int
 rte_eth_timesync_disable(uint16_t port_id)
 {
@@ -6556,7 +6556,7 @@ rte_eth_timesync_disable(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_rx_timestamp)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_rx_timestamp);
 int
 rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp,
 				   uint32_t flags)
@@ -6585,7 +6585,7 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_tx_timestamp)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_tx_timestamp);
 int
 rte_eth_timesync_read_tx_timestamp(uint16_t port_id,
 				   struct timespec *timestamp)
@@ -6614,7 +6614,7 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id,
 
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_adjust_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_adjust_time);
 int
 rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta)
 {
@@ -6633,7 +6633,7 @@ rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_timesync_adjust_freq, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_timesync_adjust_freq, 24.11);
 int
 rte_eth_timesync_adjust_freq(uint16_t port_id, int64_t ppm)
 {
@@ -6652,7 +6652,7 @@ rte_eth_timesync_adjust_freq(uint16_t port_id, int64_t ppm)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_read_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_read_time);
 int
 rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp)
 {
@@ -6678,7 +6678,7 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_timesync_write_time)
+RTE_EXPORT_SYMBOL(rte_eth_timesync_write_time);
 int
 rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp)
 {
@@ -6704,7 +6704,7 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_read_clock, 19.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_read_clock, 19.08);
 int
 rte_eth_read_clock(uint16_t port_id, uint64_t *clock)
 {
@@ -6729,7 +6729,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_reg_info)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_reg_info);
 int
 rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info)
 {
@@ -6760,7 +6760,7 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_reg_info_ext, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_reg_info_ext, 24.11);
 int
 rte_eth_dev_get_reg_info_ext(uint16_t port_id, struct rte_dev_reg_info *info)
 {
@@ -6796,7 +6796,7 @@ rte_eth_dev_get_reg_info_ext(uint16_t port_id, struct rte_dev_reg_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom_length)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom_length);
 int
 rte_eth_dev_get_eeprom_length(uint16_t port_id)
 {
@@ -6815,7 +6815,7 @@ rte_eth_dev_get_eeprom_length(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_eeprom);
 int
 rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 {
@@ -6841,7 +6841,7 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_set_eeprom)
+RTE_EXPORT_SYMBOL(rte_eth_dev_set_eeprom);
 int
 rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 {
@@ -6867,7 +6867,7 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_info, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_info, 18.05);
 int
 rte_eth_dev_get_module_info(uint16_t port_id,
 			    struct rte_eth_dev_module_info *modinfo)
@@ -6894,7 +6894,7 @@ rte_eth_dev_get_module_info(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_eeprom, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_get_module_eeprom, 18.05);
 int
 rte_eth_dev_get_module_eeprom(uint16_t port_id,
 			      struct rte_dev_eeprom_info *info)
@@ -6935,7 +6935,7 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_get_dcb_info)
+RTE_EXPORT_SYMBOL(rte_eth_dev_get_dcb_info);
 int
 rte_eth_dev_get_dcb_info(uint16_t port_id,
 			     struct rte_eth_dcb_info *dcb_info)
@@ -6983,7 +6983,7 @@ eth_dev_adjust_nb_desc(uint16_t *nb_desc,
 	*nb_desc = (uint16_t)nb_desc_32;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_adjust_nb_rx_tx_desc)
+RTE_EXPORT_SYMBOL(rte_eth_dev_adjust_nb_rx_tx_desc);
 int
 rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id,
 				 uint16_t *nb_rx_desc,
@@ -7009,7 +7009,7 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_hairpin_capability_get, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_hairpin_capability_get, 19.11);
 int
 rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 				   struct rte_eth_hairpin_cap *cap)
@@ -7037,7 +7037,7 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_pool_ops_supported)
+RTE_EXPORT_SYMBOL(rte_eth_dev_pool_ops_supported);
 int
 rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 {
@@ -7064,7 +7064,7 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_representor_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_representor_info_get, 21.05);
 int
 rte_eth_representor_info_get(uint16_t port_id,
 			     struct rte_eth_representor_info *info)
@@ -7084,7 +7084,7 @@ rte_eth_representor_info_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_rx_metadata_negotiate)
+RTE_EXPORT_SYMBOL(rte_eth_rx_metadata_negotiate);
 int
 rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
 {
@@ -7120,7 +7120,7 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_capability_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_capability_get, 22.03);
 int
 rte_eth_ip_reassembly_capability_get(uint16_t port_id,
 		struct rte_eth_ip_reassembly_params *reassembly_capa)
@@ -7156,7 +7156,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_get, 22.03);
 int
 rte_eth_ip_reassembly_conf_get(uint16_t port_id,
 		struct rte_eth_ip_reassembly_params *conf)
@@ -7190,7 +7190,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_set, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_ip_reassembly_conf_set, 22.03);
 int
 rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 		const struct rte_eth_ip_reassembly_params *conf)
@@ -7231,7 +7231,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priv_dump, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_priv_dump, 22.03);
 int
 rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 {
@@ -7250,7 +7250,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file)
 	return eth_err(port_id, dev->dev_ops->eth_dev_priv_dump(dev, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_descriptor_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_rx_descriptor_dump, 22.11);
 int
 rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 			   uint16_t offset, uint16_t num, FILE *file)
@@ -7277,7 +7277,7 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 		       dev->dev_ops->eth_rx_descriptor_dump(dev, queue_id, offset, num, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_descriptor_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_tx_descriptor_dump, 22.11);
 int
 rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 			   uint16_t offset, uint16_t num, FILE *file)
@@ -7304,7 +7304,7 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id,
 		       dev->dev_ops->eth_tx_descriptor_dump(dev, queue_id, offset, num, file));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_buffer_split_get_supported_hdr_ptypes, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_buffer_split_get_supported_hdr_ptypes, 22.11);
 int
 rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes, int num)
 {
@@ -7344,7 +7344,7 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes
 	return j;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_count_aggr_ports, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_count_aggr_ports, 23.03);
 int rte_eth_dev_count_aggr_ports(uint16_t port_id)
 {
 	struct rte_eth_dev *dev;
@@ -7362,7 +7362,7 @@ int rte_eth_dev_count_aggr_ports(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_map_aggr_tx_affinity, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_dev_map_aggr_tx_affinity, 23.03);
 int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id,
 				     uint8_t affinity)
 {
@@ -7418,5 +7418,5 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_eth_dev_logtype)
+RTE_EXPORT_SYMBOL(rte_eth_dev_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c
index a8460e6977..413db0acd9 100644
--- a/lib/ethdev/rte_ethdev_cman.c
+++ b/lib/ethdev/rte_ethdev_cman.c
@@ -12,7 +12,7 @@
 #include "ethdev_trace.h"
 
 /* Get congestion management information for a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_info_get, 22.11);
 int
 rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info)
 {
@@ -41,7 +41,7 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info)
 }
 
 /* Initialize congestion management structure with default values */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_init, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_init, 22.11);
 int
 rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config)
 {
@@ -70,7 +70,7 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config)
 }
 
 /* Configure congestion management on a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_set, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_set, 22.11);
 int
 rte_eth_cman_config_set(uint16_t port_id, const struct rte_eth_cman_config *config)
 {
@@ -98,7 +98,7 @@ rte_eth_cman_config_set(uint16_t port_id, const struct rte_eth_cman_config *conf
 }
 
 /* Retrieve congestion management configuration of a port */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_cman_config_get, 22.11);
 int
 rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config)
 {
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index fe8f43caff..25801717a7 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -23,11 +23,11 @@
 #define FLOW_LOG RTE_ETHDEV_LOG_LINE
 
 /* Mbuf dynamic field name for metadata. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_offs, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_offs, 19.11);
 int32_t rte_flow_dynf_metadata_offs = -1;
 
 /* Mbuf dynamic field flag bit number for metadata. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_mask, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_mask, 19.11);
 uint64_t rte_flow_dynf_metadata_mask;
 
 /**
@@ -281,7 +281,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(JUMP_TO_TABLE_INDEX, sizeof(struct rte_flow_action_jump_to_table_index)),
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_register, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dynf_metadata_register, 19.11);
 int
 rte_flow_dynf_metadata_register(void)
 {
@@ -370,7 +370,7 @@ rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error)
 }
 
 /* Check whether a flow rule can be created on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_validate)
+RTE_EXPORT_SYMBOL(rte_flow_validate);
 int
 rte_flow_validate(uint16_t port_id,
 		  const struct rte_flow_attr *attr,
@@ -407,7 +407,7 @@ rte_flow_validate(uint16_t port_id,
 }
 
 /* Create a flow rule on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_create)
+RTE_EXPORT_SYMBOL(rte_flow_create);
 struct rte_flow *
 rte_flow_create(uint16_t port_id,
 		const struct rte_flow_attr *attr,
@@ -438,7 +438,7 @@ rte_flow_create(uint16_t port_id,
 }
 
 /* Destroy a flow rule on a given port. */
-RTE_EXPORT_SYMBOL(rte_flow_destroy)
+RTE_EXPORT_SYMBOL(rte_flow_destroy);
 int
 rte_flow_destroy(uint16_t port_id,
 		 struct rte_flow *flow,
@@ -465,7 +465,7 @@ rte_flow_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_update, 23.07);
 int
 rte_flow_actions_update(uint16_t port_id,
 			struct rte_flow *flow,
@@ -493,7 +493,7 @@ rte_flow_actions_update(uint16_t port_id,
 }
 
 /* Destroy all flow rules associated with a port. */
-RTE_EXPORT_SYMBOL(rte_flow_flush)
+RTE_EXPORT_SYMBOL(rte_flow_flush);
 int
 rte_flow_flush(uint16_t port_id,
 	       struct rte_flow_error *error)
@@ -520,7 +520,7 @@ rte_flow_flush(uint16_t port_id,
 }
 
 /* Query an existing flow rule. */
-RTE_EXPORT_SYMBOL(rte_flow_query)
+RTE_EXPORT_SYMBOL(rte_flow_query);
 int
 rte_flow_query(uint16_t port_id,
 	       struct rte_flow *flow,
@@ -550,7 +550,7 @@ rte_flow_query(uint16_t port_id,
 }
 
 /* Restrict ingress traffic to the defined flow rules. */
-RTE_EXPORT_SYMBOL(rte_flow_isolate)
+RTE_EXPORT_SYMBOL(rte_flow_isolate);
 int
 rte_flow_isolate(uint16_t port_id,
 		 int set,
@@ -578,7 +578,7 @@ rte_flow_isolate(uint16_t port_id,
 }
 
 /* Initialize flow error structure. */
-RTE_EXPORT_SYMBOL(rte_flow_error_set)
+RTE_EXPORT_SYMBOL(rte_flow_error_set);
 int
 rte_flow_error_set(struct rte_flow_error *error,
 		   int code,
@@ -1114,7 +1114,7 @@ rte_flow_conv_name(int is_action,
 }
 
 /** Helper function to convert flow API objects. */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_conv, 18.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_conv, 18.11);
 int
 rte_flow_conv(enum rte_flow_conv_op op,
 	      void *dst,
@@ -1186,7 +1186,7 @@ rte_flow_conv(enum rte_flow_conv_op op,
 }
 
 /** Store a full rte_flow description. */
-RTE_EXPORT_SYMBOL(rte_flow_copy)
+RTE_EXPORT_SYMBOL(rte_flow_copy);
 size_t
 rte_flow_copy(struct rte_flow_desc *desc, size_t len,
 	      const struct rte_flow_attr *attr,
@@ -1241,7 +1241,7 @@ rte_flow_copy(struct rte_flow_desc *desc, size_t len,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dev_dump, 20.02)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_dev_dump, 20.02);
 int
 rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
 			FILE *file, struct rte_flow_error *error)
@@ -1263,7 +1263,7 @@ rte_flow_dev_dump(uint16_t port_id, struct rte_flow *flow,
 				  NULL, rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_aged_flows, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_aged_flows, 20.05);
 int
 rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 		    uint32_t nb_contexts, struct rte_flow_error *error)
@@ -1289,7 +1289,7 @@ rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_q_aged_flows, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_q_aged_flows, 22.11);
 int
 rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
 			  uint32_t nb_contexts, struct rte_flow_error *error)
@@ -1317,7 +1317,7 @@ rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_create, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_create, 21.05);
 struct rte_flow_action_handle *
 rte_flow_action_handle_create(uint16_t port_id,
 			      const struct rte_flow_indir_action_conf *conf,
@@ -1345,7 +1345,7 @@ rte_flow_action_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_destroy, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_destroy, 21.05);
 int
 rte_flow_action_handle_destroy(uint16_t port_id,
 			       struct rte_flow_action_handle *handle,
@@ -1369,7 +1369,7 @@ rte_flow_action_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_update, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_update, 21.05);
 int
 rte_flow_action_handle_update(uint16_t port_id,
 			      struct rte_flow_action_handle *handle,
@@ -1394,7 +1394,7 @@ rte_flow_action_handle_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query, 21.05);
 int
 rte_flow_action_handle_query(uint16_t port_id,
 			     const struct rte_flow_action_handle *handle,
@@ -1419,7 +1419,7 @@ rte_flow_action_handle_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_decap_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_decap_set, 20.11);
 int
 rte_flow_tunnel_decap_set(uint16_t port_id,
 			  struct rte_flow_tunnel *tunnel,
@@ -1449,7 +1449,7 @@ rte_flow_tunnel_decap_set(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_match, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_match, 20.11);
 int
 rte_flow_tunnel_match(uint16_t port_id,
 		      struct rte_flow_tunnel *tunnel,
@@ -1479,7 +1479,7 @@ rte_flow_tunnel_match(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_restore_info, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_get_restore_info, 20.11);
 int
 rte_flow_get_restore_info(uint16_t port_id,
 			  struct rte_mbuf *m,
@@ -1514,7 +1514,7 @@ static struct {
 	.desc = { .name = "RTE_MBUF_F_RX_RESTORE_INFO", },
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_restore_info_dynflag, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_restore_info_dynflag, 23.07);
 uint64_t
 rte_flow_restore_info_dynflag(void)
 {
@@ -1535,7 +1535,7 @@ rte_flow_restore_info_dynflag_register(void)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_action_decap_release, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_action_decap_release, 20.11);
 int
 rte_flow_tunnel_action_decap_release(uint16_t port_id,
 				     struct rte_flow_action *actions,
@@ -1565,7 +1565,7 @@ rte_flow_tunnel_action_decap_release(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_item_release, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_tunnel_item_release, 20.11);
 int
 rte_flow_tunnel_item_release(uint16_t port_id,
 			     struct rte_flow_item *items,
@@ -1593,7 +1593,7 @@ rte_flow_tunnel_item_release(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_SYMBOL(rte_flow_pick_transfer_proxy)
+RTE_EXPORT_SYMBOL(rte_flow_pick_transfer_proxy);
 int
 rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id,
 			     struct rte_flow_error *error)
@@ -1621,7 +1621,7 @@ rte_flow_pick_transfer_proxy(uint16_t port_id, uint16_t *proxy_port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_create, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_create, 21.11);
 struct rte_flow_item_flex_handle *
 rte_flow_flex_item_create(uint16_t port_id,
 			  const struct rte_flow_item_flex_conf *conf,
@@ -1648,7 +1648,7 @@ rte_flow_flex_item_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_release, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_flex_item_release, 21.11);
 int
 rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
@@ -1670,7 +1670,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_info_get, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_info_get, 22.03);
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
@@ -1707,7 +1707,7 @@ rte_flow_info_get(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_configure, 22.03);
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
@@ -1766,7 +1766,7 @@ rte_flow_configure(uint16_t port_id,
 				  NULL, rte_strerror(EINVAL));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_create, 22.03);
 struct rte_flow_pattern_template *
 rte_flow_pattern_template_create(uint16_t port_id,
 		const struct rte_flow_pattern_template_attr *template_attr,
@@ -1823,7 +1823,7 @@ rte_flow_pattern_template_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pattern_template_destroy, 22.03);
 int
 rte_flow_pattern_template_destroy(uint16_t port_id,
 		struct rte_flow_pattern_template *pattern_template,
@@ -1854,7 +1854,7 @@ rte_flow_pattern_template_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_create, 22.03);
 struct rte_flow_actions_template *
 rte_flow_actions_template_create(uint16_t port_id,
 			const struct rte_flow_actions_template_attr *template_attr,
@@ -1921,7 +1921,7 @@ rte_flow_actions_template_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_actions_template_destroy, 22.03);
 int
 rte_flow_actions_template_destroy(uint16_t port_id,
 			struct rte_flow_actions_template *actions_template,
@@ -1952,7 +1952,7 @@ rte_flow_actions_template_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_create, 22.03);
 struct rte_flow_template_table *
 rte_flow_template_table_create(uint16_t port_id,
 			const struct rte_flow_template_table_attr *table_attr,
@@ -2026,7 +2026,7 @@ rte_flow_template_table_create(uint16_t port_id,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_destroy, 22.03);
 int
 rte_flow_template_table_destroy(uint16_t port_id,
 				struct rte_flow_template_table *template_table,
@@ -2057,7 +2057,7 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_group_set_miss_actions, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_group_set_miss_actions, 23.11);
 int
 rte_flow_group_set_miss_actions(uint16_t port_id,
 				uint32_t group_id,
@@ -2080,7 +2080,7 @@ rte_flow_group_set_miss_actions(uint16_t port_id,
 				  NULL, rte_strerror(ENOTSUP));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create, 22.03);
 struct rte_flow *
 rte_flow_async_create(uint16_t port_id,
 		      uint32_t queue_id,
@@ -2122,7 +2122,7 @@ rte_flow_async_create(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index, 23.03);
 struct rte_flow *
 rte_flow_async_create_by_index(uint16_t port_id,
 			       uint32_t queue_id,
@@ -2161,7 +2161,7 @@ rte_flow_async_create_by_index(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index_with_pattern, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_create_by_index_with_pattern, 24.11);
 struct rte_flow *
 rte_flow_async_create_by_index_with_pattern(uint16_t port_id,
 					    uint32_t queue_id,
@@ -2206,7 +2206,7 @@ rte_flow_async_create_by_index_with_pattern(uint16_t port_id,
 	return flow;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_destroy, 22.03);
 int
 rte_flow_async_destroy(uint16_t port_id,
 		       uint32_t queue_id,
@@ -2237,7 +2237,7 @@ rte_flow_async_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_actions_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_actions_update, 23.07);
 int
 rte_flow_async_actions_update(uint16_t port_id,
 			      uint32_t queue_id,
@@ -2272,7 +2272,7 @@ rte_flow_async_actions_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_push, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_push, 22.03);
 int
 rte_flow_push(uint16_t port_id,
 	      uint32_t queue_id,
@@ -2297,7 +2297,7 @@ rte_flow_push(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pull, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_pull, 22.03);
 int
 rte_flow_pull(uint16_t port_id,
 	      uint32_t queue_id,
@@ -2324,7 +2324,7 @@ rte_flow_pull(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_create, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_create, 22.03);
 struct rte_flow_action_handle *
 rte_flow_async_action_handle_create(uint16_t port_id,
 		uint32_t queue_id,
@@ -2361,7 +2361,7 @@ rte_flow_async_action_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_destroy, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_destroy, 22.03);
 int
 rte_flow_async_action_handle_destroy(uint16_t port_id,
 		uint32_t queue_id,
@@ -2391,7 +2391,7 @@ rte_flow_async_action_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_update, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_update, 22.03);
 int
 rte_flow_async_action_handle_update(uint16_t port_id,
 		uint32_t queue_id,
@@ -2423,7 +2423,7 @@ rte_flow_async_action_handle_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query, 22.11);
 int
 rte_flow_async_action_handle_query(uint16_t port_id,
 		uint32_t queue_id,
@@ -2455,7 +2455,7 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query_update, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_handle_query_update, 23.03);
 int
 rte_flow_action_handle_query_update(uint16_t port_id,
 				    struct rte_flow_action_handle *handle,
@@ -2481,7 +2481,7 @@ rte_flow_action_handle_query_update(uint16_t port_id,
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query_update, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_handle_query_update, 23.03);
 int
 rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					  const struct rte_flow_op_attr *attr,
@@ -2508,7 +2508,7 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 								  user_data, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_create, 23.07);
 struct rte_flow_action_list_handle *
 rte_flow_action_list_handle_create(uint16_t port_id,
 				   const
@@ -2536,7 +2536,7 @@ rte_flow_action_list_handle_create(uint16_t port_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_destroy, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_destroy, 23.07);
 int
 rte_flow_action_list_handle_destroy(uint16_t port_id,
 				    struct rte_flow_action_list_handle *handle,
@@ -2559,7 +2559,7 @@ rte_flow_action_list_handle_destroy(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_create, 23.07);
 struct rte_flow_action_list_handle *
 rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
 					 const struct rte_flow_op_attr *attr,
@@ -2596,7 +2596,7 @@ rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
 	return handle;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_destroy, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_destroy, 23.07);
 int
 rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
 				 const struct rte_flow_op_attr *op_attr,
@@ -2624,7 +2624,7 @@ rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_query_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_action_list_handle_query_update, 23.07);
 int
 rte_flow_action_list_handle_query_update(uint16_t port_id,
 			 const struct rte_flow_action_list_handle *handle,
@@ -2651,7 +2651,7 @@ rte_flow_action_list_handle_query_update(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_query_update, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_action_list_handle_query_update, 23.07);
 int
 rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_id,
 			 const struct rte_flow_op_attr *attr,
@@ -2686,7 +2686,7 @@ rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_table_hash, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_table_hash, 23.11);
 int
 rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table *table,
 			 const struct rte_flow_item pattern[], uint8_t pattern_template_index,
@@ -2708,7 +2708,7 @@ rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_encap_hash, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_calc_encap_hash, 24.03);
 int
 rte_flow_calc_encap_hash(uint16_t port_id, const struct rte_flow_item pattern[],
 			 enum rte_flow_encap_hash_field dest_field, uint8_t hash_len,
@@ -2738,7 +2738,7 @@ rte_flow_calc_encap_hash(uint16_t port_id, const struct rte_flow_item pattern[],
 	return flow_err(port_id, ret, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resizable, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resizable, 24.03);
 bool
 rte_flow_template_table_resizable(__rte_unused uint16_t port_id,
 				  const struct rte_flow_template_table_attr *tbl_attr)
@@ -2747,7 +2747,7 @@ rte_flow_template_table_resizable(__rte_unused uint16_t port_id,
 		RTE_FLOW_TABLE_SPECIALIZE_RESIZABLE) != 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize, 24.03);
 int
 rte_flow_template_table_resize(uint16_t port_id,
 			       struct rte_flow_template_table *table,
@@ -2771,7 +2771,7 @@ rte_flow_template_table_resize(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_update_resized, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_async_update_resized, 24.03);
 int
 rte_flow_async_update_resized(uint16_t port_id, uint32_t queue,
 			      const struct rte_flow_op_attr *attr,
@@ -2796,7 +2796,7 @@ rte_flow_async_update_resized(uint16_t port_id, uint32_t queue,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize_complete, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_flow_template_table_resize_complete, 24.03);
 int
 rte_flow_template_table_resize_complete(uint16_t port_id,
 					struct rte_flow_template_table *table,
@@ -3032,7 +3032,7 @@ rte_flow_dummy_async_action_list_handle_query_update(
 				  rte_strerror(ENOSYS));
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_flow_fp_default_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_flow_fp_default_ops);
 struct rte_flow_fp_ops rte_flow_fp_default_ops = {
 	.async_create = rte_flow_dummy_async_create,
 	.async_create_by_index = rte_flow_dummy_async_create_by_index,
diff --git a/lib/ethdev/rte_mtr.c b/lib/ethdev/rte_mtr.c
index c6f0698ed3..e4bd02c73b 100644
--- a/lib/ethdev/rte_mtr.c
+++ b/lib/ethdev/rte_mtr.c
@@ -78,7 +78,7 @@ __extension__ ({					\
 })
 
 /* MTR capabilities get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_capabilities_get, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_capabilities_get, 17.11);
 int
 rte_mtr_capabilities_get(uint16_t port_id,
 	struct rte_mtr_capabilities *cap,
@@ -95,7 +95,7 @@ rte_mtr_capabilities_get(uint16_t port_id,
 }
 
 /* MTR meter profile add */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_add, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_add, 17.11);
 int
 rte_mtr_meter_profile_add(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -114,7 +114,7 @@ rte_mtr_meter_profile_add(uint16_t port_id,
 }
 
 /** MTR meter profile delete */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_delete, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_delete, 17.11);
 int
 rte_mtr_meter_profile_delete(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -131,7 +131,7 @@ rte_mtr_meter_profile_delete(uint16_t port_id,
 }
 
 /** MTR meter profile get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_get, 22.11);
 struct rte_flow_meter_profile *
 rte_mtr_meter_profile_get(uint16_t port_id,
 	uint32_t meter_profile_id,
@@ -148,7 +148,7 @@ rte_mtr_meter_profile_get(uint16_t port_id,
 }
 
 /* MTR meter policy validate */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_validate, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_validate, 21.05);
 int
 rte_mtr_meter_policy_validate(uint16_t port_id,
 	struct rte_mtr_meter_policy_params *policy,
@@ -165,7 +165,7 @@ rte_mtr_meter_policy_validate(uint16_t port_id,
 }
 
 /* MTR meter policy add */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_add, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_add, 21.05);
 int
 rte_mtr_meter_policy_add(uint16_t port_id,
 	uint32_t policy_id,
@@ -183,7 +183,7 @@ rte_mtr_meter_policy_add(uint16_t port_id,
 }
 
 /** MTR meter policy delete */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_delete, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_delete, 21.05);
 int
 rte_mtr_meter_policy_delete(uint16_t port_id,
 	uint32_t policy_id,
@@ -200,7 +200,7 @@ rte_mtr_meter_policy_delete(uint16_t port_id,
 }
 
 /** MTR meter policy get */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_get, 22.11);
 struct rte_flow_meter_policy *
 rte_mtr_meter_policy_get(uint16_t port_id,
 	uint32_t policy_id,
@@ -217,7 +217,7 @@ rte_mtr_meter_policy_get(uint16_t port_id,
 }
 
 /** MTR object create */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_create, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_create, 17.11);
 int
 rte_mtr_create(uint16_t port_id,
 	uint32_t mtr_id,
@@ -236,7 +236,7 @@ rte_mtr_create(uint16_t port_id,
 }
 
 /** MTR object destroy */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_destroy, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_destroy, 17.11);
 int
 rte_mtr_destroy(uint16_t port_id,
 	uint32_t mtr_id,
@@ -253,7 +253,7 @@ rte_mtr_destroy(uint16_t port_id,
 }
 
 /** MTR object meter enable */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_enable, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_enable, 17.11);
 int
 rte_mtr_meter_enable(uint16_t port_id,
 	uint32_t mtr_id,
@@ -270,7 +270,7 @@ rte_mtr_meter_enable(uint16_t port_id,
 }
 
 /** MTR object meter disable */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_disable, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_disable, 17.11);
 int
 rte_mtr_meter_disable(uint16_t port_id,
 	uint32_t mtr_id,
@@ -287,7 +287,7 @@ rte_mtr_meter_disable(uint16_t port_id,
 }
 
 /** MTR object meter profile update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_profile_update, 17.11);
 int
 rte_mtr_meter_profile_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -305,7 +305,7 @@ rte_mtr_meter_profile_update(uint16_t port_id,
 }
 
 /** MTR object meter policy update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_update, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_policy_update, 21.05);
 int
 rte_mtr_meter_policy_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -323,7 +323,7 @@ rte_mtr_meter_policy_update(uint16_t port_id,
 }
 
 /** MTR object meter DSCP table update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_dscp_table_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_dscp_table_update, 17.11);
 int
 rte_mtr_meter_dscp_table_update(uint16_t port_id,
 	uint32_t mtr_id, enum rte_mtr_color_in_protocol proto,
@@ -341,7 +341,7 @@ rte_mtr_meter_dscp_table_update(uint16_t port_id,
 }
 
 /** MTR object meter VLAN table update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_vlan_table_update, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_meter_vlan_table_update, 22.07);
 int
 rte_mtr_meter_vlan_table_update(uint16_t port_id,
 	uint32_t mtr_id, enum rte_mtr_color_in_protocol proto,
@@ -359,7 +359,7 @@ rte_mtr_meter_vlan_table_update(uint16_t port_id,
 }
 
 /** Set the input color protocol on MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_set, 22.07);
 int
 rte_mtr_color_in_protocol_set(uint16_t port_id,
 	uint32_t mtr_id,
@@ -378,7 +378,7 @@ rte_mtr_color_in_protocol_set(uint16_t port_id,
 }
 
 /** Get input color protocols of MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_get, 22.07);
 int
 rte_mtr_color_in_protocol_get(uint16_t port_id,
 	uint32_t mtr_id,
@@ -396,7 +396,7 @@ rte_mtr_color_in_protocol_get(uint16_t port_id,
 }
 
 /** Get input color protocol priority of MTR object */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_priority_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_color_in_protocol_priority_get, 22.07);
 int
 rte_mtr_color_in_protocol_priority_get(uint16_t port_id,
 	uint32_t mtr_id,
@@ -415,7 +415,7 @@ rte_mtr_color_in_protocol_priority_get(uint16_t port_id,
 }
 
 /** MTR object enabled stats update */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_update, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_update, 17.11);
 int
 rte_mtr_stats_update(uint16_t port_id,
 	uint32_t mtr_id,
@@ -433,7 +433,7 @@ rte_mtr_stats_update(uint16_t port_id,
 }
 
 /** MTR object stats read */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_read, 17.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mtr_stats_read, 17.11);
 int
 rte_mtr_stats_read(uint16_t port_id,
 	uint32_t mtr_id,
diff --git a/lib/ethdev/rte_tm.c b/lib/ethdev/rte_tm.c
index 66b8934c3b..cb858deff9 100644
--- a/lib/ethdev/rte_tm.c
+++ b/lib/ethdev/rte_tm.c
@@ -59,7 +59,7 @@ __extension__ ({					\
 })
 
 /* Get number of leaf nodes */
-RTE_EXPORT_SYMBOL(rte_tm_get_number_of_leaf_nodes)
+RTE_EXPORT_SYMBOL(rte_tm_get_number_of_leaf_nodes);
 int
 rte_tm_get_number_of_leaf_nodes(uint16_t port_id,
 	uint32_t *n_leaf_nodes,
@@ -89,7 +89,7 @@ rte_tm_get_number_of_leaf_nodes(uint16_t port_id,
 }
 
 /* Check node type (leaf or non-leaf) */
-RTE_EXPORT_SYMBOL(rte_tm_node_type_get)
+RTE_EXPORT_SYMBOL(rte_tm_node_type_get);
 int
 rte_tm_node_type_get(uint16_t port_id,
 	uint32_t node_id,
@@ -107,7 +107,7 @@ rte_tm_node_type_get(uint16_t port_id,
 }
 
 /* Get capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_capabilities_get);
 int rte_tm_capabilities_get(uint16_t port_id,
 	struct rte_tm_capabilities *cap,
 	struct rte_tm_error *error)
@@ -123,7 +123,7 @@ int rte_tm_capabilities_get(uint16_t port_id,
 }
 
 /* Get level capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_level_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_level_capabilities_get);
 int rte_tm_level_capabilities_get(uint16_t port_id,
 	uint32_t level_id,
 	struct rte_tm_level_capabilities *cap,
@@ -140,7 +140,7 @@ int rte_tm_level_capabilities_get(uint16_t port_id,
 }
 
 /* Get node capabilities */
-RTE_EXPORT_SYMBOL(rte_tm_node_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_tm_node_capabilities_get);
 int rte_tm_node_capabilities_get(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_node_capabilities *cap,
@@ -157,7 +157,7 @@ int rte_tm_node_capabilities_get(uint16_t port_id,
 }
 
 /* Add WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_wred_profile_add)
+RTE_EXPORT_SYMBOL(rte_tm_wred_profile_add);
 int rte_tm_wred_profile_add(uint16_t port_id,
 	uint32_t wred_profile_id,
 	const struct rte_tm_wred_params *profile,
@@ -174,7 +174,7 @@ int rte_tm_wred_profile_add(uint16_t port_id,
 }
 
 /* Delete WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_wred_profile_delete)
+RTE_EXPORT_SYMBOL(rte_tm_wred_profile_delete);
 int rte_tm_wred_profile_delete(uint16_t port_id,
 	uint32_t wred_profile_id,
 	struct rte_tm_error *error)
@@ -190,7 +190,7 @@ int rte_tm_wred_profile_delete(uint16_t port_id,
 }
 
 /* Add/update shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_add_update)
+RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_add_update);
 int rte_tm_shared_wred_context_add_update(uint16_t port_id,
 	uint32_t shared_wred_context_id,
 	uint32_t wred_profile_id,
@@ -209,7 +209,7 @@ int rte_tm_shared_wred_context_add_update(uint16_t port_id,
 }
 
 /* Delete shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shared_wred_context_delete);
 int rte_tm_shared_wred_context_delete(uint16_t port_id,
 	uint32_t shared_wred_context_id,
 	struct rte_tm_error *error)
@@ -226,7 +226,7 @@ int rte_tm_shared_wred_context_delete(uint16_t port_id,
 }
 
 /* Add shaper profile */
-RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_add)
+RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_add);
 int rte_tm_shaper_profile_add(uint16_t port_id,
 	uint32_t shaper_profile_id,
 	const struct rte_tm_shaper_params *profile,
@@ -244,7 +244,7 @@ int rte_tm_shaper_profile_add(uint16_t port_id,
 }
 
 /* Delete WRED profile */
-RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shaper_profile_delete);
 int rte_tm_shaper_profile_delete(uint16_t port_id,
 	uint32_t shaper_profile_id,
 	struct rte_tm_error *error)
@@ -260,7 +260,7 @@ int rte_tm_shaper_profile_delete(uint16_t port_id,
 }
 
 /* Add shared shaper */
-RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_add_update)
+RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_add_update);
 int rte_tm_shared_shaper_add_update(uint16_t port_id,
 	uint32_t shared_shaper_id,
 	uint32_t shaper_profile_id,
@@ -278,7 +278,7 @@ int rte_tm_shared_shaper_add_update(uint16_t port_id,
 }
 
 /* Delete shared shaper */
-RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_delete)
+RTE_EXPORT_SYMBOL(rte_tm_shared_shaper_delete);
 int rte_tm_shared_shaper_delete(uint16_t port_id,
 	uint32_t shared_shaper_id,
 	struct rte_tm_error *error)
@@ -294,7 +294,7 @@ int rte_tm_shared_shaper_delete(uint16_t port_id,
 }
 
 /* Add node to port traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_node_add)
+RTE_EXPORT_SYMBOL(rte_tm_node_add);
 int rte_tm_node_add(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t parent_node_id,
@@ -316,7 +316,7 @@ int rte_tm_node_add(uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tm_node_query, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tm_node_query, 24.11);
 int rte_tm_node_query(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t *parent_node_id,
@@ -340,7 +340,7 @@ int rte_tm_node_query(uint16_t port_id,
 }
 
 /* Delete node from traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_node_delete)
+RTE_EXPORT_SYMBOL(rte_tm_node_delete);
 int rte_tm_node_delete(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -356,7 +356,7 @@ int rte_tm_node_delete(uint16_t port_id,
 }
 
 /* Suspend node */
-RTE_EXPORT_SYMBOL(rte_tm_node_suspend)
+RTE_EXPORT_SYMBOL(rte_tm_node_suspend);
 int rte_tm_node_suspend(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -372,7 +372,7 @@ int rte_tm_node_suspend(uint16_t port_id,
 }
 
 /* Resume node */
-RTE_EXPORT_SYMBOL(rte_tm_node_resume)
+RTE_EXPORT_SYMBOL(rte_tm_node_resume);
 int rte_tm_node_resume(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_error *error)
@@ -388,7 +388,7 @@ int rte_tm_node_resume(uint16_t port_id,
 }
 
 /* Commit the initial port traffic manager hierarchy */
-RTE_EXPORT_SYMBOL(rte_tm_hierarchy_commit)
+RTE_EXPORT_SYMBOL(rte_tm_hierarchy_commit);
 int rte_tm_hierarchy_commit(uint16_t port_id,
 	int clear_on_fail,
 	struct rte_tm_error *error)
@@ -404,7 +404,7 @@ int rte_tm_hierarchy_commit(uint16_t port_id,
 }
 
 /* Update node parent  */
-RTE_EXPORT_SYMBOL(rte_tm_node_parent_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_parent_update);
 int rte_tm_node_parent_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t parent_node_id,
@@ -424,7 +424,7 @@ int rte_tm_node_parent_update(uint16_t port_id,
 }
 
 /* Update node private shaper */
-RTE_EXPORT_SYMBOL(rte_tm_node_shaper_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shaper_update);
 int rte_tm_node_shaper_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shaper_profile_id,
@@ -442,7 +442,7 @@ int rte_tm_node_shaper_update(uint16_t port_id,
 }
 
 /* Update node shared shapers */
-RTE_EXPORT_SYMBOL(rte_tm_node_shared_shaper_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shared_shaper_update);
 int rte_tm_node_shared_shaper_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shared_shaper_id,
@@ -461,7 +461,7 @@ int rte_tm_node_shared_shaper_update(uint16_t port_id,
 }
 
 /* Update node stats */
-RTE_EXPORT_SYMBOL(rte_tm_node_stats_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_stats_update);
 int rte_tm_node_stats_update(uint16_t port_id,
 	uint32_t node_id,
 	uint64_t stats_mask,
@@ -478,7 +478,7 @@ int rte_tm_node_stats_update(uint16_t port_id,
 }
 
 /* Update WFQ weight mode */
-RTE_EXPORT_SYMBOL(rte_tm_node_wfq_weight_mode_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_wfq_weight_mode_update);
 int rte_tm_node_wfq_weight_mode_update(uint16_t port_id,
 	uint32_t node_id,
 	int *wfq_weight_mode,
@@ -498,7 +498,7 @@ int rte_tm_node_wfq_weight_mode_update(uint16_t port_id,
 }
 
 /* Update node congestion management mode */
-RTE_EXPORT_SYMBOL(rte_tm_node_cman_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_cman_update);
 int rte_tm_node_cman_update(uint16_t port_id,
 	uint32_t node_id,
 	enum rte_tm_cman_mode cman,
@@ -515,7 +515,7 @@ int rte_tm_node_cman_update(uint16_t port_id,
 }
 
 /* Update node private WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_node_wred_context_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_wred_context_update);
 int rte_tm_node_wred_context_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t wred_profile_id,
@@ -533,7 +533,7 @@ int rte_tm_node_wred_context_update(uint16_t port_id,
 }
 
 /* Update node shared WRED context */
-RTE_EXPORT_SYMBOL(rte_tm_node_shared_wred_context_update)
+RTE_EXPORT_SYMBOL(rte_tm_node_shared_wred_context_update);
 int rte_tm_node_shared_wred_context_update(uint16_t port_id,
 	uint32_t node_id,
 	uint32_t shared_wred_context_id,
@@ -553,7 +553,7 @@ int rte_tm_node_shared_wred_context_update(uint16_t port_id,
 }
 
 /* Read and/or clear stats counters for specific node */
-RTE_EXPORT_SYMBOL(rte_tm_node_stats_read)
+RTE_EXPORT_SYMBOL(rte_tm_node_stats_read);
 int rte_tm_node_stats_read(uint16_t port_id,
 	uint32_t node_id,
 	struct rte_tm_node_stats *stats,
@@ -573,7 +573,7 @@ int rte_tm_node_stats_read(uint16_t port_id,
 }
 
 /* Packet marking - VLAN DEI */
-RTE_EXPORT_SYMBOL(rte_tm_mark_vlan_dei)
+RTE_EXPORT_SYMBOL(rte_tm_mark_vlan_dei);
 int rte_tm_mark_vlan_dei(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
@@ -592,7 +592,7 @@ int rte_tm_mark_vlan_dei(uint16_t port_id,
 }
 
 /* Packet marking - IPv4/IPv6 ECN */
-RTE_EXPORT_SYMBOL(rte_tm_mark_ip_ecn)
+RTE_EXPORT_SYMBOL(rte_tm_mark_ip_ecn);
 int rte_tm_mark_ip_ecn(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
@@ -611,7 +611,7 @@ int rte_tm_mark_ip_ecn(uint16_t port_id,
 }
 
 /* Packet marking - IPv4/IPv6 DSCP */
-RTE_EXPORT_SYMBOL(rte_tm_mark_ip_dscp)
+RTE_EXPORT_SYMBOL(rte_tm_mark_ip_dscp);
 int rte_tm_mark_ip_dscp(uint16_t port_id,
 	int mark_green,
 	int mark_yellow,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index dffd2c71d0..10fb0bf1c7 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -107,7 +107,7 @@ dummy_event_port_preschedule_hint(__rte_unused void *port,
 {
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_reset)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_reset);
 void
 event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 {
@@ -131,7 +131,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
 	*fp_op = dummy;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_set)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_fp_ops_set);
 void
 event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 		     const struct rte_eventdev *dev)
diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c
index ade6723b7b..5cfd23221a 100644
--- a/lib/eventdev/eventdev_trace_points.c
+++ b/lib/eventdev/eventdev_trace_points.c
@@ -38,27 +38,27 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_stop,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_close,
 	lib.eventdev.close)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_enq_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_enq_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_enq_burst,
 	lib.eventdev.enq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_deq_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_deq_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst,
 	lib.eventdev.deq.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_maintain)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_maintain);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain,
 	lib.eventdev.maintain)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_profile_switch, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_profile_switch, 23.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch,
 	lib.eventdev.port.profile.switch)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule_modify, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule_modify, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule_modify,
 	lib.eventdev.port.preschedule.modify)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_eventdev_trace_port_preschedule, 24.11);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule,
 	lib.eventdev.port.preschedule)
 
@@ -103,7 +103,7 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_start,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_stop,
 	lib.eventdev.tx.adapter.stop)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_eth_tx_adapter_enqueue)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_eth_tx_adapter_enqueue);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_tx_adapter_enqueue,
 	lib.eventdev.tx.adapter.enq)
 
@@ -120,15 +120,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_adapter_stop,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_adapter_free,
 	lib.eventdev.timer.free)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_arm_burst,
 	lib.eventdev.timer.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_tmo_tick_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_arm_tmo_tick_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_arm_tmo_tick_burst,
 	lib.eventdev.timer.tick.burst)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_cancel_burst)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_timer_cancel_burst);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_timer_cancel_burst,
 	lib.eventdev.timer.cancel)
 
@@ -151,7 +151,7 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start,
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop,
 	lib.eventdev.crypto.stop)
 
-RTE_EXPORT_SYMBOL(__rte_eventdev_trace_crypto_adapter_enqueue)
+RTE_EXPORT_SYMBOL(__rte_eventdev_trace_crypto_adapter_enqueue);
 RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue,
 	lib.eventdev.crypto.enq)
 
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index b827a0ffd6..aadf992570 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -363,7 +363,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create_ext);
 int
 rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_crypto_adapter_conf_cb conf_cb,
@@ -439,7 +439,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_create);
 int
 rte_event_crypto_adapter_create(uint8_t id, uint8_t dev_id,
 				struct rte_event_port_conf *port_config,
@@ -468,7 +468,7 @@ rte_event_crypto_adapter_create(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_free);
 int
 rte_event_crypto_adapter_free(uint8_t id)
 {
@@ -1040,7 +1040,7 @@ eca_add_queue_pair(struct event_crypto_adapter *adapter, uint8_t cdev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_add)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_add);
 int
 rte_event_crypto_adapter_queue_pair_add(uint8_t id,
 			uint8_t cdev_id,
@@ -1195,7 +1195,7 @@ rte_event_crypto_adapter_queue_pair_add(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_del)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_queue_pair_del);
 int
 rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id,
 					int32_t queue_pair_id)
@@ -1321,7 +1321,7 @@ eca_adapter_ctrl(uint8_t id, int start)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_start);
 int
 rte_event_crypto_adapter_start(uint8_t id)
 {
@@ -1336,7 +1336,7 @@ rte_event_crypto_adapter_start(uint8_t id)
 	return eca_adapter_ctrl(id, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stop);
 int
 rte_event_crypto_adapter_stop(uint8_t id)
 {
@@ -1344,7 +1344,7 @@ rte_event_crypto_adapter_stop(uint8_t id)
 	return eca_adapter_ctrl(id, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_get);
 int
 rte_event_crypto_adapter_stats_get(uint8_t id,
 				struct rte_event_crypto_adapter_stats *stats)
@@ -1397,7 +1397,7 @@ rte_event_crypto_adapter_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_stats_reset);
 int
 rte_event_crypto_adapter_stats_reset(uint8_t id)
 {
@@ -1430,7 +1430,7 @@ rte_event_crypto_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_init, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_init(
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1469,7 +1469,7 @@ crypto_adapter_cap_check(struct event_crypto_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_set, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1502,7 +1502,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_crypto_adapter_runtime_params_get, 23.03);
 int
 rte_event_crypto_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_crypto_adapter_runtime_params *params)
@@ -1534,7 +1534,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_service_id_get);
 int
 rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1554,7 +1554,7 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_event_port_get);
 int
 rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -1573,7 +1573,7 @@ rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_vector_limits_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_vector_limits_get);
 int
 rte_event_crypto_adapter_vector_limits_get(
 	uint8_t dev_id, uint16_t cdev_id,
diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index cb799f3410..b8b1fa88d5 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -341,7 +341,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create_ext, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create_ext, 23.11);
 int
 rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id,
 				 rte_event_dma_adapter_conf_cb conf_cb,
@@ -435,7 +435,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_create, 23.11);
 int
 rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config,
 			    enum rte_event_dma_adapter_mode mode)
@@ -460,7 +460,7 @@ rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_free, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_free, 23.11);
 int
 rte_event_dma_adapter_free(uint8_t id)
 {
@@ -481,7 +481,7 @@ rte_event_dma_adapter_free(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_event_port_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_event_port_get, 23.11);
 int
 rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -988,7 +988,7 @@ edma_add_vchan(struct event_dma_adapter *adapter, int16_t dma_dev_id, uint16_t v
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_add, 23.11);
 int
 rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
 				const struct rte_event *event)
@@ -1103,7 +1103,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_del, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_vchan_del, 23.11);
 int
 rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
 {
@@ -1170,7 +1170,7 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_service_id_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_service_id_get, 23.11);
 int
 rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1230,7 +1230,7 @@ edma_adapter_ctrl(uint8_t id, int start)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_start, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_start, 23.11);
 int
 rte_event_dma_adapter_start(uint8_t id)
 {
@@ -1245,7 +1245,7 @@ rte_event_dma_adapter_start(uint8_t id)
 	return edma_adapter_ctrl(id, 1);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stop, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stop, 23.11);
 int
 rte_event_dma_adapter_stop(uint8_t id)
 {
@@ -1254,7 +1254,7 @@ rte_event_dma_adapter_stop(uint8_t id)
 
 #define DEFAULT_MAX_NB 128
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_init, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_init, 23.11);
 int
 rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params)
 {
@@ -1290,7 +1290,7 @@ dma_adapter_cap_check(struct event_dma_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_set, 23.11);
 int
 rte_event_dma_adapter_runtime_params_set(uint8_t id,
 					 struct rte_event_dma_adapter_runtime_params *params)
@@ -1320,7 +1320,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_runtime_params_get, 23.11);
 int
 rte_event_dma_adapter_runtime_params_get(uint8_t id,
 					 struct rte_event_dma_adapter_runtime_params *params)
@@ -1348,7 +1348,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_get, 23.11);
 int
 rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats)
 {
@@ -1394,7 +1394,7 @@ rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_reset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_stats_reset, 23.11);
 int
 rte_event_dma_adapter_stats_reset(uint8_t id)
 {
@@ -1427,7 +1427,7 @@ rte_event_dma_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_enqueue, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_enqueue, 23.11);
 uint16_t
 rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
 			      uint16_t nb_events)
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 994f256322..cffc28b71d 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -2519,7 +2519,7 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_ext);
 int
 rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_eth_rx_adapter_conf_cb conf_cb,
@@ -2534,7 +2534,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 	return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_with_params)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create_with_params);
 int
 rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
 			struct rte_event_port_conf *port_config,
@@ -2567,7 +2567,7 @@ rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_create_ext_with_params, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_create_ext_with_params, 23.11);
 int
 rte_event_eth_rx_adapter_create_ext_with_params(uint8_t id, uint8_t dev_id,
 			rte_event_eth_rx_adapter_conf_cb conf_cb,
@@ -2584,7 +2584,7 @@ rte_event_eth_rx_adapter_create_ext_with_params(uint8_t id, uint8_t dev_id,
 	return rxa_create(id, dev_id, &temp_params, conf_cb, conf_arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_create);
 int
 rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
 		struct rte_event_port_conf *port_config)
@@ -2610,7 +2610,7 @@ rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_free);
 int
 rte_event_eth_rx_adapter_free(uint8_t id)
 {
@@ -2643,7 +2643,7 @@ rte_event_eth_rx_adapter_free(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_add)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_add);
 int
 rte_event_eth_rx_adapter_queue_add(uint8_t id,
 		uint16_t eth_dev_id,
@@ -2797,7 +2797,7 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_queues_add, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_queues_add, 25.03);
 int
 rte_event_eth_rx_adapter_queues_add(uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id[],
 				    const struct rte_event_eth_rx_adapter_queue_conf queue_conf[],
@@ -2969,7 +2969,7 @@ rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_del)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_del);
 int
 rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
 				int32_t rx_queue_id)
@@ -3098,7 +3098,7 @@ rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_vector_limits_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_vector_limits_get);
 int
 rte_event_eth_rx_adapter_vector_limits_get(
 	uint8_t dev_id, uint16_t eth_port_id,
@@ -3140,7 +3140,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_start);
 int
 rte_event_eth_rx_adapter_start(uint8_t id)
 {
@@ -3148,7 +3148,7 @@ rte_event_eth_rx_adapter_start(uint8_t id)
 	return rxa_ctrl(id, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stop);
 int
 rte_event_eth_rx_adapter_stop(uint8_t id)
 {
@@ -3165,7 +3165,7 @@ rxa_queue_stats_reset(struct eth_rx_queue_info *queue_info)
 	memset(q_stats, 0, sizeof(*q_stats));
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_get);
 int
 rte_event_eth_rx_adapter_stats_get(uint8_t id,
 			       struct rte_event_eth_rx_adapter_stats *stats)
@@ -3240,7 +3240,7 @@ rte_event_eth_rx_adapter_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_get);
 int
 rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
 		uint16_t eth_dev_id,
@@ -3305,7 +3305,7 @@ rte_event_eth_rx_adapter_queue_stats_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_stats_reset);
 int
 rte_event_eth_rx_adapter_stats_reset(uint8_t id)
 {
@@ -3353,7 +3353,7 @@ rte_event_eth_rx_adapter_stats_reset(uint8_t id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_stats_reset);
 int
 rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
 		uint16_t eth_dev_id,
@@ -3408,7 +3408,7 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_service_id_get);
 int
 rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -3431,7 +3431,7 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return rx_adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_event_port_get);
 int
 rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -3454,7 +3454,7 @@ rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return rx_adapter->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_cb_register)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_cb_register);
 int
 rte_event_eth_rx_adapter_cb_register(uint8_t id,
 					uint16_t eth_dev_id,
@@ -3503,7 +3503,7 @@ rte_event_eth_rx_adapter_cb_register(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_queue_conf_get);
 int
 rte_event_eth_rx_adapter_queue_conf_get(uint8_t id,
 			uint16_t eth_dev_id,
@@ -3605,7 +3605,7 @@ rxa_is_queue_added(struct event_eth_rx_adapter *rx_adapter,
 #define rxa_dev_instance_get(rx_adapter) \
 		rxa_evdev((rx_adapter))->dev_ops->eth_rx_adapter_instance_get
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_instance_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_instance_get);
 int
 rte_event_eth_rx_adapter_instance_get(uint16_t eth_dev_id,
 				      uint16_t rx_queue_id,
@@ -3684,7 +3684,7 @@ rxa_caps_check(struct event_eth_rx_adapter *rxa)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_init, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_init(
 		struct rte_event_eth_rx_adapter_runtime_params *params)
@@ -3698,7 +3698,7 @@ rte_event_eth_rx_adapter_runtime_params_init(
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_set, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_eth_rx_adapter_runtime_params *params)
@@ -3727,7 +3727,7 @@ rte_event_eth_rx_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_rx_adapter_runtime_params_get, 23.03);
 int
 rte_event_eth_rx_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_eth_rx_adapter_runtime_params *params)
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 83b6af0955..bcc573c155 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -1039,7 +1039,7 @@ txa_service_stop(uint8_t id)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create);
 int
 rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
 				struct rte_event_port_conf *port_conf)
@@ -1084,7 +1084,7 @@ rte_event_eth_tx_adapter_create(uint8_t id, uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_create_ext);
 int
 rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 				rte_event_eth_tx_adapter_conf_cb conf_cb,
@@ -1129,7 +1129,7 @@ rte_event_eth_tx_adapter_create_ext(uint8_t id, uint8_t dev_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_event_port_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_event_port_get);
 int
 rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 {
@@ -1140,7 +1140,7 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 	return txa_service_event_port_get(id, event_port_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_free);
 int
 rte_event_eth_tx_adapter_free(uint8_t id)
 {
@@ -1160,7 +1160,7 @@ rte_event_eth_tx_adapter_free(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_add)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_add);
 int
 rte_event_eth_tx_adapter_queue_add(uint8_t id,
 				uint16_t eth_dev_id,
@@ -1194,7 +1194,7 @@ rte_event_eth_tx_adapter_queue_add(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_del)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_del);
 int
 rte_event_eth_tx_adapter_queue_del(uint8_t id,
 				uint16_t eth_dev_id,
@@ -1227,7 +1227,7 @@ rte_event_eth_tx_adapter_queue_del(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_service_id_get);
 int
 rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 {
@@ -1236,7 +1236,7 @@ rte_event_eth_tx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
 	return txa_service_id_get(id, service_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_start);
 int
 rte_event_eth_tx_adapter_start(uint8_t id)
 {
@@ -1251,7 +1251,7 @@ rte_event_eth_tx_adapter_start(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_get);
 int
 rte_event_eth_tx_adapter_stats_get(uint8_t id,
 				struct rte_event_eth_tx_adapter_stats *stats)
@@ -1288,7 +1288,7 @@ rte_event_eth_tx_adapter_stats_get(uint8_t id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stats_reset);
 int
 rte_event_eth_tx_adapter_stats_reset(uint8_t id)
 {
@@ -1306,7 +1306,7 @@ rte_event_eth_tx_adapter_stats_reset(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_init, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_init, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_init(
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1333,7 +1333,7 @@ txa_caps_check(struct txa_service_data *txa)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_set, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_set(uint8_t id,
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1365,7 +1365,7 @@ rte_event_eth_tx_adapter_runtime_params_set(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_eth_tx_adapter_runtime_params_get, 23.03);
 int
 rte_event_eth_tx_adapter_runtime_params_get(uint8_t id,
 		struct rte_event_eth_tx_adapter_runtime_params *txa_params)
@@ -1397,7 +1397,7 @@ rte_event_eth_tx_adapter_runtime_params_get(uint8_t id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_stop);
 int
 rte_event_eth_tx_adapter_stop(uint8_t id)
 {
@@ -1412,7 +1412,7 @@ rte_event_eth_tx_adapter_stop(uint8_t id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_instance_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_instance_get);
 int
 rte_event_eth_tx_adapter_instance_get(uint16_t eth_dev_id,
 				      uint16_t tx_queue_id,
@@ -1546,7 +1546,7 @@ txa_queue_start_state_set(uint16_t eth_dev_id, uint16_t tx_queue_id,
 					    start_state, txa);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_start)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_start);
 int
 rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id)
 {
@@ -1555,7 +1555,7 @@ rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id)
 	return txa_queue_start_state_set(eth_dev_id, tx_queue_id, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_stop)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_queue_stop);
 int
 rte_event_eth_tx_adapter_queue_stop(uint16_t eth_dev_id, uint16_t tx_queue_id)
 {
diff --git a/lib/eventdev/rte_event_ring.c b/lib/eventdev/rte_event_ring.c
index 5718985486..1a0ea149d7 100644
--- a/lib/eventdev/rte_event_ring.c
+++ b/lib/eventdev/rte_event_ring.c
@@ -8,7 +8,7 @@
 #include "rte_event_ring.h"
 #include "eventdev_trace.h"
 
-RTE_EXPORT_SYMBOL(rte_event_ring_init)
+RTE_EXPORT_SYMBOL(rte_event_ring_init);
 int
 rte_event_ring_init(struct rte_event_ring *r, const char *name,
 	unsigned int count, unsigned int flags)
@@ -24,7 +24,7 @@ rte_event_ring_init(struct rte_event_ring *r, const char *name,
 }
 
 /* create the ring */
-RTE_EXPORT_SYMBOL(rte_event_ring_create)
+RTE_EXPORT_SYMBOL(rte_event_ring_create);
 struct rte_event_ring *
 rte_event_ring_create(const char *name, unsigned int count, int socket_id,
 		unsigned int flags)
@@ -37,7 +37,7 @@ rte_event_ring_create(const char *name, unsigned int count, int socket_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_ring_lookup)
+RTE_EXPORT_SYMBOL(rte_event_ring_lookup);
 struct rte_event_ring *
 rte_event_ring_lookup(const char *name)
 {
@@ -47,7 +47,7 @@ rte_event_ring_lookup(const char *name)
 }
 
 /* free the ring */
-RTE_EXPORT_SYMBOL(rte_event_ring_free)
+RTE_EXPORT_SYMBOL(rte_event_ring_free);
 void
 rte_event_ring_free(struct rte_event_ring *r)
 {
diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c
index 06ce478d90..5b8b2c1fcd 100644
--- a/lib/eventdev/rte_event_timer_adapter.c
+++ b/lib/eventdev/rte_event_timer_adapter.c
@@ -133,7 +133,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_create(const struct rte_event_timer_adapter_conf *conf)
 {
@@ -141,7 +141,7 @@ rte_event_timer_adapter_create(const struct rte_event_timer_adapter_conf *conf)
 						  NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create_ext)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_create_ext);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_create_ext(
 		const struct rte_event_timer_adapter_conf *conf,
@@ -267,7 +267,7 @@ rte_event_timer_adapter_create_ext(
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_get_info)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_get_info);
 int
 rte_event_timer_adapter_get_info(const struct rte_event_timer_adapter *adapter,
 		struct rte_event_timer_adapter_info *adapter_info)
@@ -288,7 +288,7 @@ rte_event_timer_adapter_get_info(const struct rte_event_timer_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_start)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_start);
 int
 rte_event_timer_adapter_start(const struct rte_event_timer_adapter *adapter)
 {
@@ -312,7 +312,7 @@ rte_event_timer_adapter_start(const struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stop)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stop);
 int
 rte_event_timer_adapter_stop(const struct rte_event_timer_adapter *adapter)
 {
@@ -336,7 +336,7 @@ rte_event_timer_adapter_stop(const struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_lookup)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_lookup);
 struct rte_event_timer_adapter *
 rte_event_timer_adapter_lookup(uint16_t adapter_id)
 {
@@ -404,7 +404,7 @@ rte_event_timer_adapter_lookup(uint16_t adapter_id)
 	return adapter;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_free)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_free);
 int
 rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 {
@@ -446,7 +446,7 @@ rte_event_timer_adapter_free(struct rte_event_timer_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_service_id_get);
 int
 rte_event_timer_adapter_service_id_get(struct rte_event_timer_adapter *adapter,
 				       uint32_t *service_id)
@@ -464,7 +464,7 @@ rte_event_timer_adapter_service_id_get(struct rte_event_timer_adapter *adapter,
 	return adapter->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_get);
 int
 rte_event_timer_adapter_stats_get(struct rte_event_timer_adapter *adapter,
 				  struct rte_event_timer_adapter_stats *stats)
@@ -479,7 +479,7 @@ rte_event_timer_adapter_stats_get(struct rte_event_timer_adapter *adapter,
 	return adapter->ops->stats_get(adapter, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_reset)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_stats_reset);
 int
 rte_event_timer_adapter_stats_reset(struct rte_event_timer_adapter *adapter)
 {
@@ -490,7 +490,7 @@ rte_event_timer_adapter_stats_reset(struct rte_event_timer_adapter *adapter)
 	return adapter->ops->stats_reset(adapter);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_timer_remaining_ticks_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_timer_remaining_ticks_get, 23.03);
 int
 rte_event_timer_remaining_ticks_get(
 			const struct rte_event_timer_adapter *adapter,
diff --git a/lib/eventdev/rte_event_vector_adapter.c b/lib/eventdev/rte_event_vector_adapter.c
index ad764e2882..24a7a063ce 100644
--- a/lib/eventdev/rte_event_vector_adapter.c
+++ b/lib/eventdev/rte_event_vector_adapter.c
@@ -151,14 +151,14 @@ default_port_conf_cb(uint8_t event_dev_id, uint8_t *event_port_id, void *conf_ar
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_create(const struct rte_event_vector_adapter_conf *conf)
 {
 	return rte_event_vector_adapter_create_ext(conf, default_port_conf_cb, NULL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create_ext, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_create_ext, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *conf,
 				    rte_event_vector_adapter_port_conf_cb_t conf_cb, void *conf_arg)
@@ -304,7 +304,7 @@ rte_event_vector_adapter_create_ext(const struct rte_event_vector_adapter_conf *
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_lookup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_lookup, 25.07);
 struct rte_event_vector_adapter *
 rte_event_vector_adapter_lookup(uint32_t adapter_id)
 {
@@ -372,7 +372,7 @@ rte_event_vector_adapter_lookup(uint32_t adapter_id)
 	return adapter;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_service_id_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_service_id_get, 25.07);
 int
 rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter,
 					uint32_t *service_id)
@@ -385,7 +385,7 @@ rte_event_vector_adapter_service_id_get(struct rte_event_vector_adapter *adapter
 	return adapter->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_destroy, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_destroy, 25.07);
 int
 rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
 {
@@ -414,7 +414,7 @@ rte_event_vector_adapter_destroy(struct rte_event_vector_adapter *adapter)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_info_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_info_get, 25.07);
 int
 rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_adapter_info *info)
 {
@@ -429,7 +429,7 @@ rte_event_vector_adapter_info_get(uint8_t event_dev_id, struct rte_event_vector_
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_conf_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_conf_get, 25.07);
 int
 rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
 				  struct rte_event_vector_adapter_conf *conf)
@@ -441,7 +441,7 @@ rte_event_vector_adapter_conf_get(struct rte_event_vector_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_remaining, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_remaining, 25.07);
 uint8_t
 rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
 {
@@ -461,7 +461,7 @@ rte_event_vector_adapter_remaining(uint8_t event_dev_id, uint8_t event_queue_id)
 	return remaining;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_get, 25.07);
 int
 rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
 				   struct rte_event_vector_adapter_stats *stats)
@@ -476,7 +476,7 @@ rte_event_vector_adapter_stats_get(struct rte_event_vector_adapter *adapter,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_reset, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_stats_reset, 25.07);
 int
 rte_event_vector_adapter_stats_reset(struct rte_event_vector_adapter *adapter)
 {
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index b921142d7b..9325d5880d 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -30,12 +30,12 @@
 #include "eventdev_pmd.h"
 #include "eventdev_trace.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_event_logtype, INFO);
 
 static struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eventdevs)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eventdevs);
 struct rte_eventdev *rte_eventdevs = rte_event_devices;
 
 static struct rte_eventdev_global eventdev_globals = {
@@ -43,19 +43,19 @@ static struct rte_eventdev_global eventdev_globals = {
 };
 
 /* Public fastpath APIs. */
-RTE_EXPORT_SYMBOL(rte_event_fp_ops)
+RTE_EXPORT_SYMBOL(rte_event_fp_ops);
 struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
 
 /* Event dev north bound API implementation */
 
-RTE_EXPORT_SYMBOL(rte_event_dev_count)
+RTE_EXPORT_SYMBOL(rte_event_dev_count);
 uint8_t
 rte_event_dev_count(void)
 {
 	return eventdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_event_dev_get_dev_id);
 int
 rte_event_dev_get_dev_id(const char *name)
 {
@@ -80,7 +80,7 @@ rte_event_dev_get_dev_id(const char *name)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_socket_id)
+RTE_EXPORT_SYMBOL(rte_event_dev_socket_id);
 int
 rte_event_dev_socket_id(uint8_t dev_id)
 {
@@ -94,7 +94,7 @@ rte_event_dev_socket_id(uint8_t dev_id)
 	return dev->data->socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_info_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_info_get);
 int
 rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 {
@@ -123,7 +123,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_rx_adapter_caps_get);
 int
 rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 				uint32_t *caps)
@@ -150,7 +150,7 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 		: 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_timer_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_timer_adapter_caps_get);
 int
 rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 {
@@ -176,7 +176,7 @@ rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 		: 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_crypto_adapter_caps_get);
 int
 rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 				  uint32_t *caps)
@@ -205,7 +205,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
 		dev->dev_ops->crypto_adapter_caps_get(dev, cdev, caps) : 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_caps_get)
+RTE_EXPORT_SYMBOL(rte_event_eth_tx_adapter_caps_get);
 int
 rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 				uint32_t *caps)
@@ -234,7 +234,7 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 		: 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_caps_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_dma_adapter_caps_get, 23.11);
 int
 rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *caps)
 {
@@ -257,7 +257,7 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_caps_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_vector_adapter_caps_get, 25.07);
 int
 rte_event_vector_adapter_caps_get(uint8_t dev_id, uint32_t *caps)
 {
@@ -374,7 +374,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_configure)
+RTE_EXPORT_SYMBOL(rte_event_dev_configure);
 int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
@@ -577,7 +577,7 @@ is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_default_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_queue_default_conf_get);
 int
 rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
 				 struct rte_event_queue_conf *queue_conf)
@@ -638,7 +638,7 @@ is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
 }
 
 
-RTE_EXPORT_SYMBOL(rte_event_queue_setup)
+RTE_EXPORT_SYMBOL(rte_event_queue_setup);
 int
 rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		      const struct rte_event_queue_conf *queue_conf)
@@ -710,7 +710,7 @@ is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_default_conf_get)
+RTE_EXPORT_SYMBOL(rte_event_port_default_conf_get);
 int
 rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
 				 struct rte_event_port_conf *port_conf)
@@ -738,7 +738,7 @@ rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_setup)
+RTE_EXPORT_SYMBOL(rte_event_port_setup);
 int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		     const struct rte_event_port_conf *port_conf)
@@ -829,7 +829,7 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_quiesce)
+RTE_EXPORT_SYMBOL(rte_event_port_quiesce);
 void
 rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args)
@@ -850,7 +850,7 @@ rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		dev->dev_ops->port_quiesce(dev, dev->data->ports[port_id], release_cb, args);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_attr_get);
 int
 rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 		       uint32_t *attr_value)
@@ -881,7 +881,7 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_port_attr_get);
 int
 rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 			uint32_t *attr_value)
@@ -933,7 +933,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_attr_get)
+RTE_EXPORT_SYMBOL(rte_event_queue_attr_get);
 int
 rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			uint32_t *attr_value)
@@ -993,7 +993,7 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_queue_attr_set)
+RTE_EXPORT_SYMBOL(rte_event_queue_attr_set);
 int
 rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 			 uint64_t attr_value)
@@ -1022,7 +1022,7 @@ rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
 	return dev->dev_ops->queue_attr_set(dev, queue_id, attr_id, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_link)
+RTE_EXPORT_SYMBOL(rte_event_port_link);
 int
 rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 		    const uint8_t queues[], const uint8_t priorities[],
@@ -1031,7 +1031,7 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 	return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_set, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_set, 23.11);
 int
 rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
 				 const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id)
@@ -1114,7 +1114,7 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_unlink)
+RTE_EXPORT_SYMBOL(rte_event_port_unlink);
 int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 		      uint8_t queues[], uint16_t nb_unlinks)
@@ -1122,7 +1122,7 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 	return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_unlink, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_unlink, 23.11);
 int
 rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 			      uint16_t nb_unlinks, uint8_t profile_id)
@@ -1209,7 +1209,7 @@ rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_unlinks_in_progress)
+RTE_EXPORT_SYMBOL(rte_event_port_unlinks_in_progress);
 int
 rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
 {
@@ -1234,7 +1234,7 @@ rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id)
 	return dev->dev_ops->port_unlinks_in_progress(dev, dev->data->ports[port_id]);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_port_links_get)
+RTE_EXPORT_SYMBOL(rte_event_port_links_get);
 int
 rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[])
@@ -1267,7 +1267,7 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_get, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_event_port_profile_links_get, 23.11);
 int
 rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 				 uint8_t priorities[], uint8_t profile_id)
@@ -1311,7 +1311,7 @@ rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dequeue_timeout_ticks)
+RTE_EXPORT_SYMBOL(rte_event_dequeue_timeout_ticks);
 int
 rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 				 uint64_t *timeout_ticks)
@@ -1331,7 +1331,7 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 	return dev->dev_ops->timeout_ticks(dev, ns, timeout_ticks);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_service_id_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_service_id_get);
 int
 rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
 {
@@ -1351,7 +1351,7 @@ rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
 	return dev->data->service_inited ? 0 : -ESRCH;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_dump)
+RTE_EXPORT_SYMBOL(rte_event_dev_dump);
 int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
@@ -1379,7 +1379,7 @@ xstats_get_count(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_names_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_names_get);
 int
 rte_event_dev_xstats_names_get(uint8_t dev_id,
 		enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
@@ -1404,7 +1404,7 @@ rte_event_dev_xstats_names_get(uint8_t dev_id,
 }
 
 /* retrieve eventdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_get);
 int
 rte_event_dev_xstats_get(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 		uint8_t queue_port_id, const uint64_t ids[],
@@ -1420,7 +1420,7 @@ rte_event_dev_xstats_get(uint8_t dev_id, enum rte_event_dev_xstats_mode mode,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_by_name_get)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_by_name_get);
 uint64_t
 rte_event_dev_xstats_by_name_get(uint8_t dev_id, const char *name,
 		uint64_t *id)
@@ -1440,7 +1440,7 @@ rte_event_dev_xstats_by_name_get(uint8_t dev_id, const char *name,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_event_dev_xstats_reset);
 int rte_event_dev_xstats_reset(uint8_t dev_id,
 		enum rte_event_dev_xstats_mode mode, int16_t queue_port_id,
 		const uint64_t ids[], uint32_t nb_ids)
@@ -1453,10 +1453,10 @@ int rte_event_dev_xstats_reset(uint8_t dev_id,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_selftest_seqn_dynfield_offset)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_selftest_seqn_dynfield_offset);
 int rte_event_pmd_selftest_seqn_dynfield_offset = -1;
 
-RTE_EXPORT_SYMBOL(rte_event_dev_selftest)
+RTE_EXPORT_SYMBOL(rte_event_dev_selftest);
 int rte_event_dev_selftest(uint8_t dev_id)
 {
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -1477,7 +1477,7 @@ int rte_event_dev_selftest(uint8_t dev_id)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_vector_pool_create)
+RTE_EXPORT_SYMBOL(rte_event_vector_pool_create);
 struct rte_mempool *
 rte_event_vector_pool_create(const char *name, unsigned int n,
 			     unsigned int cache_size, uint16_t nb_elem,
@@ -1523,7 +1523,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_start)
+RTE_EXPORT_SYMBOL(rte_event_dev_start);
 int
 rte_event_dev_start(uint8_t dev_id)
 {
@@ -1555,7 +1555,7 @@ rte_event_dev_start(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_stop_flush_callback_register)
+RTE_EXPORT_SYMBOL(rte_event_dev_stop_flush_callback_register);
 int
 rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
 					   rte_eventdev_stop_flush_t callback,
@@ -1576,7 +1576,7 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_stop)
+RTE_EXPORT_SYMBOL(rte_event_dev_stop);
 void
 rte_event_dev_stop(uint8_t dev_id)
 {
@@ -1601,7 +1601,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_event_dev_close)
+RTE_EXPORT_SYMBOL(rte_event_dev_close);
 int
 rte_event_dev_close(uint8_t dev_id)
 {
@@ -1672,7 +1672,7 @@ eventdev_find_free_device_index(void)
 	return RTE_EVENT_MAX_DEVS;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_allocate);
 struct rte_eventdev *
 rte_event_pmd_allocate(const char *name, int socket_id)
 {
@@ -1721,7 +1721,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
 	return eventdev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_event_pmd_release);
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev)
 {
@@ -1758,7 +1758,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(event_dev_probing_finish)
+RTE_EXPORT_INTERNAL_SYMBOL(event_dev_probing_finish);
 void
 event_dev_probing_finish(struct rte_eventdev *eventdev)
 {
diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c
index 184210f380..065ac7cd63 100644
--- a/lib/fib/rte_fib.c
+++ b/lib/fib/rte_fib.c
@@ -118,7 +118,7 @@ init_dataplane(struct rte_fib *fib, __rte_unused int socket_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_add)
+RTE_EXPORT_SYMBOL(rte_fib_add);
 int
 rte_fib_add(struct rte_fib *fib, uint32_t ip, uint8_t depth, uint64_t next_hop)
 {
@@ -128,7 +128,7 @@ rte_fib_add(struct rte_fib *fib, uint32_t ip, uint8_t depth, uint64_t next_hop)
 	return fib->modify(fib, ip, depth, next_hop, RTE_FIB_ADD);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_delete)
+RTE_EXPORT_SYMBOL(rte_fib_delete);
 int
 rte_fib_delete(struct rte_fib *fib, uint32_t ip, uint8_t depth)
 {
@@ -138,7 +138,7 @@ rte_fib_delete(struct rte_fib *fib, uint32_t ip, uint8_t depth)
 	return fib->modify(fib, ip, depth, 0, RTE_FIB_DEL);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_fib_lookup_bulk);
 int
 rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips,
 	uint64_t *next_hops, int n)
@@ -150,7 +150,7 @@ rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_create)
+RTE_EXPORT_SYMBOL(rte_fib_create);
 struct rte_fib *
 rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 {
@@ -247,7 +247,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_find_existing)
+RTE_EXPORT_SYMBOL(rte_fib_find_existing);
 struct rte_fib *
 rte_fib_find_existing(const char *name)
 {
@@ -286,7 +286,7 @@ free_dataplane(struct rte_fib *fib)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_free)
+RTE_EXPORT_SYMBOL(rte_fib_free);
 void
 rte_fib_free(struct rte_fib *fib)
 {
@@ -316,21 +316,21 @@ rte_fib_free(struct rte_fib *fib)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_get_dp)
+RTE_EXPORT_SYMBOL(rte_fib_get_dp);
 void *
 rte_fib_get_dp(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->dp;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_get_rib)
+RTE_EXPORT_SYMBOL(rte_fib_get_rib);
 struct rte_rib *
 rte_fib_get_rib(struct rte_fib *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib_select_lookup)
+RTE_EXPORT_SYMBOL(rte_fib_select_lookup);
 int
 rte_fib_select_lookup(struct rte_fib *fib,
 	enum rte_fib_lookup_type type)
@@ -350,7 +350,7 @@ rte_fib_select_lookup(struct rte_fib *fib,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fib_rcu_qsbr_add, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_fib_rcu_qsbr_add, 24.11);
 int
 rte_fib_rcu_qsbr_add(struct rte_fib *fib, struct rte_fib_rcu_config *cfg)
 {
diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c
index 93a1c7197b..0b28dfee98 100644
--- a/lib/fib/rte_fib6.c
+++ b/lib/fib/rte_fib6.c
@@ -116,7 +116,7 @@ init_dataplane(struct rte_fib6 *fib, __rte_unused int socket_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_add)
+RTE_EXPORT_SYMBOL(rte_fib6_add);
 int
 rte_fib6_add(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	uint8_t depth, uint64_t next_hop)
@@ -127,7 +127,7 @@ rte_fib6_add(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	return fib->modify(fib, ip, depth, next_hop, RTE_FIB6_ADD);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_delete)
+RTE_EXPORT_SYMBOL(rte_fib6_delete);
 int
 rte_fib6_delete(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	uint8_t depth)
@@ -138,7 +138,7 @@ rte_fib6_delete(struct rte_fib6 *fib, const struct rte_ipv6_addr *ip,
 	return fib->modify(fib, ip, depth, 0, RTE_FIB6_DEL);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_fib6_lookup_bulk);
 int
 rte_fib6_lookup_bulk(struct rte_fib6 *fib,
 	const struct rte_ipv6_addr *ips,
@@ -150,7 +150,7 @@ rte_fib6_lookup_bulk(struct rte_fib6 *fib,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_create)
+RTE_EXPORT_SYMBOL(rte_fib6_create);
 struct rte_fib6 *
 rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 {
@@ -245,7 +245,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_find_existing)
+RTE_EXPORT_SYMBOL(rte_fib6_find_existing);
 struct rte_fib6 *
 rte_fib6_find_existing(const char *name)
 {
@@ -284,7 +284,7 @@ free_dataplane(struct rte_fib6 *fib)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_free)
+RTE_EXPORT_SYMBOL(rte_fib6_free);
 void
 rte_fib6_free(struct rte_fib6 *fib)
 {
@@ -314,21 +314,21 @@ rte_fib6_free(struct rte_fib6 *fib)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_get_dp)
+RTE_EXPORT_SYMBOL(rte_fib6_get_dp);
 void *
 rte_fib6_get_dp(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->dp;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_get_rib)
+RTE_EXPORT_SYMBOL(rte_fib6_get_rib);
 struct rte_rib6 *
 rte_fib6_get_rib(struct rte_fib6 *fib)
 {
 	return (fib == NULL) ? NULL : fib->rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_fib6_select_lookup)
+RTE_EXPORT_SYMBOL(rte_fib6_select_lookup);
 int
 rte_fib6_select_lookup(struct rte_fib6 *fib,
 	enum rte_fib6_lookup_type type)
diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c
index 0473d9ffb3..58c9bd702b 100644
--- a/lib/gpudev/gpudev.c
+++ b/lib/gpudev/gpudev.c
@@ -50,7 +50,7 @@ struct rte_gpu_callback {
 static rte_rwlock_t gpu_callback_lock = RTE_RWLOCK_INITIALIZER;
 static void gpu_free_callbacks(struct rte_gpu *dev);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_init, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_init, 21.11);
 int
 rte_gpu_init(size_t dev_max)
 {
@@ -78,14 +78,14 @@ rte_gpu_init(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_count_avail, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_count_avail, 21.11);
 uint16_t
 rte_gpu_count_avail(void)
 {
 	return gpu_count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_is_valid, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_is_valid, 21.11);
 bool
 rte_gpu_is_valid(int16_t dev_id)
 {
@@ -103,7 +103,7 @@ gpu_match_parent(int16_t dev_id, int16_t parent)
 	return gpus[dev_id].mpshared->info.parent == parent;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_find_next, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_find_next, 21.11);
 int16_t
 rte_gpu_find_next(int16_t dev_id, int16_t parent)
 {
@@ -139,7 +139,7 @@ gpu_get_by_id(int16_t dev_id)
 	return &gpus[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_get_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_get_by_name);
 struct rte_gpu *
 rte_gpu_get_by_name(const char *name)
 {
@@ -182,7 +182,7 @@ gpu_shared_mem_init(void)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_allocate);
 struct rte_gpu *
 rte_gpu_allocate(const char *name)
 {
@@ -244,7 +244,7 @@ rte_gpu_allocate(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_attach)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_attach);
 struct rte_gpu *
 rte_gpu_attach(const char *name)
 {
@@ -294,7 +294,7 @@ rte_gpu_attach(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_add_child, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_add_child, 21.11);
 int16_t
 rte_gpu_add_child(const char *name, int16_t parent, uint64_t child_context)
 {
@@ -317,7 +317,7 @@ rte_gpu_add_child(const char *name, int16_t parent, uint64_t child_context)
 	return dev->mpshared->info.dev_id;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_complete_new)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_complete_new);
 void
 rte_gpu_complete_new(struct rte_gpu *dev)
 {
@@ -328,7 +328,7 @@ rte_gpu_complete_new(struct rte_gpu *dev)
 	rte_gpu_notify(dev, RTE_GPU_EVENT_NEW);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_release);
 int
 rte_gpu_release(struct rte_gpu *dev)
 {
@@ -358,7 +358,7 @@ rte_gpu_release(struct rte_gpu *dev)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_close, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_close, 21.11);
 int
 rte_gpu_close(int16_t dev_id)
 {
@@ -385,7 +385,7 @@ rte_gpu_close(int16_t dev_id)
 	return firsterr;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_register, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_register, 21.11);
 int
 rte_gpu_callback_register(int16_t dev_id, enum rte_gpu_event event,
 		rte_gpu_callback_t *function, void *user_data)
@@ -445,7 +445,7 @@ rte_gpu_callback_register(int16_t dev_id, enum rte_gpu_event event,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_unregister, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_callback_unregister, 21.11);
 int
 rte_gpu_callback_unregister(int16_t dev_id, enum rte_gpu_event event,
 		rte_gpu_callback_t *function, void *user_data)
@@ -505,7 +505,7 @@ gpu_free_callbacks(struct rte_gpu *dev)
 	rte_rwlock_write_unlock(&gpu_callback_lock);
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_notify)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_gpu_notify);
 void
 rte_gpu_notify(struct rte_gpu *dev, enum rte_gpu_event event)
 {
@@ -522,7 +522,7 @@ rte_gpu_notify(struct rte_gpu *dev, enum rte_gpu_event event)
 	rte_rwlock_read_unlock(&gpu_callback_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_info_get, 21.11);
 int
 rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info)
 {
@@ -547,7 +547,7 @@ rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info)
 	return GPU_DRV_RET(dev->ops.dev_info_get(dev, info));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_alloc, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_alloc, 21.11);
 void *
 rte_gpu_mem_alloc(int16_t dev_id, size_t size, unsigned int align)
 {
@@ -592,7 +592,7 @@ rte_gpu_mem_alloc(int16_t dev_id, size_t size, unsigned int align)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_free, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_free, 21.11);
 int
 rte_gpu_mem_free(int16_t dev_id, void *ptr)
 {
@@ -616,7 +616,7 @@ rte_gpu_mem_free(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_free(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_register, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_register, 21.11);
 int
 rte_gpu_mem_register(int16_t dev_id, size_t size, void *ptr)
 {
@@ -641,7 +641,7 @@ rte_gpu_mem_register(int16_t dev_id, size_t size, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_register(dev, size, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_unregister, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_unregister, 21.11);
 int
 rte_gpu_mem_unregister(int16_t dev_id, void *ptr)
 {
@@ -665,7 +665,7 @@ rte_gpu_mem_unregister(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_unregister(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_map, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_map, 21.11);
 void *
 rte_gpu_mem_cpu_map(int16_t dev_id, size_t size, void *ptr)
 {
@@ -704,7 +704,7 @@ rte_gpu_mem_cpu_map(int16_t dev_id, size_t size, void *ptr)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_unmap, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_mem_cpu_unmap, 21.11);
 int
 rte_gpu_mem_cpu_unmap(int16_t dev_id, void *ptr)
 {
@@ -728,7 +728,7 @@ rte_gpu_mem_cpu_unmap(int16_t dev_id, void *ptr)
 	return GPU_DRV_RET(dev->ops.mem_cpu_unmap(dev, ptr));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_wmb, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_wmb, 21.11);
 int
 rte_gpu_wmb(int16_t dev_id)
 {
@@ -748,7 +748,7 @@ rte_gpu_wmb(int16_t dev_id)
 	return GPU_DRV_RET(dev->ops.wmb(dev));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_flag, 21.11);
 int
 rte_gpu_comm_create_flag(uint16_t dev_id, struct rte_gpu_comm_flag *devflag,
 		enum rte_gpu_comm_flag_type mtype)
@@ -785,7 +785,7 @@ rte_gpu_comm_create_flag(uint16_t dev_id, struct rte_gpu_comm_flag *devflag,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_flag, 21.11);
 int
 rte_gpu_comm_destroy_flag(struct rte_gpu_comm_flag *devflag)
 {
@@ -807,7 +807,7 @@ rte_gpu_comm_destroy_flag(struct rte_gpu_comm_flag *devflag)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_flag, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_flag, 21.11);
 int
 rte_gpu_comm_set_flag(struct rte_gpu_comm_flag *devflag, uint32_t val)
 {
@@ -826,7 +826,7 @@ rte_gpu_comm_set_flag(struct rte_gpu_comm_flag *devflag, uint32_t val)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_flag_value, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_flag_value, 21.11);
 int
 rte_gpu_comm_get_flag_value(struct rte_gpu_comm_flag *devflag, uint32_t *val)
 {
@@ -844,7 +844,7 @@ rte_gpu_comm_get_flag_value(struct rte_gpu_comm_flag *devflag, uint32_t *val)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_create_list, 21.11);
 struct rte_gpu_comm_list *
 rte_gpu_comm_create_list(uint16_t dev_id,
 		uint32_t num_comm_items)
@@ -968,7 +968,7 @@ rte_gpu_comm_create_list(uint16_t dev_id,
 	return comm_list;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_destroy_list, 21.11);
 int
 rte_gpu_comm_destroy_list(struct rte_gpu_comm_list *comm_list,
 		uint32_t num_comm_items)
@@ -1014,7 +1014,7 @@ rte_gpu_comm_destroy_list(struct rte_gpu_comm_list *comm_list,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_populate_list_pkts, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_populate_list_pkts, 21.11);
 int
 rte_gpu_comm_populate_list_pkts(struct rte_gpu_comm_list *comm_list_item,
 		struct rte_mbuf **mbufs, uint32_t num_mbufs)
@@ -1053,7 +1053,7 @@ rte_gpu_comm_populate_list_pkts(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_status, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_set_status, 21.11);
 int
 rte_gpu_comm_set_status(struct rte_gpu_comm_list *comm_list_item,
 		enum rte_gpu_comm_list_status status)
@@ -1068,7 +1068,7 @@ rte_gpu_comm_set_status(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_status, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_get_status, 21.11);
 int
 rte_gpu_comm_get_status(struct rte_gpu_comm_list *comm_list_item,
 		enum rte_gpu_comm_list_status *status)
@@ -1083,7 +1083,7 @@ rte_gpu_comm_get_status(struct rte_gpu_comm_list *comm_list_item,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_cleanup_list, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_gpu_comm_cleanup_list, 21.11);
 int
 rte_gpu_comm_cleanup_list(struct rte_gpu_comm_list *comm_list_item)
 {
diff --git a/lib/graph/graph.c b/lib/graph/graph.c
index 0975bd8d49..9d62599c41 100644
--- a/lib/graph/graph.c
+++ b/lib/graph/graph.c
@@ -334,7 +334,7 @@ graph_src_node_avail(struct graph *graph)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_bind)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_bind);
 int
 rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore)
 {
@@ -366,7 +366,7 @@ rte_graph_model_mcore_dispatch_core_bind(rte_graph_t id, int lcore)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_unbind)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_core_unbind);
 void
 rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id)
 {
@@ -385,7 +385,7 @@ rte_graph_model_mcore_dispatch_core_unbind(rte_graph_t id)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_lookup)
+RTE_EXPORT_SYMBOL(rte_graph_lookup);
 struct rte_graph *
 rte_graph_lookup(const char *name)
 {
@@ -399,7 +399,7 @@ rte_graph_lookup(const char *name)
 	return graph_mem_fixup_secondary(rc);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_create)
+RTE_EXPORT_SYMBOL(rte_graph_create);
 rte_graph_t
 rte_graph_create(const char *name, struct rte_graph_param *prm)
 {
@@ -504,7 +504,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_destroy)
+RTE_EXPORT_SYMBOL(rte_graph_destroy);
 int
 rte_graph_destroy(rte_graph_t id)
 {
@@ -620,7 +620,7 @@ graph_clone(struct graph *parent_graph, const char *name, struct rte_graph_param
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_clone)
+RTE_EXPORT_SYMBOL(rte_graph_clone);
 rte_graph_t
 rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm)
 {
@@ -636,7 +636,7 @@ rte_graph_clone(rte_graph_t id, const char *name, struct rte_graph_param *prm)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_from_name)
+RTE_EXPORT_SYMBOL(rte_graph_from_name);
 rte_graph_t
 rte_graph_from_name(const char *name)
 {
@@ -649,7 +649,7 @@ rte_graph_from_name(const char *name)
 	return RTE_GRAPH_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_id_to_name)
+RTE_EXPORT_SYMBOL(rte_graph_id_to_name);
 char *
 rte_graph_id_to_name(rte_graph_t id)
 {
@@ -665,7 +665,7 @@ rte_graph_id_to_name(rte_graph_t id)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_node_get)
+RTE_EXPORT_SYMBOL(rte_graph_node_get);
 struct rte_node *
 rte_graph_node_get(rte_graph_t gid, uint32_t nid)
 {
@@ -689,7 +689,7 @@ rte_graph_node_get(rte_graph_t gid, uint32_t nid)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_node_get_by_name)
+RTE_EXPORT_SYMBOL(rte_graph_node_get_by_name);
 struct rte_node *
 rte_graph_node_get_by_name(const char *graph_name, const char *node_name)
 {
@@ -712,7 +712,7 @@ rte_graph_node_get_by_name(const char *graph_name, const char *node_name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(__rte_node_stream_alloc)
+RTE_EXPORT_SYMBOL(__rte_node_stream_alloc);
 void __rte_noinline
 __rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node)
 {
@@ -728,7 +728,7 @@ __rte_node_stream_alloc(struct rte_graph *graph, struct rte_node *node)
 	node->realloc_count++;
 }
 
-RTE_EXPORT_SYMBOL(__rte_node_stream_alloc_size)
+RTE_EXPORT_SYMBOL(__rte_node_stream_alloc_size);
 void __rte_noinline
 __rte_node_stream_alloc_size(struct rte_graph *graph, struct rte_node *node,
 			     uint16_t req_size)
@@ -802,7 +802,7 @@ graph_to_dot(FILE *f, struct graph *graph)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_export)
+RTE_EXPORT_SYMBOL(rte_graph_export);
 int
 rte_graph_export(const char *name, FILE *f)
 {
@@ -840,21 +840,21 @@ graph_scan_dump(FILE *f, rte_graph_t id, bool all)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_dump)
+RTE_EXPORT_SYMBOL(rte_graph_dump);
 void
 rte_graph_dump(FILE *f, rte_graph_t id)
 {
 	graph_scan_dump(f, id, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_list_dump)
+RTE_EXPORT_SYMBOL(rte_graph_list_dump);
 void
 rte_graph_list_dump(FILE *f)
 {
 	graph_scan_dump(f, 0, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_max_count)
+RTE_EXPORT_SYMBOL(rte_graph_max_count);
 rte_graph_t
 rte_graph_max_count(void)
 {
diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c
index e3b8cccdc1..2d4f07ad80 100644
--- a/lib/graph/graph_debug.c
+++ b/lib/graph/graph_debug.c
@@ -52,7 +52,7 @@ node_dump(FILE *f, struct node *n)
 		fprintf(f, "     edge[%d] <%s>\n", i, n->next_nodes[i]);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_obj_dump)
+RTE_EXPORT_SYMBOL(rte_graph_obj_dump);
 void
 rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all)
 {
diff --git a/lib/graph/graph_feature_arc.c b/lib/graph/graph_feature_arc.c
index 823aad3e73..c7641ea619 100644
--- a/lib/graph/graph_feature_arc.c
+++ b/lib/graph/graph_feature_arc.c
@@ -53,7 +53,7 @@ static struct rte_mbuf_dynfield rte_graph_feature_arc_mbuf_desc = {
 	.align = alignof(struct rte_graph_feature_arc_mbuf_dynfields),
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_main, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_main, 25.07);
 rte_graph_feature_arc_main_t *__rte_graph_feature_arc_main;
 
 /* global feature arc list */
@@ -1062,7 +1062,7 @@ refill_fastpath_data(struct rte_graph_feature_arc *arc, uint32_t feature_bit,
 }
 
 /* feature arc initialization, public API */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_init, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_init, 25.07);
 int
 rte_graph_feature_arc_init(uint16_t num_feature_arcs)
 {
@@ -1193,7 +1193,7 @@ rte_graph_feature_arc_init(uint16_t num_feature_arcs)
 	return rc;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_create, 25.07);
 int
 rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
 			     rte_graph_feature_arc_t *_arc)
@@ -1335,7 +1335,7 @@ rte_graph_feature_arc_create(struct rte_graph_feature_arc_register *reg,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_add, 25.07);
 int
 rte_graph_feature_add(struct rte_graph_feature_register *freg)
 {
@@ -1583,7 +1583,7 @@ rte_graph_feature_add(struct rte_graph_feature_register *freg)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_lookup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_lookup, 25.07);
 int
 rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name,
 			 rte_graph_feature_t *feat)
@@ -1603,7 +1603,7 @@ rte_graph_feature_lookup(rte_graph_feature_arc_t _arc, const char *feature_name,
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_enable, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_enable, 25.07);
 int
 rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index,
 			 const char *feature_name, uint16_t app_cookie,
@@ -1678,7 +1678,7 @@ rte_graph_feature_enable(rte_graph_feature_arc_t _arc, uint32_t index,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_disable, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_disable, 25.07);
 int
 rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const char *feature_name,
 			  struct rte_rcu_qsbr *qsbr)
@@ -1796,7 +1796,7 @@ rte_graph_feature_disable(rte_graph_feature_arc_t _arc, uint32_t index, const ch
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_destroy, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_destroy, 25.07);
 int
 rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc)
 {
@@ -1861,7 +1861,7 @@ rte_graph_feature_arc_destroy(rte_graph_feature_arc_t _arc)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_cleanup, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_cleanup, 25.07);
 int
 rte_graph_feature_arc_cleanup(void)
 {
@@ -1886,7 +1886,7 @@ rte_graph_feature_arc_cleanup(void)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_lookup_by_name, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_lookup_by_name, 25.07);
 int
 rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc_t *_arc)
 {
@@ -1924,7 +1924,7 @@ rte_graph_feature_arc_lookup_by_name(const char *arc_name, rte_graph_feature_arc
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_enabled_features, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_enabled_features, 25.07);
 uint32_t
 rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc)
 {
@@ -1938,7 +1938,7 @@ rte_graph_feature_arc_num_enabled_features(rte_graph_feature_arc_t _arc)
 	return arc->runtime_enabled_features;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_features, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_num_features, 25.07);
 uint32_t
 rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc)
 {
@@ -1957,7 +1957,7 @@ rte_graph_feature_arc_num_features(rte_graph_feature_arc_t _arc)
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_name, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_name, 25.07);
 char *
 rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat)
 {
@@ -1978,7 +1978,7 @@ rte_graph_feature_arc_feature_to_name(rte_graph_feature_arc_t _arc, rte_graph_fe
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_node, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_feature_to_node, 25.07);
 int
 rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_feature_t feat,
 				      rte_node_t *node)
@@ -2005,7 +2005,7 @@ rte_graph_feature_arc_feature_to_node(rte_graph_feature_arc_t _arc, rte_graph_fe
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_register, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_arc_register, 25.07);
 void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg,
 				      const char *caller_name, int lineno)
 {
@@ -2015,7 +2015,7 @@ void __rte_graph_feature_arc_register(struct rte_graph_feature_arc_register *reg
 	STAILQ_INSERT_TAIL(&feature_arc_list, reg, next_arc);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_register, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_graph_feature_register, 25.07);
 void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
 				  const char *caller_name, int lineno)
 {
@@ -2026,7 +2026,7 @@ void __rte_graph_feature_register(struct rte_graph_feature_register *reg,
 	STAILQ_INSERT_TAIL(&feature_list, reg, next_feature);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_names_get, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_graph_feature_arc_names_get, 25.07);
 uint32_t
 rte_graph_feature_arc_names_get(char *arc_names[])
 {
diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index eac73cbf71..040fcd6725 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -376,7 +376,7 @@ expand_pattern_to_cluster(struct cluster *cluster, const char *pattern)
 	return -rte_errno;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_create)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_create);
 struct rte_graph_cluster_stats *
 rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
 {
@@ -440,7 +440,7 @@ rte_graph_cluster_stats_create(const struct rte_graph_cluster_stats_param *prm)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_destroy)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_destroy);
 void
 rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat)
 {
@@ -515,7 +515,7 @@ cluster_node_store_prev_stats(struct cluster_node *cluster)
 	stat->prev_cycles = stat->cycles;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_get)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_get);
 void
 rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
 {
@@ -537,7 +537,7 @@ rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_reset)
+RTE_EXPORT_SYMBOL(rte_graph_cluster_stats_reset);
 void
 rte_graph_cluster_stats_reset(struct rte_graph_cluster_stats *stat)
 {
diff --git a/lib/graph/node.c b/lib/graph/node.c
index cae1c809ed..76953a6e75 100644
--- a/lib/graph/node.c
+++ b/lib/graph/node.c
@@ -102,7 +102,7 @@ node_has_duplicate_entry(const char *name)
 }
 
 /* Public functions */
-RTE_EXPORT_SYMBOL(__rte_node_register)
+RTE_EXPORT_SYMBOL(__rte_node_register);
 rte_node_t
 __rte_node_register(const struct rte_node_register *reg)
 {
@@ -238,7 +238,7 @@ node_clone(struct node *node, const char *name)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_clone)
+RTE_EXPORT_SYMBOL(rte_node_clone);
 rte_node_t
 rte_node_clone(rte_node_t id, const char *name)
 {
@@ -255,7 +255,7 @@ rte_node_clone(rte_node_t id, const char *name)
 	return RTE_NODE_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_from_name)
+RTE_EXPORT_SYMBOL(rte_node_from_name);
 rte_node_t
 rte_node_from_name(const char *name)
 {
@@ -268,7 +268,7 @@ rte_node_from_name(const char *name)
 	return RTE_NODE_ID_INVALID;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_id_to_name)
+RTE_EXPORT_SYMBOL(rte_node_id_to_name);
 char *
 rte_node_id_to_name(rte_node_t id)
 {
@@ -284,7 +284,7 @@ rte_node_id_to_name(rte_node_t id)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_count)
+RTE_EXPORT_SYMBOL(rte_node_edge_count);
 rte_edge_t
 rte_node_edge_count(rte_node_t id)
 {
@@ -354,7 +354,7 @@ edge_update(struct node *node, struct node *prev, rte_edge_t from,
 	return count;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_shrink)
+RTE_EXPORT_SYMBOL(rte_node_edge_shrink);
 rte_edge_t
 rte_node_edge_shrink(rte_node_t id, rte_edge_t size)
 {
@@ -382,7 +382,7 @@ rte_node_edge_shrink(rte_node_t id, rte_edge_t size)
 	return rc;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_update)
+RTE_EXPORT_SYMBOL(rte_node_edge_update);
 rte_edge_t
 rte_node_edge_update(rte_node_t id, rte_edge_t from, const char **next_nodes,
 		     uint16_t nb_edges)
@@ -419,7 +419,7 @@ node_copy_edges(struct node *node, char *next_nodes[])
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_edge_get)
+RTE_EXPORT_SYMBOL(rte_node_edge_get);
 rte_node_t
 rte_node_edge_get(rte_node_t id, char *next_nodes[])
 {
@@ -466,21 +466,21 @@ node_scan_dump(FILE *f, rte_node_t id, bool all)
 	return;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_dump)
+RTE_EXPORT_SYMBOL(rte_node_dump);
 void
 rte_node_dump(FILE *f, rte_node_t id)
 {
 	node_scan_dump(f, id, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_node_list_dump)
+RTE_EXPORT_SYMBOL(rte_node_list_dump);
 void
 rte_node_list_dump(FILE *f)
 {
 	node_scan_dump(f, 0, true);
 }
 
-RTE_EXPORT_SYMBOL(rte_node_max_count)
+RTE_EXPORT_SYMBOL(rte_node_max_count);
 rte_node_t
 rte_node_max_count(void)
 {
@@ -517,7 +517,7 @@ node_override_process_func(rte_node_t id, rte_node_process_t process)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_free, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_free, 25.07);
 int
 rte_node_free(rte_node_t id)
 {
diff --git a/lib/graph/rte_graph_model_mcore_dispatch.c b/lib/graph/rte_graph_model_mcore_dispatch.c
index 70f0069bc1..3143b69188 100644
--- a/lib/graph/rte_graph_model_mcore_dispatch.c
+++ b/lib/graph/rte_graph_model_mcore_dispatch.c
@@ -114,7 +114,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph)
 	return false;
 }
 
-RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_node_enqueue)
+RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_node_enqueue);
 bool __rte_noinline
 __rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node,
 					      struct rte_graph_rq_head *rq)
@@ -132,7 +132,7 @@ __rte_graph_mcore_dispatch_sched_node_enqueue(struct rte_node *node,
 	return graph != NULL ? __graph_sched_node_enqueue(node, graph) : false;
 }
 
-RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_wq_process)
+RTE_EXPORT_SYMBOL(__rte_graph_mcore_dispatch_sched_wq_process);
 void
 __rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph)
 {
@@ -172,7 +172,7 @@ __rte_graph_mcore_dispatch_sched_wq_process(struct rte_graph *graph)
 	rte_mempool_put_bulk(mp, (void **)wq_nodes, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_node_lcore_affinity_set)
+RTE_EXPORT_SYMBOL(rte_graph_model_mcore_dispatch_node_lcore_affinity_set);
 int
 rte_graph_model_mcore_dispatch_node_lcore_affinity_set(const char *name, unsigned int lcore_id)
 {
diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c
index 71f8fb44ca..97bc2c2141 100644
--- a/lib/graph/rte_graph_worker.c
+++ b/lib/graph/rte_graph_worker.c
@@ -6,7 +6,7 @@
 #include "rte_graph_worker_common.h"
 #include "graph_private.h"
 
-RTE_EXPORT_SYMBOL(rte_graph_model_is_valid)
+RTE_EXPORT_SYMBOL(rte_graph_model_is_valid);
 bool
 rte_graph_model_is_valid(uint8_t model)
 {
@@ -16,7 +16,7 @@ rte_graph_model_is_valid(uint8_t model)
 	return true;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_worker_model_set)
+RTE_EXPORT_SYMBOL(rte_graph_worker_model_set);
 int
 rte_graph_worker_model_set(uint8_t model)
 {
@@ -32,7 +32,7 @@ rte_graph_worker_model_set(uint8_t model)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_graph_worker_model_get)
+RTE_EXPORT_SYMBOL(rte_graph_worker_model_get);
 uint8_t
 rte_graph_worker_model_get(struct rte_graph *graph)
 {
diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
index 578cc9b801..2285bf318e 100644
--- a/lib/gro/rte_gro.c
+++ b/lib/gro/rte_gro.c
@@ -89,7 +89,7 @@ struct gro_ctx {
 	void *tbls[RTE_GRO_TYPE_MAX_NUM];
 };
 
-RTE_EXPORT_SYMBOL(rte_gro_ctx_create)
+RTE_EXPORT_SYMBOL(rte_gro_ctx_create);
 void *
 rte_gro_ctx_create(const struct rte_gro_param *param)
 {
@@ -131,7 +131,7 @@ rte_gro_ctx_create(const struct rte_gro_param *param)
 	return gro_ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_ctx_destroy)
+RTE_EXPORT_SYMBOL(rte_gro_ctx_destroy);
 void
 rte_gro_ctx_destroy(void *ctx)
 {
@@ -151,7 +151,7 @@ rte_gro_ctx_destroy(void *ctx)
 	rte_free(gro_ctx);
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_reassemble_burst)
+RTE_EXPORT_SYMBOL(rte_gro_reassemble_burst);
 uint16_t
 rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 		uint16_t nb_pkts,
@@ -352,7 +352,7 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
 	return nb_after_gro;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_reassemble)
+RTE_EXPORT_SYMBOL(rte_gro_reassemble);
 uint16_t
 rte_gro_reassemble(struct rte_mbuf **pkts,
 		uint16_t nb_pkts,
@@ -421,7 +421,7 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
 	return unprocess_num;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_timeout_flush)
+RTE_EXPORT_SYMBOL(rte_gro_timeout_flush);
 uint16_t
 rte_gro_timeout_flush(void *ctx,
 		uint64_t timeout_cycles,
@@ -480,7 +480,7 @@ rte_gro_timeout_flush(void *ctx,
 	return num;
 }
 
-RTE_EXPORT_SYMBOL(rte_gro_get_pkt_count)
+RTE_EXPORT_SYMBOL(rte_gro_get_pkt_count);
 uint64_t
 rte_gro_get_pkt_count(void *ctx)
 {
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index cbf7365702..712221e3d3 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -25,7 +25,7 @@
 		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
-RTE_EXPORT_SYMBOL(rte_gso_segment)
+RTE_EXPORT_SYMBOL(rte_gso_segment);
 int
 rte_gso_segment(struct rte_mbuf *pkt,
 		const struct rte_gso_ctx *gso_ctx,
diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..f565874e28 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -77,7 +77,7 @@ struct __rte_hash_rcu_dq_entry {
 	uint32_t ext_bkt_idx;
 };
 
-RTE_EXPORT_SYMBOL(rte_hash_find_existing)
+RTE_EXPORT_SYMBOL(rte_hash_find_existing);
 struct rte_hash *
 rte_hash_find_existing(const char *name)
 {
@@ -110,7 +110,7 @@ rte_hash_get_last_bkt(struct rte_hash_bucket *lst_bkt)
 	return lst_bkt;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_set_cmp_func)
+RTE_EXPORT_SYMBOL(rte_hash_set_cmp_func);
 void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 {
 	h->cmp_jump_table_idx = KEY_CUSTOM;
@@ -156,7 +156,7 @@ get_alt_bucket_index(const struct rte_hash *h,
 	return (cur_bkt_idx ^ sig) & h->bucket_bitmask;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_create)
+RTE_EXPORT_SYMBOL(rte_hash_create);
 struct rte_hash *
 rte_hash_create(const struct rte_hash_parameters *params)
 {
@@ -528,7 +528,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_free)
+RTE_EXPORT_SYMBOL(rte_hash_free);
 void
 rte_hash_free(struct rte_hash *h)
 {
@@ -576,7 +576,7 @@ rte_hash_free(struct rte_hash *h)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_hash)
+RTE_EXPORT_SYMBOL(rte_hash_hash);
 hash_sig_t
 rte_hash_hash(const struct rte_hash *h, const void *key)
 {
@@ -584,7 +584,7 @@ rte_hash_hash(const struct rte_hash *h, const void *key)
 	return h->hash_func(key, h->key_len, h->hash_func_init_val);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_max_key_id)
+RTE_EXPORT_SYMBOL(rte_hash_max_key_id);
 int32_t
 rte_hash_max_key_id(const struct rte_hash *h)
 {
@@ -600,7 +600,7 @@ rte_hash_max_key_id(const struct rte_hash *h)
 		return h->entries;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_count)
+RTE_EXPORT_SYMBOL(rte_hash_count);
 int32_t
 rte_hash_count(const struct rte_hash *h)
 {
@@ -670,7 +670,7 @@ __hash_rw_reader_unlock(const struct rte_hash *h)
 		rte_rwlock_read_unlock(h->readwrite_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_reset)
+RTE_EXPORT_SYMBOL(rte_hash_reset);
 void
 rte_hash_reset(struct rte_hash *h)
 {
@@ -1254,7 +1254,7 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash);
 int32_t
 rte_hash_add_key_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1263,7 +1263,7 @@ rte_hash_add_key_with_hash(const struct rte_hash *h,
 	return __rte_hash_add_key_with_hash(h, key, sig, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key)
+RTE_EXPORT_SYMBOL(rte_hash_add_key);
 int32_t
 rte_hash_add_key(const struct rte_hash *h, const void *key)
 {
@@ -1271,7 +1271,7 @@ rte_hash_add_key(const struct rte_hash *h, const void *key)
 	return __rte_hash_add_key_with_hash(h, key, rte_hash_hash(h, key), 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash_data)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_with_hash_data);
 int
 rte_hash_add_key_with_hash_data(const struct rte_hash *h,
 			const void *key, hash_sig_t sig, void *data)
@@ -1286,7 +1286,7 @@ rte_hash_add_key_with_hash_data(const struct rte_hash *h,
 		return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_add_key_data)
+RTE_EXPORT_SYMBOL(rte_hash_add_key_data);
 int
 rte_hash_add_key_data(const struct rte_hash *h, const void *key, void *data)
 {
@@ -1480,7 +1480,7 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
 		return __rte_hash_lookup_with_hash_l(h, key, sig, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash);
 int32_t
 rte_hash_lookup_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1489,7 +1489,7 @@ rte_hash_lookup_with_hash(const struct rte_hash *h,
 	return __rte_hash_lookup_with_hash(h, key, sig, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup)
+RTE_EXPORT_SYMBOL(rte_hash_lookup);
 int32_t
 rte_hash_lookup(const struct rte_hash *h, const void *key)
 {
@@ -1497,7 +1497,7 @@ rte_hash_lookup(const struct rte_hash *h, const void *key)
 	return __rte_hash_lookup_with_hash(h, key, rte_hash_hash(h, key), NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_data);
 int
 rte_hash_lookup_with_hash_data(const struct rte_hash *h,
 			const void *key, hash_sig_t sig, void **data)
@@ -1506,7 +1506,7 @@ rte_hash_lookup_with_hash_data(const struct rte_hash *h,
 	return __rte_hash_lookup_with_hash(h, key, sig, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_data);
 int
 rte_hash_lookup_data(const struct rte_hash *h, const void *key, void **data)
 {
@@ -1574,7 +1574,7 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_rcu_qsbr_add)
+RTE_EXPORT_SYMBOL(rte_hash_rcu_qsbr_add);
 int
 rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
 {
@@ -1645,7 +1645,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_hash_rcu_qsbr_dq_reclaim, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_hash_rcu_qsbr_dq_reclaim, 24.07);
 int rte_hash_rcu_qsbr_dq_reclaim(struct rte_hash *h, unsigned int *freed, unsigned int *pending,
 				 unsigned int *available)
 {
@@ -1870,7 +1870,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_del_key_with_hash)
+RTE_EXPORT_SYMBOL(rte_hash_del_key_with_hash);
 int32_t
 rte_hash_del_key_with_hash(const struct rte_hash *h,
 			const void *key, hash_sig_t sig)
@@ -1879,7 +1879,7 @@ rte_hash_del_key_with_hash(const struct rte_hash *h,
 	return __rte_hash_del_key_with_hash(h, key, sig);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_del_key)
+RTE_EXPORT_SYMBOL(rte_hash_del_key);
 int32_t
 rte_hash_del_key(const struct rte_hash *h, const void *key)
 {
@@ -1887,7 +1887,7 @@ rte_hash_del_key(const struct rte_hash *h, const void *key)
 	return __rte_hash_del_key_with_hash(h, key, rte_hash_hash(h, key));
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_get_key_with_position)
+RTE_EXPORT_SYMBOL(rte_hash_get_key_with_position);
 int
 rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
 			       void **key)
@@ -1908,7 +1908,7 @@ rte_hash_get_key_with_position(const struct rte_hash *h, const int32_t position,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_free_key_with_position)
+RTE_EXPORT_SYMBOL(rte_hash_free_key_with_position);
 int
 rte_hash_free_key_with_position(const struct rte_hash *h,
 				const int32_t position)
@@ -2421,7 +2421,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 					 hit_mask, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk);
 int
 rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 		      uint32_t num_keys, int32_t *positions)
@@ -2434,7 +2434,7 @@ rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_bulk_data);
 int
 rte_hash_lookup_bulk_data(const struct rte_hash *h, const void **keys,
 		      uint32_t num_keys, uint64_t *hit_mask, void *data[])
@@ -2535,7 +2535,7 @@ __rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 				num_keys, positions, hit_mask, data);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk);
 int
 rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 		hash_sig_t *sig, uint32_t num_keys, int32_t *positions)
@@ -2550,7 +2550,7 @@ rte_hash_lookup_with_hash_bulk(const struct rte_hash *h, const void **keys,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk_data)
+RTE_EXPORT_SYMBOL(rte_hash_lookup_with_hash_bulk_data);
 int
 rte_hash_lookup_with_hash_bulk_data(const struct rte_hash *h,
 		const void **keys, hash_sig_t *sig,
@@ -2570,7 +2570,7 @@ rte_hash_lookup_with_hash_bulk_data(const struct rte_hash *h,
 	return rte_popcount64(*hit_mask);
 }
 
-RTE_EXPORT_SYMBOL(rte_hash_iterate)
+RTE_EXPORT_SYMBOL(rte_hash_iterate);
 int32_t
 rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32_t *next)
 {
diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c
index 38b15a14d1..c755f29cad 100644
--- a/lib/hash/rte_fbk_hash.c
+++ b/lib/hash/rte_fbk_hash.c
@@ -42,7 +42,7 @@ EAL_REGISTER_TAILQ(rte_fbk_hash_tailq)
  * @return
  *   pointer to hash table structure or NULL on error.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_find_existing)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_find_existing);
 struct rte_fbk_hash_table *
 rte_fbk_hash_find_existing(const char *name)
 {
@@ -77,7 +77,7 @@ rte_fbk_hash_find_existing(const char *name)
  *   Pointer to hash table structure that is used in future hash table
  *   operations, or NULL on error.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_create)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_create);
 struct rte_fbk_hash_table *
 rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
 {
@@ -180,7 +180,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
  * @param ht
  *   Hash table to deallocate.
  */
-RTE_EXPORT_SYMBOL(rte_fbk_hash_free)
+RTE_EXPORT_SYMBOL(rte_fbk_hash_free);
 void
 rte_fbk_hash_free(struct rte_fbk_hash_table *ht)
 {
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
index 9fe90d6425..21535e8916 100644
--- a/lib/hash/rte_hash_crc.c
+++ b/lib/hash/rte_hash_crc.c
@@ -13,7 +13,7 @@ RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
 #define HASH_CRC_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH_CRC, "" __VA_ARGS__)
 
-RTE_EXPORT_SYMBOL(rte_hash_crc32_alg)
+RTE_EXPORT_SYMBOL(rte_hash_crc32_alg);
 uint8_t rte_hash_crc32_alg = CRC32_SW;
 
 /**
@@ -28,7 +28,7 @@ uint8_t rte_hash_crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-RTE_EXPORT_SYMBOL(rte_hash_crc_set_alg)
+RTE_EXPORT_SYMBOL(rte_hash_crc_set_alg);
 void
 rte_hash_crc_set_alg(uint8_t alg)
 {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 6c662bf14f..fe0eb44829 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -71,7 +71,7 @@ struct rte_thash_ctx {
 	uint8_t		hash_key[];
 };
 
-RTE_EXPORT_SYMBOL(rte_thash_gfni_supported)
+RTE_EXPORT_SYMBOL(rte_thash_gfni_supported);
 int
 rte_thash_gfni_supported(void)
 {
@@ -85,7 +85,7 @@ rte_thash_gfni_supported(void)
 	return 0;
 };
 
-RTE_EXPORT_SYMBOL(rte_thash_complete_matrix)
+RTE_EXPORT_SYMBOL(rte_thash_complete_matrix);
 void
 rte_thash_complete_matrix(uint64_t *matrixes, const uint8_t *rss_key, int size)
 {
@@ -206,7 +206,7 @@ free_lfsr(struct thash_lfsr *lfsr)
 		rte_free(lfsr);
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_init_ctx)
+RTE_EXPORT_SYMBOL(rte_thash_init_ctx);
 struct rte_thash_ctx *
 rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 	uint8_t *key, uint32_t flags)
@@ -297,7 +297,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_find_existing)
+RTE_EXPORT_SYMBOL(rte_thash_find_existing);
 struct rte_thash_ctx *
 rte_thash_find_existing(const char *name)
 {
@@ -324,7 +324,7 @@ rte_thash_find_existing(const char *name)
 	return ctx;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_free_ctx)
+RTE_EXPORT_SYMBOL(rte_thash_free_ctx);
 void
 rte_thash_free_ctx(struct rte_thash_ctx *ctx)
 {
@@ -546,7 +546,7 @@ insert_after(struct rte_thash_ctx *ctx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_add_helper)
+RTE_EXPORT_SYMBOL(rte_thash_add_helper);
 int
 rte_thash_add_helper(struct rte_thash_ctx *ctx, const char *name, uint32_t len,
 	uint32_t offset)
@@ -637,7 +637,7 @@ rte_thash_add_helper(struct rte_thash_ctx *ctx, const char *name, uint32_t len,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_helper)
+RTE_EXPORT_SYMBOL(rte_thash_get_helper);
 struct rte_thash_subtuple_helper *
 rte_thash_get_helper(struct rte_thash_ctx *ctx, const char *name)
 {
@@ -654,7 +654,7 @@ rte_thash_get_helper(struct rte_thash_ctx *ctx, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_complement)
+RTE_EXPORT_SYMBOL(rte_thash_get_complement);
 uint32_t
 rte_thash_get_complement(struct rte_thash_subtuple_helper *h,
 	uint32_t hash, uint32_t desired_hash)
@@ -662,14 +662,14 @@ rte_thash_get_complement(struct rte_thash_subtuple_helper *h,
 	return h->compl_table[(hash ^ desired_hash) & h->lsb_msk];
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_key)
+RTE_EXPORT_SYMBOL(rte_thash_get_key);
 const uint8_t *
 rte_thash_get_key(struct rte_thash_ctx *ctx)
 {
 	return ctx->hash_key;
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_get_gfni_matrices)
+RTE_EXPORT_SYMBOL(rte_thash_get_gfni_matrices);
 const uint64_t *
 rte_thash_get_gfni_matrices(struct rte_thash_ctx *ctx)
 {
@@ -765,7 +765,7 @@ write_unaligned_bits(uint8_t *ptr, int len, int offset, uint32_t val)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_thash_adjust_tuple)
+RTE_EXPORT_SYMBOL(rte_thash_adjust_tuple);
 int
 rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	struct rte_thash_subtuple_helper *h,
@@ -835,7 +835,7 @@ rte_thash_adjust_tuple(struct rte_thash_ctx *ctx,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_thash_gen_key, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_thash_gen_key, 24.11);
 int
 rte_thash_gen_key(uint8_t *key, size_t key_len, size_t reta_sz_log,
 	uint32_t entropy_start, size_t entropy_sz)
diff --git a/lib/hash/rte_thash_gf2_poly_math.c b/lib/hash/rte_thash_gf2_poly_math.c
index ddf4dd863b..05cd0d5f37 100644
--- a/lib/hash/rte_thash_gf2_poly_math.c
+++ b/lib/hash/rte_thash_gf2_poly_math.c
@@ -242,7 +242,7 @@ thash_test_poly_order(uint32_t poly, int degree)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(thash_get_rand_poly)
+RTE_EXPORT_INTERNAL_SYMBOL(thash_get_rand_poly);
 uint32_t
 thash_get_rand_poly(uint32_t poly_degree)
 {
diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c
index 2003c7b3db..b82b9bba63 100644
--- a/lib/hash/rte_thash_gfni.c
+++ b/lib/hash/rte_thash_gfni.c
@@ -13,7 +13,7 @@ RTE_LOG_REGISTER_SUFFIX(hash_gfni_logtype, gfni, INFO);
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_stub)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_stub);
 uint32_t
 rte_thash_gfni_stub(const uint64_t *mtrx __rte_unused,
 	const uint8_t *key __rte_unused, int len __rte_unused)
@@ -29,7 +29,7 @@ rte_thash_gfni_stub(const uint64_t *mtrx __rte_unused,
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_bulk_stub)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_thash_gfni_bulk_stub);
 void
 rte_thash_gfni_bulk_stub(const uint64_t *mtrx __rte_unused,
 	int len __rte_unused, uint8_t *tuple[] __rte_unused,
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index ee9aa93027..b004302468 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -15,7 +15,7 @@ RTE_LOG_REGISTER_DEFAULT(ipfrag_logtype, INFO);
 #define	IP_FRAG_HASH_FNUM	2
 
 /* free mbufs from death row */
-RTE_EXPORT_SYMBOL(rte_ip_frag_free_death_row)
+RTE_EXPORT_SYMBOL(rte_ip_frag_free_death_row);
 void
 rte_ip_frag_free_death_row(struct rte_ip_frag_death_row *dr,
 		uint32_t prefetch)
@@ -40,7 +40,7 @@ rte_ip_frag_free_death_row(struct rte_ip_frag_death_row *dr,
 }
 
 /* create fragmentation table */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_create)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_create);
 struct rte_ip_frag_tbl *
 rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
 	uint32_t max_entries, uint64_t max_cycles, int socket_id)
@@ -85,7 +85,7 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
 }
 
 /* delete fragmentation table */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_destroy)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_destroy);
 void
 rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
 {
@@ -99,7 +99,7 @@ rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
 }
 
 /* dump frag table statistics to file */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_statistics_dump)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_statistics_dump);
 void
 rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
 {
@@ -129,7 +129,7 @@ rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
 }
 
 /* Delete expired fragments */
-RTE_EXPORT_SYMBOL(rte_ip_frag_table_del_expired_entries)
+RTE_EXPORT_SYMBOL(rte_ip_frag_table_del_expired_entries);
 void
 rte_ip_frag_table_del_expired_entries(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, uint64_t tms)
diff --git a/lib/ip_frag/rte_ipv4_fragmentation.c b/lib/ip_frag/rte_ipv4_fragmentation.c
index 435a6e13bb..065e49780f 100644
--- a/lib/ip_frag/rte_ipv4_fragmentation.c
+++ b/lib/ip_frag/rte_ipv4_fragmentation.c
@@ -105,7 +105,7 @@ static inline uint16_t __create_ipopt_frag_hdr(uint8_t *iph,
  *   in the pkts_out array.
  *   Otherwise - (-1) * <errno>.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_fragment_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_fragment_packet);
 int32_t
 rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
@@ -288,7 +288,7 @@ rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,
  *   in the pkts_out array.
  *   Otherwise - (-1) * errno.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_fragment_copy_nonseg_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_fragment_copy_nonseg_packet);
 int32_t
 rte_ipv4_fragment_copy_nonseg_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
diff --git a/lib/ip_frag/rte_ipv4_reassembly.c b/lib/ip_frag/rte_ipv4_reassembly.c
index 3c8ae113ba..fca05ddc9e 100644
--- a/lib/ip_frag/rte_ipv4_reassembly.c
+++ b/lib/ip_frag/rte_ipv4_reassembly.c
@@ -95,7 +95,7 @@ ipv4_frag_reassemble(struct ip_frag_pkt *fp)
  *   - an error occurred.
  *   - not all fragments of the packet are collected yet.
  */
-RTE_EXPORT_SYMBOL(rte_ipv4_frag_reassemble_packet)
+RTE_EXPORT_SYMBOL(rte_ipv4_frag_reassemble_packet);
 struct rte_mbuf *
 rte_ipv4_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index c81f2402e3..573732f596 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -64,7 +64,7 @@ __free_fragments(struct rte_mbuf *mb[], uint32_t num)
  *   in the pkts_out array.
  *   Otherwise - (-1) * <errno>.
  */
-RTE_EXPORT_SYMBOL(rte_ipv6_fragment_packet)
+RTE_EXPORT_SYMBOL(rte_ipv6_fragment_packet);
 int32_t
 rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
 	struct rte_mbuf **pkts_out,
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index 0e809a01e5..ca37d03dee 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -133,7 +133,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
  */
 #define MORE_FRAGS(x) (((x) & 0x100) >> 8)
 #define FRAG_OFFSET(x) (rte_cpu_to_be_16(x) >> 3)
-RTE_EXPORT_SYMBOL(rte_ipv6_frag_reassemble_packet)
+RTE_EXPORT_SYMBOL(rte_ipv6_frag_reassemble_packet);
 struct rte_mbuf *
 rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
 	struct rte_ip_frag_death_row *dr, struct rte_mbuf *mb, uint64_t tms,
diff --git a/lib/ipsec/ipsec_sad.c b/lib/ipsec/ipsec_sad.c
index 15ea868f77..fe5d25a94f 100644
--- a/lib/ipsec/ipsec_sad.c
+++ b/lib/ipsec/ipsec_sad.c
@@ -114,7 +114,7 @@ add_specific(struct rte_ipsec_sad *sad, const void *key,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_add)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_add);
 int
 rte_ipsec_sad_add(struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *key,
@@ -214,7 +214,7 @@ del_specific(struct rte_ipsec_sad *sad, const void *key, int key_type)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_del)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_del);
 int
 rte_ipsec_sad_del(struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *key,
@@ -254,7 +254,7 @@ rte_ipsec_sad_del(struct rte_ipsec_sad *sad,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_create)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_create);
 struct rte_ipsec_sad *
 rte_ipsec_sad_create(const char *name, const struct rte_ipsec_sad_conf *conf)
 {
@@ -384,7 +384,7 @@ rte_ipsec_sad_create(const char *name, const struct rte_ipsec_sad_conf *conf)
 	return sad;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_find_existing)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_find_existing);
 struct rte_ipsec_sad *
 rte_ipsec_sad_find_existing(const char *name)
 {
@@ -419,7 +419,7 @@ rte_ipsec_sad_find_existing(const char *name)
 	return sad;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_destroy)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_destroy);
 void
 rte_ipsec_sad_destroy(struct rte_ipsec_sad *sad)
 {
@@ -542,7 +542,7 @@ __ipsec_sad_lookup(const struct rte_ipsec_sad *sad,
 	return found;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sad_lookup)
+RTE_EXPORT_SYMBOL(rte_ipsec_sad_lookup);
 int
 rte_ipsec_sad_lookup(const struct rte_ipsec_sad *sad,
 		const union rte_ipsec_sad_key *keys[], void *sa[], uint32_t n)
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
index a9b6f05270..4cff0e2438 100644
--- a/lib/ipsec/ipsec_telemetry.c
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -205,7 +205,7 @@ handle_telemetry_cmd_ipsec_sa_details(const char *cmd __rte_unused,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_add)
+RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_add);
 int
 rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
 {
@@ -218,7 +218,7 @@ rte_ipsec_telemetry_sa_add(const struct rte_ipsec_sa *sa)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_del)
+RTE_EXPORT_SYMBOL(rte_ipsec_telemetry_sa_del);
 void
 rte_ipsec_telemetry_sa_del(const struct rte_ipsec_sa *sa)
 {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index 4f589f3f3f..a03e106bb1 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -85,7 +85,7 @@ fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_type)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_type);
 uint64_t
 rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 {
@@ -158,7 +158,7 @@ ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 	return sz;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_fini)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_fini);
 void
 rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 {
@@ -528,7 +528,7 @@ fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_size)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_size);
 int
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
@@ -549,7 +549,7 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 	return ipsec_sa_size(type, &wsz, &nb);
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_sa_init)
+RTE_EXPORT_SYMBOL(rte_ipsec_sa_init);
 int
 rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
diff --git a/lib/ipsec/ses.c b/lib/ipsec/ses.c
index 224e752d05..7b137ca9b6 100644
--- a/lib/ipsec/ses.c
+++ b/lib/ipsec/ses.c
@@ -29,7 +29,7 @@ session_check(struct rte_ipsec_session *ss)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ipsec_session_prepare)
+RTE_EXPORT_SYMBOL(rte_ipsec_session_prepare);
 int
 rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
 {
diff --git a/lib/jobstats/rte_jobstats.c b/lib/jobstats/rte_jobstats.c
index 20a4f1391a..4729316e08 100644
--- a/lib/jobstats/rte_jobstats.c
+++ b/lib/jobstats/rte_jobstats.c
@@ -64,7 +64,7 @@ default_update_function(struct rte_jobstats *job, int64_t result)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_init)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_init);
 int
 rte_jobstats_context_init(struct rte_jobstats_context *ctx)
 {
@@ -79,7 +79,7 @@ rte_jobstats_context_init(struct rte_jobstats_context *ctx)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_start)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_start);
 void
 rte_jobstats_context_start(struct rte_jobstats_context *ctx)
 {
@@ -92,7 +92,7 @@ rte_jobstats_context_start(struct rte_jobstats_context *ctx)
 	ctx->state_time = now;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_finish)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_finish);
 void
 rte_jobstats_context_finish(struct rte_jobstats_context *ctx)
 {
@@ -106,7 +106,7 @@ rte_jobstats_context_finish(struct rte_jobstats_context *ctx)
 	ctx->state_time = now;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_context_reset)
+RTE_EXPORT_SYMBOL(rte_jobstats_context_reset);
 void
 rte_jobstats_context_reset(struct rte_jobstats_context *ctx)
 {
@@ -118,14 +118,14 @@ rte_jobstats_context_reset(struct rte_jobstats_context *ctx)
 	ctx->loop_cnt = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_target)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_target);
 void
 rte_jobstats_set_target(struct rte_jobstats *job, int64_t target)
 {
 	job->target = target;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_start)
+RTE_EXPORT_SYMBOL(rte_jobstats_start);
 int
 rte_jobstats_start(struct rte_jobstats_context *ctx, struct rte_jobstats *job)
 {
@@ -145,7 +145,7 @@ rte_jobstats_start(struct rte_jobstats_context *ctx, struct rte_jobstats *job)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_abort)
+RTE_EXPORT_SYMBOL(rte_jobstats_abort);
 int
 rte_jobstats_abort(struct rte_jobstats *job)
 {
@@ -166,7 +166,7 @@ rte_jobstats_abort(struct rte_jobstats *job)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_finish)
+RTE_EXPORT_SYMBOL(rte_jobstats_finish);
 int
 rte_jobstats_finish(struct rte_jobstats *job, int64_t job_value)
 {
@@ -203,7 +203,7 @@ rte_jobstats_finish(struct rte_jobstats *job, int64_t job_value)
 	return need_update;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_period)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_period);
 void
 rte_jobstats_set_period(struct rte_jobstats *job, uint64_t period,
 		uint8_t saturate)
@@ -218,7 +218,7 @@ rte_jobstats_set_period(struct rte_jobstats *job, uint64_t period,
 	job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_min)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_min);
 void
 rte_jobstats_set_min(struct rte_jobstats *job, uint64_t period)
 {
@@ -227,7 +227,7 @@ rte_jobstats_set_min(struct rte_jobstats *job, uint64_t period)
 		job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_max)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_max);
 void
 rte_jobstats_set_max(struct rte_jobstats *job, uint64_t period)
 {
@@ -236,7 +236,7 @@ rte_jobstats_set_max(struct rte_jobstats *job, uint64_t period)
 		job->period = period;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_init)
+RTE_EXPORT_SYMBOL(rte_jobstats_init);
 int
 rte_jobstats_init(struct rte_jobstats *job, const char *name,
 		uint64_t min_period, uint64_t max_period, uint64_t initial_period,
@@ -257,7 +257,7 @@ rte_jobstats_init(struct rte_jobstats *job, const char *name,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_set_update_period_function)
+RTE_EXPORT_SYMBOL(rte_jobstats_set_update_period_function);
 void
 rte_jobstats_set_update_period_function(struct rte_jobstats *job,
 		rte_job_update_period_cb_t update_period_cb)
@@ -268,7 +268,7 @@ rte_jobstats_set_update_period_function(struct rte_jobstats *job,
 	job->update_period_cb = update_period_cb;
 }
 
-RTE_EXPORT_SYMBOL(rte_jobstats_reset)
+RTE_EXPORT_SYMBOL(rte_jobstats_reset);
 void
 rte_jobstats_reset(struct rte_jobstats *job)
 {
diff --git a/lib/kvargs/rte_kvargs.c b/lib/kvargs/rte_kvargs.c
index 4e3198b33f..d1aa30b96f 100644
--- a/lib/kvargs/rte_kvargs.c
+++ b/lib/kvargs/rte_kvargs.c
@@ -152,7 +152,7 @@ check_for_valid_keys(struct rte_kvargs *kvlist,
  * E.g. given a list = { rx = 0, rx = 1, tx = 2 } the number of args for
  * arg "rx" will be 2.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_count)
+RTE_EXPORT_SYMBOL(rte_kvargs_count);
 unsigned
 rte_kvargs_count(const struct rte_kvargs *kvlist, const char *key_match)
 {
@@ -195,7 +195,7 @@ kvargs_process_common(const struct rte_kvargs *kvlist, const char *key_match,
 /*
  * For each matching key in key=value, call the given handler function.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_process)
+RTE_EXPORT_SYMBOL(rte_kvargs_process);
 int
 rte_kvargs_process(const struct rte_kvargs *kvlist, const char *key_match, arg_handler_t handler,
 		   void *opaque_arg)
@@ -206,7 +206,7 @@ rte_kvargs_process(const struct rte_kvargs *kvlist, const char *key_match, arg_h
 /*
  * For each matching key in key=value or only-key, call the given handler function.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_process_opt)
+RTE_EXPORT_SYMBOL(rte_kvargs_process_opt);
 int
 rte_kvargs_process_opt(const struct rte_kvargs *kvlist, const char *key_match,
 		       arg_handler_t handler, void *opaque_arg)
@@ -215,7 +215,7 @@ rte_kvargs_process_opt(const struct rte_kvargs *kvlist, const char *key_match,
 }
 
 /* free the rte_kvargs structure */
-RTE_EXPORT_SYMBOL(rte_kvargs_free)
+RTE_EXPORT_SYMBOL(rte_kvargs_free);
 void
 rte_kvargs_free(struct rte_kvargs *kvlist)
 {
@@ -227,7 +227,7 @@ rte_kvargs_free(struct rte_kvargs *kvlist)
 }
 
 /* Lookup a value in an rte_kvargs list by its key and value. */
-RTE_EXPORT_SYMBOL(rte_kvargs_get_with_value)
+RTE_EXPORT_SYMBOL(rte_kvargs_get_with_value);
 const char *
 rte_kvargs_get_with_value(const struct rte_kvargs *kvlist, const char *key,
 			  const char *value)
@@ -247,7 +247,7 @@ rte_kvargs_get_with_value(const struct rte_kvargs *kvlist, const char *key,
 }
 
 /* Lookup a value in an rte_kvargs list by its key. */
-RTE_EXPORT_SYMBOL(rte_kvargs_get)
+RTE_EXPORT_SYMBOL(rte_kvargs_get);
 const char *
 rte_kvargs_get(const struct rte_kvargs *kvlist, const char *key)
 {
@@ -261,7 +261,7 @@ rte_kvargs_get(const struct rte_kvargs *kvlist, const char *key)
  * an allocated structure that contains a key/value list. Also
  * check if only valid keys were used.
  */
-RTE_EXPORT_SYMBOL(rte_kvargs_parse)
+RTE_EXPORT_SYMBOL(rte_kvargs_parse);
 struct rte_kvargs *
 rte_kvargs_parse(const char *args, const char * const valid_keys[])
 {
@@ -285,7 +285,7 @@ rte_kvargs_parse(const char *args, const char * const valid_keys[])
 	return kvlist;
 }
 
-RTE_EXPORT_SYMBOL(rte_kvargs_parse_delim)
+RTE_EXPORT_SYMBOL(rte_kvargs_parse_delim);
 struct rte_kvargs *
 rte_kvargs_parse_delim(const char *args, const char * const valid_keys[],
 		       const char *valid_ends)
diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c
index f61d5a273f..5437258219 100644
--- a/lib/latencystats/rte_latencystats.c
+++ b/lib/latencystats/rte_latencystats.c
@@ -116,7 +116,7 @@ latencystats_collect(uint64_t values[])
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_update)
+RTE_EXPORT_SYMBOL(rte_latencystats_update);
 int32_t
 rte_latencystats_update(void)
 {
@@ -256,7 +256,7 @@ calc_latency(uint16_t pid __rte_unused,
 	return nb_pkts;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_init)
+RTE_EXPORT_SYMBOL(rte_latencystats_init);
 int
 rte_latencystats_init(uint64_t app_samp_intvl,
 		rte_latency_stats_flow_type_fn user_cb)
@@ -349,7 +349,7 @@ rte_latencystats_init(uint64_t app_samp_intvl,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_uninit)
+RTE_EXPORT_SYMBOL(rte_latencystats_uninit);
 int
 rte_latencystats_uninit(void)
 {
@@ -396,7 +396,7 @@ rte_latencystats_uninit(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_get_names)
+RTE_EXPORT_SYMBOL(rte_latencystats_get_names);
 int
 rte_latencystats_get_names(struct rte_metric_name *names, uint16_t size)
 {
@@ -412,7 +412,7 @@ rte_latencystats_get_names(struct rte_metric_name *names, uint16_t size)
 	return NUM_LATENCY_STATS;
 }
 
-RTE_EXPORT_SYMBOL(rte_latencystats_get)
+RTE_EXPORT_SYMBOL(rte_latencystats_get);
 int
 rte_latencystats_get(struct rte_metric_value *values, uint16_t size)
 {
diff --git a/lib/log/log.c b/lib/log/log.c
index 8ad5250a13..1e8e98944f 100644
--- a/lib/log/log.c
+++ b/lib/log/log.c
@@ -79,7 +79,7 @@ struct log_cur_msg {
 static RTE_DEFINE_PER_LCORE(struct log_cur_msg, log_cur_msg);
 
 /* Change the stream that will be used by logging system */
-RTE_EXPORT_SYMBOL(rte_openlog_stream)
+RTE_EXPORT_SYMBOL(rte_openlog_stream);
 int
 rte_openlog_stream(FILE *f)
 {
@@ -91,7 +91,7 @@ rte_openlog_stream(FILE *f)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_get_stream)
+RTE_EXPORT_SYMBOL(rte_log_get_stream);
 FILE *
 rte_log_get_stream(void)
 {
@@ -101,7 +101,7 @@ rte_log_get_stream(void)
 }
 
 /* Set global log level */
-RTE_EXPORT_SYMBOL(rte_log_set_global_level)
+RTE_EXPORT_SYMBOL(rte_log_set_global_level);
 void
 rte_log_set_global_level(uint32_t level)
 {
@@ -109,14 +109,14 @@ rte_log_set_global_level(uint32_t level)
 }
 
 /* Get global log level */
-RTE_EXPORT_SYMBOL(rte_log_get_global_level)
+RTE_EXPORT_SYMBOL(rte_log_get_global_level);
 uint32_t
 rte_log_get_global_level(void)
 {
 	return rte_logs.level;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_get_level)
+RTE_EXPORT_SYMBOL(rte_log_get_level);
 int
 rte_log_get_level(uint32_t type)
 {
@@ -126,7 +126,7 @@ rte_log_get_level(uint32_t type)
 	return rte_logs.dynamic_types[type].loglevel;
 }
 
-RTE_EXPORT_SYMBOL(rte_log_can_log)
+RTE_EXPORT_SYMBOL(rte_log_can_log);
 bool
 rte_log_can_log(uint32_t logtype, uint32_t level)
 {
@@ -160,7 +160,7 @@ logtype_set_level(uint32_t type, uint32_t level)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_log_set_level)
+RTE_EXPORT_SYMBOL(rte_log_set_level);
 int
 rte_log_set_level(uint32_t type, uint32_t level)
 {
@@ -175,7 +175,7 @@ rte_log_set_level(uint32_t type, uint32_t level)
 }
 
 /* set log level by regular expression */
-RTE_EXPORT_SYMBOL(rte_log_set_level_regexp)
+RTE_EXPORT_SYMBOL(rte_log_set_level_regexp);
 int
 rte_log_set_level_regexp(const char *regex, uint32_t level)
 {
@@ -234,7 +234,7 @@ log_save_level(uint32_t priority, const char *regex, const char *pattern)
 	return -1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_regexp)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_regexp);
 int
 eal_log_save_regexp(const char *regex, uint32_t level)
 {
@@ -242,7 +242,7 @@ eal_log_save_regexp(const char *regex, uint32_t level)
 }
 
 /* set log level based on globbing pattern */
-RTE_EXPORT_SYMBOL(rte_log_set_level_pattern)
+RTE_EXPORT_SYMBOL(rte_log_set_level_pattern);
 int
 rte_log_set_level_pattern(const char *pattern, uint32_t level)
 {
@@ -262,7 +262,7 @@ rte_log_set_level_pattern(const char *pattern, uint32_t level)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_pattern)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_save_pattern);
 int
 eal_log_save_pattern(const char *pattern, uint32_t level)
 {
@@ -270,14 +270,14 @@ eal_log_save_pattern(const char *pattern, uint32_t level)
 }
 
 /* get the current loglevel for the message being processed */
-RTE_EXPORT_SYMBOL(rte_log_cur_msg_loglevel)
+RTE_EXPORT_SYMBOL(rte_log_cur_msg_loglevel);
 int rte_log_cur_msg_loglevel(void)
 {
 	return RTE_PER_LCORE(log_cur_msg).loglevel;
 }
 
 /* get the current logtype for the message being processed */
-RTE_EXPORT_SYMBOL(rte_log_cur_msg_logtype)
+RTE_EXPORT_SYMBOL(rte_log_cur_msg_logtype);
 int rte_log_cur_msg_logtype(void)
 {
 	return RTE_PER_LCORE(log_cur_msg).logtype;
@@ -329,7 +329,7 @@ log_register(const char *name, uint32_t level)
 }
 
 /* register an extended log type */
-RTE_EXPORT_SYMBOL(rte_log_register)
+RTE_EXPORT_SYMBOL(rte_log_register);
 int
 rte_log_register(const char *name)
 {
@@ -337,7 +337,7 @@ rte_log_register(const char *name)
 }
 
 /* Register an extended log type and try to pick its level from EAL options */
-RTE_EXPORT_SYMBOL(rte_log_register_type_and_pick_level)
+RTE_EXPORT_SYMBOL(rte_log_register_type_and_pick_level);
 int
 rte_log_register_type_and_pick_level(const char *name, uint32_t level_def)
 {
@@ -400,7 +400,7 @@ RTE_INIT_PRIO(log_init, LOG)
 	rte_logs.dynamic_types_len = RTE_LOGTYPE_FIRST_EXT_ID;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_level2str)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_level2str);
 const char *
 eal_log_level2str(uint32_t level)
 {
@@ -434,7 +434,7 @@ log_type_compare(const void *a, const void *b)
 }
 
 /* Dump name of each logtype, one per line. */
-RTE_EXPORT_SYMBOL(rte_log_list_types)
+RTE_EXPORT_SYMBOL(rte_log_list_types);
 void
 rte_log_list_types(FILE *out, const char *prefix)
 {
@@ -464,7 +464,7 @@ rte_log_list_types(FILE *out, const char *prefix)
 }
 
 /* dump global level and registered log types */
-RTE_EXPORT_SYMBOL(rte_log_dump)
+RTE_EXPORT_SYMBOL(rte_log_dump);
 void
 rte_log_dump(FILE *f)
 {
@@ -486,7 +486,7 @@ rte_log_dump(FILE *f)
  * Generates a log message The message will be sent in the stream
  * defined by the previous call to rte_openlog_stream().
  */
-RTE_EXPORT_SYMBOL(rte_vlog)
+RTE_EXPORT_SYMBOL(rte_vlog);
 int
 rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
 {
@@ -512,7 +512,7 @@ rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
  * defined by the previous call to rte_openlog_stream().
  * No need to check level here, done by rte_vlog().
  */
-RTE_EXPORT_SYMBOL(rte_log)
+RTE_EXPORT_SYMBOL(rte_log);
 int
 rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
 {
@@ -528,7 +528,7 @@ rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
 /*
  * Called by rte_eal_init
  */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_init)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_init);
 void
 eal_log_init(const char *id)
 {
@@ -574,7 +574,7 @@ eal_log_init(const char *id)
 /*
  * Called by eal_cleanup
  */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_log_cleanup)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_log_cleanup);
 void
 rte_eal_log_cleanup(void)
 {
diff --git a/lib/log/log_color.c b/lib/log/log_color.c
index 690a27f96e..cf1af6483f 100644
--- a/lib/log/log_color.c
+++ b/lib/log/log_color.c
@@ -100,7 +100,7 @@ color_snprintf(char *buf, size_t len, enum log_field field,
  *   auto - enable if stderr is a terminal
  *   never - color output is disabled.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_color)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_color);
 int
 eal_log_color(const char *mode)
 {
diff --git a/lib/log/log_syslog.c b/lib/log/log_syslog.c
index 99d4132a55..121ebafe69 100644
--- a/lib/log/log_syslog.c
+++ b/lib/log/log_syslog.c
@@ -46,7 +46,7 @@ static const struct {
 	{ "local7", LOG_LOCAL7 },
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_syslog)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_syslog);
 int
 eal_log_syslog(const char *name)
 {
diff --git a/lib/log/log_timestamp.c b/lib/log/log_timestamp.c
index 47b6f7cfc4..d08e27d18c 100644
--- a/lib/log/log_timestamp.c
+++ b/lib/log/log_timestamp.c
@@ -41,7 +41,7 @@ static struct {
 } log_time;
 
 /* Set the log timestamp format */
-RTE_EXPORT_INTERNAL_SYMBOL(eal_log_timestamp)
+RTE_EXPORT_INTERNAL_SYMBOL(eal_log_timestamp);
 int
 eal_log_timestamp(const char *str)
 {
diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c
index 6dab86a05e..440deebe7d 100644
--- a/lib/lpm/rte_lpm.c
+++ b/lib/lpm/rte_lpm.c
@@ -118,7 +118,7 @@ depth_to_range(uint8_t depth)
 /*
  * Find an existing lpm table and return a pointer to it.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_find_existing)
+RTE_EXPORT_SYMBOL(rte_lpm_find_existing);
 struct rte_lpm *
 rte_lpm_find_existing(const char *name)
 {
@@ -147,7 +147,7 @@ rte_lpm_find_existing(const char *name)
 /*
  * Allocates memory for LPM object
  */
-RTE_EXPORT_SYMBOL(rte_lpm_create)
+RTE_EXPORT_SYMBOL(rte_lpm_create);
 struct rte_lpm *
 rte_lpm_create(const char *name, int socket_id,
 		const struct rte_lpm_config *config)
@@ -254,7 +254,7 @@ rte_lpm_create(const char *name, int socket_id,
 /*
  * Deallocates memory for given LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_free)
+RTE_EXPORT_SYMBOL(rte_lpm_free);
 void
 rte_lpm_free(struct rte_lpm *lpm)
 {
@@ -304,7 +304,7 @@ __lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)
 
 /* Associate QSBR variable with an LPM object.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_rcu_qsbr_add)
+RTE_EXPORT_SYMBOL(rte_lpm_rcu_qsbr_add);
 int
 rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
 {
@@ -823,7 +823,7 @@ add_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked, uint8_t depth,
 /*
  * Add a route
  */
-RTE_EXPORT_SYMBOL(rte_lpm_add)
+RTE_EXPORT_SYMBOL(rte_lpm_add);
 int
 rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 		uint32_t next_hop)
@@ -875,7 +875,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 /*
  * Look for a rule in the high-level rules table
  */
-RTE_EXPORT_SYMBOL(rte_lpm_is_rule_present)
+RTE_EXPORT_SYMBOL(rte_lpm_is_rule_present);
 int
 rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 uint32_t *next_hop)
@@ -1181,7 +1181,7 @@ delete_depth_big(struct __rte_lpm *i_lpm, uint32_t ip_masked,
 /*
  * Deletes a rule
  */
-RTE_EXPORT_SYMBOL(rte_lpm_delete)
+RTE_EXPORT_SYMBOL(rte_lpm_delete);
 int
 rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 {
@@ -1240,7 +1240,7 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
 /*
  * Delete all rules from the LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm_delete_all)
+RTE_EXPORT_SYMBOL(rte_lpm_delete_all);
 void
 rte_lpm_delete_all(struct rte_lpm *lpm)
 {
diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
index e23c886766..38e8247067 100644
--- a/lib/lpm/rte_lpm6.c
+++ b/lib/lpm/rte_lpm6.c
@@ -208,7 +208,7 @@ rebuild_lpm(struct rte_lpm6 *lpm)
 /*
  * Allocates memory for LPM object
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_create)
+RTE_EXPORT_SYMBOL(rte_lpm6_create);
 struct rte_lpm6 *
 rte_lpm6_create(const char *name, int socket_id,
 		const struct rte_lpm6_config *config)
@@ -349,7 +349,7 @@ rte_lpm6_create(const char *name, int socket_id,
 /*
  * Find an existing lpm table and return a pointer to it.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_find_existing)
+RTE_EXPORT_SYMBOL(rte_lpm6_find_existing);
 struct rte_lpm6 *
 rte_lpm6_find_existing(const char *name)
 {
@@ -378,7 +378,7 @@ rte_lpm6_find_existing(const char *name)
 /*
  * Deallocates memory for given LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_free)
+RTE_EXPORT_SYMBOL(rte_lpm6_free);
 void
 rte_lpm6_free(struct rte_lpm6 *lpm)
 {
@@ -823,7 +823,7 @@ simulate_add(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *masked_ip, uint8_
 /*
  * Add a route
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_add)
+RTE_EXPORT_SYMBOL(rte_lpm6_add);
 int
 rte_lpm6_add(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth,
 	     uint32_t next_hop)
@@ -913,7 +913,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
 /*
  * Looks up an IP
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_lookup)
+RTE_EXPORT_SYMBOL(rte_lpm6_lookup);
 int
 rte_lpm6_lookup(const struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip,
 		uint32_t *next_hop)
@@ -946,7 +946,7 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip,
 /*
  * Looks up a group of IP addresses
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_lookup_bulk_func)
+RTE_EXPORT_SYMBOL(rte_lpm6_lookup_bulk_func);
 int
 rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 		struct rte_ipv6_addr *ips,
@@ -992,7 +992,7 @@ rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
 /*
  * Look for a rule in the high-level rules table
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_is_rule_present)
+RTE_EXPORT_SYMBOL(rte_lpm6_is_rule_present);
 int
 rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth,
 			 uint32_t *next_hop)
@@ -1042,7 +1042,7 @@ rule_delete(struct rte_lpm6 *lpm, struct rte_ipv6_addr *ip, uint8_t depth)
  * rather than doing incremental updates like
  * the regular delete function
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete_bulk_func)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete_bulk_func);
 int
 rte_lpm6_delete_bulk_func(struct rte_lpm6 *lpm,
 		struct rte_ipv6_addr *ips, uint8_t *depths,
@@ -1082,7 +1082,7 @@ rte_lpm6_delete_bulk_func(struct rte_lpm6 *lpm,
 /*
  * Delete all rules from the LPM table.
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete_all)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete_all);
 void
 rte_lpm6_delete_all(struct rte_lpm6 *lpm)
 {
@@ -1267,7 +1267,7 @@ remove_tbl(struct rte_lpm6 *lpm, struct rte_lpm_tbl8_hdr *tbl_hdr,
 /*
  * Deletes a rule
  */
-RTE_EXPORT_SYMBOL(rte_lpm6_delete)
+RTE_EXPORT_SYMBOL(rte_lpm6_delete);
 int
 rte_lpm6_delete(struct rte_lpm6 *lpm, const struct rte_ipv6_addr *ip, uint8_t depth)
 {
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 9e7731a8a2..cce4d023a7 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -30,7 +30,7 @@ RTE_LOG_REGISTER_DEFAULT(mbuf_logtype, INFO);
  * rte_mempool_create(), or called directly if using
  * rte_mempool_create_empty()/rte_mempool_populate()
  */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_init)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_init);
 void
 rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
 {
@@ -71,7 +71,7 @@ rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
  * rte_mempool_obj_iter() or rte_mempool_create().
  * Set the fields of a packet mbuf to their default values.
  */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_init)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_init);
 void
 rte_pktmbuf_init(struct rte_mempool *mp,
 		 __rte_unused void *opaque_arg,
@@ -222,7 +222,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 }
 
 /* Helper to create a mbuf pool with given mempool ops name*/
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_by_ops)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_by_ops);
 struct rte_mempool *
 rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
@@ -275,7 +275,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
 }
 
 /* helper to create a mbuf pool */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create);
 struct rte_mempool *
 rte_pktmbuf_pool_create(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
@@ -286,7 +286,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned int n,
 }
 
 /* Helper to create a mbuf pool with pinned external data buffers. */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_extbuf)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_pool_create_extbuf);
 struct rte_mempool *
 rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 	unsigned int cache_size, uint16_t priv_size,
@@ -374,7 +374,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 }
 
 /* do some sanity checks on a mbuf: panic if it fails */
-RTE_EXPORT_SYMBOL(rte_mbuf_sanity_check)
+RTE_EXPORT_SYMBOL(rte_mbuf_sanity_check);
 void
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 {
@@ -384,7 +384,7 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 		rte_panic("%s\n", reason);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_check)
+RTE_EXPORT_SYMBOL(rte_mbuf_check);
 int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason)
 {
@@ -494,7 +494,7 @@ __rte_pktmbuf_free_seg_via_array(struct rte_mbuf *m,
 #define RTE_PKTMBUF_FREE_PENDING_SZ 64
 
 /* Free a bulk of packet mbufs back into their original mempools. */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_free_bulk)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_free_bulk);
 void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 {
 	struct rte_mbuf *m, *m_next, *pending[RTE_PKTMBUF_FREE_PENDING_SZ];
@@ -521,7 +521,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 }
 
 /* Creates a shallow copy of mbuf */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_clone)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_clone);
 struct rte_mbuf *
 rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 {
@@ -561,7 +561,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 }
 
 /* convert multi-segment mbuf to single mbuf */
-RTE_EXPORT_SYMBOL(__rte_pktmbuf_linearize)
+RTE_EXPORT_SYMBOL(__rte_pktmbuf_linearize);
 int
 __rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
 {
@@ -599,7 +599,7 @@ __rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
 }
 
 /* Create a deep copy of mbuf */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_copy)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_copy);
 struct rte_mbuf *
 rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 		 uint32_t off, uint32_t len)
@@ -677,7 +677,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 }
 
 /* dump a mbuf on console */
-RTE_EXPORT_SYMBOL(rte_pktmbuf_dump)
+RTE_EXPORT_SYMBOL(rte_pktmbuf_dump);
 void
 rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 {
@@ -720,7 +720,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 }
 
 /* read len data bytes in a mbuf at specified offset (internal) */
-RTE_EXPORT_SYMBOL(__rte_pktmbuf_read)
+RTE_EXPORT_SYMBOL(__rte_pktmbuf_read);
 const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 	uint32_t len, void *buf)
 {
@@ -758,7 +758,7 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
  * Get the name of a RX offload flag. Must be kept synchronized with flag
  * definitions in rte_mbuf.h.
  */
-RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_name)
+RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_name);
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
@@ -798,7 +798,7 @@ struct flag_mask {
 };
 
 /* write the list of rx ol flags in buffer buf */
-RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_list)
+RTE_EXPORT_SYMBOL(rte_get_rx_ol_flag_list);
 int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
@@ -865,7 +865,7 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
  * Get the name of a TX offload flag. Must be kept synchronized with flag
  * definitions in rte_mbuf.h.
  */
-RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_name)
+RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_name);
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
@@ -900,7 +900,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
 }
 
 /* write the list of tx ol flags in buffer buf */
-RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_list)
+RTE_EXPORT_SYMBOL(rte_get_tx_ol_flag_list);
 int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index 5987c9dee8..f6dd7cd556 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -190,7 +190,7 @@ __mbuf_dynfield_lookup(const char *name)
 	return mbuf_dynfield;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_lookup)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_lookup);
 int
 rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
 {
@@ -327,7 +327,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 	return offset;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register_offset)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register_offset);
 int
 rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 				size_t req)
@@ -354,7 +354,7 @@ rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynfield_register);
 int
 rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
 {
@@ -387,7 +387,7 @@ __mbuf_dynflag_lookup(const char *name)
 	return mbuf_dynflag;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_lookup)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_lookup);
 int
 rte_mbuf_dynflag_lookup(const char *name,
 			struct rte_mbuf_dynflag *params)
@@ -503,7 +503,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 	return bitnum;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register_bitnum)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register_bitnum);
 int
 rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 				unsigned int req)
@@ -527,14 +527,14 @@ rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dynflag_register);
 int
 rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
 {
 	return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_dump)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_dump);
 void rte_mbuf_dyn_dump(FILE *out)
 {
 	struct mbuf_dynfield_list *mbuf_dynfield_list;
@@ -622,7 +622,7 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_rx_timestamp_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_rx_timestamp_register);
 int
 rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag)
 {
@@ -630,7 +630,7 @@ rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag)
 			"Rx", RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME);
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_dyn_tx_timestamp_register)
+RTE_EXPORT_SYMBOL(rte_mbuf_dyn_tx_timestamp_register);
 int
 rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag)
 {
diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c
index 219b364803..3ef59826ef 100644
--- a/lib/mbuf/rte_mbuf_pool_ops.c
+++ b/lib/mbuf/rte_mbuf_pool_ops.c
@@ -11,7 +11,7 @@
 
 #include "mbuf_log.h"
 
-RTE_EXPORT_SYMBOL(rte_mbuf_set_platform_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_set_platform_mempool_ops);
 int
 rte_mbuf_set_platform_mempool_ops(const char *ops_name)
 {
@@ -41,7 +41,7 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name)
 	return -EEXIST;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_platform_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_platform_mempool_ops);
 const char *
 rte_mbuf_platform_mempool_ops(void)
 {
@@ -53,7 +53,7 @@ rte_mbuf_platform_mempool_ops(void)
 	return mz->addr;
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_set_user_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_set_user_mempool_ops);
 int
 rte_mbuf_set_user_mempool_ops(const char *ops_name)
 {
@@ -78,7 +78,7 @@ rte_mbuf_set_user_mempool_ops(const char *ops_name)
 
 }
 
-RTE_EXPORT_SYMBOL(rte_mbuf_user_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_user_mempool_ops);
 const char *
 rte_mbuf_user_mempool_ops(void)
 {
@@ -91,7 +91,7 @@ rte_mbuf_user_mempool_ops(void)
 }
 
 /* Return mbuf pool ops name */
-RTE_EXPORT_SYMBOL(rte_mbuf_best_mempool_ops)
+RTE_EXPORT_SYMBOL(rte_mbuf_best_mempool_ops);
 const char *
 rte_mbuf_best_mempool_ops(void)
 {
diff --git a/lib/mbuf/rte_mbuf_ptype.c b/lib/mbuf/rte_mbuf_ptype.c
index 2c80294498..715c6c1700 100644
--- a/lib/mbuf/rte_mbuf_ptype.c
+++ b/lib/mbuf/rte_mbuf_ptype.c
@@ -9,7 +9,7 @@
 #include <rte_mbuf_ptype.h>
 
 /* get the name of the l2 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l2_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l2_name);
 const char *rte_get_ptype_l2_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L2_MASK) {
@@ -28,7 +28,7 @@ const char *rte_get_ptype_l2_name(uint32_t ptype)
 }
 
 /* get the name of the l3 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l3_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l3_name);
 const char *rte_get_ptype_l3_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L3_MASK) {
@@ -43,7 +43,7 @@ const char *rte_get_ptype_l3_name(uint32_t ptype)
 }
 
 /* get the name of the l4 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_l4_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_l4_name);
 const char *rte_get_ptype_l4_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_L4_MASK) {
@@ -60,7 +60,7 @@ const char *rte_get_ptype_l4_name(uint32_t ptype)
 }
 
 /* get the name of the tunnel packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_tunnel_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_tunnel_name);
 const char *rte_get_ptype_tunnel_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
@@ -82,7 +82,7 @@ const char *rte_get_ptype_tunnel_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l2 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l2_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l2_name);
 const char *rte_get_ptype_inner_l2_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
@@ -94,7 +94,7 @@ const char *rte_get_ptype_inner_l2_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l3 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l3_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l3_name);
 const char *rte_get_ptype_inner_l3_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
@@ -111,7 +111,7 @@ const char *rte_get_ptype_inner_l3_name(uint32_t ptype)
 }
 
 /* get the name of the inner_l4 packet type */
-RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l4_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_inner_l4_name);
 const char *rte_get_ptype_inner_l4_name(uint32_t ptype)
 {
 	switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
@@ -127,7 +127,7 @@ const char *rte_get_ptype_inner_l4_name(uint32_t ptype)
 }
 
 /* write the packet type name into the buffer */
-RTE_EXPORT_SYMBOL(rte_get_ptype_name)
+RTE_EXPORT_SYMBOL(rte_get_ptype_name);
 int rte_get_ptype_name(uint32_t ptype, char *buf, size_t buflen)
 {
 	int ret;
diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c
index 5ff32f1e45..505b80aa33 100644
--- a/lib/member/rte_member.c
+++ b/lib/member/rte_member.c
@@ -24,7 +24,7 @@ static struct rte_tailq_elem rte_member_tailq = {
 };
 EAL_REGISTER_TAILQ(rte_member_tailq)
 
-RTE_EXPORT_SYMBOL(rte_member_find_existing)
+RTE_EXPORT_SYMBOL(rte_member_find_existing);
 struct rte_member_setsum *
 rte_member_find_existing(const char *name)
 {
@@ -49,7 +49,7 @@ rte_member_find_existing(const char *name)
 	return setsum;
 }
 
-RTE_EXPORT_SYMBOL(rte_member_free)
+RTE_EXPORT_SYMBOL(rte_member_free);
 void
 rte_member_free(struct rte_member_setsum *setsum)
 {
@@ -88,7 +88,7 @@ rte_member_free(struct rte_member_setsum *setsum)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_member_create)
+RTE_EXPORT_SYMBOL(rte_member_create);
 struct rte_member_setsum *
 rte_member_create(const struct rte_member_parameters *params)
 {
@@ -192,7 +192,7 @@ rte_member_create(const struct rte_member_parameters *params)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_member_add)
+RTE_EXPORT_SYMBOL(rte_member_add);
 int
 rte_member_add(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t set_id)
@@ -212,7 +212,7 @@ rte_member_add(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_add_byte_count)
+RTE_EXPORT_SYMBOL(rte_member_add_byte_count);
 int
 rte_member_add_byte_count(const struct rte_member_setsum *setsum,
 			  const void *key, uint32_t byte_count)
@@ -228,7 +228,7 @@ rte_member_add_byte_count(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup)
+RTE_EXPORT_SYMBOL(rte_member_lookup);
 int
 rte_member_lookup(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t *set_id)
@@ -248,7 +248,7 @@ rte_member_lookup(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_bulk)
+RTE_EXPORT_SYMBOL(rte_member_lookup_bulk);
 int
 rte_member_lookup_bulk(const struct rte_member_setsum *setsum,
 				const void **keys, uint32_t num_keys,
@@ -269,7 +269,7 @@ rte_member_lookup_bulk(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_multi)
+RTE_EXPORT_SYMBOL(rte_member_lookup_multi);
 int
 rte_member_lookup_multi(const struct rte_member_setsum *setsum, const void *key,
 				uint32_t match_per_key, member_set_t *set_id)
@@ -289,7 +289,7 @@ rte_member_lookup_multi(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_lookup_multi_bulk)
+RTE_EXPORT_SYMBOL(rte_member_lookup_multi_bulk);
 int
 rte_member_lookup_multi_bulk(const struct rte_member_setsum *setsum,
 			const void **keys, uint32_t num_keys,
@@ -312,7 +312,7 @@ rte_member_lookup_multi_bulk(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_query_count)
+RTE_EXPORT_SYMBOL(rte_member_query_count);
 int
 rte_member_query_count(const struct rte_member_setsum *setsum,
 		       const void *key, uint64_t *output)
@@ -328,7 +328,7 @@ rte_member_query_count(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_report_heavyhitter)
+RTE_EXPORT_SYMBOL(rte_member_report_heavyhitter);
 int
 rte_member_report_heavyhitter(const struct rte_member_setsum *setsum,
 				void **key, uint64_t *count)
@@ -344,7 +344,7 @@ rte_member_report_heavyhitter(const struct rte_member_setsum *setsum,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_delete)
+RTE_EXPORT_SYMBOL(rte_member_delete);
 int
 rte_member_delete(const struct rte_member_setsum *setsum, const void *key,
 			member_set_t set_id)
@@ -364,7 +364,7 @@ rte_member_delete(const struct rte_member_setsum *setsum, const void *key,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_member_reset)
+RTE_EXPORT_SYMBOL(rte_member_reset);
 void
 rte_member_reset(const struct rte_member_setsum *setsum)
 {
diff --git a/lib/mempool/mempool_trace_points.c b/lib/mempool/mempool_trace_points.c
index ec465780f4..fa15c55994 100644
--- a/lib/mempool/mempool_trace_points.c
+++ b/lib/mempool/mempool_trace_points.c
@@ -7,35 +7,35 @@
 
 #include "mempool_trace.h"
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_dequeue_bulk,
 	lib.mempool.ops.deq.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_contig_blocks, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_dequeue_contig_blocks, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_dequeue_contig_blocks,
 	lib.mempool.ops.deq.contig)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_enqueue_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_ops_enqueue_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_enqueue_bulk,
 	lib.mempool.ops.enq.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_put, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_put, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_generic_put,
 	lib.mempool.generic.put)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_put_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_put_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_put_bulk,
 	lib.mempool.put.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_get, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_generic_get, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_generic_get,
 	lib.mempool.generic.get)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_bulk, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_bulk, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_bulk,
 	lib.mempool.get.bulk)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_contig_blocks, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_get_contig_blocks, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_contig_blocks,
 	lib.mempool.get.blocks)
 
@@ -66,14 +66,14 @@ RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_create,
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_free,
 	lib.mempool.cache.free)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_default_cache, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_default_cache, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_default_cache,
 	lib.mempool.default.cache)
 
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_get_page_size,
 	lib.mempool.get.page.size)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_cache_flush, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_mempool_trace_cache_flush, 20.05);
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_flush,
 	lib.mempool.cache.flush)
 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 1021ede0c2..41a0d8c35c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -32,7 +32,7 @@
 #include "mempool_trace.h"
 #include "rte_mempool.h"
 
-RTE_EXPORT_SYMBOL(rte_mempool_logtype)
+RTE_EXPORT_SYMBOL(rte_mempool_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_mempool_logtype, INFO);
 
 TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
@@ -181,7 +181,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque,
 }
 
 /* call obj_cb() for each mempool element */
-RTE_EXPORT_SYMBOL(rte_mempool_obj_iter)
+RTE_EXPORT_SYMBOL(rte_mempool_obj_iter);
 uint32_t
 rte_mempool_obj_iter(struct rte_mempool *mp,
 	rte_mempool_obj_cb_t *obj_cb, void *obj_cb_arg)
@@ -200,7 +200,7 @@ rte_mempool_obj_iter(struct rte_mempool *mp,
 }
 
 /* call mem_cb() for each mempool memory chunk */
-RTE_EXPORT_SYMBOL(rte_mempool_mem_iter)
+RTE_EXPORT_SYMBOL(rte_mempool_mem_iter);
 uint32_t
 rte_mempool_mem_iter(struct rte_mempool *mp,
 	rte_mempool_mem_cb_t *mem_cb, void *mem_cb_arg)
@@ -217,7 +217,7 @@ rte_mempool_mem_iter(struct rte_mempool *mp,
 }
 
 /* get the header, trailer and total size of a mempool element. */
-RTE_EXPORT_SYMBOL(rte_mempool_calc_obj_size)
+RTE_EXPORT_SYMBOL(rte_mempool_calc_obj_size);
 uint32_t
 rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz)
@@ -318,7 +318,7 @@ mempool_ops_alloc_once(struct rte_mempool *mp)
  * zone. Return the number of objects added, or a negative value
  * on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_iova)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_iova);
 int
 rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -404,7 +404,7 @@ get_iova(void *addr)
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_virt)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_virt);
 int
 rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t len, size_t pg_sz, rte_mempool_memchunk_free_cb_t *free_cb,
@@ -459,7 +459,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 }
 
 /* Get the minimal page size used in a mempool before populating it. */
-RTE_EXPORT_SYMBOL(rte_mempool_get_page_size)
+RTE_EXPORT_SYMBOL(rte_mempool_get_page_size);
 int
 rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 {
@@ -489,7 +489,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
  * and populate them. Return the number of objects added, or a negative
  * value on error.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_default)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_default);
 int
 rte_mempool_populate_default(struct rte_mempool *mp)
 {
@@ -668,7 +668,7 @@ rte_mempool_memchunk_anon_free(struct rte_mempool_memhdr *memhdr,
 }
 
 /* populate the mempool with an anonymous mapping */
-RTE_EXPORT_SYMBOL(rte_mempool_populate_anon)
+RTE_EXPORT_SYMBOL(rte_mempool_populate_anon);
 int
 rte_mempool_populate_anon(struct rte_mempool *mp)
 {
@@ -723,7 +723,7 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
 }
 
 /* free a mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_free)
+RTE_EXPORT_SYMBOL(rte_mempool_free);
 void
 rte_mempool_free(struct rte_mempool *mp)
 {
@@ -772,7 +772,7 @@ mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
  * returned to an underlying mempool. This structure is identical to the
  * local_cache[lcore_id] pointed to by the mempool structure.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_cache_create)
+RTE_EXPORT_SYMBOL(rte_mempool_cache_create);
 struct rte_mempool_cache *
 rte_mempool_cache_create(uint32_t size, int socket_id)
 {
@@ -802,7 +802,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id)
  * remaining objects in the cache are flushed to the corresponding
  * mempool.
  */
-RTE_EXPORT_SYMBOL(rte_mempool_cache_free)
+RTE_EXPORT_SYMBOL(rte_mempool_cache_free);
 void
 rte_mempool_cache_free(struct rte_mempool_cache *cache)
 {
@@ -811,7 +811,7 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
 }
 
 /* create an empty mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_create_empty)
+RTE_EXPORT_SYMBOL(rte_mempool_create_empty);
 struct rte_mempool *
 rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	unsigned cache_size, unsigned private_data_size,
@@ -980,7 +980,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 }
 
 /* create the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_create)
+RTE_EXPORT_SYMBOL(rte_mempool_create);
 struct rte_mempool *
 rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	unsigned cache_size, unsigned private_data_size,
@@ -1017,7 +1017,7 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 }
 
 /* Return the number of entries in the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_avail_count)
+RTE_EXPORT_SYMBOL(rte_mempool_avail_count);
 unsigned int
 rte_mempool_avail_count(const struct rte_mempool *mp)
 {
@@ -1042,7 +1042,7 @@ rte_mempool_avail_count(const struct rte_mempool *mp)
 }
 
 /* return the number of entries allocated from the mempool */
-RTE_EXPORT_SYMBOL(rte_mempool_in_use_count)
+RTE_EXPORT_SYMBOL(rte_mempool_in_use_count);
 unsigned int
 rte_mempool_in_use_count(const struct rte_mempool *mp)
 {
@@ -1074,7 +1074,7 @@ rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
 }
 
 /* check and update cookies or panic (internal) */
-RTE_EXPORT_SYMBOL(rte_mempool_check_cookies)
+RTE_EXPORT_SYMBOL(rte_mempool_check_cookies);
 void rte_mempool_check_cookies(const struct rte_mempool *mp,
 	void * const *obj_table_const, unsigned n, int free)
 {
@@ -1143,7 +1143,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_contig_blocks_check_cookies)
+RTE_EXPORT_SYMBOL(rte_mempool_contig_blocks_check_cookies);
 void
 rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
 	void * const *first_obj_table_const, unsigned int n, int free)
@@ -1220,7 +1220,7 @@ mempool_audit_cache(const struct rte_mempool *mp)
 }
 
 /* check the consistency of mempool (size, cookies, ...) */
-RTE_EXPORT_SYMBOL(rte_mempool_audit)
+RTE_EXPORT_SYMBOL(rte_mempool_audit);
 void
 rte_mempool_audit(struct rte_mempool *mp)
 {
@@ -1232,7 +1232,7 @@ rte_mempool_audit(struct rte_mempool *mp)
 }
 
 /* dump the status of the mempool on the console */
-RTE_EXPORT_SYMBOL(rte_mempool_dump)
+RTE_EXPORT_SYMBOL(rte_mempool_dump);
 void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
@@ -1337,7 +1337,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 }
 
 /* dump the status of all mempools on the console */
-RTE_EXPORT_SYMBOL(rte_mempool_list_dump)
+RTE_EXPORT_SYMBOL(rte_mempool_list_dump);
 void
 rte_mempool_list_dump(FILE *f)
 {
@@ -1358,7 +1358,7 @@ rte_mempool_list_dump(FILE *f)
 }
 
 /* search a mempool from its name */
-RTE_EXPORT_SYMBOL(rte_mempool_lookup)
+RTE_EXPORT_SYMBOL(rte_mempool_lookup);
 struct rte_mempool *
 rte_mempool_lookup(const char *name)
 {
@@ -1386,7 +1386,7 @@ rte_mempool_lookup(const char *name)
 	return mp;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_walk)
+RTE_EXPORT_SYMBOL(rte_mempool_walk);
 void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 		      void *arg)
 {
@@ -1405,7 +1405,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 	rte_mcfg_mempool_read_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_mem_range, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_mem_range, 24.07);
 int rte_mempool_get_mem_range(const struct rte_mempool *mp,
 		struct rte_mempool_mem_range_info *mem_range)
 {
@@ -1440,7 +1440,7 @@ int rte_mempool_get_mem_range(const struct rte_mempool *mp,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_obj_alignment, 24.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_get_obj_alignment, 24.07);
 size_t rte_mempool_get_obj_alignment(const struct rte_mempool *mp)
 {
 	if (mp == NULL)
@@ -1474,7 +1474,7 @@ mempool_event_callback_invoke(enum rte_mempool_event event,
 	rte_mcfg_tailq_read_unlock();
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_register);
 int
 rte_mempool_event_callback_register(rte_mempool_event_callback *func,
 				    void *user_data)
@@ -1513,7 +1513,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_mempool_event_callback_unregister);
 int
 rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
 				      void *user_data)
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 066bec36fc..8dcb9161bf 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -15,14 +15,14 @@
 #include "mempool_trace.h"
 
 /* indirect jump table to support external memory pools. */
-RTE_EXPORT_SYMBOL(rte_mempool_ops_table)
+RTE_EXPORT_SYMBOL(rte_mempool_ops_table);
 struct rte_mempool_ops_table rte_mempool_ops_table = {
 	.sl =  RTE_SPINLOCK_INITIALIZER,
 	.num_ops = 0
 };
 
 /* add a new ops struct in rte_mempool_ops_table, return its index. */
-RTE_EXPORT_SYMBOL(rte_mempool_register_ops)
+RTE_EXPORT_SYMBOL(rte_mempool_register_ops);
 int
 rte_mempool_register_ops(const struct rte_mempool_ops *h)
 {
@@ -149,7 +149,7 @@ rte_mempool_ops_populate(struct rte_mempool *mp, unsigned int max_objs,
 }
 
 /* wrapper to get additional mempool info */
-RTE_EXPORT_SYMBOL(rte_mempool_ops_get_info)
+RTE_EXPORT_SYMBOL(rte_mempool_ops_get_info);
 int
 rte_mempool_ops_get_info(const struct rte_mempool *mp,
 			 struct rte_mempool_info *info)
@@ -165,7 +165,7 @@ rte_mempool_ops_get_info(const struct rte_mempool *mp,
 
 
 /* sets mempool ops previously registered by rte_mempool_register_ops. */
-RTE_EXPORT_SYMBOL(rte_mempool_set_ops_byname)
+RTE_EXPORT_SYMBOL(rte_mempool_set_ops_byname);
 int
 rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	void *pool_config)
diff --git a/lib/mempool/rte_mempool_ops_default.c b/lib/mempool/rte_mempool_ops_default.c
index d27d6fc473..3ece87ca26 100644
--- a/lib/mempool/rte_mempool_ops_default.c
+++ b/lib/mempool/rte_mempool_ops_default.c
@@ -7,7 +7,7 @@
 #include <eal_export.h>
 #include <rte_mempool.h>
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_helper)
+RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_helper);
 ssize_t
 rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
@@ -67,7 +67,7 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 	return mem_size;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_default)
+RTE_EXPORT_SYMBOL(rte_mempool_op_calc_mem_size_default);
 ssize_t
 rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
@@ -90,7 +90,7 @@ check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_populate_helper)
+RTE_EXPORT_SYMBOL(rte_mempool_op_populate_helper);
 int
 rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int flags,
 			unsigned int max_objs, void *vaddr, rte_iova_t iova,
@@ -138,7 +138,7 @@ rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int flags,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_mempool_op_populate_default)
+RTE_EXPORT_SYMBOL(rte_mempool_op_populate_default);
 int
 rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 				void *vaddr, rte_iova_t iova, size_t len,
diff --git a/lib/meter/rte_meter.c b/lib/meter/rte_meter.c
index ec76bec4cb..b78c2abe34 100644
--- a/lib/meter/rte_meter.c
+++ b/lib/meter/rte_meter.c
@@ -37,7 +37,7 @@ rte_meter_get_tb_params(uint64_t hz, uint64_t rate, uint64_t *tb_period, uint64_
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_srtcm_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_srtcm_profile_config);
 int
 rte_meter_srtcm_profile_config(struct rte_meter_srtcm_profile *p,
 	struct rte_meter_srtcm_params *params)
@@ -60,7 +60,7 @@ rte_meter_srtcm_profile_config(struct rte_meter_srtcm_profile *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_srtcm_config)
+RTE_EXPORT_SYMBOL(rte_meter_srtcm_config);
 int
 rte_meter_srtcm_config(struct rte_meter_srtcm *m,
 	struct rte_meter_srtcm_profile *p)
@@ -77,7 +77,7 @@ rte_meter_srtcm_config(struct rte_meter_srtcm *m,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_profile_config);
 int
 rte_meter_trtcm_profile_config(struct rte_meter_trtcm_profile *p,
 	struct rte_meter_trtcm_params *params)
@@ -105,7 +105,7 @@ rte_meter_trtcm_profile_config(struct rte_meter_trtcm_profile *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_config);
 int
 rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	struct rte_meter_trtcm_profile *p)
@@ -122,7 +122,7 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config);
 int
 rte_meter_trtcm_rfc4115_profile_config(
 	struct rte_meter_trtcm_rfc4115_profile *p,
@@ -148,7 +148,7 @@ rte_meter_trtcm_rfc4115_profile_config(
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_config)
+RTE_EXPORT_SYMBOL(rte_meter_trtcm_rfc4115_config);
 int
 rte_meter_trtcm_rfc4115_config(
 	struct rte_meter_trtcm_rfc4115 *m,
diff --git a/lib/metrics/rte_metrics.c b/lib/metrics/rte_metrics.c
index 4cd4623b7a..5065a7d4af 100644
--- a/lib/metrics/rte_metrics.c
+++ b/lib/metrics/rte_metrics.c
@@ -56,7 +56,7 @@ struct rte_metrics_data_s {
 	rte_spinlock_t lock;
 };
 
-RTE_EXPORT_SYMBOL(rte_metrics_init)
+RTE_EXPORT_SYMBOL(rte_metrics_init);
 int
 rte_metrics_init(int socket_id)
 {
@@ -82,7 +82,7 @@ rte_metrics_init(int socket_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_deinit)
+RTE_EXPORT_SYMBOL(rte_metrics_deinit);
 int
 rte_metrics_deinit(void)
 {
@@ -106,7 +106,7 @@ rte_metrics_deinit(void)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_reg_name)
+RTE_EXPORT_SYMBOL(rte_metrics_reg_name);
 int
 rte_metrics_reg_name(const char *name)
 {
@@ -115,7 +115,7 @@ rte_metrics_reg_name(const char *name)
 	return rte_metrics_reg_names(list_names, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_reg_names)
+RTE_EXPORT_SYMBOL(rte_metrics_reg_names);
 int
 rte_metrics_reg_names(const char * const *names, uint16_t cnt_names)
 {
@@ -162,14 +162,14 @@ rte_metrics_reg_names(const char * const *names, uint16_t cnt_names)
 	return idx_base;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_update_value)
+RTE_EXPORT_SYMBOL(rte_metrics_update_value);
 int
 rte_metrics_update_value(int port_id, uint16_t key, const uint64_t value)
 {
 	return rte_metrics_update_values(port_id, key, &value, 1);
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_update_values)
+RTE_EXPORT_SYMBOL(rte_metrics_update_values);
 int
 rte_metrics_update_values(int port_id,
 	uint16_t key,
@@ -232,7 +232,7 @@ rte_metrics_update_values(int port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_get_names)
+RTE_EXPORT_SYMBOL(rte_metrics_get_names);
 int
 rte_metrics_get_names(struct rte_metric_name *names,
 	uint16_t capacity)
@@ -264,7 +264,7 @@ rte_metrics_get_names(struct rte_metric_name *names,
 	return return_value;
 }
 
-RTE_EXPORT_SYMBOL(rte_metrics_get_values)
+RTE_EXPORT_SYMBOL(rte_metrics_get_values);
 int
 rte_metrics_get_values(int port_id,
 	struct rte_metric_value *values,
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index f9ec556595..3061d6d15f 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -72,7 +72,7 @@ rte_metrics_tel_reg_port_ethdev_to_metrics(uint16_t port_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05);
 int32_t
 rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 {
@@ -227,7 +227,7 @@ rte_metrics_tel_format_port(uint32_t pid, json_t *ports,
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05);
 int32_t
 rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 		char **json_buffer)
@@ -281,7 +281,7 @@ rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05);
 int32_t
 rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 		int *reg_index, char **json_buffer)
@@ -312,7 +312,7 @@ rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05);
 int32_t
 rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 {
@@ -379,7 +379,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05);
 int32_t
 rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 {
@@ -550,7 +550,7 @@ RTE_INIT(metrics_ctor)
 
 #else /* !RTE_HAS_JANSSON */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_reg_all_ethdev, 20.05);
 int32_t
 rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 {
@@ -560,7 +560,7 @@ rte_metrics_tel_reg_all_ethdev(int *metrics_register_done, int *reg_index_list)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_encode_json_format, 20.05);
 int32_t
 rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	char **json_buffer)
@@ -571,7 +571,7 @@ rte_metrics_tel_encode_json_format(struct telemetry_encode_param *ep,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_ports_stats_json, 20.05);
 int32_t
 rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	int *reg_index, char **json_buffer)
@@ -583,7 +583,7 @@ rte_metrics_tel_get_ports_stats_json(struct telemetry_encode_param *ep,
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_port_stats_ids, 20.05);
 int32_t
 rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 {
@@ -592,7 +592,7 @@ rte_metrics_tel_get_port_stats_ids(struct telemetry_encode_param *ep)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_extract_data, 20.05);
 int32_t
 rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 {
@@ -602,7 +602,7 @@ rte_metrics_tel_extract_data(struct telemetry_encode_param *ep, json_t *data)
 	return -ENOTSUP;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_global_stats, 20.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_metrics_tel_get_global_stats, 20.05);
 int32_t
 rte_metrics_tel_get_global_stats(struct telemetry_encode_param *ep)
 {
diff --git a/lib/mldev/mldev_utils.c b/lib/mldev/mldev_utils.c
index b15f825158..dc60af306e 100644
--- a/lib/mldev/mldev_utils.c
+++ b/lib/mldev/mldev_utils.c
@@ -15,7 +15,7 @@
  * This file implements Machine Learning utility routines, except type conversion routines.
  */
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_size_get)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_size_get);
 int
 rte_ml_io_type_size_get(enum rte_ml_io_type type)
 {
@@ -51,7 +51,7 @@ rte_ml_io_type_size_get(enum rte_ml_io_type type)
 	}
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_to_str)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_io_type_to_str);
 void
 rte_ml_io_type_to_str(enum rte_ml_io_type type, char *str, int len)
 {
diff --git a/lib/mldev/mldev_utils_neon.c b/lib/mldev/mldev_utils_neon.c
index 0222bd7e15..03c9236b3a 100644
--- a/lib/mldev/mldev_utils_neon.c
+++ b/lib/mldev/mldev_utils_neon.c
@@ -77,7 +77,7 @@ __float32_to_int8_neon_s8x1(const float *input, int8_t *output, float scale, int
 	*output = vqmovnh_s16(s16);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11);
 int
 rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -152,7 +152,7 @@ __int8_to_float32_neon_f32x1(const int8_t *input, float *output, float scale, in
 	*output = scale * (vcvts_f32_s32((int32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11);
 int
 rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -246,7 +246,7 @@ __float32_to_uint8_neon_u8x1(const float *input, uint8_t *output, float scale, u
 	*output = vqmovnh_u16(u16);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11);
 int
 rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -321,7 +321,7 @@ __uint8_to_float32_neon_f32x1(const uint8_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_u32((uint32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11);
 int
 rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -401,7 +401,7 @@ __float32_to_int16_neon_s16x1(const float *input, int16_t *output, float scale,
 	*output = vqmovns_s32(vget_lane_s32(s32x2, 0));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11);
 int
 rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -470,7 +470,7 @@ __int16_to_float32_neon_f32x1(const int16_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_s32((int32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11);
 int
 rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -547,7 +547,7 @@ __float32_to_uint16_neon_u16x1(const float *input, uint16_t *output, float scale
 	*output = vqmovns_u32(u32) + zero_point;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11);
 int
 rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint16_t zero_point)
@@ -618,7 +618,7 @@ __uint16_to_float32_neon_f32x1(const uint16_t *input, float *output, float scale
 	*output = scale * (vcvts_f32_u32((uint32_t)*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11);
 int
 rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint16_t zero_point)
@@ -697,7 +697,7 @@ __float32_to_int32_neon_s32x1(const float *input, int32_t *output, float scale,
 	vst1_lane_s32(output, s32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11);
 int
 rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -762,7 +762,7 @@ __int32_to_float32_neon_f32x1(const int32_t *input, float *output, float scale,
 	*output = scale * (vcvts_f32_s32(*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11);
 int
 rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -830,7 +830,7 @@ __float32_to_uint32_neon_u32x1(const float *input, uint32_t *output, float scale
 	*output = vcvtas_u32_f32((*input) / scale + (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11);
 int
 rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint32_t zero_point)
@@ -897,7 +897,7 @@ __uint32_to_float32_neon_f32x1(const uint32_t *input, float *output, float scale
 	*output = scale * (vcvts_f32_u32(*input) - (float)zero_point);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11);
 int
 rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint32_t zero_point)
@@ -992,7 +992,7 @@ __float32_to_int64_neon_s64x1(const float *input, int64_t *output, float scale,
 	*output = s64;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11);
 int
 rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -1081,7 +1081,7 @@ __int64_to_float32_neon_f32x1(const int64_t *input, float *output, float scale,
 	vst1_lane_f32(output, f32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11);
 int
 rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -1172,7 +1172,7 @@ __float32_to_uint64_neon_u64x1(const float *input, uint64_t *output, float scale
 	vst1q_lane_u64(output, u64x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11);
 int
 rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint64_t zero_point)
@@ -1263,7 +1263,7 @@ __uint64_to_float32_neon_f32x1(const uint64_t *input, float *output, float scale
 	vst1_lane_f32(output, f32x2, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11);
 int
 rte_ml_io_uint64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint64_t zero_point)
@@ -1332,7 +1332,7 @@ __float32_to_float16_neon_f16x1(const float32_t *input, float16_t *output)
 	vst1_lane_f16(output, f16x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11);
 int
 rte_ml_io_float32_to_float16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -1400,7 +1400,7 @@ __float16_to_float32_neon_f32x1(const float16_t *input, float32_t *output)
 	vst1q_lane_f32(output, f32x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11);
 int
 rte_ml_io_float16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_neon_bfloat16.c b/lib/mldev/mldev_utils_neon_bfloat16.c
index 65cd73f880..0456528514 100644
--- a/lib/mldev/mldev_utils_neon_bfloat16.c
+++ b/lib/mldev/mldev_utils_neon_bfloat16.c
@@ -51,7 +51,7 @@ __float32_to_bfloat16_neon_f16x1(const float32_t *input, bfloat16_t *output)
 	vst1_lane_bf16(output, bf16x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11);
 int
 rte_ml_io_float32_to_bfloat16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -119,7 +119,7 @@ __bfloat16_to_float32_neon_f32x1(const bfloat16_t *input, float32_t *output)
 	vst1q_lane_f32(output, f32x4, 0);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11);
 int
 rte_ml_io_bfloat16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_scalar.c b/lib/mldev/mldev_utils_scalar.c
index a3aac3f92e..db01e5f68b 100644
--- a/lib/mldev/mldev_utils_scalar.c
+++ b/lib/mldev/mldev_utils_scalar.c
@@ -11,7 +11,7 @@
  * types from higher precision to lower precision and vice-versa, except bfloat16.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int8, 22.11);
 int
 rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -45,7 +45,7 @@ rte_ml_io_float32_to_int8(const void *input, void *output, uint64_t nb_elements,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int8_to_float32, 22.11);
 int
 rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			  int8_t zero_point)
@@ -70,7 +70,7 @@ rte_ml_io_int8_to_float32(const void *input, void *output, uint64_t nb_elements,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint8, 22.11);
 int
 rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -104,7 +104,7 @@ rte_ml_io_float32_to_uint8(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint8_to_float32, 22.11);
 int
 rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   uint8_t zero_point)
@@ -129,7 +129,7 @@ rte_ml_io_uint8_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int16, 22.11);
 int
 rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -163,7 +163,7 @@ rte_ml_io_float32_to_int16(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int16_to_float32, 22.11);
 int
 rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int16_t zero_point)
@@ -188,7 +188,7 @@ rte_ml_io_int16_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint16, 22.11);
 int
 rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint16_t zero_point)
@@ -222,7 +222,7 @@ rte_ml_io_float32_to_uint16(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint16_to_float32, 22.11);
 int
 rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint16_t zero_point)
@@ -247,7 +247,7 @@ rte_ml_io_uint16_to_float32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int32, 22.11);
 int
 rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -272,7 +272,7 @@ rte_ml_io_float32_to_int32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int32_to_float32, 22.11);
 int
 rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int32_t zero_point)
@@ -297,7 +297,7 @@ rte_ml_io_int32_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint32, 22.11);
 int
 rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint32_t zero_point)
@@ -328,7 +328,7 @@ rte_ml_io_float32_to_uint32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint32_to_float32, 22.11);
 int
 rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint32_t zero_point)
@@ -353,7 +353,7 @@ rte_ml_io_uint32_to_float32(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_int64, 22.11);
 int
 rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -378,7 +378,7 @@ rte_ml_io_float32_to_int64(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_int64_to_float32, 22.11);
 int
 rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			   int64_t zero_point)
@@ -403,7 +403,7 @@ rte_ml_io_int64_to_float32(const void *input, void *output, uint64_t nb_elements
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_uint64, 22.11);
 int
 rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint64_t zero_point)
@@ -434,7 +434,7 @@ rte_ml_io_float32_to_uint64(const void *input, void *output, uint64_t nb_element
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_uint64_to_float32, 22.11);
 int
 rte_ml_io_uint64_to_float32(const void *input, void *output, uint64_t nb_elements, float scale,
 			    uint64_t zero_point)
@@ -581,7 +581,7 @@ __float32_to_float16_scalar_rtn(float x)
 	return u16;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_float16, 22.11);
 int
 rte_ml_io_float32_to_float16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -666,7 +666,7 @@ __float16_to_float32_scalar_rtx(uint16_t f16)
 	return f32.f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float16_to_float32, 22.11);
 int
 rte_ml_io_float16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/mldev_utils_scalar_bfloat16.c b/lib/mldev/mldev_utils_scalar_bfloat16.c
index a098d31526..757d92b963 100644
--- a/lib/mldev/mldev_utils_scalar_bfloat16.c
+++ b/lib/mldev/mldev_utils_scalar_bfloat16.c
@@ -93,7 +93,7 @@ __float32_to_bfloat16_scalar_rtn(float x)
 	return u16;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_float32_to_bfloat16, 22.11);
 int
 rte_ml_io_float32_to_bfloat16(const void *input, void *output, uint64_t nb_elements)
 {
@@ -176,7 +176,7 @@ __bfloat16_to_float32_scalar_rtx(uint16_t f16)
 	return f32.f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_bfloat16_to_float32, 22.11);
 int
 rte_ml_io_bfloat16_to_float32(const void *input, void *output, uint64_t nb_elements)
 {
diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c
index b61e4be45c..e1abb52e90 100644
--- a/lib/mldev/rte_mldev.c
+++ b/lib/mldev/rte_mldev.c
@@ -24,14 +24,14 @@ struct rte_ml_op_pool_private {
 	/*< Size of private user data with each operation. */
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_dev);
 struct rte_ml_dev *
 rte_ml_dev_pmd_get_dev(int16_t dev_id)
 {
 	return &ml_dev_globals.devs[dev_id];
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_named_dev)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_get_named_dev);
 struct rte_ml_dev *
 rte_ml_dev_pmd_get_named_dev(const char *name)
 {
@@ -50,7 +50,7 @@ rte_ml_dev_pmd_get_named_dev(const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_allocate)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_allocate);
 struct rte_ml_dev *
 rte_ml_dev_pmd_allocate(const char *name, uint8_t socket_id)
 {
@@ -124,7 +124,7 @@ rte_ml_dev_pmd_allocate(const char *name, uint8_t socket_id)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_release)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_release);
 int
 rte_ml_dev_pmd_release(struct rte_ml_dev *dev)
 {
@@ -160,7 +160,7 @@ rte_ml_dev_pmd_release(struct rte_ml_dev *dev)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_init, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_init, 22.11);
 int
 rte_ml_dev_init(size_t dev_max)
 {
@@ -196,14 +196,14 @@ rte_ml_dev_init(size_t dev_max)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_count, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_count, 22.11);
 uint16_t
 rte_ml_dev_count(void)
 {
 	return ml_dev_globals.nb_devs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_is_valid_dev, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_is_valid_dev, 22.11);
 int
 rte_ml_dev_is_valid_dev(int16_t dev_id)
 {
@@ -219,7 +219,7 @@ rte_ml_dev_is_valid_dev(int16_t dev_id)
 		return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_socket_id, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_socket_id, 22.11);
 int
 rte_ml_dev_socket_id(int16_t dev_id)
 {
@@ -235,7 +235,7 @@ rte_ml_dev_socket_id(int16_t dev_id)
 	return dev->data->socket_id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_info_get, 22.11);
 int
 rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info)
 {
@@ -259,7 +259,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info)
 	return dev->dev_ops->dev_info_get(dev, dev_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_configure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_configure, 22.11);
 int
 rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config)
 {
@@ -299,7 +299,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config)
 	return dev->dev_ops->dev_configure(dev, config);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_close, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_close, 22.11);
 int
 rte_ml_dev_close(int16_t dev_id)
 {
@@ -323,7 +323,7 @@ rte_ml_dev_close(int16_t dev_id)
 	return dev->dev_ops->dev_close(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_start, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_start, 22.11);
 int
 rte_ml_dev_start(int16_t dev_id)
 {
@@ -351,7 +351,7 @@ rte_ml_dev_start(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stop, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stop, 22.11);
 int
 rte_ml_dev_stop(int16_t dev_id)
 {
@@ -379,7 +379,7 @@ rte_ml_dev_stop(int16_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_count, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_count, 22.11);
 uint16_t
 rte_ml_dev_queue_pair_count(int16_t dev_id)
 {
@@ -395,7 +395,7 @@ rte_ml_dev_queue_pair_count(int16_t dev_id)
 	return dev->data->nb_queue_pairs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_setup, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_queue_pair_setup, 22.11);
 int
 rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id,
 			    const struct rte_ml_dev_qp_conf *qp_conf, int socket_id)
@@ -429,7 +429,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->dev_queue_pair_setup(dev, queue_pair_id, qp_conf, socket_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_get, 22.11);
 int
 rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats)
 {
@@ -453,7 +453,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats)
 	return dev->dev_ops->dev_stats_get(dev, stats);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_reset, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_stats_reset, 22.11);
 void
 rte_ml_dev_stats_reset(int16_t dev_id)
 {
@@ -471,7 +471,7 @@ rte_ml_dev_stats_reset(int16_t dev_id)
 	dev->dev_ops->dev_stats_reset(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_names_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_names_get, 22.11);
 int
 rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 			    struct rte_ml_dev_xstats_map *xstats_map, uint32_t size)
@@ -490,7 +490,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in
 	return dev->dev_ops->dev_xstats_names_get(dev, mode, model_id, xstats_map, size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_by_name_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_by_name_get, 22.11);
 int
 rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_id, uint64_t *value)
 {
@@ -518,7 +518,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i
 	return dev->dev_ops->dev_xstats_by_name_get(dev, name, stat_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_get, 22.11);
 int
 rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 		      const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids)
@@ -547,7 +547,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t
 	return dev->dev_ops->dev_xstats_get(dev, mode, model_id, stat_ids, values, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_reset, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_xstats_reset, 22.11);
 int
 rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t model_id,
 			const uint16_t stat_ids[], uint16_t nb_ids)
@@ -566,7 +566,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_
 	return dev->dev_ops->dev_xstats_reset(dev, mode, model_id, stat_ids, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_dump, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_dump, 22.11);
 int
 rte_ml_dev_dump(int16_t dev_id, FILE *fd)
 {
@@ -589,7 +589,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd)
 	return dev->dev_ops->dev_dump(dev, fd);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_selftest, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_selftest, 22.11);
 int
 rte_ml_dev_selftest(int16_t dev_id)
 {
@@ -607,7 +607,7 @@ rte_ml_dev_selftest(int16_t dev_id)
 	return dev->dev_ops->dev_selftest(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_load, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_load, 22.11);
 int
 rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t *model_id)
 {
@@ -635,7 +635,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t *
 	return dev->dev_ops->model_load(dev, params, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_unload, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_unload, 22.11);
 int
 rte_ml_model_unload(int16_t dev_id, uint16_t model_id)
 {
@@ -653,7 +653,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_unload(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_start, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_start, 22.11);
 int
 rte_ml_model_start(int16_t dev_id, uint16_t model_id)
 {
@@ -671,7 +671,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_start(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_stop, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_stop, 22.11);
 int
 rte_ml_model_stop(int16_t dev_id, uint16_t model_id)
 {
@@ -689,7 +689,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id)
 	return dev->dev_ops->model_stop(dev, model_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_info_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_info_get, 22.11);
 int
 rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_info *model_info)
 {
@@ -713,7 +713,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf
 	return dev->dev_ops->model_info_get(dev, model_id, model_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_params_update, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_model_params_update, 22.11);
 int
 rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer)
 {
@@ -736,7 +736,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer)
 	return dev->dev_ops->model_params_update(dev, model_id, buffer);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_quantize, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_quantize, 22.11);
 int
 rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **dbuffer,
 		   struct rte_ml_buff_seg **qbuffer)
@@ -765,7 +765,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d
 	return dev->dev_ops->io_quantize(dev, model_id, dbuffer, qbuffer);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_dequantize, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_io_dequantize, 22.11);
 int
 rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **qbuffer,
 		     struct rte_ml_buff_seg **dbuffer)
@@ -806,7 +806,7 @@ ml_op_init(struct rte_mempool *mempool, __rte_unused void *opaque_arg, void *_op
 	op->mempool = mempool;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_create, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_create, 22.11);
 struct rte_mempool *
 rte_ml_op_pool_create(const char *name, unsigned int nb_elts, unsigned int cache_size,
 		      uint16_t user_size, int socket_id)
@@ -846,14 +846,14 @@ rte_ml_op_pool_create(const char *name, unsigned int nb_elts, unsigned int cache
 	return mp;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_free, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_pool_free, 22.11);
 void
 rte_ml_op_pool_free(struct rte_mempool *mempool)
 {
 	rte_mempool_free(mempool);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_enqueue_burst, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_enqueue_burst, 22.11);
 uint16_t
 rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
 {
@@ -890,7 +890,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin
 	return dev->enqueue_burst(dev, qp_id, ops, nb_ops);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dequeue_burst, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dequeue_burst, 22.11);
 uint16_t
 rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
 {
@@ -927,7 +927,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin
 	return dev->dequeue_burst(dev, qp_id, ops, nb_ops);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_error_get, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_op_error_get, 22.11);
 int
 rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error *error)
 {
@@ -959,5 +959,5 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error
 	return dev->op_error_get(dev, op, error);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_logtype, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ml_dev_logtype, 22.11);
 RTE_LOG_REGISTER_DEFAULT(rte_ml_dev_logtype, INFO);
diff --git a/lib/mldev/rte_mldev_pmd.c b/lib/mldev/rte_mldev_pmd.c
index 434360f2d3..53129a05d7 100644
--- a/lib/mldev/rte_mldev_pmd.c
+++ b/lib/mldev/rte_mldev_pmd.c
@@ -9,7 +9,7 @@
 
 #include "rte_mldev_pmd.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_create)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_create);
 struct rte_ml_dev *
 rte_ml_dev_pmd_create(const char *name, struct rte_device *device,
 		      struct rte_ml_dev_pmd_init_params *params)
@@ -44,7 +44,7 @@ rte_ml_dev_pmd_create(const char *name, struct rte_device *device,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_destroy)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_ml_dev_pmd_destroy);
 int
 rte_ml_dev_pmd_destroy(struct rte_ml_dev *dev)
 {
diff --git a/lib/net/rte_arp.c b/lib/net/rte_arp.c
index 3f8c69f69d..e2d78217e5 100644
--- a/lib/net/rte_arp.c
+++ b/lib/net/rte_arp.c
@@ -6,7 +6,7 @@
 #include <rte_arp.h>
 
 #define RARP_PKT_SIZE	64
-RTE_EXPORT_SYMBOL(rte_net_make_rarp_packet)
+RTE_EXPORT_SYMBOL(rte_net_make_rarp_packet);
 struct rte_mbuf *
 rte_net_make_rarp_packet(struct rte_mempool *mpool,
 		const struct rte_ether_addr *mac)
diff --git a/lib/net/rte_ether.c b/lib/net/rte_ether.c
index 6703145fc5..68369edd3d 100644
--- a/lib/net/rte_ether.c
+++ b/lib/net/rte_ether.c
@@ -8,7 +8,7 @@
 #include <rte_ether.h>
 #include <rte_errno.h>
 
-RTE_EXPORT_SYMBOL(rte_eth_random_addr)
+RTE_EXPORT_SYMBOL(rte_eth_random_addr);
 void
 rte_eth_random_addr(uint8_t *addr)
 {
@@ -20,7 +20,7 @@ rte_eth_random_addr(uint8_t *addr)
 	addr[0] |= RTE_ETHER_LOCAL_ADMIN_ADDR;	/* set local assignment bit */
 }
 
-RTE_EXPORT_SYMBOL(rte_ether_format_addr)
+RTE_EXPORT_SYMBOL(rte_ether_format_addr);
 void
 rte_ether_format_addr(char *buf, uint16_t size,
 		      const struct rte_ether_addr *eth_addr)
@@ -133,7 +133,7 @@ static unsigned int get_ether_sep(const char *s, char *sep)
  *  - Windows format six groups separated by hyphen
  *  - two groups hexadecimal digits
  */
-RTE_EXPORT_SYMBOL(rte_ether_unformat_addr)
+RTE_EXPORT_SYMBOL(rte_ether_unformat_addr);
 int
 rte_ether_unformat_addr(const char *s, struct rte_ether_addr *ea)
 {
diff --git a/lib/net/rte_net.c b/lib/net/rte_net.c
index 44fb6c0f51..a328d1f3cf 100644
--- a/lib/net/rte_net.c
+++ b/lib/net/rte_net.c
@@ -274,7 +274,7 @@ ptype_tunnel_with_udp(uint16_t *proto, const struct rte_mbuf *m,
 }
 
 /* parse ipv6 extended headers, update offset and return next proto */
-RTE_EXPORT_SYMBOL(rte_net_skip_ip6_ext)
+RTE_EXPORT_SYMBOL(rte_net_skip_ip6_ext);
 int
 rte_net_skip_ip6_ext(uint16_t proto, const struct rte_mbuf *m, uint32_t *off,
 	int *frag)
@@ -321,7 +321,7 @@ rte_net_skip_ip6_ext(uint16_t proto, const struct rte_mbuf *m, uint32_t *off,
 }
 
 /* parse mbuf data to get packet type */
-RTE_EXPORT_SYMBOL(rte_net_get_ptype)
+RTE_EXPORT_SYMBOL(rte_net_get_ptype);
 uint32_t rte_net_get_ptype(const struct rte_mbuf *m,
 	struct rte_net_hdr_lens *hdr_lens, uint32_t layers)
 {
diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c
index 3a589bdd6d..c21955d2d5 100644
--- a/lib/net/rte_net_crc.c
+++ b/lib/net/rte_net_crc.c
@@ -216,7 +216,7 @@ handlers_init(enum rte_net_crc_alg alg)
 
 /* Public API */
 
-RTE_EXPORT_SYMBOL(rte_net_crc_set_alg)
+RTE_EXPORT_SYMBOL(rte_net_crc_set_alg);
 struct rte_net_crc *rte_net_crc_set_alg(enum rte_net_crc_alg alg, enum rte_net_crc_type type)
 {
 	uint16_t max_simd_bitwidth;
@@ -256,13 +256,13 @@ struct rte_net_crc *rte_net_crc_set_alg(enum rte_net_crc_alg alg, enum rte_net_c
 	return crc;
 }
 
-RTE_EXPORT_SYMBOL(rte_net_crc_free)
+RTE_EXPORT_SYMBOL(rte_net_crc_free);
 void rte_net_crc_free(struct rte_net_crc *crc)
 {
 	rte_free(crc);
 }
 
-RTE_EXPORT_SYMBOL(rte_net_crc_calc)
+RTE_EXPORT_SYMBOL(rte_net_crc_calc);
 uint32_t rte_net_crc_calc(const struct rte_net_crc *ctx, const void *data, const uint32_t data_len)
 {
 	return handlers[ctx->alg].f[ctx->type](data, data_len);
diff --git a/lib/node/ethdev_ctrl.c b/lib/node/ethdev_ctrl.c
index f717903731..92207b74fb 100644
--- a/lib/node/ethdev_ctrl.c
+++ b/lib/node/ethdev_ctrl.c
@@ -22,7 +22,7 @@ static struct ethdev_ctrl {
 	uint16_t nb_graphs;
 } ctrl;
 
-RTE_EXPORT_SYMBOL(rte_node_eth_config)
+RTE_EXPORT_SYMBOL(rte_node_eth_config);
 int
 rte_node_eth_config(struct rte_node_ethdev_config *conf, uint16_t nb_confs,
 		    uint16_t nb_graphs)
@@ -141,7 +141,7 @@ rte_node_eth_config(struct rte_node_ethdev_config *conf, uint16_t nb_confs,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ethdev_rx_next_update, 24.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ethdev_rx_next_update, 24.03);
 int
 rte_node_ethdev_rx_next_update(rte_node_t id, const char *edge_name)
 {
diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c
index f6db3219f0..dc6f7060b3 100644
--- a/lib/node/ip4_lookup.c
+++ b/lib/node/ip4_lookup.c
@@ -118,7 +118,7 @@ ip4_lookup_node_process_scalar(struct rte_graph *graph, struct rte_node *node,
 	return nb_objs;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_ip4_route_add)
+RTE_EXPORT_SYMBOL(rte_node_ip4_route_add);
 int
 rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop,
 		       enum rte_node_ip4_lookup_next next_node)
diff --git a/lib/node/ip4_lookup_fib.c b/lib/node/ip4_lookup_fib.c
index 0857d889fc..6b2a60dabc 100644
--- a/lib/node/ip4_lookup_fib.c
+++ b/lib/node/ip4_lookup_fib.c
@@ -193,7 +193,7 @@ ip4_lookup_fib_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_create, 25.07);
 int
 rte_node_ip4_fib_create(int socket, struct rte_fib_conf *conf)
 {
@@ -213,7 +213,7 @@ rte_node_ip4_fib_create(int socket, struct rte_fib_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_route_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_fib_route_add, 25.07);
 int
 rte_node_ip4_fib_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop,
 			   enum rte_node_ip4_lookup_next next_node)
diff --git a/lib/node/ip4_reassembly.c b/lib/node/ip4_reassembly.c
index b61ddfd7d1..cc61eb3ada 100644
--- a/lib/node/ip4_reassembly.c
+++ b/lib/node/ip4_reassembly.c
@@ -128,7 +128,7 @@ ip4_reassembly_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return idx;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_reassembly_configure, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip4_reassembly_configure, 23.11);
 int
 rte_node_ip4_reassembly_configure(struct rte_node_ip4_reassembly_cfg *cfg, uint16_t cnt)
 {
diff --git a/lib/node/ip4_rewrite.c b/lib/node/ip4_rewrite.c
index 37bc3a511f..1e1eaa10b3 100644
--- a/lib/node/ip4_rewrite.c
+++ b/lib/node/ip4_rewrite.c
@@ -548,7 +548,7 @@ ip4_rewrite_set_next(uint16_t port_id, uint16_t next_index)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_node_ip4_rewrite_add)
+RTE_EXPORT_SYMBOL(rte_node_ip4_rewrite_add);
 int
 rte_node_ip4_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
 			 uint8_t rewrite_len, uint16_t dst_port)
diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c
index 83c0500c76..29eb2d6d12 100644
--- a/lib/node/ip6_lookup.c
+++ b/lib/node/ip6_lookup.c
@@ -258,7 +258,7 @@ ip6_lookup_node_process_scalar(struct rte_graph *graph, struct rte_node *node,
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_route_add, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_route_add, 23.07);
 int
 rte_node_ip6_route_add(const struct rte_ipv6_addr *ip, uint8_t depth, uint16_t next_hop,
 		       enum rte_node_ip6_lookup_next next_node)
diff --git a/lib/node/ip6_lookup_fib.c b/lib/node/ip6_lookup_fib.c
index 40c5c753df..2d990b6ec1 100644
--- a/lib/node/ip6_lookup_fib.c
+++ b/lib/node/ip6_lookup_fib.c
@@ -187,7 +187,7 @@ ip6_lookup_fib_node_process(struct rte_graph *graph, struct rte_node *node, void
 	return nb_objs;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_create, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_create, 25.07);
 int
 rte_node_ip6_fib_create(int socket, struct rte_fib6_conf *conf)
 {
@@ -207,7 +207,7 @@ rte_node_ip6_fib_create(int socket, struct rte_fib6_conf *conf)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_route_add, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_fib_route_add, 25.07);
 int
 rte_node_ip6_fib_route_add(const struct rte_ipv6_addr *ip, uint8_t depth, uint16_t next_hop,
 			   enum rte_node_ip6_lookup_next next_node)
diff --git a/lib/node/ip6_rewrite.c b/lib/node/ip6_rewrite.c
index d5488e7fa3..fd7501a803 100644
--- a/lib/node/ip6_rewrite.c
+++ b/lib/node/ip6_rewrite.c
@@ -273,7 +273,7 @@ ip6_rewrite_set_next(uint16_t port_id, uint16_t next_index)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_rewrite_add, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_ip6_rewrite_add, 23.07);
 int
 rte_node_ip6_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
 			 uint8_t rewrite_len, uint16_t dst_port)
diff --git a/lib/node/node_mbuf_dynfield.c b/lib/node/node_mbuf_dynfield.c
index 9dbc80f7e5..f632209511 100644
--- a/lib/node/node_mbuf_dynfield.c
+++ b/lib/node/node_mbuf_dynfield.c
@@ -20,7 +20,7 @@ static const struct rte_mbuf_dynfield node_mbuf_dynfield_desc = {
 	.align = alignof(rte_node_mbuf_dynfield_t),
 };
 
 RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_mbuf_dynfield_register, 25.07);
 int rte_node_mbuf_dynfield_register(void)
 {
 	struct node_mbuf_dynfield_mz *f = NULL;
diff --git a/lib/node/udp4_input.c b/lib/node/udp4_input.c
index 5a74e28c85..c13934489c 100644
--- a/lib/node/udp4_input.c
+++ b/lib/node/udp4_input.c
@@ -56,7 +56,7 @@ static struct rte_hash_parameters udp4_params = {
 	.socket_id = 0,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_dst_port_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_dst_port_add, 23.11);
 int
 rte_node_udp4_dst_port_add(uint32_t dst_port, rte_edge_t next_node)
 {
@@ -78,7 +78,7 @@ rte_node_udp4_dst_port_add(uint32_t dst_port, rte_edge_t next_node)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_usr_node_add, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_node_udp4_usr_node_add, 23.11);
 int
 rte_node_udp4_usr_node_add(const char *usr_node)
 {
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 2a07b4c1f5..0df40185d2 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -200,7 +200,7 @@ pcapng_section_block(rte_pcapng_t *self,
 }
 
 /* Write an interface block for a DPDK port */
-RTE_EXPORT_SYMBOL(rte_pcapng_add_interface)
+RTE_EXPORT_SYMBOL(rte_pcapng_add_interface);
 int
 rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 			 const char *ifname, const char *ifdescr,
@@ -322,7 +322,7 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
 /*
  * Write an Interface statistics block at the end of capture.
  */
-RTE_EXPORT_SYMBOL(rte_pcapng_write_stats)
+RTE_EXPORT_SYMBOL(rte_pcapng_write_stats);
 ssize_t
 rte_pcapng_write_stats(rte_pcapng_t *self, uint16_t port_id,
 		       uint64_t ifrecv, uint64_t ifdrop,
@@ -388,7 +388,7 @@ rte_pcapng_write_stats(rte_pcapng_t *self, uint16_t port_id,
 	return write(self->outfd, buf, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_pcapng_mbuf_size)
+RTE_EXPORT_SYMBOL(rte_pcapng_mbuf_size);
 uint32_t
 rte_pcapng_mbuf_size(uint32_t length)
 {
@@ -470,7 +470,7 @@ pcapng_vlan_insert(struct rte_mbuf *m, uint16_t ether_type, uint16_t tci)
  */
 
 /* Make a copy of original mbuf with pcapng header and options */
-RTE_EXPORT_SYMBOL(rte_pcapng_copy)
+RTE_EXPORT_SYMBOL(rte_pcapng_copy);
 struct rte_mbuf *
 rte_pcapng_copy(uint16_t port_id, uint32_t queue,
 		const struct rte_mbuf *md,
@@ -612,7 +612,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
 }
 
 /* Write pre-formatted packets to file. */
-RTE_EXPORT_SYMBOL(rte_pcapng_write_packets)
+RTE_EXPORT_SYMBOL(rte_pcapng_write_packets);
 ssize_t
 rte_pcapng_write_packets(rte_pcapng_t *self,
 			 struct rte_mbuf *pkts[], uint16_t nb_pkts)
@@ -682,7 +682,7 @@ rte_pcapng_write_packets(rte_pcapng_t *self,
 }
 
 /* Create new pcapng writer handle */
-RTE_EXPORT_SYMBOL(rte_pcapng_fdopen)
+RTE_EXPORT_SYMBOL(rte_pcapng_fdopen);
 rte_pcapng_t *
 rte_pcapng_fdopen(int fd,
 		  const char *osname, const char *hardware,
@@ -720,7 +720,7 @@ rte_pcapng_fdopen(int fd,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_pcapng_close)
+RTE_EXPORT_SYMBOL(rte_pcapng_close);
 void
 rte_pcapng_close(rte_pcapng_t *self)
 {
diff --git a/lib/pci/rte_pci.c b/lib/pci/rte_pci.c
index e2f89a7f21..1bbdce250c 100644
--- a/lib/pci/rte_pci.c
+++ b/lib/pci/rte_pci.c
@@ -93,7 +93,7 @@ pci_dbdf_parse(const char *input, struct rte_pci_addr *dev_addr)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_device_name)
+RTE_EXPORT_SYMBOL(rte_pci_device_name);
 void
 rte_pci_device_name(const struct rte_pci_addr *addr,
 		char *output, size_t size)
@@ -104,7 +104,7 @@ rte_pci_device_name(const struct rte_pci_addr *addr,
 			    addr->devid, addr->function) >= 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_addr_cmp)
+RTE_EXPORT_SYMBOL(rte_pci_addr_cmp);
 int
 rte_pci_addr_cmp(const struct rte_pci_addr *addr,
 	     const struct rte_pci_addr *addr2)
@@ -127,7 +127,7 @@ rte_pci_addr_cmp(const struct rte_pci_addr *addr,
 		return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pci_addr_parse)
+RTE_EXPORT_SYMBOL(rte_pci_addr_parse);
 int
 rte_pci_addr_parse(const char *str, struct rte_pci_addr *addr)
 {
diff --git a/lib/pdcp/rte_pdcp.c b/lib/pdcp/rte_pdcp.c
index d21df8ab43..614b3f65c7 100644
--- a/lib/pdcp/rte_pdcp.c
+++ b/lib/pdcp/rte_pdcp.c
@@ -98,7 +98,7 @@ pdcp_dl_establish(struct rte_pdcp_entity *entity, const struct rte_pdcp_entity_c
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_establish, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_establish, 23.07);
 struct rte_pdcp_entity *
 rte_pdcp_entity_establish(const struct rte_pdcp_entity_conf *conf)
 {
@@ -199,7 +199,7 @@ pdcp_dl_release(struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[])
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_release, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_release, 23.07);
 int
 rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *out_mb[])
 {
@@ -222,7 +222,7 @@ rte_pdcp_entity_release(struct rte_pdcp_entity *pdcp_entity, struct rte_mbuf *ou
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_suspend, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_entity_suspend, 23.07);
 int
 rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity,
 			struct rte_mbuf *out_mb[])
@@ -250,7 +250,7 @@ rte_pdcp_entity_suspend(struct rte_pdcp_entity *pdcp_entity,
 	return nb_out;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_control_pdu_create, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_control_pdu_create, 23.07);
 struct rte_mbuf *
 rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity,
 			    enum rte_pdcp_ctrl_pdu_type type)
@@ -291,7 +291,7 @@ rte_pdcp_control_pdu_create(struct rte_pdcp_entity *pdcp_entity,
 	return m;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_t_reordering_expiry_handle, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pdcp_t_reordering_expiry_handle, 23.07);
 uint16_t
 rte_pdcp_t_reordering_expiry_handle(const struct rte_pdcp_entity *entity, struct rte_mbuf *out_mb[])
 {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index ba75b828f2..5559d7f7b9 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -418,7 +418,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_init)
+RTE_EXPORT_SYMBOL(rte_pdump_init);
 int
 rte_pdump_init(void)
 {
@@ -441,7 +441,7 @@ rte_pdump_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_uninit)
+RTE_EXPORT_SYMBOL(rte_pdump_uninit);
 int
 rte_pdump_uninit(void)
 {
@@ -612,7 +612,7 @@ pdump_enable(uint16_t port, uint16_t queue,
 					    ENABLE, ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable)
+RTE_EXPORT_SYMBOL(rte_pdump_enable);
 int
 rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		 struct rte_ring *ring,
@@ -623,7 +623,7 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 			    ring, mp, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf);
 int
 rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
 		     uint32_t flags, uint32_t snaplen,
@@ -658,7 +658,7 @@ pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
 					    ENABLE, ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_by_deviceid);
 int
 rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 			     uint32_t flags,
@@ -670,7 +670,7 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 					ring, mp, NULL);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_enable_bpf_by_deviceid);
 int
 rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
 				 uint32_t flags, uint32_t snaplen,
@@ -682,7 +682,7 @@ rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
 					ring, mp, prm);
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_disable)
+RTE_EXPORT_SYMBOL(rte_pdump_disable);
 int
 rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 {
@@ -702,7 +702,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_disable_by_deviceid)
+RTE_EXPORT_SYMBOL(rte_pdump_disable_by_deviceid);
 int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags)
@@ -739,7 +739,7 @@ pdump_sum_stats(uint16_t port, uint16_t nq,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pdump_stats)
+RTE_EXPORT_SYMBOL(rte_pdump_stats);
 int
 rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
 {
diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c
index fa3c8b77ee..a77efa47d4 100644
--- a/lib/pipeline/rte_pipeline.c
+++ b/lib/pipeline/rte_pipeline.c
@@ -190,7 +190,7 @@ rte_pipeline_check_params(struct rte_pipeline_params *params)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_create);
 struct rte_pipeline *
 rte_pipeline_create(struct rte_pipeline_params *params)
 {
@@ -233,7 +233,7 @@ rte_pipeline_create(struct rte_pipeline_params *params)
 	return p;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_free)
+RTE_EXPORT_SYMBOL(rte_pipeline_free);
 int
 rte_pipeline_free(struct rte_pipeline *p)
 {
@@ -327,7 +327,7 @@ rte_table_check_params(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_create);
 int
 rte_pipeline_table_create(struct rte_pipeline *p,
 		struct rte_pipeline_table_params *params,
@@ -399,7 +399,7 @@ rte_pipeline_table_free(struct rte_table *table)
 	rte_free(table->default_entry);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_add)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_add);
 int
 rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
 	uint32_t table_id,
@@ -450,7 +450,7 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_delete)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_default_entry_delete);
 int
 rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -484,7 +484,7 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add);
 int
 rte_pipeline_table_entry_add(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -546,7 +546,7 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p,
 		key_found, (void **) entry_ptr);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete);
 int
 rte_pipeline_table_entry_delete(struct rte_pipeline *p,
 		uint32_t table_id,
@@ -586,7 +586,7 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p,
 	return (table->ops.f_delete)(table->h_table, key, key_found, entry);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add_bulk)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_add_bulk);
 int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
 	uint32_t table_id,
 	void **keys,
@@ -653,7 +653,7 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
 		n_keys, key_found, (void **) entries_ptr);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete_bulk)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_entry_delete_bulk);
 int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p,
 	uint32_t table_id,
 	void **keys,
@@ -811,7 +811,7 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_create);
 int
 rte_pipeline_port_in_create(struct rte_pipeline *p,
 		struct rte_pipeline_port_in_params *params,
@@ -862,7 +862,7 @@ rte_pipeline_port_in_free(struct rte_port_in *port)
 		port->ops.f_free(port->h_port);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_create)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_create);
 int
 rte_pipeline_port_out_create(struct rte_pipeline *p,
 		struct rte_pipeline_port_out_params *params,
@@ -910,7 +910,7 @@ rte_pipeline_port_out_free(struct rte_port_out *port)
 		port->ops.f_free(port->h_port);
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_connect_to_table)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_connect_to_table);
 int
 rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
 		uint32_t port_id,
@@ -945,7 +945,7 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_enable)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_enable);
 int
 rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
 {
@@ -993,7 +993,7 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_disable)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_disable);
 int
 rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
 {
@@ -1049,7 +1049,7 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
 /*
  * Pipeline run-time
  */
-RTE_EXPORT_SYMBOL(rte_pipeline_check)
+RTE_EXPORT_SYMBOL(rte_pipeline_check);
 int
 rte_pipeline_check(struct rte_pipeline *p)
 {
@@ -1323,7 +1323,7 @@ rte_pipeline_action_handler_drop(struct rte_pipeline *p, uint64_t pkts_mask)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_run)
+RTE_EXPORT_SYMBOL(rte_pipeline_run);
 int
 rte_pipeline_run(struct rte_pipeline *p)
 {
@@ -1463,7 +1463,7 @@ rte_pipeline_run(struct rte_pipeline *p)
 	return (int) n_pkts;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_flush)
+RTE_EXPORT_SYMBOL(rte_pipeline_flush);
 int
 rte_pipeline_flush(struct rte_pipeline *p)
 {
@@ -1486,7 +1486,7 @@ rte_pipeline_flush(struct rte_pipeline *p)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_packet_insert)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_packet_insert);
 int
 rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
 	uint32_t port_id, struct rte_mbuf *pkt)
@@ -1498,7 +1498,7 @@ rte_pipeline_port_out_packet_insert(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_hijack)
+RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_hijack);
 int rte_pipeline_ah_packet_hijack(struct rte_pipeline *p,
 	uint64_t pkts_mask)
 {
@@ -1508,7 +1508,7 @@ int rte_pipeline_ah_packet_hijack(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_drop)
+RTE_EXPORT_SYMBOL(rte_pipeline_ah_packet_drop);
 int rte_pipeline_ah_packet_drop(struct rte_pipeline *p,
 	uint64_t pkts_mask)
 {
@@ -1520,7 +1520,7 @@ int rte_pipeline_ah_packet_drop(struct rte_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_in_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_in_stats_read);
 int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	struct rte_pipeline_port_in_stats *stats, int clear)
 {
@@ -1558,7 +1558,7 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_port_out_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_port_out_stats_read);
 int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	struct rte_pipeline_port_out_stats *stats, int clear)
 {
@@ -1593,7 +1593,7 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pipeline_table_stats_read)
+RTE_EXPORT_SYMBOL(rte_pipeline_table_stats_read);
 int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id,
 	struct rte_pipeline_table_stats *stats, int clear)
 {
diff --git a/lib/pipeline/rte_port_in_action.c b/lib/pipeline/rte_port_in_action.c
index 2378e64de9..e52b0f24d1 100644
--- a/lib/pipeline/rte_port_in_action.c
+++ b/lib/pipeline/rte_port_in_action.c
@@ -201,7 +201,7 @@ struct rte_port_in_action_profile {
 	int frozen;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_create, 18.05);
 struct rte_port_in_action_profile *
 rte_port_in_action_profile_create(uint32_t socket_id)
 {
@@ -218,7 +218,7 @@ rte_port_in_action_profile_create(uint32_t socket_id)
 	return ap;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_action_register, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_action_register, 18.05);
 int
 rte_port_in_action_profile_action_register(struct rte_port_in_action_profile *profile,
 	enum rte_port_in_action_type type,
@@ -258,7 +258,7 @@ rte_port_in_action_profile_action_register(struct rte_port_in_action_profile *pr
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_freeze, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_freeze, 18.05);
 int
 rte_port_in_action_profile_freeze(struct rte_port_in_action_profile *profile)
 {
@@ -271,7 +271,7 @@ rte_port_in_action_profile_freeze(struct rte_port_in_action_profile *profile)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_profile_free, 18.05);
 int
 rte_port_in_action_profile_free(struct rte_port_in_action_profile *profile)
 {
@@ -320,7 +320,7 @@ action_data_init(struct rte_port_in_action *action,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_create, 18.05);
 struct rte_port_in_action *
 rte_port_in_action_create(struct rte_port_in_action_profile *profile,
 	uint32_t socket_id)
@@ -357,7 +357,7 @@ rte_port_in_action_create(struct rte_port_in_action_profile *profile,
 	return action;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_apply, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_apply, 18.05);
 int
 rte_port_in_action_apply(struct rte_port_in_action *action,
 	enum rte_port_in_action_type type,
@@ -505,7 +505,7 @@ ah_selector(struct rte_port_in_action *action)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_params_get, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_params_get, 18.05);
 int
 rte_port_in_action_params_get(struct rte_port_in_action *action,
 	struct rte_pipeline_port_in_params *params)
@@ -526,7 +526,7 @@ rte_port_in_action_params_get(struct rte_port_in_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_in_action_free, 18.05);
 int
 rte_port_in_action_free(struct rte_port_in_action *action)
 {
diff --git a/lib/pipeline/rte_swx_ctl.c b/lib/pipeline/rte_swx_ctl.c
index 4e9bb842a1..ea969e61a9 100644
--- a/lib/pipeline/rte_swx_ctl.c
+++ b/lib/pipeline/rte_swx_ctl.c
@@ -1171,7 +1171,7 @@ static struct rte_tailq_elem rte_swx_ctl_pipeline_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_ctl_pipeline_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_find, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_find, 22.11);
 struct rte_swx_ctl_pipeline *
 rte_swx_ctl_pipeline_find(const char *name)
 {
@@ -1251,7 +1251,7 @@ ctl_unregister(struct rte_swx_ctl_pipeline *ctl)
 	rte_mcfg_tailq_write_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_free, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_free, 20.11);
 void
 rte_swx_ctl_pipeline_free(struct rte_swx_ctl_pipeline *ctl)
 {
@@ -1274,7 +1274,7 @@ rte_swx_ctl_pipeline_free(struct rte_swx_ctl_pipeline *ctl)
 	free(ctl);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_create, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_create, 20.11);
 struct rte_swx_ctl_pipeline *
 rte_swx_ctl_pipeline_create(struct rte_swx_pipeline *p)
 {
@@ -1553,7 +1553,7 @@ rte_swx_ctl_pipeline_create(struct rte_swx_pipeline *p)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_add, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_add, 20.11);
 int
 rte_swx_ctl_pipeline_table_entry_add(struct rte_swx_ctl_pipeline *ctl,
 				     const char *table_name,
@@ -1668,7 +1668,7 @@ rte_swx_ctl_pipeline_table_entry_add(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_delete, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_delete, 20.11);
 int
 rte_swx_ctl_pipeline_table_entry_delete(struct rte_swx_ctl_pipeline *ctl,
 					const char *table_name,
@@ -1759,7 +1759,7 @@ rte_swx_ctl_pipeline_table_entry_delete(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_default_entry_add, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_default_entry_add, 20.11);
 int
 rte_swx_ctl_pipeline_table_default_entry_add(struct rte_swx_ctl_pipeline *ctl,
 					     const char *table_name,
@@ -2097,7 +2097,7 @@ table_abort(struct rte_swx_ctl_pipeline *ctl, uint32_t table_id)
 	table_pending_default_free(table);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_add, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_add, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_add(struct rte_swx_ctl_pipeline *ctl,
 					const char *selector_name,
@@ -2125,7 +2125,7 @@ rte_swx_ctl_pipeline_selector_group_add(struct rte_swx_ctl_pipeline *ctl,
 	return -ENOSPC;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_delete, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_delete, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_delete(struct rte_swx_ctl_pipeline *ctl,
 					   const char *selector_name,
@@ -2177,7 +2177,7 @@ rte_swx_ctl_pipeline_selector_group_delete(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_add, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_add, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_member_add(struct rte_swx_ctl_pipeline *ctl,
 					       const char *selector_name,
@@ -2237,7 +2237,7 @@ rte_swx_ctl_pipeline_selector_group_member_add(struct rte_swx_ctl_pipeline *ctl,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_delete, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_group_member_delete, 21.08);
 int
 rte_swx_ctl_pipeline_selector_group_member_delete(struct rte_swx_ctl_pipeline *ctl,
 						  const char *selector_name,
@@ -2491,7 +2491,7 @@ learner_default_entry_duplicate(struct rte_swx_ctl_pipeline *ctl,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_add, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_add, 21.11);
 int
 rte_swx_ctl_pipeline_learner_default_entry_add(struct rte_swx_ctl_pipeline *ctl,
 					       const char *learner_name,
@@ -2565,7 +2565,7 @@ learner_abort(struct rte_swx_ctl_pipeline *ctl, uint32_t learner_id)
 	learner_pending_default_free(l);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_commit, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_commit, 20.11);
 int
 rte_swx_ctl_pipeline_commit(struct rte_swx_ctl_pipeline *ctl, int abort_on_fail)
 {
@@ -2652,7 +2652,7 @@ rte_swx_ctl_pipeline_commit(struct rte_swx_ctl_pipeline *ctl, int abort_on_fail)
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_abort, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_abort, 20.11);
 void
 rte_swx_ctl_pipeline_abort(struct rte_swx_ctl_pipeline *ctl)
 {
@@ -2987,7 +2987,7 @@ token_is_comment(const char *token)
 
 #define RTE_SWX_CTL_ENTRY_TOKENS_MAX 256
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_entry_read, 20.11);
 struct rte_swx_table_entry *
 rte_swx_ctl_pipeline_table_entry_read(struct rte_swx_ctl_pipeline *ctl,
 				      const char *table_name,
@@ -3187,7 +3187,7 @@ rte_swx_ctl_pipeline_table_entry_read(struct rte_swx_ctl_pipeline *ctl,
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_read, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_default_entry_read, 21.11);
 struct rte_swx_table_entry *
 rte_swx_ctl_pipeline_learner_default_entry_read(struct rte_swx_ctl_pipeline *ctl,
 						const char *learner_name,
@@ -3340,7 +3340,7 @@ table_entry_printf(FILE *f,
 	fprintf(f, "\n");
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_fprintf, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_fprintf, 20.11);
 int
 rte_swx_ctl_pipeline_table_fprintf(FILE *f,
 				   struct rte_swx_ctl_pipeline *ctl,
@@ -3391,7 +3391,7 @@ rte_swx_ctl_pipeline_table_fprintf(FILE *f,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_fprintf, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_fprintf, 21.08);
 int
 rte_swx_ctl_pipeline_selector_fprintf(FILE *f,
 				      struct rte_swx_ctl_pipeline *ctl,
diff --git a/lib/pipeline/rte_swx_ipsec.c b/lib/pipeline/rte_swx_ipsec.c
index 553056fad2..2b7d767105 100644
--- a/lib/pipeline/rte_swx_ipsec.c
+++ b/lib/pipeline/rte_swx_ipsec.c
@@ -178,7 +178,7 @@ static struct rte_tailq_elem rte_swx_ipsec_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_ipsec_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_find, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_find, 23.03);
 struct rte_swx_ipsec *
 rte_swx_ipsec_find(const char *name)
 {
@@ -263,7 +263,7 @@ ipsec_unregister(struct rte_swx_ipsec *ipsec)
 static void
 ipsec_session_free(struct rte_swx_ipsec *ipsec, struct rte_ipsec_session *s);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_free, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_free, 23.03);
 void
 rte_swx_ipsec_free(struct rte_swx_ipsec *ipsec)
 {
@@ -294,7 +294,7 @@ rte_swx_ipsec_free(struct rte_swx_ipsec *ipsec)
 	env_free(ipsec, ipsec->total_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_create, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_create, 23.03);
 int
 rte_swx_ipsec_create(struct rte_swx_ipsec **ipsec_out,
 		     const char *name,
@@ -722,7 +722,7 @@ rte_swx_ipsec_post_crypto(struct rte_swx_ipsec *ipsec)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_run, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_run, 23.03);
 void
 rte_swx_ipsec_run(struct rte_swx_ipsec *ipsec)
 {
@@ -1134,7 +1134,7 @@ do {                                   \
 	}                              \
 } while (0)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_read, 23.03);
 struct rte_swx_ipsec_sa_params *
 rte_swx_ipsec_sa_read(struct rte_swx_ipsec *ipsec __rte_unused,
 		      const char *string,
@@ -1768,7 +1768,7 @@ ipsec_session_free(struct rte_swx_ipsec *ipsec,
 	memset(s, 0, sizeof(*s));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_add, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_add, 23.03);
 int
 rte_swx_ipsec_sa_add(struct rte_swx_ipsec *ipsec,
 		     struct rte_swx_ipsec_sa_params *sa_params,
@@ -1808,7 +1808,7 @@ rte_swx_ipsec_sa_add(struct rte_swx_ipsec *ipsec,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_delete, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ipsec_sa_delete, 23.03);
 void
 rte_swx_ipsec_sa_delete(struct rte_swx_ipsec *ipsec,
 			uint32_t sa_id)
diff --git a/lib/pipeline/rte_swx_pipeline.c b/lib/pipeline/rte_swx_pipeline.c
index 2193bc4ebf..d2d8730d2e 100644
--- a/lib/pipeline/rte_swx_pipeline.c
+++ b/lib/pipeline/rte_swx_pipeline.c
@@ -122,7 +122,7 @@ struct_type_field_find(struct struct_type *st, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_struct_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_struct_type_register, 20.11);
 int
 rte_swx_pipeline_struct_type_register(struct rte_swx_pipeline *p,
 				      const char *name,
@@ -254,7 +254,7 @@ port_in_type_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_type_register, 20.11);
 int
 rte_swx_pipeline_port_in_type_register(struct rte_swx_pipeline *p,
 				       const char *name,
@@ -298,7 +298,7 @@ port_in_find(struct rte_swx_pipeline *p, uint32_t port_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_in_config, 20.11);
 int
 rte_swx_pipeline_port_in_config(struct rte_swx_pipeline *p,
 				uint32_t port_id,
@@ -417,7 +417,7 @@ port_out_type_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_type_register, 20.11);
 int
 rte_swx_pipeline_port_out_type_register(struct rte_swx_pipeline *p,
 					const char *name,
@@ -463,7 +463,7 @@ port_out_find(struct rte_swx_pipeline *p, uint32_t port_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_port_out_config, 20.11);
 int
 rte_swx_pipeline_port_out_config(struct rte_swx_pipeline *p,
 				 uint32_t port_id,
@@ -570,7 +570,7 @@ port_out_free(struct rte_swx_pipeline *p)
 /*
  * Packet mirroring.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_mirroring_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_mirroring_config, 20.11);
 int
 rte_swx_pipeline_mirroring_config(struct rte_swx_pipeline *p,
 				  struct rte_swx_pipeline_mirroring_params *params)
@@ -767,7 +767,7 @@ extern_obj_mailbox_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_register, 20.11);
 int
 rte_swx_pipeline_extern_type_register(struct rte_swx_pipeline *p,
 	const char *name,
@@ -808,7 +808,7 @@ rte_swx_pipeline_extern_type_register(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_member_func_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_type_member_func_register, 20.11);
 int
 rte_swx_pipeline_extern_type_member_func_register(struct rte_swx_pipeline *p,
 	const char *extern_type_name,
@@ -846,7 +846,7 @@ rte_swx_pipeline_extern_type_member_func_register(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_object_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_object_config, 20.11);
 int
 rte_swx_pipeline_extern_object_config(struct rte_swx_pipeline *p,
 				      const char *extern_type_name,
@@ -1063,7 +1063,7 @@ extern_func_mailbox_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_func_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_extern_func_register, 20.11);
 int
 rte_swx_pipeline_extern_func_register(struct rte_swx_pipeline *p,
 				      const char *name,
@@ -1192,7 +1192,7 @@ hash_func_find(struct rte_swx_pipeline *p, const char *name)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_hash_func_register, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_hash_func_register, 22.07);
 int
 rte_swx_pipeline_hash_func_register(struct rte_swx_pipeline *p,
 				    const char *name,
@@ -1293,7 +1293,7 @@ rss_find_by_id(struct rte_swx_pipeline *p, uint32_t rss_obj_id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_rss_config, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_rss_config, 23.03);
 int
 rte_swx_pipeline_rss_config(struct rte_swx_pipeline *p, const char *name)
 {
@@ -1471,7 +1471,7 @@ header_field_parse(struct rte_swx_pipeline *p,
 	return f;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_header_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_header_register, 20.11);
 int
 rte_swx_pipeline_packet_header_register(struct rte_swx_pipeline *p,
 					const char *name,
@@ -1610,7 +1610,7 @@ metadata_field_parse(struct rte_swx_pipeline *p, const char *name)
 	return struct_type_field_find(p->metadata_st, &name[2]);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_metadata_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_packet_metadata_register, 20.11);
 int
 rte_swx_pipeline_packet_metadata_register(struct rte_swx_pipeline *p,
 					  const char *struct_type_name)
@@ -7870,7 +7870,7 @@ action_does_learning(struct action *a)
 	return 0; /* FALSE */
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_action_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_action_config, 20.11);
 int
 rte_swx_pipeline_action_config(struct rte_swx_pipeline *p,
 			       const char *name,
@@ -8235,7 +8235,7 @@ table_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_type_register, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_type_register, 20.11);
 int
 rte_swx_pipeline_table_type_register(struct rte_swx_pipeline *p,
 				     const char *name,
@@ -8405,7 +8405,7 @@ table_match_fields_check(struct rte_swx_pipeline *p,
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_config, 20.11);
 int
 rte_swx_pipeline_table_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -8909,7 +8909,7 @@ selector_fields_check(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_selector_config, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_selector_config, 21.08);
 int
 rte_swx_pipeline_selector_config(struct rte_swx_pipeline *p,
 				 const char *name,
@@ -9382,7 +9382,7 @@ learner_action_learning_check(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_learner_config, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_learner_config, 21.11);
 int
 rte_swx_pipeline_learner_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -9956,7 +9956,7 @@ regarray_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_regarray_config, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_regarray_config, 21.05);
 int
 rte_swx_pipeline_regarray_config(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -10095,7 +10095,7 @@ metarray_find_by_id(struct rte_swx_pipeline *p, uint32_t id)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_metarray_config, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_metarray_config, 21.05);
 int
 rte_swx_pipeline_metarray_config(struct rte_swx_pipeline *p,
 				 const char *name,
@@ -10246,7 +10246,7 @@ static struct rte_tailq_elem rte_swx_pipeline_tailq = {
 
 EAL_REGISTER_TAILQ(rte_swx_pipeline_tailq)
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_find, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_find, 22.11);
 struct rte_swx_pipeline *
 rte_swx_pipeline_find(const char *name)
 {
@@ -10326,7 +10326,7 @@ pipeline_unregister(struct rte_swx_pipeline *p)
 	rte_mcfg_tailq_write_unlock();
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_free, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_free, 20.11);
 void
 rte_swx_pipeline_free(struct rte_swx_pipeline *p)
 {
@@ -10472,7 +10472,7 @@ hash_funcs_register(struct rte_swx_pipeline *p)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_config, 20.11);
 int
 rte_swx_pipeline_config(struct rte_swx_pipeline **p, const char *name, int numa_node)
 {
@@ -10549,7 +10549,7 @@ rte_swx_pipeline_config(struct rte_swx_pipeline **p, const char *name, int numa_
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_instructions_config, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_instructions_config, 20.11);
 int
 rte_swx_pipeline_instructions_config(struct rte_swx_pipeline *p,
 				     const char **instructions,
@@ -10572,7 +10572,7 @@ rte_swx_pipeline_instructions_config(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build, 20.11);
 int
 rte_swx_pipeline_build(struct rte_swx_pipeline *p)
 {
@@ -10691,7 +10691,7 @@ rte_swx_pipeline_build(struct rte_swx_pipeline *p)
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_run, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_run, 20.11);
 void
 rte_swx_pipeline_run(struct rte_swx_pipeline *p, uint32_t n_instructions)
 {
@@ -10701,7 +10701,7 @@ rte_swx_pipeline_run(struct rte_swx_pipeline *p, uint32_t n_instructions)
 		instr_exec(p);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_flush, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_flush, 20.11);
 void
 rte_swx_pipeline_flush(struct rte_swx_pipeline *p)
 {
@@ -10718,7 +10718,7 @@ rte_swx_pipeline_flush(struct rte_swx_pipeline *p)
 /*
  * Control.
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_info_get, 20.11);
 int
 rte_swx_ctl_pipeline_info_get(struct rte_swx_pipeline *p,
 			      struct rte_swx_ctl_pipeline_info *pipeline)
@@ -10752,7 +10752,7 @@ rte_swx_ctl_pipeline_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_numa_node_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_numa_node_get, 20.11);
 int
 rte_swx_ctl_pipeline_numa_node_get(struct rte_swx_pipeline *p, int *numa_node)
 {
@@ -10763,7 +10763,7 @@ rte_swx_ctl_pipeline_numa_node_get(struct rte_swx_pipeline *p, int *numa_node)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_info_get, 20.11);
 int
 rte_swx_ctl_action_info_get(struct rte_swx_pipeline *p,
 			    uint32_t action_id,
@@ -10783,7 +10783,7 @@ rte_swx_ctl_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_arg_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_action_arg_info_get, 20.11);
 int
 rte_swx_ctl_action_arg_info_get(struct rte_swx_pipeline *p,
 				uint32_t action_id,
@@ -10808,7 +10808,7 @@ rte_swx_ctl_action_arg_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_info_get, 20.11);
 int
 rte_swx_ctl_table_info_get(struct rte_swx_pipeline *p,
 			   uint32_t table_id,
@@ -10833,7 +10833,7 @@ rte_swx_ctl_table_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_match_field_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_match_field_info_get, 20.11);
 int
 rte_swx_ctl_table_match_field_info_get(struct rte_swx_pipeline *p,
 	uint32_t table_id,
@@ -10859,7 +10859,7 @@ rte_swx_ctl_table_match_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_action_info_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_action_info_get, 20.11);
 int
 rte_swx_ctl_table_action_info_get(struct rte_swx_pipeline *p,
 	uint32_t table_id,
@@ -10883,7 +10883,7 @@ rte_swx_ctl_table_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_ops_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_table_ops_get, 20.11);
 int
 rte_swx_ctl_table_ops_get(struct rte_swx_pipeline *p,
 			  uint32_t table_id,
@@ -10910,7 +10910,7 @@ rte_swx_ctl_table_ops_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_info_get, 21.08);
 int
 rte_swx_ctl_selector_info_get(struct rte_swx_pipeline *p,
 			      uint32_t selector_id,
@@ -10934,7 +10934,7 @@ rte_swx_ctl_selector_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_group_id_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_group_id_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_group_id_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -10957,7 +10957,7 @@ rte_swx_ctl_selector_group_id_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -10983,7 +10983,7 @@ rte_swx_ctl_selector_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_member_id_field_info_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_selector_member_id_field_info_get, 21.08);
 int
 rte_swx_ctl_selector_member_id_field_info_get(struct rte_swx_pipeline *p,
 	 uint32_t selector_id,
@@ -11006,7 +11006,7 @@ rte_swx_ctl_selector_member_id_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_info_get, 21.11);
 int
 rte_swx_ctl_learner_info_get(struct rte_swx_pipeline *p,
 			     uint32_t learner_id,
@@ -11032,7 +11032,7 @@ rte_swx_ctl_learner_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_match_field_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_match_field_info_get, 21.11);
 int
 rte_swx_ctl_learner_match_field_info_get(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11058,7 +11058,7 @@ rte_swx_ctl_learner_match_field_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_action_info_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_learner_action_info_get, 21.11);
 int
 rte_swx_ctl_learner_action_info_get(struct rte_swx_pipeline *p,
 				    uint32_t learner_id,
@@ -11085,7 +11085,7 @@ rte_swx_ctl_learner_action_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_get, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_get, 22.07);
 int
 rte_swx_ctl_pipeline_learner_timeout_get(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11105,7 +11105,7 @@ rte_swx_ctl_pipeline_learner_timeout_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_set, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_timeout_set, 22.07);
 int
 rte_swx_ctl_pipeline_learner_timeout_set(struct rte_swx_pipeline *p,
 					 uint32_t learner_id,
@@ -11137,7 +11137,7 @@ rte_swx_ctl_pipeline_learner_timeout_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_get, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_get, 20.11);
 int
 rte_swx_pipeline_table_state_get(struct rte_swx_pipeline *p,
 				 struct rte_swx_table_state **table_state)
@@ -11149,7 +11149,7 @@ rte_swx_pipeline_table_state_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_table_state_set, 20.11);
 int
 rte_swx_pipeline_table_state_set(struct rte_swx_pipeline *p,
 				 struct rte_swx_table_state *table_state)
@@ -11161,7 +11161,7 @@ rte_swx_pipeline_table_state_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_in_stats_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_in_stats_read, 20.11);
 int
 rte_swx_ctl_pipeline_port_in_stats_read(struct rte_swx_pipeline *p,
 					uint32_t port_id,
@@ -11180,7 +11180,7 @@ rte_swx_ctl_pipeline_port_in_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_out_stats_read, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_port_out_stats_read, 20.11);
 int
 rte_swx_ctl_pipeline_port_out_stats_read(struct rte_swx_pipeline *p,
 					 uint32_t port_id,
@@ -11199,7 +11199,7 @@ rte_swx_ctl_pipeline_port_out_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_stats_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_table_stats_read, 21.05);
 int
 rte_swx_ctl_pipeline_table_stats_read(struct rte_swx_pipeline *p,
 				      const char *table_name,
@@ -11227,7 +11227,7 @@ rte_swx_ctl_pipeline_table_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_stats_read, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_selector_stats_read, 21.08);
 int
 rte_swx_ctl_pipeline_selector_stats_read(struct rte_swx_pipeline *p,
 	const char *selector_name,
@@ -11247,7 +11247,7 @@ rte_swx_ctl_pipeline_selector_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_stats_read, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_learner_stats_read, 21.11);
 int
 rte_swx_ctl_pipeline_learner_stats_read(struct rte_swx_pipeline *p,
 					const char *learner_name,
@@ -11281,7 +11281,7 @@ rte_swx_ctl_pipeline_learner_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_regarray_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_regarray_info_get, 21.05);
 int
 rte_swx_ctl_regarray_info_get(struct rte_swx_pipeline *p,
 			      uint32_t regarray_id,
@@ -11301,7 +11301,7 @@ rte_swx_ctl_regarray_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read, 21.05);
 int
 rte_swx_ctl_pipeline_regarray_read(struct rte_swx_pipeline *p,
 				   const char *regarray_name,
@@ -11323,7 +11323,7 @@ rte_swx_ctl_pipeline_regarray_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write, 21.05);
 int
 rte_swx_ctl_pipeline_regarray_write(struct rte_swx_pipeline *p,
 				   const char *regarray_name,
@@ -11345,7 +11345,7 @@ rte_swx_ctl_pipeline_regarray_write(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_metarray_info_get, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_metarray_info_get, 21.05);
 int
 rte_swx_ctl_metarray_info_get(struct rte_swx_pipeline *p,
 			      uint32_t metarray_id,
@@ -11365,7 +11365,7 @@ rte_swx_ctl_metarray_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_add, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_add, 21.05);
 int
 rte_swx_ctl_meter_profile_add(struct rte_swx_pipeline *p,
 			      const char *name,
@@ -11398,7 +11398,7 @@ rte_swx_ctl_meter_profile_add(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_delete, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_profile_delete, 21.05);
 int
 rte_swx_ctl_meter_profile_delete(struct rte_swx_pipeline *p,
 				 const char *name)
@@ -11419,7 +11419,7 @@ rte_swx_ctl_meter_profile_delete(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset, 21.05);
 int
 rte_swx_ctl_meter_reset(struct rte_swx_pipeline *p,
 			const char *metarray_name,
@@ -11448,7 +11448,7 @@ rte_swx_ctl_meter_reset(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set, 21.05);
 int
 rte_swx_ctl_meter_set(struct rte_swx_pipeline *p,
 		      const char *metarray_name,
@@ -11485,7 +11485,7 @@ rte_swx_ctl_meter_set(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read, 21.05);
 int
 rte_swx_ctl_meter_stats_read(struct rte_swx_pipeline *p,
 			     const char *metarray_name,
@@ -11514,7 +11514,7 @@ rte_swx_ctl_meter_stats_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_mirroring_session_set, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_mirroring_session_set, 20.11);
 int
 rte_swx_ctl_pipeline_mirroring_session_set(struct rte_swx_pipeline *p,
 					   uint32_t session_id,
@@ -11721,7 +11721,7 @@ rte_swx_ctl_pipeline_table_entry_id_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_read_with_key, 22.11);
 int
 rte_swx_ctl_pipeline_regarray_read_with_key(struct rte_swx_pipeline *p,
 					    const char *regarray_name,
@@ -11739,7 +11739,7 @@ rte_swx_ctl_pipeline_regarray_read_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_pipeline_regarray_read(p, regarray_name, entry_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_regarray_write_with_key, 22.11);
 int
 rte_swx_ctl_pipeline_regarray_write_with_key(struct rte_swx_pipeline *p,
 					     const char *regarray_name,
@@ -11757,7 +11757,7 @@ rte_swx_ctl_pipeline_regarray_write_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_pipeline_regarray_write(p, regarray_name, entry_id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_reset_with_key, 22.11);
 int
 rte_swx_ctl_meter_reset_with_key(struct rte_swx_pipeline *p,
 				 const char *metarray_name,
@@ -11774,7 +11774,7 @@ rte_swx_ctl_meter_reset_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_reset(p, metarray_name, entry_id);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_set_with_key, 22.11);
 int
 rte_swx_ctl_meter_set_with_key(struct rte_swx_pipeline *p,
 			       const char *metarray_name,
@@ -11792,7 +11792,7 @@ rte_swx_ctl_meter_set_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_set(p, metarray_name, entry_id, profile_name);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read_with_key, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_meter_stats_read_with_key, 22.11);
 int
 rte_swx_ctl_meter_stats_read_with_key(struct rte_swx_pipeline *p,
 				      const char *metarray_name,
@@ -11810,7 +11810,7 @@ rte_swx_ctl_meter_stats_read_with_key(struct rte_swx_pipeline *p,
 	return rte_swx_ctl_meter_stats_read(p, metarray_name, entry_id, stats);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_rss_info_get, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_rss_info_get, 23.03);
 int
 rte_swx_ctl_rss_info_get(struct rte_swx_pipeline *p,
 			 uint32_t rss_obj_id,
@@ -11831,7 +11831,7 @@ rte_swx_ctl_rss_info_get(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_size_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_size_read, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_size_read(struct rte_swx_pipeline *p,
 				       const char *rss_name,
@@ -11856,7 +11856,7 @@ rte_swx_ctl_pipeline_rss_key_size_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_read, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_read, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_read(struct rte_swx_pipeline *p,
 				  const char *rss_name,
@@ -11881,7 +11881,7 @@ rte_swx_ctl_pipeline_rss_key_read(struct rte_swx_pipeline *p,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_write, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_ctl_pipeline_rss_key_write, 23.03);
 int
 rte_swx_ctl_pipeline_rss_key_write(struct rte_swx_pipeline *p,
 				   const char *rss_name,
@@ -14584,7 +14584,7 @@ pipeline_adjust(struct rte_swx_pipeline *p, struct instruction_group_list *igl)
 	instr_jmp_resolve(p->instructions, p->instruction_data, p->n_instructions);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_codegen, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_codegen, 22.11);
 int
 rte_swx_pipeline_codegen(FILE *spec_file,
 			 FILE *code_file,
@@ -14678,7 +14678,7 @@ rte_swx_pipeline_codegen(FILE *spec_file,
 	return status;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build_from_lib, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_pipeline_build_from_lib, 22.11);
 int
 rte_swx_pipeline_build_from_lib(struct rte_swx_pipeline **pipeline,
 				const char *name,
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index c990d7eb56..f05e046c46 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2363,7 +2363,7 @@ struct rte_table_action_profile {
 	int frozen;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_create, 18.05);
 struct rte_table_action_profile *
 rte_table_action_profile_create(struct rte_table_action_common_config *common)
 {
@@ -2385,7 +2385,7 @@ rte_table_action_profile_create(struct rte_table_action_common_config *common)
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_action_register, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_action_register, 18.05);
 int
 rte_table_action_profile_action_register(struct rte_table_action_profile *profile,
 	enum rte_table_action_type type,
@@ -2449,7 +2449,7 @@ rte_table_action_profile_action_register(struct rte_table_action_profile *profil
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_freeze, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_freeze, 18.05);
 int
 rte_table_action_profile_freeze(struct rte_table_action_profile *profile)
 {
@@ -2463,7 +2463,7 @@ rte_table_action_profile_freeze(struct rte_table_action_profile *profile)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_profile_free, 18.05);
 int
 rte_table_action_profile_free(struct rte_table_action_profile *profile)
 {
@@ -2486,7 +2486,7 @@ struct rte_table_action {
 	struct meter_profile_data mp[METER_PROFILES_MAX];
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_create, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_create, 18.05);
 struct rte_table_action *
 rte_table_action_create(struct rte_table_action_profile *profile,
 	uint32_t socket_id)
@@ -2524,7 +2524,7 @@ action_data_get(void *data,
 	return &data_bytes[offset];
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_apply, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_apply, 18.05);
 int
 rte_table_action_apply(struct rte_table_action *action,
 	void *data,
@@ -2606,7 +2606,7 @@ rte_table_action_apply(struct rte_table_action *action,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_dscp_table_update, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_dscp_table_update, 18.05);
 int
 rte_table_action_dscp_table_update(struct rte_table_action *action,
 	uint64_t dscp_mask,
@@ -2639,7 +2639,7 @@ rte_table_action_dscp_table_update(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_add, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_add, 18.05);
 int
 rte_table_action_meter_profile_add(struct rte_table_action *action,
 	uint32_t meter_profile_id,
@@ -2680,7 +2680,7 @@ rte_table_action_meter_profile_add(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_delete, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_profile_delete, 18.05);
 int
 rte_table_action_meter_profile_delete(struct rte_table_action *action,
 	uint32_t meter_profile_id)
@@ -2704,7 +2704,7 @@ rte_table_action_meter_profile_delete(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_meter_read, 18.05);
 int
 rte_table_action_meter_read(struct rte_table_action *action,
 	void *data,
@@ -2767,7 +2767,7 @@ rte_table_action_meter_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_ttl_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_ttl_read, 18.05);
 int
 rte_table_action_ttl_read(struct rte_table_action *action,
 	void *data,
@@ -2796,7 +2796,7 @@ rte_table_action_ttl_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_stats_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_stats_read, 18.05);
 int
 rte_table_action_stats_read(struct rte_table_action *action,
 	void *data,
@@ -2832,7 +2832,7 @@ rte_table_action_stats_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_time_read, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_time_read, 18.05);
 int
 rte_table_action_time_read(struct rte_table_action *action,
 	void *data,
@@ -2856,7 +2856,7 @@ rte_table_action_time_read(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_crypto_sym_session_get, 18.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_crypto_sym_session_get, 18.11);
 struct rte_cryptodev_sym_session *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data)
@@ -3444,7 +3444,7 @@ ah_selector(struct rte_table_action *action)
 	return ah_default;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_table_params_get, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_table_params_get, 18.05);
 int
 rte_table_action_table_params_get(struct rte_table_action *action,
 	struct rte_pipeline_table_params *params)
@@ -3470,7 +3470,7 @@ rte_table_action_table_params_get(struct rte_table_action *action,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_free, 18.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_table_action_free, 18.05);
 int
 rte_table_action_free(struct rte_table_action *action)
 {
diff --git a/lib/pmu/pmu.c b/lib/pmu/pmu.c
index 4c7271522a..b169e957ec 100644
--- a/lib/pmu/pmu.c
+++ b/lib/pmu/pmu.c
@@ -37,7 +37,7 @@ struct rte_pmu_event {
 	TAILQ_ENTRY(rte_pmu_event) next;
 };
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_pmu)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_pmu);
 struct rte_pmu rte_pmu;
 
 /* Stubs for arch-specific functions */
@@ -291,7 +291,7 @@ cleanup_events(struct rte_pmu_event_group *group)
 	group->enabled = false;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_pmu_enable_group, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(__rte_pmu_enable_group, 25.07);
 int
 __rte_pmu_enable_group(struct rte_pmu_event_group *group)
 {
@@ -393,7 +393,7 @@ free_event(struct rte_pmu_event *event)
 	free(event);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_add_event, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_add_event, 25.07);
 int
 rte_pmu_add_event(const char *name)
 {
@@ -436,7 +436,7 @@ rte_pmu_add_event(const char *name)
 	return event->index;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_init, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_init, 25.07);
 int
 rte_pmu_init(void)
 {
@@ -468,7 +468,7 @@ rte_pmu_init(void)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_fini, 25.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_pmu_fini, 25.07);
 void
 rte_pmu_fini(void)
 {
diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c
index bdab2fbf6c..970214b17b 100644
--- a/lib/port/rte_port_ethdev.c
+++ b/lib/port/rte_port_ethdev.c
@@ -501,7 +501,7 @@ static int rte_port_ethdev_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ethdev_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_reader_ops);
 struct rte_port_in_ops rte_port_ethdev_reader_ops = {
 	.f_create = rte_port_ethdev_reader_create,
 	.f_free = rte_port_ethdev_reader_free,
@@ -509,7 +509,7 @@ struct rte_port_in_ops rte_port_ethdev_reader_ops = {
 	.f_stats = rte_port_ethdev_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_ops);
 struct rte_port_out_ops rte_port_ethdev_writer_ops = {
 	.f_create = rte_port_ethdev_writer_create,
 	.f_free = rte_port_ethdev_writer_free,
@@ -519,7 +519,7 @@ struct rte_port_out_ops rte_port_ethdev_writer_ops = {
 	.f_stats = rte_port_ethdev_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ethdev_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ethdev_writer_nodrop_ops = {
 	.f_create = rte_port_ethdev_writer_nodrop_create,
 	.f_free = rte_port_ethdev_writer_nodrop_free,
diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c
index c3a287b834..fac71da321 100644
--- a/lib/port/rte_port_eventdev.c
+++ b/lib/port/rte_port_eventdev.c
@@ -561,7 +561,7 @@ static int rte_port_eventdev_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_reader_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_reader_ops, 19.11);
 struct rte_port_in_ops rte_port_eventdev_reader_ops = {
 	.f_create = rte_port_eventdev_reader_create,
 	.f_free = rte_port_eventdev_reader_free,
@@ -569,7 +569,7 @@ struct rte_port_in_ops rte_port_eventdev_reader_ops = {
 	.f_stats = rte_port_eventdev_reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_ops, 19.11);
 struct rte_port_out_ops rte_port_eventdev_writer_ops = {
 	.f_create = rte_port_eventdev_writer_create,
 	.f_free = rte_port_eventdev_writer_free,
@@ -579,7 +579,7 @@ struct rte_port_out_ops rte_port_eventdev_writer_ops = {
 	.f_stats = rte_port_eventdev_writer_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_nodrop_ops, 19.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_port_eventdev_writer_nodrop_ops, 19.11);
 struct rte_port_out_ops rte_port_eventdev_writer_nodrop_ops = {
 	.f_create = rte_port_eventdev_writer_nodrop_create,
 	.f_free = rte_port_eventdev_writer_nodrop_free,
diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c
index dbc9efef1b..1f210986bd 100644
--- a/lib/port/rte_port_fd.c
+++ b/lib/port/rte_port_fd.c
@@ -495,7 +495,7 @@ static int rte_port_fd_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_fd_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_reader_ops);
 struct rte_port_in_ops rte_port_fd_reader_ops = {
 	.f_create = rte_port_fd_reader_create,
 	.f_free = rte_port_fd_reader_free,
@@ -503,7 +503,7 @@ struct rte_port_in_ops rte_port_fd_reader_ops = {
 	.f_stats = rte_port_fd_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_fd_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_writer_ops);
 struct rte_port_out_ops rte_port_fd_writer_ops = {
 	.f_create = rte_port_fd_writer_create,
 	.f_free = rte_port_fd_writer_free,
@@ -513,7 +513,7 @@ struct rte_port_out_ops rte_port_fd_writer_ops = {
 	.f_stats = rte_port_fd_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_fd_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_fd_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_fd_writer_nodrop_ops = {
 	.f_create = rte_port_fd_writer_nodrop_create,
 	.f_free = rte_port_fd_writer_nodrop_free,
diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c
index 9444f5939c..914b276031 100644
--- a/lib/port/rte_port_frag.c
+++ b/lib/port/rte_port_frag.c
@@ -263,7 +263,7 @@ rte_port_frag_reader_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv4_frag_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv4_frag_ops);
 struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops = {
 	.f_create = rte_port_ring_reader_ipv4_frag_create,
 	.f_free = rte_port_ring_reader_frag_free,
@@ -271,7 +271,7 @@ struct rte_port_in_ops rte_port_ring_reader_ipv4_frag_ops = {
 	.f_stats = rte_port_frag_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv6_frag_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ipv6_frag_ops);
 struct rte_port_in_ops rte_port_ring_reader_ipv6_frag_ops = {
 	.f_create = rte_port_ring_reader_ipv6_frag_create,
 	.f_free = rte_port_ring_reader_frag_free,
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 58ab7a1c5b..1bffbce8ee 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -315,7 +315,7 @@ rte_port_ras_writer_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv4_ras_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv4_ras_ops);
 struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops = {
 	.f_create = rte_port_ring_writer_ipv4_ras_create,
 	.f_free = rte_port_ring_writer_ras_free,
@@ -325,7 +325,7 @@ struct rte_port_out_ops rte_port_ring_writer_ipv4_ras_ops = {
 	.f_stats = rte_port_ras_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv6_ras_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ipv6_ras_ops);
 struct rte_port_out_ops rte_port_ring_writer_ipv6_ras_ops = {
 	.f_create = rte_port_ring_writer_ipv6_ras_create,
 	.f_free = rte_port_ring_writer_ras_free,
diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c
index 307a576d65..dc61b20aa6 100644
--- a/lib/port/rte_port_ring.c
+++ b/lib/port/rte_port_ring.c
@@ -739,7 +739,7 @@ rte_port_ring_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_ring_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_reader_ops);
 struct rte_port_in_ops rte_port_ring_reader_ops = {
 	.f_create = rte_port_ring_reader_create,
 	.f_free = rte_port_ring_reader_free,
@@ -747,7 +747,7 @@ struct rte_port_in_ops rte_port_ring_reader_ops = {
 	.f_stats = rte_port_ring_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_ops);
 struct rte_port_out_ops rte_port_ring_writer_ops = {
 	.f_create = rte_port_ring_writer_create,
 	.f_free = rte_port_ring_writer_free,
@@ -757,7 +757,7 @@ struct rte_port_out_ops rte_port_ring_writer_ops = {
 	.f_stats = rte_port_ring_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ring_writer_nodrop_ops = {
 	.f_create = rte_port_ring_writer_nodrop_create,
 	.f_free = rte_port_ring_writer_nodrop_free,
@@ -767,7 +767,7 @@ struct rte_port_out_ops rte_port_ring_writer_nodrop_ops = {
 	.f_stats = rte_port_ring_writer_nodrop_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_reader_ops);
 struct rte_port_in_ops rte_port_ring_multi_reader_ops = {
 	.f_create = rte_port_ring_multi_reader_create,
 	.f_free = rte_port_ring_reader_free,
@@ -775,7 +775,7 @@ struct rte_port_in_ops rte_port_ring_multi_reader_ops = {
 	.f_stats = rte_port_ring_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_ops);
 struct rte_port_out_ops rte_port_ring_multi_writer_ops = {
 	.f_create = rte_port_ring_multi_writer_create,
 	.f_free = rte_port_ring_writer_free,
@@ -785,7 +785,7 @@ struct rte_port_out_ops rte_port_ring_multi_writer_ops = {
 	.f_stats = rte_port_ring_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_ring_multi_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_ring_multi_writer_nodrop_ops = {
 	.f_create = rte_port_ring_multi_writer_nodrop_create,
 	.f_free = rte_port_ring_writer_nodrop_free,
diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c
index 3091078aa1..ab46e8dec6 100644
--- a/lib/port/rte_port_sched.c
+++ b/lib/port/rte_port_sched.c
@@ -279,7 +279,7 @@ rte_port_sched_writer_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_sched_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_sched_reader_ops);
 struct rte_port_in_ops rte_port_sched_reader_ops = {
 	.f_create = rte_port_sched_reader_create,
 	.f_free = rte_port_sched_reader_free,
@@ -287,7 +287,7 @@ struct rte_port_in_ops rte_port_sched_reader_ops = {
 	.f_stats = rte_port_sched_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sched_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_sched_writer_ops);
 struct rte_port_out_ops rte_port_sched_writer_ops = {
 	.f_create = rte_port_sched_writer_create,
 	.f_free = rte_port_sched_writer_free,
diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c
index 0557e12506..a492fa55ec 100644
--- a/lib/port/rte_port_source_sink.c
+++ b/lib/port/rte_port_source_sink.c
@@ -597,7 +597,7 @@ rte_port_sink_stats_read(void *port, struct rte_port_out_stats *stats,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_source_ops)
+RTE_EXPORT_SYMBOL(rte_port_source_ops);
 struct rte_port_in_ops rte_port_source_ops = {
 	.f_create = rte_port_source_create,
 	.f_free = rte_port_source_free,
@@ -605,7 +605,7 @@ struct rte_port_in_ops rte_port_source_ops = {
 	.f_stats = rte_port_source_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sink_ops)
+RTE_EXPORT_SYMBOL(rte_port_sink_ops);
 struct rte_port_out_ops rte_port_sink_ops = {
 	.f_create = rte_port_sink_create,
 	.f_free = rte_port_sink_free,
diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c
index 30c9d1283e..bfd6a82b56 100644
--- a/lib/port/rte_port_sym_crypto.c
+++ b/lib/port/rte_port_sym_crypto.c
@@ -529,7 +529,7 @@ static int rte_port_sym_crypto_writer_nodrop_stats_read(void *port,
 /*
  * Summary of port operations
  */
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_reader_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_reader_ops);
 struct rte_port_in_ops rte_port_sym_crypto_reader_ops = {
 	.f_create = rte_port_sym_crypto_reader_create,
 	.f_free = rte_port_sym_crypto_reader_free,
@@ -537,7 +537,7 @@ struct rte_port_in_ops rte_port_sym_crypto_reader_ops = {
 	.f_stats = rte_port_sym_crypto_reader_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_ops);
 struct rte_port_out_ops rte_port_sym_crypto_writer_ops = {
 	.f_create = rte_port_sym_crypto_writer_create,
 	.f_free = rte_port_sym_crypto_writer_free,
@@ -547,7 +547,7 @@ struct rte_port_out_ops rte_port_sym_crypto_writer_ops = {
 	.f_stats = rte_port_sym_crypto_writer_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_nodrop_ops)
+RTE_EXPORT_SYMBOL(rte_port_sym_crypto_writer_nodrop_ops);
 struct rte_port_out_ops rte_port_sym_crypto_writer_nodrop_ops = {
 	.f_create = rte_port_sym_crypto_writer_nodrop_create,
 	.f_free = rte_port_sym_crypto_writer_nodrop_free,
diff --git a/lib/port/rte_swx_port_ethdev.c b/lib/port/rte_swx_port_ethdev.c
index de6d0e5bb3..8c26794aa3 100644
--- a/lib/port/rte_swx_port_ethdev.c
+++ b/lib/port/rte_swx_port_ethdev.c
@@ -402,7 +402,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_reader_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_reader_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_ethdev_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -410,7 +410,7 @@ struct rte_swx_port_in_ops rte_swx_port_ethdev_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_writer_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ethdev_writer_ops, 20.11);
 struct rte_swx_port_out_ops rte_swx_port_ethdev_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_fd.c b/lib/port/rte_swx_port_fd.c
index 72783d2b0f..dfddf69ccc 100644
--- a/lib/port/rte_swx_port_fd.c
+++ b/lib/port/rte_swx_port_fd.c
@@ -345,7 +345,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_reader_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_reader_ops, 21.05);
 struct rte_swx_port_in_ops rte_swx_port_fd_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -353,7 +353,7 @@ struct rte_swx_port_in_ops rte_swx_port_fd_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_writer_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_fd_writer_ops, 21.05);
 struct rte_swx_port_out_ops rte_swx_port_fd_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_ring.c b/lib/port/rte_swx_port_ring.c
index 3ac652ac09..f8d6b77e48 100644
--- a/lib/port/rte_swx_port_ring.c
+++ b/lib/port/rte_swx_port_ring.c
@@ -407,7 +407,7 @@ writer_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_reader_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_reader_ops, 21.05);
 struct rte_swx_port_in_ops rte_swx_port_ring_reader_ops = {
 	.create = reader_create,
 	.free = reader_free,
@@ -415,7 +415,7 @@ struct rte_swx_port_in_ops rte_swx_port_ring_reader_ops = {
 	.stats_read = reader_stats_read,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_writer_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_ring_writer_ops, 21.05);
 struct rte_swx_port_out_ops rte_swx_port_ring_writer_ops = {
 	.create = writer_create,
 	.free = writer_free,
diff --git a/lib/port/rte_swx_port_source_sink.c b/lib/port/rte_swx_port_source_sink.c
index af8b9ec68d..bcfcb8091e 100644
--- a/lib/port/rte_swx_port_source_sink.c
+++ b/lib/port/rte_swx_port_source_sink.c
@@ -202,7 +202,7 @@ source_stats_read(void *port, struct rte_swx_port_in_stats *stats)
 	memcpy(stats, &p->stats, sizeof(p->stats));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 	.create = source_create,
 	.free = source_free,
@@ -212,7 +212,7 @@ struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 
 #else
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_source_ops, 20.11);
 struct rte_swx_port_in_ops rte_swx_port_source_ops = {
 	.create = NULL,
 	.free = NULL,
@@ -383,7 +383,7 @@ sink_stats_read(void *port, struct rte_swx_port_out_stats *stats)
 /*
  * Summary of port operations
  */
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_sink_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_port_sink_ops, 20.11);
 struct rte_swx_port_out_ops rte_swx_port_sink_ops = {
 	.create = sink_create,
 	.free = sink_free,
diff --git a/lib/power/power_common.c b/lib/power/power_common.c
index 2da034e9d0..3fae203e69 100644
--- a/lib/power/power_common.c
+++ b/lib/power/power_common.c
@@ -14,7 +14,7 @@
 
 #include "power_common.h"
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_logtype)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_logtype);
 RTE_LOG_REGISTER_DEFAULT(rte_power_logtype, INFO);
 
 #define POWER_SYSFILE_SCALING_DRIVER   \
@@ -23,7 +23,7 @@ RTE_LOG_REGISTER_DEFAULT(rte_power_logtype, INFO);
 		"/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor"
 #define POWER_CONVERT_TO_DECIMAL 10
 
-RTE_EXPORT_INTERNAL_SYMBOL(cpufreq_check_scaling_driver)
+RTE_EXPORT_INTERNAL_SYMBOL(cpufreq_check_scaling_driver);
 int
 cpufreq_check_scaling_driver(const char *driver_name)
 {
@@ -69,7 +69,7 @@ cpufreq_check_scaling_driver(const char *driver_name)
 	return 1;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(open_core_sysfs_file)
+RTE_EXPORT_INTERNAL_SYMBOL(open_core_sysfs_file);
 int
 open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
 {
@@ -88,7 +88,7 @@ open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_u32)
+RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_u32);
 int
 read_core_sysfs_u32(FILE *f, uint32_t *val)
 {
@@ -114,7 +114,7 @@ read_core_sysfs_u32(FILE *f, uint32_t *val)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_s)
+RTE_EXPORT_INTERNAL_SYMBOL(read_core_sysfs_s);
 int
 read_core_sysfs_s(FILE *f, char *buf, unsigned int len)
 {
@@ -133,7 +133,7 @@ read_core_sysfs_s(FILE *f, char *buf, unsigned int len)
 	return 0;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(write_core_sysfs_s)
+RTE_EXPORT_INTERNAL_SYMBOL(write_core_sysfs_s);
 int
 write_core_sysfs_s(FILE *f, const char *str)
 {
@@ -160,7 +160,7 @@ write_core_sysfs_s(FILE *f, const char *str)
  * set it into 'performance' if it is not by writing the sys file. The original
  * governor will be saved for rolling back.
  */
-RTE_EXPORT_INTERNAL_SYMBOL(power_set_governor)
+RTE_EXPORT_INTERNAL_SYMBOL(power_set_governor);
 int
 power_set_governor(unsigned int lcore_id, const char *new_governor,
 		char *orig_governor, size_t orig_governor_len)
@@ -214,7 +214,7 @@ power_set_governor(unsigned int lcore_id, const char *new_governor,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(power_get_lcore_mapped_cpu_id)
+RTE_EXPORT_INTERNAL_SYMBOL(power_get_lcore_mapped_cpu_id);
 int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id)
 {
 	rte_cpuset_t lcore_cpus;
diff --git a/lib/power/rte_power_cpufreq.c b/lib/power/rte_power_cpufreq.c
index d4db03a4e5..c5964ee0e6 100644
--- a/lib/power/rte_power_cpufreq.c
+++ b/lib/power/rte_power_cpufreq.c
@@ -26,7 +26,7 @@ const char *power_env_str[] = {
 };
 
 /* register the ops struct in rte_power_cpufreq_ops, return 0 on success. */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_cpufreq_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_cpufreq_ops);
 int
 rte_power_register_cpufreq_ops(struct rte_power_cpufreq_ops *driver_ops)
 {
@@ -46,7 +46,7 @@ rte_power_register_cpufreq_ops(struct rte_power_cpufreq_ops *driver_ops)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_check_env_supported)
+RTE_EXPORT_SYMBOL(rte_power_check_env_supported);
 int
 rte_power_check_env_supported(enum power_management_env env)
 {
@@ -63,7 +63,7 @@ rte_power_check_env_supported(enum power_management_env env)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_env)
+RTE_EXPORT_SYMBOL(rte_power_set_env);
 int
 rte_power_set_env(enum power_management_env env)
 {
@@ -93,7 +93,7 @@ rte_power_set_env(enum power_management_env env)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_unset_env)
+RTE_EXPORT_SYMBOL(rte_power_unset_env);
 void
 rte_power_unset_env(void)
 {
@@ -103,13 +103,13 @@ rte_power_unset_env(void)
 	rte_spinlock_unlock(&global_env_cfg_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_env)
+RTE_EXPORT_SYMBOL(rte_power_get_env);
 enum power_management_env
 rte_power_get_env(void) {
 	return global_default_env;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_init)
+RTE_EXPORT_SYMBOL(rte_power_init);
 int
 rte_power_init(unsigned int lcore_id)
 {
@@ -143,7 +143,7 @@ rte_power_init(unsigned int lcore_id)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_exit)
+RTE_EXPORT_SYMBOL(rte_power_exit);
 int
 rte_power_exit(unsigned int lcore_id)
 {
@@ -156,7 +156,7 @@ rte_power_exit(unsigned int lcore_id)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freqs)
+RTE_EXPORT_SYMBOL(rte_power_freqs);
 uint32_t
 rte_power_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t n)
 {
@@ -164,7 +164,7 @@ rte_power_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t n)
 	return global_cpufreq_ops->get_avail_freqs(lcore_id, freqs, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_freq)
+RTE_EXPORT_SYMBOL(rte_power_get_freq);
 uint32_t
 rte_power_get_freq(unsigned int lcore_id)
 {
@@ -172,7 +172,7 @@ rte_power_get_freq(unsigned int lcore_id)
 	return global_cpufreq_ops->get_freq(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_freq)
+RTE_EXPORT_SYMBOL(rte_power_set_freq);
 uint32_t
 rte_power_set_freq(unsigned int lcore_id, uint32_t index)
 {
@@ -180,7 +180,7 @@ rte_power_set_freq(unsigned int lcore_id, uint32_t index)
 	return global_cpufreq_ops->set_freq(lcore_id, index);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_up)
+RTE_EXPORT_SYMBOL(rte_power_freq_up);
 int
 rte_power_freq_up(unsigned int lcore_id)
 {
@@ -188,7 +188,7 @@ rte_power_freq_up(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_up(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_down)
+RTE_EXPORT_SYMBOL(rte_power_freq_down);
 int
 rte_power_freq_down(unsigned int lcore_id)
 {
@@ -196,7 +196,7 @@ rte_power_freq_down(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_down(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_freq_max);
 int
 rte_power_freq_max(unsigned int lcore_id)
 {
@@ -204,7 +204,7 @@ rte_power_freq_max(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_max(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_freq_min);
 int
 rte_power_freq_min(unsigned int lcore_id)
 {
@@ -212,7 +212,7 @@ rte_power_freq_min(unsigned int lcore_id)
 	return global_cpufreq_ops->freq_min(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_turbo_status)
+RTE_EXPORT_SYMBOL(rte_power_turbo_status);
 int
 rte_power_turbo_status(unsigned int lcore_id)
 {
@@ -220,7 +220,7 @@ rte_power_turbo_status(unsigned int lcore_id)
 	return global_cpufreq_ops->turbo_status(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_enable_turbo)
+RTE_EXPORT_SYMBOL(rte_power_freq_enable_turbo);
 int
 rte_power_freq_enable_turbo(unsigned int lcore_id)
 {
@@ -228,7 +228,7 @@ rte_power_freq_enable_turbo(unsigned int lcore_id)
 	return global_cpufreq_ops->enable_turbo(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_freq_disable_turbo)
+RTE_EXPORT_SYMBOL(rte_power_freq_disable_turbo);
 int
 rte_power_freq_disable_turbo(unsigned int lcore_id)
 {
@@ -236,7 +236,7 @@ rte_power_freq_disable_turbo(unsigned int lcore_id)
 	return global_cpufreq_ops->disable_turbo(lcore_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_capabilities)
+RTE_EXPORT_SYMBOL(rte_power_get_capabilities);
 int
 rte_power_get_capabilities(unsigned int lcore_id,
 		struct rte_power_core_capabilities *caps)
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 6cacc562c2..77b940f493 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -497,7 +497,7 @@ get_monitor_callback(void)
 		clb_multiwait : clb_umwait;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_enable)
+RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_enable);
 int
 rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		uint16_t queue_id, enum rte_power_pmd_mgmt_type mode)
@@ -615,7 +615,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_disable)
+RTE_EXPORT_SYMBOL(rte_power_ethdev_pmgmt_queue_disable);
 int
 rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 		uint16_t port_id, uint16_t queue_id)
@@ -691,21 +691,21 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_emptypoll_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_emptypoll_max);
 void
 rte_power_pmd_mgmt_set_emptypoll_max(unsigned int max)
 {
 	emptypoll_max = max;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_emptypoll_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_emptypoll_max);
 unsigned int
 rte_power_pmd_mgmt_get_emptypoll_max(void)
 {
 	return emptypoll_max;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_pause_duration)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_pause_duration);
 int
 rte_power_pmd_mgmt_set_pause_duration(unsigned int duration)
 {
@@ -718,14 +718,14 @@ rte_power_pmd_mgmt_set_pause_duration(unsigned int duration)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_pause_duration)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_pause_duration);
 unsigned int
 rte_power_pmd_mgmt_get_pause_duration(void)
 {
 	return pause_duration;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_min);
 int
 rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min)
 {
@@ -743,7 +743,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_set_scaling_freq_max);
 int
 rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
 {
@@ -765,7 +765,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_min);
 int
 rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore)
 {
@@ -780,7 +780,7 @@ rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore)
 	return scale_freq_min[lcore];
 }
 
-RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_pmd_mgmt_get_scaling_freq_max);
 int
 rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
 {
diff --git a/lib/power/rte_power_qos.c b/lib/power/rte_power_qos.c
index be230d1c50..f7cd085819 100644
--- a/lib/power/rte_power_qos.c
+++ b/lib/power/rte_power_qos.c
@@ -18,7 +18,7 @@
 
 #define PM_QOS_CPU_RESUME_LATENCY_BUF_LEN	32
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_set_cpu_resume_latency, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_set_cpu_resume_latency, 24.11);
 int
 rte_power_qos_set_cpu_resume_latency(uint16_t lcore_id, int latency)
 {
@@ -72,7 +72,7 @@ rte_power_qos_set_cpu_resume_latency(uint16_t lcore_id, int latency)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_get_cpu_resume_latency, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_qos_get_cpu_resume_latency, 24.11);
 int
 rte_power_qos_get_cpu_resume_latency(uint16_t lcore_id)
 {
diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c
index 30cd374127..c827d8bada 100644
--- a/lib/power/rte_power_uncore.c
+++ b/lib/power/rte_power_uncore.c
@@ -25,7 +25,7 @@ const char *uncore_env_str[] = {
 };
 
 /* register the ops struct in rte_power_uncore_ops, return 0 on success. */
-RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_uncore_ops)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_power_register_uncore_ops);
 int
 rte_power_register_uncore_ops(struct rte_power_uncore_ops *driver_ops)
 {
@@ -46,7 +46,7 @@ rte_power_register_uncore_ops(struct rte_power_uncore_ops *driver_ops)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_set_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_set_uncore_env, 23.11);
 int
 rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
 {
@@ -86,7 +86,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_unset_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_unset_uncore_env, 23.11);
 void
 rte_power_unset_uncore_env(void)
 {
@@ -95,14 +95,14 @@ rte_power_unset_uncore_env(void)
 	rte_spinlock_unlock(&global_env_cfg_lock);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_get_uncore_env, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_get_uncore_env, 23.11);
 enum rte_uncore_power_mgmt_env
 rte_power_get_uncore_env(void)
 {
 	return global_uncore_env;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_init)
+RTE_EXPORT_SYMBOL(rte_power_uncore_init);
 int
 rte_power_uncore_init(unsigned int pkg, unsigned int die)
 {
@@ -134,7 +134,7 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_exit)
+RTE_EXPORT_SYMBOL(rte_power_uncore_exit);
 int
 rte_power_uncore_exit(unsigned int pkg, unsigned int die)
 {
@@ -148,7 +148,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_power_get_uncore_freq)
+RTE_EXPORT_SYMBOL(rte_power_get_uncore_freq);
 uint32_t
 rte_power_get_uncore_freq(unsigned int pkg, unsigned int die)
 {
@@ -156,7 +156,7 @@ rte_power_get_uncore_freq(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->get_freq(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_set_uncore_freq)
+RTE_EXPORT_SYMBOL(rte_power_set_uncore_freq);
 int
 rte_power_set_uncore_freq(unsigned int pkg, unsigned int die, uint32_t index)
 {
@@ -164,7 +164,7 @@ rte_power_set_uncore_freq(unsigned int pkg, unsigned int die, uint32_t index)
 	return global_uncore_ops->set_freq(pkg, die, index);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_freq_max)
+RTE_EXPORT_SYMBOL(rte_power_uncore_freq_max);
 int
 rte_power_uncore_freq_max(unsigned int pkg, unsigned int die)
 {
@@ -172,7 +172,7 @@ rte_power_uncore_freq_max(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->freq_max(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_freq_min)
+RTE_EXPORT_SYMBOL(rte_power_uncore_freq_min);
 int
 rte_power_uncore_freq_min(unsigned int pkg, unsigned int die)
 {
@@ -180,7 +180,7 @@ rte_power_uncore_freq_min(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->freq_min(pkg, die);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_uncore_freqs, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_power_uncore_freqs, 23.11);
 int
 rte_power_uncore_freqs(unsigned int pkg, unsigned int die,
 			uint32_t *freqs, uint32_t num)
@@ -189,7 +189,7 @@ rte_power_uncore_freqs(unsigned int pkg, unsigned int die,
 	return global_uncore_ops->get_avail_freqs(pkg, die, freqs, num);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_freqs)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_freqs);
 int
 rte_power_uncore_get_num_freqs(unsigned int pkg, unsigned int die)
 {
@@ -197,7 +197,7 @@ rte_power_uncore_get_num_freqs(unsigned int pkg, unsigned int die)
 	return global_uncore_ops->get_num_freqs(pkg, die);
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_pkgs)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_pkgs);
 unsigned int
 rte_power_uncore_get_num_pkgs(void)
 {
@@ -205,7 +205,7 @@ rte_power_uncore_get_num_pkgs(void)
 	return global_uncore_ops->get_num_pkgs();
 }
 
-RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_dies)
+RTE_EXPORT_SYMBOL(rte_power_uncore_get_num_dies);
 unsigned int
 rte_power_uncore_get_num_dies(unsigned int pkg)
 {
diff --git a/lib/rawdev/rte_rawdev.c b/lib/rawdev/rte_rawdev.c
index 4da7956d5a..e1ea7667dc 100644
--- a/lib/rawdev/rte_rawdev.c
+++ b/lib/rawdev/rte_rawdev.c
@@ -23,7 +23,7 @@
 
 static struct rte_rawdev rte_rawdevices[RTE_RAWDEV_MAX_DEVS];
 
-RTE_EXPORT_SYMBOL(rte_rawdevs)
+RTE_EXPORT_SYMBOL(rte_rawdevs);
 struct rte_rawdev *rte_rawdevs = rte_rawdevices;
 
 static struct rte_rawdev_global rawdev_globals = {
@@ -31,14 +31,14 @@ static struct rte_rawdev_global rawdev_globals = {
 };
 
 /* Raw device, northbound API implementation */
-RTE_EXPORT_SYMBOL(rte_rawdev_count)
+RTE_EXPORT_SYMBOL(rte_rawdev_count);
 uint8_t
 rte_rawdev_count(void)
 {
 	return rawdev_globals.nb_devs;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_get_dev_id)
+RTE_EXPORT_SYMBOL(rte_rawdev_get_dev_id);
 uint16_t
 rte_rawdev_get_dev_id(const char *name)
 {
@@ -56,7 +56,7 @@ rte_rawdev_get_dev_id(const char *name)
 	return -ENODEV;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_socket_id)
+RTE_EXPORT_SYMBOL(rte_rawdev_socket_id);
 int
 rte_rawdev_socket_id(uint16_t dev_id)
 {
@@ -68,7 +68,7 @@ rte_rawdev_socket_id(uint16_t dev_id)
 	return dev->socket_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_info_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_info_get);
 int
 rte_rawdev_info_get(uint16_t dev_id, struct rte_rawdev_info *dev_info,
 		size_t dev_private_size)
@@ -97,7 +97,7 @@ rte_rawdev_info_get(uint16_t dev_id, struct rte_rawdev_info *dev_info,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_configure)
+RTE_EXPORT_SYMBOL(rte_rawdev_configure);
 int
 rte_rawdev_configure(uint16_t dev_id, struct rte_rawdev_info *dev_conf,
 		size_t dev_private_size)
@@ -130,7 +130,7 @@ rte_rawdev_configure(uint16_t dev_id, struct rte_rawdev_info *dev_conf,
 	return diag;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_conf_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_conf_get);
 int
 rte_rawdev_queue_conf_get(uint16_t dev_id,
 			  uint16_t queue_id,
@@ -147,7 +147,7 @@ rte_rawdev_queue_conf_get(uint16_t dev_id,
 	return dev->dev_ops->queue_def_conf(dev, queue_id, queue_conf, queue_conf_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_setup)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_setup);
 int
 rte_rawdev_queue_setup(uint16_t dev_id,
 		       uint16_t queue_id,
@@ -164,7 +164,7 @@ rte_rawdev_queue_setup(uint16_t dev_id,
 	return dev->dev_ops->queue_setup(dev, queue_id, queue_conf, queue_conf_size);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_release)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_release);
 int
 rte_rawdev_queue_release(uint16_t dev_id, uint16_t queue_id)
 {
@@ -178,7 +178,7 @@ rte_rawdev_queue_release(uint16_t dev_id, uint16_t queue_id)
 	return dev->dev_ops->queue_release(dev, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_queue_count)
+RTE_EXPORT_SYMBOL(rte_rawdev_queue_count);
 uint16_t
 rte_rawdev_queue_count(uint16_t dev_id)
 {
@@ -192,7 +192,7 @@ rte_rawdev_queue_count(uint16_t dev_id)
 	return dev->dev_ops->queue_count(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_get_attr)
+RTE_EXPORT_SYMBOL(rte_rawdev_get_attr);
 int
 rte_rawdev_get_attr(uint16_t dev_id,
 		    const char *attr_name,
@@ -208,7 +208,7 @@ rte_rawdev_get_attr(uint16_t dev_id,
 	return dev->dev_ops->attr_get(dev, attr_name, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_set_attr)
+RTE_EXPORT_SYMBOL(rte_rawdev_set_attr);
 int
 rte_rawdev_set_attr(uint16_t dev_id,
 		    const char *attr_name,
@@ -224,7 +224,7 @@ rte_rawdev_set_attr(uint16_t dev_id,
 	return dev->dev_ops->attr_set(dev, attr_name, attr_value);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_enqueue_buffers)
+RTE_EXPORT_SYMBOL(rte_rawdev_enqueue_buffers);
 int
 rte_rawdev_enqueue_buffers(uint16_t dev_id,
 			   struct rte_rawdev_buf **buffers,
@@ -241,7 +241,7 @@ rte_rawdev_enqueue_buffers(uint16_t dev_id,
 	return dev->dev_ops->enqueue_bufs(dev, buffers, count, context);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_dequeue_buffers)
+RTE_EXPORT_SYMBOL(rte_rawdev_dequeue_buffers);
 int
 rte_rawdev_dequeue_buffers(uint16_t dev_id,
 			   struct rte_rawdev_buf **buffers,
@@ -258,7 +258,7 @@ rte_rawdev_dequeue_buffers(uint16_t dev_id,
 	return dev->dev_ops->dequeue_bufs(dev, buffers, count, context);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_dump)
+RTE_EXPORT_SYMBOL(rte_rawdev_dump);
 int
 rte_rawdev_dump(uint16_t dev_id, FILE *f)
 {
@@ -282,7 +282,7 @@ xstats_get_count(uint16_t dev_id)
 	return dev->dev_ops->xstats_get_names(dev, NULL, 0);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_names_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_names_get);
 int
 rte_rawdev_xstats_names_get(uint16_t dev_id,
 		struct rte_rawdev_xstats_name *xstats_names,
@@ -307,7 +307,7 @@ rte_rawdev_xstats_names_get(uint16_t dev_id,
 }
 
 /* retrieve rawdev extended statistics */
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_get);
 int
 rte_rawdev_xstats_get(uint16_t dev_id,
 		      const unsigned int ids[],
@@ -322,7 +322,7 @@ rte_rawdev_xstats_get(uint16_t dev_id,
 	return dev->dev_ops->xstats_get(dev, ids, values, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_by_name_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_by_name_get);
 uint64_t
 rte_rawdev_xstats_by_name_get(uint16_t dev_id,
 			      const char *name,
@@ -343,7 +343,7 @@ rte_rawdev_xstats_by_name_get(uint16_t dev_id,
 	return dev->dev_ops->xstats_get_by_name(dev, name, id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_xstats_reset)
+RTE_EXPORT_SYMBOL(rte_rawdev_xstats_reset);
 int
 rte_rawdev_xstats_reset(uint16_t dev_id,
 			const uint32_t ids[], uint32_t nb_ids)
@@ -356,7 +356,7 @@ rte_rawdev_xstats_reset(uint16_t dev_id,
 	return dev->dev_ops->xstats_reset(dev, ids, nb_ids);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_status_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_status_get);
 int
 rte_rawdev_firmware_status_get(uint16_t dev_id, rte_rawdev_obj_t status_info)
 {
@@ -368,7 +368,7 @@ rte_rawdev_firmware_status_get(uint16_t dev_id, rte_rawdev_obj_t status_info)
 	return dev->dev_ops->firmware_status_get(dev, status_info);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_version_get)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_version_get);
 int
 rte_rawdev_firmware_version_get(uint16_t dev_id, rte_rawdev_obj_t version_info)
 {
@@ -380,7 +380,7 @@ rte_rawdev_firmware_version_get(uint16_t dev_id, rte_rawdev_obj_t version_info)
 	return dev->dev_ops->firmware_version_get(dev, version_info);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_load)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_load);
 int
 rte_rawdev_firmware_load(uint16_t dev_id, rte_rawdev_obj_t firmware_image)
 {
@@ -395,7 +395,7 @@ rte_rawdev_firmware_load(uint16_t dev_id, rte_rawdev_obj_t firmware_image)
 	return dev->dev_ops->firmware_load(dev, firmware_image);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_firmware_unload)
+RTE_EXPORT_SYMBOL(rte_rawdev_firmware_unload);
 int
 rte_rawdev_firmware_unload(uint16_t dev_id)
 {
@@ -407,7 +407,7 @@ rte_rawdev_firmware_unload(uint16_t dev_id)
 	return dev->dev_ops->firmware_unload(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_selftest)
+RTE_EXPORT_SYMBOL(rte_rawdev_selftest);
 int
 rte_rawdev_selftest(uint16_t dev_id)
 {
@@ -419,7 +419,7 @@ rte_rawdev_selftest(uint16_t dev_id)
 	return dev->dev_ops->dev_selftest(dev_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_start)
+RTE_EXPORT_SYMBOL(rte_rawdev_start);
 int
 rte_rawdev_start(uint16_t dev_id)
 {
@@ -448,7 +448,7 @@ rte_rawdev_start(uint16_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_stop)
+RTE_EXPORT_SYMBOL(rte_rawdev_stop);
 void
 rte_rawdev_stop(uint16_t dev_id)
 {
@@ -474,7 +474,7 @@ rte_rawdev_stop(uint16_t dev_id)
 	dev->started = 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_close)
+RTE_EXPORT_SYMBOL(rte_rawdev_close);
 int
 rte_rawdev_close(uint16_t dev_id)
 {
@@ -495,7 +495,7 @@ rte_rawdev_close(uint16_t dev_id)
 	return dev->dev_ops->dev_close(dev);
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_reset)
+RTE_EXPORT_SYMBOL(rte_rawdev_reset);
 int
 rte_rawdev_reset(uint16_t dev_id)
 {
@@ -524,7 +524,7 @@ rte_rawdev_find_free_device_index(void)
 	return RTE_RAWDEV_MAX_DEVS;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_pmd_allocate)
+RTE_EXPORT_SYMBOL(rte_rawdev_pmd_allocate);
 struct rte_rawdev *
 rte_rawdev_pmd_allocate(const char *name, size_t dev_priv_size, int socket_id)
 {
@@ -566,7 +566,7 @@ rte_rawdev_pmd_allocate(const char *name, size_t dev_priv_size, int socket_id)
 	return rawdev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rawdev_pmd_release)
+RTE_EXPORT_SYMBOL(rte_rawdev_pmd_release);
 int
 rte_rawdev_pmd_release(struct rte_rawdev *rawdev)
 {
diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c
index ac6d464b7f..b9c4e8b2e1 100644
--- a/lib/rcu/rte_rcu_qsbr.c
+++ b/lib/rcu/rte_rcu_qsbr.c
@@ -24,7 +24,7 @@
 	RTE_LOG_LINE_PREFIX(level, RCU, "%s(): ", __func__, __VA_ARGS__)
 
 /* Get the memory size of QSBR variable */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_get_memsize)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_get_memsize);
 size_t
 rte_rcu_qsbr_get_memsize(uint32_t max_threads)
 {
@@ -49,7 +49,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads)
 }
 
 /* Initialize a quiescent state variable */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_init)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_init);
 int
 rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
 {
@@ -81,7 +81,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads)
 /* Register a reader thread to report its quiescent state
  * on a QS variable.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_register)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_register);
 int
 rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -117,7 +117,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id)
 /* Remove a reader thread, from the list of threads reporting their
  * quiescent state on a QS variable.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_unregister)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_thread_unregister);
 int
 rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -154,7 +154,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id)
 }
 
 /* Wait till the reader threads have entered quiescent state. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_synchronize)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_synchronize);
 void
 rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
 {
@@ -175,7 +175,7 @@ rte_rcu_qsbr_synchronize(struct rte_rcu_qsbr *v, unsigned int thread_id)
 }
 
 /* Dump the details of a single quiescent state variable to a file. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dump)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dump);
 int
 rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
 {
@@ -242,7 +242,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
 /* Create a queue used to store the data structure elements that can
  * be freed later. This queue is referred to as 'defer queue'.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_create)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_create);
 struct rte_rcu_qsbr_dq *
 rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params)
 {
@@ -319,7 +319,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params)
 /* Enqueue one resource to the defer queue to free after the grace
  * period is over.
  */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_enqueue)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_enqueue);
 int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e)
 {
 	__rte_rcu_qsbr_dq_elem_t *dq_elem;
@@ -378,7 +378,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e)
 }
 
 /* Reclaim resources from the defer queue. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_reclaim)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_reclaim);
 int
 rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n,
 			unsigned int *freed, unsigned int *pending,
@@ -428,7 +428,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n,
 }
 
 /* Delete a defer queue. */
-RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_delete)
+RTE_EXPORT_SYMBOL(rte_rcu_qsbr_dq_delete);
 int
 rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq)
 {
@@ -454,5 +454,5 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rcu_log_type)
+RTE_EXPORT_SYMBOL(rte_rcu_log_type);
 RTE_LOG_REGISTER_DEFAULT(rte_rcu_log_type, ERR);
diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c
index 8ba797b278..d824c381b0 100644
--- a/lib/regexdev/rte_regexdev.c
+++ b/lib/regexdev/rte_regexdev.c
@@ -14,14 +14,14 @@
 #include "rte_regexdev_driver.h"
 
 static const char *MZ_RTE_REGEXDEV_DATA = "rte_regexdev_data";
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regex_devices, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regex_devices, 22.03);
 struct rte_regexdev rte_regex_devices[RTE_MAX_REGEXDEV_DEVS];
 /* Shared memory between primary and secondary processes. */
 static struct {
 	struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS];
 } *rte_regexdev_shared_data;
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_logtype, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_logtype, 22.03);
 RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO);
 
 static uint16_t
@@ -92,7 +92,7 @@ regexdev_check_name(const char *name)
 
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_register);
 struct rte_regexdev *
 rte_regexdev_register(const char *name)
 {
@@ -130,14 +130,14 @@ rte_regexdev_register(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_unregister)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_unregister);
 void
 rte_regexdev_unregister(struct rte_regexdev *dev)
 {
 	dev->state = RTE_REGEXDEV_UNUSED;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_get_device_by_name)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_regexdev_get_device_by_name);
 struct rte_regexdev *
 rte_regexdev_get_device_by_name(const char *name)
 {
@@ -146,7 +146,7 @@ rte_regexdev_get_device_by_name(const char *name)
 	return regexdev_allocated(name);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_count, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_count, 20.08);
 uint8_t
 rte_regexdev_count(void)
 {
@@ -160,7 +160,7 @@ rte_regexdev_count(void)
 	return count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_get_dev_id, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_get_dev_id, 20.08);
 int
 rte_regexdev_get_dev_id(const char *name)
 {
@@ -179,7 +179,7 @@ rte_regexdev_get_dev_id(const char *name)
 	return id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_is_valid_dev, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_is_valid_dev, 22.03);
 int
 rte_regexdev_is_valid_dev(uint16_t dev_id)
 {
@@ -204,14 +204,14 @@ regexdev_info_get(uint8_t dev_id, struct rte_regexdev_info *dev_info)
 
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_info_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_info_get, 20.08);
 int
 rte_regexdev_info_get(uint8_t dev_id, struct rte_regexdev_info *dev_info)
 {
 	return regexdev_info_get(dev_id, dev_info);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_configure, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_configure, 20.08);
 int
 rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg)
 {
@@ -306,7 +306,7 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_queue_pair_setup, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_queue_pair_setup, 20.08);
 int
 rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			   const struct rte_regexdev_qp_conf *qp_conf)
@@ -339,7 +339,7 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	return dev->dev_ops->dev_qp_setup(dev, queue_pair_id, qp_conf);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_start, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_start, 20.08);
 int
 rte_regexdev_start(uint8_t dev_id)
 {
@@ -356,7 +356,7 @@ rte_regexdev_start(uint8_t dev_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_stop, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_stop, 20.08);
 int
 rte_regexdev_stop(uint8_t dev_id)
 {
@@ -371,7 +371,7 @@ rte_regexdev_stop(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_close, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_close, 20.08);
 int
 rte_regexdev_close(uint8_t dev_id)
 {
@@ -387,7 +387,7 @@ rte_regexdev_close(uint8_t dev_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_get, 20.08);
 int
 rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 		      void *attr_value)
@@ -406,7 +406,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 	return dev->dev_ops->dev_attr_get(dev, attr_id, attr_value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_set, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_attr_set, 20.08);
 int
 rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 		      const void *attr_value)
@@ -425,7 +425,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id,
 	return dev->dev_ops->dev_attr_set(dev, attr_id, attr_value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_update, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_update, 20.08);
 int
 rte_regexdev_rule_db_update(uint8_t dev_id,
 			    const struct rte_regexdev_rule *rules,
@@ -445,7 +445,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id,
 	return dev->dev_ops->dev_rule_db_update(dev, rules, nb_rules);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_compile_activate, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_compile_activate, 20.08);
 int
 rte_regexdev_rule_db_compile_activate(uint8_t dev_id)
 {
@@ -458,7 +458,7 @@ rte_regexdev_rule_db_compile_activate(uint8_t dev_id)
 	return dev->dev_ops->dev_rule_db_compile_activate(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_import, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_import, 20.08);
 int
 rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db,
 			    uint32_t rule_db_len)
@@ -477,7 +477,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db,
 	return dev->dev_ops->dev_db_import(dev, rule_db, rule_db_len);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_export, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_rule_db_export, 20.08);
 int
 rte_regexdev_rule_db_export(uint8_t dev_id, char *rule_db)
 {
@@ -490,7 +490,7 @@ rte_regexdev_rule_db_export(uint8_t dev_id, char *rule_db)
 	return dev->dev_ops->dev_db_export(dev, rule_db);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_names_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_names_get, 20.08);
 int
 rte_regexdev_xstats_names_get(uint8_t dev_id,
 			      struct rte_regexdev_xstats_map *xstats_map)
@@ -509,7 +509,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id,
 	return dev->dev_ops->dev_xstats_names_get(dev, xstats_map);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_get, 20.08);
 int
 rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids,
 			uint64_t *values, uint16_t n)
@@ -531,7 +531,7 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids,
 	return dev->dev_ops->dev_xstats_get(dev, ids, values, n);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_by_name_get, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_by_name_get, 20.08);
 int
 rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name,
 				uint16_t *id, uint64_t *value)
@@ -557,7 +557,7 @@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name,
 	return dev->dev_ops->dev_xstats_by_name_get(dev, name, id, value);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_reset, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_xstats_reset, 20.08);
 int
 rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids,
 			  uint16_t nb_ids)
@@ -575,7 +575,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids,
 	return dev->dev_ops->dev_xstats_reset(dev, ids, nb_ids);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_selftest, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_selftest, 20.08);
 int
 rte_regexdev_selftest(uint8_t dev_id)
 {
@@ -588,7 +588,7 @@ rte_regexdev_selftest(uint8_t dev_id)
 	return dev->dev_ops->dev_selftest(dev);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_dump, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_regexdev_dump, 20.08);
 int
 rte_regexdev_dump(uint8_t dev_id, FILE *f)
 {
diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index be06530860..e2d8114c2f 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -35,7 +35,7 @@ EAL_REGISTER_TAILQ(rte_reorder_tailq)
 #define RTE_REORDER_NAMESIZE 32
 
 #define RTE_REORDER_SEQN_DYNFIELD_NAME "rte_reorder_seqn_dynfield"
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_seqn_dynfield_offset, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_seqn_dynfield_offset, 20.11);
 int rte_reorder_seqn_dynfield_offset = -1;
 
 /* A generic circular buffer */
@@ -61,14 +61,14 @@ struct __rte_cache_aligned rte_reorder_buffer {
 static void
 rte_reorder_free_mbufs(struct rte_reorder_buffer *b);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_memory_footprint_get, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_memory_footprint_get, 23.07);
 unsigned int
 rte_reorder_memory_footprint_get(unsigned int size)
 {
 	return sizeof(struct rte_reorder_buffer) + (2 * size * sizeof(struct rte_mbuf *));
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_init)
+RTE_EXPORT_SYMBOL(rte_reorder_init);
 struct rte_reorder_buffer *
 rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize,
 		const char *name, unsigned int size)
@@ -158,7 +158,7 @@ rte_reorder_entry_insert(struct rte_tailq_entry *new_te)
 	return te;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_create)
+RTE_EXPORT_SYMBOL(rte_reorder_create);
 struct rte_reorder_buffer*
 rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
 {
@@ -215,7 +215,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
 	return b;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_reset)
+RTE_EXPORT_SYMBOL(rte_reorder_reset);
 void
 rte_reorder_reset(struct rte_reorder_buffer *b)
 {
@@ -239,7 +239,7 @@ rte_reorder_free_mbufs(struct rte_reorder_buffer *b)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_free)
+RTE_EXPORT_SYMBOL(rte_reorder_free);
 void
 rte_reorder_free(struct rte_reorder_buffer *b)
 {
@@ -274,7 +274,7 @@ rte_reorder_free(struct rte_reorder_buffer *b)
 	rte_free(te);
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_find_existing)
+RTE_EXPORT_SYMBOL(rte_reorder_find_existing);
 struct rte_reorder_buffer *
 rte_reorder_find_existing(const char *name)
 {
@@ -356,7 +356,7 @@ rte_reorder_fill_overflow(struct rte_reorder_buffer *b, unsigned n)
 	return order_head_adv;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_insert)
+RTE_EXPORT_SYMBOL(rte_reorder_insert);
 int
 rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf)
 {
@@ -423,7 +423,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_reorder_drain)
+RTE_EXPORT_SYMBOL(rte_reorder_drain);
 unsigned int
 rte_reorder_drain(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		unsigned max_mbufs)
@@ -482,7 +482,7 @@ ready_buffer_seqn_find(const struct cir_buffer *ready_buf, const uint32_t seqn)
 	return low;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_drain_up_to_seqn, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_drain_up_to_seqn, 23.03);
 unsigned int
 rte_reorder_drain_up_to_seqn(struct rte_reorder_buffer *b, struct rte_mbuf **mbufs,
 		const unsigned int max_mbufs, const rte_reorder_seqn_t seqn)
@@ -553,7 +553,7 @@ rte_reorder_is_empty(const struct rte_reorder_buffer *b)
 	return true;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_min_seqn_set, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_reorder_min_seqn_set, 23.03);
 unsigned int
 rte_reorder_min_seqn_set(struct rte_reorder_buffer *b, rte_reorder_seqn_t min_seqn)
 {
diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c
index 046db131ca..216ac4180c 100644
--- a/lib/rib/rte_rib.c
+++ b/lib/rib/rte_rib.c
@@ -102,7 +102,7 @@ node_free(struct rte_rib *rib, struct rte_rib_node *ent)
 	rte_mempool_put(rib->node_pool, ent);
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup)
+RTE_EXPORT_SYMBOL(rte_rib_lookup);
 struct rte_rib_node *
 rte_rib_lookup(struct rte_rib *rib, uint32_t ip)
 {
@@ -122,7 +122,7 @@ rte_rib_lookup(struct rte_rib *rib, uint32_t ip)
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup_parent)
+RTE_EXPORT_SYMBOL(rte_rib_lookup_parent);
 struct rte_rib_node *
 rte_rib_lookup_parent(struct rte_rib_node *ent)
 {
@@ -154,7 +154,7 @@ __rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_lookup_exact)
+RTE_EXPORT_SYMBOL(rte_rib_lookup_exact);
 struct rte_rib_node *
 rte_rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -172,7 +172,7 @@ rte_rib_lookup_exact(struct rte_rib *rib, uint32_t ip, uint8_t depth)
  *  for a given in args ip/depth prefix
  *  last = NULL means the first invocation
  */
-RTE_EXPORT_SYMBOL(rte_rib_get_nxt)
+RTE_EXPORT_SYMBOL(rte_rib_get_nxt);
 struct rte_rib_node *
 rte_rib_get_nxt(struct rte_rib *rib, uint32_t ip,
 	uint8_t depth, struct rte_rib_node *last, int flag)
@@ -213,7 +213,7 @@ rte_rib_get_nxt(struct rte_rib *rib, uint32_t ip,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_remove)
+RTE_EXPORT_SYMBOL(rte_rib_remove);
 void
 rte_rib_remove(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -246,7 +246,7 @@ rte_rib_remove(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_insert)
+RTE_EXPORT_SYMBOL(rte_rib_insert);
 struct rte_rib_node *
 rte_rib_insert(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 {
@@ -353,7 +353,7 @@ rte_rib_insert(struct rte_rib *rib, uint32_t ip, uint8_t depth)
 	return new_node;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_ip)
+RTE_EXPORT_SYMBOL(rte_rib_get_ip);
 int
 rte_rib_get_ip(const struct rte_rib_node *node, uint32_t *ip)
 {
@@ -365,7 +365,7 @@ rte_rib_get_ip(const struct rte_rib_node *node, uint32_t *ip)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_depth)
+RTE_EXPORT_SYMBOL(rte_rib_get_depth);
 int
 rte_rib_get_depth(const struct rte_rib_node *node, uint8_t *depth)
 {
@@ -377,14 +377,14 @@ rte_rib_get_depth(const struct rte_rib_node *node, uint8_t *depth)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_ext)
+RTE_EXPORT_SYMBOL(rte_rib_get_ext);
 void *
 rte_rib_get_ext(struct rte_rib_node *node)
 {
 	return (node == NULL) ? NULL : &node->ext[0];
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_get_nh)
+RTE_EXPORT_SYMBOL(rte_rib_get_nh);
 int
 rte_rib_get_nh(const struct rte_rib_node *node, uint64_t *nh)
 {
@@ -396,7 +396,7 @@ rte_rib_get_nh(const struct rte_rib_node *node, uint64_t *nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_set_nh)
+RTE_EXPORT_SYMBOL(rte_rib_set_nh);
 int
 rte_rib_set_nh(struct rte_rib_node *node, uint64_t nh)
 {
@@ -408,7 +408,7 @@ rte_rib_set_nh(struct rte_rib_node *node, uint64_t nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_create)
+RTE_EXPORT_SYMBOL(rte_rib_create);
 struct rte_rib *
 rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
 {
@@ -490,7 +490,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_find_existing)
+RTE_EXPORT_SYMBOL(rte_rib_find_existing);
 struct rte_rib *
 rte_rib_find_existing(const char *name)
 {
@@ -516,7 +516,7 @@ rte_rib_find_existing(const char *name)
 	return rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib_free)
+RTE_EXPORT_SYMBOL(rte_rib_free);
 void
 rte_rib_free(struct rte_rib *rib)
 {
diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c
index ded5fd044f..86e1d8f1cc 100644
--- a/lib/rib/rte_rib6.c
+++ b/lib/rib/rte_rib6.c
@@ -115,7 +115,7 @@ node_free(struct rte_rib6 *rib, struct rte_rib6_node *ent)
 	rte_mempool_put(rib->node_pool, ent);
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup);
 struct rte_rib6_node *
 rte_rib6_lookup(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip)
@@ -137,7 +137,7 @@ rte_rib6_lookup(struct rte_rib6 *rib,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup_parent)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup_parent);
 struct rte_rib6_node *
 rte_rib6_lookup_parent(struct rte_rib6_node *ent)
 {
@@ -153,7 +153,7 @@ rte_rib6_lookup_parent(struct rte_rib6_node *ent)
 	return tmp;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_lookup_exact)
+RTE_EXPORT_SYMBOL(rte_rib6_lookup_exact);
 struct rte_rib6_node *
 rte_rib6_lookup_exact(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -191,7 +191,7 @@ rte_rib6_lookup_exact(struct rte_rib6 *rib,
  *  for a given in args ip/depth prefix
  *  last = NULL means the first invocation
  */
-RTE_EXPORT_SYMBOL(rte_rib6_get_nxt)
+RTE_EXPORT_SYMBOL(rte_rib6_get_nxt);
 struct rte_rib6_node *
 rte_rib6_get_nxt(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip,
@@ -237,7 +237,7 @@ rte_rib6_get_nxt(struct rte_rib6 *rib,
 	return prev;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_remove)
+RTE_EXPORT_SYMBOL(rte_rib6_remove);
 void
 rte_rib6_remove(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -271,7 +271,7 @@ rte_rib6_remove(struct rte_rib6 *rib,
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_insert)
+RTE_EXPORT_SYMBOL(rte_rib6_insert);
 struct rte_rib6_node *
 rte_rib6_insert(struct rte_rib6 *rib,
 	const struct rte_ipv6_addr *ip, uint8_t depth)
@@ -399,7 +399,7 @@ rte_rib6_insert(struct rte_rib6 *rib,
 	return new_node;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_ip)
+RTE_EXPORT_SYMBOL(rte_rib6_get_ip);
 int
 rte_rib6_get_ip(const struct rte_rib6_node *node,
 		struct rte_ipv6_addr *ip)
@@ -412,7 +412,7 @@ rte_rib6_get_ip(const struct rte_rib6_node *node,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_depth)
+RTE_EXPORT_SYMBOL(rte_rib6_get_depth);
 int
 rte_rib6_get_depth(const struct rte_rib6_node *node, uint8_t *depth)
 {
@@ -424,14 +424,14 @@ rte_rib6_get_depth(const struct rte_rib6_node *node, uint8_t *depth)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_ext)
+RTE_EXPORT_SYMBOL(rte_rib6_get_ext);
 void *
 rte_rib6_get_ext(struct rte_rib6_node *node)
 {
 	return (node == NULL) ? NULL : &node->ext[0];
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_get_nh)
+RTE_EXPORT_SYMBOL(rte_rib6_get_nh);
 int
 rte_rib6_get_nh(const struct rte_rib6_node *node, uint64_t *nh)
 {
@@ -443,7 +443,7 @@ rte_rib6_get_nh(const struct rte_rib6_node *node, uint64_t *nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_set_nh)
+RTE_EXPORT_SYMBOL(rte_rib6_set_nh);
 int
 rte_rib6_set_nh(struct rte_rib6_node *node, uint64_t nh)
 {
@@ -455,7 +455,7 @@ rte_rib6_set_nh(struct rte_rib6_node *node, uint64_t nh)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_create)
+RTE_EXPORT_SYMBOL(rte_rib6_create);
 struct rte_rib6 *
 rte_rib6_create(const char *name, int socket_id,
 		const struct rte_rib6_conf *conf)
@@ -539,7 +539,7 @@ rte_rib6_create(const char *name, int socket_id,
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_find_existing)
+RTE_EXPORT_SYMBOL(rte_rib6_find_existing);
 struct rte_rib6 *
 rte_rib6_find_existing(const char *name)
 {
@@ -570,7 +570,7 @@ rte_rib6_find_existing(const char *name)
 	return rib;
 }
 
-RTE_EXPORT_SYMBOL(rte_rib6_free)
+RTE_EXPORT_SYMBOL(rte_rib6_free);
 void
 rte_rib6_free(struct rte_rib6 *rib)
 {
diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index edd63aa535..548ba059fa 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -53,7 +53,7 @@ EAL_REGISTER_TAILQ(rte_ring_tailq)
 #define HTD_MAX_DEF	8
 
 /* return the size of memory occupied by a ring */
-RTE_EXPORT_SYMBOL(rte_ring_get_memsize_elem)
+RTE_EXPORT_SYMBOL(rte_ring_get_memsize_elem);
 ssize_t
 rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
 {
@@ -81,7 +81,7 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
 }
 
 /* return the size of memory occupied by a ring */
-RTE_EXPORT_SYMBOL(rte_ring_get_memsize)
+RTE_EXPORT_SYMBOL(rte_ring_get_memsize);
 ssize_t
 rte_ring_get_memsize(unsigned int count)
 {
@@ -121,7 +121,7 @@ reset_headtail(void *p)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_ring_reset)
+RTE_EXPORT_SYMBOL(rte_ring_reset);
 void
 rte_ring_reset(struct rte_ring *r)
 {
@@ -180,7 +180,7 @@ get_sync_type(uint32_t flags, enum rte_ring_sync_type *prod_st,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_ring_init)
+RTE_EXPORT_SYMBOL(rte_ring_init);
 int
 rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
 	unsigned int flags)
@@ -248,7 +248,7 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
 }
 
 /* create the ring for a given element size */
-RTE_EXPORT_SYMBOL(rte_ring_create_elem)
+RTE_EXPORT_SYMBOL(rte_ring_create_elem);
 struct rte_ring *
 rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
 		int socket_id, unsigned int flags)
@@ -318,7 +318,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
 }
 
 /* create the ring */
-RTE_EXPORT_SYMBOL(rte_ring_create)
+RTE_EXPORT_SYMBOL(rte_ring_create);
 struct rte_ring *
 rte_ring_create(const char *name, unsigned int count, int socket_id,
 		unsigned int flags)
@@ -328,7 +328,7 @@ rte_ring_create(const char *name, unsigned int count, int socket_id,
 }
 
 /* free the ring */
-RTE_EXPORT_SYMBOL(rte_ring_free)
+RTE_EXPORT_SYMBOL(rte_ring_free);
 void
 rte_ring_free(struct rte_ring *r)
 {
@@ -422,7 +422,7 @@ ring_dump_hts_headtail(FILE *f, const char *prefix,
 	fprintf(f, "%stail=%"PRIu32"\n", prefix, hts->ht.pos.tail);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ring_headtail_dump, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_ring_headtail_dump, 25.03);
 void
 rte_ring_headtail_dump(FILE *f, const char *prefix,
 		const struct rte_ring_headtail *r)
@@ -451,7 +451,7 @@ rte_ring_headtail_dump(FILE *f, const char *prefix,
 }
 
 /* dump the status of the ring on the console */
-RTE_EXPORT_SYMBOL(rte_ring_dump)
+RTE_EXPORT_SYMBOL(rte_ring_dump);
 void
 rte_ring_dump(FILE *f, const struct rte_ring *r)
 {
@@ -470,7 +470,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
 }
 
 /* dump the status of all rings on the console */
-RTE_EXPORT_SYMBOL(rte_ring_list_dump)
+RTE_EXPORT_SYMBOL(rte_ring_list_dump);
 void
 rte_ring_list_dump(FILE *f)
 {
@@ -489,7 +489,7 @@ rte_ring_list_dump(FILE *f)
 }
 
 /* search a ring from its name */
-RTE_EXPORT_SYMBOL(rte_ring_lookup)
+RTE_EXPORT_SYMBOL(rte_ring_lookup);
 struct rte_ring *
 rte_ring_lookup(const char *name)
 {
diff --git a/lib/ring/rte_soring.c b/lib/ring/rte_soring.c
index 0d8abba69c..88dc808362 100644
--- a/lib/ring/rte_soring.c
+++ b/lib/ring/rte_soring.c
@@ -92,7 +92,7 @@ soring_dump_stage_headtail(FILE *f, const char *prefix,
 	fprintf(f, "%shead=%"PRIu32"\n", prefix, st->sht.head);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dump, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dump, 25.03);
 void
 rte_soring_dump(FILE *f, const struct rte_soring *r)
 {
@@ -120,7 +120,7 @@ rte_soring_dump(FILE *f, const struct rte_soring *r)
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_get_memsize, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_get_memsize, 25.03);
 ssize_t
 rte_soring_get_memsize(const struct rte_soring_param *prm)
 {
@@ -154,7 +154,7 @@ soring_compilation_checks(void)
 		offsetof(struct soring_stage_headtail, unused));
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_init, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_init, 25.03);
 int
 rte_soring_init(struct rte_soring *r, const struct rte_soring_param *prm)
 {
diff --git a/lib/ring/soring.c b/lib/ring/soring.c
index 797484d6bf..f8a901c3e9 100644
--- a/lib/ring/soring.c
+++ b/lib/ring/soring.c
@@ -491,7 +491,7 @@ soring_release(struct rte_soring *r, const void *objs,
  * Public functions (data-path) start here.
  */
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_release, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_release, 25.03);
 void
 rte_soring_release(struct rte_soring *r, const void *objs,
 	uint32_t stage, uint32_t n, uint32_t ftoken)
@@ -500,7 +500,7 @@ rte_soring_release(struct rte_soring *r, const void *objs,
 }
 
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_releasx, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_releasx, 25.03);
 void
 rte_soring_releasx(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t stage, uint32_t n, uint32_t ftoken)
@@ -508,7 +508,7 @@ rte_soring_releasx(struct rte_soring *r, const void *objs,
 	soring_release(r, objs, meta, stage, n, ftoken);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_bulk, 25.03);
 uint32_t
 rte_soring_enqueue_bulk(struct rte_soring *r, const void *objs, uint32_t n,
 	uint32_t *free_space)
@@ -517,7 +517,7 @@ rte_soring_enqueue_bulk(struct rte_soring *r, const void *objs, uint32_t n,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_bulk, 25.03);
 uint32_t
 rte_soring_enqueux_bulk(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t n, uint32_t *free_space)
@@ -526,7 +526,7 @@ rte_soring_enqueux_bulk(struct rte_soring *r, const void *objs,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueue_burst, 25.03);
 uint32_t
 rte_soring_enqueue_burst(struct rte_soring *r, const void *objs, uint32_t n,
 	uint32_t *free_space)
@@ -535,7 +535,7 @@ rte_soring_enqueue_burst(struct rte_soring *r, const void *objs, uint32_t n,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_enqueux_burst, 25.03);
 uint32_t
 rte_soring_enqueux_burst(struct rte_soring *r, const void *objs,
 	const void *meta, uint32_t n, uint32_t *free_space)
@@ -544,7 +544,7 @@ rte_soring_enqueux_burst(struct rte_soring *r, const void *objs,
 			free_space);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_bulk, 25.03);
 uint32_t
 rte_soring_dequeue_bulk(struct rte_soring *r, void *objs, uint32_t num,
 	uint32_t *available)
@@ -553,7 +553,7 @@ rte_soring_dequeue_bulk(struct rte_soring *r, void *objs, uint32_t num,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_bulk, 25.03);
 uint32_t
 rte_soring_dequeux_bulk(struct rte_soring *r, void *objs, void *meta,
 	uint32_t num, uint32_t *available)
@@ -562,7 +562,7 @@ rte_soring_dequeux_bulk(struct rte_soring *r, void *objs, void *meta,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeue_burst, 25.03);
 uint32_t
 rte_soring_dequeue_burst(struct rte_soring *r, void *objs, uint32_t num,
 	uint32_t *available)
@@ -571,7 +571,7 @@ rte_soring_dequeue_burst(struct rte_soring *r, void *objs, uint32_t num,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_dequeux_burst, 25.03);
 uint32_t
 rte_soring_dequeux_burst(struct rte_soring *r, void *objs, void *meta,
 	uint32_t num, uint32_t *available)
@@ -580,7 +580,7 @@ rte_soring_dequeux_burst(struct rte_soring *r, void *objs, void *meta,
 			available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_bulk, 25.03);
 uint32_t
 rte_soring_acquire_bulk(struct rte_soring *r, void *objs,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -589,7 +589,7 @@ rte_soring_acquire_bulk(struct rte_soring *r, void *objs,
 			RTE_RING_QUEUE_FIXED, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_bulk, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_bulk, 25.03);
 uint32_t
 rte_soring_acquirx_bulk(struct rte_soring *r, void *objs, void *meta,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -598,7 +598,7 @@ rte_soring_acquirx_bulk(struct rte_soring *r, void *objs, void *meta,
 			RTE_RING_QUEUE_FIXED, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquire_burst, 25.03);
 uint32_t
 rte_soring_acquire_burst(struct rte_soring *r, void *objs,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -607,7 +607,7 @@ rte_soring_acquire_burst(struct rte_soring *r, void *objs,
 			RTE_RING_QUEUE_VARIABLE, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_burst, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_acquirx_burst, 25.03);
 uint32_t
 rte_soring_acquirx_burst(struct rte_soring *r, void *objs, void *meta,
 	uint32_t stage, uint32_t num, uint32_t *ftoken, uint32_t *available)
@@ -616,7 +616,7 @@ rte_soring_acquirx_burst(struct rte_soring *r, void *objs, void *meta,
 			RTE_RING_QUEUE_VARIABLE, ftoken, available);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_count, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_count, 25.03);
 unsigned int
 rte_soring_count(const struct rte_soring *r)
 {
@@ -626,7 +626,7 @@ rte_soring_count(const struct rte_soring *r)
 	return (count > r->capacity) ? r->capacity : count;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_free_count, 25.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_soring_free_count, 25.03);
 unsigned int
 rte_soring_free_count(const struct rte_soring *r)
 {
diff --git a/lib/sched/rte_approx.c b/lib/sched/rte_approx.c
index 86c7d1d3fb..bd935a7e36 100644
--- a/lib/sched/rte_approx.c
+++ b/lib/sched/rte_approx.c
@@ -140,7 +140,7 @@ find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num, uint32_t de
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_approx)
+RTE_EXPORT_SYMBOL(rte_approx);
 int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
 {
 	uint32_t alpha_num, d_num, denum;
diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c
index b5d8988894..f483797907 100644
--- a/lib/sched/rte_pie.c
+++ b/lib/sched/rte_pie.c
@@ -10,7 +10,7 @@
 #include "rte_sched_log.h"
 #include "rte_pie.h"
 
-RTE_EXPORT_SYMBOL(rte_pie_rt_data_init)
+RTE_EXPORT_SYMBOL(rte_pie_rt_data_init);
 int
 rte_pie_rt_data_init(struct rte_pie *pie)
 {
@@ -24,7 +24,7 @@ rte_pie_rt_data_init(struct rte_pie *pie)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_pie_config_init)
+RTE_EXPORT_SYMBOL(rte_pie_config_init);
 int
 rte_pie_config_init(struct rte_pie_config *pie_cfg,
 	const uint16_t qdelay_ref,
diff --git a/lib/sched/rte_red.c b/lib/sched/rte_red.c
index d7534d0bee..f8d1074695 100644
--- a/lib/sched/rte_red.c
+++ b/lib/sched/rte_red.c
@@ -9,22 +9,22 @@
 #include <rte_common.h>
 
 static int rte_red_init_done = 0;     /**< Flag to indicate that global initialisation is done */
-RTE_EXPORT_SYMBOL(rte_red_rand_val)
+RTE_EXPORT_SYMBOL(rte_red_rand_val);
 uint32_t rte_red_rand_val = 0;        /**< Random value cache */
-RTE_EXPORT_SYMBOL(rte_red_rand_seed)
+RTE_EXPORT_SYMBOL(rte_red_rand_seed);
 uint32_t rte_red_rand_seed = 0;       /**< Seed for random number generation */
 
 /**
  * table[i] = log2(1-Wq) * Scale * -1
  *       Wq = 1/(2^i)
  */
-RTE_EXPORT_SYMBOL(rte_red_log2_1_minus_Wq)
+RTE_EXPORT_SYMBOL(rte_red_log2_1_minus_Wq);
 uint16_t rte_red_log2_1_minus_Wq[RTE_RED_WQ_LOG2_NUM];
 
 /**
  * table[i] = 2^(i/16) * Scale
  */
-RTE_EXPORT_SYMBOL(rte_red_pow2_frac_inv)
+RTE_EXPORT_SYMBOL(rte_red_pow2_frac_inv);
 uint16_t rte_red_pow2_frac_inv[16];
 
 /**
@@ -69,7 +69,7 @@ __rte_red_init_tables(void)
 	}
 }
 
-RTE_EXPORT_SYMBOL(rte_red_rt_data_init)
+RTE_EXPORT_SYMBOL(rte_red_rt_data_init);
 int
 rte_red_rt_data_init(struct rte_red *red)
 {
@@ -82,7 +82,7 @@ rte_red_rt_data_init(struct rte_red *red)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_red_config_init)
+RTE_EXPORT_SYMBOL(rte_red_config_init);
 int
 rte_red_config_init(struct rte_red_config *red_cfg,
 	const uint16_t wq_log2,
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
index 453f935ac8..9f53bed557 100644
--- a/lib/sched/rte_sched.c
+++ b/lib/sched/rte_sched.c
@@ -884,7 +884,7 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_get_memory_footprint)
+RTE_EXPORT_SYMBOL(rte_sched_port_get_memory_footprint);
 uint32_t
 rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
 	struct rte_sched_subport_params **subport_params)
@@ -928,7 +928,7 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
 	return size0 + size1;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_config)
+RTE_EXPORT_SYMBOL(rte_sched_port_config);
 struct rte_sched_port *
 rte_sched_port_config(struct rte_sched_port_params *params)
 {
@@ -1049,7 +1049,7 @@ rte_sched_subport_free(struct rte_sched_port *port,
 	rte_free(subport);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_free)
+RTE_EXPORT_SYMBOL(rte_sched_port_free);
 void
 rte_sched_port_free(struct rte_sched_port *port)
 {
@@ -1163,7 +1163,7 @@ rte_sched_cman_config(struct rte_sched_port *port,
 	return -EINVAL;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_tc_ov_config)
+RTE_EXPORT_SYMBOL(rte_sched_subport_tc_ov_config);
 int
 rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1189,7 +1189,7 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_config)
+RTE_EXPORT_SYMBOL(rte_sched_subport_config);
 int
 rte_sched_subport_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1383,7 +1383,7 @@ rte_sched_subport_config(struct rte_sched_port *port,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_pipe_config)
+RTE_EXPORT_SYMBOL(rte_sched_pipe_config);
 int
 rte_sched_pipe_config(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1508,7 +1508,7 @@ rte_sched_pipe_config(struct rte_sched_port *port,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_pipe_profile_add)
+RTE_EXPORT_SYMBOL(rte_sched_subport_pipe_profile_add);
 int
 rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
 	uint32_t subport_id,
@@ -1574,7 +1574,7 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_subport_profile_add)
+RTE_EXPORT_SYMBOL(rte_sched_port_subport_profile_add);
 int
 rte_sched_port_subport_profile_add(struct rte_sched_port *port,
 	struct rte_sched_subport_profile_params *params,
@@ -1656,7 +1656,7 @@ rte_sched_port_qindex(struct rte_sched_port *port,
 		(RTE_SCHED_QUEUES_PER_PIPE - 1));
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_write)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_write);
 void
 rte_sched_port_pkt_write(struct rte_sched_port *port,
 			 struct rte_mbuf *pkt,
@@ -1670,7 +1670,7 @@ rte_sched_port_pkt_write(struct rte_sched_port *port,
 	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_tree_path)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_tree_path);
 void
 rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
 				  const struct rte_mbuf *pkt,
@@ -1686,14 +1686,14 @@ rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
 	*queue = rte_sched_port_tc_queue(port, queue_id);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_color)
+RTE_EXPORT_SYMBOL(rte_sched_port_pkt_read_color);
 enum rte_color
 rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
 {
 	return (enum rte_color)rte_mbuf_sched_color_get(pkt);
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_subport_read_stats)
+RTE_EXPORT_SYMBOL(rte_sched_subport_read_stats);
 int
 rte_sched_subport_read_stats(struct rte_sched_port *port,
 			     uint32_t subport_id,
@@ -1739,7 +1739,7 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_queue_read_stats)
+RTE_EXPORT_SYMBOL(rte_sched_queue_read_stats);
 int
 rte_sched_queue_read_stats(struct rte_sched_port *port,
 	uint32_t queue_id,
@@ -2055,7 +2055,7 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port,
  * ----->|_______|----->|_______|----->|_______|----->|_______|----->
  *   p01            p11            p21            p31
  */
-RTE_EXPORT_SYMBOL(rte_sched_port_enqueue)
+RTE_EXPORT_SYMBOL(rte_sched_port_enqueue);
 int
 rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts,
 		       uint32_t n_pkts)
@@ -2967,7 +2967,7 @@ rte_sched_port_exceptions(struct rte_sched_subport *subport, int second_pass)
 	return exceptions;
 }
 
-RTE_EXPORT_SYMBOL(rte_sched_port_dequeue)
+RTE_EXPORT_SYMBOL(rte_sched_port_dequeue);
 int
 rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts)
 {
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index c47fe44da0..dbb6773758 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -31,12 +31,12 @@
 #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
 #define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
 
-RTE_EXPORT_SYMBOL(rte_security_dynfield_offset)
+RTE_EXPORT_SYMBOL(rte_security_dynfield_offset);
 int rte_security_dynfield_offset = -1;
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_oop_dynfield_offset, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_oop_dynfield_offset, 23.11);
 int rte_security_oop_dynfield_offset = -1;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_security_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_security_dynfield_register);
 int
 rte_security_dynfield_register(void)
 {
@@ -50,7 +50,7 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_security_oop_dynfield_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_security_oop_dynfield_register);
 int
 rte_security_oop_dynfield_register(void)
 {
@@ -65,7 +65,7 @@ rte_security_oop_dynfield_register(void)
 	return rte_security_oop_dynfield_offset;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_create)
+RTE_EXPORT_SYMBOL(rte_security_session_create);
 void *
 rte_security_session_create(void *ctx,
 			    struct rte_security_session_conf *conf,
@@ -100,7 +100,7 @@ rte_security_session_create(void *ctx,
 	return (void *)sess;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_update)
+RTE_EXPORT_SYMBOL(rte_security_session_update);
 int
 rte_security_session_update(void *ctx, void *sess, struct rte_security_session_conf *conf)
 {
@@ -114,7 +114,7 @@ rte_security_session_update(void *ctx, void *sess, struct rte_security_session_c
 	return instance->ops->session_update(instance->device, sess, conf);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_get_size)
+RTE_EXPORT_SYMBOL(rte_security_session_get_size);
 unsigned int
 rte_security_session_get_size(void *ctx)
 {
@@ -126,7 +126,7 @@ rte_security_session_get_size(void *ctx)
 			instance->ops->session_get_size(instance->device));
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_session_stats_get);
 int
 rte_security_session_stats_get(void *ctx, void *sess, struct rte_security_stats *stats)
 {
@@ -140,7 +140,7 @@ rte_security_session_stats_get(void *ctx, void *sess, struct rte_security_stats
 	return instance->ops->session_stats_get(instance->device, sess, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_session_destroy)
+RTE_EXPORT_SYMBOL(rte_security_session_destroy);
 int
 rte_security_session_destroy(void *ctx, void *sess)
 {
@@ -163,7 +163,7 @@ rte_security_session_destroy(void *ctx, void *sess)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_create)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_create);
 int
 rte_security_macsec_sc_create(void *ctx, struct rte_security_macsec_sc *conf)
 {
@@ -180,7 +180,7 @@ rte_security_macsec_sc_create(void *ctx, struct rte_security_macsec_sc *conf)
 	return sc_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_create)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_create);
 int
 rte_security_macsec_sa_create(void *ctx, struct rte_security_macsec_sa *conf)
 {
@@ -197,7 +197,7 @@ rte_security_macsec_sa_create(void *ctx, struct rte_security_macsec_sa *conf)
 	return sa_id;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_destroy)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_destroy);
 int
 rte_security_macsec_sc_destroy(void *ctx, uint16_t sc_id,
 			       enum rte_security_macsec_direction dir)
@@ -217,7 +217,7 @@ rte_security_macsec_sc_destroy(void *ctx, uint16_t sc_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_destroy)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_destroy);
 int
 rte_security_macsec_sa_destroy(void *ctx, uint16_t sa_id,
 			       enum rte_security_macsec_direction dir)
@@ -237,7 +237,7 @@ rte_security_macsec_sa_destroy(void *ctx, uint16_t sa_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sc_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sc_stats_get);
 int
 rte_security_macsec_sc_stats_get(void *ctx, uint16_t sc_id,
 				 enum rte_security_macsec_direction dir,
@@ -251,7 +251,7 @@ rte_security_macsec_sc_stats_get(void *ctx, uint16_t sc_id,
 	return instance->ops->macsec_sc_stats_get(instance->device, sc_id, dir, stats);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_macsec_sa_stats_get)
+RTE_EXPORT_SYMBOL(rte_security_macsec_sa_stats_get);
 int
 rte_security_macsec_sa_stats_get(void *ctx, uint16_t sa_id,
 				 enum rte_security_macsec_direction dir,
@@ -265,7 +265,7 @@ rte_security_macsec_sa_stats_get(void *ctx, uint16_t sa_id,
 	return instance->ops->macsec_sa_stats_get(instance->device, sa_id, dir, stats);
 }
 
-RTE_EXPORT_SYMBOL(__rte_security_set_pkt_metadata)
+RTE_EXPORT_SYMBOL(__rte_security_set_pkt_metadata);
 int
 __rte_security_set_pkt_metadata(void *ctx, void *sess, struct rte_mbuf *m, void *params)
 {
@@ -280,7 +280,7 @@ __rte_security_set_pkt_metadata(void *ctx, void *sess, struct rte_mbuf *m, void
 	return instance->ops->set_pkt_metadata(instance->device, sess, m, params);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_capabilities_get)
+RTE_EXPORT_SYMBOL(rte_security_capabilities_get);
 const struct rte_security_capability *
 rte_security_capabilities_get(void *ctx)
 {
@@ -291,7 +291,7 @@ rte_security_capabilities_get(void *ctx)
 	return instance->ops->capabilities_get(instance->device);
 }
 
-RTE_EXPORT_SYMBOL(rte_security_capability_get)
+RTE_EXPORT_SYMBOL(rte_security_capability_get);
 const struct rte_security_capability *
 rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 {
@@ -344,7 +344,7 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 	return NULL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_rx_inject_configure, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_rx_inject_configure, 23.11);
 int
 rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
 {
@@ -357,7 +357,7 @@ rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
 	return instance->ops->rx_inject_configure(instance->device, port_id, enable);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_inb_pkt_rx_inject, 23.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_security_inb_pkt_rx_inject, 23.11);
 uint16_t
 rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
 			       uint16_t nb_pkts)
diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c
index 4c78fe4b4b..2fcfd57204 100644
--- a/lib/stack/rte_stack.c
+++ b/lib/stack/rte_stack.c
@@ -45,7 +45,7 @@ rte_stack_get_memsize(unsigned int count, uint32_t flags)
 		return rte_stack_std_get_memsize(count);
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_create)
+RTE_EXPORT_SYMBOL(rte_stack_create);
 struct rte_stack *
 rte_stack_create(const char *name, unsigned int count, int socket_id,
 		 uint32_t flags)
@@ -131,7 +131,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
 	return s;
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_free)
+RTE_EXPORT_SYMBOL(rte_stack_free);
 void
 rte_stack_free(struct rte_stack *s)
 {
@@ -164,7 +164,7 @@ rte_stack_free(struct rte_stack *s)
 	rte_memzone_free(s->memzone);
 }
 
-RTE_EXPORT_SYMBOL(rte_stack_lookup)
+RTE_EXPORT_SYMBOL(rte_stack_lookup);
 struct rte_stack *
 rte_stack_lookup(const char *name)
 {
diff --git a/lib/table/rte_swx_table_em.c b/lib/table/rte_swx_table_em.c
index 4ec54cb635..a8a5ee1b75 100644
--- a/lib/table/rte_swx_table_em.c
+++ b/lib/table/rte_swx_table_em.c
@@ -648,7 +648,7 @@ table_footprint(struct rte_swx_table_params *params,
 	return memory_footprint;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_unoptimized_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_unoptimized_ops, 20.11);
 struct rte_swx_table_ops rte_swx_table_exact_match_unoptimized_ops = {
 	.footprint_get = table_footprint,
 	.mailbox_size_get = table_mailbox_size_get_unoptimized,
@@ -659,7 +659,7 @@ struct rte_swx_table_ops rte_swx_table_exact_match_unoptimized_ops = {
 	.free = table_free,
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_ops, 20.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_exact_match_ops, 20.11);
 struct rte_swx_table_ops rte_swx_table_exact_match_ops = {
 	.footprint_get = table_footprint,
 	.mailbox_size_get = table_mailbox_size_get,
diff --git a/lib/table/rte_swx_table_learner.c b/lib/table/rte_swx_table_learner.c
index 2d61bceeaf..03ba4173a4 100644
--- a/lib/table/rte_swx_table_learner.c
+++ b/lib/table/rte_swx_table_learner.c
@@ -273,7 +273,7 @@ table_entry_id_get(struct table *t, struct table_bucket *b, size_t bucket_key_po
 	return (bucket_id << TABLE_KEYS_PER_BUCKET_LOG2) + bucket_key_pos;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_footprint_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_footprint_get, 21.11);
 uint64_t
 rte_swx_table_learner_footprint_get(struct rte_swx_table_learner_params *params)
 {
@@ -285,7 +285,7 @@ rte_swx_table_learner_footprint_get(struct rte_swx_table_learner_params *params)
 	return status ? 0 : p.total_size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_create, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_create, 21.11);
 void *
 rte_swx_table_learner_create(struct rte_swx_table_learner_params *params, int numa_node)
 {
@@ -309,7 +309,7 @@ rte_swx_table_learner_create(struct rte_swx_table_learner_params *params, int nu
 	return t;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_free, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_free, 21.11);
 void
 rte_swx_table_learner_free(void *table)
 {
@@ -321,7 +321,7 @@ rte_swx_table_learner_free(void *table)
 	env_free(t, t->params.total_size);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_timeout_update, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_timeout_update, 22.07);
 int
 rte_swx_table_learner_timeout_update(void *table,
 				     uint32_t key_timeout_id,
@@ -359,14 +359,14 @@ struct mailbox {
 	int state;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_mailbox_size_get, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_mailbox_size_get, 21.11);
 uint64_t
 rte_swx_table_learner_mailbox_size_get(void)
 {
 	return sizeof(struct mailbox);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_lookup, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_lookup, 21.11);
 int
 rte_swx_table_learner_lookup(void *table,
 			     void *mailbox,
@@ -453,7 +453,7 @@ rte_swx_table_learner_lookup(void *table,
 	}
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm, 22.07);
 void
 rte_swx_table_learner_rearm(void *table,
 			    void *mailbox,
@@ -477,7 +477,7 @@ rte_swx_table_learner_rearm(void *table,
 	b->time[bucket_key_pos] = (input_time + key_timeout) >> 32;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm_new, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_rearm_new, 22.07);
 void
 rte_swx_table_learner_rearm_new(void *table,
 				void *mailbox,
@@ -502,7 +502,7 @@ rte_swx_table_learner_rearm_new(void *table,
 	b->key_timeout_id[bucket_key_pos] = (uint8_t)key_timeout_id;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_add, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_add, 21.11);
 uint32_t
 rte_swx_table_learner_add(void *table,
 			  void *mailbox,
@@ -579,7 +579,7 @@ rte_swx_table_learner_add(void *table,
 	return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_delete, 21.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_learner_delete, 21.11);
 void
 rte_swx_table_learner_delete(void *table __rte_unused,
 			     void *mailbox)
diff --git a/lib/table/rte_swx_table_selector.c b/lib/table/rte_swx_table_selector.c
index d42f67f157..060ee4a4b6 100644
--- a/lib/table/rte_swx_table_selector.c
+++ b/lib/table/rte_swx_table_selector.c
@@ -171,7 +171,7 @@ struct table {
 	uint32_t n_members_per_group_max_log2;
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_footprint_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_footprint_get, 21.08);
 uint64_t
 rte_swx_table_selector_footprint_get(uint32_t n_groups_max, uint32_t n_members_per_group_max)
 {
@@ -184,7 +184,7 @@ rte_swx_table_selector_footprint_get(uint32_t n_groups_max, uint32_t n_members_p
 	return sizeof(struct table) + group_table_size + members_size;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_free, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_free, 21.08);
 void
 rte_swx_table_selector_free(void *table)
 {
@@ -262,7 +262,7 @@ group_set(struct table *t,
 	  uint32_t group_id,
 	  struct rte_swx_table_selector_group *group);
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_create, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_create, 21.08);
 void *
 rte_swx_table_selector_create(struct rte_swx_table_selector_params *params,
 			      struct rte_swx_table_selector_group **groups,
@@ -532,7 +532,7 @@ group_set(struct table *t,
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_group_set, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_group_set, 21.08);
 int
 rte_swx_table_selector_group_set(void *table,
 				 uint32_t group_id,
@@ -547,14 +547,14 @@ struct mailbox {
 
 };
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_mailbox_size_get, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_mailbox_size_get, 21.08);
 uint64_t
 rte_swx_table_selector_mailbox_size_get(void)
 {
 	return sizeof(struct mailbox);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_select, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_selector_select, 21.08);
 int
 rte_swx_table_selector_select(void *table,
 			      void *mailbox __rte_unused,
diff --git a/lib/table/rte_swx_table_wm.c b/lib/table/rte_swx_table_wm.c
index c57738dda3..1b7fa514f5 100644
--- a/lib/table/rte_swx_table_wm.c
+++ b/lib/table/rte_swx_table_wm.c
@@ -458,7 +458,7 @@ table_lookup(void *table,
 	return 1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_wildcard_match_ops, 21.05)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_swx_table_wildcard_match_ops, 21.05);
 struct rte_swx_table_ops rte_swx_table_wildcard_match_ops = {
 	.footprint_get = NULL,
 	.mailbox_size_get = table_mailbox_size_get,
diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c
index 74fa0145d8..24601a35ca 100644
--- a/lib/table/rte_table_acl.c
+++ b/lib/table/rte_table_acl.c
@@ -782,7 +782,7 @@ rte_table_acl_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_acl_ops)
+RTE_EXPORT_SYMBOL(rte_table_acl_ops);
 struct rte_table_ops rte_table_acl_ops = {
 	.f_create = rte_table_acl_create,
 	.f_free = rte_table_acl_free,
diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c
index 55356e5999..08646bc103 100644
--- a/lib/table/rte_table_array.c
+++ b/lib/table/rte_table_array.c
@@ -197,7 +197,7 @@ rte_table_array_stats_read(void *table, struct rte_table_stats *stats, int clear
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_array_ops)
+RTE_EXPORT_SYMBOL(rte_table_array_ops);
 struct rte_table_ops rte_table_array_ops = {
 	.f_create = rte_table_array_create,
 	.f_free = rte_table_array_free,
diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c
index a2b920fa92..5b55754cbe 100644
--- a/lib/table/rte_table_hash_cuckoo.c
+++ b/lib/table/rte_table_hash_cuckoo.c
@@ -314,7 +314,7 @@ rte_table_hash_cuckoo_stats_read(void *table, struct rte_table_stats *stats,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_cuckoo_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_cuckoo_ops);
 struct rte_table_ops rte_table_hash_cuckoo_ops = {
 	.f_create = rte_table_hash_cuckoo_create,
 	.f_free = rte_table_hash_cuckoo_free,
diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c
index 86e8eeb4c8..6c220ad971 100644
--- a/lib/table/rte_table_hash_ext.c
+++ b/lib/table/rte_table_hash_ext.c
@@ -998,7 +998,7 @@ rte_table_hash_ext_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_ext_ops);
 struct rte_table_ops rte_table_hash_ext_ops	 = {
 	.f_create = rte_table_hash_ext_create,
 	.f_free = rte_table_hash_ext_free,
diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c
index da24a7985d..e05d7bf99a 100644
--- a/lib/table/rte_table_hash_key16.c
+++ b/lib/table/rte_table_hash_key16.c
@@ -1167,7 +1167,7 @@ rte_table_hash_key16_stats_read(void *table, struct rte_table_stats *stats, int
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key16_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key16_lru_ops);
 struct rte_table_ops rte_table_hash_key16_lru_ops = {
 	.f_create = rte_table_hash_create_key16_lru,
 	.f_free = rte_table_hash_free_key16_lru,
@@ -1179,7 +1179,7 @@ struct rte_table_ops rte_table_hash_key16_lru_ops = {
 	.f_stats = rte_table_hash_key16_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key16_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key16_ext_ops);
 struct rte_table_ops rte_table_hash_key16_ext_ops = {
 	.f_create = rte_table_hash_create_key16_ext,
 	.f_free = rte_table_hash_free_key16_ext,
diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c
index 297931a2a5..c2200c09b0 100644
--- a/lib/table/rte_table_hash_key32.c
+++ b/lib/table/rte_table_hash_key32.c
@@ -1200,7 +1200,7 @@ rte_table_hash_key32_stats_read(void *table, struct rte_table_stats *stats, int
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key32_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key32_lru_ops);
 struct rte_table_ops rte_table_hash_key32_lru_ops = {
 	.f_create = rte_table_hash_create_key32_lru,
 	.f_free = rte_table_hash_free_key32_lru,
@@ -1212,7 +1212,7 @@ struct rte_table_ops rte_table_hash_key32_lru_ops = {
 	.f_stats = rte_table_hash_key32_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key32_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key32_ext_ops);
 struct rte_table_ops rte_table_hash_key32_ext_ops = {
 	.f_create = rte_table_hash_create_key32_ext,
 	.f_free = rte_table_hash_free_key32_ext,
diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c
index 746863082f..08d3e53743 100644
--- a/lib/table/rte_table_hash_key8.c
+++ b/lib/table/rte_table_hash_key8.c
@@ -1134,7 +1134,7 @@ rte_table_hash_key8_stats_read(void *table, struct rte_table_stats *stats, int c
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key8_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key8_lru_ops);
 struct rte_table_ops rte_table_hash_key8_lru_ops = {
 	.f_create = rte_table_hash_create_key8_lru,
 	.f_free = rte_table_hash_free_key8_lru,
@@ -1146,7 +1146,7 @@ struct rte_table_ops rte_table_hash_key8_lru_ops = {
 	.f_stats = rte_table_hash_key8_stats_read,
 };
 
-RTE_EXPORT_SYMBOL(rte_table_hash_key8_ext_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_key8_ext_ops);
 struct rte_table_ops rte_table_hash_key8_ext_ops = {
 	.f_create = rte_table_hash_create_key8_ext,
 	.f_free = rte_table_hash_free_key8_ext,
diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c
index 548f5eebf2..d6cd928a96 100644
--- a/lib/table/rte_table_hash_lru.c
+++ b/lib/table/rte_table_hash_lru.c
@@ -946,7 +946,7 @@ rte_table_hash_lru_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_hash_lru_ops)
+RTE_EXPORT_SYMBOL(rte_table_hash_lru_ops);
 struct rte_table_ops rte_table_hash_lru_ops = {
 	.f_create = rte_table_hash_lru_create,
 	.f_free = rte_table_hash_lru_free,
diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c
index 6fd0c30f85..3afa1b4c95 100644
--- a/lib/table/rte_table_lpm.c
+++ b/lib/table/rte_table_lpm.c
@@ -356,7 +356,7 @@ rte_table_lpm_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_lpm_ops)
+RTE_EXPORT_SYMBOL(rte_table_lpm_ops);
 struct rte_table_ops rte_table_lpm_ops = {
 	.f_create = rte_table_lpm_create,
 	.f_free = rte_table_lpm_free,
diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c
index 9159784dfa..a81195e88b 100644
--- a/lib/table/rte_table_lpm_ipv6.c
+++ b/lib/table/rte_table_lpm_ipv6.c
@@ -357,7 +357,7 @@ rte_table_lpm_ipv6_stats_read(void *table, struct rte_table_stats *stats, int cl
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_lpm_ipv6_ops)
+RTE_EXPORT_SYMBOL(rte_table_lpm_ipv6_ops);
 struct rte_table_ops rte_table_lpm_ipv6_ops = {
 	.f_create = rte_table_lpm_ipv6_create,
 	.f_free = rte_table_lpm_ipv6_free,
diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c
index 3d2ac55c49..2d70e0761f 100644
--- a/lib/table/rte_table_stub.c
+++ b/lib/table/rte_table_stub.c
@@ -82,7 +82,7 @@ rte_table_stub_stats_read(void *table, struct rte_table_stats *stats, int clear)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_table_stub_ops)
+RTE_EXPORT_SYMBOL(rte_table_stub_ops);
 struct rte_table_ops rte_table_stub_ops = {
 	.f_create = rte_table_stub_create,
 	.f_free = NULL,
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c
index 1cbbffbf3f..d40057e197 100644
--- a/lib/telemetry/telemetry.c
+++ b/lib/telemetry/telemetry.c
@@ -115,14 +115,14 @@ register_cmd(const char *cmd, const char *help,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_telemetry_register_cmd)
+RTE_EXPORT_SYMBOL(rte_telemetry_register_cmd);
 int
 rte_telemetry_register_cmd(const char *cmd, telemetry_cb fn, const char *help)
 {
 	return register_cmd(cmd, help, fn, NULL, NULL);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_telemetry_register_cmd_arg, 24.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_telemetry_register_cmd_arg, 24.11);
 int
 rte_telemetry_register_cmd_arg(const char *cmd, telemetry_arg_cb fn, void *arg, const char *help)
 {
@@ -655,7 +655,7 @@ telemetry_v2_init(void)
 
 #endif /* !RTE_EXEC_ENV_WINDOWS */
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_init)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_init);
 int32_t
 rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_t *cpuset)
 {
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index c120600622..fb014fe389 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -17,7 +17,7 @@
 
 #define RTE_TEL_UINT_HEX_STR_BUF_LEN 64
 
-RTE_EXPORT_SYMBOL(rte_tel_data_start_array)
+RTE_EXPORT_SYMBOL(rte_tel_data_start_array);
 int
 rte_tel_data_start_array(struct rte_tel_data *d, enum rte_tel_value_type type)
 {
@@ -32,7 +32,7 @@ rte_tel_data_start_array(struct rte_tel_data *d, enum rte_tel_value_type type)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_start_dict)
+RTE_EXPORT_SYMBOL(rte_tel_data_start_dict);
 int
 rte_tel_data_start_dict(struct rte_tel_data *d)
 {
@@ -41,7 +41,7 @@ rte_tel_data_start_dict(struct rte_tel_data *d)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_string);
 int
 rte_tel_data_string(struct rte_tel_data *d, const char *str)
 {
@@ -54,7 +54,7 @@ rte_tel_data_string(struct rte_tel_data *d, const char *str)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_string);
 int
 rte_tel_data_add_array_string(struct rte_tel_data *d, const char *str)
 {
@@ -67,7 +67,7 @@ rte_tel_data_add_array_string(struct rte_tel_data *d, const char *str)
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_int)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_int);
 int
 rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
 {
@@ -79,7 +79,7 @@ rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_uint)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_uint);
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
 {
@@ -91,14 +91,14 @@ rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_u64)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_u64);
 int
 rte_tel_data_add_array_u64(struct rte_tel_data *d, uint64_t x)
 {
 	return rte_tel_data_add_array_uint(d, x);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_array_container)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_array_container);
 int
 rte_tel_data_add_array_container(struct rte_tel_data *d,
 		struct rte_tel_data *val, int keep)
@@ -131,7 +131,7 @@ rte_tel_uint_to_hex_encoded_str(char *buf, size_t buf_len, uint64_t val,
 	return len < (int)buf_len ? 0 : -EINVAL;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_array_uint_hex, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_array_uint_hex, 23.03);
 int
 rte_tel_data_add_array_uint_hex(struct rte_tel_data *d, uint64_t val,
 				uint8_t display_bitwidth)
@@ -162,7 +162,7 @@ valid_name(const char *name)
 	return true;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_string)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_string);
 int
 rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name,
 		const char *val)
@@ -188,7 +188,7 @@ rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_int)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_int);
 int
 rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
 {
@@ -208,7 +208,7 @@ rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_uint)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_uint);
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
 		const char *name, uint64_t val)
@@ -229,14 +229,14 @@ rte_tel_data_add_dict_uint(struct rte_tel_data *d,
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_u64)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_u64);
 int
 rte_tel_data_add_dict_u64(struct rte_tel_data *d, const char *name, uint64_t val)
 {
 	return rte_tel_data_add_dict_uint(d, name, val);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_container)
+RTE_EXPORT_SYMBOL(rte_tel_data_add_dict_container);
 int
 rte_tel_data_add_dict_container(struct rte_tel_data *d, const char *name,
 		struct rte_tel_data *val, int keep)
@@ -262,7 +262,7 @@ rte_tel_data_add_dict_container(struct rte_tel_data *d, const char *name,
 	return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_dict_uint_hex, 23.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_tel_data_add_dict_uint_hex, 23.03);
 int
 rte_tel_data_add_dict_uint_hex(struct rte_tel_data *d, const char *name,
 			       uint64_t val, uint8_t display_bitwidth)
@@ -279,14 +279,14 @@ rte_tel_data_add_dict_uint_hex(struct rte_tel_data *d, const char *name,
 	return rte_tel_data_add_dict_string(d, name, hex_str);
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_alloc)
+RTE_EXPORT_SYMBOL(rte_tel_data_alloc);
 struct rte_tel_data *
 rte_tel_data_alloc(void)
 {
 	return malloc(sizeof(struct rte_tel_data));
 }
 
-RTE_EXPORT_SYMBOL(rte_tel_data_free)
+RTE_EXPORT_SYMBOL(rte_tel_data_free);
 void
 rte_tel_data_free(struct rte_tel_data *data)
 {
diff --git a/lib/telemetry/telemetry_legacy.c b/lib/telemetry/telemetry_legacy.c
index 89ec750c09..f832bd9ac5 100644
--- a/lib/telemetry/telemetry_legacy.c
+++ b/lib/telemetry/telemetry_legacy.c
@@ -53,7 +53,7 @@ struct json_command callbacks[TELEMETRY_LEGACY_MAX_CALLBACKS] = {
 int num_legacy_callbacks = 1;
 static rte_spinlock_t callback_sl = RTE_SPINLOCK_INITIALIZER;
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_legacy_register)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_legacy_register);
 int
 rte_telemetry_legacy_register(const char *cmd,
 		enum rte_telemetry_legacy_data_req data_req,
diff --git a/lib/timer/rte_timer.c b/lib/timer/rte_timer.c
index b349c2abbc..f76079e8ce 100644
--- a/lib/timer/rte_timer.c
+++ b/lib/timer/rte_timer.c
@@ -85,7 +85,7 @@ timer_data_valid(uint32_t id)
 	timer_data = &rte_timer_data_arr[id];				\
 } while (0)
 
-RTE_EXPORT_SYMBOL(rte_timer_data_alloc)
+RTE_EXPORT_SYMBOL(rte_timer_data_alloc);
 int
 rte_timer_data_alloc(uint32_t *id_ptr)
 {
@@ -110,7 +110,7 @@ rte_timer_data_alloc(uint32_t *id_ptr)
 	return -ENOSPC;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_data_dealloc)
+RTE_EXPORT_SYMBOL(rte_timer_data_dealloc);
 int
 rte_timer_data_dealloc(uint32_t id)
 {
@@ -128,7 +128,7 @@ rte_timer_data_dealloc(uint32_t id)
  * secondary processes should be empty, the zeroth entry can be shared by
  * multiple processes.
  */
-RTE_EXPORT_SYMBOL(rte_timer_subsystem_init)
+RTE_EXPORT_SYMBOL(rte_timer_subsystem_init);
 int
 rte_timer_subsystem_init(void)
 {
@@ -188,7 +188,7 @@ rte_timer_subsystem_init(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_subsystem_finalize)
+RTE_EXPORT_SYMBOL(rte_timer_subsystem_finalize);
 void
 rte_timer_subsystem_finalize(void)
 {
@@ -208,7 +208,7 @@ rte_timer_subsystem_finalize(void)
 }
 
 /* Initialize the timer handle tim for use */
-RTE_EXPORT_SYMBOL(rte_timer_init)
+RTE_EXPORT_SYMBOL(rte_timer_init);
 void
 rte_timer_init(struct rte_timer *tim)
 {
@@ -545,7 +545,7 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
 }
 
 /* Reset and start the timer associated with the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_reset)
+RTE_EXPORT_SYMBOL(rte_timer_reset);
 int
 rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
 		      enum rte_timer_type type, unsigned int tim_lcore,
@@ -555,7 +555,7 @@ rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
 				   tim_lcore, fct, arg);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_reset)
+RTE_EXPORT_SYMBOL(rte_timer_alt_reset);
 int
 rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
 		    uint64_t ticks, enum rte_timer_type type,
@@ -577,7 +577,7 @@ rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
 }
 
 /* loop until rte_timer_reset() succeed */
-RTE_EXPORT_SYMBOL(rte_timer_reset_sync)
+RTE_EXPORT_SYMBOL(rte_timer_reset_sync);
 void
 rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
 		     enum rte_timer_type type, unsigned tim_lcore,
@@ -627,14 +627,14 @@ __rte_timer_stop(struct rte_timer *tim,
 }
 
 /* Stop the timer associated with the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_stop)
+RTE_EXPORT_SYMBOL(rte_timer_stop);
 int
 rte_timer_stop(struct rte_timer *tim)
 {
 	return rte_timer_alt_stop(default_data_id, tim);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_stop)
+RTE_EXPORT_SYMBOL(rte_timer_alt_stop);
 int
 rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 {
@@ -646,7 +646,7 @@ rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 }
 
 /* loop until rte_timer_stop() succeed */
-RTE_EXPORT_SYMBOL(rte_timer_stop_sync)
+RTE_EXPORT_SYMBOL(rte_timer_stop_sync);
 void
 rte_timer_stop_sync(struct rte_timer *tim)
 {
@@ -655,7 +655,7 @@ rte_timer_stop_sync(struct rte_timer *tim)
 }
 
 /* Test the PENDING status of the timer handle tim */
-RTE_EXPORT_SYMBOL(rte_timer_pending)
+RTE_EXPORT_SYMBOL(rte_timer_pending);
 int
 rte_timer_pending(struct rte_timer *tim)
 {
@@ -790,7 +790,7 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
 	priv_timer[lcore_id].running_tim = NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_manage)
+RTE_EXPORT_SYMBOL(rte_timer_manage);
 int
 rte_timer_manage(void)
 {
@@ -803,7 +803,7 @@ rte_timer_manage(void)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_manage)
+RTE_EXPORT_SYMBOL(rte_timer_alt_manage);
 int
 rte_timer_alt_manage(uint32_t timer_data_id,
 		     unsigned int *poll_lcores,
@@ -985,7 +985,7 @@ rte_timer_alt_manage(uint32_t timer_data_id,
 }
 
 /* Walk pending lists, stopping timers and calling user-specified function */
-RTE_EXPORT_SYMBOL(rte_timer_stop_all)
+RTE_EXPORT_SYMBOL(rte_timer_stop_all);
 int
 rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
 		   int nb_walk_lcores,
@@ -1018,7 +1018,7 @@ rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_next_ticks)
+RTE_EXPORT_SYMBOL(rte_timer_next_ticks);
 int64_t
 rte_timer_next_ticks(void)
 {
@@ -1072,14 +1072,14 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_dump_stats)
+RTE_EXPORT_SYMBOL(rte_timer_dump_stats);
 int
 rte_timer_dump_stats(FILE *f)
 {
 	return rte_timer_alt_dump_stats(default_data_id, f);
 }
 
-RTE_EXPORT_SYMBOL(rte_timer_alt_dump_stats)
+RTE_EXPORT_SYMBOL(rte_timer_alt_dump_stats);
 int
 rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
 {
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 9b4f332f94..1111ecbe0b 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -572,7 +572,7 @@ find_vhost_user_socket(const char *path)
 	return NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_attach_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_attach_vdpa_device);
 int
 rte_vhost_driver_attach_vdpa_device(const char *path,
 		struct rte_vdpa_device *dev)
@@ -591,7 +591,7 @@ rte_vhost_driver_attach_vdpa_device(const char *path,
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_detach_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_detach_vdpa_device);
 int
 rte_vhost_driver_detach_vdpa_device(const char *path)
 {
@@ -606,7 +606,7 @@ rte_vhost_driver_detach_vdpa_device(const char *path)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_device);
 struct rte_vdpa_device *
 rte_vhost_driver_get_vdpa_device(const char *path)
 {
@@ -622,7 +622,7 @@ rte_vhost_driver_get_vdpa_device(const char *path)
 	return dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_dev_type)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_vdpa_dev_type);
 int
 rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type)
 {
@@ -651,7 +651,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_disable_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_disable_features);
 int
 rte_vhost_driver_disable_features(const char *path, uint64_t features)
 {
@@ -672,7 +672,7 @@ rte_vhost_driver_disable_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_enable_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_enable_features);
 int
 rte_vhost_driver_enable_features(const char *path, uint64_t features)
 {
@@ -696,7 +696,7 @@ rte_vhost_driver_enable_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_features);
 int
 rte_vhost_driver_set_features(const char *path, uint64_t features)
 {
@@ -718,7 +718,7 @@ rte_vhost_driver_set_features(const char *path, uint64_t features)
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_features);
 int
 rte_vhost_driver_get_features(const char *path, uint64_t *features)
 {
@@ -754,7 +754,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_protocol_features);
 int
 rte_vhost_driver_set_protocol_features(const char *path,
 		uint64_t protocol_features)
@@ -769,7 +769,7 @@ rte_vhost_driver_set_protocol_features(const char *path,
 	return vsocket ? 0 : -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_protocol_features);
 int
 rte_vhost_driver_get_protocol_features(const char *path,
 		uint64_t *protocol_features)
@@ -808,7 +808,7 @@ rte_vhost_driver_get_protocol_features(const char *path,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_get_queue_num)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_get_queue_num);
 int
 rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num)
 {
@@ -844,7 +844,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_set_max_queue_num)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_set_max_queue_num);
 int
 rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
 {
@@ -902,7 +902,7 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
  * (the default case), or client (when RTE_VHOST_USER_CLIENT) flag
  * is set.
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_register)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_register);
 int
 rte_vhost_driver_register(const char *path, uint64_t flags)
 {
@@ -1068,7 +1068,7 @@ vhost_user_remove_reconnect(struct vhost_user_socket *vsocket)
 /**
  * Unregister the specified vhost socket
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_unregister)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_unregister);
 int
 rte_vhost_driver_unregister(const char *path)
 {
@@ -1152,7 +1152,7 @@ rte_vhost_driver_unregister(const char *path)
 /*
  * Register ops so that we can add/remove device to data core.
  */
-RTE_EXPORT_SYMBOL(rte_vhost_driver_callback_register)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_callback_register);
 int
 rte_vhost_driver_callback_register(const char *path,
 	struct rte_vhost_device_ops const * const ops)
@@ -1180,7 +1180,7 @@ vhost_driver_callback_get(const char *path)
 	return vsocket ? vsocket->notify_ops : NULL;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_driver_start)
+RTE_EXPORT_SYMBOL(rte_vhost_driver_start);
 int
 rte_vhost_driver_start(const char *path)
 {
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index bc2dd8d2e1..2ddcc49a35 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -50,7 +50,7 @@ __vdpa_find_device_by_name(const char *name)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_find_device_by_name)
+RTE_EXPORT_SYMBOL(rte_vdpa_find_device_by_name);
 struct rte_vdpa_device *
 rte_vdpa_find_device_by_name(const char *name)
 {
@@ -63,7 +63,7 @@ rte_vdpa_find_device_by_name(const char *name)
 	return dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_rte_device)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_rte_device);
 struct rte_device *
 rte_vdpa_get_rte_device(struct rte_vdpa_device *vdpa_dev)
 {
@@ -73,7 +73,7 @@ rte_vdpa_get_rte_device(struct rte_vdpa_device *vdpa_dev)
 	return vdpa_dev->device;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_register_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_register_device);
 struct rte_vdpa_device *
 rte_vdpa_register_device(struct rte_device *rte_dev,
 		struct rte_vdpa_dev_ops *ops)
@@ -129,7 +129,7 @@ rte_vdpa_register_device(struct rte_device *rte_dev,
 	return dev;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_unregister_device)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_unregister_device);
 int
 rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 {
@@ -151,7 +151,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_relay_vring_used)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vdpa_relay_vring_used);
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
 {
@@ -263,7 +263,7 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
 	return -1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_queue_num)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_queue_num);
 int
 rte_vdpa_get_queue_num(struct rte_vdpa_device *dev, uint32_t *queue_num)
 {
@@ -273,7 +273,7 @@ rte_vdpa_get_queue_num(struct rte_vdpa_device *dev, uint32_t *queue_num)
 	return dev->ops->get_queue_num(dev, queue_num);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_features)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_features);
 int
 rte_vdpa_get_features(struct rte_vdpa_device *dev, uint64_t *features)
 {
@@ -283,7 +283,7 @@ rte_vdpa_get_features(struct rte_vdpa_device *dev, uint64_t *features)
 	return dev->ops->get_features(dev, features);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_protocol_features);
 int
 rte_vdpa_get_protocol_features(struct rte_vdpa_device *dev, uint64_t *features)
 {
@@ -294,7 +294,7 @@ rte_vdpa_get_protocol_features(struct rte_vdpa_device *dev, uint64_t *features)
 	return dev->ops->get_protocol_features(dev, features);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_stats_names)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_stats_names);
 int
 rte_vdpa_get_stats_names(struct rte_vdpa_device *dev,
 		struct rte_vdpa_stat_name *stats_names,
@@ -309,7 +309,7 @@ rte_vdpa_get_stats_names(struct rte_vdpa_device *dev,
 	return dev->ops->get_stats_names(dev, stats_names, size);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_get_stats)
+RTE_EXPORT_SYMBOL(rte_vdpa_get_stats);
 int
 rte_vdpa_get_stats(struct rte_vdpa_device *dev, uint16_t qid,
 		struct rte_vdpa_stat *stats, unsigned int n)
@@ -323,7 +323,7 @@ rte_vdpa_get_stats(struct rte_vdpa_device *dev, uint16_t qid,
 	return dev->ops->get_stats(dev, qid, stats, n);
 }
 
-RTE_EXPORT_SYMBOL(rte_vdpa_reset_stats)
+RTE_EXPORT_SYMBOL(rte_vdpa_reset_stats);
 int
 rte_vdpa_reset_stats(struct rte_vdpa_device *dev, uint16_t qid)
 {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index a2e3e2635d..a928abbe99 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -861,7 +861,7 @@ vhost_enable_linearbuf(int vid)
 	dev->linearbuf = 1;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_mtu)
+RTE_EXPORT_SYMBOL(rte_vhost_get_mtu);
 int
 rte_vhost_get_mtu(int vid, uint16_t *mtu)
 {
@@ -881,7 +881,7 @@ rte_vhost_get_mtu(int vid, uint16_t *mtu)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_numa_node)
+RTE_EXPORT_SYMBOL(rte_vhost_get_numa_node);
 int
 rte_vhost_get_numa_node(int vid)
 {
@@ -908,7 +908,7 @@ rte_vhost_get_numa_node(int vid)
 #endif
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_num)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_num);
 uint16_t
 rte_vhost_get_vring_num(int vid)
 {
@@ -920,7 +920,7 @@ rte_vhost_get_vring_num(int vid)
 	return dev->nr_vring;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_ifname)
+RTE_EXPORT_SYMBOL(rte_vhost_get_ifname);
 int
 rte_vhost_get_ifname(int vid, char *buf, size_t len)
 {
@@ -937,7 +937,7 @@ rte_vhost_get_ifname(int vid, char *buf, size_t len)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_features)
+RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_features);
 int
 rte_vhost_get_negotiated_features(int vid, uint64_t *features)
 {
@@ -951,7 +951,7 @@ rte_vhost_get_negotiated_features(int vid, uint64_t *features)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_protocol_features)
+RTE_EXPORT_SYMBOL(rte_vhost_get_negotiated_protocol_features);
 int
 rte_vhost_get_negotiated_protocol_features(int vid,
 					   uint64_t *protocol_features)
@@ -966,7 +966,7 @@ rte_vhost_get_negotiated_protocol_features(int vid,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_mem_table)
+RTE_EXPORT_SYMBOL(rte_vhost_get_mem_table);
 int
 rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem)
 {
@@ -990,7 +990,7 @@ rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_vring)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_vring);
 int
 rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
 			  struct rte_vhost_vring *vring)
@@ -1027,7 +1027,7 @@ rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_ring_inflight)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vhost_ring_inflight);
 int
 rte_vhost_get_vhost_ring_inflight(int vid, uint16_t vring_idx,
 				  struct rte_vhost_ring_inflight *vring)
@@ -1063,7 +1063,7 @@ rte_vhost_get_vhost_ring_inflight(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_split)
+RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_split);
 int
 rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
 				  uint16_t idx)
@@ -1100,7 +1100,7 @@ rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_set_inflight_desc_packed);
 int
 rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
 				   uint16_t head, uint16_t last,
@@ -1169,7 +1169,7 @@ rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_split)
+RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_split);
 int
 rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
 				  uint16_t last_used_idx, uint16_t idx)
@@ -1211,7 +1211,7 @@ rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_clr_inflight_desc_packed);
 int
 rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
 				   uint16_t head)
@@ -1258,7 +1258,7 @@ rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_split)
+RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_split);
 int
 rte_vhost_set_last_inflight_io_split(int vid, uint16_t vring_idx,
 				     uint16_t idx)
@@ -1294,7 +1294,7 @@ rte_vhost_set_last_inflight_io_split(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_packed)
+RTE_EXPORT_SYMBOL(rte_vhost_set_last_inflight_io_packed);
 int
 rte_vhost_set_last_inflight_io_packed(int vid, uint16_t vring_idx,
 				      uint16_t head)
@@ -1345,7 +1345,7 @@ rte_vhost_set_last_inflight_io_packed(int vid, uint16_t vring_idx,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_call)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_call);
 int
 rte_vhost_vring_call(int vid, uint16_t vring_idx)
 {
@@ -1382,7 +1382,7 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_call_nonblock)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_call_nonblock);
 int
 rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
 {
@@ -1420,7 +1420,7 @@ rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_avail_entries)
+RTE_EXPORT_SYMBOL(rte_vhost_avail_entries);
 uint16_t
 rte_vhost_avail_entries(int vid, uint16_t queue_id)
 {
@@ -1517,7 +1517,7 @@ vhost_enable_guest_notification(struct virtio_net *dev,
 		return vhost_enable_notify_split(dev, vq, enable);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_enable_guest_notification)
+RTE_EXPORT_SYMBOL(rte_vhost_enable_guest_notification);
 int
 rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
@@ -1551,7 +1551,7 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_notify_guest, 23.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_notify_guest, 23.07);
 void
 rte_vhost_notify_guest(int vid, uint16_t queue_id)
 {
@@ -1588,7 +1588,7 @@ rte_vhost_notify_guest(int vid, uint16_t queue_id)
 	rte_rwlock_read_unlock(&vq->access_lock);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_log_write)
+RTE_EXPORT_SYMBOL(rte_vhost_log_write);
 void
 rte_vhost_log_write(int vid, uint64_t addr, uint64_t len)
 {
@@ -1600,7 +1600,7 @@ rte_vhost_log_write(int vid, uint64_t addr, uint64_t len)
 	vhost_log_write(dev, addr, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_log_used_vring)
+RTE_EXPORT_SYMBOL(rte_vhost_log_used_vring);
 void
 rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
 			 uint64_t offset, uint64_t len)
@@ -1621,7 +1621,7 @@ rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
 	vhost_log_used_vring(dev, vq, offset, len);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_rx_queue_count)
+RTE_EXPORT_SYMBOL(rte_vhost_rx_queue_count);
 uint32_t
 rte_vhost_rx_queue_count(int vid, uint16_t qid)
 {
@@ -1659,7 +1659,7 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vdpa_device)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vdpa_device);
 struct rte_vdpa_device *
 rte_vhost_get_vdpa_device(int vid)
 {
@@ -1671,7 +1671,7 @@ rte_vhost_get_vdpa_device(int vid)
 	return dev->vdpa_dev;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_log_base)
+RTE_EXPORT_SYMBOL(rte_vhost_get_log_base);
 int
 rte_vhost_get_log_base(int vid, uint64_t *log_base,
 		uint64_t *log_size)
@@ -1687,7 +1687,7 @@ rte_vhost_get_log_base(int vid, uint64_t *log_base,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base);
 int
 rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 		uint16_t *last_avail_idx, uint16_t *last_used_idx)
@@ -1718,7 +1718,7 @@ rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_set_vring_base)
+RTE_EXPORT_SYMBOL(rte_vhost_set_vring_base);
 int
 rte_vhost_set_vring_base(int vid, uint16_t queue_id,
 		uint16_t last_avail_idx, uint16_t last_used_idx)
@@ -1751,7 +1751,7 @@ rte_vhost_set_vring_base(int vid, uint16_t queue_id,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base_from_inflight)
+RTE_EXPORT_SYMBOL(rte_vhost_get_vring_base_from_inflight);
 int
 rte_vhost_get_vring_base_from_inflight(int vid,
 				       uint16_t queue_id,
@@ -1786,7 +1786,7 @@ rte_vhost_get_vring_base_from_inflight(int vid,
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_extern_callback_register)
+RTE_EXPORT_SYMBOL(rte_vhost_extern_callback_register);
 int
 rte_vhost_extern_callback_register(int vid,
 		struct rte_vhost_user_extern_ops const * const ops, void *ctx)
@@ -1874,7 +1874,7 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register, 20.08);
 int
 rte_vhost_async_channel_register(int vid, uint16_t queue_id)
 {
@@ -1908,7 +1908,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_register_thread_unsafe, 21.08);
 int
 rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -1931,7 +1931,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
 	return async_channel_register(dev, vq);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister, 20.08);
 int
 rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
 {
@@ -1978,7 +1978,7 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_channel_unregister_thread_unsafe, 21.08);
 int
 rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -2013,7 +2013,7 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 	return 0;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_configure, 22.03)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_configure, 22.03);
 int
 rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 {
@@ -2090,7 +2090,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	return -1;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight, 21.08);
 int
 rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 {
@@ -2129,7 +2129,7 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight_thread_unsafe, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_get_inflight_thread_unsafe, 22.07);
 int
 rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id)
 {
@@ -2158,7 +2158,7 @@ rte_vhost_async_get_inflight_thread_unsafe(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_get_monitor_addr)
+RTE_EXPORT_SYMBOL(rte_vhost_get_monitor_addr);
 int
 rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 		struct rte_vhost_power_monitor_cond *pmc)
@@ -2209,7 +2209,7 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 }
 
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get_names)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get_names);
 int
 rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
 		struct rte_vhost_stat_name *name, unsigned int size)
@@ -2237,7 +2237,7 @@ rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
 	return VHOST_NB_VQ_STATS;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_get);
 int
 rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
 		struct rte_vhost_stat *stats, unsigned int n)
@@ -2284,7 +2284,7 @@ rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_reset)
+RTE_EXPORT_SYMBOL(rte_vhost_vring_stats_reset);
 int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -2320,7 +2320,7 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return ret;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_unconfigure, 22.11)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_dma_unconfigure, 22.11);
 int
 rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
 {
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 648e2d731b..ed5b164846 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -1782,7 +1782,7 @@ vhost_crypto_complete_one_vm_requests(struct rte_crypto_op **ops,
 	return processed;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_driver_start)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_driver_start);
 int
 rte_vhost_crypto_driver_start(const char *path)
 {
@@ -1804,7 +1804,7 @@ rte_vhost_crypto_driver_start(const char *path)
 	return rte_vhost_driver_start(path);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_create)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_create);
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
@@ -1888,7 +1888,7 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 	return ret;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_free)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_free);
 int
 rte_vhost_crypto_free(int vid)
 {
@@ -1918,7 +1918,7 @@ rte_vhost_crypto_free(int vid)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_set_zero_copy)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_set_zero_copy);
 int
 rte_vhost_crypto_set_zero_copy(int vid, enum rte_vhost_crypto_zero_copy option)
 {
@@ -1974,7 +1974,7 @@ rte_vhost_crypto_set_zero_copy(int vid, enum rte_vhost_crypto_zero_copy option)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_fetch_requests)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_fetch_requests);
 uint16_t
 rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
@@ -2104,7 +2104,7 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
 	return i;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_crypto_finalize_requests)
+RTE_EXPORT_SYMBOL(rte_vhost_crypto_finalize_requests);
 uint16_t
 rte_vhost_crypto_finalize_requests(struct rte_crypto_op **ops,
 		uint16_t nb_ops, int *callfds, uint16_t *nb_callfds)
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index b73dec6a22..f5578df43e 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -3360,7 +3360,7 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm)
 	return 0;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_backend_config_change)
+RTE_EXPORT_SYMBOL(rte_vhost_backend_config_change);
 int
 rte_vhost_backend_config_change(int vid, bool need_reply)
 {
@@ -3423,7 +3423,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev,
 	return ret;
 }
 
-RTE_EXPORT_INTERNAL_SYMBOL(rte_vhost_host_notifier_ctrl)
+RTE_EXPORT_INTERNAL_SYMBOL(rte_vhost_host_notifier_ctrl);
 int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable)
 {
 	struct virtio_net *dev;
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 77545d0a4d..699bac781b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1740,7 +1740,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nb_tx;
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_enqueue_burst)
+RTE_EXPORT_SYMBOL(rte_vhost_enqueue_burst);
 uint16_t
 rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
 	struct rte_mbuf **__rte_restrict pkts, uint16_t count)
@@ -2342,7 +2342,7 @@ vhost_poll_enqueue_completed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nr_cpl_pkts;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_poll_enqueue_completed, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_poll_enqueue_completed, 20.08);
 uint16_t
 rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -2398,7 +2398,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue_thread_unsafe, 21.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue_thread_unsafe, 21.08);
 uint16_t
 rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -2456,7 +2456,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_clear_queue, 22.07);
 uint16_t
 rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts,
 		uint16_t count, int16_t dma_id, uint16_t vchan_id)
@@ -2572,7 +2572,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return nb_tx;
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_submit_enqueue_burst, 20.08)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_submit_enqueue_burst, 20.08);
 uint16_t
 rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
@@ -3594,7 +3594,7 @@ virtio_dev_tx_packed_compliant(struct virtio_net *dev,
 	return virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count, false);
 }
 
-RTE_EXPORT_SYMBOL(rte_vhost_dequeue_burst)
+RTE_EXPORT_SYMBOL(rte_vhost_dequeue_burst);
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -4204,7 +4204,7 @@ virtio_dev_tx_async_packed_compliant(struct virtio_net *dev, struct vhost_virtqu
 				pkts, count, dma_id, vchan_id, false);
 }
 
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_try_dequeue_burst, 22.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_vhost_async_try_dequeue_burst, 22.07);
 uint16_t
 rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [PATCH v2 3/3] doc: update ABI versioning guide
  2025-08-29  2:34  1% ` [PATCH v2 0/3] " Chengwen Feng
@ 2025-08-29  2:34  9%   ` Chengwen Feng
  0 siblings, 0 replies; 77+ results
From: Chengwen Feng @ 2025-08-29  2:34 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Add a semicolon after RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL in the guide.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 doc/guides/contributing/abi_versioning.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
-- 
2.17.1


^ permalink raw reply	[relevance 9%]

* [PATCH v2 0/3] support quick jump to API definition
  2025-08-28  2:59  1% Chengwen Feng
@ 2025-08-29  2:34  1% ` Chengwen Feng
  2025-08-29  2:34  9%   ` [PATCH v2 3/3] doc: update ABI versioning guide Chengwen Feng
  2025-09-01  1:21  1% ` [PATCH v3 0/5] add semicolon when export any symbol Chengwen Feng
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 77+ results
From: Chengwen Feng @ 2025-08-29  2:34 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL markers are placed right before the API
definitions, but do not end with a semicolon. As a result, some IDEs
cannot identify the API definitions and cannot quickly jump to them.

This series adds a semicolon at the end of the above RTE_EXPORT_xxx_SYMBOL
markers, as illustrated by the sketch below.
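
For illustration, a minimal self-contained sketch of the before/after
effect. The empty RTE_EXPORT_SYMBOL() definition and the my_lib_*()
functions are stand-ins made up for this sketch, not the real DPDK macro
or APIs; the real marker is assumed to expand to nothing for the compiler
and to be parsed only by the export tooling:

/* Stand-in definition, assumption for this sketch only: the marker
 * expands to nothing and is only parsed by the build tooling. */
#define RTE_EXPORT_SYMBOL(sym)

/* Before this series: no trailing semicolon after the marker, so some
 * IDE indexers fail to record my_lib_create() as a definition and
 * "jump to definition" does not work. */
RTE_EXPORT_SYMBOL(my_lib_create)
int
my_lib_create(int param)
{
	return param;
}

/* After this series: the trailing semicolon terminates the marker line
 * and the definition below is indexed normally.  With the empty stand-in
 * above, the ';' is a harmless empty file-scope declaration accepted by
 * GCC and Clang; the real macro is assumed to handle it cleanly. */
RTE_EXPORT_SYMBOL(my_lib_destroy);
int
my_lib_destroy(int param)
{
	return -param;
}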

Chengwen Feng (3):
  lib: support quick jump to API definition
  drivers: support quick jump to API definition
  doc: update ABI versioning guide

---
v2:
1. drop the gen-version-map.py change to make sure it does not cause
   compile errors with on-going code.
2. fix CI errors: a doubled semicolon for rte_node_mbuf_dynfield_register,
   and the mlx5-glue.c error (by keeping it unchanged).
3. split into three commits.

 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 397 files changed, 4171 insertions(+), 4171 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* Re: [PATCH 3/3] vhost_user: support for memory regions
  @ 2025-08-29 11:59  3%   ` Maxime Coquelin
  2025-10-08  9:23  0%     ` Bathija, Pravin
  0 siblings, 1 reply; 77+ results
From: Maxime Coquelin @ 2025-08-29 11:59 UTC (permalink / raw)
  To: Pravin M Bathija, dev; +Cc: pravin.m.bathija.dev

The title is not consistent with other commits in this library.

On 8/12/25 4:33 AM, Pravin M Bathija wrote:
> - modify data structures and add functions to support
>    add and remove memory regions/slots
> - define VHOST_MEMORY_MAX_NREGIONS & modify function
>    vhost_user_set_mem_table accordingly
> - dynamically add new memory slots via vhost_user_add_mem_reg
> - remove unused memory slots via vhost_user_rem_mem_reg
> - define data structure VhostUserSingleMemReg for single
>    memory region
> - modify data structures VhostUserRequest & VhostUserMsg
> 

Please write full sentences, explaining the purpose of this change and 
not just listing the changes themselves.

> Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
> ---
>   lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++------
>   lib/vhost/vhost_user.h |  10 ++
>   2 files changed, 291 insertions(+), 44 deletions(-)
> 
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index b73dec6a22..6367f54b97 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -74,6 +74,9 @@ VHOST_MESSAGE_HANDLER(VHOST_USER_SET_FEATURES, vhost_user_set_features, false, t
>   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_OWNER, vhost_user_set_owner, false, true) \
>   VHOST_MESSAGE_HANDLER(VHOST_USER_RESET_OWNER, vhost_user_reset_owner, false, false) \
>   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_MEM_TABLE, vhost_user_set_mem_table, true, true) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_GET_MAX_MEM_SLOTS, vhost_user_get_max_mem_slots, false, false) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_ADD_MEM_REG, vhost_user_add_mem_reg, true, true) \
> +VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG, vhost_user_rem_mem_reg, true, true) \

Shouldn't it be:
VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG, vhost_user_rem_mem_reg, 
false, true)

And if not, aren't you leaking FDs in vhost_user_rem_mem_reg?

>   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_BASE, vhost_user_set_log_base, true, true) \
>   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_FD, vhost_user_set_log_fd, true, true) \
>   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_VRING_NUM, vhost_user_set_vring_num, false, true) \
> @@ -228,7 +231,17 @@ async_dma_map(struct virtio_net *dev, bool do_map)
>   }
>   
>   static void
> -free_mem_region(struct virtio_net *dev)
> +free_mem_region(struct rte_vhost_mem_region *reg)
> +{
> +	if (reg != NULL && reg->host_user_addr) {
> +		munmap(reg->mmap_addr, reg->mmap_size);
> +		close(reg->fd);
> +		memset(reg, 0, sizeof(struct rte_vhost_mem_region));
> +	}
> +}
> +
> +static void
> +free_all_mem_regions(struct virtio_net *dev)
>   {
>   	uint32_t i;
>   	struct rte_vhost_mem_region *reg;
> @@ -239,12 +252,10 @@ free_mem_region(struct virtio_net *dev)
>   	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
>   		async_dma_map(dev, false);
>   
> -	for (i = 0; i < dev->mem->nregions; i++) {
> +	for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
>   		reg = &dev->mem->regions[i];
> -		if (reg->host_user_addr) {
> -			munmap(reg->mmap_addr, reg->mmap_size);
> -			close(reg->fd);
> -		}
> +		if (reg->mmap_addr)
> +			free_mem_region(reg);

Please split this patch into multiple ones.
Do the refactorings in dedicated patches.

>   	}
>   }
>   
> @@ -258,7 +269,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
>   		vdpa_dev->ops->dev_cleanup(dev->vid);
>   
>   	if (dev->mem) {
> -		free_mem_region(dev);
> +		free_all_mem_regions(dev);
>   		rte_free(dev->mem);
>   		dev->mem = NULL;
>   	}
> @@ -707,7 +718,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   	vhost_devices[dev->vid] = dev;
>   
>   	mem_size = sizeof(struct rte_vhost_memory) +
> -		sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
> +		sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS;
>   	mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
>   	if (!mem) {
>   		VHOST_CONFIG_LOG(dev->ifname, ERR,
> @@ -811,8 +822,10 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
>   	uint32_t i;
>   	uintptr_t hua = (uintptr_t)ptr;
>   
> -	for (i = 0; i < mem->nregions; i++) {
> +	for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
>   		r = &mem->regions[i];
> +		if (r->host_user_addr == 0)
> +			continue;
>   		if (hua >= r->host_user_addr &&
>   			hua < r->host_user_addr + r->size) {
>   			return get_blk_size(r->fd);
> @@ -1250,9 +1263,13 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
>   	 * retrieve the region offset when handling userfaults.
>   	 */
>   	memory = &ctx->msg.payload.memory;
> -	for (i = 0; i < memory->nregions; i++) {
> +	for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +		int reg_msg_index = 0;
>   		reg = &dev->mem->regions[i];
> -		memory->regions[i].userspace_addr = reg->host_user_addr;
> +		if (reg->host_user_addr == 0)
> +			continue;
> +		memory->regions[reg_msg_index].userspace_addr = reg->host_user_addr;
> +		reg_msg_index++;
>   	}
>   
>   	/* Send the addresses back to qemu */
> @@ -1279,8 +1296,10 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
>   	}
>   
>   	/* Now userfault register and we can use the memory */
> -	for (i = 0; i < memory->nregions; i++) {
> +	for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
>   		reg = &dev->mem->regions[i];
> +		if (reg->host_user_addr == 0)
> +			continue;
>   		if (vhost_user_postcopy_region_register(dev, reg) < 0)
>   			return -1;
>   	}
> @@ -1385,6 +1404,46 @@ vhost_user_mmap_region(struct virtio_net *dev,
>   	return 0;
>   }
>   
> +static int
> +vhost_user_initialize_memory(struct virtio_net **pdev)
> +{
> +	struct virtio_net *dev = *pdev;
> +	int numa_node = SOCKET_ID_ANY;
> +
> +	/*
> +	 * If VQ 0 has already been allocated, try to allocate on the same
> +	 * NUMA node. It can be reallocated later in numa_realloc().
> +	 */
> +	if (dev->nr_vring > 0)
> +		numa_node = dev->virtqueue[0]->numa_node;
> +
> +	dev->nr_guest_pages = 0;
> +	if (dev->guest_pages == NULL) {
> +		dev->max_guest_pages = 8;
> +		dev->guest_pages = rte_zmalloc_socket(NULL,
> +					dev->max_guest_pages *
> +					sizeof(struct guest_page),
> +					RTE_CACHE_LINE_SIZE,
> +					numa_node);
> +		if (dev->guest_pages == NULL) {
> +			VHOST_CONFIG_LOG(dev->ifname, ERR,
> +				"failed to allocate memory for dev->guest_pages");
> +			return -1;
> +		}
> +	}
> +
> +	dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
> +		sizeof(struct rte_vhost_mem_region) * VHOST_MEMORY_MAX_NREGIONS, 0, numa_node);
> +	if (dev->mem == NULL) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
> +		rte_free(dev->guest_pages);
> +		dev->guest_pages = NULL;
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
>   static int
>   vhost_user_set_mem_table(struct virtio_net **pdev,
>   			struct vhu_msg_context *ctx,
> @@ -1393,7 +1452,6 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>   	struct virtio_net *dev = *pdev;
>   	struct VhostUserMemory *memory = &ctx->msg.payload.memory;
>   	struct rte_vhost_mem_region *reg;
> -	int numa_node = SOCKET_ID_ANY;
>   	uint64_t mmap_offset;
>   	uint32_t i;
>   	bool async_notify = false;
> @@ -1438,39 +1496,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>   		if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
>   			vhost_user_iotlb_flush_all(dev);
>   
> -		free_mem_region(dev);
> +		free_all_mem_regions(dev);
>   		rte_free(dev->mem);
>   		dev->mem = NULL;
>   	}
>   
> -	/*
> -	 * If VQ 0 has already been allocated, try to allocate on the same
> -	 * NUMA node. It can be reallocated later in numa_realloc().
> -	 */
> -	if (dev->nr_vring > 0)
> -		numa_node = dev->virtqueue[0]->numa_node;
> -
> -	dev->nr_guest_pages = 0;
> -	if (dev->guest_pages == NULL) {
> -		dev->max_guest_pages = 8;
> -		dev->guest_pages = rte_zmalloc_socket(NULL,
> -					dev->max_guest_pages *
> -					sizeof(struct guest_page),
> -					RTE_CACHE_LINE_SIZE,
> -					numa_node);
> -		if (dev->guest_pages == NULL) {
> -			VHOST_CONFIG_LOG(dev->ifname, ERR,
> -				"failed to allocate memory for dev->guest_pages");
> -			goto close_msg_fds;
> -		}
> -	}
> -
> -	dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) +
> -		sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node);
> -	if (dev->mem == NULL) {
> -		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem");
> -		goto free_guest_pages;
> -	}
> +	if (vhost_user_initialize_memory(pdev) < 0)
> +		goto close_msg_fds;

This part should be refactored into a dedicated preliminary patch.

>   
>   	for (i = 0; i < memory->nregions; i++) {
>   		reg = &dev->mem->regions[i];
> @@ -1534,11 +1566,182 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>   	return RTE_VHOST_MSG_RESULT_OK;
>   
>   free_mem_table:
> -	free_mem_region(dev);
> +	free_all_mem_regions(dev);
>   	rte_free(dev->mem);
>   	dev->mem = NULL;
> +	rte_free(dev->guest_pages);
> +	dev->guest_pages = NULL;
> +close_msg_fds:
> +	close_msg_fds(ctx);
> +	return RTE_VHOST_MSG_RESULT_ERR;
> +}
> +
> +
> +static int
> +vhost_user_get_max_mem_slots(struct virtio_net **pdev __rte_unused,
> +			struct vhu_msg_context *ctx,
> +			int main_fd __rte_unused)
> +{
> +	uint32_t max_mem_slots = VHOST_MEMORY_MAX_NREGIONS;

This VHOST_MEMORY_MAX_NREGIONS value was hardcoded when only
VHOST_USER_SET_MEM_TABLE was introduced.

With these new features, my understanding is that we can get rid of this
limit, right?

The good news is that increasing it should not break the DPDK ABI.

Would it make sense to increase it?
> +
> +	ctx->msg.payload.u64 = (uint64_t)max_mem_slots;
> +	ctx->msg.size = sizeof(ctx->msg.payload.u64);
> +	ctx->fd_num = 0;
>   
> -free_guest_pages:
> +	return RTE_VHOST_MSG_RESULT_REPLY;
> +}
> +
> +static int
> +vhost_user_add_mem_reg(struct virtio_net **pdev,
> +			struct vhu_msg_context *ctx,
> +			int main_fd __rte_unused)
> +{
> +	struct virtio_net *dev = *pdev;
> +	struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
> +	uint32_t i;
> +
> +	/* make sure new region will fit */
> +	if (dev->mem != NULL && dev->mem->nregions >= VHOST_MEMORY_MAX_NREGIONS) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR,
> +			"too many memory regions already (%u)",
> +			dev->mem->nregions);
> +		goto close_msg_fds;
> +	}
> +
> +	/* make sure supplied memory fd present */
> +	if (ctx->fd_num != 1) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR,
> +			"fd count makes no sense (%u)",
> +			ctx->fd_num);
> +		goto close_msg_fds;
> +	}

There is a lack of support for vDPA devices.
My understanding here is that the vDPA device does not get the new table 
entry.

In set_mem_table, we call its close callback, but that might be a bit 
too much for simple memory hotplug. We might need another mechanism.

> +
> +	/* Make sure no overlap in guest virtual address space */
> +	if (dev->mem != NULL && dev->mem->nregions > 0)	{
> +		for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +			struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
> +
> +			if (current_region->mmap_size == 0)
> +				continue;
> +
> +			uint64_t current_region_guest_start = current_region->guest_user_addr;
> +			uint64_t current_region_guest_end = current_region_guest_start
> +								+ current_region->mmap_size - 1;

Shouldn't it use size instead of mmap_size to check for overlaps?
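
For reference, the same test can be written as a small helper over half-open
ranges using size; a rough, untested sketch (needs <stdint.h>/<stdbool.h>,
ignoring wrap-around for brevity):

	/* Half-open ranges [s1, s1 + len1) and [s2, s2 + len2) overlap iff
	 * each one starts before the other one ends. */
	static inline bool
	ranges_overlap(uint64_t s1, uint64_t len1, uint64_t s2, uint64_t len2)
	{
		return s1 < s2 + len2 && s2 < s1 + len1;
	}

	...
	bool overlap = ranges_overlap(current_region->guest_user_addr,
			current_region->size,
			region->userspace_addr, region->memory_size);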

> +			uint64_t proposed_region_guest_start = region->userspace_addr;
> +			uint64_t proposed_region_guest_end = proposed_region_guest_start
> +								+ region->memory_size - 1;
> +			bool overlap = false;
> +
> +			bool current_region_guest_start_overlap =
> +				current_region_guest_start >= proposed_region_guest_start
> +				&& current_region_guest_start <= proposed_region_guest_end;
> +			bool current_region_guest_end_overlap =
> +				current_region_guest_end >= proposed_region_guest_start
> +				&& current_region_guest_end <= proposed_region_guest_end;
> +			bool proposed_region_guest_start_overlap =
> +				proposed_region_guest_start >= current_region_guest_start
> +				&& proposed_region_guest_start <= current_region_guest_end;
> +			bool proposed_region_guest_end_overlap =
> +				proposed_region_guest_end >= current_region_guest_start
> +				&& proposed_region_guest_end <= current_region_guest_end;
> +
> +			overlap = current_region_guest_start_overlap
> +				|| current_region_guest_end_overlap
> +				|| proposed_region_guest_start_overlap
> +				|| proposed_region_guest_end_overlap;
> +
> +			if (overlap) {
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"requested memory region overlaps with another region");
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tRequested region address:0x%" PRIx64,
> +					region->userspace_addr);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tRequested region size:0x%" PRIx64,
> +					region->memory_size);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tOverlapping region address:0x%" PRIx64,
> +					current_region->guest_user_addr);
> +				VHOST_CONFIG_LOG(dev->ifname, ERR,
> +					"\tOverlapping region size:0x%" PRIx64,
> +					current_region->mmap_size);
> +				goto close_msg_fds;
> +			}
> +
> +		}
> +	}
> +
> +	/* convert first region add to normal memory table set */
> +	if (dev->mem == NULL) {
> +		if (vhost_user_initialize_memory(pdev) < 0)
> +			goto close_msg_fds;
> +	}
> +
> +	/* find a new region and set it like memory table set does */
> +	struct rte_vhost_mem_region *reg = NULL;
> +	uint64_t mmap_offset;
> +
> +	for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +		if (dev->mem->regions[i].guest_user_addr == 0) {
> +			reg = &dev->mem->regions[i];
> +			break;
> +		}
> +	}
> +	if (reg == NULL) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR, "no free memory region");
> +		goto close_msg_fds;
> +	}
> +
> +	reg->guest_phys_addr = region->guest_phys_addr;
> +	reg->guest_user_addr = region->userspace_addr;
> +	reg->size            = region->memory_size;
> +	reg->fd              = ctx->fds[0];
> +
> +	mmap_offset = region->mmap_offset;
> +
> +	if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) {
> +		VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region");
> +		goto close_msg_fds;
> +	}
> +
> +	dev->mem->nregions++;
> +
> +	if (dev->async_copy && rte_vfio_is_enabled("vfio"))
> +		async_dma_map(dev, true);
> +
> +	if (vhost_user_postcopy_register(dev, main_fd, ctx) < 0)
> +		goto free_mem_table;
> +
> +	for (i = 0; i < dev->nr_vring; i++) {
> +		struct vhost_virtqueue *vq = dev->virtqueue[i];
> +
> +		if (!vq)
> +			continue;
> +
> +		if (vq->desc || vq->avail || vq->used) {
> +			/* vhost_user_lock_all_queue_pairs locked all qps */
> +			VHOST_USER_ASSERT_LOCK(dev, vq, VHOST_USER_SET_MEM_TABLE);

VHOST_USER_ASSERT_LOCK(dev, vq, VHOST_USER_ADD_MEM_REG); ?

> +
> +			/*
> +			 * If the memory table got updated, the ring addresses
> +			 * need to be translated again as virtual addresses have
> +			 * changed.
> +			 */
> +			vring_invalidate(dev, vq);
> +
> +			translate_ring_addresses(&dev, &vq);
> +			*pdev = dev;
> +		}
> +	}
> +
> +	dump_guest_pages(dev);
> +
> +	return RTE_VHOST_MSG_RESULT_OK;
> +
> +free_mem_table:
> +	free_all_mem_regions(dev);
> +	rte_free(dev->mem);
> +	dev->mem = NULL;
>   	rte_free(dev->guest_pages);
>   	dev->guest_pages = NULL;
>   close_msg_fds:
> @@ -1546,6 +1749,40 @@ vhost_user_set_mem_table(struct virtio_net **pdev,
>   	return RTE_VHOST_MSG_RESULT_ERR;
>   }
>   
> +static int
> +vhost_user_rem_mem_reg(struct virtio_net **pdev __rte_unused,
> +			struct vhu_msg_context *ctx __rte_unused,
> +			int main_fd __rte_unused)
> +{
> +	struct virtio_net *dev = *pdev;
> +	struct VhostUserMemoryRegion *region = &ctx->msg.payload.memory_single.region;
> +

It lacks support for vDPA devices.
In set_mem_table, we call the vDPA close cb to ensure it is not actively
accessing the memory being unmapped.

We need something similar here, otherwise the vDPA device is not aware 
of the memory being unplugged.

> +	if (dev->mem != NULL && dev->mem->nregions > 0) {
> +		for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> +			struct rte_vhost_mem_region *current_region = &dev->mem->regions[i];
> +
> +			if (current_region->guest_user_addr == 0)
> +				continue;
> +
> +			/*
> +			 * According to the vhost-user specification:
> +			 * The memory region to be removed is identified by its guest address,
> +			 * user address and size. The mmap offset is ignored.
> +			 */
> +			if (region->userspace_addr == current_region->guest_user_addr
> +				&& region->guest_phys_addr == current_region->guest_phys_addr
> +				&& region->memory_size == current_region->size) {
> +				free_mem_region(current_region);
> +				dev->mem->nregions--;
> +				return RTE_VHOST_MSG_RESULT_OK;
> +			}

There is a lack of IOTLB entry invalidation here, as IOTLB entries in
the cache could point to memory being unmapped in this function.

Same comment for vring invalidation, as the vring addresses are not
re-translated at each burst.
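
A rough sketch of the kind of invalidation that seems to be missing, reusing
the calls already present on the set/add paths above (untested, placement
before free_mem_region() is hypothetical, for illustration only):

			if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
				vhost_user_iotlb_flush_all(dev);

			for (uint32_t j = 0; j < dev->nr_vring; j++) {
				struct vhost_virtqueue *vq = dev->virtqueue[j];

				if (vq == NULL)
					continue;
				/* drop cached ring addresses so they get
				 * re-translated after the unmap */
				vring_invalidate(dev, vq);
			}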

> +		}
> +	}
> +
> +	VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to find region");
> +	return RTE_VHOST_MSG_RESULT_ERR;
> +}
> +
>   static bool
>   vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   {
> diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
> index ef486545ba..5a0e747b58 100644
> --- a/lib/vhost/vhost_user.h
> +++ b/lib/vhost/vhost_user.h
> @@ -32,6 +32,7 @@
>   					 (1ULL << VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD) | \
>   					 (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
>   					 (1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT) | \
> +					 (1ULL << VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS) | \
>   					 (1ULL << VHOST_USER_PROTOCOL_F_STATUS))
>   
>   typedef enum VhostUserRequest {
> @@ -67,6 +68,9 @@ typedef enum VhostUserRequest {
>   	VHOST_USER_POSTCOPY_END = 30,
>   	VHOST_USER_GET_INFLIGHT_FD = 31,
>   	VHOST_USER_SET_INFLIGHT_FD = 32,
> +	VHOST_USER_GET_MAX_MEM_SLOTS = 36,
> +	VHOST_USER_ADD_MEM_REG = 37,
> +	VHOST_USER_REM_MEM_REG = 38,
>   	VHOST_USER_SET_STATUS = 39,
>   	VHOST_USER_GET_STATUS = 40,
>   } VhostUserRequest;
> @@ -91,6 +95,11 @@ typedef struct VhostUserMemory {
>   	VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
>   } VhostUserMemory;
>   
> +typedef struct VhostUserSingleMemReg {
> +	uint64_t padding;
> +	VhostUserMemoryRegion region;
> +} VhostUserSingleMemReg;
> +
>   typedef struct VhostUserLog {
>   	uint64_t mmap_size;
>   	uint64_t mmap_offset;
> @@ -186,6 +195,7 @@ typedef struct __rte_packed_begin VhostUserMsg {
>   		struct vhost_vring_state state;
>   		struct vhost_vring_addr addr;
>   		VhostUserMemory memory;
> +		VhostUserSingleMemReg memory_single;
>   		VhostUserLog    log;
>   		struct vhost_iotlb_msg iotlb;
>   		VhostUserCryptoSessionParam crypto_session;


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v7 00/13] Simplify running with high-numbered CPUs
  @ 2025-08-29 14:39  3%   ` Bruce Richardson
  0 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-08-29 14:39 UTC (permalink / raw)
  To: dev

On Wed, Jul 23, 2025 at 05:19:58PM +0100, Bruce Richardson wrote:
> The ultimate goal of this patchset is to make it easier to run on systems
> with large numbers of cores, by simplifying the process of using core
> numbers >RTE_MAX_LCORE. The new EAL args "--lcores-remapped", also
> shortened to just "-L", and "--lcoreid-base", are added to DPDK to
> support this. However, in order to enable the addition of these new
> flags, the first 10 patches of this set do cleanups, for reasons
> explained below.
> 
> When processing cmdline arguments in DPDK, we always do so with very
> little context. So, for example, when processing the "-l" flag, we have
> no idea whether there will be later a --proc-type=secondary flag. We
> have all sorts of post-arg-processing checks in place to try and catch
> these scenarios.
> 
> To improve this situation, this patchset tries to simplify the handling
> of argument processing, by explicitly doing an initial pass to collate
> all arguments into a structure. Thereafter, the actual arg parsing is
> done in a fixed order, meaning that e.g. when processing the
> --main-lcore flag, we have already processed the service core flags. We
> can also check for conflicting options far more quickly and easily, since
> they can all be checked for NULL/non-NULL in the arg structure
> immediately after the struct has been populated.
> 
> To do the initial argument gathering, this RFC uses the existing
> argparse library in DPDK. With recent changes, and a few additional
> patches at the start of this set, this library now meets our needs for
> EAL argument parsing and allows us to not need to do direct getopt
> argument processing inside EAL at all.
> 
> An additional benefit of this work is that the argument parsing for EAL
> is much more centralised into common options and the options list file.
> This single list with ifdefs makes it clear to the viewer what options
> are common across OS's, vs what are unix-only or linux-only.
> 
> Once the cleanup and rework is done, adding the new options for
> remapping cores becomes a lot simpler, since we can very easily check
> for scenarios like multi-process and handle those appropriately.
> 
> V7:
> * expand the scope of the patchset beyond just cleanup to add in the
>   extra 3 patches for -L and --lcoreid-base option.
> 
Recheck-request: rebase=main, iol-abi-testing

^ permalink raw reply	[relevance 3%]

* [PATCH v3 0/5] add semicolon when export any symbol
  2025-08-28  2:59  1% Chengwen Feng
  2025-08-29  2:34  1% ` [PATCH v2 0/3] " Chengwen Feng
@ 2025-09-01  1:21  1% ` Chengwen Feng
  2025-09-01  1:21  9%   ` [PATCH v3 5/5] doc: update ABI versioning guide Chengwen Feng
  2025-09-01 10:46  1% ` [PATCH v4 0/5] add semicolon when export any symbol Chengwen Feng
  2025-09-03  2:05  1% ` [PATCH v5 0/5] add semicolon when export any symbol Chengwen Feng
  3 siblings, 1 reply; 77+ results
From: Chengwen Feng @ 2025-09-01  1:21 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL are placed at the beginning of APIs,
but don't end with a semicolon. As a result, some IDEs cannot identify
the APIs and cannot quickly jump to the definition.

A semicolon is added to the end of the above RTE_EXPORT_XXX_SYMBOL macros in
this commit.

Chengwen Feng (5):
  lib: add semicolon when export symbol
  lib: add semicolon when export experimental symbol
  lib: add semicolon when export internal symbol
  drivers: add semicolon when export any symbol
  doc: update ABI versioning guide

---
v3:
1. split the lib commit into three commits.
2. rebase (try to fix the CI error: applying rte_cfgfile.c failed).
v2:
1. drop the gen-version-map.py change to make sure it will not cause
   compile errors with on-going code.
2. fix CI errors: two semicolons for rte_node_mbuf_dynfield_register,
   and the mlx5-glue.c error (fixed by keeping it unchanged).
3. split into three commits.

 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_glue.c         |    2 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mlx5/windows/mlx5_glue.c       |    2 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 399 files changed, 4173 insertions(+), 4173 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [PATCH v3 5/5] doc: update ABI versioning guide
  2025-09-01  1:21  1% ` [PATCH v3 0/5] add semicolon when export any symbol Chengwen Feng
@ 2025-09-01  1:21  9%   ` Chengwen Feng
  0 siblings, 0 replies; 77+ results
From: Chengwen Feng @ 2025-09-01  1:21 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Add a semicolon after RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL in the guide.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 doc/guides/contributing/abi_versioning.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
-- 
2.17.1


^ permalink raw reply	[relevance 9%]

* [PATCH v9 1/1] ethdev: add support to provide link type
  @ 2025-09-01  5:44  3% ` skori
  2025-09-08  8:51  3% ` [PATCH v10 " skori
  1 sibling, 0 replies; 77+ results
From: skori @ 2025-09-01  5:44 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko
  Cc: dev, Sunil Kumar Kori, Nithin Dabilpuram

From: Sunil Kumar Kori <skori@marvell.com>

Add a link type parameter to report the type of port,
e.g. twisted pair, fibre, etc.

Also add an API to convert RTE_ETH_LINK_CONNECTOR_XXX values
to a readable string.
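
For illustration only, a minimal sketch of how an application could use
the new helper together with the existing rte_eth_link_get_nowait()
(the port is assumed to be configured and started; the function name
print_link_connector is hypothetical):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_link_connector(uint16_t port_id)
    {
            struct rte_eth_link link;
            const char *conn;

            if (rte_eth_link_get_nowait(port_id, &link) != 0)
                    return;

            /* New helper; returns NULL for invalid connector values. */
            conn = rte_eth_link_connector_to_str(link.link_connector);
            printf("port %u connector: %s\n", (unsigned int)port_id,
                   conn != NULL ? conn : "Invalid");
    }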

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
v8..v9:
 - Adds 25.11 release notes.
v7..v8:
 - Add documentation for invalid link type.
 - Remove trace point from API.
 - Rebase on next-net.
v6..v7:
 - Replace link_type with link_connector.
 - Update comments.
v5..v6:
 - Fix doxygen error.
v4..v5:                                                                                             
 - Convert link type to connector.
 - Fix build error on Windows.
 - Handle cosmetic review comments.
v3..v4:
 - Convert #define into enum.
 - Enhance comments for each port link type.
 - Fix test failures.
v2..v3
 - Extend link type list as per suggestion.

 app/test/test_ethdev_link.c            | 18 +++++----
 doc/guides/rel_notes/release_25_11.rst |  9 +++++
 lib/ethdev/rte_ethdev.c                | 45 ++++++++++++++++++++-
 lib/ethdev/rte_ethdev.h                | 54 ++++++++++++++++++++++++++
 4 files changed, 117 insertions(+), 9 deletions(-)

diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index f063a5fe26..0e543228b0 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -17,23 +17,25 @@ test_link_status_up_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
 	printf("Default link up #1: %s\n", text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg Other",
 		text, strlen(text), "Invalid default link status string");
 
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
 	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_SGMII;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -41,7 +43,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -49,7 +51,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -57,6 +59,7 @@ test_link_status_up_default(void)
 	link_status.link_speed = RTE_ETH_SPEED_NUM_400G;
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_GAUI;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -92,7 +95,8 @@ test_link_status_invalid(void)
 		.link_speed = 55555,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -100,7 +104,7 @@ test_link_status_invalid(void)
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
 		"Failed to format invalid string\n");
 	printf("invalid link up #1: len=%d %s\n", ret, text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg Other",
 		text, strlen(text), "Incorrect invalid link status string");
 
 	return TEST_SUCCESS;
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index e81ba33907..4e8b4d18b1 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -59,6 +59,12 @@ New Features
 
   * Enabled software taildrop for ordered queues.
 
+* **Added ethdev API in library.**
+
+  * Added API to report type of link connection for a port.
+    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
+    unless the driver specifies it.
+
 Removed Items
 -------------
 
@@ -103,6 +109,9 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* ethdev: Added ``link_connector`` field to ``rte_eth_link`` structure
+  to report the type of link connection of a port.
+
 
 Known Issues
 ------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4148c33807..1c4a758a03 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3324,18 +3324,59 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
 	else
-		ret = snprintf(str, len, "Link up at %s %s %s",
+		ret = snprintf(str, len, "Link up at %s %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
-			"Autoneg" : "Fixed");
+			"Autoneg" : "Fixed",
+			rte_eth_link_connector_to_str(eth_link->link_connector));
 
 	rte_eth_trace_link_to_str(len, eth_link, str, ret);
 
 	return ret;
 }
 
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_connector_to_str, 25.11)
+const char *
+rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector)
+{
+	static const char * const link_connector_str[] = {
+		[RTE_ETH_LINK_CONNECTOR_NONE] = "None",
+		[RTE_ETH_LINK_CONNECTOR_TP] = "Twisted Pair",
+		[RTE_ETH_LINK_CONNECTOR_AUI] = "Attachment Unit Interface",
+		[RTE_ETH_LINK_CONNECTOR_MII] = "Media Independent Interface",
+		[RTE_ETH_LINK_CONNECTOR_FIBER] = "Fiber",
+		[RTE_ETH_LINK_CONNECTOR_BNC] = "BNC",
+		[RTE_ETH_LINK_CONNECTOR_DAC] = "Direct Attach Copper",
+		[RTE_ETH_LINK_CONNECTOR_SGMII] = "SGMII",
+		[RTE_ETH_LINK_CONNECTOR_QSGMII] = "QSGMII",
+		[RTE_ETH_LINK_CONNECTOR_XFI] = "XFI",
+		[RTE_ETH_LINK_CONNECTOR_SFI] = "SFI",
+		[RTE_ETH_LINK_CONNECTOR_XLAUI] = "XLAUI",
+		[RTE_ETH_LINK_CONNECTOR_GAUI] = "GAUI",
+		[RTE_ETH_LINK_CONNECTOR_XAUI] = "XAUI",
+		[RTE_ETH_LINK_CONNECTOR_CAUI] = "CAUI",
+		[RTE_ETH_LINK_CONNECTOR_LAUI] = "LAUI",
+		[RTE_ETH_LINK_CONNECTOR_SFP] = "SFP",
+		[RTE_ETH_LINK_CONNECTOR_SFP_DD] = "SFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_SFP_PLUS] = "SFP+",
+		[RTE_ETH_LINK_CONNECTOR_SFP28] = "SFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP] = "QSFP",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_PLUS] = "QSFP+",
+		[RTE_ETH_LINK_CONNECTOR_QSFP28] = "QSFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP56] = "QSFP56",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_DD] = "QSFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_OTHER] = "Other",
+	};
+	const char *str = NULL;
+
+	if (link_connector < ((enum rte_eth_link_connector)RTE_DIM(link_connector_str)))
+		str = link_connector_str[link_connector];
+
+	return str;
+}
+
 RTE_EXPORT_SYMBOL(rte_eth_stats_get)
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 5d7fc5ee9d..0c7366b53e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -329,6 +329,45 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**
+ * @enum rte_eth_link_connector
+ * @brief Ethernet port link connector type
+ *
+ * This enum defines the possible types of Ethernet port link connectors.
+ */
+enum rte_eth_link_connector {
+	RTE_ETH_LINK_CONNECTOR_NONE = 0,     /**< Not defined */
+	RTE_ETH_LINK_CONNECTOR_TP,           /**< Twisted Pair */
+	RTE_ETH_LINK_CONNECTOR_AUI,          /**< Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_MII,          /**< Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_FIBER,        /**< Optical Fiber Link */
+	RTE_ETH_LINK_CONNECTOR_BNC,          /**< BNC Link type for RF connection */
+	RTE_ETH_LINK_CONNECTOR_DAC,          /**< Direct Attach copper */
+	RTE_ETH_LINK_CONNECTOR_SGMII,        /**< Serial Gigabit Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_QSGMII,       /**< Link to multiplex 4 SGMII over one serial link */
+	RTE_ETH_LINK_CONNECTOR_XFI,          /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFI,          /**< 10 Gigabit Serial Interface for optical network */
+	RTE_ETH_LINK_CONNECTOR_XLAUI,        /**< 40 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_GAUI,         /**< Gigabit Interface for 50/100/200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_XAUI,         /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_CAUI,         /**< 100 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_LAUI,         /**< 50 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFP,          /**< Pluggable module for 1 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_PLUS,     /**< Pluggable module for 10 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP28,        /**< Pluggable module for 25 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_DD,       /**< Pluggable module for 100 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_QSFP,         /**< Module to multiplex 4 SFP i.e. 4*1=4 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_PLUS,    /**< Module to multiplex 4 SFP_PLUS i.e. 4*10=40 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP28,       /**< Module to multiplex 4 SFP28 i.e. 4*25=100 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP56,       /**< Module to multiplex 4 SFP56 i.e. 4*50=200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_DD,      /**< Module to multiplex 4 SFP_DD i.e. 4*100=400 Gbps */
+	RTE_ETH_LINK_CONNECTOR_OTHER = 31,   /**< non-physical interfaces like virtio, ring etc.
+					       * It also includes unknown connector types,
+					       * i.e. physical connectors not yet defined in this
+					       * list of connector types.
+					       */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -341,6 +380,7 @@ struct rte_eth_link {
 			uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 			uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 			uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+			uint16_t link_connector : 5;  /**< RTE_ETH_LINK_CONNECTOR_XXX */
 		};
 	};
 };
@@ -3116,6 +3156,20 @@ int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link)
 __rte_experimental
 const char *rte_eth_link_speed_to_str(uint32_t link_speed);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * This function converts an Ethernet link type to a string.
+ *
+ * @param link_connector
+ *   The link type to convert.
+ * @return
+ *   NULL for invalid link connector values otherwise the string representation of the link type.
+ */
+__rte_experimental
+const char *rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
-- 
2.43.0


^ permalink raw reply	[relevance 3%]

* [PATCH v4 0/5] add semicolon when export any symbol
  2025-08-28  2:59  1% Chengwen Feng
  2025-08-29  2:34  1% ` [PATCH v2 0/3] " Chengwen Feng
  2025-09-01  1:21  1% ` [PATCH v3 0/5] add semicolon when export any symbol Chengwen Feng
@ 2025-09-01 10:46  1% ` Chengwen Feng
  2025-09-01 10:46  9%   ` [PATCH v4 5/5] doc: update ABI versioning guide Chengwen Feng
  2025-09-03  2:05  1% ` [PATCH v5 0/5] add semicolon when export any symbol Chengwen Feng
  3 siblings, 1 reply; 77+ results
From: Chengwen Feng @ 2025-09-01 10:46 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL macros are placed just before the API
definitions, but don't end with a semicolon. As a result, some IDEs
cannot identify the APIs and cannot quickly jump to the definition.

A semicolon is added after each of the above RTE_EXPORT_XXX_SYMBOL
macros in this series.
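
For example, an exported definition then reads as follows (this mirrors
the rte_acl_create example from the ABI versioning guide updated in
patch 5/5; function body elided):

    /* before */
    RTE_EXPORT_SYMBOL(rte_acl_create)
    int
    rte_acl_create(struct rte_acl_param *param)
    {
            ...
    }

    /* after: the macro invocation now ends like a statement */
    RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
            ...
    }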

Chengwen Feng (5):
  lib: add semicolon when export symbol
  lib: add semicolon when export experimental symbol
  lib: add semicolon when export internal symbol
  drivers: add semicolon when export any symbol
  doc: update ABI versioning guide

---
v4:
1. fix CI error in mlx5-glue.c.
v3:
1. split the lib commit to three commits.
2. rebase (try to fix CI error: apply rte_cfgfile.c failed).
v2:
1. drop the gen-version-map.py change to make sure it will not cause
   compile errors with on-going code.
2. fix CI errors: duplicated semicolon for rte_node_mbuf_dynfield_register,
   and mlx5-glue.c error (by keeping it unchanged)
3. split to three commit.

 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 397 files changed, 4171 insertions(+), 4171 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [PATCH v4 5/5] doc: update ABI versioning guide
  2025-09-01 10:46  1% ` [PATCH v4 0/5] add semicolon when export any symbol Chengwen Feng
@ 2025-09-01 10:46  9%   ` Chengwen Feng
  0 siblings, 0 replies; 77+ results
From: Chengwen Feng @ 2025-09-01 10:46 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Add a semicolon after RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL in the guide.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 doc/guides/contributing/abi_versioning.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
-- 
2.17.1


^ permalink raw reply	[relevance 9%]

* [PATCH v5 0/5] add semicolon when export any symbol
  2025-08-28  2:59  1% Chengwen Feng
                   ` (2 preceding siblings ...)
  2025-09-01 10:46  1% ` [PATCH v4 0/5] add semicolon when export any symbol Chengwen Feng
@ 2025-09-03  2:05  1% ` Chengwen Feng
  2025-09-03  2:05  9%   ` [PATCH v5 5/5] doc: update ABI versioning guide Chengwen Feng
  2025-09-03  7:04  0%   ` [PATCH v5 0/5] add semicolon when export any symbol David Marchand
  3 siblings, 2 replies; 77+ results
From: Chengwen Feng @ 2025-09-03  2:05 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL macros are placed just before the API
definitions, but don't end with a semicolon. As a result, some IDEs
cannot identify the APIs and cannot quickly jump to the definition.

A semicolon is added after each of the above RTE_EXPORT_XXX_SYMBOL
macros in this series.

The RTE_EXPORT_XXX_SYMBOL macros are also redefined so that the added
semicolon stays legal everywhere:
#define RTE_EXPORT_XXX_SYMBOL(x, x) extern int dummy_rte_export_symbol
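
A rough sketch of that idea (the macro bodies shown below are an
assumption for illustration; the real definitions live elsewhere in
the series):

    /*
     * Where the export macros otherwise expand to nothing, the added
     * trailing ';' is an empty declaration and clang warns with
     * -Wextra-semi.  Expanding to a harmless extern declaration keeps
     * the semicolon legal:
     */
    #define RTE_EXPORT_SYMBOL(name) \
            extern int dummy_rte_export_symbol
    #define RTE_EXPORT_EXPERIMENTAL_SYMBOL(name, version) \
            extern int dummy_rte_export_symbol

    RTE_EXPORT_SYMBOL(rte_acl_create);  /* a benign redundant declaration */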

Chengwen Feng (5):
  lib: add semicolon when export symbol
  lib: add semicolon when export experimental symbol
  lib: add semicolon when export internal symbol
  drivers: add semicolon when export any symbol
  doc: update ABI versioning guide

---
v5:
1. fix CI error of mlx5 driver (-Werror -Wextra-semi of clang) by
   redefining RTE_EXPORT_XXX_SYMBOL as extern int dummy_rte_export_symbol
v4:
1. fix CI error in mlx5-glue.c.
v3:
1. split the lib commit to three commits.
2. rebase (try to fix CI error: apply rte_cfgfile.c failed).
v2:
1. drop the gen-version-map.py change to make sure it will not cause
   compile errors with on-going code.
2. fix CI errors: duplicated semicolon for rte_node_mbuf_dynfield_register,
   and mlx5-glue.c error (by keeping it unchanged)
3. split to three commit.

 doc/guides/contributing/abi_versioning.rst    |   10 +-
 drivers/baseband/acc/rte_acc100_pmd.c         |    2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |    2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |    2 +-
 drivers/bus/auxiliary/auxiliary_common.c      |    4 +-
 drivers/bus/cdx/cdx.c                         |    8 +-
 drivers/bus/cdx/cdx_vfio.c                    |    8 +-
 drivers/bus/dpaa/dpaa_bus.c                   |   18 +-
 drivers/bus/dpaa/dpaa_bus_base_symbols.c      |  186 +--
 drivers/bus/fslmc/fslmc_bus.c                 |    8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |   24 +-
 drivers/bus/fslmc/mc/dpbp.c                   |   12 +-
 drivers/bus/fslmc/mc/dpci.c                   |    6 +-
 drivers/bus/fslmc/mc/dpcon.c                  |   12 +-
 drivers/bus/fslmc/mc/dpdmai.c                 |   16 +-
 drivers/bus/fslmc/mc/dpio.c                   |   26 +-
 drivers/bus/fslmc/mc/dpmng.c                  |    4 +-
 drivers/bus/fslmc/mc/mc_sys.c                 |    2 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpbp.c      |    6 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |    4 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |   22 +-
 drivers/bus/fslmc/qbman/qbman_debug.c         |    4 +-
 drivers/bus/fslmc/qbman/qbman_portal.c        |   82 +-
 drivers/bus/ifpga/ifpga_bus.c                 |    6 +-
 drivers/bus/pci/bsd/pci.c                     |   20 +-
 drivers/bus/pci/linux/pci.c                   |   20 +-
 drivers/bus/pci/pci_common.c                  |   20 +-
 drivers/bus/pci/windows/pci.c                 |   20 +-
 drivers/bus/platform/platform.c               |    4 +-
 drivers/bus/uacce/uacce.c                     |   18 +-
 drivers/bus/vdev/vdev.c                       |   12 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   12 +-
 drivers/bus/vmbus/vmbus_channel.c             |   26 +-
 drivers/bus/vmbus/vmbus_common.c              |    6 +-
 drivers/common/cnxk/cnxk_security.c           |   24 +-
 drivers/common/cnxk/cnxk_utils.c              |    2 +-
 drivers/common/cnxk/roc_platform.c            |   36 +-
 .../common/cnxk/roc_platform_base_symbols.c   | 1084 ++++++++---------
 drivers/common/cpt/cpt_fpm_tables.c           |    4 +-
 drivers/common/cpt/cpt_pmd_ops_helper.c       |    6 +-
 drivers/common/dpaax/caamflib.c               |    2 +-
 drivers/common/dpaax/dpaa_of.c                |   24 +-
 drivers/common/dpaax/dpaax_iova_table.c       |   12 +-
 drivers/common/ionic/ionic_common_uio.c       |    8 +-
 .../common/mlx5/linux/mlx5_common_auxiliary.c |    2 +-
 drivers/common/mlx5/linux/mlx5_common_os.c    |   20 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |    6 +-
 drivers/common/mlx5/linux/mlx5_glue.c         |    2 +-
 drivers/common/mlx5/linux/mlx5_nl.c           |   42 +-
 drivers/common/mlx5/mlx5_common.c             |   18 +-
 drivers/common/mlx5/mlx5_common_devx.c        |   18 +-
 drivers/common/mlx5/mlx5_common_mp.c          |   16 +-
 drivers/common/mlx5/mlx5_common_mr.c          |   22 +-
 drivers/common/mlx5/mlx5_common_pci.c         |    4 +-
 drivers/common/mlx5/mlx5_common_utils.c       |   22 +-
 drivers/common/mlx5/mlx5_devx_cmds.c          |  102 +-
 drivers/common/mlx5/mlx5_malloc.c             |    8 +-
 drivers/common/mlx5/windows/mlx5_common_os.c  |   12 +-
 drivers/common/mlx5/windows/mlx5_glue.c       |    2 +-
 drivers/common/mvep/mvep_common.c             |    4 +-
 drivers/common/nfp/nfp_common.c               |   14 +-
 drivers/common/nfp/nfp_common_pci.c           |    2 +-
 drivers/common/nfp/nfp_dev.c                  |    2 +-
 drivers/common/nitrox/nitrox_device.c         |    2 +-
 drivers/common/nitrox/nitrox_logs.c           |    2 +-
 drivers/common/nitrox/nitrox_qp.c             |    4 +-
 drivers/common/octeontx/octeontx_mbox.c       |   12 +-
 drivers/common/sfc_efx/sfc_base_symbols.c     |  542 ++++-----
 drivers/common/sfc_efx/sfc_efx.c              |    4 +-
 drivers/common/sfc_efx/sfc_efx_mcdi.c         |    4 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   14 +-
 drivers/crypto/cnxk/cn20k_cryptodev_ops.c     |   12 +-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |    4 +-
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   14 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |    4 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |    4 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |    4 +-
 .../scheduler/rte_cryptodev_scheduler.c       |   20 +-
 drivers/dma/cnxk/cnxk_dmadev_fp.c             |    8 +-
 drivers/event/cnxk/cnxk_worker.c              |    4 +-
 drivers/event/dlb2/rte_pmd_dlb2.c             |    4 +-
 drivers/mempool/cnxk/cn10k_hwpool_ops.c       |    6 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |    4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   12 +-
 drivers/net/atlantic/rte_pmd_atlantic.c       |   12 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   32 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   24 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   30 +-
 drivers/net/cnxk/cnxk_ethdev.c                |    6 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c            |   18 +-
 drivers/net/dpaa/dpaa_ethdev.c                |    6 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |    2 +-
 drivers/net/dpaa2/base/dpaa2_tlu_hash.c       |    2 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   14 +-
 drivers/net/dpaa2/dpaa2_mux.c                 |    6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |    2 +-
 drivers/net/intel/i40e/rte_pmd_i40e.c         |   78 +-
 drivers/net/intel/iavf/iavf_base_symbols.c    |   14 +-
 drivers/net/intel/iavf/iavf_rxtx.c            |   16 +-
 drivers/net/intel/ice/ice_diagnose.c          |    6 +-
 drivers/net/intel/idpf/idpf_common_device.c   |   20 +-
 drivers/net/intel/idpf/idpf_common_rxtx.c     |   46 +-
 .../net/intel/idpf/idpf_common_rxtx_avx2.c    |    4 +-
 .../net/intel/idpf/idpf_common_rxtx_avx512.c  |   10 +-
 drivers/net/intel/idpf/idpf_common_virtchnl.c |   58 +-
 drivers/net/intel/ipn3ke/ipn3ke_ethdev.c      |    2 +-
 drivers/net/intel/ixgbe/rte_pmd_ixgbe.c       |   74 +-
 drivers/net/mlx5/mlx5.c                       |    2 +-
 drivers/net/mlx5/mlx5_flow.c                  |    8 +-
 drivers/net/mlx5/mlx5_rx.c                    |    4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |    4 +-
 drivers/net/mlx5/mlx5_tx.c                    |    2 +-
 drivers/net/mlx5/mlx5_txq.c                   |    6 +-
 drivers/net/octeontx/octeontx_ethdev.c        |    2 +-
 drivers/net/ring/rte_eth_ring.c               |    4 +-
 drivers/net/softnic/rte_eth_softnic.c         |    2 +-
 drivers/net/softnic/rte_eth_softnic_thread.c  |    2 +-
 drivers/net/vhost/rte_eth_vhost.c             |    4 +-
 drivers/power/kvm_vm/guest_channel.c          |    4 +-
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c         |   20 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   22 +-
 lib/acl/acl_bld.c                             |    2 +-
 lib/acl/acl_run_scalar.c                      |    2 +-
 lib/acl/rte_acl.c                             |   22 +-
 lib/argparse/rte_argparse.c                   |    4 +-
 lib/bbdev/bbdev_trace_points.c                |    4 +-
 lib/bbdev/rte_bbdev.c                         |   62 +-
 lib/bitratestats/rte_bitrate.c                |    8 +-
 lib/bpf/bpf.c                                 |    4 +-
 lib/bpf/bpf_convert.c                         |    2 +-
 lib/bpf/bpf_dump.c                            |    2 +-
 lib/bpf/bpf_exec.c                            |    4 +-
 lib/bpf/bpf_load.c                            |    2 +-
 lib/bpf/bpf_load_elf.c                        |    2 +-
 lib/bpf/bpf_pkt.c                             |    8 +-
 lib/bpf/bpf_stub.c                            |    4 +-
 lib/cfgfile/rte_cfgfile.c                     |   34 +-
 lib/cmdline/cmdline.c                         |   18 +-
 lib/cmdline/cmdline_cirbuf.c                  |   38 +-
 lib/cmdline/cmdline_parse.c                   |    8 +-
 lib/cmdline/cmdline_parse_bool.c              |    2 +-
 lib/cmdline/cmdline_parse_etheraddr.c         |    6 +-
 lib/cmdline/cmdline_parse_ipaddr.c            |    6 +-
 lib/cmdline/cmdline_parse_num.c               |    6 +-
 lib/cmdline/cmdline_parse_portlist.c          |    6 +-
 lib/cmdline/cmdline_parse_string.c            |   10 +-
 lib/cmdline/cmdline_rdline.c                  |   30 +-
 lib/cmdline/cmdline_socket.c                  |    6 +-
 lib/cmdline/cmdline_vt100.c                   |    4 +-
 lib/compressdev/rte_comp.c                    |   12 +-
 lib/compressdev/rte_compressdev.c             |   50 +-
 lib/compressdev/rte_compressdev_pmd.c         |    6 +-
 lib/cryptodev/cryptodev_pmd.c                 |   14 +-
 lib/cryptodev/cryptodev_trace_points.c        |    6 +-
 lib/cryptodev/rte_cryptodev.c                 |  166 +--
 lib/dispatcher/rte_dispatcher.c               |   26 +-
 lib/distributor/rte_distributor.c             |   18 +-
 lib/dmadev/rte_dmadev.c                       |   38 +-
 lib/dmadev/rte_dmadev_trace_points.c          |   14 +-
 lib/eal/arm/rte_cpuflags.c                    |    6 +-
 lib/eal/arm/rte_hypervisor.c                  |    2 +-
 lib/eal/arm/rte_power_intrinsics.c            |    8 +-
 lib/eal/common/eal_common_bus.c               |   20 +-
 lib/eal/common/eal_common_class.c             |    8 +-
 lib/eal/common/eal_common_config.c            |   14 +-
 lib/eal/common/eal_common_cpuflags.c          |    2 +-
 lib/eal/common/eal_common_debug.c             |    4 +-
 lib/eal/common/eal_common_dev.c               |   38 +-
 lib/eal/common/eal_common_devargs.c           |   18 +-
 lib/eal/common/eal_common_errno.c             |    4 +-
 lib/eal/common/eal_common_fbarray.c           |   52 +-
 lib/eal/common/eal_common_hexdump.c           |    4 +-
 lib/eal/common/eal_common_hypervisor.c        |    2 +-
 lib/eal/common/eal_common_interrupts.c        |   54 +-
 lib/eal/common/eal_common_launch.c            |   10 +-
 lib/eal/common/eal_common_lcore.c             |   34 +-
 lib/eal/common/eal_common_lcore_var.c         |    2 +-
 lib/eal/common/eal_common_mcfg.c              |   40 +-
 lib/eal/common/eal_common_memory.c            |   60 +-
 lib/eal/common/eal_common_memzone.c           |   18 +-
 lib/eal/common/eal_common_options.c           |    8 +-
 lib/eal/common/eal_common_proc.c              |   16 +-
 lib/eal/common/eal_common_string_fns.c        |    8 +-
 lib/eal/common/eal_common_tailqs.c            |    6 +-
 lib/eal/common/eal_common_thread.c            |   28 +-
 lib/eal/common/eal_common_timer.c             |    8 +-
 lib/eal/common/eal_common_trace.c             |   30 +-
 lib/eal/common/eal_common_trace_ctf.c         |    2 +-
 lib/eal/common/eal_common_trace_points.c      |   36 +-
 lib/eal/common/eal_common_trace_utils.c       |    2 +-
 lib/eal/common/eal_common_uuid.c              |    8 +-
 lib/eal/common/eal_export.h                   |    6 +-
 lib/eal/common/rte_bitset.c                   |    2 +-
 lib/eal/common/rte_keepalive.c                |   12 +-
 lib/eal/common/rte_malloc.c                   |   46 +-
 lib/eal/common/rte_random.c                   |    8 +-
 lib/eal/common/rte_reciprocal.c               |    4 +-
 lib/eal/common/rte_service.c                  |   62 +-
 lib/eal/common/rte_version.c                  |   14 +-
 lib/eal/freebsd/eal.c                         |   44 +-
 lib/eal/freebsd/eal_alarm.c                   |    4 +-
 lib/eal/freebsd/eal_dev.c                     |    8 +-
 lib/eal/freebsd/eal_interrupts.c              |   38 +-
 lib/eal/freebsd/eal_memory.c                  |    6 +-
 lib/eal/freebsd/eal_thread.c                  |    4 +-
 lib/eal/freebsd/eal_timer.c                   |    2 +-
 lib/eal/linux/eal.c                           |   14 +-
 lib/eal/linux/eal_alarm.c                     |    4 +-
 lib/eal/linux/eal_dev.c                       |    8 +-
 lib/eal/linux/eal_interrupts.c                |   38 +-
 lib/eal/linux/eal_memory.c                    |    6 +-
 lib/eal/linux/eal_thread.c                    |    4 +-
 lib/eal/linux/eal_timer.c                     |    8 +-
 lib/eal/linux/eal_vfio.c                      |   32 +-
 lib/eal/loongarch/rte_cpuflags.c              |    6 +-
 lib/eal/loongarch/rte_hypervisor.c            |    2 +-
 lib/eal/loongarch/rte_power_intrinsics.c      |    8 +-
 lib/eal/ppc/rte_cpuflags.c                    |    6 +-
 lib/eal/ppc/rte_hypervisor.c                  |    2 +-
 lib/eal/ppc/rte_power_intrinsics.c            |    8 +-
 lib/eal/riscv/rte_cpuflags.c                  |    6 +-
 lib/eal/riscv/rte_hypervisor.c                |    2 +-
 lib/eal/riscv/rte_power_intrinsics.c          |    8 +-
 lib/eal/unix/eal_debug.c                      |    4 +-
 lib/eal/unix/eal_filesystem.c                 |    2 +-
 lib/eal/unix/eal_firmware.c                   |    2 +-
 lib/eal/unix/eal_unix_memory.c                |    8 +-
 lib/eal/unix/eal_unix_timer.c                 |    2 +-
 lib/eal/unix/rte_thread.c                     |   26 +-
 lib/eal/windows/eal.c                         |   22 +-
 lib/eal/windows/eal_alarm.c                   |    4 +-
 lib/eal/windows/eal_debug.c                   |    2 +-
 lib/eal/windows/eal_dev.c                     |    8 +-
 lib/eal/windows/eal_interrupts.c              |   38 +-
 lib/eal/windows/eal_memory.c                  |   14 +-
 lib/eal/windows/eal_mp.c                      |   12 +-
 lib/eal/windows/eal_thread.c                  |    2 +-
 lib/eal/windows/eal_timer.c                   |    2 +-
 lib/eal/windows/rte_thread.c                  |   28 +-
 lib/eal/x86/rte_cpuflags.c                    |    6 +-
 lib/eal/x86/rte_hypervisor.c                  |    2 +-
 lib/eal/x86/rte_power_intrinsics.c            |    8 +-
 lib/eal/x86/rte_spinlock.c                    |    2 +-
 lib/efd/rte_efd.c                             |   14 +-
 lib/ethdev/ethdev_driver.c                    |   48 +-
 lib/ethdev/ethdev_linux_ethtool.c             |    6 +-
 lib/ethdev/ethdev_private.c                   |    4 +-
 lib/ethdev/ethdev_trace_points.c              |   12 +-
 lib/ethdev/rte_ethdev.c                       |  336 ++---
 lib/ethdev/rte_ethdev_cman.c                  |    8 +-
 lib/ethdev/rte_flow.c                         |  128 +-
 lib/ethdev/rte_mtr.c                          |   42 +-
 lib/ethdev/rte_tm.c                           |   62 +-
 lib/eventdev/eventdev_private.c               |    4 +-
 lib/eventdev/eventdev_trace_points.c          |   22 +-
 lib/eventdev/rte_event_crypto_adapter.c       |   30 +-
 lib/eventdev/rte_event_dma_adapter.c          |   30 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   46 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   34 +-
 lib/eventdev/rte_event_ring.c                 |    8 +-
 lib/eventdev/rte_event_timer_adapter.c        |   22 +-
 lib/eventdev/rte_event_vector_adapter.c       |   20 +-
 lib/eventdev/rte_eventdev.c                   |   94 +-
 lib/fib/rte_fib.c                             |   20 +-
 lib/fib/rte_fib6.c                            |   18 +-
 lib/gpudev/gpudev.c                           |   64 +-
 lib/graph/graph.c                             |   32 +-
 lib/graph/graph_debug.c                       |    2 +-
 lib/graph/graph_feature_arc.c                 |   34 +-
 lib/graph/graph_stats.c                       |    8 +-
 lib/graph/node.c                              |   24 +-
 lib/graph/rte_graph_model_mcore_dispatch.c    |    6 +-
 lib/graph/rte_graph_worker.c                  |    6 +-
 lib/gro/rte_gro.c                             |   12 +-
 lib/gso/rte_gso.c                             |    2 +-
 lib/hash/rte_cuckoo_hash.c                    |   54 +-
 lib/hash/rte_fbk_hash.c                       |    6 +-
 lib/hash/rte_hash_crc.c                       |    4 +-
 lib/hash/rte_thash.c                          |   24 +-
 lib/hash/rte_thash_gf2_poly_math.c            |    2 +-
 lib/hash/rte_thash_gfni.c                     |    4 +-
 lib/ip_frag/rte_ip_frag_common.c              |   10 +-
 lib/ip_frag/rte_ipv4_fragmentation.c          |    4 +-
 lib/ip_frag/rte_ipv4_reassembly.c             |    2 +-
 lib/ip_frag/rte_ipv6_fragmentation.c          |    2 +-
 lib/ip_frag/rte_ipv6_reassembly.c             |    2 +-
 lib/ipsec/ipsec_sad.c                         |   12 +-
 lib/ipsec/ipsec_telemetry.c                   |    4 +-
 lib/ipsec/sa.c                                |    8 +-
 lib/ipsec/ses.c                               |    2 +-
 lib/jobstats/rte_jobstats.c                   |   28 +-
 lib/kvargs/rte_kvargs.c                       |   16 +-
 lib/latencystats/rte_latencystats.c           |   10 +-
 lib/log/log.c                                 |   44 +-
 lib/log/log_color.c                           |    2 +-
 lib/log/log_syslog.c                          |    2 +-
 lib/log/log_timestamp.c                       |    2 +-
 lib/lpm/rte_lpm.c                             |   16 +-
 lib/lpm/rte_lpm6.c                            |   20 +-
 lib/mbuf/rte_mbuf.c                           |   34 +-
 lib/mbuf/rte_mbuf_dyn.c                       |   18 +-
 lib/mbuf/rte_mbuf_pool_ops.c                  |   10 +-
 lib/mbuf/rte_mbuf_ptype.c                     |   16 +-
 lib/member/rte_member.c                       |   26 +-
 lib/mempool/mempool_trace_points.c            |   20 +-
 lib/mempool/rte_mempool.c                     |   54 +-
 lib/mempool/rte_mempool_ops.c                 |    8 +-
 lib/mempool/rte_mempool_ops_default.c         |    8 +-
 lib/meter/rte_meter.c                         |   12 +-
 lib/metrics/rte_metrics.c                     |   16 +-
 lib/metrics/rte_metrics_telemetry.c           |   22 +-
 lib/mldev/mldev_utils.c                       |    4 +-
 lib/mldev/mldev_utils_neon.c                  |   36 +-
 lib/mldev/mldev_utils_neon_bfloat16.c         |    4 +-
 lib/mldev/mldev_utils_scalar.c                |   36 +-
 lib/mldev/mldev_utils_scalar_bfloat16.c       |    4 +-
 lib/mldev/rte_mldev.c                         |   74 +-
 lib/mldev/rte_mldev_pmd.c                     |    4 +-
 lib/net/rte_arp.c                             |    2 +-
 lib/net/rte_ether.c                           |    6 +-
 lib/net/rte_net.c                             |    4 +-
 lib/net/rte_net_crc.c                         |    6 +-
 lib/node/ethdev_ctrl.c                        |    4 +-
 lib/node/ip4_lookup.c                         |    2 +-
 lib/node/ip4_lookup_fib.c                     |    4 +-
 lib/node/ip4_reassembly.c                     |    2 +-
 lib/node/ip4_rewrite.c                        |    2 +-
 lib/node/ip6_lookup.c                         |    2 +-
 lib/node/ip6_lookup_fib.c                     |    4 +-
 lib/node/ip6_rewrite.c                        |    2 +-
 lib/node/udp4_input.c                         |    4 +-
 lib/pcapng/rte_pcapng.c                       |   14 +-
 lib/pci/rte_pci.c                             |    6 +-
 lib/pdcp/rte_pdcp.c                           |   10 +-
 lib/pdump/rte_pdump.c                         |   18 +-
 lib/pipeline/rte_pipeline.c                   |   46 +-
 lib/pipeline/rte_port_in_action.c             |   16 +-
 lib/pipeline/rte_swx_ctl.c                    |   34 +-
 lib/pipeline/rte_swx_ipsec.c                  |   14 +-
 lib/pipeline/rte_swx_pipeline.c               |  146 +--
 lib/pipeline/rte_table_action.c               |   32 +-
 lib/pmu/pmu.c                                 |   10 +-
 lib/port/rte_port_ethdev.c                    |    6 +-
 lib/port/rte_port_eventdev.c                  |    6 +-
 lib/port/rte_port_fd.c                        |    6 +-
 lib/port/rte_port_frag.c                      |    4 +-
 lib/port/rte_port_ras.c                       |    4 +-
 lib/port/rte_port_ring.c                      |   12 +-
 lib/port/rte_port_sched.c                     |    4 +-
 lib/port/rte_port_source_sink.c               |    4 +-
 lib/port/rte_port_sym_crypto.c                |    6 +-
 lib/port/rte_swx_port_ethdev.c                |    4 +-
 lib/port/rte_swx_port_fd.c                    |    4 +-
 lib/port/rte_swx_port_ring.c                  |    4 +-
 lib/port/rte_swx_port_source_sink.c           |    6 +-
 lib/power/power_common.c                      |   16 +-
 lib/power/rte_power_cpufreq.c                 |   36 +-
 lib/power/rte_power_pmd_mgmt.c                |   20 +-
 lib/power/rte_power_qos.c                     |    4 +-
 lib/power/rte_power_uncore.c                  |   28 +-
 lib/rawdev/rte_rawdev.c                       |   60 +-
 lib/rcu/rte_rcu_qsbr.c                        |   22 +-
 lib/regexdev/rte_regexdev.c                   |   52 +-
 lib/reorder/rte_reorder.c                     |   22 +-
 lib/rib/rte_rib.c                             |   28 +-
 lib/rib/rte_rib6.c                            |   28 +-
 lib/ring/rte_ring.c                           |   22 +-
 lib/ring/rte_soring.c                         |    6 +-
 lib/ring/soring.c                             |   32 +-
 lib/sched/rte_approx.c                        |    2 +-
 lib/sched/rte_pie.c                           |    4 +-
 lib/sched/rte_red.c                           |   12 +-
 lib/sched/rte_sched.c                         |   30 +-
 lib/security/rte_security.c                   |   40 +-
 lib/stack/rte_stack.c                         |    6 +-
 lib/table/rte_swx_table_em.c                  |    4 +-
 lib/table/rte_swx_table_learner.c             |   20 +-
 lib/table/rte_swx_table_selector.c            |   12 +-
 lib/table/rte_swx_table_wm.c                  |    2 +-
 lib/table/rte_table_acl.c                     |    2 +-
 lib/table/rte_table_array.c                   |    2 +-
 lib/table/rte_table_hash_cuckoo.c             |    2 +-
 lib/table/rte_table_hash_ext.c                |    2 +-
 lib/table/rte_table_hash_key16.c              |    4 +-
 lib/table/rte_table_hash_key32.c              |    4 +-
 lib/table/rte_table_hash_key8.c               |    4 +-
 lib/table/rte_table_hash_lru.c                |    2 +-
 lib/table/rte_table_lpm.c                     |    2 +-
 lib/table/rte_table_lpm_ipv6.c                |    2 +-
 lib/table/rte_table_stub.c                    |    2 +-
 lib/telemetry/telemetry.c                     |    6 +-
 lib/telemetry/telemetry_data.c                |   34 +-
 lib/telemetry/telemetry_legacy.c              |    2 +-
 lib/timer/rte_timer.c                         |   36 +-
 lib/vhost/socket.c                            |   32 +-
 lib/vhost/vdpa.c                              |   22 +-
 lib/vhost/vhost.c                             |   82 +-
 lib/vhost/vhost_crypto.c                      |   12 +-
 lib/vhost/vhost_user.c                        |    4 +-
 lib/vhost/virtio_net.c                        |   14 +-
 400 files changed, 4176 insertions(+), 4176 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [PATCH v5 5/5] doc: update ABI versioning guide
  2025-09-03  2:05  1% ` [PATCH v5 0/5] add semicolon when export any symbol Chengwen Feng
@ 2025-09-03  2:05  9%   ` Chengwen Feng
  2025-09-03  7:04  0%   ` [PATCH v5 0/5] add semicolon when export any symbol David Marchand
  1 sibling, 0 replies; 77+ results
From: Chengwen Feng @ 2025-09-03  2:05 UTC (permalink / raw)
  To: thomas, david.marchand, stephen; +Cc: dev

Add a semicolon after RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
RTE_EXPORT_EXPERIMENTAL_SYMBOL in the guide.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
 doc/guides/contributing/abi_versioning.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 2fa2b15edc..0c1135becc 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -168,7 +168,7 @@ Assume we have a function as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param)
  {
@@ -187,7 +187,7 @@ private, is safe), but it also requires modifying the code as follows
   * Create an acl context object for apps to
   * manipulate
   */
- RTE_EXPORT_SYMBOL(rte_acl_create)
+ RTE_EXPORT_SYMBOL(rte_acl_create);
  int
  rte_acl_create(struct rte_acl_param *param, int debug)
  {
@@ -213,7 +213,7 @@ the function return type, the function name and its arguments.
 
 .. code-block:: c
 
- -RTE_EXPORT_SYMBOL(rte_acl_create)
+ -RTE_EXPORT_SYMBOL(rte_acl_create);
  -int
  -rte_acl_create(struct rte_acl_param *param)
  +RTE_VERSION_SYMBOL(21, int, rte_acl_create, (struct rte_acl_param *param))
@@ -303,7 +303,7 @@ Assume we have an experimental function ``rte_acl_create`` as follows:
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create)
+   RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_acl_create);
    __rte_experimental
    int
    rte_acl_create(struct rte_acl_param *param)
@@ -320,7 +320,7 @@ When we promote the symbol to the stable ABI, we simply strip the
     * Create an acl context object for apps to
     * manipulate
     */
-   RTE_EXPORT_SYMBOL(rte_acl_create)
+   RTE_EXPORT_SYMBOL(rte_acl_create);
    int
    rte_acl_create(struct rte_acl_param *param)
    {
-- 
2.17.1


^ permalink raw reply	[relevance 9%]

* Re: [PATCH v5 0/5] add semicolon when export any symbol
  2025-09-03  2:05  1% ` [PATCH v5 0/5] add semicolon when export any symbol Chengwen Feng
  2025-09-03  2:05  9%   ` [PATCH v5 5/5] doc: update ABI versioning guide Chengwen Feng
@ 2025-09-03  7:04  0%   ` David Marchand
  2025-09-04  0:24  0%     ` fengchengwen
  1 sibling, 1 reply; 77+ results
From: David Marchand @ 2025-09-03  7:04 UTC (permalink / raw)
  To: Chengwen Feng; +Cc: thomas, stephen, dev, Bruce Richardson

Hello,

On Wed, 3 Sept 2025 at 04:05, Chengwen Feng <fengchengwen@huawei.com> wrote:
>
> Currently, RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
> RTE_EXPORT_EXPERIMENTAL_SYMBOL are placed at the beginning of API
> definitions but do not end with a semicolon. As a result, some IDEs
> cannot identify the APIs and cannot quickly jump to their definitions.
>
> A semicolon is added at the end of the above RTE_EXPORT_XXX_SYMBOL
> uses in this series.
>
> The RTE_EXPORT_XXX_SYMBOL macros are also redefined along the lines of:
> #define RTE_EXPORT_XXX_SYMBOL(...) extern int dummy_rte_export_symbol
>
> Chengwen Feng (5):
>   lib: add semicolon when export symbol
>   lib: add semicolon when export experimental symbol
>   lib: add semicolon when export internal symbol
>   drivers: add semicolon when export any symbol
>   doc: update ABI versioning guide

I am skeptical about this series.

The current positioning should be seen as additional info next to the
return type, in the definition of the symbol.
Does it mean that this IDE would fail if we added any kind of
macro/attribute involving the symbol name?

Afaics, ctags can be taught to skip those macros and just behave
correctly by adding the following to its config file:
-DRTE_EXPORT_EXPERIMENTAL_SYMBOL(a)=
-DRTE_EXPORT_INTERNAL_SYMBOL(a)=
-DRTE_EXPORT_SYMBOL(a)=

I think another option would be to move the call to the export macros
after the whole definition of the symbol, though I prefer the current
position for readability.
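
For illustration only, that alternative would look roughly like this
(reusing the rte_acl_create example from the versioning guide and
keeping the current no-semicolon convention):

  int
  rte_acl_create(struct rte_acl_param *param)
  {
          ...
  }
  RTE_EXPORT_SYMBOL(rte_acl_create)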


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [RFC 7/8] uapi: import VFIO header
  @ 2025-09-03  7:28  1% ` David Marchand
      2 siblings, 0 replies; 77+ results
From: David Marchand @ 2025-09-03  7:28 UTC (permalink / raw)
  To: dev; +Cc: thomas, maxime.coquelin

Import the VFIO header (from Linux v6.16) so that it can be included
in many parts of DPDK.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 kernel/linux/uapi/linux/vfio.h | 1836 ++++++++++++++++++++++++++++++++
 kernel/linux/uapi/version      |    2 +-
 2 files changed, 1837 insertions(+), 1 deletion(-)
 create mode 100644 kernel/linux/uapi/linux/vfio.h

diff --git a/kernel/linux/uapi/linux/vfio.h b/kernel/linux/uapi/linux/vfio.h
new file mode 100644
index 0000000000..4413783940
--- /dev/null
+++ b/kernel/linux/uapi/linux/vfio.h
@@ -0,0 +1,1836 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * VFIO API definition
+ *
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ *     Author: Alex Williamson <alex.williamson@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef _UAPIVFIO_H
+#define _UAPIVFIO_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define VFIO_API_VERSION	0
+
+
+/* Kernel & User level defines for VFIO IOCTLs. */
+
+/* Extensions */
+
+#define VFIO_TYPE1_IOMMU		1
+#define VFIO_SPAPR_TCE_IOMMU		2
+#define VFIO_TYPE1v2_IOMMU		3
+/*
+ * IOMMU enforces DMA cache coherence (ex. PCIe NoSnoop stripping).  This
+ * capability is subject to change as groups are added or removed.
+ */
+#define VFIO_DMA_CC_IOMMU		4
+
+/* Check if EEH is supported */
+#define VFIO_EEH			5
+
+/* Two-stage IOMMU */
+#define __VFIO_RESERVED_TYPE1_NESTING_IOMMU	6	/* Implies v2 */
+
+#define VFIO_SPAPR_TCE_v2_IOMMU		7
+
+/*
+ * The No-IOMMU IOMMU offers no translation or isolation for devices and
+ * supports no ioctls outside of VFIO_CHECK_EXTENSION.  Use of VFIO's No-IOMMU
+ * code will taint the host kernel and should be used with extreme caution.
+ */
+#define VFIO_NOIOMMU_IOMMU		8
+
+/* Supports VFIO_DMA_UNMAP_FLAG_ALL */
+#define VFIO_UNMAP_ALL			9
+
+/*
+ * Supports the vaddr flag for DMA map and unmap.  Not supported for mediated
+ * devices, so this capability is subject to change as groups are added or
+ * removed.
+ */
+#define VFIO_UPDATE_VADDR		10
+
+/*
+ * The IOCTL interface is designed for extensibility by embedding the
+ * structure length (argsz) and flags into structures passed between
+ * kernel and userspace.  We therefore use the _IO() macro for these
+ * defines to avoid implicitly embedding a size into the ioctl request.
+ * As structure fields are added, argsz will increase to match and flag
+ * bits will be defined to indicate additional fields with valid data.
+ * It's *always* the caller's responsibility to indicate the size of
+ * the structure passed by setting argsz appropriately.
+ */
+
+#define VFIO_TYPE	(';')
+#define VFIO_BASE	100
+
+/*
+ * For extension of INFO ioctls, VFIO makes use of a capability chain
+ * designed after PCI/e capabilities.  A flag bit indicates whether
+ * this capability chain is supported and a field defined in the fixed
+ * structure defines the offset of the first capability in the chain.
+ * This field is only valid when the corresponding bit in the flags
+ * bitmap is set.  This offset field is relative to the start of the
+ * INFO buffer, as is the next field within each capability header.
+ * The id within the header is a shared address space per INFO ioctl,
+ * while the version field is specific to the capability id.  The
+ * contents following the header are specific to the capability id.
+ */
+struct vfio_info_cap_header {
+	__u16	id;		/* Identifies capability */
+	__u16	version;	/* Version specific to the capability ID */
+	__u32	next;		/* Offset of next capability */
+};
+
+/*
+ * Callers of INFO ioctls passing insufficiently sized buffers will see
+ * the capability chain flag bit set, a zero value for the first capability
+ * offset (if available within the provided argsz), and argsz will be
+ * updated to report the necessary buffer size.  For compatibility, the
+ * INFO ioctl will not report error in this case, but the capability chain
+ * will not be available.
+ */
+
+/* -------- IOCTLs for VFIO file descriptor (/dev/vfio/vfio) -------- */
+
+/**
+ * VFIO_GET_API_VERSION - _IO(VFIO_TYPE, VFIO_BASE + 0)
+ *
+ * Report the version of the VFIO API.  This allows us to bump the entire
+ * API version should we later need to add or change features in incompatible
+ * ways.
+ * Return: VFIO_API_VERSION
+ * Availability: Always
+ */
+#define VFIO_GET_API_VERSION		_IO(VFIO_TYPE, VFIO_BASE + 0)
+
+/**
+ * VFIO_CHECK_EXTENSION - _IOW(VFIO_TYPE, VFIO_BASE + 1, __u32)
+ *
+ * Check whether an extension is supported.
+ * Return: 0 if not supported, 1 (or some other positive integer) if supported.
+ * Availability: Always
+ */
+#define VFIO_CHECK_EXTENSION		_IO(VFIO_TYPE, VFIO_BASE + 1)
+
+/**
+ * VFIO_SET_IOMMU - _IOW(VFIO_TYPE, VFIO_BASE + 2, __s32)
+ *
+ * Set the iommu to the given type.  The type must be supported by an
+ * iommu driver as verified by calling CHECK_EXTENSION using the same
+ * type.  A group must be set to this file descriptor before this
+ * ioctl is available.  The IOMMU interfaces enabled by this call are
+ * specific to the value set.
+ * Return: 0 on success, -errno on failure
+ * Availability: When VFIO group attached
+ */
+#define VFIO_SET_IOMMU			_IO(VFIO_TYPE, VFIO_BASE + 2)
+
+/* -------- IOCTLs for GROUP file descriptors (/dev/vfio/$GROUP) -------- */
+
+/**
+ * VFIO_GROUP_GET_STATUS - _IOR(VFIO_TYPE, VFIO_BASE + 3,
+ *						struct vfio_group_status)
+ *
+ * Retrieve information about the group.  Fills in provided
+ * struct vfio_group_info.  Caller sets argsz.
+ * Return: 0 on succes, -errno on failure.
+ * Availability: Always
+ */
+struct vfio_group_status {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_GROUP_FLAGS_VIABLE		(1 << 0)
+#define VFIO_GROUP_FLAGS_CONTAINER_SET	(1 << 1)
+};
+#define VFIO_GROUP_GET_STATUS		_IO(VFIO_TYPE, VFIO_BASE + 3)
+
+/**
+ * VFIO_GROUP_SET_CONTAINER - _IOW(VFIO_TYPE, VFIO_BASE + 4, __s32)
+ *
+ * Set the container for the VFIO group to the open VFIO file
+ * descriptor provided.  Groups may only belong to a single
+ * container.  Containers may, at their discretion, support multiple
+ * groups.  Only when a container is set are all of the interfaces
+ * of the VFIO file descriptor and the VFIO group file descriptor
+ * available to the user.
+ * Return: 0 on success, -errno on failure.
+ * Availability: Always
+ */
+#define VFIO_GROUP_SET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 4)
+
+/**
+ * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
+ *
+ * Remove the group from the attached container.  This is the
+ * opposite of the SET_CONTAINER call and returns the group to
+ * an initial state.  All device file descriptors must be released
+ * prior to calling this interface.  When removing the last group
+ * from a container, the IOMMU will be disabled and all state lost,
+ * effectively also returning the VFIO file descriptor to an initial
+ * state.
+ * Return: 0 on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_UNSET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 5)
+
+/**
+ * VFIO_GROUP_GET_DEVICE_FD - _IOW(VFIO_TYPE, VFIO_BASE + 6, char)
+ *
+ * Return a new file descriptor for the device object described by
+ * the provided string.  The string should match a device listed in
+ * the devices subdirectory of the IOMMU group sysfs entry.  The
+ * group containing the device must already be added to this context.
+ * Return: new file descriptor on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_GET_DEVICE_FD	_IO(VFIO_TYPE, VFIO_BASE + 6)
+
+/* --------------- IOCTLs for DEVICE file descriptors --------------- */
+
+/**
+ * VFIO_DEVICE_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 7,
+ *						struct vfio_device_info)
+ *
+ * Retrieve information about the device.  Fills in provided
+ * struct vfio_device_info.  Caller sets argsz.
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FLAGS_RESET	(1 << 0)	/* Device supports reset */
+#define VFIO_DEVICE_FLAGS_PCI	(1 << 1)	/* vfio-pci device */
+#define VFIO_DEVICE_FLAGS_PLATFORM (1 << 2)	/* vfio-platform device */
+#define VFIO_DEVICE_FLAGS_AMBA  (1 << 3)	/* vfio-amba device */
+#define VFIO_DEVICE_FLAGS_CCW	(1 << 4)	/* vfio-ccw device */
+#define VFIO_DEVICE_FLAGS_AP	(1 << 5)	/* vfio-ap device */
+#define VFIO_DEVICE_FLAGS_FSL_MC (1 << 6)	/* vfio-fsl-mc device */
+#define VFIO_DEVICE_FLAGS_CAPS	(1 << 7)	/* Info supports caps */
+#define VFIO_DEVICE_FLAGS_CDX	(1 << 8)	/* vfio-cdx device */
+	__u32	num_regions;	/* Max region index + 1 */
+	__u32	num_irqs;	/* Max IRQ index + 1 */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+#define VFIO_DEVICE_GET_INFO		_IO(VFIO_TYPE, VFIO_BASE + 7)
+
+/*
+ * Vendor driver using Mediated device framework should provide device_api
+ * attribute in supported type attribute groups. Device API string should be one
+ * of the following corresponding to device flags in vfio_device_info structure.
+ */
+
+#define VFIO_DEVICE_API_PCI_STRING		"vfio-pci"
+#define VFIO_DEVICE_API_PLATFORM_STRING		"vfio-platform"
+#define VFIO_DEVICE_API_AMBA_STRING		"vfio-amba"
+#define VFIO_DEVICE_API_CCW_STRING		"vfio-ccw"
+#define VFIO_DEVICE_API_AP_STRING		"vfio-ap"
+
+/*
+ * The following capabilities are unique to s390 zPCI devices.  Their contents
+ * are further-defined in vfio_zdev.h
+ */
+#define VFIO_DEVICE_INFO_CAP_ZPCI_BASE		1
+#define VFIO_DEVICE_INFO_CAP_ZPCI_GROUP		2
+#define VFIO_DEVICE_INFO_CAP_ZPCI_UTIL		3
+#define VFIO_DEVICE_INFO_CAP_ZPCI_PFIP		4
+
+/*
+ * The following VFIO_DEVICE_INFO capability reports support for PCIe AtomicOp
+ * completion to the root bus with supported widths provided via flags.
+ */
+#define VFIO_DEVICE_INFO_CAP_PCI_ATOMIC_COMP	5
+struct vfio_device_info_cap_pci_atomic_comp {
+	struct vfio_info_cap_header header;
+	__u32 flags;
+#define VFIO_PCI_ATOMIC_COMP32	(1 << 0)
+#define VFIO_PCI_ATOMIC_COMP64	(1 << 1)
+#define VFIO_PCI_ATOMIC_COMP128	(1 << 2)
+	__u32 reserved;
+};
+
+/**
+ * VFIO_DEVICE_GET_REGION_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 8,
+ *				       struct vfio_region_info)
+ *
+ * Retrieve information about a device region.  Caller provides
+ * struct vfio_region_info with index value set.  Caller sets argsz.
+ * Implementation of region mapping is bus driver specific.  This is
+ * intended to describe MMIO, I/O port, as well as bus specific
+ * regions (ex. PCI config space).  Zero sized regions may be used
+ * to describe unimplemented regions (ex. unimplemented PCI BARs).
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_region_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_REGION_INFO_FLAG_READ	(1 << 0) /* Region supports read */
+#define VFIO_REGION_INFO_FLAG_WRITE	(1 << 1) /* Region supports write */
+#define VFIO_REGION_INFO_FLAG_MMAP	(1 << 2) /* Region supports mmap */
+#define VFIO_REGION_INFO_FLAG_CAPS	(1 << 3) /* Info supports caps */
+	__u32	index;		/* Region index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+	__aligned_u64	size;	/* Region size (bytes) */
+	__aligned_u64	offset;	/* Region offset from start of device fd */
+};
+#define VFIO_DEVICE_GET_REGION_INFO	_IO(VFIO_TYPE, VFIO_BASE + 8)
+
+/*
+ * The sparse mmap capability allows finer granularity of specifying areas
+ * within a region with mmap support.  When specified, the user should only
+ * mmap the offset ranges specified by the areas array.  mmaps outside of the
+ * areas specified may fail (such as the range covering a PCI MSI-X table) or
+ * may result in improper device behavior.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_SPARSE_MMAP	1
+
+struct vfio_region_sparse_mmap_area {
+	__aligned_u64	offset;	/* Offset of mmap'able area within region */
+	__aligned_u64	size;	/* Size of mmap'able area */
+};
+
+struct vfio_region_info_cap_sparse_mmap {
+	struct vfio_info_cap_header header;
+	__u32	nr_areas;
+	__u32	reserved;
+	struct vfio_region_sparse_mmap_area areas[];
+};
+
+/*
+ * The device specific type capability allows regions unique to a specific
+ * device or class of devices to be exposed.  This helps solve the problem for
+ * vfio bus drivers of defining which region indexes correspond to which region
+ * on the device, without needing to resort to static indexes, as done by
+ * vfio-pci.  For instance, if we were to go back in time, we might remove
+ * VFIO_PCI_VGA_REGION_INDEX and let vfio-pci simply define that all indexes
+ * greater than or equal to VFIO_PCI_NUM_REGIONS are device specific and we'd
+ * make a "VGA" device specific type to describe the VGA access space.  This
+ * means that non-VGA devices wouldn't need to waste this index, and thus the
+ * address space associated with it due to implementation of device file
+ * descriptor offsets in vfio-pci.
+ *
+ * The current implementation is now part of the user ABI, so we can't use this
+ * for VGA, but there are other upcoming use cases, such as opregions for Intel
+ * IGD devices and framebuffers for vGPU devices.  We missed VGA, but we'll
+ * use this for future additions.
+ *
+ * The structure below defines version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_TYPE	2
+
+struct vfio_region_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;	/* global per bus driver */
+	__u32 subtype;	/* type specific */
+};
+
+/*
+ * List of region types, global per bus driver.
+ * If you introduce a new type, please add it here.
+ */
+
+/* PCI region type containing a PCI vendor part */
+#define VFIO_REGION_TYPE_PCI_VENDOR_TYPE	(1 << 31)
+#define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
+#define VFIO_REGION_TYPE_GFX                    (1)
+#define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION_DEPRECATED   (3)
+
+/* sub-types for VFIO_REGION_TYPE_PCI_* */
+
+/* 8086 vendor PCI sub-types */
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION	(1)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_HOST_CFG	(2)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_LPC_CFG	(3)
+
+/* 10de vendor PCI sub-types */
+/*
+ * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
+
+/* 1014 vendor PCI sub-types */
+/*
+ * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
+ * to do TLB invalidation on a GPU.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+
+/* sub-types for VFIO_REGION_TYPE_GFX */
+#define VFIO_REGION_SUBTYPE_GFX_EDID            (1)
+
+/**
+ * struct vfio_region_gfx_edid - EDID region layout.
+ *
+ * Set display link state and EDID blob.
+ *
+ * The EDID blob has monitor information such as brand, name, serial
+ * number, physical size, supported video modes and more.
+ *
+ * This special region allows userspace (typically qemu) set a virtual
+ * EDID for the virtual monitor, which allows a flexible display
+ * configuration.
+ *
+ * For the edid blob spec look here:
+ *    https://en.wikipedia.org/wiki/Extended_Display_Identification_Data
+ *
+ * On linux systems you can find the EDID blob in sysfs:
+ *    /sys/class/drm/${card}/${connector}/edid
+ *
+ * You can use the edid-decode ulility (comes with xorg-x11-utils) to
+ * decode the EDID blob.
+ *
+ * @edid_offset: location of the edid blob, relative to the
+ *               start of the region (readonly).
+ * @edid_max_size: max size of the edid blob (readonly).
+ * @edid_size: actual edid size (read/write).
+ * @link_state: display link state (read/write).
+ * VFIO_DEVICE_GFX_LINK_STATE_UP: Monitor is turned on.
+ * VFIO_DEVICE_GFX_LINK_STATE_DOWN: Monitor is turned off.
+ * @max_xres: max display width (0 == no limitation, readonly).
+ * @max_yres: max display height (0 == no limitation, readonly).
+ *
+ * EDID update protocol:
+ *   (1) set link-state to down.
+ *   (2) update edid blob and size.
+ *   (3) set link-state to up.
+ */
+struct vfio_region_gfx_edid {
+	__u32 edid_offset;
+	__u32 edid_max_size;
+	__u32 edid_size;
+	__u32 max_xres;
+	__u32 max_yres;
+	__u32 link_state;
+#define VFIO_DEVICE_GFX_LINK_STATE_UP    1
+#define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
+};
+
+/* sub-types for VFIO_REGION_TYPE_CCW */
+#define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
+#define VFIO_REGION_SUBTYPE_CCW_SCHIB		(2)
+#define VFIO_REGION_SUBTYPE_CCW_CRW		(3)
+
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED (1)
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_V1_STOP      (0)
+#define VFIO_DEVICE_STATE_V1_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_V1_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_V1_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_V1_RUNNING | \
+				     VFIO_DEVICE_STATE_V1_SAVING |  \
+				     VFIO_DEVICE_STATE_V1_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_V1_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_V1_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_V1_SAVING | \
+					      VFIO_DEVICE_STATE_V1_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_V1_SAVING | \
+					     VFIO_DEVICE_STATE_V1_RESUMING)
+
+	__u32 reserved;
+	__aligned_u64 pending_bytes;
+	__aligned_u64 data_offset;
+	__aligned_u64 data_size;
+};
+
+/*
+ * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
+ * which allows direct access to non-MSIX registers which happened to be within
+ * the same system page.
+ *
+ * Even though the userspace gets direct access to the MSIX data, the existing
+ * VFIO_DEVICE_SET_IRQS interface must still be used for MSIX configuration.
+ */
+#define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
+
+/*
+ * Capability with compressed real address (aka SSA - small system address)
+ * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing
+ * and by the userspace to associate a NVLink bridge with a GPU.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT	4
+
+struct vfio_region_info_cap_nvlink2_ssatgt {
+	struct vfio_info_cap_header header;
+	__aligned_u64 tgt;
+};
+
+/*
+ * Capability with an NVLink link speed. The value is read by
+ * the NVlink2 bridge driver from the bridge's "ibm,nvlink-speed"
+ * property in the device tree. The value is fixed in the hardware
+ * and failing to provide the correct value results in the link
+ * not working with no indication from the driver why.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD	5
+
+struct vfio_region_info_cap_nvlink2_lnkspd {
+	struct vfio_info_cap_header header;
+	__u32 link_speed;
+	__u32 __pad;
+};
+
+/**
+ * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
+ *				    struct vfio_irq_info)
+ *
+ * Retrieve information about a device IRQ.  Caller provides
+ * struct vfio_irq_info with index value set.  Caller sets argsz.
+ * Implementation of IRQ mapping is bus driver specific.  Indexes
+ * using multiple IRQs are primarily intended to support MSI-like
+ * interrupt blocks.  Zero count irq blocks may be used to describe
+ * unimplemented interrupt types.
+ *
+ * The EVENTFD flag indicates the interrupt index supports eventfd based
+ * signaling.
+ *
+ * The MASKABLE flags indicates the index supports MASK and UNMASK
+ * actions described below.
+ *
+ * AUTOMASKED indicates that after signaling, the interrupt line is
+ * automatically masked by VFIO and the user needs to unmask the line
+ * to receive new interrupts.  This is primarily intended to distinguish
+ * level triggered interrupts.
+ *
+ * The NORESIZE flag indicates that the interrupt lines within the index
+ * are setup as a set and new subindexes cannot be enabled without first
+ * disabling the entire index.  This is used for interrupts like PCI MSI
+ * and MSI-X where the driver may only use a subset of the available
+ * indexes, but VFIO needs to enable a specific number of vectors
+ * upfront.  In the case of MSI-X, where the user can enable MSI-X and
+ * then add and unmask vectors, it's up to userspace to make the decision
+ * whether to allocate the maximum supported number of vectors or tear
+ * down setup and incrementally increase the vectors as each is enabled.
+ * Absence of the NORESIZE flag indicates that vectors can be enabled
+ * and disabled dynamically without impacting other vectors within the
+ * index.
+ */
+struct vfio_irq_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_INFO_EVENTFD		(1 << 0)
+#define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
+#define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
+#define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+	__u32	index;		/* IRQ index */
+	__u32	count;		/* Number of IRQs within this index */
+};
+#define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
+
+/**
+ * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
+ *
+ * Set signaling, masking, and unmasking of interrupts.  Caller provides
+ * struct vfio_irq_set with all fields set.  'start' and 'count' indicate
+ * the range of subindexes being specified.
+ *
+ * The DATA flags specify the type of data provided.  If DATA_NONE, the
+ * operation performs the specified action immediately on the specified
+ * interrupt(s).  For example, to unmask AUTOMASKED interrupt [0,0]:
+ * flags = (DATA_NONE|ACTION_UNMASK), index = 0, start = 0, count = 1.
+ *
+ * DATA_BOOL allows sparse support for the same on arrays of interrupts.
+ * For example, to mask interrupts [0,1] and [0,3] (but not [0,2]):
+ * flags = (DATA_BOOL|ACTION_MASK), index = 0, start = 1, count = 3,
+ * data = {1,0,1}
+ *
+ * DATA_EVENTFD binds the specified ACTION to the provided __s32 eventfd.
+ * A value of -1 can be used to either de-assign interrupts if already
+ * assigned or skip un-assigned interrupts.  For example, to set an eventfd
+ * to be trigger for interrupts [0,0] and [0,2]:
+ * flags = (DATA_EVENTFD|ACTION_TRIGGER), index = 0, start = 0, count = 3,
+ * data = {fd1, -1, fd2}
+ * If index [0,1] is previously set, two count = 1 ioctls calls would be
+ * required to set [0,0] and [0,2] without changing [0,1].
+ *
+ * Once a signaling mechanism is set, DATA_BOOL or DATA_NONE can be used
+ * with ACTION_TRIGGER to perform kernel level interrupt loopback testing
+ * from userspace (ie. simulate hardware triggering).
+ *
+ * Setting of an event triggering mechanism to userspace for ACTION_TRIGGER
+ * enables the interrupt index for the device.  Individual subindex interrupts
+ * can be disabled using the -1 value for DATA_EVENTFD or the index can be
+ * disabled as a whole with: flags = (DATA_NONE|ACTION_TRIGGER), count = 0.
+ *
+ * Note that ACTION_[UN]MASK specify user->kernel signaling (irqfds) while
+ * ACTION_TRIGGER specifies kernel->user signaling.
+ */
+struct vfio_irq_set {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_SET_DATA_NONE		(1 << 0) /* Data not present */
+#define VFIO_IRQ_SET_DATA_BOOL		(1 << 1) /* Data is bool (u8) */
+#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2) /* Data is eventfd (s32) */
+#define VFIO_IRQ_SET_ACTION_MASK	(1 << 3) /* Mask interrupt */
+#define VFIO_IRQ_SET_ACTION_UNMASK	(1 << 4) /* Unmask interrupt */
+#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5) /* Trigger interrupt */
+	__u32	index;
+	__u32	start;
+	__u32	count;
+	__u8	data[];
+};
+#define VFIO_DEVICE_SET_IRQS		_IO(VFIO_TYPE, VFIO_BASE + 10)
+
+#define VFIO_IRQ_SET_DATA_TYPE_MASK	(VFIO_IRQ_SET_DATA_NONE | \
+					 VFIO_IRQ_SET_DATA_BOOL | \
+					 VFIO_IRQ_SET_DATA_EVENTFD)
+#define VFIO_IRQ_SET_ACTION_TYPE_MASK	(VFIO_IRQ_SET_ACTION_MASK | \
+					 VFIO_IRQ_SET_ACTION_UNMASK | \
+					 VFIO_IRQ_SET_ACTION_TRIGGER)
+/**
+ * VFIO_DEVICE_RESET - _IO(VFIO_TYPE, VFIO_BASE + 11)
+ *
+ * Reset a device.
+ */
+#define VFIO_DEVICE_RESET		_IO(VFIO_TYPE, VFIO_BASE + 11)
+
+/*
+ * The VFIO-PCI bus driver makes use of the following fixed region and
+ * IRQ index mapping.  Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_PCI_BAR0_REGION_INDEX,
+	VFIO_PCI_BAR1_REGION_INDEX,
+	VFIO_PCI_BAR2_REGION_INDEX,
+	VFIO_PCI_BAR3_REGION_INDEX,
+	VFIO_PCI_BAR4_REGION_INDEX,
+	VFIO_PCI_BAR5_REGION_INDEX,
+	VFIO_PCI_ROM_REGION_INDEX,
+	VFIO_PCI_CONFIG_REGION_INDEX,
+	/*
+	 * Expose VGA regions defined for PCI base class 03, subclass 00.
+	 * This includes I/O port ranges 0x3b0 to 0x3bb and 0x3c0 to 0x3df
+	 * as well as the MMIO range 0xa0000 to 0xbffff.  Each implemented
+	 * range is found at it's identity mapped offset from the region
+	 * offset, for example 0x3b0 is region_info.offset + 0x3b0.  Areas
+	 * between described ranges are unimplemented.
+	 */
+	VFIO_PCI_VGA_REGION_INDEX,
+	VFIO_PCI_NUM_REGIONS = 9 /* Fixed user ABI, region indexes >=9 use */
+				 /* device specific cap to define content. */
+};
+
+enum {
+	VFIO_PCI_INTX_IRQ_INDEX,
+	VFIO_PCI_MSI_IRQ_INDEX,
+	VFIO_PCI_MSIX_IRQ_INDEX,
+	VFIO_PCI_ERR_IRQ_INDEX,
+	VFIO_PCI_REQ_IRQ_INDEX,
+	VFIO_PCI_NUM_IRQS
+};
+
+/*
+ * The vfio-ccw bus driver makes use of the following fixed region and
+ * IRQ index mapping. Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_CCW_CONFIG_REGION_INDEX,
+	VFIO_CCW_NUM_REGIONS
+};
+
+enum {
+	VFIO_CCW_IO_IRQ_INDEX,
+	VFIO_CCW_CRW_IRQ_INDEX,
+	VFIO_CCW_REQ_IRQ_INDEX,
+	VFIO_CCW_NUM_IRQS
+};
+
+/*
+ * The vfio-ap bus driver makes use of the following IRQ index mapping.
+ * Unimplemented IRQ types return a count of zero.
+ */
+enum {
+	VFIO_AP_REQ_IRQ_INDEX,
+	VFIO_AP_CFG_CHG_IRQ_INDEX,
+	VFIO_AP_NUM_IRQS
+};
+
+/**
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 12,
+ *					      struct vfio_pci_hot_reset_info)
+ *
+ * This command is used to query the affected devices in the hot reset for
+ * a given device.
+ *
+ * This command always reports the segment, bus, and devfn information for
+ * each affected device, and selectively reports the group_id or devid per
+ * the way how the calling device is opened.
+ *
+ *	- If the calling device is opened via the traditional group/container
+ *	  API, group_id is reported.  User should check if it has owned all
+ *	  the affected devices and provides a set of group fds to prove the
+ *	  ownership in VFIO_DEVICE_PCI_HOT_RESET ioctl.
+ *
+ *	- If the calling device is opened as a cdev, devid is reported.
+ *	  Flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set to indicate this
+ *	  data type.  All the affected devices should be represented in
+ *	  the dev_set, ex. bound to a vfio driver, and also be owned by
+ *	  this interface which is determined by the following conditions:
+ *	  1) Has a valid devid within the iommufd_ctx of the calling device.
+ *	     Ownership cannot be determined across separate iommufd_ctx and
+ *	     the cdev calling conventions do not support a proof-of-ownership
+ *	     model as provided in the legacy group interface.  In this case
+ *	     valid devid with value greater than zero is provided in the return
+ *	     structure.
+ *	  2) Does not have a valid devid within the iommufd_ctx of the calling
+ *	     device, but belongs to the same IOMMU group as the calling device
+ *	     or another opened device that has a valid devid within the
+ *	     iommufd_ctx of the calling device.  This provides implicit ownership
+ *	     for devices within the same DMA isolation context.  In this case
+ *	     the devid value of VFIO_PCI_DEVID_OWNED is provided in the return
+ *	     structure.
+ *
+ *	  A devid value of VFIO_PCI_DEVID_NOT_OWNED is provided in the return
+ *	  structure for affected devices where device is NOT represented in the
+ *	  dev_set or ownership is not available.  Such devices prevent the use
+ *	  of VFIO_DEVICE_PCI_HOT_RESET ioctl outside of the proof-of-ownership
+ *	  calling conventions (ie. via legacy group accessed devices).  Flag
+ *	  VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED would be set when all the
+ *	  affected devices are represented in the dev_set and also owned by
+ *	  the user.  This flag is available only when
+ *	  flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set, otherwise reserved.
+ *	  When set, user could invoke VFIO_DEVICE_PCI_HOT_RESET with a zero
+ *	  length fd array on the calling device as the ownership is validated
+ *	  by iommufd_ctx.
+ *
+ * Return: 0 on success, -errno on failure:
+ *	-enospc = insufficient buffer, -enodev = unsupported for device.
+ */
+struct vfio_pci_dependent_device {
+	union {
+		__u32   group_id;
+		__u32	devid;
+#define VFIO_PCI_DEVID_OWNED		0
+#define VFIO_PCI_DEVID_NOT_OWNED	-1
+	};
+	__u16	segment;
+	__u8	bus;
+	__u8	devfn; /* Use PCI_SLOT/PCI_FUNC */
+};
+
+struct vfio_pci_hot_reset_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID		(1 << 0)
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED	(1 << 1)
+	__u32	count;
+	struct vfio_pci_dependent_device	devices[];
+};
+
+#define VFIO_DEVICE_GET_PCI_HOT_RESET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/**
+ * VFIO_DEVICE_PCI_HOT_RESET - _IOW(VFIO_TYPE, VFIO_BASE + 13,
+ *				    struct vfio_pci_hot_reset)
+ *
+ * A PCI hot reset results in either a bus or slot reset which may affect
+ * other devices sharing the bus/slot.  The calling user must have
+ * ownership of the full set of affected devices as determined by the
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO ioctl.
+ *
+ * When called on a device file descriptor acquired through the vfio
+ * group interface, the user is required to provide proof of ownership
+ * of those affected devices via the group_fds array in struct
+ * vfio_pci_hot_reset.
+ *
+ * When called on a direct cdev opened vfio device, the flags field of
+ * struct vfio_pci_hot_reset_info reports the ownership status of the
+ * affected devices and this ioctl must be called with an empty group_fds
+ * array.  See above INFO ioctl definition for ownership requirements.
+ *
+ * Mixed usage of legacy groups and cdevs across the set of affected
+ * devices is not supported.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_pci_hot_reset {
+	__u32	argsz;
+	__u32	flags;
+	__u32	count;
+	__s32	group_fds[];
+};
+
+#define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE + 13)
+
+/**
+ * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
+ *                                    struct vfio_device_query_gfx_plane)
+ *
+ * Set the drm_plane_type and flags, then retrieve the gfx plane info.
+ *
+ * flags supported:
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF are set
+ *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
+ *   support for dma-buf.
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION are set
+ *   to ask if the mdev supports region. 0 on support, -EINVAL on no
+ *   support for region.
+ * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION is set
+ *   with each call to query the plane info.
+ * - Others are invalid and return -EINVAL.
+ *
+ * Note:
+ * 1. Plane could be disabled by guest. In that case, success will be
+ *    returned with zero-initialized drm_format, size, width and height
+ *    fields.
+ * 2. x_hot/y_hot is set to 0xFFFFFFFF if no hotspot information available
+ *
+ * Return: 0 on success, -errno on other failure.
+ */
+struct vfio_device_gfx_plane_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0)
+#define VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1)
+#define VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
+	/* in */
+	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
+	/* out */
+	__u32 drm_format;	/* drm format of plane */
+	__aligned_u64 drm_format_mod;   /* tiled mode */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page */
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane */
+	__u32 x_hot;    /* horizontal position of cursor hotspot */
+	__u32 y_hot;    /* vertical position of cursor hotspot */
+	union {
+		__u32 region_index;	/* region index */
+		__u32 dmabuf_id;	/* dma-buf id */
+	};
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/**
+ * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15, __u32)
+ *
+ * Return a new dma-buf file descriptor for an exposed guest framebuffer
+ * described by the provided dmabuf_id. The dmabuf_id is returned from
+ * VFIO_DEVICE_QUERY_GFX_PLANE as a token of the exposed guest framebuffer.
+ */
+
+#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
+
+/**
+ * VFIO_DEVICE_IOEVENTFD - _IOW(VFIO_TYPE, VFIO_BASE + 16,
+ *                              struct vfio_device_ioeventfd)
+ *
+ * Perform a write to the device at the specified device fd offset, with
+ * the specified data and width when the provided eventfd is triggered.
+ * vfio bus drivers may not support this for all regions, for all widths,
+ * or at all.  vfio-pci currently only enables support for BAR regions,
+ * excluding the MSI-X vector table.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_ioeventfd {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_IOEVENTFD_8		(1 << 0) /* 1-byte write */
+#define VFIO_DEVICE_IOEVENTFD_16	(1 << 1) /* 2-byte write */
+#define VFIO_DEVICE_IOEVENTFD_32	(1 << 2) /* 4-byte write */
+#define VFIO_DEVICE_IOEVENTFD_64	(1 << 3) /* 8-byte write */
+#define VFIO_DEVICE_IOEVENTFD_SIZE_MASK	(0xf)
+	__aligned_u64	offset;		/* device fd offset of write */
+	__aligned_u64	data;		/* data to be written */
+	__s32	fd;			/* -1 for de-assignment */
+	__u32	reserved;
+};
+
+#define VFIO_DEVICE_IOEVENTFD		_IO(VFIO_TYPE, VFIO_BASE + 16)
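
For illustration, a minimal sketch that arms a 4-byte doorbell write
(region_offset and doorbell are assumed to come from a prior
VFIO_DEVICE_GET_REGION_INFO lookup of a BAR that supports ioeventfds):

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int arm_doorbell(int device_fd, __u64 region_offset, __u64 doorbell,
			__u64 value)
{
	struct vfio_device_ioeventfd ioeventfd = {
		.argsz = sizeof(ioeventfd),
		.flags = VFIO_DEVICE_IOEVENTFD_32,	/* 4-byte write */
		.offset = region_offset + doorbell,
		.data = value,
		.fd = eventfd(0, 0),
	};

	if (ioeventfd.fd < 0)
		return -1;
	/* Once registered, signalling the eventfd performs the write. */
	return ioctl(device_fd, VFIO_DEVICE_IOEVENTFD, &ioeventfd);
}
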
+
+/**
+ * VFIO_DEVICE_FEATURE - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *			       struct vfio_device_feature)
+ *
+ * Get, set, or probe feature data of the device.  The feature is selected
+ * using the FEATURE_MASK portion of the flags field.  Support for a feature
+ * can be probed by setting both the FEATURE_MASK and PROBE bits.  A probe
+ * may optionally include the GET and/or SET bits to determine read vs write
+ * access of the feature respectively.  Probing a feature will return success
+ * if the feature is supported and all of the optionally indicated GET/SET
+ * methods are supported.  The format of the data portion of the structure is
+ * specific to the given feature.  The data portion is not required for
+ * probing.  GET and SET are mutually exclusive, except for use with PROBE.
+ *
+ * Return 0 on success, -errno on failure.
+ */
+struct vfio_device_feature {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FEATURE_MASK	(0xffff) /* 16-bit feature index */
+#define VFIO_DEVICE_FEATURE_GET		(1 << 16) /* Get feature into data[] */
+#define VFIO_DEVICE_FEATURE_SET		(1 << 17) /* Set feature from data[] */
+#define VFIO_DEVICE_FEATURE_PROBE	(1 << 18) /* Probe feature support */
+	__u8	data[];
+};
+
+#define VFIO_DEVICE_FEATURE		_IO(VFIO_TYPE, VFIO_BASE + 17)
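
For illustration, a small sketch of the PROBE convention (feature is any
16-bit feature index defined further below):

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns 1 if the feature supports GET on this device, 0 otherwise. */
static int feature_supports_get(int device_fd, __u32 feature)
{
	struct vfio_device_feature probe = {
		.argsz = sizeof(probe),
		.flags = VFIO_DEVICE_FEATURE_PROBE | VFIO_DEVICE_FEATURE_GET |
			 (feature & VFIO_DEVICE_FEATURE_MASK),
	};

	/* No data[] is needed for probing. */
	return ioctl(device_fd, VFIO_DEVICE_FEATURE, &probe) == 0;
}
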
+
+/*
+ * VFIO_DEVICE_BIND_IOMMUFD - _IOR(VFIO_TYPE, VFIO_BASE + 18,
+ *				   struct vfio_device_bind_iommufd)
+ * @argsz:	 User filled size of this data.
+ * @flags:	 Must be 0.
+ * @iommufd:	 iommufd to bind.
+ * @out_devid:	 The device id generated by this bind. devid is a handle for
+ *		 this device/iommufd bond and can be used in IOMMUFD commands.
+ *
+ * Bind a vfio_device to the specified iommufd.
+ *
+ * User is restricted from accessing the device before the binding operation
+ * is completed.  Only allowed on cdev fds.
+ *
+ * Unbind is automatically conducted when device fd is closed.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_bind_iommufd {
+	__u32		argsz;
+	__u32		flags;
+	__s32		iommufd;
+	__u32		out_devid;
+};
+
+#define VFIO_DEVICE_BIND_IOMMUFD	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/*
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 19,
+ *					struct vfio_device_attach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for attach.
+ * @pt_id:	Input the target id which can represent an ioas or a hwpt
+ *		allocated via iommufd subsystem.
+ *		Output the input ioas id or the attached hwpt id which could
+ *		be the specified hwpt itself or a hwpt automatically created
+ *		for the specified ioas by kernel during the attachment.
+ * @pasid:	The pasid to be attached, only meaningful when
+ *		VFIO_DEVICE_ATTACH_PASID is set in @flags
+ *
+ * Associate the device with an address space within the bound iommufd.
+ * Undo by VFIO_DEVICE_DETACH_IOMMUFD_PT or device fd close.  This is only
+ * allowed on cdev fds.
+ *
+ * If a vfio device or a pasid of this device is currently attached to a valid
+ * hw_pagetable (hwpt), without doing a VFIO_DEVICE_DETACH_IOMMUFD_PT, a second
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT ioctl passing in another hwpt id is allowed.
+ * This action, also known as a hw_pagetable replacement, will replace the
+ * currently attached hwpt of the device or the pasid of this device with a new
+ * hwpt corresponding to the given pt_id.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_attach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_ATTACH_PASID	(1 << 0)
+	__u32	pt_id;
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_ATTACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 19)
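
For illustration, a minimal bind-then-attach sketch; iommufd and ioas_id are
assumed to have been created beforehand through the iommufd uAPI, and
device_fd is a vfio cdev fd:

#include <sys/ioctl.h>
#include <linux/vfio.h>

static int bind_and_attach(int device_fd, int iommufd, __u32 ioas_id)
{
	struct vfio_device_bind_iommufd bind = {
		.argsz = sizeof(bind),
		.iommufd = iommufd,
	};
	struct vfio_device_attach_iommufd_pt attach = {
		.argsz = sizeof(attach),
		.pt_id = ioas_id,
	};

	if (ioctl(device_fd, VFIO_DEVICE_BIND_IOMMUFD, &bind) != 0)
		return -1;
	/* bind.out_devid now holds the handle usable in IOMMUFD commands. */
	/* On success, attach.pt_id is updated with the attached ioas/hwpt id. */
	return ioctl(device_fd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach);
}
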
+
+/*
+ * VFIO_DEVICE_DETACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 20,
+ *					struct vfio_device_detach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for detach.
+ * @pasid:	The pasid to be detached, only meaningful when
+ *		VFIO_DEVICE_DETACH_PASID is set in @flags
+ *
+ * Remove the association of the device or a pasid of the device and its current
+ * associated address space.  After it, the device or the pasid should be in a
+ * blocking DMA state.  This is only allowed on cdev fds.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_detach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_DETACH_PASID	(1 << 0)
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_DETACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/*
+ * Provide support for setting a PCI VF Token, which is used as a shared
+ * secret between PF and VF drivers.  This feature may only be set on a
+ * PCI SR-IOV PF when SR-IOV is enabled on the PF and there are no existing
+ * open VFs.  Data provided when setting this feature is a 16-byte array
+ * (__u8 b[16]), representing a UUID.
+ */
+#define VFIO_DEVICE_FEATURE_PCI_VF_TOKEN	(0)
+
+/*
+ * Indicates the device can support the migration API through
+ * VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE. If this GET succeeds, the RUNNING and
+ * ERROR states are always supported. Support for additional states is
+ * indicated via the flags field; at least VFIO_MIGRATION_STOP_COPY must be
+ * set.
+ *
+ * VFIO_MIGRATION_STOP_COPY means that STOP, STOP_COPY and
+ * RESUMING are supported.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P means that RUNNING_P2P
+ * is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_PRE_COPY means that
+ * PRE_COPY is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY
+ * means that RUNNING_P2P, PRE_COPY and PRE_COPY_P2P are supported
+ * in addition to the STOP_COPY states.
+ *
+ * Other combinations of flags have behavior to be defined in the future.
+ */
+struct vfio_device_feature_migration {
+	__aligned_u64 flags;
+#define VFIO_MIGRATION_STOP_COPY	(1 << 0)
+#define VFIO_MIGRATION_P2P		(1 << 1)
+#define VFIO_MIGRATION_PRE_COPY		(1 << 2)
+};
+#define VFIO_DEVICE_FEATURE_MIGRATION 1
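
For illustration, a sketch that reads this feature through VFIO_DEVICE_FEATURE;
the buffer layout (header immediately followed by the feature payload) follows
the data[] convention above:

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns 1 if PRE_COPY is supported, 0 if not, -1 if migration is absent. */
static int supports_precopy(int device_fd)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_migration)) / sizeof(__u64)] = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_migration *mig = (void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;
	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature) != 0)
		return -1;
	return !!(mig->flags & VFIO_MIGRATION_PRE_COPY);
}
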
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, execute a migration state change on the VFIO
+ * device. The new state is supplied in device_state, see enum
+ * vfio_device_mig_state for details
+ *
+ * The kernel migration driver must fully transition the device to the new state
+ * value before the operation returns to the user.
+ *
+ * The kernel migration driver must not generate asynchronous device state
+ * transitions outside of manipulation by the user or the VFIO_DEVICE_RESET
+ * ioctl as described above.
+ *
+ * If this function fails then current device_state may be the original
+ * operating state or some other state along the combination transition path.
+ * The user can then decide if it should execute a VFIO_DEVICE_RESET, attempt
+ * to return to the original state, or attempt to return to some other state
+ * such as RUNNING or STOP.
+ *
+ * If the new_state starts a new data transfer session then the FD associated
+ * with that session is returned in data_fd. The user is responsible for closing
+ * this FD when finished with it. The user must consider the migration data stream
+ * carried over the FD to be opaque and must preserve the byte order of the
+ * stream. The user is not required to preserve buffer segmentation when writing
+ * the data stream during the RESUMING operation.
+ *
+ * Upon VFIO_DEVICE_FEATURE_GET, get the current migration state of the VFIO
+ * device, data_fd will be -1.
+ */
+struct vfio_device_feature_mig_state {
+	__u32 device_state; /* From enum vfio_device_mig_state */
+	__s32 data_fd;
+};
+#define VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE 2
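
For illustration, a minimal sketch of a state transition request; a caller
would, for example, pass VFIO_DEVICE_STATE_STOP_COPY and then read() the
returned data_fd until end of stream:

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns the data_fd for arcs that open a data transfer session (-1 for
 * arcs that do not), or -1 if the transition itself failed. */
static int set_mig_state(int device_fd, enum vfio_device_mig_state state)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_mig_state)) / sizeof(__u64)] = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_mig_state *mig = (void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_SET |
			 VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = state;
	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature) != 0)
		return -1;
	return mig->data_fd;
}
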
+
+/*
+ * The device migration Finite State Machine is described by the enum
+ * vfio_device_mig_state. Some of the FSM arcs will create a migration data
+ * transfer session by returning a FD, in this case the migration data will
+ * flow over the FD using read() and write() as discussed below.
+ *
+ * There are 5 states to support VFIO_MIGRATION_STOP_COPY:
+ *  RUNNING - The device is running normally
+ *  STOP - The device does not change the internal or external state
+ *  STOP_COPY - The device internal state can be read out
+ *  RESUMING - The device is stopped and is loading a new internal state
+ *  ERROR - The device has failed and must be reset
+ *
+ * And optional states to support VFIO_MIGRATION_P2P:
+ *  RUNNING_P2P - RUNNING, except the device cannot do peer to peer DMA
+ * And VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY - The device is running normally but tracking internal state
+ *             changes
+ * And VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY_P2P - PRE_COPY, except the device cannot do peer to peer DMA
+ *
+ * The FSM takes actions on the arcs between FSM states. The driver implements
+ * the following behavior for the FSM arcs:
+ *
+ * RUNNING_P2P -> STOP
+ * STOP_COPY -> STOP
+ *   While in STOP the device must stop the operation of the device. The device
+ *   must not generate interrupts, DMA, or any other change to external state.
+ *   It must not change its internal state. When stopped the device and kernel
+ *   migration driver must accept and respond to interaction to support external
+ *   subsystems in the STOP state, for example PCI MSI-X and PCI config space.
+ *   Failure by the user to restrict device access while in STOP must not result
+ *   in error conditions outside the user context (ex. host system faults).
+ *
+ *   The STOP_COPY arc will terminate a data transfer session.
+ *
+ * RESUMING -> STOP
+ *   Leaving RESUMING terminates a data transfer session and indicates the
+ *   device should complete processing of the data delivered by write(). The
+ *   kernel migration driver should complete the incorporation of data written
+ *   to the data transfer FD into the device internal state and perform
+ *   final validity and consistency checking of the new device state. If the
+ *   user provided data is found to be incomplete, inconsistent, or otherwise
+ *   invalid, the migration driver must fail the SET_STATE ioctl and
+ *   optionally go to the ERROR state as described below.
+ *
+ *   While in STOP the device has the same behavior as other STOP states
+ *   described above.
+ *
+ *   To abort a RESUMING session the device must be reset.
+ *
+ * PRE_COPY -> RUNNING
+ * RUNNING_P2P -> RUNNING
+ *   While in RUNNING the device is fully operational, the device may generate
+ *   interrupts, DMA, respond to MMIO, all vfio device regions are functional,
+ *   and the device may advance its internal state.
+ *
+ *   The PRE_COPY arc will terminate a data transfer session.
+ *
+ * PRE_COPY_P2P -> RUNNING_P2P
+ * RUNNING -> RUNNING_P2P
+ * STOP -> RUNNING_P2P
+ *   While in RUNNING_P2P the device is partially running in the P2P quiescent
+ *   state defined below.
+ *
+ *   The PRE_COPY_P2P arc will terminate a data transfer session.
+ *
+ * RUNNING -> PRE_COPY
+ * RUNNING_P2P -> PRE_COPY_P2P
+ * STOP -> STOP_COPY
+ *   PRE_COPY, PRE_COPY_P2P and STOP_COPY form the "saving group" of states
+ *   which share a data transfer session. Moving between these states alters
+ *   what is streamed in session, but does not terminate or otherwise affect
+ *   the associated fd.
+ *
+ *   These arcs begin the process of saving the device state and will return a
+ *   new data_fd. The migration driver may perform actions such as enabling
+ *   dirty logging of device state when entering PRE_COPY or PRE_COPY_P2P.
+ *
+ *   Each arc does not change the device operation, the device remains
+ *   RUNNING, P2P quiesced or in STOP. The STOP_COPY state is described below
+ *   in PRE_COPY_P2P -> STOP_COPY.
+ *
+ * PRE_COPY -> PRE_COPY_P2P
+ *   Entering PRE_COPY_P2P continues all the behaviors of PRE_COPY above.
+ *   However, while in the PRE_COPY_P2P state, the device is partially running
+ *   in the P2P quiescent state defined below, like RUNNING_P2P.
+ *
+ * PRE_COPY_P2P -> PRE_COPY
+ *   This arc allows returning the device to a full RUNNING behavior while
+ *   continuing all the behaviors of PRE_COPY.
+ *
+ * PRE_COPY_P2P -> STOP_COPY
+ *   While in the STOP_COPY state the device has the same behavior as STOP
+ *   with the addition that the data transfers session continues to stream the
+ *   migration state. End of stream on the FD indicates the entire device
+ *   state has been transferred.
+ *
+ *   The user should take steps to restrict access to vfio device regions while
+ *   the device is in STOP_COPY or risk corruption of the device migration data
+ *   stream.
+ *
+ * STOP -> RESUMING
+ *   Entering the RESUMING state starts a process of restoring the device state
+ *   and will return a new data_fd. The data stream fed into the data_fd should
+ *   be taken from the data transfer output of a single FD during saving from
+ *   a compatible device. The migration driver may alter/reset the internal
+ *   device state for this arc if required to prepare the device to receive the
+ *   migration data.
+ *
+ * STOP_COPY -> PRE_COPY
+ * STOP_COPY -> PRE_COPY_P2P
+ *   These arcs are not permitted and return error if requested. Future
+ *   revisions of this API may define behaviors for these arcs, in this case
+ *   support will be discoverable by a new flag in
+ *   VFIO_DEVICE_FEATURE_MIGRATION.
+ *
+ * any -> ERROR
+ *   ERROR cannot be specified as a device state, however any transition request
+ *   can be failed with an errno return and may then move the device_state into
+ *   ERROR. In this case the device was unable to execute the requested arc and
+ *   was also unable to restore the device to any valid device_state.
+ *   To recover from ERROR VFIO_DEVICE_RESET must be used to return the
+ *   device_state back to RUNNING.
+ *
+ * The optional peer to peer (P2P) quiescent state is intended to be a quiescent
+ * state for the device for the purposes of managing multiple devices within a
+ * user context where peer-to-peer DMA between devices may be active. The
+ * RUNNING_P2P and PRE_COPY_P2P states must prevent the device from initiating
+ * any new P2P DMA transactions. If the device can identify P2P transactions
+ * then it can stop only P2P DMA, otherwise it must stop all DMA. The migration
+ * driver must complete any such outstanding operations prior to completing the
+ * FSM arc into a P2P state. For the purposes of this specification, the states
+ * behave as though the device was fully running if P2P is not supported. As in
+ * STOP or STOP_COPY, the user must not touch the device, otherwise the state
+ * can be exited.
+ *
+ * The remaining possible transitions are interpreted as combinations of the
+ * above FSM arcs. As there are multiple paths through the FSM arcs the path
+ * should be selected based on the following rules:
+ *   - Select the shortest path.
+ *   - The path cannot have saving group states as interior arcs, only
+ *     starting/end states.
+ * Refer to vfio_mig_get_next_state() for the result of the algorithm.
+ *
+ * The automatic transit through the FSM arcs that make up the combination
+ * transition is invisible to the user. When working with combination arcs the
+ * user may see any step along the path in the device_state if SET_STATE
+ * fails. When handling these types of errors users should anticipate future
+ * revisions of this protocol using new states and those states becoming
+ * visible in this case.
+ *
+ * The optional states cannot be used with SET_STATE if the device does not
+ * support them. The user can discover if these states are supported by using
+ * VFIO_DEVICE_FEATURE_MIGRATION. By using combination transitions the user can
+ * avoid knowing about these optional states if the kernel driver supports them.
+ *
+ * Arcs touching PRE_COPY and PRE_COPY_P2P are removed if support for PRE_COPY
+ * is not present.
+ */
+enum vfio_device_mig_state {
+	VFIO_DEVICE_STATE_ERROR = 0,
+	VFIO_DEVICE_STATE_STOP = 1,
+	VFIO_DEVICE_STATE_RUNNING = 2,
+	VFIO_DEVICE_STATE_STOP_COPY = 3,
+	VFIO_DEVICE_STATE_RESUMING = 4,
+	VFIO_DEVICE_STATE_RUNNING_P2P = 5,
+	VFIO_DEVICE_STATE_PRE_COPY = 6,
+	VFIO_DEVICE_STATE_PRE_COPY_P2P = 7,
+	VFIO_DEVICE_STATE_NR,
+};
+
+/**
+ * VFIO_MIG_GET_PRECOPY_INFO - _IO(VFIO_TYPE, VFIO_BASE + 21)
+ *
+ * This ioctl is used on the migration data FD in the precopy phase of the
+ * migration data transfer. It returns an estimate of the current data sizes
+ * remaining to be transferred. It allows the user to judge when it is
+ * appropriate to leave PRE_COPY for STOP_COPY.
+ *
+ * This ioctl is valid only in PRE_COPY states and kernel driver should
+ * return -EINVAL from any other migration state.
+ *
+ * The vfio_precopy_info data structure returned by this ioctl provides
+ * estimates of data available from the device during the PRE_COPY states.
+ * This estimate is split into two categories, initial_bytes and
+ * dirty_bytes.
+ *
+ * The initial_bytes field indicates the amount of initial precopy
+ * data available from the device. This field should have a non-zero initial
+ * value and decrease as migration data is read from the device.
+ * It is recommended to leave PRE_COPY for STOP_COPY only after this field
+ * reaches zero. Leaving PRE_COPY earlier might make things slower.
+ *
+ * The dirty_bytes field tracks device state changes relative to data
+ * previously retrieved.  This field starts at zero and may increase as
+ * the internal device state is modified or decrease as that modified
+ * state is read from the device.
+ *
+ * Userspace may use the combination of these fields to estimate the
+ * potential data size available during the PRE_COPY phases, as well as
+ * trends relative to the rate the device is dirtying its internal
+ * state, but these fields are not required to have any bearing relative
+ * to the data size available during the STOP_COPY phase.
+ *
+ * Drivers have a lot of flexibility in when and what they transfer during the
+ * PRE_COPY phase, and how they report this from VFIO_MIG_GET_PRECOPY_INFO.
+ *
+ * During pre-copy the migration data FD has a temporary "end of stream" that is
+ * reached when both initial_bytes and dirty_bytes are zero. For instance, this
+ * may indicate that the device is idle and not currently dirtying any internal
+ * state. When read() is done on this temporary end of stream the kernel driver
+ * should return ENOMSG from read(). Userspace can wait for more data (which may
+ * never come) by using poll.
+ *
+ * Once in STOP_COPY the migration data FD has a permanent end of stream
+ * signaled in the usual way by read() always returning 0 and poll always
+ * returning readable. ENOMSG may not be returned in STOP_COPY.
+ * Support for this ioctl is mandatory if a driver claims to support
+ * VFIO_MIGRATION_PRE_COPY.
+ *
+ * Return: 0 on success, -1 and errno set on failure.
+ */
+struct vfio_precopy_info {
+	__u32 argsz;
+	__u32 flags;
+	__aligned_u64 initial_bytes;
+	__aligned_u64 dirty_bytes;
+};
+
+#define VFIO_MIG_GET_PRECOPY_INFO _IO(VFIO_TYPE, VFIO_BASE + 21)
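
For illustration, a small helper sketch a live-migration loop could use to
decide when to leave PRE_COPY (data_fd is the FD returned when entering
PRE_COPY):

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns 1 when initial_bytes has drained to zero, 0 to keep reading,
 * -1 on error (e.g. not in a PRE_COPY state). */
static int precopy_drained(int data_fd)
{
	struct vfio_precopy_info info = { .argsz = sizeof(info) };

	if (ioctl(data_fd, VFIO_MIG_GET_PRECOPY_INFO, &info) != 0)
		return -1;
	return info.initial_bytes == 0;
}
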
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, allow the device to be moved into a low power
+ * state with the platform-based power management.  Device use of lower power
+ * states depends on factors managed by the runtime power management core,
+ * including system level support and coordinating support among dependent
+ * devices.  Enabling device low power entry does not guarantee lower power
+ * usage by the device, nor is a mechanism provided through this feature to
+ * know the current power state of the device.  If any device access happens
+ * (either from the host or through the vfio uAPI) when the device is in the
+ * low power state, then the host will move the device out of the low power
+ * state as necessary prior to the access.  Once the access is completed, the
+ * device may re-enter the low power state.  For single shot low power support
+ * with wake-up notification, see
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP below.  Access to mmap'd
+ * device regions is disabled on LOW_POWER_ENTRY and may only be resumed after
+ * calling LOW_POWER_EXIT.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY 3
+
+/*
+ * This device feature has the same behavior as
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY with the exception that the user
+ * provides an eventfd for wake-up notification.  When the device moves out of
+ * the low power state for the wake-up, the host will not allow the device to
+ * re-enter a low power state without a subsequent user call to one of the low
+ * power entry device feature IOCTLs.  Access to mmap'd device regions is
+ * disabled on LOW_POWER_ENTRY_WITH_WAKEUP and may only be resumed after the
+ * low power exit.  The low power exit can happen either through LOW_POWER_EXIT
+ * or through any other access (where the wake-up notification has been
+ * generated).  The access to mmap'd device regions will not trigger low power
+ * exit.
+ *
+ * The notification through the provided eventfd will be generated only when
+ * the device has entered and is resumed from a low power state after
+ * calling this device feature IOCTL.  A device that has not entered low power
+ * state, as managed through the runtime power management core, will not
+ * generate a notification through the provided eventfd on access.  Calling the
+ * LOW_POWER_EXIT feature is optional in the case where notification has been
+ * signaled on the provided eventfd that a resume from low power has occurred.
+ */
+struct vfio_device_low_power_entry_with_wakeup {
+	__s32 wakeup_eventfd;
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP 4
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, disallow use of device low power states as
+ * previously enabled via VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY or
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP device features.
+ * This device feature IOCTL may itself generate a wakeup eventfd notification
+ * in the latter case if the device had previously entered a low power state.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_EXIT 5
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET start/stop device DMA logging.
+ * VFIO_DEVICE_FEATURE_PROBE can be used to detect if the device supports
+ * DMA logging.
+ *
+ * DMA logging allows a device to internally record what DMAs the device is
+ * initiating and report them back to userspace. It is part of the VFIO
+ * migration infrastructure that allows implementing dirty page tracking
+ * during the pre copy phase of live migration. Only DMA WRITEs are logged,
+ * and this API is not connected to VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE.
+ *
+ * When DMA logging is started a range of IOVAs to monitor is provided and the
+ * device can optimize its logging to cover only the IOVA range given. Each
+ * DMA that the device initiates inside the range will be logged by the device
+ * for later retrieval.
+ *
+ * page_size is an input that hints what tracking granularity the device
+ * should try to achieve. If the device cannot do the hinted page size then
+ * it's the driver choice which page size to pick based on its support.
+ * On output the device will return the page size it selected.
+ *
+ * ranges is a pointer to an array of
+ * struct vfio_device_feature_dma_logging_range.
+ *
+ * The core kernel code guarantees to support, at a minimum, a num_ranges that
+ * fits into a single kernel page. User space can try higher values but should
+ * give up if the above cannot be achieved due to driver limitations.
+ *
+ * A single call to start device DMA logging can be issued and a matching stop
+ * should follow at the end. Another start is not allowed in the meantime.
+ */
+struct vfio_device_feature_dma_logging_control {
+	__aligned_u64 page_size;
+	__u32 num_ranges;
+	__u32 __reserved;
+	__aligned_u64 ranges;
+};
+
+struct vfio_device_feature_dma_logging_range {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_START 6
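
For illustration, a sketch that starts tracking over a single IOVA range; the
4 KiB page_size is only a hint and the driver may pick another granularity:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int start_dma_logging(int device_fd, __u64 iova, __u64 length)
{
	struct vfio_device_feature_dma_logging_range range = {
		.iova = iova,
		.length = length,
	};
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_dma_logging_control)) /
		  sizeof(__u64)] = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_dma_logging_control *ctrl =
		(void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_SET |
			 VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
	ctrl->page_size = 4096;
	ctrl->num_ranges = 1;
	ctrl->ranges = (__u64)(uintptr_t)&range;
	return ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
}
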
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET stop device DMA logging that was started
+ * by VFIO_DEVICE_FEATURE_DMA_LOGGING_START
+ */
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_STOP 7
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back and clear the device DMA log
+ *
+ * Query the device's DMA log for written pages within the given IOVA range.
+ * During querying the log is cleared for the IOVA range.
+ *
+ * bitmap is a pointer to an array of u64s that will hold the output bitmap
+ * with 1 bit reporting a page_size unit of IOVA. The mapping of IOVA to bits
+ * is given by:
+ *  bitmap[(addr - iova)/page_size] & (1ULL << (addr % 64))
+ *
+ * The input page_size can be any power of two value and does not have to
+ * match the value given to VFIO_DEVICE_FEATURE_DMA_LOGGING_START. The driver
+ * will format its internal logging to match the reporting page size, possibly
+ * by replicating bits if the internal page size is lower than requested.
+ *
+ * The LOGGING_REPORT will only set bits in the bitmap and never clear or
+ * perform any initialization of the user provided bitmap.
+ *
+ * If any error is returned userspace should assume that the dirty log is
+ * corrupted. Error recovery is to consider all memory dirty and try to
+ * restart the dirty tracking, or to abort/restart the whole migration.
+ *
+ * If DMA logging is not enabled, an error will be returned.
+ *
+ */
+struct vfio_device_feature_dma_logging_report {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+	__aligned_u64 page_size;
+	__aligned_u64 bitmap;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_REPORT 8
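
For illustration, a sketch that reads back the dirty bitmap; the caller owns
and zero-initializes the bitmap buffer (one bit per page_size unit of IOVA):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int report_dma_log(int device_fd, __u64 iova, __u64 length,
			  __u64 page_size, __u64 *bitmap)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_dma_logging_report)) /
		  sizeof(__u64)] = {};
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_dma_logging_report *report =
		(void *)feature->data;

	feature->argsz = sizeof(buf);
	feature->flags = VFIO_DEVICE_FEATURE_GET |
			 VFIO_DEVICE_FEATURE_DMA_LOGGING_REPORT;
	report->iova = iova;
	report->length = length;
	report->page_size = page_size;
	report->bitmap = (__u64)(uintptr_t)bitmap;
	return ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
}
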
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back the estimated data length that will
+ * be required to complete stop copy.
+ *
+ * Note: Can be called on each device state.
+ */
+
+struct vfio_device_feature_mig_data_size {
+	__aligned_u64 stop_copy_length;
+};
+
+#define VFIO_DEVICE_FEATURE_MIG_DATA_SIZE 9
+
+/**
+ * Upon VFIO_DEVICE_FEATURE_SET, set or clear the BUS mastering for the device
+ * based on the operation specified in op flag.
+ *
+ * This functionality is provided for devices that need bus master control
+ * but whose in-band device interface lacks such support. Consequently, it is not
+ * applicable to PCI devices, as bus master control for PCI devices is managed
+ * in-band through the configuration space. At present, this feature is supported
+ * only for CDX devices.
+ * Configuring the device's BUS MASTER setting as CLEAR blocks all incoming DMA
+ * requests from the device.  Configuring it as SET (enable) grants the device the
+ * capability to perform DMA to the host memory.
+ */
+struct vfio_device_feature_bus_master {
+	__u32 op;
+#define		VFIO_DEVICE_FEATURE_CLEAR_MASTER	0	/* Clear Bus Master */
+#define		VFIO_DEVICE_FEATURE_SET_MASTER		1	/* Set Bus Master */
+};
+#define VFIO_DEVICE_FEATURE_BUS_MASTER 10
+
+/* -------- API for Type1 VFIO IOMMU -------- */
+
+/**
+ * VFIO_IOMMU_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 12, struct vfio_iommu_info)
+ *
+ * Retrieve information about the IOMMU object. Fills in provided
+ * struct vfio_iommu_info. Caller sets argsz.
+ *
+ * XXX Should we do these by CHECK_EXTENSION too?
+ */
+struct vfio_iommu_type1_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+	__aligned_u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+
+/*
+ * The IOVA capability allows reporting of the valid IOVA range(s)
+ * excluding any non-relaxable reserved regions exposed by
+ * devices attached to the container. Any DMA map attempt
+ * outside the valid iova range will return error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE  1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct	vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct	vfio_iova_range iova_ranges[];
+};
+
+/*
+ * The migration capability allows reporting of the features supported for migration.
+ *
+ * The structures below define version 1 of this capability.
+ *
+ * The existence of this capability indicates that IOMMU kernel driver supports
+ * dirty page logging.
+ *
+ * pgsize_bitmap: Kernel driver returns bitmap of supported page sizes for dirty
+ * page logging.
+ * max_dirty_bitmap_size: Kernel driver returns maximum supported dirty bitmap
+ * size in bytes that can be used by user applications when getting the dirty
+ * bitmap.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  2
+
+struct vfio_iommu_type1_info_cap_migration {
+	struct	vfio_info_cap_header header;
+	__u32	flags;
+	__u64	pgsize_bitmap;
+	__u64	max_dirty_bitmap_size;		/* in bytes */
+};
+
+/*
+ * The DMA available capability allows reporting of the current number of
+ * simultaneously outstanding DMA mappings that are allowed.
+ *
+ * The structure below defines version 1 of this capability.
+ *
+ * avail: specifies the current number of outstanding DMA mappings allowed.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL 3
+
+struct vfio_iommu_type1_info_dma_avail {
+	struct	vfio_info_cap_header header;
+	__u32	avail;
+};
+
+#define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
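
For illustration, a sketch of the usual two-call pattern and capability chain
walk, here looking for the IOVA range capability:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static void print_iova_ranges(int container_fd)
{
	struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };
	struct vfio_iommu_type1_info *full;
	struct vfio_info_cap_header *hdr;
	__u32 off, i;

	/* First call: the kernel updates argsz to the required size. */
	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info) != 0)
		return;
	full = calloc(1, info.argsz);
	if (full == NULL)
		return;
	full->argsz = info.argsz;
	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, full) != 0 ||
	    !(full->flags & VFIO_IOMMU_INFO_CAPS))
		goto out;
	for (off = full->cap_offset; off != 0; off = hdr->next) {
		hdr = (struct vfio_info_cap_header *)((char *)full + off);
		if (hdr->id != VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE)
			continue;
		struct vfio_iommu_type1_info_cap_iova_range *cap = (void *)hdr;

		for (i = 0; i < cap->nr_iovas; i++)
			printf("iova [0x%llx, 0x%llx]\n",
			       (unsigned long long)cap->iova_ranges[i].start,
			       (unsigned long long)cap->iova_ranges[i].end);
	}
out:
	free(full);
}
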
+
+/**
+ * VFIO_IOMMU_MAP_DMA - _IOW(VFIO_TYPE, VFIO_BASE + 13, struct vfio_dma_map)
+ *
+ * Map process virtual addresses to IO virtual addresses using the
+ * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+ * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova. The vaddr
+ * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR.  To
+ * maintain memory consistency within the user application, the updated vaddr
+ * must address the same memory object as originally mapped.  Failure to do so
+ * will result in user memory corruption and/or device misbehavior.  iova and
+ * size must match those in the original MAP_DMA call.  Protection is not
+ * changed, and the READ & WRITE flags must be 0.
+ */
+struct vfio_iommu_type1_dma_map {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
+#define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
+#define VFIO_DMA_MAP_FLAG_VADDR (1 << 2)
+	__u64	vaddr;				/* Process virtual address */
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+
+#define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
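
For illustration, a minimal sketch that backs an IOVA range with an anonymous
mapping and maps it read/write (container_fd is an already configured
container):

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static int map_buffer(int container_fd, __u64 iova, size_t size, void **vaddr)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.iova = iova,
		.size = size,
	};

	*vaddr = mmap(NULL, size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (*vaddr == MAP_FAILED)
		return -1;
	map.vaddr = (__u64)(uintptr_t)*vaddr;
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}
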
+
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap in bytes */
+	__u64        size;	/* in bytes */
+	__u64 *data;	/* one bit per page */
+};
+
+/**
+ * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
+ *							struct vfio_dma_unmap)
+ *
+ * Unmap IO virtual addresses using the provided struct vfio_dma_unmap.
+ * Caller sets argsz.  The actual unmapped size is returned in the size
+ * field.  No guarantee is made to the user that arbitrary unmaps of iova
+ * or size different from those used in the original mapping call will
+ * succeed.
+ *
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide a struct vfio_bitmap in data[]. The user must provide zero-initialized
+ * memory via vfio_bitmap.data and its size in the vfio_bitmap.size field.
+ * A bit in the bitmap represents one page, of the user-provided page size given
+ * in the vfio_bitmap.pgsize field, consecutively starting from the iova offset.
+ * A set bit indicates that the page at that offset from iova is dirty. A bitmap of the
+ * pages in the range of unmapped size is returned in the user-provided
+ * vfio_bitmap.data.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_ALL, unmap all addresses.  iova and size
+ * must be 0.  This cannot be combined with the get-dirty-bitmap flag.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host
+ * virtual addresses in the iova range.  DMA to already-mapped pages continues.
+ * Groups may not be added to the container while any addresses are invalid.
+ * This cannot be combined with the get-dirty-bitmap flag.
+ */
+struct vfio_iommu_type1_dma_unmap {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
+#define VFIO_DMA_UNMAP_FLAG_ALL		     (1 << 1)
+#define VFIO_DMA_UNMAP_FLAG_VADDR	     (1 << 2)
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
+};
+
+#define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/*
+ * IOCTLs to enable/disable IOMMU container usage.
+ * No parameters are supported.
+ */
+#define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
+#define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * IOCTL is used for dirty pages logging.
+ * Caller should set flag depending on which operation to perform, details as
+ * below:
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_START flag set, instructs
+ * the IOMMU driver to log pages that are dirtied or potentially dirtied by
+ * the device; designed to be used when a migration is in progress. Dirty pages
+ * are logged until logging is disabled by user application by calling the IOCTL
+ * with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag set, instructs
+ * the IOMMU driver to stop logging dirtied pages.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set
+ * returns the dirty pages bitmap for IOMMU container for a given IOVA range.
+ * The user must specify the IOVA range and the pgsize through the structure
+ * vfio_iommu_type1_dirty_bitmap_get in the data[] portion. This interface
+ * supports getting a bitmap of the smallest supported pgsize only and can be
+ * modified in future to get a bitmap of any specified supported pgsize. The
+ * user must provide a zeroed memory area for the bitmap memory and specify its
+ * size in bitmap.size. One bit is used to represent one page consecutively
+ * starting from iova offset. The user should provide page size in bitmap.pgsize
+ * field. A bit set in the bitmap indicates that the page at that offset from
+ * iova is dirty. The caller must set argsz to a value including the size of
+ * structure vfio_iommu_type1_dirty_bitmap_get, but excluding the size of the
+ * actual bitmap. If dirty pages logging is not enabled, an error will be
+ * returned.
+ *
+ * Only one of the flags _START, _STOP and _GET may be specified at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
+
+/*
+ * The SPAPR TCE DDW info struct provides the information about
+ * the details of Dynamic DMA window capability.
+ *
+ * @pgsizes contains a page size bitmask, 4K/64K/16M are supported.
+ * @max_dynamic_windows_supported tells the maximum number of windows
+ * which the platform can create.
+ * @levels tells the maximum number of levels in multi-level IOMMU tables;
+ * this allows splitting a table into smaller chunks which reduces
+ * the amount of physically contiguous memory required for the table.
+ */
+struct vfio_iommu_spapr_tce_ddw_info {
+	__u64 pgsizes;			/* Bitmap of supported page sizes */
+	__u32 max_dynamic_windows_supported;
+	__u32 levels;
+};
+
+/*
+ * The SPAPR TCE info struct provides the information about the PCI bus
+ * address ranges available for DMA; these values are programmed into
+ * the hardware, so the guest has to know this information.
+ *
+ * The DMA 32 bit window start is an absolute PCI bus address.
+ * The IOVA address passed via map/unmap ioctls are absolute PCI bus
+ * addresses too so the window works as a filter rather than an offset
+ * for IOVA addresses.
+ *
+ * Flags supported:
+ * - VFIO_IOMMU_SPAPR_INFO_DDW: informs the userspace that dynamic DMA windows
+ *   (DDW) support is present. @ddw is only supported when DDW is present.
+ */
+struct vfio_iommu_spapr_tce_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_IOMMU_SPAPR_INFO_DDW	(1 << 0)	/* DDW supported */
+	__u32 dma32_window_start;	/* 32 bit window start (bytes) */
+	__u32 dma32_window_size;	/* 32 bit window size (bytes) */
+	struct vfio_iommu_spapr_tce_ddw_info ddw;
+};
+
+#define VFIO_IOMMU_SPAPR_TCE_GET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/*
+ * EEH PE operation struct provides ways to:
+ * - enable/disable EEH functionality;
+ * - unfreeze IO/DMA for frozen PE;
+ * - read PE state;
+ * - reset PE;
+ * - configure PE;
+ * - inject EEH error.
+ */
+struct vfio_eeh_pe_err {
+	__u32 type;
+	__u32 func;
+	__u64 addr;
+	__u64 mask;
+};
+
+struct vfio_eeh_pe_op {
+	__u32 argsz;
+	__u32 flags;
+	__u32 op;
+	union {
+		struct vfio_eeh_pe_err err;
+	};
+};
+
+#define VFIO_EEH_PE_DISABLE		0	/* Disable EEH functionality */
+#define VFIO_EEH_PE_ENABLE		1	/* Enable EEH functionality  */
+#define VFIO_EEH_PE_UNFREEZE_IO		2	/* Enable IO for frozen PE   */
+#define VFIO_EEH_PE_UNFREEZE_DMA	3	/* Enable DMA for frozen PE  */
+#define VFIO_EEH_PE_GET_STATE		4	/* PE state retrieval        */
+#define  VFIO_EEH_PE_STATE_NORMAL	0	/* PE in functional state    */
+#define  VFIO_EEH_PE_STATE_RESET	1	/* PE reset in progress      */
+#define  VFIO_EEH_PE_STATE_STOPPED	2	/* Stopped DMA and IO        */
+#define  VFIO_EEH_PE_STATE_STOPPED_DMA	4	/* Stopped DMA only          */
+#define  VFIO_EEH_PE_STATE_UNAVAIL	5	/* State unavailable         */
+#define VFIO_EEH_PE_RESET_DEACTIVATE	5	/* Deassert PE reset         */
+#define VFIO_EEH_PE_RESET_HOT		6	/* Assert hot reset          */
+#define VFIO_EEH_PE_RESET_FUNDAMENTAL	7	/* Assert fundamental reset  */
+#define VFIO_EEH_PE_CONFIGURE		8	/* PE configuration          */
+#define VFIO_EEH_PE_INJECT_ERR		9	/* Inject EEH error          */
+
+#define VFIO_EEH_PE_OP			_IO(VFIO_TYPE, VFIO_BASE + 21)
+
+/**
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 17, struct vfio_iommu_spapr_register_memory)
+ *
+ * Registers user space memory where DMA is allowed. It pins
+ * user pages and does the locked memory accounting so
+ * subsequent VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA calls
+ * get faster.
+ */
+struct vfio_iommu_spapr_register_memory {
+	__u32	argsz;
+	__u32	flags;
+	__u64	vaddr;				/* Process virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+#define VFIO_IOMMU_SPAPR_REGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/**
+ * VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 18, struct vfio_iommu_spapr_register_memory)
+ *
+ * Unregisters user space memory registered with
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY.
+ * Uses vfio_iommu_spapr_register_memory for parameters.
+ */
+#define VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_CREATE - _IOWR(VFIO_TYPE, VFIO_BASE + 19, struct vfio_iommu_spapr_tce_create)
+ *
+ * Creates an additional TCE table and programs it (sets a new DMA window)
+ * to every IOMMU group in the container. It receives page shift, window
+ * size and number of levels in the TCE table being created.
+ *
+ * It allocates and returns an offset on a PCI bus of the new DMA window.
+ */
+struct vfio_iommu_spapr_tce_create {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u32 page_shift;
+	__u32 __resv1;
+	__u64 window_size;
+	__u32 levels;
+	__u32 __resv2;
+	/* out */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_CREATE	_IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_REMOVE - _IOW(VFIO_TYPE, VFIO_BASE + 20, struct vfio_iommu_spapr_tce_remove)
+ *
+ * Unprograms a TCE table from all groups in the container and destroys it.
+ * It receives a PCI bus offset as a window id.
+ */
+struct vfio_iommu_spapr_tce_remove {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_REMOVE	_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/* ***************************************************************** */
+
+#endif /* _UAPIVFIO_H */
diff --git a/kernel/linux/uapi/version b/kernel/linux/uapi/version
index 3c68968f92..966a998301 100644
--- a/kernel/linux/uapi/version
+++ b/kernel/linux/uapi/version
@@ -1 +1 @@
-v6.14
+v6.16
-- 
2.51.0


^ permalink raw reply	[relevance 1%]

* [RFC v2 8/9] uapi: import VFIO header
  @ 2025-09-03 15:17  1%   ` David Marchand
  0 siblings, 0 replies; 77+ results
From: David Marchand @ 2025-09-03 15:17 UTC (permalink / raw)
  To: dev; +Cc: thomas, maxime.coquelin, anatoly.burakov

Import VFIO header (from v6.16) to be included in many parts of DPDK.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 kernel/linux/uapi/linux/vfio.h | 1836 ++++++++++++++++++++++++++++++++
 kernel/linux/uapi/version      |    2 +-
 2 files changed, 1837 insertions(+), 1 deletion(-)
 create mode 100644 kernel/linux/uapi/linux/vfio.h

diff --git a/kernel/linux/uapi/linux/vfio.h b/kernel/linux/uapi/linux/vfio.h
new file mode 100644
index 0000000000..4413783940
--- /dev/null
+++ b/kernel/linux/uapi/linux/vfio.h
@@ -0,0 +1,1836 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * VFIO API definition
+ *
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ *     Author: Alex Williamson <alex.williamson@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef _UAPIVFIO_H
+#define _UAPIVFIO_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define VFIO_API_VERSION	0
+
+
+/* Kernel & User level defines for VFIO IOCTLs. */
+
+/* Extensions */
+
+#define VFIO_TYPE1_IOMMU		1
+#define VFIO_SPAPR_TCE_IOMMU		2
+#define VFIO_TYPE1v2_IOMMU		3
+/*
+ * IOMMU enforces DMA cache coherence (ex. PCIe NoSnoop stripping).  This
+ * capability is subject to change as groups are added or removed.
+ */
+#define VFIO_DMA_CC_IOMMU		4
+
+/* Check if EEH is supported */
+#define VFIO_EEH			5
+
+/* Two-stage IOMMU */
+#define __VFIO_RESERVED_TYPE1_NESTING_IOMMU	6	/* Implies v2 */
+
+#define VFIO_SPAPR_TCE_v2_IOMMU		7
+
+/*
+ * The No-IOMMU IOMMU offers no translation or isolation for devices and
+ * supports no ioctls outside of VFIO_CHECK_EXTENSION.  Use of VFIO's No-IOMMU
+ * code will taint the host kernel and should be used with extreme caution.
+ */
+#define VFIO_NOIOMMU_IOMMU		8
+
+/* Supports VFIO_DMA_UNMAP_FLAG_ALL */
+#define VFIO_UNMAP_ALL			9
+
+/*
+ * Supports the vaddr flag for DMA map and unmap.  Not supported for mediated
+ * devices, so this capability is subject to change as groups are added or
+ * removed.
+ */
+#define VFIO_UPDATE_VADDR		10
+
+/*
+ * The IOCTL interface is designed for extensibility by embedding the
+ * structure length (argsz) and flags into structures passed between
+ * kernel and userspace.  We therefore use the _IO() macro for these
+ * defines to avoid implicitly embedding a size into the ioctl request.
+ * As structure fields are added, argsz will increase to match and flag
+ * bits will be defined to indicate additional fields with valid data.
+ * It's *always* the caller's responsibility to indicate the size of
+ * the structure passed by setting argsz appropriately.
+ */
+
+#define VFIO_TYPE	(';')
+#define VFIO_BASE	100
+
+/*
+ * For extension of INFO ioctls, VFIO makes use of a capability chain
+ * designed after PCI/e capabilities.  A flag bit indicates whether
+ * this capability chain is supported and a field defined in the fixed
+ * structure defines the offset of the first capability in the chain.
+ * This field is only valid when the corresponding bit in the flags
+ * bitmap is set.  This offset field is relative to the start of the
+ * INFO buffer, as is the next field within each capability header.
+ * The id within the header is a shared address space per INFO ioctl,
+ * while the version field is specific to the capability id.  The
+ * contents following the header are specific to the capability id.
+ */
+struct vfio_info_cap_header {
+	__u16	id;		/* Identifies capability */
+	__u16	version;	/* Version specific to the capability ID */
+	__u32	next;		/* Offset of next capability */
+};
+
+/*
+ * Callers of INFO ioctls passing insufficiently sized buffers will see
+ * the capability chain flag bit set, a zero value for the first capability
+ * offset (if available within the provided argsz), and argsz will be
+ * updated to report the necessary buffer size.  For compatibility, the
+ * INFO ioctl will not report error in this case, but the capability chain
+ * will not be available.
+ */
+
+/* -------- IOCTLs for VFIO file descriptor (/dev/vfio/vfio) -------- */
+
+/**
+ * VFIO_GET_API_VERSION - _IO(VFIO_TYPE, VFIO_BASE + 0)
+ *
+ * Report the version of the VFIO API.  This allows us to bump the entire
+ * API version should we later need to add or change features in incompatible
+ * ways.
+ * Return: VFIO_API_VERSION
+ * Availability: Always
+ */
+#define VFIO_GET_API_VERSION		_IO(VFIO_TYPE, VFIO_BASE + 0)
+
+/**
+ * VFIO_CHECK_EXTENSION - _IOW(VFIO_TYPE, VFIO_BASE + 1, __u32)
+ *
+ * Check whether an extension is supported.
+ * Return: 0 if not supported, 1 (or some other positive integer) if supported.
+ * Availability: Always
+ */
+#define VFIO_CHECK_EXTENSION		_IO(VFIO_TYPE, VFIO_BASE + 1)
+
+/**
+ * VFIO_SET_IOMMU - _IOW(VFIO_TYPE, VFIO_BASE + 2, __s32)
+ *
+ * Set the iommu to the given type.  The type must be supported by an
+ * iommu driver as verified by calling CHECK_EXTENSION using the same
+ * type.  A group must be set to this file descriptor before this
+ * ioctl is available.  The IOMMU interfaces enabled by this call are
+ * specific to the value set.
+ * Return: 0 on success, -errno on failure
+ * Availability: When VFIO group attached
+ */
+#define VFIO_SET_IOMMU			_IO(VFIO_TYPE, VFIO_BASE + 2)
+
+/* -------- IOCTLs for GROUP file descriptors (/dev/vfio/$GROUP) -------- */
+
+/**
+ * VFIO_GROUP_GET_STATUS - _IOR(VFIO_TYPE, VFIO_BASE + 3,
+ *						struct vfio_group_status)
+ *
+ * Retrieve information about the group.  Fills in provided
+ * struct vfio_group_status.  Caller sets argsz.
+ * Return: 0 on success, -errno on failure.
+ * Availability: Always
+ */
+struct vfio_group_status {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_GROUP_FLAGS_VIABLE		(1 << 0)
+#define VFIO_GROUP_FLAGS_CONTAINER_SET	(1 << 1)
+};
+#define VFIO_GROUP_GET_STATUS		_IO(VFIO_TYPE, VFIO_BASE + 3)
+
+/**
+ * VFIO_GROUP_SET_CONTAINER - _IOW(VFIO_TYPE, VFIO_BASE + 4, __s32)
+ *
+ * Set the container for the VFIO group to the open VFIO file
+ * descriptor provided.  Groups may only belong to a single
+ * container.  Containers may, at their discretion, support multiple
+ * groups.  Only when a container is set are all of the interfaces
+ * of the VFIO file descriptor and the VFIO group file descriptor
+ * available to the user.
+ * Return: 0 on success, -errno on failure.
+ * Availability: Always
+ */
+#define VFIO_GROUP_SET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 4)
+
+/**
+ * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
+ *
+ * Remove the group from the attached container.  This is the
+ * opposite of the SET_CONTAINER call and returns the group to
+ * an initial state.  All device file descriptors must be released
+ * prior to calling this interface.  When removing the last group
+ * from a container, the IOMMU will be disabled and all state lost,
+ * effectively also returning the VFIO file descriptor to an initial
+ * state.
+ * Return: 0 on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_UNSET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 5)
+
+/**
+ * VFIO_GROUP_GET_DEVICE_FD - _IOW(VFIO_TYPE, VFIO_BASE + 6, char)
+ *
+ * Return a new file descriptor for the device object described by
+ * the provided string.  The string should match a device listed in
+ * the devices subdirectory of the IOMMU group sysfs entry.  The
+ * group containing the device must already be added to this context.
+ * Return: new file descriptor on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_GET_DEVICE_FD	_IO(VFIO_TYPE, VFIO_BASE + 6)
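
For illustration, the classic container/group bring-up as a compact sketch;
the group number and PCI address are hypothetical and would normally be
discovered through sysfs (error cleanup is trimmed for brevity):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int open_device(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);	/* hypothetical group */
	struct vfio_group_status status = { .argsz = sizeof(status) };

	if (container < 0 || group < 0)
		return -1;
	if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION ||
	    !ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU))
		return -1;
	if (ioctl(group, VFIO_GROUP_GET_STATUS, &status) != 0 ||
	    !(status.flags & VFIO_GROUP_FLAGS_VIABLE))
		return -1;
	if (ioctl(group, VFIO_GROUP_SET_CONTAINER, &container) != 0 ||
	    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU) != 0)
		return -1;
	return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
}
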
+
+/* --------------- IOCTLs for DEVICE file descriptors --------------- */
+
+/**
+ * VFIO_DEVICE_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 7,
+ *						struct vfio_device_info)
+ *
+ * Retrieve information about the device.  Fills in provided
+ * struct vfio_device_info.  Caller sets argsz.
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FLAGS_RESET	(1 << 0)	/* Device supports reset */
+#define VFIO_DEVICE_FLAGS_PCI	(1 << 1)	/* vfio-pci device */
+#define VFIO_DEVICE_FLAGS_PLATFORM (1 << 2)	/* vfio-platform device */
+#define VFIO_DEVICE_FLAGS_AMBA  (1 << 3)	/* vfio-amba device */
+#define VFIO_DEVICE_FLAGS_CCW	(1 << 4)	/* vfio-ccw device */
+#define VFIO_DEVICE_FLAGS_AP	(1 << 5)	/* vfio-ap device */
+#define VFIO_DEVICE_FLAGS_FSL_MC (1 << 6)	/* vfio-fsl-mc device */
+#define VFIO_DEVICE_FLAGS_CAPS	(1 << 7)	/* Info supports caps */
+#define VFIO_DEVICE_FLAGS_CDX	(1 << 8)	/* vfio-cdx device */
+	__u32	num_regions;	/* Max region index + 1 */
+	__u32	num_irqs;	/* Max IRQ index + 1 */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+#define VFIO_DEVICE_GET_INFO		_IO(VFIO_TYPE, VFIO_BASE + 7)
+
+/*
+ * A vendor driver using the mediated device framework should provide a device_api
+ * attribute in its supported type attribute groups. The device API string should be
+ * one of the following, corresponding to the device flags in the vfio_device_info
+ * structure.
+ */
+
+#define VFIO_DEVICE_API_PCI_STRING		"vfio-pci"
+#define VFIO_DEVICE_API_PLATFORM_STRING		"vfio-platform"
+#define VFIO_DEVICE_API_AMBA_STRING		"vfio-amba"
+#define VFIO_DEVICE_API_CCW_STRING		"vfio-ccw"
+#define VFIO_DEVICE_API_AP_STRING		"vfio-ap"
+
+/*
+ * The following capabilities are unique to s390 zPCI devices.  Their contents
+ * are further-defined in vfio_zdev.h
+ */
+#define VFIO_DEVICE_INFO_CAP_ZPCI_BASE		1
+#define VFIO_DEVICE_INFO_CAP_ZPCI_GROUP		2
+#define VFIO_DEVICE_INFO_CAP_ZPCI_UTIL		3
+#define VFIO_DEVICE_INFO_CAP_ZPCI_PFIP		4
+
+/*
+ * The following VFIO_DEVICE_INFO capability reports support for PCIe AtomicOp
+ * completion to the root bus with supported widths provided via flags.
+ */
+#define VFIO_DEVICE_INFO_CAP_PCI_ATOMIC_COMP	5
+struct vfio_device_info_cap_pci_atomic_comp {
+	struct vfio_info_cap_header header;
+	__u32 flags;
+#define VFIO_PCI_ATOMIC_COMP32	(1 << 0)
+#define VFIO_PCI_ATOMIC_COMP64	(1 << 1)
+#define VFIO_PCI_ATOMIC_COMP128	(1 << 2)
+	__u32 reserved;
+};
+
+/**
+ * VFIO_DEVICE_GET_REGION_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 8,
+ *				       struct vfio_region_info)
+ *
+ * Retrieve information about a device region.  Caller provides
+ * struct vfio_region_info with index value set.  Caller sets argsz.
+ * Implementation of region mapping is bus driver specific.  This is
+ * intended to describe MMIO, I/O port, as well as bus specific
+ * regions (ex. PCI config space).  Zero sized regions may be used
+ * to describe unimplemented regions (ex. unimplemented PCI BARs).
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_region_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_REGION_INFO_FLAG_READ	(1 << 0) /* Region supports read */
+#define VFIO_REGION_INFO_FLAG_WRITE	(1 << 1) /* Region supports write */
+#define VFIO_REGION_INFO_FLAG_MMAP	(1 << 2) /* Region supports mmap */
+#define VFIO_REGION_INFO_FLAG_CAPS	(1 << 3) /* Info supports caps */
+	__u32	index;		/* Region index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+	__aligned_u64	size;	/* Region size (bytes) */
+	__aligned_u64	offset;	/* Region offset from start of device fd */
+};
+#define VFIO_DEVICE_GET_REGION_INFO	_IO(VFIO_TYPE, VFIO_BASE + 8)
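
For illustration, a sketch that queries a region and mmaps it when allowed;
index 0 corresponds to BAR0 for vfio-pci devices:

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void *map_bar0(int device_fd)
{
	struct vfio_region_info reg = {
		.argsz = sizeof(reg),
		.index = 0,	/* BAR0 for vfio-pci */
	};
	void *p;

	if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) != 0 ||
	    !(reg.flags & VFIO_REGION_INFO_FLAG_MMAP) || reg.size == 0)
		return NULL;
	p = mmap(NULL, reg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		 device_fd, reg.offset);
	return p == MAP_FAILED ? NULL : p;
}
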
+
+/*
+ * The sparse mmap capability allows finer granularity of specifying areas
+ * within a region with mmap support.  When specified, the user should only
+ * mmap the offset ranges specified by the areas array.  mmaps outside of the
+ * areas specified may fail (such as the range covering a PCI MSI-X table) or
+ * may result in improper device behavior.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_SPARSE_MMAP	1
+
+struct vfio_region_sparse_mmap_area {
+	__aligned_u64	offset;	/* Offset of mmap'able area within region */
+	__aligned_u64	size;	/* Size of mmap'able area */
+};
+
+struct vfio_region_info_cap_sparse_mmap {
+	struct vfio_info_cap_header header;
+	__u32	nr_areas;
+	__u32	reserved;
+	struct vfio_region_sparse_mmap_area areas[];
+};
+
+/*
+ * The device specific type capability allows regions unique to a specific
+ * device or class of devices to be exposed.  This helps solve the problem for
+ * vfio bus drivers of defining which region indexes correspond to which region
+ * on the device, without needing to resort to static indexes, as done by
+ * vfio-pci.  For instance, if we were to go back in time, we might remove
+ * VFIO_PCI_VGA_REGION_INDEX and let vfio-pci simply define that all indexes
+ * greater than or equal to VFIO_PCI_NUM_REGIONS are device specific and we'd
+ * make a "VGA" device specific type to describe the VGA access space.  This
+ * means that non-VGA devices wouldn't need to waste this index, and thus the
+ * address space associated with it due to implementation of device file
+ * descriptor offsets in vfio-pci.
+ *
+ * The current implementation is now part of the user ABI, so we can't use this
+ * for VGA, but there are other upcoming use cases, such as opregions for Intel
+ * IGD devices and framebuffers for vGPU devices.  We missed VGA, but we'll
+ * use this for future additions.
+ *
+ * The structure below defines version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_TYPE	2
+
+struct vfio_region_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;	/* global per bus driver */
+	__u32 subtype;	/* type specific */
+};
+
+/*
+ * List of region types, global per bus driver.
+ * If you introduce a new type, please add it here.
+ */
+
+/* PCI region type containing a PCI vendor part */
+#define VFIO_REGION_TYPE_PCI_VENDOR_TYPE	(1 << 31)
+#define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
+#define VFIO_REGION_TYPE_GFX                    (1)
+#define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION_DEPRECATED   (3)
+
+/* sub-types for VFIO_REGION_TYPE_PCI_* */
+
+/* 8086 vendor PCI sub-types */
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION	(1)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_HOST_CFG	(2)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_LPC_CFG	(3)
+
+/* 10de vendor PCI sub-types */
+/*
+ * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
+
+/* 1014 vendor PCI sub-types */
+/*
+ * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
+ * to do TLB invalidation on a GPU.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+
+/* sub-types for VFIO_REGION_TYPE_GFX */
+#define VFIO_REGION_SUBTYPE_GFX_EDID            (1)
+
+/**
+ * struct vfio_region_gfx_edid - EDID region layout.
+ *
+ * Set display link state and EDID blob.
+ *
+ * The EDID blob has monitor information such as brand, name, serial
+ * number, physical size, supported video modes and more.
+ *
+ * This special region allows userspace (typically qemu) to set a virtual
+ * EDID for the virtual monitor, which allows a flexible display
+ * configuration.
+ *
+ * For the edid blob spec look here:
+ *    https://en.wikipedia.org/wiki/Extended_Display_Identification_Data
+ *
+ * On Linux systems you can find the EDID blob in sysfs:
+ *    /sys/class/drm/${card}/${connector}/edid
+ *
+ * You can use the edid-decode utility (comes with xorg-x11-utils) to
+ * decode the EDID blob.
+ *
+ * @edid_offset: location of the edid blob, relative to the
+ *               start of the region (readonly).
+ * @edid_max_size: max size of the edid blob (readonly).
+ * @edid_size: actual edid size (read/write).
+ * @link_state: display link state (read/write).
+ * VFIO_DEVICE_GFX_LINK_STATE_UP: Monitor is turned on.
+ * VFIO_DEVICE_GFX_LINK_STATE_DOWN: Monitor is turned off.
+ * @max_xres: max display width (0 == no limitation, readonly).
+ * @max_yres: max display height (0 == no limitation, readonly).
+ *
+ * EDID update protocol:
+ *   (1) set link-state to down.
+ *   (2) update edid blob and size.
+ *   (3) set link-state to up.
+ */
+struct vfio_region_gfx_edid {
+	__u32 edid_offset;
+	__u32 edid_max_size;
+	__u32 edid_size;
+	__u32 max_xres;
+	__u32 max_yres;
+	__u32 link_state;
+#define VFIO_DEVICE_GFX_LINK_STATE_UP    1
+#define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
+};
+
+/* sub-types for VFIO_REGION_TYPE_CCW */
+#define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
+#define VFIO_REGION_SUBTYPE_CCW_SCHIB		(2)
+#define VFIO_REGION_SUBTYPE_CCW_CRW		(3)
+
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED (1)
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_V1_STOP      (0)
+#define VFIO_DEVICE_STATE_V1_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_V1_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_V1_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_V1_RUNNING | \
+				     VFIO_DEVICE_STATE_V1_SAVING |  \
+				     VFIO_DEVICE_STATE_V1_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_V1_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_V1_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_V1_SAVING | \
+					      VFIO_DEVICE_STATE_V1_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_V1_SAVING | \
+					     VFIO_DEVICE_STATE_V1_RESUMING)
+
+	__u32 reserved;
+	__aligned_u64 pending_bytes;
+	__aligned_u64 data_offset;
+	__aligned_u64 data_size;
+};
+
+/*
+ * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped,
+ * which allows direct access to non-MSIX registers which happen to be within
+ * the same system page.
+ *
+ * Even though userspace gets direct access to the MSIX data, the existing
+ * VFIO_DEVICE_SET_IRQS interface must still be used for MSIX configuration.
+ */
+#define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
+
+/*
+ * Capability with compressed real address (aka SSA - small system address)
+ * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing
+ * and by the userspace to associate a NVLink bridge with a GPU.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT	4
+
+struct vfio_region_info_cap_nvlink2_ssatgt {
+	struct vfio_info_cap_header header;
+	__aligned_u64 tgt;
+};
+
+/*
+ * Capability with an NVLink link speed. The value is read by
+ * the NVlink2 bridge driver from the bridge's "ibm,nvlink-speed"
+ * property in the device tree. The value is fixed in the hardware
+ * and failing to provide the correct value results in the link
+ * not working with no indication from the driver why.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD	5
+
+struct vfio_region_info_cap_nvlink2_lnkspd {
+	struct vfio_info_cap_header header;
+	__u32 link_speed;
+	__u32 __pad;
+};
+
+/**
+ * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
+ *				    struct vfio_irq_info)
+ *
+ * Retrieve information about a device IRQ.  Caller provides
+ * struct vfio_irq_info with index value set.  Caller sets argsz.
+ * Implementation of IRQ mapping is bus driver specific.  Indexes
+ * using multiple IRQs are primarily intended to support MSI-like
+ * interrupt blocks.  Zero count irq blocks may be used to describe
+ * unimplemented interrupt types.
+ *
+ * The EVENTFD flag indicates the interrupt index supports eventfd based
+ * signaling.
+ *
+ * The MASKABLE flag indicates the index supports MASK and UNMASK
+ * actions described below.
+ *
+ * AUTOMASKED indicates that after signaling, the interrupt line is
+ * automatically masked by VFIO and the user needs to unmask the line
+ * to receive new interrupts.  This is primarily intended to distinguish
+ * level triggered interrupts.
+ *
+ * The NORESIZE flag indicates that the interrupt lines within the index
+ * are setup as a set and new subindexes cannot be enabled without first
+ * disabling the entire index.  This is used for interrupts like PCI MSI
+ * and MSI-X where the driver may only use a subset of the available
+ * indexes, but VFIO needs to enable a specific number of vectors
+ * upfront.  In the case of MSI-X, where the user can enable MSI-X and
+ * then add and unmask vectors, it's up to userspace to make the decision
+ * whether to allocate the maximum supported number of vectors or tear
+ * down setup and incrementally increase the vectors as each is enabled.
+ * Absence of the NORESIZE flag indicates that vectors can be enabled
+ * and disabled dynamically without impacting other vectors within the
+ * index.
+ */
+struct vfio_irq_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_INFO_EVENTFD		(1 << 0)
+#define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
+#define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
+#define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+	__u32	index;		/* IRQ index */
+	__u32	count;		/* Number of IRQs within this index */
+};
+#define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
+
+/**
+ * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
+ *
+ * Set signaling, masking, and unmasking of interrupts.  Caller provides
+ * struct vfio_irq_set with all fields set.  'start' and 'count' indicate
+ * the range of subindexes being specified.
+ *
+ * The DATA flags specify the type of data provided.  If DATA_NONE, the
+ * operation performs the specified action immediately on the specified
+ * interrupt(s).  For example, to unmask AUTOMASKED interrupt [0,0]:
+ * flags = (DATA_NONE|ACTION_UNMASK), index = 0, start = 0, count = 1.
+ *
+ * DATA_BOOL allows sparse support for the same on arrays of interrupts.
+ * For example, to mask interrupts [0,1] and [0,3] (but not [0,2]):
+ * flags = (DATA_BOOL|ACTION_MASK), index = 0, start = 1, count = 3,
+ * data = {1,0,1}
+ *
+ * DATA_EVENTFD binds the specified ACTION to the provided __s32 eventfd.
+ * A value of -1 can be used to either de-assign interrupts if already
+ * assigned or skip un-assigned interrupts.  For example, to set an eventfd
+ * to be triggered for interrupts [0,0] and [0,2]:
+ * flags = (DATA_EVENTFD|ACTION_TRIGGER), index = 0, start = 0, count = 3,
+ * data = {fd1, -1, fd2}
+ * If index [0,1] is previously set, two count = 1 ioctl calls would be
+ * required to set [0,0] and [0,2] without changing [0,1].
+ *
+ * Once a signaling mechanism is set, DATA_BOOL or DATA_NONE can be used
+ * with ACTION_TRIGGER to perform kernel level interrupt loopback testing
+ * from userspace (ie. simulate hardware triggering).
+ *
+ * Setting of an event triggering mechanism to userspace for ACTION_TRIGGER
+ * enables the interrupt index for the device.  Individual subindex interrupts
+ * can be disabled using the -1 value for DATA_EVENTFD or the index can be
+ * disabled as a whole with: flags = (DATA_NONE|ACTION_TRIGGER), count = 0.
+ *
+ * Note that ACTION_[UN]MASK specify user->kernel signaling (irqfds) while
+ * ACTION_TRIGGER specifies kernel->user signaling.
+ */
+struct vfio_irq_set {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_SET_DATA_NONE		(1 << 0) /* Data not present */
+#define VFIO_IRQ_SET_DATA_BOOL		(1 << 1) /* Data is bool (u8) */
+#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2) /* Data is eventfd (s32) */
+#define VFIO_IRQ_SET_ACTION_MASK	(1 << 3) /* Mask interrupt */
+#define VFIO_IRQ_SET_ACTION_UNMASK	(1 << 4) /* Unmask interrupt */
+#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5) /* Trigger interrupt */
+	__u32	index;
+	__u32	start;
+	__u32	count;
+	__u8	data[];
+};
+#define VFIO_DEVICE_SET_IRQS		_IO(VFIO_TYPE, VFIO_BASE + 10)
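
As an illustrative sketch of binding an eventfd to MSI-X vector 0 (device_fd
and the helper name are assumptions; error handling is minimal):

#include <linux/vfio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int enable_msix_vector0(int device_fd)
{
	struct vfio_irq_info irq = {
		.argsz = sizeof(irq),
		.index = VFIO_PCI_MSIX_IRQ_INDEX,
	};
	struct vfio_irq_set *set;
	size_t sz = sizeof(*set) + sizeof(__s32);
	int efd, ret;

	if (ioctl(device_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq) < 0 || !irq.count)
		return -1;

	efd = eventfd(0, EFD_CLOEXEC);
	if (efd < 0)
		return -1;

	set = calloc(1, sz);
	if (!set) {
		close(efd);
		return -1;
	}
	set->argsz = sz;
	set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
	set->index = VFIO_PCI_MSIX_IRQ_INDEX;
	set->start = 0;
	set->count = 1;
	memcpy(set->data, &efd, sizeof(__s32));

	ret = ioctl(device_fd, VFIO_DEVICE_SET_IRQS, set);
	free(set);
	return ret ? -1 : efd;	/* read(efd, ...) then reports interrupts */
}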
+
+#define VFIO_IRQ_SET_DATA_TYPE_MASK	(VFIO_IRQ_SET_DATA_NONE | \
+					 VFIO_IRQ_SET_DATA_BOOL | \
+					 VFIO_IRQ_SET_DATA_EVENTFD)
+#define VFIO_IRQ_SET_ACTION_TYPE_MASK	(VFIO_IRQ_SET_ACTION_MASK | \
+					 VFIO_IRQ_SET_ACTION_UNMASK | \
+					 VFIO_IRQ_SET_ACTION_TRIGGER)
+/**
+ * VFIO_DEVICE_RESET - _IO(VFIO_TYPE, VFIO_BASE + 11)
+ *
+ * Reset a device.
+ */
+#define VFIO_DEVICE_RESET		_IO(VFIO_TYPE, VFIO_BASE + 11)
+
+/*
+ * The VFIO-PCI bus driver makes use of the following fixed region and
+ * IRQ index mapping.  Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_PCI_BAR0_REGION_INDEX,
+	VFIO_PCI_BAR1_REGION_INDEX,
+	VFIO_PCI_BAR2_REGION_INDEX,
+	VFIO_PCI_BAR3_REGION_INDEX,
+	VFIO_PCI_BAR4_REGION_INDEX,
+	VFIO_PCI_BAR5_REGION_INDEX,
+	VFIO_PCI_ROM_REGION_INDEX,
+	VFIO_PCI_CONFIG_REGION_INDEX,
+	/*
+	 * Expose VGA regions defined for PCI base class 03, subclass 00.
+	 * This includes I/O port ranges 0x3b0 to 0x3bb and 0x3c0 to 0x3df
+	 * as well as the MMIO range 0xa0000 to 0xbffff.  Each implemented
+	 * range is found at its identity mapped offset from the region
+	 * offset, for example 0x3b0 is region_info.offset + 0x3b0.  Areas
+	 * between described ranges are unimplemented.
+	 */
+	VFIO_PCI_VGA_REGION_INDEX,
+	VFIO_PCI_NUM_REGIONS = 9 /* Fixed user ABI, region indexes >=9 use */
+				 /* device specific cap to define content. */
+};
+
+enum {
+	VFIO_PCI_INTX_IRQ_INDEX,
+	VFIO_PCI_MSI_IRQ_INDEX,
+	VFIO_PCI_MSIX_IRQ_INDEX,
+	VFIO_PCI_ERR_IRQ_INDEX,
+	VFIO_PCI_REQ_IRQ_INDEX,
+	VFIO_PCI_NUM_IRQS
+};
+
+/*
+ * The vfio-ccw bus driver makes use of the following fixed region and
+ * IRQ index mapping. Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_CCW_CONFIG_REGION_INDEX,
+	VFIO_CCW_NUM_REGIONS
+};
+
+enum {
+	VFIO_CCW_IO_IRQ_INDEX,
+	VFIO_CCW_CRW_IRQ_INDEX,
+	VFIO_CCW_REQ_IRQ_INDEX,
+	VFIO_CCW_NUM_IRQS
+};
+
+/*
+ * The vfio-ap bus driver makes use of the following IRQ index mapping.
+ * Unimplemented IRQ types return a count of zero.
+ */
+enum {
+	VFIO_AP_REQ_IRQ_INDEX,
+	VFIO_AP_CFG_CHG_IRQ_INDEX,
+	VFIO_AP_NUM_IRQS
+};
+
+/**
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 12,
+ *					      struct vfio_pci_hot_reset_info)
+ *
+ * This command is used to query the affected devices in the hot reset for
+ * a given device.
+ *
+ * This command always reports the segment, bus, and devfn information for
+ * each affected device, and selectively reports the group_id or devid
+ * depending on how the calling device is opened.
+ *
+ *	- If the calling device is opened via the traditional group/container
+ *	  API, group_id is reported.  User should check if it has owned all
+ *	  the affected devices and provides a set of group fds to prove the
+ *	  ownership in VFIO_DEVICE_PCI_HOT_RESET ioctl.
+ *
+ *	- If the calling device is opened as a cdev, devid is reported.
+ *	  Flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set to indicate this
+ *	  data type.  All the affected devices should be represented in
+ *	  the dev_set, ex. bound to a vfio driver, and also be owned by
+ *	  this interface which is determined by the following conditions:
+ *	  1) Has a valid devid within the iommufd_ctx of the calling device.
+ *	     Ownership cannot be determined across separate iommufd_ctx and
+ *	     the cdev calling conventions do not support a proof-of-ownership
+ *	     model as provided in the legacy group interface.  In this case
+ *	     valid devid with value greater than zero is provided in the return
+ *	     structure.
+ *	  2) Does not have a valid devid within the iommufd_ctx of the calling
+ *	     device, but belongs to the same IOMMU group as the calling device
+ *	     or another opened device that has a valid devid within the
+ *	     iommufd_ctx of the calling device.  This provides implicit ownership
+ *	     for devices within the same DMA isolation context.  In this case
+ *	     the devid value of VFIO_PCI_DEVID_OWNED is provided in the return
+ *	     structure.
+ *
+ *	  A devid value of VFIO_PCI_DEVID_NOT_OWNED is provided in the return
+ *	  structure for affected devices where device is NOT represented in the
+ *	  dev_set or ownership is not available.  Such devices prevent the use
+ *	  of VFIO_DEVICE_PCI_HOT_RESET ioctl outside of the proof-of-ownership
+ *	  calling conventions (ie. via legacy group accessed devices).  Flag
+ *	  VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED would be set when all the
+ *	  affected devices are represented in the dev_set and also owned by
+ *	  the user.  This flag is available only when
+ *	  flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set, otherwise reserved.
+ *	  When set, user could invoke VFIO_DEVICE_PCI_HOT_RESET with a zero
+ *	  length fd array on the calling device as the ownership is validated
+ *	  by iommufd_ctx.
+ *
+ * Return: 0 on success, -errno on failure:
+ *	-enospc = insufficient buffer, -enodev = unsupported for device.
+ */
+struct vfio_pci_dependent_device {
+	union {
+		__u32   group_id;
+		__u32	devid;
+#define VFIO_PCI_DEVID_OWNED		0
+#define VFIO_PCI_DEVID_NOT_OWNED	-1
+	};
+	__u16	segment;
+	__u8	bus;
+	__u8	devfn; /* Use PCI_SLOT/PCI_FUNC */
+};
+
+struct vfio_pci_hot_reset_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID		(1 << 0)
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED	(1 << 1)
+	__u32	count;
+	struct vfio_pci_dependent_device	devices[];
+};
+
+#define VFIO_DEVICE_GET_PCI_HOT_RESET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
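
A sketch of the two-call sizing pattern for this ioctl (device_fd and the
helper are assumptions; it relies on the kernel filling 'count' on the
undersized first call, per the -ENOSPC convention above):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <errno.h>
#include <stdlib.h>

static struct vfio_pci_hot_reset_info *get_hot_reset_info(int device_fd)
{
	struct vfio_pci_hot_reset_info probe = { .argsz = sizeof(probe) };
	struct vfio_pci_hot_reset_info *info;
	size_t sz;

	/* Deliberately undersized: learn how many dependent devices exist. */
	if (ioctl(device_fd, VFIO_DEVICE_GET_PCI_HOT_RESET_INFO, &probe) &&
	    errno != ENOSPC)
		return NULL;

	sz = sizeof(*info) +
	     probe.count * sizeof(struct vfio_pci_dependent_device);
	info = calloc(1, sz);
	if (!info)
		return NULL;
	info->argsz = sz;
	if (ioctl(device_fd, VFIO_DEVICE_GET_PCI_HOT_RESET_INFO, info)) {
		free(info);
		return NULL;
	}
	return info;	/* info->count entries in info->devices[] */
}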
+
+/**
+ * VFIO_DEVICE_PCI_HOT_RESET - _IOW(VFIO_TYPE, VFIO_BASE + 13,
+ *				    struct vfio_pci_hot_reset)
+ *
+ * A PCI hot reset results in either a bus or slot reset which may affect
+ * other devices sharing the bus/slot.  The calling user must have
+ * ownership of the full set of affected devices as determined by the
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO ioctl.
+ *
+ * When called on a device file descriptor acquired through the vfio
+ * group interface, the user is required to provide proof of ownership
+ * of those affected devices via the group_fds array in struct
+ * vfio_pci_hot_reset.
+ *
+ * When called on a direct cdev opened vfio device, the flags field of
+ * struct vfio_pci_hot_reset_info reports the ownership status of the
+ * affected devices and this ioctl must be called with an empty group_fds
+ * array.  See above INFO ioctl definition for ownership requirements.
+ *
+ * Mixed usage of legacy groups and cdevs across the set of affected
+ * devices is not supported.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_pci_hot_reset {
+	__u32	argsz;
+	__u32	flags;
+	__u32	count;
+	__s32	group_fds[];
+};
+
+#define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE + 13)
+
+/**
+ * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
+ *                                    struct vfio_device_query_gfx_plane)
+ *
+ * Set the drm_plane_type and flags, then retrieve the gfx plane info.
+ *
+ * flags supported:
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF are set
+ *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
+ *   support for dma-buf.
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION are set
+ *   to ask if the mdev supports region. 0 on support, -EINVAL on no
+ *   support for region.
+ * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION is set
+ *   with each call to query the plane info.
+ * - Others are invalid and return -EINVAL.
+ *
+ * Note:
+ * 1. Plane could be disabled by guest. In that case, success will be
+ *    returned with zero-initialized drm_format, size, width and height
+ *    fields.
+ * 2. x_hot/y_hot is set to 0xFFFFFFFF if no hotspot information available
+ *
+ * Return: 0 on success, -errno on other failure.
+ */
+struct vfio_device_gfx_plane_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0)
+#define VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1)
+#define VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
+	/* in */
+	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
+	/* out */
+	__u32 drm_format;	/* drm format of plane */
+	__aligned_u64 drm_format_mod;   /* tiled mode */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page*/
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane*/
+	__u32 x_hot;    /* horizontal position of cursor hotspot */
+	__u32 y_hot;    /* vertical position of cursor hotspot */
+	union {
+		__u32 region_index;	/* region index */
+		__u32 dmabuf_id;	/* dma-buf id */
+	};
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/**
+ * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15, __u32)
+ *
+ * Return a new dma-buf file descriptor for an exposed guest framebuffer
+ * described by the provided dmabuf_id. The dmabuf_id is returned from
+ * VFIO_DEVICE_QUERY_GFX_PLANE as a token of the exposed guest framebuffer.
+ */
+
+#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
+
+/**
+ * VFIO_DEVICE_IOEVENTFD - _IOW(VFIO_TYPE, VFIO_BASE + 16,
+ *                              struct vfio_device_ioeventfd)
+ *
+ * Perform a write to the device at the specified device fd offset, with
+ * the specified data and width when the provided eventfd is triggered.
+ * vfio bus drivers may not support this for all regions, for all widths,
+ * or at all.  vfio-pci currently only enables support for BAR regions,
+ * excluding the MSI-X vector table.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_ioeventfd {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_IOEVENTFD_8		(1 << 0) /* 1-byte write */
+#define VFIO_DEVICE_IOEVENTFD_16	(1 << 1) /* 2-byte write */
+#define VFIO_DEVICE_IOEVENTFD_32	(1 << 2) /* 4-byte write */
+#define VFIO_DEVICE_IOEVENTFD_64	(1 << 3) /* 8-byte write */
+#define VFIO_DEVICE_IOEVENTFD_SIZE_MASK	(0xf)
+	__aligned_u64	offset;		/* device fd offset of write */
+	__aligned_u64	data;		/* data to be written */
+	__s32	fd;			/* -1 for de-assignment */
+	__u32	reserved;
+};
+
+#define VFIO_DEVICE_IOEVENTFD		_IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/**
+ * VFIO_DEVICE_FEATURE - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *			       struct vfio_device_feature)
+ *
+ * Get, set, or probe feature data of the device.  The feature is selected
+ * using the FEATURE_MASK portion of the flags field.  Support for a feature
+ * can be probed by setting both the FEATURE_MASK and PROBE bits.  A probe
+ * may optionally include the GET and/or SET bits to determine read vs write
+ * access of the feature respectively.  Probing a feature will return success
+ * if the feature is supported and all of the optionally indicated GET/SET
+ * methods are supported.  The format of the data portion of the structure is
+ * specific to the given feature.  The data portion is not required for
+ * probing.  GET and SET are mutually exclusive, except for use with PROBE.
+ *
+ * Return 0 on success, -errno on failure.
+ */
+struct vfio_device_feature {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FEATURE_MASK	(0xffff) /* 16-bit feature index */
+#define VFIO_DEVICE_FEATURE_GET		(1 << 16) /* Get feature into data[] */
+#define VFIO_DEVICE_FEATURE_SET		(1 << 17) /* Set feature from data[] */
+#define VFIO_DEVICE_FEATURE_PROBE	(1 << 18) /* Probe feature support */
+	__u8	data[];
+};
+
+#define VFIO_DEVICE_FEATURE		_IO(VFIO_TYPE, VFIO_BASE + 17)
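
A minimal sketch of probing and then reading a feature, using the migration
feature defined further below as the example (device_fd and the helper are
assumptions):

#include <linux/vfio.h>
#include <sys/ioctl.h>

static int get_migration_flags(int device_fd, __u64 *flags_out)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_migration) + 7) / 8] = {};
	struct vfio_device_feature *feat = (void *)buf;
	struct vfio_device_feature_migration *mig = (void *)feat->data;

	feat->argsz = sizeof(buf);
	feat->flags = VFIO_DEVICE_FEATURE_PROBE | VFIO_DEVICE_FEATURE_GET |
		      VFIO_DEVICE_FEATURE_MIGRATION;
	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feat))
		return -1;	/* feature (or GET access) not supported */

	feat->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;
	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feat))
		return -1;
	*flags_out = mig->flags;	/* e.g. VFIO_MIGRATION_STOP_COPY */
	return 0;
}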
+
+/*
+ * VFIO_DEVICE_BIND_IOMMUFD - _IOR(VFIO_TYPE, VFIO_BASE + 18,
+ *				   struct vfio_device_bind_iommufd)
+ * @argsz:	 User filled size of this data.
+ * @flags:	 Must be 0.
+ * @iommufd:	 iommufd to bind.
+ * @out_devid:	 The device id generated by this bind. devid is a handle for
+ *		 this device/iommufd bond and can be used in IOMMUFD commands.
+ *
+ * Bind a vfio_device to the specified iommufd.
+ *
+ * User is restricted from accessing the device before the binding operation
+ * is completed.  Only allowed on cdev fds.
+ *
+ * Unbind is automatically conducted when device fd is closed.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_bind_iommufd {
+	__u32		argsz;
+	__u32		flags;
+	__s32		iommufd;
+	__u32		out_devid;
+};
+
+#define VFIO_DEVICE_BIND_IOMMUFD	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/*
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 19,
+ *					struct vfio_device_attach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for attach.
+ * @pt_id:	Input the target id which can represent an ioas or a hwpt
+ *		allocated via iommufd subsystem.
+ *		Output the input ioas id or the attached hwpt id which could
+ *		be the specified hwpt itself or a hwpt automatically created
+ *		for the specified ioas by kernel during the attachment.
+ * @pasid:	The pasid to be attached, only meaningful when
+ *		VFIO_DEVICE_ATTACH_PASID is set in @flags
+ *
+ * Associate the device with an address space within the bound iommufd.
+ * Undo by VFIO_DEVICE_DETACH_IOMMUFD_PT or device fd close.  This is only
+ * allowed on cdev fds.
+ *
+ * If a vfio device or a pasid of this device is currently attached to a valid
+ * hw_pagetable (hwpt), without doing a VFIO_DEVICE_DETACH_IOMMUFD_PT, a second
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT ioctl passing in another hwpt id is allowed.
+ * This action, also known as a hw_pagetable replacement, will replace the
+ * currently attached hwpt of the device or the pasid of this device with a new
+ * hwpt corresponding to the given pt_id.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_attach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_ATTACH_PASID	(1 << 0)
+	__u32	pt_id;
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_ATTACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 19)
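
A sketch of the cdev flow described above; device_fd (an open
/dev/vfio/devices/vfioX), iommufd (an open /dev/iommu) and ioas_id (allocated
through the iommufd API) are all assumed to exist already:

#include <linux/vfio.h>
#include <sys/ioctl.h>

static int bind_and_attach(int device_fd, int iommufd, __u32 ioas_id)
{
	struct vfio_device_bind_iommufd bind = {
		.argsz = sizeof(bind),
		.iommufd = iommufd,
	};
	struct vfio_device_attach_iommufd_pt attach = {
		.argsz = sizeof(attach),
		.pt_id = ioas_id,
	};

	if (ioctl(device_fd, VFIO_DEVICE_BIND_IOMMUFD, &bind))
		return -1;
	/* bind.out_devid identifies this device in IOMMUFD commands. */
	if (ioctl(device_fd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach))
		return -1;
	/* attach.pt_id now holds the attached ioas/hwpt id. */
	return 0;
}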
+
+/*
+ * VFIO_DEVICE_DETACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 20,
+ *					struct vfio_device_detach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for detach.
+ * @pasid:	The pasid to be detached, only meaningful when
+ *		VFIO_DEVICE_DETACH_PASID is set in @flags
+ *
+ * Remove the association of the device or a pasid of the device and its current
+ * associated address space.  After it, the device or the pasid should be in a
+ * blocking DMA state.  This is only allowed on cdev fds.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_detach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_DETACH_PASID	(1 << 0)
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_DETACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/*
+ * Provide support for setting a PCI VF Token, which is used as a shared
+ * secret between PF and VF drivers.  This feature may only be set on a
+ * PCI SR-IOV PF when SR-IOV is enabled on the PF and there are no existing
+ * open VFs.  Data provided when setting this feature is a 16-byte array
+ * (__u8 b[16]), representing a UUID.
+ */
+#define VFIO_DEVICE_FEATURE_PCI_VF_TOKEN	(0)
+
+/*
+ * Indicates the device can support the migration API through
+ * VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE. If this GET succeeds, the RUNNING and
+ * ERROR states are always supported. Support for additional states is
+ * indicated via the flags field; at least VFIO_MIGRATION_STOP_COPY must be
+ * set.
+ *
+ * VFIO_MIGRATION_STOP_COPY means that STOP, STOP_COPY and
+ * RESUMING are supported.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P means that RUNNING_P2P
+ * is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_PRE_COPY means that
+ * PRE_COPY is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY
+ * means that RUNNING_P2P, PRE_COPY and PRE_COPY_P2P are supported
+ * in addition to the STOP_COPY states.
+ *
+ * Other combinations of flags have behavior to be defined in the future.
+ */
+struct vfio_device_feature_migration {
+	__aligned_u64 flags;
+#define VFIO_MIGRATION_STOP_COPY	(1 << 0)
+#define VFIO_MIGRATION_P2P		(1 << 1)
+#define VFIO_MIGRATION_PRE_COPY		(1 << 2)
+};
+#define VFIO_DEVICE_FEATURE_MIGRATION 1
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, execute a migration state change on the VFIO
+ * device. The new state is supplied in device_state, see enum
+ * vfio_device_mig_state for details
+ *
+ * The kernel migration driver must fully transition the device to the new state
+ * value before the operation returns to the user.
+ *
+ * The kernel migration driver must not generate asynchronous device state
+ * transitions outside of manipulation by the user or the VFIO_DEVICE_RESET
+ * ioctl as described above.
+ *
+ * If this function fails then current device_state may be the original
+ * operating state or some other state along the combination transition path.
+ * The user can then decide if it should execute a VFIO_DEVICE_RESET, attempt
+ * to return to the original state, or attempt to return to some other state
+ * such as RUNNING or STOP.
+ *
+ * If the new_state starts a new data transfer session then the FD associated
+ * with that session is returned in data_fd. The user is responsible to close
+ * this FD when it is finished. The user must consider the migration data stream
+ * carried over the FD to be opaque and must preserve the byte order of the
+ * stream. The user is not required to preserve buffer segmentation when writing
+ * the data stream during the RESUMING operation.
+ *
+ * Upon VFIO_DEVICE_FEATURE_GET, get the current migration state of the VFIO
+ * device, data_fd will be -1.
+ */
+struct vfio_device_feature_mig_state {
+	__u32 device_state; /* From enum vfio_device_mig_state */
+	__s32 data_fd;
+};
+#define VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE 2
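
A hedged sketch of driving the state machine via FEATURE_SET (device_fd and
the helper are assumptions); the caller owns any returned data_fd:

#include <linux/vfio.h>
#include <sys/ioctl.h>

static int set_mig_state(int device_fd, __u32 new_state)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_mig_state) + 7) / 8] = {};
	struct vfio_device_feature *feat = (void *)buf;
	struct vfio_device_feature_mig_state *mig = (void *)feat->data;

	feat->argsz = sizeof(buf);
	feat->flags = VFIO_DEVICE_FEATURE_SET |
		      VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = new_state;	/* e.g. VFIO_DEVICE_STATE_STOP_COPY */
	mig->data_fd = -1;

	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feat))
		return -1;
	return mig->data_fd;	/* -1 unless a transfer session was started */
}

For instance, entering STOP_COPY returns an FD whose contents are read()
until end of stream to capture the device state.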
+
+/*
+ * The device migration Finite State Machine is described by the enum
+ * vfio_device_mig_state. Some of the FSM arcs will create a migration data
+ * transfer session by returning a FD, in this case the migration data will
+ * flow over the FD using read() and write() as discussed below.
+ *
+ * There are 5 states to support VFIO_MIGRATION_STOP_COPY:
+ *  RUNNING - The device is running normally
+ *  STOP - The device does not change the internal or external state
+ *  STOP_COPY - The device internal state can be read out
+ *  RESUMING - The device is stopped and is loading a new internal state
+ *  ERROR - The device has failed and must be reset
+ *
+ * And optional states to support VFIO_MIGRATION_P2P:
+ *  RUNNING_P2P - RUNNING, except the device cannot do peer to peer DMA
+ * And VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY - The device is running normally but tracking internal state
+ *             changes
+ * And VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY_P2P - PRE_COPY, except the device cannot do peer to peer DMA
+ *
+ * The FSM takes actions on the arcs between FSM states. The driver implements
+ * the following behavior for the FSM arcs:
+ *
+ * RUNNING_P2P -> STOP
+ * STOP_COPY -> STOP
+ *   While in STOP the device must stop the operation of the device. The device
+ *   must not generate interrupts, DMA, or any other change to external state.
+ *   It must not change its internal state. When stopped the device and kernel
+ *   migration driver must accept and respond to interaction to support external
+ *   subsystems in the STOP state, for example PCI MSI-X and PCI config space.
+ *   Failure by the user to restrict device access while in STOP must not result
+ *   in error conditions outside the user context (ex. host system faults).
+ *
+ *   The STOP_COPY arc will terminate a data transfer session.
+ *
+ * RESUMING -> STOP
+ *   Leaving RESUMING terminates a data transfer session and indicates the
+ *   device should complete processing of the data delivered by write(). The
+ *   kernel migration driver should complete the incorporation of data written
+ *   to the data transfer FD into the device internal state and perform
+ *   final validity and consistency checking of the new device state. If the
+ *   user provided data is found to be incomplete, inconsistent, or otherwise
+ *   invalid, the migration driver must fail the SET_STATE ioctl and
+ *   optionally go to the ERROR state as described below.
+ *
+ *   While in STOP the device has the same behavior as other STOP states
+ *   described above.
+ *
+ *   To abort a RESUMING session the device must be reset.
+ *
+ * PRE_COPY -> RUNNING
+ * RUNNING_P2P -> RUNNING
+ *   While in RUNNING the device is fully operational, the device may generate
+ *   interrupts, DMA, respond to MMIO, all vfio device regions are functional,
+ *   and the device may advance its internal state.
+ *
+ *   The PRE_COPY arc will terminate a data transfer session.
+ *
+ * PRE_COPY_P2P -> RUNNING_P2P
+ * RUNNING -> RUNNING_P2P
+ * STOP -> RUNNING_P2P
+ *   While in RUNNING_P2P the device is partially running in the P2P quiescent
+ *   state defined below.
+ *
+ *   The PRE_COPY_P2P arc will terminate a data transfer session.
+ *
+ * RUNNING -> PRE_COPY
+ * RUNNING_P2P -> PRE_COPY_P2P
+ * STOP -> STOP_COPY
+ *   PRE_COPY, PRE_COPY_P2P and STOP_COPY form the "saving group" of states
+ *   which share a data transfer session. Moving between these states alters
+ *   what is streamed in session, but does not terminate or otherwise affect
+ *   the associated fd.
+ *
+ *   These arcs begin the process of saving the device state and will return a
+ *   new data_fd. The migration driver may perform actions such as enabling
+ *   dirty logging of device state when entering PRE_COPY or PRE_COPY_P2P.
+ *
+ *   Each arc does not change the device operation, the device remains
+ *   RUNNING, P2P quiesced or in STOP. The STOP_COPY state is described below
+ *   in PRE_COPY_P2P -> STOP_COPY.
+ *
+ * PRE_COPY -> PRE_COPY_P2P
+ *   Entering PRE_COPY_P2P continues all the behaviors of PRE_COPY above.
+ *   However, while in the PRE_COPY_P2P state, the device is partially running
+ *   in the P2P quiescent state defined below, like RUNNING_P2P.
+ *
+ * PRE_COPY_P2P -> PRE_COPY
+ *   This arc allows returning the device to a full RUNNING behavior while
+ *   continuing all the behaviors of PRE_COPY.
+ *
+ * PRE_COPY_P2P -> STOP_COPY
+ *   While in the STOP_COPY state the device has the same behavior as STOP
+ *   with the addition that the data transfers session continues to stream the
+ *   migration state. End of stream on the FD indicates the entire device
+ *   state has been transferred.
+ *
+ *   The user should take steps to restrict access to vfio device regions while
+ *   the device is in STOP_COPY or risk corruption of the device migration data
+ *   stream.
+ *
+ * STOP -> RESUMING
+ *   Entering the RESUMING state starts a process of restoring the device state
+ *   and will return a new data_fd. The data stream fed into the data_fd should
+ *   be taken from the data transfer output of a single FD during saving from
+ *   a compatible device. The migration driver may alter/reset the internal
+ *   device state for this arc if required to prepare the device to receive the
+ *   migration data.
+ *
+ * STOP_COPY -> PRE_COPY
+ * STOP_COPY -> PRE_COPY_P2P
+ *   These arcs are not permitted and return error if requested. Future
+ *   revisions of this API may define behaviors for these arcs, in this case
+ *   support will be discoverable by a new flag in
+ *   VFIO_DEVICE_FEATURE_MIGRATION.
+ *
+ * any -> ERROR
+ *   ERROR cannot be specified as a device state, however any transition request
+ *   can be failed with an errno return and may then move the device_state into
+ *   ERROR. In this case the device was unable to execute the requested arc and
+ *   was also unable to restore the device to any valid device_state.
+ *   To recover from ERROR VFIO_DEVICE_RESET must be used to return the
+ *   device_state back to RUNNING.
+ *
+ * The optional peer to peer (P2P) quiescent state is intended to be a quiescent
+ * state for the device for the purposes of managing multiple devices within a
+ * user context where peer-to-peer DMA between devices may be active. The
+ * RUNNING_P2P and PRE_COPY_P2P states must prevent the device from initiating
+ * any new P2P DMA transactions. If the device can identify P2P transactions
+ * then it can stop only P2P DMA, otherwise it must stop all DMA. The migration
+ * driver must complete any such outstanding operations prior to completing the
+ * FSM arc into a P2P state. For the purpose of specification the states
+ * behave as though the device was fully running if not supported. As in
+ * STOP or STOP_COPY, the user must not touch the device, otherwise the state
+ * can be exited.
+ *
+ * The remaining possible transitions are interpreted as combinations of the
+ * above FSM arcs. As there are multiple paths through the FSM arcs the path
+ * should be selected based on the following rules:
+ *   - Select the shortest path.
+ *   - The path cannot have saving group states as interior arcs, only
+ *     starting/end states.
+ * Refer to vfio_mig_get_next_state() for the result of the algorithm.
+ *
+ * The automatic transit through the FSM arcs that make up the combination
+ * transition is invisible to the user. When working with combination arcs the
+ * user may see any step along the path in the device_state if SET_STATE
+ * fails. When handling these types of errors users should anticipate future
+ * revisions of this protocol using new states and those states becoming
+ * visible in this case.
+ *
+ * The optional states cannot be used with SET_STATE if the device does not
+ * support them. The user can discover if these states are supported by using
+ * VFIO_DEVICE_FEATURE_MIGRATION. By using combination transitions the user can
+ * avoid knowing about these optional states if the kernel driver supports them.
+ *
+ * Arcs touching PRE_COPY and PRE_COPY_P2P are removed if support for PRE_COPY
+ * is not present.
+ */
+enum vfio_device_mig_state {
+	VFIO_DEVICE_STATE_ERROR = 0,
+	VFIO_DEVICE_STATE_STOP = 1,
+	VFIO_DEVICE_STATE_RUNNING = 2,
+	VFIO_DEVICE_STATE_STOP_COPY = 3,
+	VFIO_DEVICE_STATE_RESUMING = 4,
+	VFIO_DEVICE_STATE_RUNNING_P2P = 5,
+	VFIO_DEVICE_STATE_PRE_COPY = 6,
+	VFIO_DEVICE_STATE_PRE_COPY_P2P = 7,
+	VFIO_DEVICE_STATE_NR,
+};
+
+/**
+ * VFIO_MIG_GET_PRECOPY_INFO - _IO(VFIO_TYPE, VFIO_BASE + 21)
+ *
+ * This ioctl is used on the migration data FD in the precopy phase of the
+ * migration data transfer. It returns an estimate of the current data sizes
+ * remaining to be transferred. It allows the user to judge when it is
+ * appropriate to leave PRE_COPY for STOP_COPY.
+ *
+ * This ioctl is valid only in PRE_COPY states and kernel driver should
+ * return -EINVAL from any other migration state.
+ *
+ * The vfio_precopy_info data structure returned by this ioctl provides
+ * estimates of data available from the device during the PRE_COPY states.
+ * This estimate is split into two categories, initial_bytes and
+ * dirty_bytes.
+ *
+ * The initial_bytes field indicates the amount of initial precopy
+ * data available from the device. This field should have a non-zero initial
+ * value and decrease as migration data is read from the device.
+ * It is recommended to leave PRE_COPY for STOP_COPY only after this field
+ * reaches zero. Leaving PRE_COPY earlier might make things slower.
+ *
+ * The dirty_bytes field tracks device state changes relative to data
+ * previously retrieved.  This field starts at zero and may increase as
+ * the internal device state is modified or decrease as that modified
+ * state is read from the device.
+ *
+ * Userspace may use the combination of these fields to estimate the
+ * potential data size available during the PRE_COPY phases, as well as
+ * trends relative to the rate the device is dirtying its internal
+ * state, but these fields are not required to have any bearing relative
+ * to the data size available during the STOP_COPY phase.
+ *
+ * Drivers have a lot of flexibility in when and what they transfer during the
+ * PRE_COPY phase, and how they report this from VFIO_MIG_GET_PRECOPY_INFO.
+ *
+ * During pre-copy the migration data FD has a temporary "end of stream" that is
+ * reached when both initial_bytes and dirty_bytes are zero. For instance, this
+ * may indicate that the device is idle and not currently dirtying any internal
+ * state. When read() is done on this temporary end of stream the kernel driver
+ * should return ENOMSG from read(). Userspace can wait for more data (which may
+ * never come) by using poll.
+ *
+ * Once in STOP_COPY the migration data FD has a permanent end of stream
+ * signaled in the usual way by read() always returning 0 and poll always
+ * returning readable. ENOMSG may not be returned in STOP_COPY.
+ * Support for this ioctl is mandatory if a driver claims to support
+ * VFIO_MIGRATION_PRE_COPY.
+ *
+ * Return: 0 on success, -1 and errno set on failure.
+ */
+struct vfio_precopy_info {
+	__u32 argsz;
+	__u32 flags;
+	__aligned_u64 initial_bytes;
+	__aligned_u64 dirty_bytes;
+};
+
+#define VFIO_MIG_GET_PRECOPY_INFO _IO(VFIO_TYPE, VFIO_BASE + 21)
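
A small sketch of polling pre-copy progress on the migration data FD returned
when entering PRE_COPY (data_fd and the helper are assumptions):

#include <linux/vfio.h>
#include <sys/ioctl.h>

static int precopy_initial_drained(int data_fd)
{
	struct vfio_precopy_info info = { .argsz = sizeof(info) };

	if (ioctl(data_fd, VFIO_MIG_GET_PRECOPY_INFO, &info))
		return -1;
	/* Per the guidance above, move to STOP_COPY once this reaches zero. */
	return info.initial_bytes == 0;
}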
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, allow the device to be moved into a low power
+ * state with the platform-based power management.  Device use of lower power
+ * states depends on factors managed by the runtime power management core,
+ * including system level support and coordinating support among dependent
+ * devices.  Enabling device low power entry does not guarantee lower power
+ * usage by the device, nor is a mechanism provided through this feature to
+ * know the current power state of the device.  If any device access happens
+ * (either from the host or through the vfio uAPI) when the device is in the
+ * low power state, then the host will move the device out of the low power
+ * state as necessary prior to the access.  Once the access is completed, the
+ * device may re-enter the low power state.  For single shot low power support
+ * with wake-up notification, see
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP below.  Access to mmap'd
+ * device regions is disabled on LOW_POWER_ENTRY and may only be resumed after
+ * calling LOW_POWER_EXIT.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY 3
+
+/*
+ * This device feature has the same behavior as
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY with the exception that the user
+ * provides an eventfd for wake-up notification.  When the device moves out of
+ * the low power state for the wake-up, the host will not allow the device to
+ * re-enter a low power state without a subsequent user call to one of the low
+ * power entry device feature IOCTLs.  Access to mmap'd device regions is
+ * disabled on LOW_POWER_ENTRY_WITH_WAKEUP and may only be resumed after the
+ * low power exit.  The low power exit can happen either through LOW_POWER_EXIT
+ * or through any other access (where the wake-up notification has been
+ * generated).  The access to mmap'd device regions will not trigger low power
+ * exit.
+ *
+ * The notification through the provided eventfd will be generated only when
+ * the device has entered and is resumed from a low power state after
+ * calling this device feature IOCTL.  A device that has not entered low power
+ * state, as managed through the runtime power management core, will not
+ * generate a notification through the provided eventfd on access.  Calling the
+ * LOW_POWER_EXIT feature is optional in the case where notification has been
+ * signaled on the provided eventfd that a resume from low power has occurred.
+ */
+struct vfio_device_low_power_entry_with_wakeup {
+	__s32 wakeup_eventfd;
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP 4
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, disallow use of device low power states as
+ * previously enabled via VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY or
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP device features.
+ * This device feature IOCTL may itself generate a wakeup eventfd notification
+ * in the latter case if the device had previously entered a low power state.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_EXIT 5
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET start/stop device DMA logging.
+ * VFIO_DEVICE_FEATURE_PROBE can be used to detect if the device supports
+ * DMA logging.
+ *
+ * DMA logging allows a device to internally record what DMAs the device is
+ * initiating and report them back to userspace. It is part of the VFIO
+ * migration infrastructure that allows implementing dirty page tracking
+ * during the pre-copy phase of live migration. Only DMA WRITEs are logged,
+ * and this API is not connected to VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE.
+ *
+ * When DMA logging is started a range of IOVAs to monitor is provided and the
+ * device can optimize its logging to cover only the IOVA range given. Each
+ * DMA that the device initiates inside the range will be logged by the device
+ * for later retrieval.
+ *
+ * page_size is an input that hints what tracking granularity the device
+ * should try to achieve. If the device cannot do the hinted page size then
+ * it's the driver choice which page size to pick based on its support.
+ * On output the device will return the page size it selected.
+ *
+ * ranges is a pointer to an array of
+ * struct vfio_device_feature_dma_logging_range.
+ *
+ * The core kernel code guarantees to support at minimum a num_ranges that fits
+ * into a single kernel page. User space can try higher values but should give
+ * up if the above can't be achieved due to driver limitations.
+ *
+ * A single call to start device DMA logging can be issued and a matching stop
+ * should follow at the end. Another start is not allowed in the meantime.
+ */
+struct vfio_device_feature_dma_logging_control {
+	__aligned_u64 page_size;
+	__u32 num_ranges;
+	__u32 __reserved;
+	__aligned_u64 ranges;
+};
+
+struct vfio_device_feature_dma_logging_range {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_START 6
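
A sketch of starting tracking on one IOVA range (device_fd, the helper and
the 4KiB page-size hint are assumptions):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdint.h>

static int dma_logging_start(int device_fd, __u64 iova, __u64 length)
{
	struct vfio_device_feature_dma_logging_range range = {
		.iova = iova,
		.length = length,
	};
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_dma_logging_control) + 7) / 8] = {};
	struct vfio_device_feature *feat = (void *)buf;
	struct vfio_device_feature_dma_logging_control *ctrl =
		(void *)feat->data;

	feat->argsz = sizeof(buf);
	feat->flags = VFIO_DEVICE_FEATURE_SET |
		      VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
	ctrl->page_size = 4096;			/* hint only, see above */
	ctrl->num_ranges = 1;
	ctrl->ranges = (__u64)(uintptr_t)&range;

	return ioctl(device_fd, VFIO_DEVICE_FEATURE, feat);
}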
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET stop device DMA logging that was started
+ * by VFIO_DEVICE_FEATURE_DMA_LOGGING_START
+ */
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_STOP 7
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back and clear the device DMA log
+ *
+ * Query the device's DMA log for written pages within the given IOVA range.
+ * During querying the log is cleared for the IOVA range.
+ *
+ * bitmap is a pointer to an array of u64s that will hold the output bitmap
+ * with 1 bit reporting a page_size unit of IOVA. The mapping of IOVA to bits
+ * is given by:
+ *  bitmap[(addr - iova)/page_size] & (1ULL << (addr % 64))
+ *
+ * The input page_size can be any power of two value and does not have to
+ * match the value given to VFIO_DEVICE_FEATURE_DMA_LOGGING_START. The driver
+ * will format its internal logging to match the reporting page size, possibly
+ * by replicating bits if the internal page size is lower than requested.
+ *
+ * The LOGGING_REPORT will only set bits in the bitmap and never clear or
+ * perform any initialization of the user provided bitmap.
+ *
+ * If any error is returned userspace should assume that the dirty log is
+ * corrupted. Error recovery is to consider all memory dirty and try to
+ * restart the dirty tracking, or to abort/restart the whole migration.
+ *
+ * If DMA logging is not enabled, an error will be returned.
+ *
+ */
+struct vfio_device_feature_dma_logging_report {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+	__aligned_u64 page_size;
+	__aligned_u64 bitmap;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_REPORT 8
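
A companion sketch for reading back the log (device_fd and the helper are
assumptions); the caller supplies a zeroed bitmap with one bit per page_size
unit of the queried range:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdint.h>

static int dma_logging_report(int device_fd, __u64 iova, __u64 length,
			      __u64 page_size, __u64 *bitmap)
{
	__u64 buf[(sizeof(struct vfio_device_feature) +
		   sizeof(struct vfio_device_feature_dma_logging_report) + 7) / 8] = {};
	struct vfio_device_feature *feat = (void *)buf;
	struct vfio_device_feature_dma_logging_report *rep =
		(void *)feat->data;

	feat->argsz = sizeof(buf);
	feat->flags = VFIO_DEVICE_FEATURE_GET |
		      VFIO_DEVICE_FEATURE_DMA_LOGGING_REPORT;
	rep->iova = iova;
	rep->length = length;
	rep->page_size = page_size;
	rep->bitmap = (__u64)(uintptr_t)bitmap;

	return ioctl(device_fd, VFIO_DEVICE_FEATURE, feat);
}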
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back the estimated data length that will
+ * be required to complete stop copy.
+ *
+ * Note: Can be called on each device state.
+ */
+
+struct vfio_device_feature_mig_data_size {
+	__aligned_u64 stop_copy_length;
+};
+
+#define VFIO_DEVICE_FEATURE_MIG_DATA_SIZE 9
+
+/**
+ * Upon VFIO_DEVICE_FEATURE_SET, set or clear the BUS mastering for the device
+ * based on the operation specified in op flag.
+ *
+ * The functionality is incorporated for devices that need bus master control
+ * but whose in-band device interface lacks the support. Consequently, it is not
+ * applicable to PCI devices, as bus master control for PCI devices is managed
+ * in-band through the configuration space. At present, this feature is supported
+ * only for CDX devices.
+ * When the device's BUS MASTER setting is configured as CLEAR, it will result in
+ * blocking all incoming DMA requests from the device. On the other hand, configuring
+ * the device's BUS MASTER setting as SET (enable) will grant the device the
+ * capability to perform DMA to the host memory.
+ */
+struct vfio_device_feature_bus_master {
+	__u32 op;
+#define		VFIO_DEVICE_FEATURE_CLEAR_MASTER	0	/* Clear Bus Master */
+#define		VFIO_DEVICE_FEATURE_SET_MASTER		1	/* Set Bus Master */
+};
+#define VFIO_DEVICE_FEATURE_BUS_MASTER 10
+
+/* -------- API for Type1 VFIO IOMMU -------- */
+
+/**
+ * VFIO_IOMMU_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 12, struct vfio_iommu_info)
+ *
+ * Retrieve information about the IOMMU object. Fills in provided
+ * struct vfio_iommu_info. Caller sets argsz.
+ *
+ * XXX Should we do these by CHECK_EXTENSION too?
+ */
+struct vfio_iommu_type1_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+	__aligned_u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+
+/*
+ * The IOVA capability allows reporting the valid IOVA range(s)
+ * excluding any non-relaxable reserved regions exposed by
+ * devices attached to the container. Any DMA map attempt
+ * outside the valid iova range will return error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE  1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct	vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct	vfio_iova_range iova_ranges[];
+};
+
+/*
+ * The migration capability allows reporting supported features for migration.
+ *
+ * The structures below define version 1 of this capability.
+ *
+ * The existence of this capability indicates that IOMMU kernel driver supports
+ * dirty page logging.
+ *
+ * pgsize_bitmap: Kernel driver returns bitmap of supported page sizes for dirty
+ * page logging.
+ * max_dirty_bitmap_size: Kernel driver returns maximum supported dirty bitmap
+ * size in bytes that can be used by user applications when getting the dirty
+ * bitmap.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  2
+
+struct vfio_iommu_type1_info_cap_migration {
+	struct	vfio_info_cap_header header;
+	__u32	flags;
+	__u64	pgsize_bitmap;
+	__u64	max_dirty_bitmap_size;		/* in bytes */
+};
+
+/*
+ * The DMA available capability allows reporting the current number of
+ * simultaneously outstanding DMA mappings that are allowed.
+ *
+ * The structure below defines version 1 of this capability.
+ *
+ * avail: specifies the current number of outstanding DMA mappings allowed.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL 3
+
+struct vfio_iommu_type1_info_dma_avail {
+	struct	vfio_info_cap_header header;
+	__u32	avail;
+};
+
+#define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
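
A short sketch (container_fd is an assumed /dev/vfio/vfio fd with a group
attached and the Type1 IOMMU selected):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdio.h>

static int print_iommu_pgsizes(int container_fd)
{
	struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info))
		return -1;
	if (info.flags & VFIO_IOMMU_INFO_PGSIZES)
		printf("supported page sizes: 0x%llx\n",
		       (unsigned long long)info.iova_pgsizes);
	return 0;
}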
+
+/**
+ * VFIO_IOMMU_MAP_DMA - _IOW(VFIO_TYPE, VFIO_BASE + 13, struct vfio_dma_map)
+ *
+ * Map process virtual addresses to IO virtual addresses using the
+ * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+ * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova. The vaddr
+ * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR.  To
+ * maintain memory consistency within the user application, the updated vaddr
+ * must address the same memory object as originally mapped.  Failure to do so
+ * will result in user memory corruption and/or device misbehavior.  iova and
+ * size must match those in the original MAP_DMA call.  Protection is not
+ * changed, and the READ & WRITE flags must be 0.
+ */
+struct vfio_iommu_type1_dma_map {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
+#define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
+#define VFIO_DMA_MAP_FLAG_VADDR (1 << 2)
+	__u64	vaddr;				/* Process virtual address */
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+
+#define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
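
A minimal sketch mapping one anonymous page at IOVA 0 (container_fd, the
helper and the chosen IOVA are assumptions):

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stdint.h>

static int map_one_page(int container_fd, size_t page_size)
{
	void *mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.iova = 0,
		.size = page_size,
	};

	if (mem == MAP_FAILED)
		return -1;
	map.vaddr = (__u64)(uintptr_t)mem;
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}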
+
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap in bytes */
+	__u64        size;	/* in bytes */
+	__u64 *data;	/* one bit per page */
+};
+
+/**
+ * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
+ *							struct vfio_dma_unmap)
+ *
+ * Unmap IO virtual addresses using the provided struct vfio_dma_unmap.
+ * Caller sets argsz.  The actual unmapped size is returned in the size
+ * field.  No guarantee is made to the user that arbitrary unmaps of iova
+ * or size different from those used in the original mapping call will
+ * succeed.
+ *
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide a struct vfio_bitmap in data[]. User must provide zero-allocated
+ * memory via vfio_bitmap.data and its size in the vfio_bitmap.size field.
+ * A bit in the bitmap represents one page, of user provided page size in
+ * vfio_bitmap.pgsize field, consecutively starting from iova offset. Bit set
+ * indicates that the page at that offset from iova is dirty. A bitmap of the
+ * pages in the range of unmapped size is returned in the user-provided
+ * vfio_bitmap.data.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_ALL, unmap all addresses.  iova and size
+ * must be 0.  This cannot be combined with the get-dirty-bitmap flag.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host
+ * virtual addresses in the iova range.  DMA to already-mapped pages continues.
+ * Groups may not be added to the container while any addresses are invalid.
+ * This cannot be combined with the get-dirty-bitmap flag.
+ */
+struct vfio_iommu_type1_dma_unmap {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
+#define VFIO_DMA_UNMAP_FLAG_ALL		     (1 << 1)
+#define VFIO_DMA_UNMAP_FLAG_VADDR	     (1 << 2)
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
+};
+
+#define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/*
+ * IOCTLs to enable/disable IOMMU container usage.
+ * No parameters are supported.
+ */
+#define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
+#define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * IOCTL is used for dirty pages logging.
+ * Caller should set flag depending on which operation to perform, details as
+ * below:
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_START flag set, instructs
+ * the IOMMU driver to log pages that are dirtied or potentially dirtied by
+ * the device; designed to be used when a migration is in progress. Dirty pages
+ * are logged until logging is disabled by user application by calling the IOCTL
+ * with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag set, instructs
+ * the IOMMU driver to stop logging dirtied pages.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set
+ * returns the dirty pages bitmap for IOMMU container for a given IOVA range.
+ * The user must specify the IOVA range and the pgsize through the structure
+ * vfio_iommu_type1_dirty_bitmap_get in the data[] portion. This interface
+ * supports getting a bitmap of the smallest supported pgsize only and can be
+ * modified in future to get a bitmap of any specified supported pgsize. The
+ * user must provide a zeroed memory area for the bitmap memory and specify its
+ * size in bitmap.size. One bit is used to represent one page consecutively
+ * starting from iova offset. The user should provide page size in bitmap.pgsize
+ * field. A bit set in the bitmap indicates that the page at that offset from
+ * iova is dirty. The caller must set argsz to a value including the size of
+ * structure vfio_iommu_type1_dirty_bitmap_get, but excluding the size of the
+ * actual bitmap. If dirty pages logging is not enabled, an error will be
+ * returned.
+ *
+ * Only one of the flags _START, _STOP and _GET may be specified at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
+
+/*
+ * The SPAPR TCE DDW info struct provides the information about
+ * the details of Dynamic DMA window capability.
+ *
+ * @pgsizes contains a page size bitmask, 4K/64K/16M are supported.
+ * @max_dynamic_windows_supported tells the maximum number of windows
+ * which the platform can create.
+ * @levels tells the maximum number of levels in multi-level IOMMU tables;
+ * this allows splitting a table into smaller chunks which reduces
+ * the amount of physically contiguous memory required for the table.
+ */
+struct vfio_iommu_spapr_tce_ddw_info {
+	__u64 pgsizes;			/* Bitmap of supported page sizes */
+	__u32 max_dynamic_windows_supported;
+	__u32 levels;
+};
+
+/*
+ * The SPAPR TCE info struct provides the information about the PCI bus
+ * address ranges available for DMA, these values are programmed into
+ * the hardware so the guest has to know that information.
+ *
+ * The DMA 32 bit window start is an absolute PCI bus address.
+ * The IOVA address passed via map/unmap ioctls are absolute PCI bus
+ * addresses too so the window works as a filter rather than an offset
+ * for IOVA addresses.
+ *
+ * Flags supported:
+ * - VFIO_IOMMU_SPAPR_INFO_DDW: informs the userspace that dynamic DMA windows
+ *   (DDW) support is present. @ddw is only supported when DDW is present.
+ */
+struct vfio_iommu_spapr_tce_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_IOMMU_SPAPR_INFO_DDW	(1 << 0)	/* DDW supported */
+	__u32 dma32_window_start;	/* 32 bit window start (bytes) */
+	__u32 dma32_window_size;	/* 32 bit window size (bytes) */
+	struct vfio_iommu_spapr_tce_ddw_info ddw;
+};
+
+#define VFIO_IOMMU_SPAPR_TCE_GET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/*
+ * EEH PE operation struct provides ways to:
+ * - enable/disable EEH functionality;
+ * - unfreeze IO/DMA for frozen PE;
+ * - read PE state;
+ * - reset PE;
+ * - configure PE;
+ * - inject EEH error.
+ */
+struct vfio_eeh_pe_err {
+	__u32 type;
+	__u32 func;
+	__u64 addr;
+	__u64 mask;
+};
+
+struct vfio_eeh_pe_op {
+	__u32 argsz;
+	__u32 flags;
+	__u32 op;
+	union {
+		struct vfio_eeh_pe_err err;
+	};
+};
+
+#define VFIO_EEH_PE_DISABLE		0	/* Disable EEH functionality */
+#define VFIO_EEH_PE_ENABLE		1	/* Enable EEH functionality  */
+#define VFIO_EEH_PE_UNFREEZE_IO		2	/* Enable IO for frozen PE   */
+#define VFIO_EEH_PE_UNFREEZE_DMA	3	/* Enable DMA for frozen PE  */
+#define VFIO_EEH_PE_GET_STATE		4	/* PE state retrieval        */
+#define  VFIO_EEH_PE_STATE_NORMAL	0	/* PE in functional state    */
+#define  VFIO_EEH_PE_STATE_RESET	1	/* PE reset in progress      */
+#define  VFIO_EEH_PE_STATE_STOPPED	2	/* Stopped DMA and IO        */
+#define  VFIO_EEH_PE_STATE_STOPPED_DMA	4	/* Stopped DMA only          */
+#define  VFIO_EEH_PE_STATE_UNAVAIL	5	/* State unavailable         */
+#define VFIO_EEH_PE_RESET_DEACTIVATE	5	/* Deassert PE reset         */
+#define VFIO_EEH_PE_RESET_HOT		6	/* Assert hot reset          */
+#define VFIO_EEH_PE_RESET_FUNDAMENTAL	7	/* Assert fundamental reset  */
+#define VFIO_EEH_PE_CONFIGURE		8	/* PE configuration          */
+#define VFIO_EEH_PE_INJECT_ERR		9	/* Inject EEH error          */
+
+#define VFIO_EEH_PE_OP			_IO(VFIO_TYPE, VFIO_BASE + 21)
+
+/**
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 17, struct vfio_iommu_spapr_register_memory)
+ *
+ * Registers user space memory where DMA is allowed. It pins
+ * user pages and does the locked memory accounting so
+ * subsequent VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA calls
+ * get faster.
+ */
+struct vfio_iommu_spapr_register_memory {
+	__u32	argsz;
+	__u32	flags;
+	__u64	vaddr;				/* Process virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+#define VFIO_IOMMU_SPAPR_REGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/**
+ * VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 18, struct vfio_iommu_spapr_register_memory)
+ *
+ * Unregisters user space memory registered with
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY.
+ * Uses vfio_iommu_spapr_register_memory for parameters.
+ */
+#define VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_CREATE - _IOWR(VFIO_TYPE, VFIO_BASE + 19, struct vfio_iommu_spapr_tce_create)
+ *
+ * Creates an additional TCE table and programs it (sets a new DMA window)
+ * to every IOMMU group in the container. It receives page shift, window
+ * size and number of levels in the TCE table being created.
+ *
+ * It allocates and returns an offset on a PCI bus of the new DMA window.
+ */
+struct vfio_iommu_spapr_tce_create {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u32 page_shift;
+	__u32 __resv1;
+	__u64 window_size;
+	__u32 levels;
+	__u32 __resv2;
+	/* out */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_CREATE	_IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_REMOVE - _IOW(VFIO_TYPE, VFIO_BASE + 20, struct vfio_iommu_spapr_tce_remove)
+ *
+ * Unprograms a TCE table from all groups in the container and destroys it.
+ * It receives a PCI bus offset as a window id.
+ */
+struct vfio_iommu_spapr_tce_remove {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_REMOVE	_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/* ***************************************************************** */
+
+#endif /* _UAPIVFIO_H */
diff --git a/kernel/linux/uapi/version b/kernel/linux/uapi/version
index 3c68968f92..966a998301 100644
--- a/kernel/linux/uapi/version
+++ b/kernel/linux/uapi/version
@@ -1 +1 @@
-v6.14
+v6.16
-- 
2.51.0
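
As an illustration of the dirty-page logging interface documented in the header
above, a minimal sketch of fetching the bitmap for one IOVA range could look as
follows. It is a sketch only: it assumes an open VFIO container fd with logging
already started via VFIO_IOMMU_DIRTY_PAGES_FLAG_START, and the struct
vfio_bitmap layout (pgsize, size, data pointer) from the same header.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int
get_dirty_bitmap(int container_fd, uint64_t iova, uint64_t len, uint64_t pgsize)
{
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	size_t argsz = sizeof(*dbitmap) + sizeof(*range);
	uint64_t bitmap_bytes = (len / pgsize + 7) / 8;
	int ret;

	dbitmap = calloc(1, argsz);
	if (dbitmap == NULL)
		return -1;

	/* argsz covers the _get header but not the bitmap itself. */
	dbitmap->argsz = argsz;
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)dbitmap->data;
	range->iova = iova;
	range->size = len;
	range->bitmap.pgsize = pgsize;     /* smallest supported page size */
	range->bitmap.size = bitmap_bytes; /* size of the zeroed bitmap buffer */
	range->bitmap.data = calloc(1, bitmap_bytes); /* one bit per page */
	if (range->bitmap.data == NULL) {
		free(dbitmap);
		return -1;
	}

	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
	/* On success, bit i set means the page at iova + i * pgsize is dirty. */

	free(range->bitmap.data);
	free(dbitmap);
	return ret;
}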


^ permalink raw reply	[relevance 1%]

* Re: [PATCH v5 0/5] add semicolon when export any symbol
  2025-09-03  7:04  0%   ` [PATCH v5 0/5] add semicolon when export any symbol David Marchand
@ 2025-09-04  0:24  0%     ` fengchengwen
  0 siblings, 0 replies; 77+ results
From: fengchengwen @ 2025-09-04  0:24 UTC (permalink / raw)
  To: David Marchand; +Cc: thomas, stephen, dev, Bruce Richardson

Hi David,

On 9/3/2025 3:04 PM, David Marchand wrote:
> Hello,
> 
> On Wed, 3 Sept 2025 at 04:05, Chengwen Feng <fengchengwen@huawei.com> wrote:
>>
>> Currently, the RTE_EXPORT_INTERNAL_SYMBOL, RTE_EXPORT_SYMBOL and
>> RTE_EXPORT_EXPERIMENTAL_SYMBOL are placed at the beginning of APIs,
>> but don't end with a semicolon. As a result, some IDEs cannot identify
>> the APIs and cannot quickly jump to the definition.
>>
>> A semicolon is added to the end of above RTE_EXPORT_XXX_SYMBOL in this
>> commit.
>>
>> And also redefine RTE_EXPORT_XXX_SYMBOL:
>> #define RTE_EXPORT_XXX_SYMBOL(x, x) extern int dummy_rte_export_symbol
>>
>> Chengwen Feng (5):
>>   lib: add semicolon when export symbol
>>   lib: add semicolon when export experimental symbol
>>   lib: add semicolon when export internal symbol
>>   drivers: add semicolon when export any symbol
>>   doc: update ABI versioning guide
> 
> I am skeptical about this series.
> 
> The current positionning should be seen as an additional info on the
> return type, in the definition of the symbol.
> Does it mean that this IDE would fail if we add any kind of
> macros/attribute involving the symbol name?

I tried vscode and SI (Source Insight), and found that users can use a
"token macro" in SI to make the IDE skip such symbols (so ctrl+ quickly jumps
to the definition), but I can't find such a setting for vscode.

> 
> Afaics, ctags can be taught to skip those macros and just behave
> correctly by adding in its config file:
> -DRTE_EXPORT_EXPERIMENTAL_SYMBOL(a)=
> -DRTE_EXPORT_INTERNAL_SYMBOL(a)=
> -DRTE_EXPORT_SYMBOL(a)=

How about adding a note to the DPDK documentation if this commit is not applied?

> 
> I think another option would be to move the call to export macros
> after the whole definition of the symbol, though I prefer the current
> position for readability.

If a semicolon is not added, it will affect the next API:
int A()
RTE_EXPORT_SYMBOL(A)

int B()

> 
> 


^ permalink raw reply	[relevance 0%]

* [PATCH v10 1/1] ethdev: add support to provide link type
    2025-09-01  5:44  3% ` [PATCH v9 " skori
@ 2025-09-08  8:51  3% ` skori
  2025-09-11  8:48  3%   ` [PATCH v11 1/1] ethdev: add link connector type skori
  1 sibling, 1 reply; 77+ results
From: skori @ 2025-09-08  8:51 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko
  Cc: dev, Sunil Kumar Kori, Nithin Dabilpuram

From: Sunil Kumar Kori <skori@marvell.com>

Adding a link type parameter to provide the type
of port connection, like twisted pair, fibre, etc.

Also added an API to convert the RTE_ETH_LINK_CONNECTOR_XXX
to a readable string.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
v9..v10:
 - Rebase on top of next-net:main branch.
v8..v9:
 - Adds 25.11 release notes.
v7..v8:
 - Add documentation for invalid link type.
 - Remove trace point from API.
 - Rebase on next-net.
v6..v7:
 - Replace link_type to link_connector.
 - Update comments.
v5..v6:
 - Fix doxygen error.
v4..v5:                                                                                             
 - Convert link type to connector.
 - Fix build error on Windows.
 - Handle cosmetic review comments.
v3..v4:
 - Convert #define into enum.
 - Enhance comments for each port link type.
 - Fix test failures.
v2..v3
 - Extend link type list as per suggestion.

 app/test/test_ethdev_link.c            | 18 +++++----
 doc/guides/rel_notes/release_25_11.rst |  9 +++++
 lib/ethdev/rte_ethdev.c                | 45 ++++++++++++++++++++-
 lib/ethdev/rte_ethdev.h                | 54 ++++++++++++++++++++++++++
 4 files changed, 117 insertions(+), 9 deletions(-)

diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index f063a5fe26..0e543228b0 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -17,23 +17,25 @@ test_link_status_up_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
 	printf("Default link up #1: %s\n", text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg Other",
 		text, strlen(text), "Invalid default link status string");
 
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
 	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_SGMII;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -41,7 +43,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -49,7 +51,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -57,6 +59,7 @@ test_link_status_up_default(void)
 	link_status.link_speed = RTE_ETH_SPEED_NUM_400G;
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_GAUI;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -92,7 +95,8 @@ test_link_status_invalid(void)
 		.link_speed = 55555,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -100,7 +104,7 @@ test_link_status_invalid(void)
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
 		"Failed to format invalid string\n");
 	printf("invalid link up #1: len=%d %s\n", ret, text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg Other",
 		text, strlen(text), "Incorrect invalid link status string");
 
 	return TEST_SUCCESS;
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index 32d61691d2..7c752fab23 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added ethdev API in library.**
+
+  * Added API to report type of link connection for a port.
+    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
+    unless driver specifies it.
+
 
 Removed Items
 -------------
@@ -106,6 +112,9 @@ ABI Changes
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
+* ethdev: Added ``link_connector`` field to ``rte_eth_link`` structure
+  to report the type of link connection for a port.
+
 
 Known Issues
 ------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index dd7c00bc94..351f1746dc 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3285,18 +3285,59 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
 	else
-		ret = snprintf(str, len, "Link up at %s %s %s",
+		ret = snprintf(str, len, "Link up at %s %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
-			"Autoneg" : "Fixed");
+			"Autoneg" : "Fixed",
+			rte_eth_link_connector_to_str(eth_link->link_connector));
 
 	rte_eth_trace_link_to_str(len, eth_link, str, ret);
 
 	return ret;
 }
 
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_connector_to_str, 25.11)
+const char *
+rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector)
+{
+	static const char * const link_connector_str[] = {
+		[RTE_ETH_LINK_CONNECTOR_NONE] = "None",
+		[RTE_ETH_LINK_CONNECTOR_TP] = "Twisted Pair",
+		[RTE_ETH_LINK_CONNECTOR_AUI] = "Attachment Unit Interface",
+		[RTE_ETH_LINK_CONNECTOR_MII] = "Media Independent Interface",
+		[RTE_ETH_LINK_CONNECTOR_FIBER] = "Fiber",
+		[RTE_ETH_LINK_CONNECTOR_BNC] = "BNC",
+		[RTE_ETH_LINK_CONNECTOR_DAC] = "Direct Attach Copper",
+		[RTE_ETH_LINK_CONNECTOR_SGMII] = "SGMII",
+		[RTE_ETH_LINK_CONNECTOR_QSGMII] = "QSGMII",
+		[RTE_ETH_LINK_CONNECTOR_XFI] = "XFI",
+		[RTE_ETH_LINK_CONNECTOR_SFI] = "SFI",
+		[RTE_ETH_LINK_CONNECTOR_XLAUI] = "XLAUI",
+		[RTE_ETH_LINK_CONNECTOR_GAUI] = "GAUI",
+		[RTE_ETH_LINK_CONNECTOR_XAUI] = "XAUI",
+		[RTE_ETH_LINK_CONNECTOR_CAUI] = "CAUI",
+		[RTE_ETH_LINK_CONNECTOR_LAUI] = "LAUI",
+		[RTE_ETH_LINK_CONNECTOR_SFP] = "SFP",
+		[RTE_ETH_LINK_CONNECTOR_SFP_DD] = "SFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_SFP_PLUS] = "SFP+",
+		[RTE_ETH_LINK_CONNECTOR_SFP28] = "SFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP] = "QSFP",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_PLUS] = "QSFP+",
+		[RTE_ETH_LINK_CONNECTOR_QSFP28] = "QSFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP56] = "QSFP56",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_DD] = "QSFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_OTHER] = "Other",
+	};
+	const char *str = NULL;
+
+	if (link_connector < ((enum rte_eth_link_connector)RTE_DIM(link_connector_str)))
+		str = link_connector_str[link_connector];
+
+	return str;
+}
+
 RTE_EXPORT_SYMBOL(rte_eth_stats_get)
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index f9fb6ae549..329ec25fc9 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -329,6 +329,45 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**
+ * @enum rte_eth_link_connector
+ * @brief Ethernet port link connector type
+ *
+ * This enum defines the possible types of Ethernet port link connectors.
+ */
+enum rte_eth_link_connector {
+	RTE_ETH_LINK_CONNECTOR_NONE = 0,     /**< Not defined */
+	RTE_ETH_LINK_CONNECTOR_TP,           /**< Twisted Pair */
+	RTE_ETH_LINK_CONNECTOR_AUI,          /**< Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_MII,          /**< Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_FIBER,        /**< Optical Fiber Link */
+	RTE_ETH_LINK_CONNECTOR_BNC,          /**< BNC Link type for RF connection */
+	RTE_ETH_LINK_CONNECTOR_DAC,          /**< Direct Attach copper */
+	RTE_ETH_LINK_CONNECTOR_SGMII,        /**< Serial Gigabit Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_QSGMII,       /**< Link to multiplex 4 SGMII over one serial link */
+	RTE_ETH_LINK_CONNECTOR_XFI,          /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFI,          /**< 10 Gigabit Serial Interface for optical network */
+	RTE_ETH_LINK_CONNECTOR_XLAUI,        /**< 40 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_GAUI,         /**< Gigabit Interface for 50/100/200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_XAUI,         /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_CAUI,         /**< 100 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_LAUI,         /**< 50 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFP,          /**< Pluggable module for 1 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_PLUS,     /**< Pluggable module for 10 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP28,        /**< Pluggable module for 25 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_DD,       /**< Pluggable module for 100 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_QSFP,         /**< Module to multiplex 4 SFP i.e. 4*1=4 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_PLUS,    /**< Module to multiplex 4 SFP_PLUS i.e. 4*10=40 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP28,       /**< Module to multiplex 4 SFP28 i.e. 4*25=100 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP56,       /**< Module to multiplex 4 SFP56 i.e. 4*50=200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_DD,      /**< Module to multiplex 4 SFP_DD i.e. 4*100=400 Gbps */
+	RTE_ETH_LINK_CONNECTOR_OTHER = 31,   /**< non-physical interfaces like virtio, ring etc.
+					       * It also includes unknown connector types,
+					       * i.e. physical connectors not yet defined in this
+					       * list of connector types.
+					       */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -341,6 +380,7 @@ struct rte_eth_link {
 			uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 			uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 			uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+			uint16_t link_connector : 5;  /**< RTE_ETH_LINK_CONNECTOR_XXX */
 		};
 	};
 };
@@ -3116,6 +3156,20 @@ int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link)
 __rte_experimental
 const char *rte_eth_link_speed_to_str(uint32_t link_speed);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * This function converts an Ethernet link type to a string.
+ *
+ * @param link_connector
+ *   The link type to convert.
+ * @return
+ *   NULL for invalid link connector values otherwise the string representation of the link type.
+ */
+__rte_experimental
+const char *rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
-- 
2.43.0


^ permalink raw reply	[relevance 3%]

* [PATCH v11 1/1] ethdev: add link connector type
  2025-09-08  8:51  3% ` [PATCH v10 " skori
@ 2025-09-11  8:48  3%   ` skori
  2025-09-11  9:41  0%     ` Morten Brørup
  2025-09-11 10:34  3%     ` [PATCH v12 " skori
  0 siblings, 2 replies; 77+ results
From: skori @ 2025-09-11  8:48 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko
  Cc: dev, Sunil Kumar Kori, Nithin Dabilpuram

From: Sunil Kumar Kori <skori@marvell.com>

Adding a link connector parameter to provide the type
of connection for a port, like twisted pair, fiber, etc.

Also added an API to convert the RTE_ETH_LINK_CONNECTOR_XXX
to a readable string.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
v10..v11
 - Fix review comments on documentation.
 - Rebase on top of next-net:main branch.
v9..v10:
 - Rebase on top of next-net:main branch.
v8..v9:
 - Adds 25.11 release notes.
v7..v8:
 - Add documentation for invalid link type.
 - Remove trace point from API.
 - Rebase on next-net.
v6..v7:
 - Replace link_type to link_connector.
 - Update comments.
v5..v6:
 - Fix doxygen error.
v4..v5:                                                                                             
 - Convert link type to connector.
 - Fix build error on Windows.
 - Handle cosmetic review comments.
v3..v4:
 - Convert #define into enum.
 - Enhance comments for each port link type.
 - Fix test failures.
v2..v3
 - Extend link type list as per suggestion.

 app/test/test_ethdev_link.c            | 18 +++++----
 doc/guides/rel_notes/release_25_11.rst | 23 +++++++++++
 lib/ethdev/rte_ethdev.c                | 45 ++++++++++++++++++++-
 lib/ethdev/rte_ethdev.h                | 54 ++++++++++++++++++++++++++
 4 files changed, 131 insertions(+), 9 deletions(-)

diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index 47c526eb0c..359b879fae 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -17,23 +17,25 @@ test_link_status_up_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
 	printf("Default link up #1: %s\n", text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg Other",
 		text, strlen(text), "Invalid default link status string");
 
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
 	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_SGMII;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -41,7 +43,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -49,7 +51,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -57,6 +59,7 @@ test_link_status_up_default(void)
 	link_status.link_speed = RTE_ETH_SPEED_NUM_800G;
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_GAUI;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -92,7 +95,8 @@ test_link_status_invalid(void)
 		.link_speed = 55555,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -100,7 +104,7 @@ test_link_status_invalid(void)
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
 		"Failed to format invalid string\n");
 	printf("invalid link up #1: len=%d %s\n", ret, text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg Other",
 		text, strlen(text), "Incorrect invalid link status string");
 
 	return TEST_SUCCESS;
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index efb88bbbb0..1bfbdbd6a8 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -55,6 +55,26 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added ethdev API to get link connector.**
+
+  * Added API to report type of link connection for a port.
+    The following connectors are enumerated:
+
+   * NONE
+   * TP
+   * FIBER
+   * BNC
+   * DAC
+   * XFI, SFI
+   * MII, SGMII, QSGMII
+   * AUI, XLAUI, GAUI, XAUI, CAUI, LAUI
+   * SFP, SFP_PLUS, SFP28, SFP_DD
+   * QSFP, QSFP_PLUS, QSFP28, QSFP56, QSFP_DD
+   * OTHER
+
+    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
+    unless driver specifies it.
+
 * **Added speed 800G.**
 
   Added Ethernet link speed for 800 Gb/s as it is well standardized in IEEE,
@@ -124,6 +144,9 @@ ABI Changes
 * eal: The structure ``rte_mp_msg`` alignment has been updated to 8 bytes to limit unaligned
   accesses in messages payload.
 
+* ethdev: Added ``link_connector`` field to ``rte_eth_link`` structure
+  to report type of link connection for a port.
+
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index f22139cb38..60f4ca34e0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3330,18 +3330,59 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
 	else
-		ret = snprintf(str, len, "Link up at %s %s %s",
+		ret = snprintf(str, len, "Link up at %s %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
-			"Autoneg" : "Fixed");
+			"Autoneg" : "Fixed",
+			rte_eth_link_connector_to_str(eth_link->link_connector));
 
 	rte_eth_trace_link_to_str(len, eth_link, str, ret);
 
 	return ret;
 }
 
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_connector_to_str, 25.11)
+const char *
+rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector)
+{
+	static const char * const link_connector_str[] = {
+		[RTE_ETH_LINK_CONNECTOR_NONE] = "None",
+		[RTE_ETH_LINK_CONNECTOR_TP] = "Twisted Pair",
+		[RTE_ETH_LINK_CONNECTOR_AUI] = "Attachment Unit Interface",
+		[RTE_ETH_LINK_CONNECTOR_MII] = "Media Independent Interface",
+		[RTE_ETH_LINK_CONNECTOR_FIBER] = "Fiber",
+		[RTE_ETH_LINK_CONNECTOR_BNC] = "BNC",
+		[RTE_ETH_LINK_CONNECTOR_DAC] = "Direct Attach Copper",
+		[RTE_ETH_LINK_CONNECTOR_SGMII] = "SGMII",
+		[RTE_ETH_LINK_CONNECTOR_QSGMII] = "QSGMII",
+		[RTE_ETH_LINK_CONNECTOR_XFI] = "XFI",
+		[RTE_ETH_LINK_CONNECTOR_SFI] = "SFI",
+		[RTE_ETH_LINK_CONNECTOR_XLAUI] = "XLAUI",
+		[RTE_ETH_LINK_CONNECTOR_GAUI] = "GAUI",
+		[RTE_ETH_LINK_CONNECTOR_XAUI] = "XAUI",
+		[RTE_ETH_LINK_CONNECTOR_CAUI] = "CAUI",
+		[RTE_ETH_LINK_CONNECTOR_LAUI] = "LAUI",
+		[RTE_ETH_LINK_CONNECTOR_SFP] = "SFP",
+		[RTE_ETH_LINK_CONNECTOR_SFP_DD] = "SFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_SFP_PLUS] = "SFP+",
+		[RTE_ETH_LINK_CONNECTOR_SFP28] = "SFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP] = "QSFP",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_PLUS] = "QSFP+",
+		[RTE_ETH_LINK_CONNECTOR_QSFP28] = "QSFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP56] = "QSFP56",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_DD] = "QSFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_OTHER] = "Other",
+	};
+	const char *str = NULL;
+
+	if (link_connector < ((enum rte_eth_link_connector)RTE_DIM(link_connector_str)))
+		str = link_connector_str[link_connector];
+
+	return str;
+}
+
 RTE_EXPORT_SYMBOL(rte_eth_stats_get)
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d23c143eed..23ebf6b89d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -331,6 +331,45 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**
+ * @enum rte_eth_link_connector
+ * @brief Ethernet port link connector type
+ *
+ * This enum defines the possible types of Ethernet port link connectors.
+ */
+enum rte_eth_link_connector {
+	RTE_ETH_LINK_CONNECTOR_NONE = 0,     /**< None. Default unless driver specifies it */
+	RTE_ETH_LINK_CONNECTOR_TP,           /**< Twisted Pair */
+	RTE_ETH_LINK_CONNECTOR_AUI,          /**< Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_MII,          /**< Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_FIBER,        /**< Optical Fiber Link */
+	RTE_ETH_LINK_CONNECTOR_BNC,          /**< BNC Link type for RF connection */
+	RTE_ETH_LINK_CONNECTOR_DAC,          /**< Direct Attach copper */
+	RTE_ETH_LINK_CONNECTOR_SGMII,        /**< Serial Gigabit Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_QSGMII,       /**< Link to multiplex 4 SGMII over one serial link */
+	RTE_ETH_LINK_CONNECTOR_XFI,          /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFI,          /**< 10 Gigabit Serial Interface for optical network */
+	RTE_ETH_LINK_CONNECTOR_XLAUI,        /**< 40 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_GAUI,         /**< Gigabit Interface for 50/100/200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_XAUI,         /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_CAUI,         /**< 100 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_LAUI,         /**< 50 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFP,          /**< Pluggable module for 1 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_PLUS,     /**< Pluggable module for 10 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP28,        /**< Pluggable module for 25 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_DD,       /**< Pluggable module for 100 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_QSFP,         /**< Module to multiplex 4 SFP i.e. 4*1=4 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_PLUS,    /**< Module to multiplex 4 SFP_PLUS i.e. 4*10=40 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP28,       /**< Module to multiplex 4 SFP28 i.e. 4*25=100 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP56,       /**< Module to multiplex 4 SFP56 i.e. 4*50=200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_DD,      /**< Module to multiplex 4 SFP_DD i.e. 4*100=400 Gbps */
+	RTE_ETH_LINK_CONNECTOR_OTHER = 31,   /**< non-physical interfaces like virtio, ring etc.
+					       * It also includes unknown connector types,
+					       * i.e. physical connectors not yet defined in this
+					       * list of connector types.
+					       */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -343,6 +382,7 @@ struct rte_eth_link {
 			uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 			uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 			uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+			uint16_t link_connector : 5;  /**< RTE_ETH_LINK_CONNECTOR_XXX */
 		};
 	};
 };
@@ -3118,6 +3158,20 @@ int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link)
 __rte_experimental
 const char *rte_eth_link_speed_to_str(uint32_t link_speed);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * This function converts an Ethernet link type to a string.
+ *
+ * @param link_connector
+ *   The link type to convert.
+ * @return
+ *   NULL for invalid link connector values otherwise the string representation of the link type.
+ */
+__rte_experimental
+const char *rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
-- 
2.43.0


^ permalink raw reply	[relevance 3%]

* RE: [PATCH v11 1/1] ethdev: add link connector type
  2025-09-11  8:48  3%   ` [PATCH v11 1/1] ethdev: add link connector type skori
@ 2025-09-11  9:41  0%     ` Morten Brørup
  2025-09-11 10:37  0%       ` Sunil Kumar Kori
  2025-09-11 10:34  3%     ` [PATCH v12 " skori
  1 sibling, 1 reply; 77+ results
From: Morten Brørup @ 2025-09-11  9:41 UTC (permalink / raw)
  To: Sunil Kumar Kori, Nithin Dabilpuram
  Cc: dev, Thomas Monjalon, Andrew Rybchenko

> From: Sunil Kumar Kori <skori@marvell.com>
> 
> Adding link connector parameter to provide the type
> of connection for a port like twisted pair, fiber etc.
> 
> Also added an API to convert the RTE_ETH_LINK_CONNECTOR_XXX
> to a readable string.
> 
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---

[...]

> +* **Added ethdev API to get link connector.**
> +
> +  * Added API to report type of link connection for a port.
> +    The following connectors are enumerated:
> +
> +   * NONE
> +   * TP
> +   * FIBER
> +   * BNC
> +   * DAC
> +   * XFI, SFI
> +   * MII, SGMII, QSGMII
> +   * AUI, XLAUI, GAUI, AUI, CAUI, LAUI
> +   * SFP, SFP_PLUS, SFP28, SFP_DD
> +   * QSFP, QSFP_PLUS, QSFP28, QSFP56, QSFP_DD
> +   * OTHER

Please use the string names, not the enum name, here.
E.g. Twisted Pair instead of TP, and SFP+ instead of SFP_PLUS.

> +
> +    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
> +    unless driver specifies it.
> +
>  * **Added speed 800G.**
> 
>    Added Ethernet link speed for 800 Gb/s as it is well standardized in
> IEEE,
> @@ -124,6 +144,9 @@ ABI Changes
>  * eal: The structure ``rte_mp_msg`` alignment has been updated to 8
> bytes to limit unaligned
>    accesses in messages payload.
> 
> +* ethdev: Added ``link_connector`` field to ``rte_eth_link`` structure
> +  to report type of link connection for a port.

connection -> connector

[...]

>  /**
>   * A structure used to retrieve link-level information of an Ethernet
> port.
>   */
> @@ -343,6 +382,7 @@ struct rte_eth_link {
>  			uint16_t link_duplex  : 1;  /**<
> RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
>  			uint16_t link_autoneg : 1;  /**<
> RTE_ETH_LINK_[AUTONEG/FIXED] */
>  			uint16_t link_status  : 1;  /**<
> RTE_ETH_LINK_[DOWN/UP] */
> +			uint16_t link_connector : 5;  /**<
> RTE_ETH_LINK_CONNECTOR_XXX */

Please use 6 bits instead of 5, so it is more future proof.
With the connector types already defined, 5 bits only leaves room for six more connector types.

Remember to update the value of RTE_ETH_LINK_CONNECTOR_OTHER from 31 to 63.

-Morten


^ permalink raw reply	[relevance 0%]

* [PATCH v12 1/1] ethdev: add link connector type
  2025-09-11  8:48  3%   ` [PATCH v11 1/1] ethdev: add link connector type skori
  2025-09-11  9:41  0%     ` Morten Brørup
@ 2025-09-11 10:34  3%     ` skori
  1 sibling, 0 replies; 77+ results
From: skori @ 2025-09-11 10:34 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko
  Cc: dev, Sunil Kumar Kori, Nithin Dabilpuram

From: Sunil Kumar Kori <skori@marvell.com>

Adding a link connector parameter to provide the type
of connection for a port, like twisted pair, fiber, etc.

Also added an API to convert the RTE_ETH_LINK_CONNECTOR_XXX
to a readable string.
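
A minimal usage sketch (illustrative only, assuming port_id refers to a
configured port):

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;
	char buf[RTE_ETH_LINK_MAX_STR_LEN];
	const char *conn;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	/* Full status line, now including the connector type. */
	rte_eth_link_to_str(buf, sizeof(buf), &link);
	printf("%s\n", buf);

	/* Or query the connector name directly. */
	conn = rte_eth_link_connector_to_str(link.link_connector);
	printf("connector: %s\n", conn != NULL ? conn : "Invalid");
}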

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
v11..v12:
 - Increase size of link_connector field.
v10..v11
 - Fix review comments on documentation.
 - Rebase on top of next-net:main branch.
v9..v10:
 - Rebase on top of next-net:main branch.

 app/test/test_ethdev_link.c            | 18 +++++----
 doc/guides/rel_notes/release_25_11.rst | 25 ++++++++++++
 lib/ethdev/rte_ethdev.c                | 45 ++++++++++++++++++++-
 lib/ethdev/rte_ethdev.h                | 54 ++++++++++++++++++++++++++
 4 files changed, 133 insertions(+), 9 deletions(-)

diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index 47c526eb0c..359b879fae 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -17,23 +17,25 @@ test_link_status_up_default(void)
 		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
 	printf("Default link up #1: %s\n", text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg Other",
 		text, strlen(text), "Invalid default link status string");
 
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
 	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_SGMII;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 10 Mbps HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -41,7 +43,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Unknown HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -49,7 +51,7 @@ test_link_status_up_default(void)
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at None HDX Fixed SGMII",
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
@@ -57,6 +59,7 @@ test_link_status_up_default(void)
 	link_status.link_speed = RTE_ETH_SPEED_NUM_800G;
 	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
+	link_status.link_connector = RTE_ETH_LINK_CONNECTOR_GAUI;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -92,7 +95,8 @@ test_link_status_invalid(void)
 		.link_speed = 55555,
 		.link_status = RTE_ETH_LINK_UP,
 		.link_autoneg = RTE_ETH_LINK_AUTONEG,
-		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_connector = RTE_ETH_LINK_CONNECTOR_OTHER
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -100,7 +104,7 @@ test_link_status_invalid(void)
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
 		"Failed to format invalid string\n");
 	printf("invalid link up #1: len=%d %s\n", ret, text);
-	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg",
+	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at Invalid FDX Autoneg Other",
 		text, strlen(text), "Incorrect invalid link status string");
 
 	return TEST_SUCCESS;
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index efb88bbbb0..fccf3d1366 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -55,6 +55,28 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added ethdev API to get link connector.**
+
+  * Added API to report type of link connector for a port.
+    The following connectors are enumerated:
+
+   * None
+   * Twisted Pair
+   * Attachment Unit Interface (AUI)
+   * Optical Fiber Link
+   * BNC
+   * Direct Attach Copper
+   * XFI, SFI
+   * Media Independent Interface (MII)
+   * SGMII, QSGMII
+   * XLAUI, GAUI, XAUI, CAUI, LAUI
+   * SFP, SFP+, SFP28, SFP-DD
+   * QSFP, QSFP+, QSFP28, QSFP56, QSFP-DD
+   * OTHER
+
+    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
+    unless driver specifies it.
+
 * **Added speed 800G.**
 
   Added Ethernet link speed for 800 Gb/s as it is well standardized in IEEE,
@@ -124,6 +146,9 @@ ABI Changes
 * eal: The structure ``rte_mp_msg`` alignment has been updated to 8 bytes to limit unaligned
   accesses in messages payload.
 
+* ethdev: Added ``link_connector`` field to ``rte_eth_link`` structure
+  to report type of link connector for a port.
+
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index f22139cb38..60f4ca34e0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3330,18 +3330,59 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		ret = snprintf(str, len, "Link down");
 	else
-		ret = snprintf(str, len, "Link up at %s %s %s",
+		ret = snprintf(str, len, "Link up at %s %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
 			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
 			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
-			"Autoneg" : "Fixed");
+			"Autoneg" : "Fixed",
+			rte_eth_link_connector_to_str(eth_link->link_connector));
 
 	rte_eth_trace_link_to_str(len, eth_link, str, ret);
 
 	return ret;
 }
 
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eth_link_connector_to_str, 25.11)
+const char *
+rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector)
+{
+	static const char * const link_connector_str[] = {
+		[RTE_ETH_LINK_CONNECTOR_NONE] = "None",
+		[RTE_ETH_LINK_CONNECTOR_TP] = "Twisted Pair",
+		[RTE_ETH_LINK_CONNECTOR_AUI] = "Attachment Unit Interface",
+		[RTE_ETH_LINK_CONNECTOR_MII] = "Media Independent Interface",
+		[RTE_ETH_LINK_CONNECTOR_FIBER] = "Fiber",
+		[RTE_ETH_LINK_CONNECTOR_BNC] = "BNC",
+		[RTE_ETH_LINK_CONNECTOR_DAC] = "Direct Attach Copper",
+		[RTE_ETH_LINK_CONNECTOR_SGMII] = "SGMII",
+		[RTE_ETH_LINK_CONNECTOR_QSGMII] = "QSGMII",
+		[RTE_ETH_LINK_CONNECTOR_XFI] = "XFI",
+		[RTE_ETH_LINK_CONNECTOR_SFI] = "SFI",
+		[RTE_ETH_LINK_CONNECTOR_XLAUI] = "XLAUI",
+		[RTE_ETH_LINK_CONNECTOR_GAUI] = "GAUI",
+		[RTE_ETH_LINK_CONNECTOR_XAUI] = "XAUI",
+		[RTE_ETH_LINK_CONNECTOR_CAUI] = "CAUI",
+		[RTE_ETH_LINK_CONNECTOR_LAUI] = "LAUI",
+		[RTE_ETH_LINK_CONNECTOR_SFP] = "SFP",
+		[RTE_ETH_LINK_CONNECTOR_SFP_DD] = "SFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_SFP_PLUS] = "SFP+",
+		[RTE_ETH_LINK_CONNECTOR_SFP28] = "SFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP] = "QSFP",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_PLUS] = "QSFP+",
+		[RTE_ETH_LINK_CONNECTOR_QSFP28] = "QSFP28",
+		[RTE_ETH_LINK_CONNECTOR_QSFP56] = "QSFP56",
+		[RTE_ETH_LINK_CONNECTOR_QSFP_DD] = "QSFP-DD",
+		[RTE_ETH_LINK_CONNECTOR_OTHER] = "Other",
+	};
+	const char *str = NULL;
+
+	if (link_connector < ((enum rte_eth_link_connector)RTE_DIM(link_connector_str)))
+		str = link_connector_str[link_connector];
+
+	return str;
+}
+
 RTE_EXPORT_SYMBOL(rte_eth_stats_get)
 int
 rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d23c143eed..996d9b212a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -331,6 +331,45 @@ struct rte_eth_stats {
 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
 /**@}*/
 
+/**
+ * @enum rte_eth_link_connector
+ * @brief Ethernet port link connector type
+ *
+ * This enum defines the possible types of Ethernet port link connectors.
+ */
+enum rte_eth_link_connector {
+	RTE_ETH_LINK_CONNECTOR_NONE = 0,     /**< None. Default unless driver specifies it */
+	RTE_ETH_LINK_CONNECTOR_TP,           /**< Twisted Pair */
+	RTE_ETH_LINK_CONNECTOR_AUI,          /**< Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_MII,          /**< Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_FIBER,        /**< Optical Fiber Link */
+	RTE_ETH_LINK_CONNECTOR_BNC,          /**< BNC Link type for RF connection */
+	RTE_ETH_LINK_CONNECTOR_DAC,          /**< Direct Attach copper */
+	RTE_ETH_LINK_CONNECTOR_SGMII,        /**< Serial Gigabit Media Independent Interface */
+	RTE_ETH_LINK_CONNECTOR_QSGMII,       /**< Link to multiplex 4 SGMII over one serial link */
+	RTE_ETH_LINK_CONNECTOR_XFI,          /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFI,          /**< 10 Gigabit Serial Interface for optical network */
+	RTE_ETH_LINK_CONNECTOR_XLAUI,        /**< 40 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_GAUI,         /**< Gigabit Interface for 50/100/200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_XAUI,         /**< 10 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_CAUI,         /**< 100 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_LAUI,         /**< 50 Gigabit Attachment Unit Interface */
+	RTE_ETH_LINK_CONNECTOR_SFP,          /**< Pluggable module for 1 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_PLUS,     /**< Pluggable module for 10 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP28,        /**< Pluggable module for 25 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_SFP_DD,       /**< Pluggable module for 100 Gigabit */
+	RTE_ETH_LINK_CONNECTOR_QSFP,         /**< Module to multiplex 4 SFP i.e. 4*1=4 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_PLUS,    /**< Module to multiplex 4 SFP_PLUS i.e. 4*10=40 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP28,       /**< Module to multiplex 4 SFP28 i.e. 4*25=100 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP56,       /**< Module to multiplex 4 SFP56 i.e. 4*50=200 Gbps */
+	RTE_ETH_LINK_CONNECTOR_QSFP_DD,      /**< Module to multiplex 4 SFP_DD i.e. 4*100=400 Gbps */
+	RTE_ETH_LINK_CONNECTOR_OTHER = 63,   /**< non-physical interfaces like virtio, ring etc.
+					       * It also includes unknown connector types,
+					       * i.e. physical connectors not yet defined in this
+					       * list of connector types.
+					       */
+};
+
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
@@ -343,6 +382,7 @@ struct rte_eth_link {
 			uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
 			uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
 			uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
+			uint16_t link_connector : 6;  /**< RTE_ETH_LINK_CONNECTOR_XXX */
 		};
 	};
 };
@@ -3118,6 +3158,20 @@ int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link)
 __rte_experimental
 const char *rte_eth_link_speed_to_str(uint32_t link_speed);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * This function converts an Ethernet link connector type to a string.
+ *
+ * @param link_connector
+ *   The link connector type to convert.
+ * @return
+ *   NULL for invalid link connector values, otherwise the string representation of the link connector.
+ */
+__rte_experimental
+const char *rte_eth_link_connector_to_str(enum rte_eth_link_connector link_connector);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
-- 
2.43.0


^ permalink raw reply	[relevance 3%]

* RE: [PATCH v11 1/1] ethdev: add link connector type
  2025-09-11  9:41  0%     ` Morten Brørup
@ 2025-09-11 10:37  0%       ` Sunil Kumar Kori
  0 siblings, 0 replies; 77+ results
From: Sunil Kumar Kori @ 2025-09-11 10:37 UTC (permalink / raw)
  To: Morten Brørup, Nithin Kumar Dabilpuram
  Cc: dev, Thomas Monjalon, Andrew Rybchenko

> > From: Sunil Kumar Kori <skori@marvell.com>
> >
> > Adding link connector parameter to provide the type of connection for
> > a port like twisted pair, fiber etc.
> >
> > Also added an API to convert the RTE_ETH_LINK_CONNECTOR_XXX to a
> > readable string.
> >
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> > ---
> 
> [...]
> 
> > +* **Added ethdev API to get link connector.**
> > +
> > +  * Added API to report type of link connection for a port.
> > +    The following connectors are enumerated:
> > +
> > +   * NONE
> > +   * TP
> > +   * FIBER
> > +   * BNC
> > +   * DAC
> > +   * XFI, SFI
> > +   * MII, SGMII, QSGMII
> > +   * AUI, XLAUI, GAUI, AUI, CAUI, LAUI
> > +   * SFP, SFP_PLUS, SFP28, SFP_DD
> > +   * QSFP, QSFP_PLUS, QSFP28, QSFP56, QSFP_DD
> > +   * OTHER
> 
> Please use the string names, not the enum name, here.
> E.g. Twisted Pair instead of TP, and SFP+ instead of SFP_PLUS.
> 
> > +
> > +    By default, it reports ``RTE_ETH_LINK_CONNECTOR_NONE``
> > +    unless driver specifies it.
> > +
> >  * **Added speed 800G.**
> >
> >    Added Ethernet link speed for 800 Gb/s as it is well standardized
> > in IEEE, @@ -124,6 +144,9 @@ ABI Changes
> >  * eal: The structure ``rte_mp_msg`` alignment has been updated to 8
> > bytes to limit unaligned
> >    accesses in messages payload.
> >
> > +* ethdev: Added ``link_connector`` field to ``rte_eth_link``
> > +structure
> > +  to report type of link connection for a port.
> 
> connection -> connector
> 
> [...]
> 
> >  /**
> >   * A structure used to retrieve link-level information of an Ethernet
> > port.
> >   */
> > @@ -343,6 +382,7 @@ struct rte_eth_link {
> >  			uint16_t link_duplex  : 1;  /**<
> > RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
> >  			uint16_t link_autoneg : 1;  /**<
> > RTE_ETH_LINK_[AUTONEG/FIXED] */
> >  			uint16_t link_status  : 1;  /**<
> > RTE_ETH_LINK_[DOWN/UP] */
> > +			uint16_t link_connector : 5;  /**<
> > RTE_ETH_LINK_CONNECTOR_XXX */
> 
> Please use 6 bits instead of 5, so it is more future proof.
> With the connector types already defined, 5 bits only leaves room for six more
> connector types.
> 
> Remember to update the value of RTE_ETH_LINK_CONNECTOR_OTHER from 31 to
> 63.
> 
> -Morten

Ack. Sent next version.

^ permalink raw reply	[relevance 0%]

* [DPDK/meson Bug 1787] ARM toolchain prefix changed in newest toolchain
@ 2025-09-12  9:28  4% bugzilla
  0 siblings, 0 replies; 77+ results
From: bugzilla @ 2025-09-12  9:28 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1787

            Bug ID: 1787
           Summary: ARM toolchain prefix changed in newest toolchain
           Product: DPDK
           Version: 24.11
          Hardware: ARM
                OS: Linux
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: meson
          Assignee: dev@dpdk.org
          Reporter: nelio.laranjeiro@6wind.com
  Target Milestone: ---

In the config/arm/* files, the toolchain prefix in the binaries section is
hardcoded with the old notation, i.e. arch-os-abi; this has changed to
arch-vendor-os-abi, making all these files unusable with the latest toolchain.

Example: arm-gnu-toolchain-14.3.rel1-aarch64-aarch64-none-linux-gnu.tar.xz [1],
whose prefix is aarch64-none-linux-gnu instead of the old prefix
aarch64-linux-gnu.

[1]
https://developer.arm.com/-/media/Files/downloads/gnu/14.3.rel1/binrel/arm-gnu-toolchain-14.3.rel1-aarch64-aarch64-none-linux-gnu.tar.xz

-- 
You are receiving this mail because:
You are the assignee for the bug.


^ permalink raw reply	[relevance 4%]

* [PATCH v4 1/4] hash: move table of hash compare functions out of header
  @ 2025-09-16 15:00  7%   ` Stephen Hemminger
  0 siblings, 0 replies; 77+ results
From: Stephen Hemminger @ 2025-09-16 15:00 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Morten Brørup, Yipeng Wang,
	Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin

Remove the definition of the compare jump table from the
header file so the internal details are not exposed.
Prevents future ABI breakage if new sizes are added.

Make other macros local where possible; the header should
only contain the exposed API.
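
Applications are not affected: a custom key compare is still registered
through the public API. A minimal sketch, assuming the usual
rte_hash_cmp_eq_t signature of (key1, key2, key_len):

#include <string.h>
#include <rte_hash.h>

static int
my_key_cmp(const void *key1, const void *key2, size_t key_len)
{
	/* Return 0 on match, non-zero otherwise, like memcmp. */
	return memcmp(key1, key2, key_len);
}

static void
use_custom_cmp(struct rte_hash *h)
{
	/* h comes from rte_hash_create(); the jump table stays internal. */
	rte_hash_set_cmp_func(h, my_key_cmp);
}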

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/hash/rte_cuckoo_hash.c | 74 ++++++++++++++++++++++++++++++-----
 lib/hash/rte_cuckoo_hash.h | 79 +-------------------------------------
 2 files changed, 65 insertions(+), 88 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 2c92c51624..619fe0c691 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -25,14 +25,51 @@
 #include <rte_tailq.h>
 
 #include "rte_hash.h"
+#include "rte_cuckoo_hash.h"
 
-/* needs to be before rte_cuckoo_hash.h */
 RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO);
 #define RTE_LOGTYPE_HASH hash_logtype
 #define HASH_LOG(level, ...) \
 	RTE_LOG_LINE(level, HASH, "" __VA_ARGS__)
 
-#include "rte_cuckoo_hash.h"
+/* Macro to enable/disable run-time checking of function parameters */
+#if defined(RTE_LIBRTE_HASH_DEBUG)
+#define RETURN_IF_TRUE(cond, retval) do { \
+	if (cond) \
+		return retval; \
+} while (0)
+#else
+#define RETURN_IF_TRUE(cond, retval)
+#endif
+
+#if defined(RTE_ARCH_X86)
+#include "rte_cmp_x86.h"
+#endif
+
+#if defined(RTE_ARCH_ARM64)
+#include "rte_cmp_arm64.h"
+#endif
+
+/*
+ * All different options to select a key compare function,
+ * based on the key size and custom function.
+ * Not in rte_cuckoo_hash.h to avoid ABI issues.
+ */
+enum cmp_jump_table_case {
+	KEY_CUSTOM = 0,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	KEY_16_BYTES,
+	KEY_32_BYTES,
+	KEY_48_BYTES,
+	KEY_64_BYTES,
+	KEY_80_BYTES,
+	KEY_96_BYTES,
+	KEY_112_BYTES,
+	KEY_128_BYTES,
+#endif
+	KEY_OTHER_BYTES,
+	NUM_KEY_CMP_CASES,
+};
 
 /* Enum used to select the implementation of the signature comparison function to use
  * eg: a system supporting SVE might want to use a NEON or scalar implementation.
@@ -117,6 +154,25 @@ void rte_hash_set_cmp_func(struct rte_hash *h, rte_hash_cmp_eq_t func)
 	h->rte_hash_custom_cmp_eq = func;
 }
 
+/*
+ * Table storing all different key compare functions
+ * (multi-process supported)
+ */
+static const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
+	[KEY_CUSTOM] = NULL,
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	[KEY_16_BYTES] = rte_hash_k16_cmp_eq,
+	[KEY_32_BYTES] = rte_hash_k32_cmp_eq,
+	[KEY_48_BYTES] = rte_hash_k48_cmp_eq,
+	[KEY_64_BYTES] = rte_hash_k64_cmp_eq,
+	[KEY_80_BYTES] = rte_hash_k80_cmp_eq,
+	[KEY_96_BYTES] = rte_hash_k96_cmp_eq,
+	[KEY_112_BYTES] = rte_hash_k112_cmp_eq,
+	[KEY_128_BYTES] = rte_hash_k128_cmp_eq,
+#endif
+	[KEY_OTHER_BYTES] = memcmp,
+};
+
 static inline int
 rte_hash_cmp_eq(const void *key1, const void *key2, const struct rte_hash *h)
 {
@@ -390,13 +446,13 @@ rte_hash_create(const struct rte_hash_parameters *params)
 		goto err_unlock;
 	}
 
-/*
- * If x86 architecture is used, select appropriate compare function,
- * which may use x86 intrinsics, otherwise use memcmp
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/* Select function to compare keys */
 	switch (params->key_len) {
+#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
+	/*
+	 * If x86 architecture is used, select appropriate compare function,
+	 * which may use x86 intrinsics, otherwise use memcmp
+	 */
 	case 16:
 		h->cmp_jump_table_idx = KEY_16_BYTES;
 		break;
@@ -421,13 +477,11 @@ rte_hash_create(const struct rte_hash_parameters *params)
 	case 128:
 		h->cmp_jump_table_idx = KEY_128_BYTES;
 		break;
+#endif
 	default:
 		/* If key is not multiple of 16, use generic memcmp */
 		h->cmp_jump_table_idx = KEY_OTHER_BYTES;
 	}
-#else
-	h->cmp_jump_table_idx = KEY_OTHER_BYTES;
-#endif
 
 	if (use_local_cache) {
 		local_free_slots = rte_zmalloc_socket(NULL,
diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h
index 26a992419a..16fe999c4c 100644
--- a/lib/hash/rte_cuckoo_hash.h
+++ b/lib/hash/rte_cuckoo_hash.h
@@ -12,86 +12,9 @@
 #define _RTE_CUCKOO_HASH_H_
 
 #include <stdalign.h>
-
-#if defined(RTE_ARCH_X86)
-#include "rte_cmp_x86.h"
-#endif
-
-#if defined(RTE_ARCH_ARM64)
-#include "rte_cmp_arm64.h"
-#endif
-
-/* Macro to enable/disable run-time checking of function parameters */
-#if defined(RTE_LIBRTE_HASH_DEBUG)
-#define RETURN_IF_TRUE(cond, retval) do { \
-	if (cond) \
-		return retval; \
-} while (0)
-#else
-#define RETURN_IF_TRUE(cond, retval)
-#endif
-
 #include <rte_hash_crc.h>
 #include <rte_jhash.h>
 
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_16_BYTES,
-	KEY_32_BYTES,
-	KEY_48_BYTES,
-	KEY_64_BYTES,
-	KEY_80_BYTES,
-	KEY_96_BYTES,
-	KEY_112_BYTES,
-	KEY_128_BYTES,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	rte_hash_k16_cmp_eq,
-	rte_hash_k32_cmp_eq,
-	rte_hash_k48_cmp_eq,
-	rte_hash_k64_cmp_eq,
-	rte_hash_k80_cmp_eq,
-	rte_hash_k96_cmp_eq,
-	rte_hash_k112_cmp_eq,
-	rte_hash_k128_cmp_eq,
-	memcmp
-};
-#else
-/*
- * All different options to select a key compare function,
- * based on the key size and custom function.
- */
-enum cmp_jump_table_case {
-	KEY_CUSTOM = 0,
-	KEY_OTHER_BYTES,
-	NUM_KEY_CMP_CASES,
-};
-
-/*
- * Table storing all different key compare functions
- * (multi-process supported)
- */
-const rte_hash_cmp_eq_t cmp_jump_table[NUM_KEY_CMP_CASES] = {
-	NULL,
-	memcmp
-};
-
-#endif
-
-
 /**
  * Number of items per bucket.
  * 8 is a tradeoff between performance and memory consumption.
@@ -189,7 +112,7 @@ struct __rte_cache_aligned rte_hash {
 	uint32_t hash_func_init_val;    /**< Init value used by hash_func. */
 	rte_hash_cmp_eq_t rte_hash_custom_cmp_eq;
 	/**< Custom function used to compare keys. */
-	enum cmp_jump_table_case cmp_jump_table_idx;
+	unsigned int cmp_jump_table_idx;
 	/**< Indicates which compare function to use. */
 	unsigned int sig_cmp_fn;
 	/**< Indicates which signature compare function to use. */
-- 
2.47.3


^ permalink raw reply	[relevance 7%]

* Re: [PATCH 0/3] lib: fix AVX2 checks and macro exposure
  @ 2025-09-18  8:10  4% ` Thomas Monjalon
  2025-09-18  8:59  0%   ` Bruce Richardson
  0 siblings, 1 reply; 77+ results
From: Thomas Monjalon @ 2025-09-18  8:10 UTC (permalink / raw)
  To: bruce.richardson; +Cc: dev

18/09/2025 09:28, Thomas Monjalon:
> These are fixes for AVX2 in efd and member libraries.
> While at it, I've hidden a macro which was wrongly exported in the API
> without having a correct prefix.
> 
> Thomas Monjalon (3):
>   efd: fix AVX2 support
>   member: remove AVX2 build-time checks
>   member: hide internal macro

The AVX2 changes break the compilation of "x86-generic" with these messages:

lib/member/rte_member_x86.h: In function 'search_bucket_single_avx':
lib/member/rte_member_x86.h:35:28: error: AVX vector return without AVX enabled changes the ABI [-Werror=psabi]
   35 |         uint32_t hitmask = _mm256_movemask_epi8((__m256i)_mm256_cmpeq_epi16(

lib/efd/rte_efd_x86.h: In function 'efd_lookup_internal_avx2':
lib/efd/rte_efd_x86.h:24:17: error: AVX vector return without AVX enabled changes the ABI [-Werror=psabi]
   24 |         __m256i vhash_val_a = _mm256_set1_epi32(hash_val_a);

AVX2 must be forced on these headers.
The solution is probably to move these functions into .c files
declared as sources_avx2 in meson.build.
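
For illustration, a rough sketch of what that could look like for lib/member
(the new file name and the exact sources_avx2 variable handling are
assumptions, not a tested change):

    # lib/member/meson.build (sketch)
    # search_bucket_single_avx() and the other AVX2 helpers move out of
    # rte_member_x86.h into a new .c file built with AVX2 flags by the
    # common sources_avx2 handling; generic code only sees a prototype.
    sources_avx2 += files('rte_member_x86_avx2.c')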



^ permalink raw reply	[relevance 4%]

* Re: [PATCH 0/3] lib: fix AVX2 checks and macro exposure
  2025-09-18  8:10  4% ` Thomas Monjalon
@ 2025-09-18  8:59  0%   ` Bruce Richardson
  0 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-09-18  8:59 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On Thu, Sep 18, 2025 at 10:10:59AM +0200, Thomas Monjalon wrote:
> 18/09/2025 09:28, Thomas Monjalon:
> > These are fixes for AVX2 in efd and member libraries.
> > While at it, I've hidden a macro which was wrongly exported in the API
> > without having a correct prefix.
> > 
> > Thomas Monjalon (3):
> >   efd: fix AVX2 support
> >   member: remove AVX2 build-time checks
> >   member: hide internal macro
> 
> The AVX2 changes break the compilation of "x86-generic" with these messages:
> 
> lib/member/rte_member_x86.h: In function 'search_bucket_single_avx':
> lib/member/rte_member_x86.h:35:28: error: AVX vector return without AVX enabled changes the ABI [-Werror=psabi]
>    35 |         uint32_t hitmask = _mm256_movemask_epi8((__m256i)_mm256_cmpeq_epi16(
> 
> lib/efd/rte_efd_x86.h: In function 'efd_lookup_internal_avx2':
> lib/efd/rte_efd_x86.h:24:17: error: AVX vector return without AVX enabled changes the ABI [-Werror=psabi]
>    24 |         __m256i vhash_val_a = _mm256_set1_epi32(hash_val_a);
> 
> AVX2 must be forced on these headers.
> The solution is probably to move these functions in .c files
> declared as sources_avx2 in meson.build.
>
Yes, this is probably the best approach.

^ permalink raw reply	[relevance 0%]

* Re: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
  @ 2025-09-18 21:18  4%     ` Sameer Vaze
  2025-09-19  5:08  0%       ` Akhil Goyal
  0 siblings, 1 reply; 77+ results
From: Sameer Vaze @ 2025-09-18 21:18 UTC (permalink / raw)
  To: Akhil Goyal, Sunila Sahu, Fan Zhang, Ashish Gupta; +Cc: dev

Hey Akhil,

I attempted to split the changes into multiple patches and added a depends-on tag to the second patch. But automation does not seem to be picking up the patch as a dependency. Is there a process step I missed?

Patch 1: compress/zlib: support for dictionary and PDCP checksum - Patchwork <https://patches.dpdk.org/project/dpdk/patch/20250918204411.1701035-1-svaze@qti.qualcomm.com/>
Patch 2 with depends-on: app/compress-perf: support dictionary files - Patchwork <https://patches.dpdk.org/project/dpdk/patch/20250918210806.1709958-1-svaze@qti.qualcomm.com/>

Thanks
Sameer Vaze
________________________________
From: Akhil Goyal <gakhil@marvell.com>
Sent: Tuesday, June 17, 2025 3:34 PM
To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta <ashishg@marvell.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files

> compress/zlib: support PDCP checksum
>
> compress/zlib: support zlib dictionary
>
> compressdev: add PDCP checksum
>
> compressdev: support zlib dictionary
>
> Adds support to provide predefined dictionaries to zlib. Handles setting
> and getting of dictionaries using zlib apis. Also includes support to
> read dictionary files
>
> Adds support for passing in and validating 3GPP PDCP spec defined
> checksums as defined under the Uplink Data Compression (UDC) feature.
> Changes also include functions that do inflate or deflate specific
> checksum operations.
>
> Introduces new members to compression api structures to allow setting
> predefined dictionaries
>
> Signed-off-by: Sameer Vaze <svaze@qti.qualcomm.com>

Seems like multiple patches are squashed into a single patch

I see that this patch has ABI breaks.
We need to defer this patch for next ABI break release.
Please split the patch appropriately.
First patch should define the library changes.
And subsequently logically split PMD patches
Followed by application patches.
Ensure each patch is compilable.
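
For example, based on the titles listed above, the series could be laid out
roughly as follows (ordering and titles illustrative only):

  compressdev: add PDCP checksum
  compressdev: support zlib dictionary
  compress/zlib: support PDCP checksum
  compress/zlib: support zlib dictionary
  app/compress-perf: support dictionary files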

Since this patch is breaking ABI/API,
please send a deprecation notice to be merged in this release and
the implementation in the next release.

Also avoid unnecessary and irrelevant code changes.


^ permalink raw reply	[relevance 4%]

* RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
  2025-09-18 21:18  4%     ` Sameer Vaze
@ 2025-09-19  5:08  0%       ` Akhil Goyal
  2025-09-19 16:00  0%         ` Sameer Vaze
  0 siblings, 1 reply; 77+ results
From: Akhil Goyal @ 2025-09-19  5:08 UTC (permalink / raw)
  To: Sameer Vaze, Sunila Sahu, Fan Zhang, Ashish Gupta; +Cc: dev

Hi Sameer,
> Hey Akhil,
> 
> I attempted to split the changes into multiple patches and added a depends-on
> the second patch. But automation does not seem to be picking up the patch as a
> dependency. Is there a process step I messed up:

When you have dependent patches, you should send them as a series.
Automation runs only on the last patch in the series.
Currently it does not handle the depends-on tag; that is for reviewers for now.
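
For example (a sketch, with an illustrative output directory), both patches
can be sent together as one series so that CI applies them in order:

    git format-patch -2 --cover-letter -v2 -o outgoing/
    git send-email --to=dev@dpdk.org outgoing/*.patch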


> 
> 
> Patch 1: compress/zlib: support for dictionary and PDCP checksum - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918204411.1701035-1-svaze@qti.qualcomm.com/>
> Patch 2 with depends-on: app/compress-perf: support dictionary files - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918210806.1709958-1-svaze@qti.qualcomm.com/>
> 
> Thanks
> Sameer Vaze
> ________________________________
> 
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Tuesday, June 17, 2025 3:34 PM
> To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu
> <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta
> <ashishg@marvell.com>
> Cc: dev@dpdk.org <dev@dpdk.org>
> Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
> 
> > compress/zlib: support PDCP checksum
> >
> > compress/zlib: support zlib dictionary
> >
> > compressdev: add PDCP checksum
> >
> > compressdev: support zlib dictionary
> >
> > Adds support to provide predefined dictionaries to zlib. Handles setting
> > and getting of dictionaries using zlib apis. Also includes support to
> > read dictionary files
> >
> > Adds support for passing in and validating 3GPP PDCP spec defined
> > checksums as defined under the Uplink Data Compression (UDC) feature.
> > Changes also include functions that do inflate or deflate specific
> > checksum operations.
> >
> > Introduces new members to compression api structures to allow setting
> > predefined dictionaries
> >
> > Signed-off-by: Sameer Vaze <svaze@qti.qualcomm.com>
> 
> Seems like multiple patches are squashed into a single patch
> 
> I see that this patch has ABI breaks.
> We need to defer this patch for next ABI break release.
> Please split the patch appropriately.
> First patch should define the library changes.
> And subsequently logically broken PMD patches
> Followed by application patches.
> Ensure each patch is compilable.
> 
> Since this patch is breaking ABI/API,
> Please send a deprecation notice to be merged in this release and
> Implementation for next release.
> 
> Also avoid unnecessary and irrelevant code changes.
> 


^ permalink raw reply	[relevance 0%]

* [PATCH] build: remove deprecated kmods option
@ 2025-09-19  7:57  5% Bruce Richardson
  2025-09-19  8:44  5% ` [PATCH v2] " Bruce Richardson
  2025-09-23 14:40  4% ` [PATCH v3] " Bruce Richardson
  0 siblings, 2 replies; 77+ results
From: Bruce Richardson @ 2025-09-19  7:57 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson

The "enable_kmods" meson option was deprecated back in 2023[1], so can
now be removed from DPDK.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

[1] https://doc.dpdk.org/guides-23.11/rel_notes/deprecation.html
---
 doc/guides/rel_notes/deprecation.rst | 7 -------
 meson_options.txt                    | 2 --
 2 files changed, 9 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0fcdd02d3c..483030cda8 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,13 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
 Deprecation Notices
 -------------------
 
-* build: The ``enable_kmods`` option is deprecated and will be removed in a future release.
-  Setting/clearing the option has no impact on the build.
-  Instead, kernel modules will be always built for OS's where out-of-tree kernel modules
-  are required for DPDK operation.
-  Currently, this means that modules will only be built for FreeBSD.
-  No modules are shipped with DPDK for either Linux or Windows.
-
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
diff --git a/meson_options.txt b/meson_options.txt
index e49b2fc089..e28d24054c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -24,8 +24,6 @@ option('enable_drivers', type: 'string', value: '', description:
        'Comma-separated list of drivers to build. If unspecified, build all drivers.')
 option('enable_driver_sdk', type: 'boolean', value: false, description:
        'Install headers to build drivers.')
-option('enable_kmods', type: 'boolean', value: true, description:
-       '[Deprecated - will be removed in future release] build kernel modules')
 option('enable_libs', type: 'string', value: '', description:
        'Comma-separated list of optional libraries to explicitly enable. [NOTE: mandatory libs are always enabled]')
 option('examples', type: 'string', value: '', description:
-- 
2.48.1


^ permalink raw reply	[relevance 5%]

* [PATCH v3 09/10] uapi: import VFIO header
  @ 2025-09-19  8:38  1%   ` David Marchand
  0 siblings, 0 replies; 77+ results
From: David Marchand @ 2025-09-19  8:38 UTC (permalink / raw)
  To: dev; +Cc: thomas, maxime.coquelin, anatoly.burakov, stephen

Import VFIO header (from v6.16) to be included in many parts of DPDK.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v3:
- reimported header following script update,

---
 kernel/linux/uapi/linux/vfio.h | 1836 ++++++++++++++++++++++++++++++++
 kernel/linux/uapi/version      |    2 +-
 2 files changed, 1837 insertions(+), 1 deletion(-)
 create mode 100644 kernel/linux/uapi/linux/vfio.h

diff --git a/kernel/linux/uapi/linux/vfio.h b/kernel/linux/uapi/linux/vfio.h
new file mode 100644
index 0000000000..79bf8c0cc5
--- /dev/null
+++ b/kernel/linux/uapi/linux/vfio.h
@@ -0,0 +1,1836 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * VFIO API definition
+ *
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ *     Author: Alex Williamson <alex.williamson@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef VFIO_H
+#define VFIO_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define VFIO_API_VERSION	0
+
+
+/* Kernel & User level defines for VFIO IOCTLs. */
+
+/* Extensions */
+
+#define VFIO_TYPE1_IOMMU		1
+#define VFIO_SPAPR_TCE_IOMMU		2
+#define VFIO_TYPE1v2_IOMMU		3
+/*
+ * IOMMU enforces DMA cache coherence (ex. PCIe NoSnoop stripping).  This
+ * capability is subject to change as groups are added or removed.
+ */
+#define VFIO_DMA_CC_IOMMU		4
+
+/* Check if EEH is supported */
+#define VFIO_EEH			5
+
+/* Two-stage IOMMU */
+#define __VFIO_RESERVED_TYPE1_NESTING_IOMMU	6	/* Implies v2 */
+
+#define VFIO_SPAPR_TCE_v2_IOMMU		7
+
+/*
+ * The No-IOMMU IOMMU offers no translation or isolation for devices and
+ * supports no ioctls outside of VFIO_CHECK_EXTENSION.  Use of VFIO's No-IOMMU
+ * code will taint the host kernel and should be used with extreme caution.
+ */
+#define VFIO_NOIOMMU_IOMMU		8
+
+/* Supports VFIO_DMA_UNMAP_FLAG_ALL */
+#define VFIO_UNMAP_ALL			9
+
+/*
+ * Supports the vaddr flag for DMA map and unmap.  Not supported for mediated
+ * devices, so this capability is subject to change as groups are added or
+ * removed.
+ */
+#define VFIO_UPDATE_VADDR		10
+
+/*
+ * The IOCTL interface is designed for extensibility by embedding the
+ * structure length (argsz) and flags into structures passed between
+ * kernel and userspace.  We therefore use the _IO() macro for these
+ * defines to avoid implicitly embedding a size into the ioctl request.
+ * As structure fields are added, argsz will increase to match and flag
+ * bits will be defined to indicate additional fields with valid data.
+ * It's *always* the caller's responsibility to indicate the size of
+ * the structure passed by setting argsz appropriately.
+ */
+
+#define VFIO_TYPE	(';')
+#define VFIO_BASE	100
+
+/*
+ * For extension of INFO ioctls, VFIO makes use of a capability chain
+ * designed after PCI/e capabilities.  A flag bit indicates whether
+ * this capability chain is supported and a field defined in the fixed
+ * structure defines the offset of the first capability in the chain.
+ * This field is only valid when the corresponding bit in the flags
+ * bitmap is set.  This offset field is relative to the start of the
+ * INFO buffer, as is the next field within each capability header.
+ * The id within the header is a shared address space per INFO ioctl,
+ * while the version field is specific to the capability id.  The
+ * contents following the header are specific to the capability id.
+ */
+struct vfio_info_cap_header {
+	__u16	id;		/* Identifies capability */
+	__u16	version;	/* Version specific to the capability ID */
+	__u32	next;		/* Offset of next capability */
+};
+
+/*
+ * Callers of INFO ioctls passing insufficiently sized buffers will see
+ * the capability chain flag bit set, a zero value for the first capability
+ * offset (if available within the provided argsz), and argsz will be
+ * updated to report the necessary buffer size.  For compatibility, the
+ * INFO ioctl will not report error in this case, but the capability chain
+ * will not be available.
+ */
+
+/* -------- IOCTLs for VFIO file descriptor (/dev/vfio/vfio) -------- */
+
+/**
+ * VFIO_GET_API_VERSION - _IO(VFIO_TYPE, VFIO_BASE + 0)
+ *
+ * Report the version of the VFIO API.  This allows us to bump the entire
+ * API version should we later need to add or change features in incompatible
+ * ways.
+ * Return: VFIO_API_VERSION
+ * Availability: Always
+ */
+#define VFIO_GET_API_VERSION		_IO(VFIO_TYPE, VFIO_BASE + 0)
+
+/**
+ * VFIO_CHECK_EXTENSION - _IOW(VFIO_TYPE, VFIO_BASE + 1, __u32)
+ *
+ * Check whether an extension is supported.
+ * Return: 0 if not supported, 1 (or some other positive integer) if supported.
+ * Availability: Always
+ */
+#define VFIO_CHECK_EXTENSION		_IO(VFIO_TYPE, VFIO_BASE + 1)
+
+/**
+ * VFIO_SET_IOMMU - _IOW(VFIO_TYPE, VFIO_BASE + 2, __s32)
+ *
+ * Set the iommu to the given type.  The type must be supported by an
+ * iommu driver as verified by calling CHECK_EXTENSION using the same
+ * type.  A group must be set to this file descriptor before this
+ * ioctl is available.  The IOMMU interfaces enabled by this call are
+ * specific to the value set.
+ * Return: 0 on success, -errno on failure
+ * Availability: When VFIO group attached
+ */
+#define VFIO_SET_IOMMU			_IO(VFIO_TYPE, VFIO_BASE + 2)
+
+/* -------- IOCTLs for GROUP file descriptors (/dev/vfio/$GROUP) -------- */
+
+/**
+ * VFIO_GROUP_GET_STATUS - _IOR(VFIO_TYPE, VFIO_BASE + 3,
+ *						struct vfio_group_status)
+ *
+ * Retrieve information about the group.  Fills in provided
+ * struct vfio_group_info.  Caller sets argsz.
+ * Return: 0 on succes, -errno on failure.
+ * Availability: Always
+ */
+struct vfio_group_status {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_GROUP_FLAGS_VIABLE		(1 << 0)
+#define VFIO_GROUP_FLAGS_CONTAINER_SET	(1 << 1)
+};
+#define VFIO_GROUP_GET_STATUS		_IO(VFIO_TYPE, VFIO_BASE + 3)
+
+/**
+ * VFIO_GROUP_SET_CONTAINER - _IOW(VFIO_TYPE, VFIO_BASE + 4, __s32)
+ *
+ * Set the container for the VFIO group to the open VFIO file
+ * descriptor provided.  Groups may only belong to a single
+ * container.  Containers may, at their discretion, support multiple
+ * groups.  Only when a container is set are all of the interfaces
+ * of the VFIO file descriptor and the VFIO group file descriptor
+ * available to the user.
+ * Return: 0 on success, -errno on failure.
+ * Availability: Always
+ */
+#define VFIO_GROUP_SET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 4)
+
+/**
+ * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
+ *
+ * Remove the group from the attached container.  This is the
+ * opposite of the SET_CONTAINER call and returns the group to
+ * an initial state.  All device file descriptors must be released
+ * prior to calling this interface.  When removing the last group
+ * from a container, the IOMMU will be disabled and all state lost,
+ * effectively also returning the VFIO file descriptor to an initial
+ * state.
+ * Return: 0 on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_UNSET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 5)
+
+/**
+ * VFIO_GROUP_GET_DEVICE_FD - _IOW(VFIO_TYPE, VFIO_BASE + 6, char)
+ *
+ * Return a new file descriptor for the device object described by
+ * the provided string.  The string should match a device listed in
+ * the devices subdirectory of the IOMMU group sysfs entry.  The
+ * group containing the device must already be added to this context.
+ * Return: new file descriptor on success, -errno on failure.
+ * Availability: When attached to container
+ */
+#define VFIO_GROUP_GET_DEVICE_FD	_IO(VFIO_TYPE, VFIO_BASE + 6)
+
+/* --------------- IOCTLs for DEVICE file descriptors --------------- */
+
+/**
+ * VFIO_DEVICE_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 7,
+ *						struct vfio_device_info)
+ *
+ * Retrieve information about the device.  Fills in provided
+ * struct vfio_device_info.  Caller sets argsz.
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FLAGS_RESET	(1 << 0)	/* Device supports reset */
+#define VFIO_DEVICE_FLAGS_PCI	(1 << 1)	/* vfio-pci device */
+#define VFIO_DEVICE_FLAGS_PLATFORM (1 << 2)	/* vfio-platform device */
+#define VFIO_DEVICE_FLAGS_AMBA  (1 << 3)	/* vfio-amba device */
+#define VFIO_DEVICE_FLAGS_CCW	(1 << 4)	/* vfio-ccw device */
+#define VFIO_DEVICE_FLAGS_AP	(1 << 5)	/* vfio-ap device */
+#define VFIO_DEVICE_FLAGS_FSL_MC (1 << 6)	/* vfio-fsl-mc device */
+#define VFIO_DEVICE_FLAGS_CAPS	(1 << 7)	/* Info supports caps */
+#define VFIO_DEVICE_FLAGS_CDX	(1 << 8)	/* vfio-cdx device */
+	__u32	num_regions;	/* Max region index + 1 */
+	__u32	num_irqs;	/* Max IRQ index + 1 */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+#define VFIO_DEVICE_GET_INFO		_IO(VFIO_TYPE, VFIO_BASE + 7)
+
+/*
+ * Vendor driver using Mediated device framework should provide device_api
+ * attribute in supported type attribute groups. Device API string should be one
+ * of the following corresponding to device flags in vfio_device_info structure.
+ */
+
+#define VFIO_DEVICE_API_PCI_STRING		"vfio-pci"
+#define VFIO_DEVICE_API_PLATFORM_STRING		"vfio-platform"
+#define VFIO_DEVICE_API_AMBA_STRING		"vfio-amba"
+#define VFIO_DEVICE_API_CCW_STRING		"vfio-ccw"
+#define VFIO_DEVICE_API_AP_STRING		"vfio-ap"
+
+/*
+ * The following capabilities are unique to s390 zPCI devices.  Their contents
+ * are further-defined in vfio_zdev.h
+ */
+#define VFIO_DEVICE_INFO_CAP_ZPCI_BASE		1
+#define VFIO_DEVICE_INFO_CAP_ZPCI_GROUP		2
+#define VFIO_DEVICE_INFO_CAP_ZPCI_UTIL		3
+#define VFIO_DEVICE_INFO_CAP_ZPCI_PFIP		4
+
+/*
+ * The following VFIO_DEVICE_INFO capability reports support for PCIe AtomicOp
+ * completion to the root bus with supported widths provided via flags.
+ */
+#define VFIO_DEVICE_INFO_CAP_PCI_ATOMIC_COMP	5
+struct vfio_device_info_cap_pci_atomic_comp {
+	struct vfio_info_cap_header header;
+	__u32 flags;
+#define VFIO_PCI_ATOMIC_COMP32	(1 << 0)
+#define VFIO_PCI_ATOMIC_COMP64	(1 << 1)
+#define VFIO_PCI_ATOMIC_COMP128	(1 << 2)
+	__u32 reserved;
+};
+
+/**
+ * VFIO_DEVICE_GET_REGION_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 8,
+ *				       struct vfio_region_info)
+ *
+ * Retrieve information about a device region.  Caller provides
+ * struct vfio_region_info with index value set.  Caller sets argsz.
+ * Implementation of region mapping is bus driver specific.  This is
+ * intended to describe MMIO, I/O port, as well as bus specific
+ * regions (ex. PCI config space).  Zero sized regions may be used
+ * to describe unimplemented regions (ex. unimplemented PCI BARs).
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_region_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_REGION_INFO_FLAG_READ	(1 << 0) /* Region supports read */
+#define VFIO_REGION_INFO_FLAG_WRITE	(1 << 1) /* Region supports write */
+#define VFIO_REGION_INFO_FLAG_MMAP	(1 << 2) /* Region supports mmap */
+#define VFIO_REGION_INFO_FLAG_CAPS	(1 << 3) /* Info supports caps */
+	__u32	index;		/* Region index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
+	__aligned_u64	size;	/* Region size (bytes) */
+	__aligned_u64	offset;	/* Region offset from start of device fd */
+};
+#define VFIO_DEVICE_GET_REGION_INFO	_IO(VFIO_TYPE, VFIO_BASE + 8)
+
+/*
+ * The sparse mmap capability allows finer granularity of specifying areas
+ * within a region with mmap support.  When specified, the user should only
+ * mmap the offset ranges specified by the areas array.  mmaps outside of the
+ * areas specified may fail (such as the range covering a PCI MSI-X table) or
+ * may result in improper device behavior.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_SPARSE_MMAP	1
+
+struct vfio_region_sparse_mmap_area {
+	__aligned_u64	offset;	/* Offset of mmap'able area within region */
+	__aligned_u64	size;	/* Size of mmap'able area */
+};
+
+struct vfio_region_info_cap_sparse_mmap {
+	struct vfio_info_cap_header header;
+	__u32	nr_areas;
+	__u32	reserved;
+	struct vfio_region_sparse_mmap_area areas[];
+};
+
+/*
+ * The device specific type capability allows regions unique to a specific
+ * device or class of devices to be exposed.  This helps solve the problem for
+ * vfio bus drivers of defining which region indexes correspond to which region
+ * on the device, without needing to resort to static indexes, as done by
+ * vfio-pci.  For instance, if we were to go back in time, we might remove
+ * VFIO_PCI_VGA_REGION_INDEX and let vfio-pci simply define that all indexes
+ * greater than or equal to VFIO_PCI_NUM_REGIONS are device specific and we'd
+ * make a "VGA" device specific type to describe the VGA access space.  This
+ * means that non-VGA devices wouldn't need to waste this index, and thus the
+ * address space associated with it due to implementation of device file
+ * descriptor offsets in vfio-pci.
+ *
+ * The current implementation is now part of the user ABI, so we can't use this
+ * for VGA, but there are other upcoming use cases, such as opregions for Intel
+ * IGD devices and framebuffers for vGPU devices.  We missed VGA, but we'll
+ * use this for future additions.
+ *
+ * The structure below defines version 1 of this capability.
+ */
+#define VFIO_REGION_INFO_CAP_TYPE	2
+
+struct vfio_region_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;	/* global per bus driver */
+	__u32 subtype;	/* type specific */
+};
+
+/*
+ * List of region types, global per bus driver.
+ * If you introduce a new type, please add it here.
+ */
+
+/* PCI region type containing a PCI vendor part */
+#define VFIO_REGION_TYPE_PCI_VENDOR_TYPE	(1 << 31)
+#define VFIO_REGION_TYPE_PCI_VENDOR_MASK	(0xffff)
+#define VFIO_REGION_TYPE_GFX                    (1)
+#define VFIO_REGION_TYPE_CCW			(2)
+#define VFIO_REGION_TYPE_MIGRATION_DEPRECATED   (3)
+
+/* sub-types for VFIO_REGION_TYPE_PCI_* */
+
+/* 8086 vendor PCI sub-types */
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_OPREGION	(1)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_HOST_CFG	(2)
+#define VFIO_REGION_SUBTYPE_INTEL_IGD_LPC_CFG	(3)
+
+/* 10de vendor PCI sub-types */
+/*
+ * NVIDIA GPU NVlink2 RAM is coherent RAM mapped onto the host address space.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_NVIDIA_NVLINK2_RAM	(1)
+
+/* 1014 vendor PCI sub-types */
+/*
+ * IBM NPU NVlink2 ATSD (Address Translation Shootdown) register of NPU
+ * to do TLB invalidation on a GPU.
+ *
+ * Deprecated, region no longer provided
+ */
+#define VFIO_REGION_SUBTYPE_IBM_NVLINK2_ATSD	(1)
+
+/* sub-types for VFIO_REGION_TYPE_GFX */
+#define VFIO_REGION_SUBTYPE_GFX_EDID            (1)
+
+/**
+ * struct vfio_region_gfx_edid - EDID region layout.
+ *
+ * Set display link state and EDID blob.
+ *
+ * The EDID blob has monitor information such as brand, name, serial
+ * number, physical size, supported video modes and more.
+ *
+ * This special region allows userspace (typically qemu) set a virtual
+ * EDID for the virtual monitor, which allows a flexible display
+ * configuration.
+ *
+ * For the edid blob spec look here:
+ *    https://en.wikipedia.org/wiki/Extended_Display_Identification_Data
+ *
+ * On linux systems you can find the EDID blob in sysfs:
+ *    /sys/class/drm/${card}/${connector}/edid
+ *
+ * You can use the edid-decode ulility (comes with xorg-x11-utils) to
+ * decode the EDID blob.
+ *
+ * @edid_offset: location of the edid blob, relative to the
+ *               start of the region (readonly).
+ * @edid_max_size: max size of the edid blob (readonly).
+ * @edid_size: actual edid size (read/write).
+ * @link_state: display link state (read/write).
+ * VFIO_DEVICE_GFX_LINK_STATE_UP: Monitor is turned on.
+ * VFIO_DEVICE_GFX_LINK_STATE_DOWN: Monitor is turned off.
+ * @max_xres: max display width (0 == no limitation, readonly).
+ * @max_yres: max display height (0 == no limitation, readonly).
+ *
+ * EDID update protocol:
+ *   (1) set link-state to down.
+ *   (2) update edid blob and size.
+ *   (3) set link-state to up.
+ */
+struct vfio_region_gfx_edid {
+	__u32 edid_offset;
+	__u32 edid_max_size;
+	__u32 edid_size;
+	__u32 max_xres;
+	__u32 max_yres;
+	__u32 link_state;
+#define VFIO_DEVICE_GFX_LINK_STATE_UP    1
+#define VFIO_DEVICE_GFX_LINK_STATE_DOWN  2
+};
+
+/* sub-types for VFIO_REGION_TYPE_CCW */
+#define VFIO_REGION_SUBTYPE_CCW_ASYNC_CMD	(1)
+#define VFIO_REGION_SUBTYPE_CCW_SCHIB		(2)
+#define VFIO_REGION_SUBTYPE_CCW_CRW		(3)
+
+/* sub-types for VFIO_REGION_TYPE_MIGRATION */
+#define VFIO_REGION_SUBTYPE_MIGRATION_DEPRECATED (1)
+
+struct vfio_device_migration_info {
+	__u32 device_state;         /* VFIO device state */
+#define VFIO_DEVICE_STATE_V1_STOP      (0)
+#define VFIO_DEVICE_STATE_V1_RUNNING   (1 << 0)
+#define VFIO_DEVICE_STATE_V1_SAVING    (1 << 1)
+#define VFIO_DEVICE_STATE_V1_RESUMING  (1 << 2)
+#define VFIO_DEVICE_STATE_MASK      (VFIO_DEVICE_STATE_V1_RUNNING | \
+				     VFIO_DEVICE_STATE_V1_SAVING |  \
+				     VFIO_DEVICE_STATE_V1_RESUMING)
+
+#define VFIO_DEVICE_STATE_VALID(state) \
+	(state & VFIO_DEVICE_STATE_V1_RESUMING ? \
+	(state & VFIO_DEVICE_STATE_MASK) == VFIO_DEVICE_STATE_V1_RESUMING : 1)
+
+#define VFIO_DEVICE_STATE_IS_ERROR(state) \
+	((state & VFIO_DEVICE_STATE_MASK) == (VFIO_DEVICE_STATE_V1_SAVING | \
+					      VFIO_DEVICE_STATE_V1_RESUMING))
+
+#define VFIO_DEVICE_STATE_SET_ERROR(state) \
+	((state & ~VFIO_DEVICE_STATE_MASK) | VFIO_DEVICE_STATE_V1_SAVING | \
+					     VFIO_DEVICE_STATE_V1_RESUMING)
+
+	__u32 reserved;
+	__aligned_u64 pending_bytes;
+	__aligned_u64 data_offset;
+	__aligned_u64 data_size;
+};
+
+/*
+ * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
+ * which allows direct access to non-MSIX registers which happened to be within
+ * the same system page.
+ *
+ * Even though the userspace gets direct access to the MSIX data, the existing
+ * VFIO_DEVICE_SET_IRQS interface must still be used for MSIX configuration.
+ */
+#define VFIO_REGION_INFO_CAP_MSIX_MAPPABLE	3
+
+/*
+ * Capability with compressed real address (aka SSA - small system address)
+ * where GPU RAM is mapped on a system bus. Used by a GPU for DMA routing
+ * and by the userspace to associate a NVLink bridge with a GPU.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_SSATGT	4
+
+struct vfio_region_info_cap_nvlink2_ssatgt {
+	struct vfio_info_cap_header header;
+	__aligned_u64 tgt;
+};
+
+/*
+ * Capability with an NVLink link speed. The value is read by
+ * the NVlink2 bridge driver from the bridge's "ibm,nvlink-speed"
+ * property in the device tree. The value is fixed in the hardware
+ * and failing to provide the correct value results in the link
+ * not working with no indication from the driver why.
+ *
+ * Deprecated, capability no longer provided
+ */
+#define VFIO_REGION_INFO_CAP_NVLINK2_LNKSPD	5
+
+struct vfio_region_info_cap_nvlink2_lnkspd {
+	struct vfio_info_cap_header header;
+	__u32 link_speed;
+	__u32 __pad;
+};
+
+/**
+ * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
+ *				    struct vfio_irq_info)
+ *
+ * Retrieve information about a device IRQ.  Caller provides
+ * struct vfio_irq_info with index value set.  Caller sets argsz.
+ * Implementation of IRQ mapping is bus driver specific.  Indexes
+ * using multiple IRQs are primarily intended to support MSI-like
+ * interrupt blocks.  Zero count irq blocks may be used to describe
+ * unimplemented interrupt types.
+ *
+ * The EVENTFD flag indicates the interrupt index supports eventfd based
+ * signaling.
+ *
+ * The MASKABLE flags indicates the index supports MASK and UNMASK
+ * actions described below.
+ *
+ * AUTOMASKED indicates that after signaling, the interrupt line is
+ * automatically masked by VFIO and the user needs to unmask the line
+ * to receive new interrupts.  This is primarily intended to distinguish
+ * level triggered interrupts.
+ *
+ * The NORESIZE flag indicates that the interrupt lines within the index
+ * are setup as a set and new subindexes cannot be enabled without first
+ * disabling the entire index.  This is used for interrupts like PCI MSI
+ * and MSI-X where the driver may only use a subset of the available
+ * indexes, but VFIO needs to enable a specific number of vectors
+ * upfront.  In the case of MSI-X, where the user can enable MSI-X and
+ * then add and unmask vectors, it's up to userspace to make the decision
+ * whether to allocate the maximum supported number of vectors or tear
+ * down setup and incrementally increase the vectors as each is enabled.
+ * Absence of the NORESIZE flag indicates that vectors can be enabled
+ * and disabled dynamically without impacting other vectors within the
+ * index.
+ */
+struct vfio_irq_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_INFO_EVENTFD		(1 << 0)
+#define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
+#define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
+#define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+	__u32	index;		/* IRQ index */
+	__u32	count;		/* Number of IRQs within this index */
+};
+#define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
+
+/**
+ * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
+ *
+ * Set signaling, masking, and unmasking of interrupts.  Caller provides
+ * struct vfio_irq_set with all fields set.  'start' and 'count' indicate
+ * the range of subindexes being specified.
+ *
+ * The DATA flags specify the type of data provided.  If DATA_NONE, the
+ * operation performs the specified action immediately on the specified
+ * interrupt(s).  For example, to unmask AUTOMASKED interrupt [0,0]:
+ * flags = (DATA_NONE|ACTION_UNMASK), index = 0, start = 0, count = 1.
+ *
+ * DATA_BOOL allows sparse support for the same on arrays of interrupts.
+ * For example, to mask interrupts [0,1] and [0,3] (but not [0,2]):
+ * flags = (DATA_BOOL|ACTION_MASK), index = 0, start = 1, count = 3,
+ * data = {1,0,1}
+ *
+ * DATA_EVENTFD binds the specified ACTION to the provided __s32 eventfd.
+ * A value of -1 can be used to either de-assign interrupts if already
+ * assigned or skip un-assigned interrupts.  For example, to set an eventfd
+ * to be trigger for interrupts [0,0] and [0,2]:
+ * flags = (DATA_EVENTFD|ACTION_TRIGGER), index = 0, start = 0, count = 3,
+ * data = {fd1, -1, fd2}
+ * If index [0,1] is previously set, two count = 1 ioctls calls would be
+ * required to set [0,0] and [0,2] without changing [0,1].
+ *
+ * Once a signaling mechanism is set, DATA_BOOL or DATA_NONE can be used
+ * with ACTION_TRIGGER to perform kernel level interrupt loopback testing
+ * from userspace (ie. simulate hardware triggering).
+ *
+ * Setting of an event triggering mechanism to userspace for ACTION_TRIGGER
+ * enables the interrupt index for the device.  Individual subindex interrupts
+ * can be disabled using the -1 value for DATA_EVENTFD or the index can be
+ * disabled as a whole with: flags = (DATA_NONE|ACTION_TRIGGER), count = 0.
+ *
+ * Note that ACTION_[UN]MASK specify user->kernel signaling (irqfds) while
+ * ACTION_TRIGGER specifies kernel->user signaling.
+ */
+struct vfio_irq_set {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IRQ_SET_DATA_NONE		(1 << 0) /* Data not present */
+#define VFIO_IRQ_SET_DATA_BOOL		(1 << 1) /* Data is bool (u8) */
+#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2) /* Data is eventfd (s32) */
+#define VFIO_IRQ_SET_ACTION_MASK	(1 << 3) /* Mask interrupt */
+#define VFIO_IRQ_SET_ACTION_UNMASK	(1 << 4) /* Unmask interrupt */
+#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5) /* Trigger interrupt */
+	__u32	index;
+	__u32	start;
+	__u32	count;
+	__u8	data[];
+};
+#define VFIO_DEVICE_SET_IRQS		_IO(VFIO_TYPE, VFIO_BASE + 10)
+
+#define VFIO_IRQ_SET_DATA_TYPE_MASK	(VFIO_IRQ_SET_DATA_NONE | \
+					 VFIO_IRQ_SET_DATA_BOOL | \
+					 VFIO_IRQ_SET_DATA_EVENTFD)
+#define VFIO_IRQ_SET_ACTION_TYPE_MASK	(VFIO_IRQ_SET_ACTION_MASK | \
+					 VFIO_IRQ_SET_ACTION_UNMASK | \
+					 VFIO_IRQ_SET_ACTION_TRIGGER)
+/**
+ * VFIO_DEVICE_RESET - _IO(VFIO_TYPE, VFIO_BASE + 11)
+ *
+ * Reset a device.
+ */
+#define VFIO_DEVICE_RESET		_IO(VFIO_TYPE, VFIO_BASE + 11)
+
+/*
+ * The VFIO-PCI bus driver makes use of the following fixed region and
+ * IRQ index mapping.  Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_PCI_BAR0_REGION_INDEX,
+	VFIO_PCI_BAR1_REGION_INDEX,
+	VFIO_PCI_BAR2_REGION_INDEX,
+	VFIO_PCI_BAR3_REGION_INDEX,
+	VFIO_PCI_BAR4_REGION_INDEX,
+	VFIO_PCI_BAR5_REGION_INDEX,
+	VFIO_PCI_ROM_REGION_INDEX,
+	VFIO_PCI_CONFIG_REGION_INDEX,
+	/*
+	 * Expose VGA regions defined for PCI base class 03, subclass 00.
+	 * This includes I/O port ranges 0x3b0 to 0x3bb and 0x3c0 to 0x3df
+	 * as well as the MMIO range 0xa0000 to 0xbffff.  Each implemented
+	 * range is found at it's identity mapped offset from the region
+	 * offset, for example 0x3b0 is region_info.offset + 0x3b0.  Areas
+	 * between described ranges are unimplemented.
+	 */
+	VFIO_PCI_VGA_REGION_INDEX,
+	VFIO_PCI_NUM_REGIONS = 9 /* Fixed user ABI, region indexes >=9 use */
+				 /* device specific cap to define content. */
+};
+
+enum {
+	VFIO_PCI_INTX_IRQ_INDEX,
+	VFIO_PCI_MSI_IRQ_INDEX,
+	VFIO_PCI_MSIX_IRQ_INDEX,
+	VFIO_PCI_ERR_IRQ_INDEX,
+	VFIO_PCI_REQ_IRQ_INDEX,
+	VFIO_PCI_NUM_IRQS
+};
+
+/*
+ * The vfio-ccw bus driver makes use of the following fixed region and
+ * IRQ index mapping. Unimplemented regions return a size of zero.
+ * Unimplemented IRQ types return a count of zero.
+ */
+
+enum {
+	VFIO_CCW_CONFIG_REGION_INDEX,
+	VFIO_CCW_NUM_REGIONS
+};
+
+enum {
+	VFIO_CCW_IO_IRQ_INDEX,
+	VFIO_CCW_CRW_IRQ_INDEX,
+	VFIO_CCW_REQ_IRQ_INDEX,
+	VFIO_CCW_NUM_IRQS
+};
+
+/*
+ * The vfio-ap bus driver makes use of the following IRQ index mapping.
+ * Unimplemented IRQ types return a count of zero.
+ */
+enum {
+	VFIO_AP_REQ_IRQ_INDEX,
+	VFIO_AP_CFG_CHG_IRQ_INDEX,
+	VFIO_AP_NUM_IRQS
+};
+
+/**
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 12,
+ *					      struct vfio_pci_hot_reset_info)
+ *
+ * This command is used to query the affected devices in the hot reset for
+ * a given device.
+ *
+ * This command always reports the segment, bus, and devfn information for
+ * each affected device, and selectively reports the group_id or devid per
+ * the way how the calling device is opened.
+ *
+ *	- If the calling device is opened via the traditional group/container
+ *	  API, group_id is reported.  User should check if it has owned all
+ *	  the affected devices and provides a set of group fds to prove the
+ *	  ownership in VFIO_DEVICE_PCI_HOT_RESET ioctl.
+ *
+ *	- If the calling device is opened as a cdev, devid is reported.
+ *	  Flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set to indicate this
+ *	  data type.  All the affected devices should be represented in
+ *	  the dev_set, ex. bound to a vfio driver, and also be owned by
+ *	  this interface which is determined by the following conditions:
+ *	  1) Has a valid devid within the iommufd_ctx of the calling device.
+ *	     Ownership cannot be determined across separate iommufd_ctx and
+ *	     the cdev calling conventions do not support a proof-of-ownership
+ *	     model as provided in the legacy group interface.  In this case
+ *	     valid devid with value greater than zero is provided in the return
+ *	     structure.
+ *	  2) Does not have a valid devid within the iommufd_ctx of the calling
+ *	     device, but belongs to the same IOMMU group as the calling device
+ *	     or another opened device that has a valid devid within the
+ *	     iommufd_ctx of the calling device.  This provides implicit ownership
+ *	     for devices within the same DMA isolation context.  In this case
+ *	     the devid value of VFIO_PCI_DEVID_OWNED is provided in the return
+ *	     structure.
+ *
+ *	  A devid value of VFIO_PCI_DEVID_NOT_OWNED is provided in the return
+ *	  structure for affected devices where device is NOT represented in the
+ *	  dev_set or ownership is not available.  Such devices prevent the use
+ *	  of VFIO_DEVICE_PCI_HOT_RESET ioctl outside of the proof-of-ownership
+ *	  calling conventions (ie. via legacy group accessed devices).  Flag
+ *	  VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED would be set when all the
+ *	  affected devices are represented in the dev_set and also owned by
+ *	  the user.  This flag is available only when
+ *	  flag VFIO_PCI_HOT_RESET_FLAG_DEV_ID is set, otherwise reserved.
+ *	  When set, user could invoke VFIO_DEVICE_PCI_HOT_RESET with a zero
+ *	  length fd array on the calling device as the ownership is validated
+ *	  by iommufd_ctx.
+ *
+ * Return: 0 on success, -errno on failure:
+ *	-enospc = insufficient buffer, -enodev = unsupported for device.
+ */
+struct vfio_pci_dependent_device {
+	union {
+		__u32   group_id;
+		__u32	devid;
+#define VFIO_PCI_DEVID_OWNED		0
+#define VFIO_PCI_DEVID_NOT_OWNED	-1
+	};
+	__u16	segment;
+	__u8	bus;
+	__u8	devfn; /* Use PCI_SLOT/PCI_FUNC */
+};
+
+struct vfio_pci_hot_reset_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID		(1 << 0)
+#define VFIO_PCI_HOT_RESET_FLAG_DEV_ID_OWNED	(1 << 1)
+	__u32	count;
+	struct vfio_pci_dependent_device	devices[];
+};
+
+#define VFIO_DEVICE_GET_PCI_HOT_RESET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/**
+ * VFIO_DEVICE_PCI_HOT_RESET - _IOW(VFIO_TYPE, VFIO_BASE + 13,
+ *				    struct vfio_pci_hot_reset)
+ *
+ * A PCI hot reset results in either a bus or slot reset which may affect
+ * other devices sharing the bus/slot.  The calling user must have
+ * ownership of the full set of affected devices as determined by the
+ * VFIO_DEVICE_GET_PCI_HOT_RESET_INFO ioctl.
+ *
+ * When called on a device file descriptor acquired through the vfio
+ * group interface, the user is required to provide proof of ownership
+ * of those affected devices via the group_fds array in struct
+ * vfio_pci_hot_reset.
+ *
+ * When called on a direct cdev opened vfio device, the flags field of
+ * struct vfio_pci_hot_reset_info reports the ownership status of the
+ * affected devices and this ioctl must be called with an empty group_fds
+ * array.  See above INFO ioctl definition for ownership requirements.
+ *
+ * Mixed usage of legacy groups and cdevs across the set of affected
+ * devices is not supported.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_pci_hot_reset {
+	__u32	argsz;
+	__u32	flags;
+	__u32	count;
+	__s32	group_fds[];
+};
+
+#define VFIO_DEVICE_PCI_HOT_RESET	_IO(VFIO_TYPE, VFIO_BASE + 13)
+
+/**
+ * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
+ *                                    struct vfio_device_query_gfx_plane)
+ *
+ * Set the drm_plane_type and flags, then retrieve the gfx plane info.
+ *
+ * flags supported:
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_DMABUF are set
+ *   to ask if the mdev supports dma-buf. 0 on support, -EINVAL on no
+ *   support for dma-buf.
+ * - VFIO_GFX_PLANE_TYPE_PROBE and VFIO_GFX_PLANE_TYPE_REGION are set
+ *   to ask if the mdev supports region. 0 on support, -EINVAL on no
+ *   support for region.
+ * - VFIO_GFX_PLANE_TYPE_DMABUF or VFIO_GFX_PLANE_TYPE_REGION is set
+ *   with each call to query the plane info.
+ * - Others are invalid and return -EINVAL.
+ *
+ * Note:
+ * 1. Plane could be disabled by guest. In that case, success will be
+ *    returned with zero-initialized drm_format, size, width and height
+ *    fields.
+ * 2. x_hot/y_hot is set to 0xFFFFFFFF if no hotspot information available
+ *
+ * Return: 0 on success, -errno on other failure.
+ */
+struct vfio_device_gfx_plane_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_GFX_PLANE_TYPE_PROBE (1 << 0)
+#define VFIO_GFX_PLANE_TYPE_DMABUF (1 << 1)
+#define VFIO_GFX_PLANE_TYPE_REGION (1 << 2)
+	/* in */
+	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
+	/* out */
+	__u32 drm_format;	/* drm format of plane */
+	__aligned_u64 drm_format_mod;   /* tiled mode */
+	__u32 width;	/* width of plane */
+	__u32 height;	/* height of plane */
+	__u32 stride;	/* stride of plane */
+	__u32 size;	/* size of plane in bytes, align on page*/
+	__u32 x_pos;	/* horizontal position of cursor plane */
+	__u32 y_pos;	/* vertical position of cursor plane*/
+	__u32 x_hot;    /* horizontal position of cursor hotspot */
+	__u32 y_hot;    /* vertical position of cursor hotspot */
+	union {
+		__u32 region_index;	/* region index */
+		__u32 dmabuf_id;	/* dma-buf id */
+	};
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/**
+ * VFIO_DEVICE_GET_GFX_DMABUF - _IOW(VFIO_TYPE, VFIO_BASE + 15, __u32)
+ *
+ * Return a new dma-buf file descriptor for an exposed guest framebuffer
+ * described by the provided dmabuf_id. The dmabuf_id is returned from VFIO_
+ * DEVICE_QUERY_GFX_PLANE as a token of the exposed guest framebuffer.
+ */
+
+#define VFIO_DEVICE_GET_GFX_DMABUF _IO(VFIO_TYPE, VFIO_BASE + 15)
+
+/**
+ * VFIO_DEVICE_IOEVENTFD - _IOW(VFIO_TYPE, VFIO_BASE + 16,
+ *                              struct vfio_device_ioeventfd)
+ *
+ * Perform a write to the device at the specified device fd offset, with
+ * the specified data and width when the provided eventfd is triggered.
+ * vfio bus drivers may not support this for all regions, for all widths,
+ * or at all.  vfio-pci currently only enables support for BAR regions,
+ * excluding the MSI-X vector table.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_ioeventfd {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_IOEVENTFD_8		(1 << 0) /* 1-byte write */
+#define VFIO_DEVICE_IOEVENTFD_16	(1 << 1) /* 2-byte write */
+#define VFIO_DEVICE_IOEVENTFD_32	(1 << 2) /* 4-byte write */
+#define VFIO_DEVICE_IOEVENTFD_64	(1 << 3) /* 8-byte write */
+#define VFIO_DEVICE_IOEVENTFD_SIZE_MASK	(0xf)
+	__aligned_u64	offset;		/* device fd offset of write */
+	__aligned_u64	data;		/* data to be written */
+	__s32	fd;			/* -1 for de-assignment */
+	__u32	reserved;
+};
+
+#define VFIO_DEVICE_IOEVENTFD		_IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/**
+ * VFIO_DEVICE_FEATURE - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *			       struct vfio_device_feature)
+ *
+ * Get, set, or probe feature data of the device.  The feature is selected
+ * using the FEATURE_MASK portion of the flags field.  Support for a feature
+ * can be probed by setting both the FEATURE_MASK and PROBE bits.  A probe
+ * may optionally include the GET and/or SET bits to determine read vs write
+ * access of the feature respectively.  Probing a feature will return success
+ * if the feature is supported and all of the optionally indicated GET/SET
+ * methods are supported.  The format of the data portion of the structure is
+ * specific to the given feature.  The data portion is not required for
+ * probing.  GET and SET are mutually exclusive, except for use with PROBE.
+ *
+ * Return 0 on success, -errno on failure.
+ */
+struct vfio_device_feature {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_FEATURE_MASK	(0xffff) /* 16-bit feature index */
+#define VFIO_DEVICE_FEATURE_GET		(1 << 16) /* Get feature into data[] */
+#define VFIO_DEVICE_FEATURE_SET		(1 << 17) /* Set feature from data[] */
+#define VFIO_DEVICE_FEATURE_PROBE	(1 << 18) /* Probe feature support */
+	__u8	data[];
+};
+
+#define VFIO_DEVICE_FEATURE		_IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/*
+ * VFIO_DEVICE_BIND_IOMMUFD - _IOR(VFIO_TYPE, VFIO_BASE + 18,
+ *				   struct vfio_device_bind_iommufd)
+ * @argsz:	 User filled size of this data.
+ * @flags:	 Must be 0.
+ * @iommufd:	 iommufd to bind.
+ * @out_devid:	 The device id generated by this bind. devid is a handle for
+ *		 this device/iommufd bond and can be used in IOMMUFD commands.
+ *
+ * Bind a vfio_device to the specified iommufd.
+ *
+ * User is restricted from accessing the device before the binding operation
+ * is completed.  Only allowed on cdev fds.
+ *
+ * Unbind is automatically conducted when device fd is closed.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_bind_iommufd {
+	__u32		argsz;
+	__u32		flags;
+	__s32		iommufd;
+	__u32		out_devid;
+};
+
+#define VFIO_DEVICE_BIND_IOMMUFD	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/*
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 19,
+ *					struct vfio_device_attach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for attach.
+ * @pt_id:	Input the target id which can represent an ioas or a hwpt
+ *		allocated via iommufd subsystem.
+ *		Output the input ioas id or the attached hwpt id which could
+ *		be the specified hwpt itself or a hwpt automatically created
+ *		for the specified ioas by kernel during the attachment.
+ * @pasid:	The pasid to be attached, only meaningful when
+ *		VFIO_DEVICE_ATTACH_PASID is set in @flags
+ *
+ * Associate the device with an address space within the bound iommufd.
+ * Undo by VFIO_DEVICE_DETACH_IOMMUFD_PT or device fd close.  This is only
+ * allowed on cdev fds.
+ *
+ * If a vfio device or a pasid of this device is currently attached to a valid
+ * hw_pagetable (hwpt), without doing a VFIO_DEVICE_DETACH_IOMMUFD_PT, a second
+ * VFIO_DEVICE_ATTACH_IOMMUFD_PT ioctl passing in another hwpt id is allowed.
+ * This action, also known as a hw_pagetable replacement, will replace the
+ * currently attached hwpt of the device or the pasid of this device with a new
+ * hwpt corresponding to the given pt_id.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_attach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_ATTACH_PASID	(1 << 0)
+	__u32	pt_id;
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_ATTACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/*
+ * VFIO_DEVICE_DETACH_IOMMUFD_PT - _IOW(VFIO_TYPE, VFIO_BASE + 20,
+ *					struct vfio_device_detach_iommufd_pt)
+ * @argsz:	User filled size of this data.
+ * @flags:	Flags for detach.
+ * @pasid:	The pasid to be detached, only meaningful when
+ *		VFIO_DEVICE_DETACH_PASID is set in @flags
+ *
+ * Remove the association of the device or a pasid of the device and its current
+ * associated address space.  After it, the device or the pasid should be in a
+ * blocking DMA state.  This is only allowed on cdev fds.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+struct vfio_device_detach_iommufd_pt {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DEVICE_DETACH_PASID	(1 << 0)
+	__u32	pasid;
+};
+
+#define VFIO_DEVICE_DETACH_IOMMUFD_PT		_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/*
+ * Provide support for setting a PCI VF Token, which is used as a shared
+ * secret between PF and VF drivers.  This feature may only be set on a
+ * PCI SR-IOV PF when SR-IOV is enabled on the PF and there are no existing
+ * open VFs.  Data provided when setting this feature is a 16-byte array
+ * (__u8 b[16]), representing a UUID.
+ */
+#define VFIO_DEVICE_FEATURE_PCI_VF_TOKEN	(0)
+
+/*
+ * Indicates the device can support the migration API through
+ * VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE. If this GET succeeds, the RUNNING and
+ * ERROR states are always supported. Support for additional states is
+ * indicated via the flags field; at least VFIO_MIGRATION_STOP_COPY must be
+ * set.
+ *
+ * VFIO_MIGRATION_STOP_COPY means that STOP, STOP_COPY and
+ * RESUMING are supported.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P means that RUNNING_P2P
+ * is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_PRE_COPY means that
+ * PRE_COPY is supported in addition to the STOP_COPY states.
+ *
+ * VFIO_MIGRATION_STOP_COPY | VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY
+ * means that RUNNING_P2P, PRE_COPY and PRE_COPY_P2P are supported
+ * in addition to the STOP_COPY states.
+ *
+ * Other combinations of flags have behavior to be defined in the future.
+ */
+struct vfio_device_feature_migration {
+	__aligned_u64 flags;
+#define VFIO_MIGRATION_STOP_COPY	(1 << 0)
+#define VFIO_MIGRATION_P2P		(1 << 1)
+#define VFIO_MIGRATION_PRE_COPY		(1 << 2)
+};
+#define VFIO_DEVICE_FEATURE_MIGRATION 1
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, execute a migration state change on the VFIO
+ * device. The new state is supplied in device_state, see enum
+ * vfio_device_mig_state for details
+ *
+ * The kernel migration driver must fully transition the device to the new state
+ * value before the operation returns to the user.
+ *
+ * The kernel migration driver must not generate asynchronous device state
+ * transitions outside of manipulation by the user or the VFIO_DEVICE_RESET
+ * ioctl as described above.
+ *
+ * If this function fails then current device_state may be the original
+ * operating state or some other state along the combination transition path.
+ * The user can then decide if it should execute a VFIO_DEVICE_RESET, attempt
+ * to return to the original state, or attempt to return to some other state
+ * such as RUNNING or STOP.
+ *
+ * If the new_state starts a new data transfer session then the FD associated
+ * with that session is returned in data_fd. The user is responsible to close
+ * this FD when it is finished. The user must consider the migration data stream
+ * carried over the FD to be opaque and must preserve the byte order of the
+ * stream. The user is not required to preserve buffer segmentation when writing
+ * the data stream during the RESUMING operation.
+ *
+ * Upon VFIO_DEVICE_FEATURE_GET, get the current migration state of the VFIO
+ * device, data_fd will be -1.
+ */
+struct vfio_device_feature_mig_state {
+	__u32 device_state; /* From enum vfio_device_mig_state */
+	__s32 data_fd;
+};
+#define VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE 2
+
+/*
+ * The device migration Finite State Machine is described by the enum
+ * vfio_device_mig_state. Some of the FSM arcs will create a migration data
+ * transfer session by returning a FD, in this case the migration data will
+ * flow over the FD using read() and write() as discussed below.
+ *
+ * There are 5 states to support VFIO_MIGRATION_STOP_COPY:
+ *  RUNNING - The device is running normally
+ *  STOP - The device does not change the internal or external state
+ *  STOP_COPY - The device internal state can be read out
+ *  RESUMING - The device is stopped and is loading a new internal state
+ *  ERROR - The device has failed and must be reset
+ *
+ * And optional states to support VFIO_MIGRATION_P2P:
+ *  RUNNING_P2P - RUNNING, except the device cannot do peer to peer DMA
+ * And VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY - The device is running normally but tracking internal state
+ *             changes
+ * And VFIO_MIGRATION_P2P | VFIO_MIGRATION_PRE_COPY:
+ *  PRE_COPY_P2P - PRE_COPY, except the device cannot do peer to peer DMA
+ *
+ * The FSM takes actions on the arcs between FSM states. The driver implements
+ * the following behavior for the FSM arcs:
+ *
+ * RUNNING_P2P -> STOP
+ * STOP_COPY -> STOP
+ *   While in STOP the device must stop the operation of the device. The device
+ *   must not generate interrupts, DMA, or any other change to external state.
+ *   It must not change its internal state. When stopped the device and kernel
+ *   migration driver must accept and respond to interaction to support external
+ *   subsystems in the STOP state, for example PCI MSI-X and PCI config space.
+ *   Failure by the user to restrict device access while in STOP must not result
+ *   in error conditions outside the user context (ex. host system faults).
+ *
+ *   The STOP_COPY arc will terminate a data transfer session.
+ *
+ * RESUMING -> STOP
+ *   Leaving RESUMING terminates a data transfer session and indicates the
+ *   device should complete processing of the data delivered by write(). The
+ *   kernel migration driver should complete the incorporation of data written
+ *   to the data transfer FD into the device internal state and perform
+ *   final validity and consistency checking of the new device state. If the
+ *   user provided data is found to be incomplete, inconsistent, or otherwise
+ *   invalid, the migration driver must fail the SET_STATE ioctl and
+ *   optionally go to the ERROR state as described below.
+ *
+ *   While in STOP the device has the same behavior as other STOP states
+ *   described above.
+ *
+ *   To abort a RESUMING session the device must be reset.
+ *
+ * PRE_COPY -> RUNNING
+ * RUNNING_P2P -> RUNNING
+ *   While in RUNNING the device is fully operational, the device may generate
+ *   interrupts, DMA, respond to MMIO, all vfio device regions are functional,
+ *   and the device may advance its internal state.
+ *
+ *   The PRE_COPY arc will terminate a data transfer session.
+ *
+ * PRE_COPY_P2P -> RUNNING_P2P
+ * RUNNING -> RUNNING_P2P
+ * STOP -> RUNNING_P2P
+ *   While in RUNNING_P2P the device is partially running in the P2P quiescent
+ *   state defined below.
+ *
+ *   The PRE_COPY_P2P arc will terminate a data transfer session.
+ *
+ * RUNNING -> PRE_COPY
+ * RUNNING_P2P -> PRE_COPY_P2P
+ * STOP -> STOP_COPY
+ *   PRE_COPY, PRE_COPY_P2P and STOP_COPY form the "saving group" of states
+ *   which share a data transfer session. Moving between these states alters
+ *   what is streamed in session, but does not terminate or otherwise affect
+ *   the associated fd.
+ *
+ *   These arcs begin the process of saving the device state and will return a
+ *   new data_fd. The migration driver may perform actions such as enabling
+ *   dirty logging of device state when entering PRE_COPY or PER_COPY_P2P.
+ *
+ *   Each arc does not change the device operation, the device remains
+ *   RUNNING, P2P quiesced or in STOP. The STOP_COPY state is described below
+ *   in PRE_COPY_P2P -> STOP_COPY.
+ *
+ * PRE_COPY -> PRE_COPY_P2P
+ *   Entering PRE_COPY_P2P continues all the behaviors of PRE_COPY above.
+ *   However, while in the PRE_COPY_P2P state, the device is partially running
+ *   in the P2P quiescent state defined below, like RUNNING_P2P.
+ *
+ * PRE_COPY_P2P -> PRE_COPY
+ *   This arc allows returning the device to a full RUNNING behavior while
+ *   continuing all the behaviors of PRE_COPY.
+ *
+ * PRE_COPY_P2P -> STOP_COPY
+ *   While in the STOP_COPY state the device has the same behavior as STOP
+ *   with the addition that the data transfers session continues to stream the
+ *   migration state. End of stream on the FD indicates the entire device
+ *   state has been transferred.
+ *
+ *   The user should take steps to restrict access to vfio device regions while
+ *   the device is in STOP_COPY or risk corruption of the device migration data
+ *   stream.
+ *
+ * STOP -> RESUMING
+ *   Entering the RESUMING state starts a process of restoring the device state
+ *   and will return a new data_fd. The data stream fed into the data_fd should
+ *   be taken from the data transfer output of a single FD during saving from
+ *   a compatible device. The migration driver may alter/reset the internal
+ *   device state for this arc if required to prepare the device to receive the
+ *   migration data.
+ *
+ * STOP_COPY -> PRE_COPY
+ * STOP_COPY -> PRE_COPY_P2P
+ *   These arcs are not permitted and return error if requested. Future
+ *   revisions of this API may define behaviors for these arcs, in this case
+ *   support will be discoverable by a new flag in
+ *   VFIO_DEVICE_FEATURE_MIGRATION.
+ *
+ * any -> ERROR
+ *   ERROR cannot be specified as a device state, however any transition request
+ *   can be failed with an errno return and may then move the device_state into
+ *   ERROR. In this case the device was unable to execute the requested arc and
+ *   was also unable to restore the device to any valid device_state.
+ *   To recover from ERROR VFIO_DEVICE_RESET must be used to return the
+ *   device_state back to RUNNING.
+ *
+ * The optional peer to peer (P2P) quiescent state is intended to be a quiescent
+ * state for the device for the purposes of managing multiple devices within a
+ * user context where peer-to-peer DMA between devices may be active. The
+ * RUNNING_P2P and PRE_COPY_P2P states must prevent the device from initiating
+ * any new P2P DMA transactions. If the device can identify P2P transactions
+ * then it can stop only P2P DMA, otherwise it must stop all DMA. The migration
+ * driver must complete any such outstanding operations prior to completing the
+ * FSM arc into a P2P state. For the purpose of specification the states
+ * behave as though the device was fully running if not supported. Like while in
+ * STOP or STOP_COPY the user must not touch the device, otherwise the state
+ * can be exited.
+ *
+ * The remaining possible transitions are interpreted as combinations of the
+ * above FSM arcs. As there are multiple paths through the FSM arcs the path
+ * should be selected based on the following rules:
+ *   - Select the shortest path.
+ *   - The path cannot have saving group states as interior arcs, only
+ *     starting/end states.
+ * Refer to vfio_mig_get_next_state() for the result of the algorithm.
+ *
+ * The automatic transit through the FSM arcs that make up the combination
+ * transition is invisible to the user. When working with combination arcs the
+ * user may see any step along the path in the device_state if SET_STATE
+ * fails. When handling these types of errors users should anticipate future
+ * revisions of this protocol using new states and those states becoming
+ * visible in this case.
+ *
+ * The optional states cannot be used with SET_STATE if the device does not
+ * support them. The user can discover if these states are supported by using
+ * VFIO_DEVICE_FEATURE_MIGRATION. By using combination transitions the user can
+ * avoid knowing about these optional states if the kernel driver supports them.
+ *
+ * Arcs touching PRE_COPY and PRE_COPY_P2P are removed if support for PRE_COPY
+ * is not present.
+ */
+enum vfio_device_mig_state {
+	VFIO_DEVICE_STATE_ERROR = 0,
+	VFIO_DEVICE_STATE_STOP = 1,
+	VFIO_DEVICE_STATE_RUNNING = 2,
+	VFIO_DEVICE_STATE_STOP_COPY = 3,
+	VFIO_DEVICE_STATE_RESUMING = 4,
+	VFIO_DEVICE_STATE_RUNNING_P2P = 5,
+	VFIO_DEVICE_STATE_PRE_COPY = 6,
+	VFIO_DEVICE_STATE_PRE_COPY_P2P = 7,
+	VFIO_DEVICE_STATE_NR,
+};
+
+/**
+ * VFIO_MIG_GET_PRECOPY_INFO - _IO(VFIO_TYPE, VFIO_BASE + 21)
+ *
+ * This ioctl is used on the migration data FD in the precopy phase of the
+ * migration data transfer. It returns an estimate of the current data sizes
+ * remaining to be transferred. It allows the user to judge when it is
+ * appropriate to leave PRE_COPY for STOP_COPY.
+ *
+ * This ioctl is valid only in PRE_COPY states and kernel driver should
+ * return -EINVAL from any other migration state.
+ *
+ * The vfio_precopy_info data structure returned by this ioctl provides
+ * estimates of data available from the device during the PRE_COPY states.
+ * This estimate is split into two categories, initial_bytes and
+ * dirty_bytes.
+ *
+ * The initial_bytes field indicates the amount of initial precopy
+ * data available from the device. This field should have a non-zero initial
+ * value and decrease as migration data is read from the device.
+ * It is recommended to leave PRE_COPY for STOP_COPY only after this field
+ * reaches zero. Leaving PRE_COPY earlier might make things slower.
+ *
+ * The dirty_bytes field tracks device state changes relative to data
+ * previously retrieved.  This field starts at zero and may increase as
+ * the internal device state is modified or decrease as that modified
+ * state is read from the device.
+ *
+ * Userspace may use the combination of these fields to estimate the
+ * potential data size available during the PRE_COPY phases, as well as
+ * trends relative to the rate the device is dirtying its internal
+ * state, but these fields are not required to have any bearing relative
+ * to the data size available during the STOP_COPY phase.
+ *
+ * Drivers have a lot of flexibility in when and what they transfer during the
+ * PRE_COPY phase, and how they report this from VFIO_MIG_GET_PRECOPY_INFO.
+ *
+ * During pre-copy the migration data FD has a temporary "end of stream" that is
+ * reached when both initial_bytes and dirty_byte are zero. For instance, this
+ * may indicate that the device is idle and not currently dirtying any internal
+ * state. When read() is done on this temporary end of stream the kernel driver
+ * should return ENOMSG from read(). Userspace can wait for more data (which may
+ * never come) by using poll.
+ *
+ * Once in STOP_COPY the migration data FD has a permanent end of stream
+ * signaled in the usual way by read() always returning 0 and poll always
+ * returning readable. ENOMSG may not be returned in STOP_COPY.
+ * Support for this ioctl is mandatory if a driver claims to support
+ * VFIO_MIGRATION_PRE_COPY.
+ *
+ * Return: 0 on success, -1 and errno set on failure.
+ */
+struct vfio_precopy_info {
+	__u32 argsz;
+	__u32 flags;
+	__aligned_u64 initial_bytes;
+	__aligned_u64 dirty_bytes;
+};
+
+#define VFIO_MIG_GET_PRECOPY_INFO _IO(VFIO_TYPE, VFIO_BASE + 21)
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, allow the device to be moved into a low power
+ * state with the platform-based power management.  Device use of lower power
+ * states depends on factors managed by the runtime power management core,
+ * including system level support and coordinating support among dependent
+ * devices.  Enabling device low power entry does not guarantee lower power
+ * usage by the device, nor is a mechanism provided through this feature to
+ * know the current power state of the device.  If any device access happens
+ * (either from the host or through the vfio uAPI) when the device is in the
+ * low power state, then the host will move the device out of the low power
+ * state as necessary prior to the access.  Once the access is completed, the
+ * device may re-enter the low power state.  For single shot low power support
+ * with wake-up notification, see
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP below.  Access to mmap'd
+ * device regions is disabled on LOW_POWER_ENTRY and may only be resumed after
+ * calling LOW_POWER_EXIT.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY 3
+
+/*
+ * This device feature has the same behavior as
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY with the exception that the user
+ * provides an eventfd for wake-up notification.  When the device moves out of
+ * the low power state for the wake-up, the host will not allow the device to
+ * re-enter a low power state without a subsequent user call to one of the low
+ * power entry device feature IOCTLs.  Access to mmap'd device regions is
+ * disabled on LOW_POWER_ENTRY_WITH_WAKEUP and may only be resumed after the
+ * low power exit.  The low power exit can happen either through LOW_POWER_EXIT
+ * or through any other access (where the wake-up notification has been
+ * generated).  The access to mmap'd device regions will not trigger low power
+ * exit.
+ *
+ * The notification through the provided eventfd will be generated only when
+ * the device has entered and is resumed from a low power state after
+ * calling this device feature IOCTL.  A device that has not entered low power
+ * state, as managed through the runtime power management core, will not
+ * generate a notification through the provided eventfd on access.  Calling the
+ * LOW_POWER_EXIT feature is optional in the case where notification has been
+ * signaled on the provided eventfd that a resume from low power has occurred.
+ */
+struct vfio_device_low_power_entry_with_wakeup {
+	__s32 wakeup_eventfd;
+	__u32 reserved;
+};
+
+#define VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP 4
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET, disallow use of device low power states as
+ * previously enabled via VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY or
+ * VFIO_DEVICE_FEATURE_LOW_POWER_ENTRY_WITH_WAKEUP device features.
+ * This device feature IOCTL may itself generate a wakeup eventfd notification
+ * in the latter case if the device had previously entered a low power state.
+ */
+#define VFIO_DEVICE_FEATURE_LOW_POWER_EXIT 5
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET start/stop device DMA logging.
+ * VFIO_DEVICE_FEATURE_PROBE can be used to detect if the device supports
+ * DMA logging.
+ *
+ * DMA logging allows a device to internally record what DMAs the device is
+ * initiating and report them back to userspace. It is part of the VFIO
+ * migration infrastructure that allows implementing dirty page tracking
+ * during the pre copy phase of live migration. Only DMA WRITEs are logged,
+ * and this API is not connected to VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE.
+ *
+ * When DMA logging is started a range of IOVAs to monitor is provided and the
+ * device can optimize its logging to cover only the IOVA range given. Each
+ * DMA that the device initiates inside the range will be logged by the device
+ * for later retrieval.
+ *
+ * page_size is an input that hints what tracking granularity the device
+ * should try to achieve. If the device cannot do the hinted page size then
+ * it's the driver choice which page size to pick based on its support.
+ * On output the device will return the page size it selected.
+ *
+ * ranges is a pointer to an array of
+ * struct vfio_device_feature_dma_logging_range.
+ *
+ * The core kernel code guarantees to support by minimum num_ranges that fit
+ * into a single kernel page. User space can try higher values but should give
+ * up if the above can't be achieved as of some driver limitations.
+ *
+ * A single call to start device DMA logging can be issued and a matching stop
+ * should follow at the end. Another start is not allowed in the meantime.
+ */
+struct vfio_device_feature_dma_logging_control {
+	__aligned_u64 page_size;
+	__u32 num_ranges;
+	__u32 __reserved;
+	__aligned_u64 ranges;
+};
+
+struct vfio_device_feature_dma_logging_range {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_START 6
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_SET stop device DMA logging that was started
+ * by VFIO_DEVICE_FEATURE_DMA_LOGGING_START
+ */
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_STOP 7
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back and clear the device DMA log
+ *
+ * Query the device's DMA log for written pages within the given IOVA range.
+ * During querying the log is cleared for the IOVA range.
+ *
+ * bitmap is a pointer to an array of u64s that will hold the output bitmap
+ * with 1 bit reporting a page_size unit of IOVA. The mapping of IOVA to bits
+ * is given by:
+ *  bitmap[(addr - iova)/page_size] & (1ULL << (addr % 64))
+ *
+ * The input page_size can be any power of two value and does not have to
+ * match the value given to VFIO_DEVICE_FEATURE_DMA_LOGGING_START. The driver
+ * will format its internal logging to match the reporting page size, possibly
+ * by replicating bits if the internal page size is lower than requested.
+ *
+ * The LOGGING_REPORT will only set bits in the bitmap and never clear or
+ * perform any initialization of the user provided bitmap.
+ *
+ * If any error is returned userspace should assume that the dirty log is
+ * corrupted. Error recovery is to consider all memory dirty and try to
+ * restart the dirty tracking, or to abort/restart the whole migration.
+ *
+ * If DMA logging is not enabled, an error will be returned.
+ *
+ */
+struct vfio_device_feature_dma_logging_report {
+	__aligned_u64 iova;
+	__aligned_u64 length;
+	__aligned_u64 page_size;
+	__aligned_u64 bitmap;
+};
+
+#define VFIO_DEVICE_FEATURE_DMA_LOGGING_REPORT 8
+
+/*
+ * Upon VFIO_DEVICE_FEATURE_GET read back the estimated data length that will
+ * be required to complete stop copy.
+ *
+ * Note: Can be called on each device state.
+ */
+
+struct vfio_device_feature_mig_data_size {
+	__aligned_u64 stop_copy_length;
+};
+
+#define VFIO_DEVICE_FEATURE_MIG_DATA_SIZE 9
+
+/**
+ * Upon VFIO_DEVICE_FEATURE_SET, set or clear the BUS mastering for the device
+ * based on the operation specified in op flag.
+ *
+ * The functionality is incorporated for devices that needs bus master control,
+ * but the in-band device interface lacks the support. Consequently, it is not
+ * applicable to PCI devices, as bus master control for PCI devices is managed
+ * in-band through the configuration space. At present, this feature is supported
+ * only for CDX devices.
+ * When the device's BUS MASTER setting is configured as CLEAR, it will result in
+ * blocking all incoming DMA requests from the device. On the other hand, configuring
+ * the device's BUS MASTER setting as SET (enable) will grant the device the
+ * capability to perform DMA to the host memory.
+ */
+struct vfio_device_feature_bus_master {
+	__u32 op;
+#define		VFIO_DEVICE_FEATURE_CLEAR_MASTER	0	/* Clear Bus Master */
+#define		VFIO_DEVICE_FEATURE_SET_MASTER		1	/* Set Bus Master */
+};
+#define VFIO_DEVICE_FEATURE_BUS_MASTER 10
+
+/* -------- API for Type1 VFIO IOMMU -------- */
+
+/**
+ * VFIO_IOMMU_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 12, struct vfio_iommu_info)
+ *
+ * Retrieve information about the IOMMU object. Fills in provided
+ * struct vfio_iommu_info. Caller sets argsz.
+ *
+ * XXX Should we do these by CHECK_EXTENSION too?
+ */
+struct vfio_iommu_type1_info {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
+#define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
+	__aligned_u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   pad;
+};
+
+/*
+ * The IOVA capability allows to report the valid IOVA range(s)
+ * excluding any non-relaxable reserved regions exposed by
+ * devices attached to the container. Any DMA map attempt
+ * outside the valid iova range will return error.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE  1
+
+struct vfio_iova_range {
+	__u64	start;
+	__u64	end;
+};
+
+struct vfio_iommu_type1_info_cap_iova_range {
+	struct	vfio_info_cap_header header;
+	__u32	nr_iovas;
+	__u32	reserved;
+	struct	vfio_iova_range iova_ranges[];
+};
+
+/*
+ * The migration capability allows to report supported features for migration.
+ *
+ * The structures below define version 1 of this capability.
+ *
+ * The existence of this capability indicates that IOMMU kernel driver supports
+ * dirty page logging.
+ *
+ * pgsize_bitmap: Kernel driver returns bitmap of supported page sizes for dirty
+ * page logging.
+ * max_dirty_bitmap_size: Kernel driver returns maximum supported dirty bitmap
+ * size in bytes that can be used by user applications when getting the dirty
+ * bitmap.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION  2
+
+struct vfio_iommu_type1_info_cap_migration {
+	struct	vfio_info_cap_header header;
+	__u32	flags;
+	__u64	pgsize_bitmap;
+	__u64	max_dirty_bitmap_size;		/* in bytes */
+};
+
+/*
+ * The DMA available capability allows to report the current number of
+ * simultaneously outstanding DMA mappings that are allowed.
+ *
+ * The structure below defines version 1 of this capability.
+ *
+ * avail: specifies the current number of outstanding DMA mappings allowed.
+ */
+#define VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL 3
+
+struct vfio_iommu_type1_info_dma_avail {
+	struct	vfio_info_cap_header header;
+	__u32	avail;
+};
+
+#define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/**
+ * VFIO_IOMMU_MAP_DMA - _IOW(VFIO_TYPE, VFIO_BASE + 13, struct vfio_dma_map)
+ *
+ * Map process virtual addresses to IO virtual addresses using the
+ * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+ * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova. The vaddr
+ * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR.  To
+ * maintain memory consistency within the user application, the updated vaddr
+ * must address the same memory object as originally mapped.  Failure to do so
+ * will result in user memory corruption and/or device misbehavior.  iova and
+ * size must match those in the original MAP_DMA call.  Protection is not
+ * changed, and the READ & WRITE flags must be 0.
+ */
+struct vfio_iommu_type1_dma_map {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
+#define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
+#define VFIO_DMA_MAP_FLAG_VADDR (1 << 2)
+	__u64	vaddr;				/* Process virtual address */
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+
+#define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)
+
+struct vfio_bitmap {
+	__u64        pgsize;	/* page size for bitmap in bytes */
+	__u64        size;	/* in bytes */
+	__u64 *data;	/* one bit per page */
+};
+
+/**
+ * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
+ *							struct vfio_dma_unmap)
+ *
+ * Unmap IO virtual addresses using the provided struct vfio_dma_unmap.
+ * Caller sets argsz.  The actual unmapped size is returned in the size
+ * field.  No guarantee is made to the user that arbitrary unmaps of iova
+ * or size different from those used in the original mapping call will
+ * succeed.
+ *
+ * VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP should be set to get the dirty bitmap
+ * before unmapping IO virtual addresses. When this flag is set, the user must
+ * provide a struct vfio_bitmap in data[]. User must provide zero-allocated
+ * memory via vfio_bitmap.data and its size in the vfio_bitmap.size field.
+ * A bit in the bitmap represents one page, of user provided page size in
+ * vfio_bitmap.pgsize field, consecutively starting from iova offset. Bit set
+ * indicates that the page at that offset from iova is dirty. A Bitmap of the
+ * pages in the range of unmapped size is returned in the user-provided
+ * vfio_bitmap.data.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_ALL, unmap all addresses.  iova and size
+ * must be 0.  This cannot be combined with the get-dirty-bitmap flag.
+ *
+ * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host
+ * virtual addresses in the iova range.  DMA to already-mapped pages continues.
+ * Groups may not be added to the container while any addresses are invalid.
+ * This cannot be combined with the get-dirty-bitmap flag.
+ */
+struct vfio_iommu_type1_dma_unmap {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
+#define VFIO_DMA_UNMAP_FLAG_ALL		     (1 << 1)
+#define VFIO_DMA_UNMAP_FLAG_VADDR	     (1 << 2)
+	__u64	iova;				/* IO virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+	__u8    data[];
+};
+
+#define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)
+
+/*
+ * IOCTLs to enable/disable IOMMU container usage.
+ * No parameters are supported.
+ */
+#define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
+#define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/**
+ * VFIO_IOMMU_DIRTY_PAGES - _IOWR(VFIO_TYPE, VFIO_BASE + 17,
+ *                                     struct vfio_iommu_type1_dirty_bitmap)
+ * IOCTL is used for dirty pages logging.
+ * Caller should set flag depending on which operation to perform, details as
+ * below:
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_START flag set, instructs
+ * the IOMMU driver to log pages that are dirtied or potentially dirtied by
+ * the device; designed to be used when a migration is in progress. Dirty pages
+ * are logged until logging is disabled by user application by calling the IOCTL
+ * with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP flag set, instructs
+ * the IOMMU driver to stop logging dirtied pages.
+ *
+ * Calling the IOCTL with VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP flag set
+ * returns the dirty pages bitmap for IOMMU container for a given IOVA range.
+ * The user must specify the IOVA range and the pgsize through the structure
+ * vfio_iommu_type1_dirty_bitmap_get in the data[] portion. This interface
+ * supports getting a bitmap of the smallest supported pgsize only and can be
+ * modified in future to get a bitmap of any specified supported pgsize. The
+ * user must provide a zeroed memory area for the bitmap memory and specify its
+ * size in bitmap.size. One bit is used to represent one page consecutively
+ * starting from iova offset. The user should provide page size in bitmap.pgsize
+ * field. A bit set in the bitmap indicates that the page at that offset from
+ * iova is dirty. The caller must set argsz to a value including the size of
+ * structure vfio_iommu_type1_dirty_bitmap_get, but excluding the size of the
+ * actual bitmap. If dirty pages logging is not enabled, an error will be
+ * returned.
+ *
+ * Only one of the flags _START, _STOP and _GET may be specified at a time.
+ *
+ */
+struct vfio_iommu_type1_dirty_bitmap {
+	__u32        argsz;
+	__u32        flags;
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_START	(1 << 0)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP	(1 << 1)
+#define VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP	(1 << 2)
+	__u8         data[];
+};
+
+struct vfio_iommu_type1_dirty_bitmap_get {
+	__u64              iova;	/* IO virtual address */
+	__u64              size;	/* Size of iova range */
+	struct vfio_bitmap bitmap;
+};
+
+#define VFIO_IOMMU_DIRTY_PAGES             _IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
+
+/*
+ * The SPAPR TCE DDW info struct provides the information about
+ * the details of Dynamic DMA window capability.
+ *
+ * @pgsizes contains a page size bitmask, 4K/64K/16M are supported.
+ * @max_dynamic_windows_supported tells the maximum number of windows
+ * which the platform can create.
+ * @levels tells the maximum number of levels in multi-level IOMMU tables;
+ * this allows splitting a table into smaller chunks which reduces
+ * the amount of physically contiguous memory required for the table.
+ */
+struct vfio_iommu_spapr_tce_ddw_info {
+	__u64 pgsizes;			/* Bitmap of supported page sizes */
+	__u32 max_dynamic_windows_supported;
+	__u32 levels;
+};
+
+/*
+ * The SPAPR TCE info struct provides the information about the PCI bus
+ * address ranges available for DMA, these values are programmed into
+ * the hardware so the guest has to know that information.
+ *
+ * The DMA 32 bit window start is an absolute PCI bus address.
+ * The IOVA address passed via map/unmap ioctls are absolute PCI bus
+ * addresses too so the window works as a filter rather than an offset
+ * for IOVA addresses.
+ *
+ * Flags supported:
+ * - VFIO_IOMMU_SPAPR_INFO_DDW: informs the userspace that dynamic DMA windows
+ *   (DDW) support is present. @ddw is only supported when DDW is present.
+ */
+struct vfio_iommu_spapr_tce_info {
+	__u32 argsz;
+	__u32 flags;
+#define VFIO_IOMMU_SPAPR_INFO_DDW	(1 << 0)	/* DDW supported */
+	__u32 dma32_window_start;	/* 32 bit window start (bytes) */
+	__u32 dma32_window_size;	/* 32 bit window size (bytes) */
+	struct vfio_iommu_spapr_tce_ddw_info ddw;
+};
+
+#define VFIO_IOMMU_SPAPR_TCE_GET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/*
+ * EEH PE operation struct provides ways to:
+ * - enable/disable EEH functionality;
+ * - unfreeze IO/DMA for frozen PE;
+ * - read PE state;
+ * - reset PE;
+ * - configure PE;
+ * - inject EEH error.
+ */
+struct vfio_eeh_pe_err {
+	__u32 type;
+	__u32 func;
+	__u64 addr;
+	__u64 mask;
+};
+
+struct vfio_eeh_pe_op {
+	__u32 argsz;
+	__u32 flags;
+	__u32 op;
+	union {
+		struct vfio_eeh_pe_err err;
+	};
+};
+
+#define VFIO_EEH_PE_DISABLE		0	/* Disable EEH functionality */
+#define VFIO_EEH_PE_ENABLE		1	/* Enable EEH functionality  */
+#define VFIO_EEH_PE_UNFREEZE_IO		2	/* Enable IO for frozen PE   */
+#define VFIO_EEH_PE_UNFREEZE_DMA	3	/* Enable DMA for frozen PE  */
+#define VFIO_EEH_PE_GET_STATE		4	/* PE state retrieval        */
+#define  VFIO_EEH_PE_STATE_NORMAL	0	/* PE in functional state    */
+#define  VFIO_EEH_PE_STATE_RESET	1	/* PE reset in progress      */
+#define  VFIO_EEH_PE_STATE_STOPPED	2	/* Stopped DMA and IO        */
+#define  VFIO_EEH_PE_STATE_STOPPED_DMA	4	/* Stopped DMA only          */
+#define  VFIO_EEH_PE_STATE_UNAVAIL	5	/* State unavailable         */
+#define VFIO_EEH_PE_RESET_DEACTIVATE	5	/* Deassert PE reset         */
+#define VFIO_EEH_PE_RESET_HOT		6	/* Assert hot reset          */
+#define VFIO_EEH_PE_RESET_FUNDAMENTAL	7	/* Assert fundamental reset  */
+#define VFIO_EEH_PE_CONFIGURE		8	/* PE configuration          */
+#define VFIO_EEH_PE_INJECT_ERR		9	/* Inject EEH error          */
+
+#define VFIO_EEH_PE_OP			_IO(VFIO_TYPE, VFIO_BASE + 21)
+
+/**
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 17, struct vfio_iommu_spapr_register_memory)
+ *
+ * Registers user space memory where DMA is allowed. It pins
+ * user pages and does the locked memory accounting so
+ * subsequent VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA calls
+ * get faster.
+ */
+struct vfio_iommu_spapr_register_memory {
+	__u32	argsz;
+	__u32	flags;
+	__u64	vaddr;				/* Process virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+#define VFIO_IOMMU_SPAPR_REGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/**
+ * VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 18, struct vfio_iommu_spapr_register_memory)
+ *
+ * Unregisters user space memory registered with
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY.
+ * Uses vfio_iommu_spapr_register_memory for parameters.
+ */
+#define VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_CREATE - _IOWR(VFIO_TYPE, VFIO_BASE + 19, struct vfio_iommu_spapr_tce_create)
+ *
+ * Creates an additional TCE table and programs it (sets a new DMA window)
+ * to every IOMMU group in the container. It receives page shift, window
+ * size and number of levels in the TCE table being created.
+ *
+ * It allocates and returns an offset on a PCI bus of the new DMA window.
+ */
+struct vfio_iommu_spapr_tce_create {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u32 page_shift;
+	__u32 __resv1;
+	__u64 window_size;
+	__u32 levels;
+	__u32 __resv2;
+	/* out */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_CREATE	_IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_REMOVE - _IOW(VFIO_TYPE, VFIO_BASE + 20, struct vfio_iommu_spapr_tce_remove)
+ *
+ * Unprograms a TCE table from all groups in the container and destroys it.
+ * It receives a PCI bus offset as a window id.
+ */
+struct vfio_iommu_spapr_tce_remove {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_REMOVE	_IO(VFIO_TYPE, VFIO_BASE + 20)
+
+/* ***************************************************************** */
+
+#endif /* VFIO_H */
diff --git a/kernel/linux/uapi/version b/kernel/linux/uapi/version
index 3c68968f92..966a998301 100644
--- a/kernel/linux/uapi/version
+++ b/kernel/linux/uapi/version
@@ -1 +1 @@
-v6.14
+v6.16
-- 
2.51.0


^ permalink raw reply	[relevance 1%]
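
As a rough illustration of how userspace drives the migration FSM documented in the header above, the minimal sketch below (not part of the patch) moves a device to STOP_COPY and drains the resulting data stream. It assumes the generic VFIO_DEVICE_FEATURE ioctl and struct vfio_device_feature defined earlier in this header (not shown in this hunk); the helper names are invented for the example, and error handling plus the optional PRE_COPY phase are omitted.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Request a migration state change; returns the session data_fd (or -1). */
static int vfio_set_mig_state(int device_fd, uint32_t new_state)
{
	uint64_t buf[(sizeof(struct vfio_device_feature) +
		      sizeof(struct vfio_device_feature_mig_state) + 7) / 8];
	struct vfio_device_feature *feature = (void *)buf;
	struct vfio_device_feature_mig_state *mig = (void *)feature->data;

	memset(buf, 0, sizeof(buf));
	feature->argsz = sizeof(struct vfio_device_feature) +
			 sizeof(struct vfio_device_feature_mig_state);
	feature->flags = VFIO_DEVICE_FEATURE_SET |
			 VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
	mig->device_state = new_state;

	if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feature) < 0)
		return -1;
	/* data_fd is valid only for arcs that open a data transfer session. */
	return mig->data_fd;
}

/* RUNNING -> STOP_COPY is a combination arc resolved by the kernel driver. */
static int vfio_save_device(int device_fd, int out_fd)
{
	char chunk[4096];
	ssize_t len;
	int data_fd = vfio_set_mig_state(device_fd, VFIO_DEVICE_STATE_STOP_COPY);

	if (data_fd < 0)
		return -1;
	/* Permanent end of stream (read() == 0): entire device state transferred. */
	while ((len = read(data_fd, chunk, sizeof(chunk))) > 0) {
		if (write(out_fd, chunk, (size_t)len) != len)
			break;
	}
	close(data_fd);
	return len == 0 ? 0 : -1;
}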

* [PATCH v2] build: remove deprecated kmods option
  2025-09-19  7:57  5% [PATCH] build: remove deprecated kmods option Bruce Richardson
@ 2025-09-19  8:44  5% ` Bruce Richardson
  2025-09-23 14:40  4% ` [PATCH v3] " Bruce Richardson
  1 sibling, 0 replies; 77+ results
From: Bruce Richardson @ 2025-09-19  8:44 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson

The "enable_kmods" meson option was deprecated back in 2023[1], so can
now be removed from DPDK.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>

[1] https://doc.dpdk.org/guides-23.11/rel_notes/deprecation.html

---
v2: remove missed references in DTS and in freebsd meson.build
---
 doc/guides/rel_notes/deprecation.rst | 7 -------
 dts/framework/remote_session/dpdk.py | 2 +-
 dts/framework/utils.py               | 2 +-
 kernel/freebsd/meson.build           | 4 ++--
 meson_options.txt                    | 2 --
 5 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5aaeb1052a..bdcd2775b6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,13 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
 Deprecation Notices
 -------------------
 
-* build: The ``enable_kmods`` option is deprecated and will be removed in a future release.
-  Setting/clearing the option has no impact on the build.
-  Instead, kernel modules will be always built for OS's where out-of-tree kernel modules
-  are required for DPDK operation.
-  Currently, this means that modules will only be built for FreeBSD.
-  No modules are shipped with DPDK for either Linux or Windows.
-
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
diff --git a/dts/framework/remote_session/dpdk.py b/dts/framework/remote_session/dpdk.py
index 606d6e22fe..2dc8dab642 100644
--- a/dts/framework/remote_session/dpdk.py
+++ b/dts/framework/remote_session/dpdk.py
@@ -262,7 +262,7 @@ def _build_dpdk(self) -> None:
         """
         self._session.build_dpdk(
             self._env_vars,
-            MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+            MesonArgs(default_library="static", libdir="lib"),
             self.remote_dpdk_tree_path,
             self.remote_dpdk_build_dir,
         )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 0c81ab1b95..9f7201c888 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -111,7 +111,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         Example:
             ::
 
-                meson_args = MesonArgs(enable_kmods=True).
+                meson_args = MesonArgs(check_includes=True).
         """
         self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
diff --git a/kernel/freebsd/meson.build b/kernel/freebsd/meson.build
index 1f612711be..862e19e766 100644
--- a/kernel/freebsd/meson.build
+++ b/kernel/freebsd/meson.build
@@ -29,7 +29,7 @@ foreach k:kmods
                 'KMOD_CFLAGS=' + ' '.join(kmod_cflags),
                 'CC=clang'],
             depends: built_kmods, # make each module depend on prev
-            build_by_default: get_option('enable_kmods'),
-            install: get_option('enable_kmods'),
+            build_by_default: true,
+            install: true,
             install_dir: '/boot/modules/')
 endforeach
diff --git a/meson_options.txt b/meson_options.txt
index e49b2fc089..e28d24054c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -24,8 +24,6 @@ option('enable_drivers', type: 'string', value: '', description:
        'Comma-separated list of drivers to build. If unspecified, build all drivers.')
 option('enable_driver_sdk', type: 'boolean', value: false, description:
        'Install headers to build drivers.')
-option('enable_kmods', type: 'boolean', value: true, description:
-       '[Deprecated - will be removed in future release] build kernel modules')
 option('enable_libs', type: 'string', value: '', description:
        'Comma-separated list of optional libraries to explicitly enable. [NOTE: mandatory libs are always enabled]')
 option('examples', type: 'string', value: '', description:
-- 
2.48.1


^ permalink raw reply	[relevance 5%]

* Re: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
  2025-09-19  5:08  0%       ` Akhil Goyal
@ 2025-09-19 16:00  0%         ` Sameer Vaze
  2025-09-30 15:27  0%           ` Sameer Vaze
  0 siblings, 1 reply; 77+ results
From: Sameer Vaze @ 2025-09-19 16:00 UTC (permalink / raw)
  To: Akhil Goyal, Sunila Sahu, Fan Zhang, Ashish Gupta; +Cc: dev

[-- Attachment #1: Type: text/plain, Size: 3555 bytes --]

I don't see anything specific about creating and pushing a series in 9. Contributing Code to DPDK — Data Plane Development Kit 25.07.0 documentation <https://doc.dpdk.org/guides/contributing/patches.html>.

The only mention of a series above seems to be the depends-on tag.

Thanks
Sameer Vaze
________________________________
From: Akhil Goyal <gakhil@marvell.com>
Sent: Thursday, September 18, 2025 11:08 PM
To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta <ashishg@marvell.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files

WARNING: This email originated from outside of Qualcomm. Please be wary of any links or attachments, and do not enable macros.

Hi Sameer,
> Hey Akhil,
>
> I attempted to split the changes into multiple patches and added a depends-on
> the second patch. But automation does not seem to be picking up the patch as a
> dependency. Is there a process step I messed up:

When you have dependent patches, you should send them as a series.
Automation runs on the last patch in the series only.
Currently it is not handling depends-on tag. It is for reviewers for now.


>
>
> Patch 1: compress/zlib: support for dictionary and PDCP checksum - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918204411.1701035-1-svaze@qti.qualcomm.com/>
> Patch 2 with depends-n: app/compress-perf: support dictionary files - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918210806.1709958-1-svaze@qti.qualcomm.com/>
>
> Thanks
> Sameer Vaze
> ________________________________
>
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Tuesday, June 17, 2025 3:34 PM
> To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu
> <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta
> <ashishg@marvell.com>
> Cc: dev@dpdk.org <dev@dpdk.org>
> Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
>
> WARNING: This email originated from outside of Qualcomm. Please be wary of
> any links or attachments, and do not enable macros.
>
> > compress/zlib: support PDCP checksum
> >
> > compress/zlib: support zlib dictionary
> >
> > compressdev: add PDCP checksum
> >
> > compressdev: support zlib dictionary
> >
> > Adds support to provide predefined dictionaries to zlib. Handles setting
> > and getting of dictionaries using zlib apis. Also includes support to
> > read dictionary files
> >
> > Adds support for passing in and validationg 3GPP PDCP spec defined
> > checksums as defined under the Uplink Data Compression(UDC) feature.
> > Changes also include functions that do inflate or deflate specific
> > checksum operations.
> >
> > Introduces new members to compression api structures to allow setting
> > predefined dictionaries
> >
> > Signed-off-by: Sameer Vaze <svaze@qti.qualcomm.com>
>
> Seems like multiple patches are squashed into a single patch
>
> I see that this patch has ABI breaks.
> We need to defer this patch for next ABI break release.
> Please split the patch appropriately.
> First patch should define the library changes.
> And subsequently logically broken PMD patches
> Followed by application patches.
> Ensure each patch is compilable.
>
> Since this patch is breaking ABI/API,
> Please send a deprecation notice to be merged in this release and
> Implementation for next release.
>
> Also avoid unnecessary and irrelevant code changes.
>


[-- Attachment #2: Type: text/html, Size: 5914 bytes --]

^ permalink raw reply	[relevance 0%]

* RE: [PATCH 1/1] ring: safe partial ordering for head/tail update
  @ 2025-09-20 12:01  3%           ` Konstantin Ananyev
       [not found]                 ` <cf7e14d4ba5e9d78fddf083b6c92d75942447931.camel@arm.com>
  2025-09-23 21:57  0%             ` Ola Liljedahl
  0 siblings, 2 replies; 77+ results
From: Konstantin Ananyev @ 2025-09-20 12:01 UTC (permalink / raw)
  To: Ola Liljedahl, Wathsala Vithanage, Honnappa Nagarahalli
  Cc: dev, Dhruv Tripathi, Bruce Richardson


> >
> > To avoid information loss I combined reply to two Wathsala replies into one.
> >
> >
> > > > > The function __rte_ring_headtail_move_head() assumes that the
> > > > > barrier
> > > > (fence) between the load of the head and the load-acquire of the
> > > > > opposing tail guarantees the following: if a first thread reads
> > > > > tail
> > > > > and then writes head and a second thread reads the new value of
> > > > > head
> > > > > and then reads tail, then it should observe the same (or a later)
> > > > > value of tail.
> > > > >
> > > > > This assumption is incorrect under the C11 memory model. If the
> > > > > barrier
> > > > > (fence) is intended to establish a total ordering of ring
> > > > > operations,
> > > > > it fails to do so. Instead, the current implementation only
> > > > > enforces a
> > > > > partial ordering, which can lead to unsafe interleavings. In
> > > > > particular,
> > > > > some partial orders can cause underflows in free slot or available
> > > > > element computations, potentially resulting in data corruption.
> > > >
> > > > Hmm... sounds exactly like the problem from the patch we discussed
> > > > earlier that year:
> > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-
> konstantin.ananyev@huawei.com <mailto:20250521111432.207936-4-
> konstantin.ananyev@huawei.com>/
> > > > In two words:
> > > > "... thread can see 'latest' 'cons.head' value, with 'previous' value
> > > > for 'prod.tail' or visa-versa.
> > > > In other words: 'cons.head' value depends on 'prod.tail', so before
> > > > making latest 'cons.head'
> > > > value visible to other threads, we need to ensure that latest
> > > > 'prod.tail' is also visible."
> > > > Is that the one?
> >
> >
> > > Yes, the behavior occurs under RCpc (LDAPR) but not under RCsc (LDAR),
> > > which is why we didn’t catch it earlier. A fuller explanation, with
> > > Herd7 simulations, is in the blog post linked in the cover letter.
> > >
> > > https://community.arm.com/arm-community-blogs/b/architectures-and-
> processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-of-partial-order
> <https://community.arm.com/arm-community-blogs/b/architectures-and-
> processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-of-partial-order>
> >
> >
> > I see, so now it is reproducible with core rte_ring on real HW.
> >
> >
> > > >
> > > > > The issue manifests when a CPU first acts as a producer and later
> > > > > as a
> > > > > consumer. In this scenario, the barrier assumption may fail when
> > > > > another
> > > > > core takes the consumer role. A Herd7 litmus test in C11 can
> > > > > demonstrate
> > > > > this violation. The problem has not been widely observed so far
> > > > > because:
> > > > > (a) on strong memory models (e.g., x86-64) the assumption holds,
> > > > > and
> > > > > (b) on relaxed models with RCsc semantics the ordering is still
> > > > > strong
> > > > > enough to prevent hazards.
> > > > > The problem becomes visible only on weaker models, when load-
> > > > > acquire is
> > > > > implemented with RCpc semantics (e.g. some AArch64 CPUs which
> > > > > support
> > > > > the LDAPR and LDAPUR instructions).
> > > > >
> > > > > Three possible solutions exist:
> > > > > 1. Strengthen ordering by upgrading release/acquire semantics to
> > > > > sequential consistency. This requires using seq-cst for
> > > > > stores,
> > > > > loads, and CAS operations. However, this approach introduces a
> > > > > significant performance penalty on relaxed-memory
> > > > > architectures.
> > > > >
> > > > > 2. Establish a safe partial order by enforcing a pair-wise
> > > > > happens-before relationship between thread of same role by
> > > > > changing
> > > > > the CAS and the preceding load of the head by converting them
> > > > > to
> > > > > release and acquire respectively. This approach makes the
> > > > > original
> > > > > barrier assumption unnecessary and allows its removal.
> > > >
> > > > For the sake of clarity, can you outline what would be exact code
> > > > changes for
> > > > approach #2? Same as in that patch:
> > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-
> <https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4->
> > > konstantin.ananyev@huawei.com <mailto:konstantin.ananyev@huawei.com>/
> > > > Or something different?
> > >
> > > Sorry, I missed the later half you your comment before.
> > > Yes, you have proposed the same solution there.
> >
> >
> > Ok, thanks for confirmation.
> >
> >
> > > >
> > > >
> > > > > 3. Retain partial ordering but ensure only safe partial orders
> > > > > are
> > > > > committed. This can be done by detecting underflow conditions
> > > > > (producer < consumer) and quashing the update in such cases.
> > > > > This approach makes the original barrier assumption
> > > > > unnecessary
> > > > > and allows its removal.
> > > >
> > > > > This patch implements solution (3) for performance reasons.
> > > > >
> > > > > Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com
> <mailto:wathsala.vithanage@arm.com>>
> > > > > Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com
> <mailto:ola.liljedahl@arm.com>>
> > > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com
> <mailto:honnappa.nagarahalli@arm.com>>
> > > > > Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com
> <mailto:dhruv.tripathi@arm.com>>
> > > > > ---
> > > > > lib/ring/rte_ring_c11_pvt.h | 10 +++++++---
> > > > > 1 file changed, 7 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/lib/ring/rte_ring_c11_pvt.h
> > > > > b/lib/ring/rte_ring_c11_pvt.h
> > > > > index b9388af0da..e5ac1f6b9e 100644
> > > > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > > > @@ -83,9 +83,6 @@ __rte_ring_headtail_move_head(struct
> > > > > rte_ring_headtail
> > > > > *d,
> > > > > /* Reset n to the initial burst count */
> > > > > n = max;
> > > > >
> > > > > - /* Ensure the head is read before tail */
> > > > > - rte_atomic_thread_fence(rte_memory_order_acquire);
> > > > > -
> > > > > /* load-acquire synchronize with store-release of
> > > > > ht->tail
> > > > > * in update_tail.
> > > > > */
> > > >
> > > > But then cons.head can be read a before prod.tail (and visa-versa),
> > > > right?
> > >
> > > Right, we let it happen but eliminate any resulting states that are
> > > semantically incorrect at the end.
> >
> >
> > Two comments here:
> > 1) I think it is probably safer to do the check like that:
> > If (*entries > ring->capacity) ...
> Yes, this might be another way of handling underflow situations. We could study
> this.
> 
> I have used the check for negative without problems in my ring buffer
> implementations
> https://github.com/ARM-software/progress64/blob/master/src/p64_ringbuf.c
> but can't say that has been battle-tested.

My thought was about the (probably hypothetical) case where the difference
between the stale tail and the head is bigger than 2^31 + 1.
 
> > 2) My concern that without forcing a proper read ordering
> > (cons.head first then prod.tail) we re-introduce a window for all sorts of
> > ABA-like problems.
> Head and tail indexes are monotonically increasing so I don't see a risk for ABA-like
> problems.

I understand that, but with current CPU speeds it can take rte_ring just a few seconds to
wrap the head/tail values around. If the user is doing something really fancy - like using the
rte_ring ZC API (i.e. just moving head/tail without reading the actual objects) - that can
probably happen even faster (less than a second?).
Are we sure that a stale tail value will never persist that long?
Let's say the user is calling move_head() in a loop until it succeeds?

> Indeed, adding a monotonically increasing tag to pointers is the common way of
> avoiding ABA
> problems in lock-free designs.

Yep, using 64-bit values for head/tail counters will help to avoid these concerns.
But it will probably break HTS/RTS modes, plus it is an ABI change for sure.
 
Actually after another thought, I have one more concern here:

+               /*
+                * Ensure the entries calculation was not based on a stale
+                * and unsafe stail observation that causes underflow.
+                */
+               if ((int)*entries < 0)
+                       *entries = 0;
+
 
With that change, it might return invalid information back to the user
about the number of free/occupied entries in the ring.
Plus, rte_ring_enqueue() might now fail even when there are enough free entries
in the ring (same for dequeue).
That looks like a change in our public API behavior that might break many things.
There are quite a few places where the caller expects the enqueue/dequeue
operation to always succeed (say, there should always be enough free space in the ring).
For example, rte_mempool works like that.
I am pretty sure there are quite a few other places like that inside DPDK,
not to mention third-party code.

Considering all of the above, I am actually more in favor of combining
approaches #2 and #3 for the final patch:
establish a safe partial order (#2) and keep the check from #3 (should it become an assert()/verify()?).

Another thing to note: whatever final approach we choose -
we need to make sure that the problem is addressed across all other
rte_ring flavors/modes too (generic implementation, rts/hts mode, soring).

Konstantin 
 




^ permalink raw reply	[relevance 3%]
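
For illustration, here is a stand-alone sketch of what combining the two approaches could look like: the head is loaded (and re-loaded on CAS failure) with acquire, the successful CAS publishes the new head with release semantics (#2), and a capacity check is kept as a safety net against a stale tail observation (#3). The simplified layout and names are invented for the example; this is not the actual rte_ring code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct headtail {
	_Atomic uint32_t head;
	_Atomic uint32_t tail;
};

struct ring {
	uint32_t capacity;
	struct headtail prod;
	struct headtail cons;
};

/* Claim up to n slots for a consumer; returns the number actually claimed. */
static uint32_t
move_cons_head(struct ring *r, uint32_t n, uint32_t *old_head)
{
	const uint32_t max = n;
	uint32_t entries;
	bool ok;

	/* Acquire pairs with the release of the CAS done by other consumers (#2). */
	*old_head = atomic_load_explicit(&r->cons.head, memory_order_acquire);
	do {
		uint32_t stail;

		n = max;
		/* Acquire pairs with the producer's store-release of prod.tail. */
		stail = atomic_load_explicit(&r->prod.tail, memory_order_acquire);
		entries = stail - *old_head;

		/* #3: quash an update based on a stale/unsafe tail observation. */
		if (entries > r->capacity)
			entries = 0;
		if (entries == 0)
			return 0;
		if (n > entries)
			n = entries;

		/*
		 * acq_rel on success orders the tail observation above before
		 * the new head becomes visible; acquire on failure re-reads
		 * the head with the same ordering as the initial load (#2).
		 */
		ok = atomic_compare_exchange_strong_explicit(&r->cons.head,
				old_head, *old_head + n,
				memory_order_acq_rel, memory_order_acquire);
	} while (!ok);

	return n;
}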

* RE: [PATCH 1/1] ring: safe partial ordering for head/tail update
       [not found]                 ` <cf7e14d4ba5e9d78fddf083b6c92d75942447931.camel@arm.com>
@ 2025-09-22  7:12  0%               ` Konstantin Ananyev
  0 siblings, 0 replies; 77+ results
From: Konstantin Ananyev @ 2025-09-22  7:12 UTC (permalink / raw)
  To: Wathsala Vithanage
  Cc: dev, Dhruv Tripathi, Bruce Richardson, Konstantin Ananyev,
	Ola Liljedahl, Wathsala Vithanage, Honnappa Nagarahalli



> > > >
> > > > To avoid information loss I combined reply to two Wathsala
> > > > replies into one.
> > > >
> > > >
> > > > > > > The function __rte_ring_headtail_move_head() assumes that
> > > > > > > the
> > > > > > > barrier
> > > > > > (fence) between the load of the head and the load-acquire of
> > > > > > the
> > > > > > > opposing tail guarantees the following: if a first thread
> > > > > > > reads
> > > > > > > tail
> > > > > > > and then writes head and a second thread reads the new
> > > > > > > value of
> > > > > > > head
> > > > > > > and then reads tail, then it should observe the same (or a
> > > > > > > later)
> > > > > > > value of tail.
> > > > > > >
> > > > > > > This assumption is incorrect under the C11 memory model. If
> > > > > > > the
> > > > > > > barrier
> > > > > > > (fence) is intended to establish a total ordering of ring
> > > > > > > operations,
> > > > > > > it fails to do so. Instead, the current implementation only
> > > > > > > enforces a
> > > > > > > partial ordering, which can lead to unsafe interleavings.
> > > > > > > In
> > > > > > > particular,
> > > > > > > some partial orders can cause underflows in free slot or
> > > > > > > available
> > > > > > > element computations, potentially resulting in data
> > > > > > > corruption.
> > > > > >
> > > > > > Hmm... sounds exactly like the problem from the patch we
> > > > > > discussed
> > > > > > earlier that year:
> > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-
> 4
> > > > > > -
> > > konstantin.ananyev@huawei.com <mailto:20250521111432.207936-4-
> > > konstantin.ananyev@huawei.com>/
> > > > > > In two words:
> > > > > > "... thread can see 'latest' 'cons.head' value, with
> > > > > > 'previous' value
> > > > > > for 'prod.tail' or visa-versa.
> > > > > > In other words: 'cons.head' value depends on 'prod.tail', so
> > > > > > before
> > > > > > making latest 'cons.head'
> > > > > > value visible to other threads, we need to ensure that latest
> > > > > > 'prod.tail' is also visible."
> > > > > > Is that the one?
> > > >
> > > >
> > > > > Yes, the behavior occurs under RCpc (LDAPR) but not under RCsc
> > > > > (LDAR),
> > > > > which is why we didn’t catch it earlier. A fuller explanation,
> > > > > with
> > > > > Herd7 simulations, is in the blog post linked in the cover
> > > > > letter.
> > > > >
> > > > > https://community.arm.com/arm-community-blogs/b/architectures-and
> > > > > -
> > > processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-
> > > of-partial-order
> > > <https://community.arm.com/arm-community-blogs/b/architectures-and-
> > > processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-
> > > of-partial-order>
> > > >
> > > >
> > > > I see, so now it is reproducible with core rte_ring on real HW.
> > > >
> > > >
> > > > > >
> > > > > > > The issue manifests when a CPU first acts as a producer and
> > > > > > > later
> > > > > > > as a
> > > > > > > consumer. In this scenario, the barrier assumption may fail
> > > > > > > when
> > > > > > > another
> > > > > > > core takes the consumer role. A Herd7 litmus test in C11
> > > > > > > can
> > > > > > > demonstrate
> > > > > > > this violation. The problem has not been widely observed so
> > > > > > > far
> > > > > > > because:
> > > > > > > (a) on strong memory models (e.g., x86-64) the assumption
> > > > > > > holds,
> > > > > > > and
> > > > > > > (b) on relaxed models with RCsc semantics the ordering is
> > > > > > > still
> > > > > > > strong
> > > > > > > enough to prevent hazards.
> > > > > > > The problem becomes visible only on weaker models, when
> > > > > > > load-
> > > > > > > acquire is
> > > > > > > implemented with RCpc semantics (e.g. some AArch64 CPUs
> > > > > > > which
> > > > > > > support
> > > > > > > the LDAPR and LDAPUR instructions).
> > > > > > >
> > > > > > > Three possible solutions exist:
> > > > > > > 1. Strengthen ordering by upgrading release/acquire
> > > > > > > semantics to
> > > > > > > sequential consistency. This requires using seq-cst for
> > > > > > > stores,
> > > > > > > loads, and CAS operations. However, this approach
> > > > > > > introduces a
> > > > > > > significant performance penalty on relaxed-memory
> > > > > > > architectures.
> > > > > > >
> > > > > > > 2. Establish a safe partial order by enforcing a pair-wise
> > > > > > > happens-before relationship between thread of same role by
> > > > > > > changing
> > > > > > > the CAS and the preceding load of the head by converting
> > > > > > > them
> > > > > > > to
> > > > > > > release and acquire respectively. This approach makes the
> > > > > > > original
> > > > > > > barrier assumption unnecessary and allows its removal.
> > > > > >
> > > > > > For the sake of clarity, can you outline what would be exact
> > > > > > code
> > > > > > changes for
> > > > > > approach #2? Same as in that patch:
> > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-
> 4
> > > > > > -
> > > <
> > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-
> > > 4->
> > > > > konstantin.ananyev@huawei.com <mailto:
> > > > > konstantin.ananyev@huawei.com>/
> > > > > > Or something different?
> > > > >
> > > > > Sorry, I missed the later half you your comment before.
> > > > > Yes, you have proposed the same solution there.
> > > >
> > > >
> > > > Ok, thanks for confirmation.
> > > >
> > > >
> > > > > >
> > > > > >
> > > > > > > 3. Retain partial ordering but ensure only safe partial
> > > > > > > orders
> > > > > > > are
> > > > > > > committed. This can be done by detecting underflow
> > > > > > > conditions
> > > > > > > (producer < consumer) and quashing the update in such
> > > > > > > cases.
> > > > > > > This approach makes the original barrier assumption
> > > > > > > unnecessary
> > > > > > > and allows its removal.
> > > > > >
> > > > > > > This patch implements solution (3) for performance reasons.
> > > > > > >
> > > > > > > Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
> > > > > > > Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > > Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
> > > > > > > ---
> > > > > > > lib/ring/rte_ring_c11_pvt.h | 10 +++++++---
> > > > > > > 1 file changed, 7 insertions(+), 3 deletions(-)
> > > > > > >
> > > > > > > diff --git a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > index b9388af0da..e5ac1f6b9e 100644
> > > > > > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > @@ -83,9 +83,6 @@ __rte_ring_headtail_move_head(struct
> > > > > > > rte_ring_headtail
> > > > > > > *d,
> > > > > > > /* Reset n to the initial burst count */
> > > > > > > n = max;
> > > > > > >
> > > > > > > - /* Ensure the head is read before tail */
> > > > > > > - rte_atomic_thread_fence(rte_memory_order_acquire);
> > > > > > > -
> > > > > > > /* load-acquire synchronize with store-release of
> > > > > > > ht->tail
> > > > > > > * in update_tail.
> > > > > > > */
> > > > > >
> > > > > > But then cons.head can be read a before prod.tail (and visa-
> > > > > > versa),
> > > > > > right?
> > > > >
> > > > > Right, we let it happen but eliminate any resulting states that
> > > > > are
> > > > > semantically incorrect at the end.
> > > >
> > > >
> > > > Two comments here:
> > > > 1) I think it is probably safer to do the check like that:
> > > > If (*entries > ring->capacity) ...
> > > Yes, this might be another way of handling underflow situations. We
> > > could study
> > > this.
> > >
> > > I have used the check for negative without problems in my ring
> > > buffer
> > > implementations
> > > https://github.com/ARM-software/progress64/blob/master/src/p64_ringbuf.c
> > > but can't say that has been battle-tested.
> >
> > My thought was about the case (probably hypothetical) when the
> > difference
> > between stale tail and head will be bigger then 2^31 + 1.
> >
> > > > 2) My concern that without forcing a proper read ordering
> > > > (cons.head first then prod.tail) we re-introduce a window for all
> > > > sorts of
> > > > ABA-like problems.
> 
> The ABA-like problem you are referring to here is, I assume, the index
> wrapping around until it no longer produces a negative value.
> If so, the distance between a stale (tail) and a wrapped-around head
> has to be at least 0x80000000.

Yes, I think so. 

> > > Head and tail indexes are monotonically increasing so I don't see a
> > > risk for ABA-like
> > > problems.
> >
> > I understand that, but with current CPU speeds it can take rte_ring
> > just few seconds to
> > wrap around head/tail values. If user doing something really fancy -
> > like using rte_ring ZC API
> > (i.e. just moving head/tail without reading actual objects) that can
> > probably happen even
> > faster (less than a second?).
> > Are we sure that the stale tail value will never persist that long?
> > Let say user calling move_head() in a loop till it succeeds?
> >
> 
> Systems with fast CPUs may also have shorter window of inconsistency.
> 
> > > Indeed, adding a monotonically increasing tag to pointers is the
> > > common way of
> > > avoiding ABA
> > > problems in lock-free designs.
> >
> > Yep, using 64-bit values for head/tail counters will help to avoid
> > these concerns.
> > But it will probably break HTS/RTS modes, plus it is an ABI change
> > for sure.
> >
> > Actually after another thought, I have one more concern here:
> >
> > +               /*
> > +                * Ensure the entries calculation was not based on a
> > stale
> > +                * and unsafe stail observation that causes
> > underflow.
> > +                */
> > +               if ((int)*entries < 0)
> > +                       *entries = 0;
> > +
> >
> > With that change, it might return not-valid information back to the
> > user
> > about number of free/occupied entries in the ring.
> > Plus rte_ring_enqueue() now might fail even when there are enough
> > free entries
> > in the ring (same for dequeue).
> > That looks like a change in our public API behavior that might break
> > many things.
> > There are quite few places when caller expects enqueue/dequeue
> > operation to always succeed (let say there always should be enough
> > free space in the ring).
> > For example: rte_mempool works like that.
> > I am pretty sure there are quite few other places like that inside
> > DPDK,
> > not to mention third-party code.
> >
> > Considering all of the above, I am actually more in favor
> > to combine approaches #2 and #3 for the final patch:
> > establish a safe partial order (#2) and keep the check from #3
> > (should it become an assert()/verify()?)
> >
> 
> #2 is OK too; the difference in performance is only meaningful in the
> single-core case (Figure 12 in the blog post).

Yes, same thought here.

> But why combine #3? Litmus tests prove that this state won't be reached
> in #2 (Figure 10).

My intention was to convert it to RTE_ASSERT(),
to help people catch situations where things have gone completely wrong.
Though if you believe it is completely unnecessary, I am ok with a pure #2 approach.
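
To make it more concrete, below is a rough and untested sketch of how I see
#2 and #3 combined (names borrowed from rte_ring_c11_pvt.h, producer path
only, the consumer side would be symmetric):

	do {
		/* Reset n to the initial burst count */
		n = max;

		/* #2: acquire on our own head pairs with the release-CAS
		 * below, so threads taking the same role build a
		 * happens-before chain and never compute from a stale
		 * opposing tail.
		 */
		*old_head = rte_atomic_load_explicit(&d->head,
				rte_memory_order_acquire);

		/* unchanged: synchronizes with the store-release of ht->tail
		 * in __rte_ring_update_tail()
		 */
		stail = rte_atomic_load_explicit(&s->tail,
				rte_memory_order_acquire);

		*entries = (capacity + stail - *old_head);

		/* #3 demoted to a debug-only check: with #2 in place an
		 * underflow here means something went completely wrong
		 */
		RTE_ASSERT(*entries <= capacity);

		if (unlikely(n > *entries))
			n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries;
		if (n == 0)
			return 0;

		*new_head = *old_head + n;
		success = rte_atomic_compare_exchange_strong_explicit(
				&d->head, old_head, *new_head,
				rte_memory_order_release,
				rte_memory_order_relaxed);
	} while (unlikely(success == 0));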

> > Another thing to note: whatever final approach we choose -
> > we need to make sure that the problem is addressed across all other
> > rte_ring flavors/modes too (generic implementation, rts/hts mode,
> > soring).
> 
> Agree.

Ok, ping me if you need a hand with v2.
Thanks
Konstantin
 

^ permalink raw reply	[relevance 0%]

* Re: [PATCH 1/2] build: add backward compatibility for nested drivers
  @ 2025-09-23 13:08  3%   ` Kevin Traynor
  2025-09-23 13:28  0%     ` Bruce Richardson
  0 siblings, 1 reply; 77+ results
From: Kevin Traynor @ 2025-09-23 13:08 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: bruce.richardson, david.marchand

On 22/09/2025 16:51, Thomas Monjalon wrote:
> 22/09/2025 13:07, Kevin Traynor:
>> --- a/drivers/meson.build
>> +++ b/drivers/meson.build
>> -# add cmdline disabled drivers and meson disabled drivers together
>> -disable_drivers += ',' + get_option('disable_drivers')
>> +# map legacy driver names
>> +driver_map = {
>> +    'net/e1000': 'net/intel/e1000',
>> +    'net/fm10k': 'net/intel/fm10k',
>> +    'net/i40e': 'net/intel/i40e',
>> +    'net/iavf': 'net/intel/iavf',
>> +    'net/ice': 'net/intel/ice',
>> +    'net/idpf': 'net/intel/idpf',
>> +    'net/ipn3ke': 'net/intel/ipn3ke',
>> +    'net/ixgbe': 'net/intel/ixgbe',
>> +    'net/cpfl': 'net/intel/cpfl',
>> +}
> 
> Should we build this list inside a file drivers/net/intel/meson.build ?
> I'm not sure my idea is better...
> 

Not sure. It's generic as is, so any other driver moved anywhere could
add to this list here easily. Let's see what others think.

>> +
>> +# add cmdline drivers
>> +foreach driver_type : [['disable', get_option('disable_drivers')],
>> +                       ['enable', get_option('enable_drivers')]]
>> +    driver_list_name = driver_type[0] + '_drivers'
>> +    cmdline_drivers = ',' + driver_type[1]
>> +
>> +    foreach driver : cmdline_drivers.split(',')
>> +        if driver_map.has_key(driver)
> 
> I feel we need comments for above parsing.
> 

Ack, will add

>> +            driver_mapped = driver_map[driver]
>> +            warning('Driver name "@0@" is deprecated, please use "@1@" instead.'
> 
> Not sure about this warning.
> We can keep compatibility without saying it is deprecated.
> 

Yes, that is a good point for discussion. Seeing as support for the legacy
names was already dropped and I wasn't aware of any ABI-like policy
about it, I thought there might be a preference for a deprecation
warning / continuing to move to the new names only.

I would be happy to keep the legacy names without a warning/deprecation
for a longer term, and we could adopt this as a general guideline by
default too. It should not cost much effort to do this.

Another minor point: does this need a Fixes tag? Yes, in the sense that it
feels like it added a banana skin for users (the patches exist because I
hit this issue with 25.07). I didn't add one for now, as no guarantees
were broken and there isn't an upstream stable branch to backport to anyway.

>> +                    .format(driver, driver_mapped))
>> +            driver = driver_mapped
>> +        endif
>> +        if driver_list_name == 'disable_drivers'
>> +            disable_drivers += ',' + driver
>> +        else
>> +            enable_drivers += ',' + driver
>> +        endif
>> +    endforeach
>> +endforeach
>> +
>> +# add cmdline drivers and meson drivers together
> 
> This comment is not clear.
> 

I will remove this; it was previously for the enable/disable blocks of
code, but is only relevant to the line that was expanded into the loop
above, not the one below.

>>  disable_drivers = run_command(list_dir_globs, disable_drivers, check: true).stdout().split()
>> -
>> -# add cmdline enabled drivers and meson enabled drivers together
>> -enable_drivers = ',' + get_option('enable_drivers')
> 
> 


^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/2] build: add backward compatibility for nested drivers
  2025-09-23 13:08  3%   ` Kevin Traynor
@ 2025-09-23 13:28  0%     ` Bruce Richardson
  2025-09-24  8:43  0%       ` Thomas Monjalon
  0 siblings, 1 reply; 77+ results
From: Bruce Richardson @ 2025-09-23 13:28 UTC (permalink / raw)
  To: Kevin Traynor; +Cc: Thomas Monjalon, dev, david.marchand

On Tue, Sep 23, 2025 at 02:08:35PM +0100, Kevin Traynor wrote:
> On 22/09/2025 16:51, Thomas Monjalon wrote:
> > 22/09/2025 13:07, Kevin Traynor:
> >> --- a/drivers/meson.build
> >> +++ b/drivers/meson.build
> >> -# add cmdline disabled drivers and meson disabled drivers together
> >> -disable_drivers += ',' + get_option('disable_drivers')
> >> +# map legacy driver names
> >> +driver_map = {
> >> +    'net/e1000': 'net/intel/e1000',
> >> +    'net/fm10k': 'net/intel/fm10k',
> >> +    'net/i40e': 'net/intel/i40e',
> >> +    'net/iavf': 'net/intel/iavf',
> >> +    'net/ice': 'net/intel/ice',
> >> +    'net/idpf': 'net/intel/idpf',
> >> +    'net/ipn3ke': 'net/intel/ipn3ke',
> >> +    'net/ixgbe': 'net/intel/ixgbe',
> >> +    'net/cpfl': 'net/intel/cpfl',
> >> +}
> > 
> > Should we build this list inside a file drivers/net/intel/meson.build ?
> > I'm not sure my idea is better...
> > 
> 
> Not sure. It's generic as is, so any other driver moved anywhere could
> add to this list here easily. Let's see what others think.
> 

I'm fine either way. Having a general list may be good, as we may have
other driver moves or renames in future too.

> >> +
> >> +# add cmdline drivers
> >> +foreach driver_type : [['disable', get_option('disable_drivers')],
> >> +                       ['enable', get_option('enable_drivers')]]
> >> +    driver_list_name = driver_type[0] + '_drivers'
> >> +    cmdline_drivers = ',' + driver_type[1]
> >> +
> >> +    foreach driver : cmdline_drivers.split(',')
> >> +        if driver_map.has_key(driver)
> > 
> > I feel we need comments for above parsing.
> > 
> 
> Ack, will add
> 
> >> +            driver_mapped = driver_map[driver]
> >> +            warning('Driver name "@0@" is deprecated, please use "@1@" instead.'
> > 
> > Not sure about this warning.
> > We can keep compatibility without saying it is deprecated.
> > 
> 
> Yes, that is a good point for discussion. Seeing as support for the legacy
> names was already dropped and I wasn't aware of any ABI-like policy
> about it, I thought there might be a preference for a deprecation
> warning / continuing to move to the new names only.
> 
> I would be happy to keep the legacy names without a warning/deprecation
> for a longer term, and we could adopt this as a general guideline by
> default too. It should not cost much effort to do this.
> 

Agreed. If we do decide after a while to remove an old name, then we should
do a deprecation notice first.

> Another minor point: does this need a Fixes tag? Yes, in the sense that it
> feels like it added a banana skin for users (the patches exist because I
> hit this issue with 25.07). I didn't add one for now, as no guarantees
> were broken and there isn't an upstream stable branch to backport to anyway.
> 

If there is no backporting, I'm not sure it matters. Maybe add one anyway
to imply that this was something that should have been thought of in the
original patch.

/Bruce

^ permalink raw reply	[relevance 0%]

* [RFC PATCH 6/6] doc: update docs for ethdev changes
  @ 2025-09-23 14:12  4% ` Bruce Richardson
      2 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-09-23 14:12 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson

Move text from deprecation notice to release note, and update.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
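For applications still reading the per-queue fields, the same counters remain
available through the xstats API. A rough sketch of looking one up by name
(standard rte_eth_xstats_get()/rte_eth_xstats_get_names() usage; the
"rx_q0_packets" style of per-queue xstat name is assumed here):

#include <string.h>
#include <rte_ethdev.h>

/* fetch a single xstat, e.g. "rx_q0_packets"; returns 0 on success */
static int
queue_xstat_get(uint16_t port_id, const char *name, uint64_t *value)
{
	int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);

	if (n <= 0)
		return -1;

	struct rte_eth_xstat_name names[n];
	struct rte_eth_xstat xstats[n];

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
			rte_eth_xstats_get(port_id, xstats, n) != n)
		return -1;

	for (i = 0; i < n; i++) {
		if (strcmp(names[i].name, name) == 0) {
			*value = xstats[i].value;
			return 0;
		}
	}
	return -1;
}
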
 doc/guides/rel_notes/deprecation.rst   | 7 -------
 doc/guides/rel_notes/release_25_11.rst | 6 ++++++
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5aaeb1052a..bdebc3399f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -110,13 +110,6 @@ Deprecation Notices
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
 
-* ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
-  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
-  ``q_errors``.
-  Instead queue stats will be received via xstats API. Current method support
-  will be limited to maximum 256 queues.
-  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
-
 * ethdev: Flow actions ``PF`` and ``VF`` have been deprecated since DPDK 21.11
   and are yet to be removed. That still has not happened because there are net
   drivers which support combined use of either action ``PF`` or action ``VF``
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index efb88bbbb0..441085de69 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -105,6 +105,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* ethdev: As previously announced in deprecation notes,
+  queue specific stats fields are now removed from ``struct rte_eth_stats``.
+  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``, ``q_errors``.
+  Instead queue stats will be received via xstats API.
+  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` is removed from public headers.
+
 
 ABI Changes
 -----------
-- 
2.48.1


^ permalink raw reply	[relevance 4%]

* [PATCH v3] build: remove deprecated kmods option
  2025-09-19  7:57  5% [PATCH] build: remove deprecated kmods option Bruce Richardson
  2025-09-19  8:44  5% ` [PATCH v2] " Bruce Richardson
@ 2025-09-23 14:40  4% ` Bruce Richardson
  1 sibling, 0 replies; 77+ results
From: Bruce Richardson @ 2025-09-23 14:40 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson, David Marchand

The "enable_kmods" meson option was deprecated back in 2023[1], so can
now be removed from DPDK.

[1] https://doc.dpdk.org/guides-23.11/rel_notes/deprecation.html

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: David Marchand <david.marchand@redhat.com>

---
v3: add release note update.
v2: remove missed references in DTS and in freebsd meson.build
---
 doc/guides/rel_notes/deprecation.rst   | 7 -------
 doc/guides/rel_notes/release_25_11.rst | 7 +++++++
 dts/framework/remote_session/dpdk.py   | 2 +-
 dts/framework/utils.py                 | 2 +-
 kernel/freebsd/meson.build             | 4 ++--
 meson_options.txt                      | 2 --
 6 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5aaeb1052a..bdcd2775b6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,13 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
 Deprecation Notices
 -------------------
 
-* build: The ``enable_kmods`` option is deprecated and will be removed in a future release.
-  Setting/clearing the option has no impact on the build.
-  Instead, kernel modules will be always built for OS's where out-of-tree kernel modules
-  are required for DPDK operation.
-  Currently, this means that modules will only be built for FreeBSD.
-  No modules are shipped with DPDK for either Linux or Windows.
-
 * kvargs: The function ``rte_kvargs_process`` will get a new parameter
   for returning key match count. It will ease handling of no-match case.
 
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index efb88bbbb0..bce8e9f563 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -89,6 +89,13 @@ Removed Items
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* build: as previously announced in the deprecation notices,
+  the ``enable_kmods`` build option has been removed.
+  Kernel modules will now automatically be built for OS's where out-of-tree kernel modules
+  are required for DPDK operation.
+  Currently, this means that modules will only be built for FreeBSD.
+  No modules are shipped with DPDK for either Linux or Windows.
+
 
 API Changes
 -----------
diff --git a/dts/framework/remote_session/dpdk.py b/dts/framework/remote_session/dpdk.py
index 606d6e22fe..2dc8dab642 100644
--- a/dts/framework/remote_session/dpdk.py
+++ b/dts/framework/remote_session/dpdk.py
@@ -262,7 +262,7 @@ def _build_dpdk(self) -> None:
         """
         self._session.build_dpdk(
             self._env_vars,
-            MesonArgs(default_library="static", enable_kmods=True, libdir="lib"),
+            MesonArgs(default_library="static", libdir="lib"),
             self.remote_dpdk_tree_path,
             self.remote_dpdk_build_dir,
         )
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index 0c81ab1b95..9f7201c888 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -111,7 +111,7 @@ def __init__(self, default_library: str | None = None, **dpdk_args: str | bool):
         Example:
             ::
 
-                meson_args = MesonArgs(enable_kmods=True).
+                meson_args = MesonArgs(check_includes=True).
         """
         self._default_library = f"--default-library={default_library}" if default_library else ""
         self._dpdk_args = " ".join(
diff --git a/kernel/freebsd/meson.build b/kernel/freebsd/meson.build
index 1f612711be..862e19e766 100644
--- a/kernel/freebsd/meson.build
+++ b/kernel/freebsd/meson.build
@@ -29,7 +29,7 @@ foreach k:kmods
                 'KMOD_CFLAGS=' + ' '.join(kmod_cflags),
                 'CC=clang'],
             depends: built_kmods, # make each module depend on prev
-            build_by_default: get_option('enable_kmods'),
-            install: get_option('enable_kmods'),
+            build_by_default: true,
+            install: true,
             install_dir: '/boot/modules/')
 endforeach
diff --git a/meson_options.txt b/meson_options.txt
index e49b2fc089..e28d24054c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -24,8 +24,6 @@ option('enable_drivers', type: 'string', value: '', description:
        'Comma-separated list of drivers to build. If unspecified, build all drivers.')
 option('enable_driver_sdk', type: 'boolean', value: false, description:
        'Install headers to build drivers.')
-option('enable_kmods', type: 'boolean', value: true, description:
-       '[Deprecated - will be removed in future release] build kernel modules')
 option('enable_libs', type: 'string', value: '', description:
        'Comma-separated list of optional libraries to explicitly enable. [NOTE: mandatory libs are always enabled]')
 option('examples', type: 'string', value: '', description:
-- 
2.48.1


^ permalink raw reply	[relevance 4%]

* [PATCH] config/riscv: add rv64gcv cross compilation target
@ 2025-09-23 15:07 14% sunyuechi
  2025-10-06 12:43  4% ` sunyuechi
  0 siblings, 1 reply; 77+ results
From: sunyuechi @ 2025-09-23 15:07 UTC (permalink / raw)
  To: dev; +Cc: Sun Yuechi, Stanisław Kardach, Bruce Richardson

From: Sun Yuechi <sunyuechi@iscas.ac.cn>

Add a cross file for rv64gcv, enable it in devtools/test-meson-builds.sh,
and update the RISC-V cross-build guide to support the vector extension.

Signed-off-by: Sun Yuechi <sunyuechi@iscas.ac.cn>
---
 config/riscv/meson.build                        |  3 ++-
 config/riscv/riscv64_rv64gcv_linux_gcc          | 17 +++++++++++++++++
 devtools/test-meson-builds.sh                   |  4 ++++
 .../linux_gsg/cross_build_dpdk_for_riscv.rst    |  2 ++
 4 files changed, 25 insertions(+), 1 deletion(-)
 create mode 100644 config/riscv/riscv64_rv64gcv_linux_gcc

diff --git a/config/riscv/meson.build b/config/riscv/meson.build
index f3daea0c0e..a06429a1e2 100644
--- a/config/riscv/meson.build
+++ b/config/riscv/meson.build
@@ -43,7 +43,8 @@ vendor_generic = {
         ['RTE_MAX_NUMA_NODES', 2]
     ],
     'arch_config': {
-        'generic': {'machine_args': ['-march=rv64gc']}
+        'generic': {'machine_args': ['-march=rv64gc']},
+        'rv64gcv': {'machine_args': ['-march=rv64gcv']},
     }
 }
 
diff --git a/config/riscv/riscv64_rv64gcv_linux_gcc b/config/riscv/riscv64_rv64gcv_linux_gcc
new file mode 100644
index 0000000000..ccc5115dec
--- /dev/null
+++ b/config/riscv/riscv64_rv64gcv_linux_gcc
@@ -0,0 +1,17 @@
+[binaries]
+c = ['ccache', 'riscv64-linux-gnu-gcc']
+cpp = ['ccache', 'riscv64-linux-gnu-g++']
+ar = 'riscv64-linux-gnu-ar'
+strip = 'riscv64-linux-gnu-strip'
+pcap-config = ''
+
+[host_machine]
+system = 'linux'
+cpu_family = 'riscv64'
+cpu = 'rv64gcv'
+endian = 'little'
+
+[properties]
+vendor_id = 'generic'
+arch_id = 'rv64gcv'
+pkg_config_libdir = '/usr/lib/riscv64-linux-gnu/pkgconfig'
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 4fff1f7177..4f07f84eb0 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -290,6 +290,10 @@ build build-ppc64-power8-gcc $f ABI $use_shared
 f=$srcdir/config/riscv/riscv64_linux_gcc
 build build-riscv64-generic-gcc $f ABI $use_shared
 
+# RISC-V vector (rv64gcv)
+f=$srcdir/config/riscv/riscv64_rv64gcv_linux_gcc
+build build-riscv64_rv64gcv_gcc $f ABI $use_shared
+
 # Test installation of the x86-generic target, to be used for checking
 # the sample apps build using the pkg-config file for cflags and libs
 load_env cc
diff --git a/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst b/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
index 7d7f7ac72b..bcba12a604 100644
--- a/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
+++ b/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
@@ -108,6 +108,8 @@ Currently the following targets are supported:
 
 * Generic rv64gc ISA: ``config/riscv/riscv64_linux_gcc``
 
+* RV64GCV ISA: ``config/riscv/riscv64_rv64gcv_linux_gcc``
+
 * SiFive U740 SoC: ``config/riscv/riscv64_sifive_u740_linux_gcc``
 
 To add a new target support, ``config/riscv/meson.build`` has to be modified by
-- 
2.51.0


^ permalink raw reply	[relevance 14%]

* RE: [PATCH 1/1] ring: safe partial ordering for head/tail update
  2025-09-23 21:57  0%             ` Ola Liljedahl
@ 2025-09-24  6:56  0%               ` Konstantin Ananyev
  2025-09-24  7:50  0%                 ` Konstantin Ananyev
  0 siblings, 1 reply; 77+ results
From: Konstantin Ananyev @ 2025-09-24  6:56 UTC (permalink / raw)
  To: Ola Liljedahl, Wathsala Vithanage, Honnappa Nagarahalli
  Cc: dev, Dhruv Tripathi, Bruce Richardson


> > > > To avoid information loss I combined reply to two Wathsala replies into one.
> > > >
> > > >
> > > > > > > The function __rte_ring_headtail_move_head() assumes that the
> > > > > > > barrier
> > > > > > (fence) between the load of the head and the load-acquire of the
> > > > > > > opposing tail guarantees the following: if a first thread reads
> > > > > > > tail
> > > > > > > and then writes head and a second thread reads the new value of
> > > > > > > head
> > > > > > > and then reads tail, then it should observe the same (or a later)
> > > > > > > value of tail.
> > > > > > >
> > > > > > > This assumption is incorrect under the C11 memory model. If the
> > > > > > > barrier
> > > > > > > (fence) is intended to establish a total ordering of ring
> > > > > > > operations,
> > > > > > > it fails to do so. Instead, the current implementation only
> > > > > > > enforces a
> > > > > > > partial ordering, which can lead to unsafe interleavings. In
> > > > > > > particular,
> > > > > > > some partial orders can cause underflows in free slot or available
> > > > > > > element computations, potentially resulting in data corruption.
> > > > > >
> > > > > > Hmm... sounds exactly like the problem from the patch we discussed
> > > > > > earlier that year:
> > > > > >
> > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > > In two words:
> > > > > > "... thread can see 'latest' 'cons.head' value, with 'previous' value
> > > > > > for 'prod.tail' or visa-versa.
> > > > > > In other words: 'cons.head' value depends on 'prod.tail', so before
> > > > > > making latest 'cons.head'
> > > > > > value visible to other threads, we need to ensure that latest
> > > > > > 'prod.tail' is also visible."
> > > > > > Is that the one?
> > > >
> > > >
> > > > > Yes, the behavior occurs under RCpc (LDAPR) but not under RCsc (LDAR),
> > > > > which is why we didn’t catch it earlier. A fuller explanation, with
> > > > > Herd7 simulations, is in the blog post linked in the cover letter.
> > > > >
> > > > > https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-of-partial-order
> > > >
> > > >
> > > > I see, so now it is reproducible with core rte_ring on real HW.
> > > >
> > > >
> > > > > >
> > > > > > > The issue manifests when a CPU first acts as a producer and later
> > > > > > > as a
> > > > > > > consumer. In this scenario, the barrier assumption may fail when
> > > > > > > another
> > > > > > > core takes the consumer role. A Herd7 litmus test in C11 can
> > > > > > > demonstrate
> > > > > > > this violation. The problem has not been widely observed so far
> > > > > > > because:
> > > > > > > (a) on strong memory models (e.g., x86-64) the assumption holds,
> > > > > > > and
> > > > > > > (b) on relaxed models with RCsc semantics the ordering is still
> > > > > > > strong
> > > > > > > enough to prevent hazards.
> > > > > > > The problem becomes visible only on weaker models, when load-
> > > > > > > acquire is
> > > > > > > implemented with RCpc semantics (e.g. some AArch64 CPUs which
> > > > > > > support
> > > > > > > the LDAPR and LDAPUR instructions).
> > > > > > >
> > > > > > > Three possible solutions exist:
> > > > > > > 1. Strengthen ordering by upgrading release/acquire semantics to
> > > > > > > sequential consistency. This requires using seq-cst for
> > > > > > > stores,
> > > > > > > loads, and CAS operations. However, this approach introduces a
> > > > > > > significant performance penalty on relaxed-memory
> > > > > > > architectures.
> > > > > > >
> > > > > > > 2. Establish a safe partial order by enforcing a pair-wise
> > > > > > > happens-before relationship between thread of same role by
> > > > > > > changing
> > > > > > > the CAS and the preceding load of the head by converting them
> > > > > > > to
> > > > > > > release and acquire respectively. This approach makes the
> > > > > > > original
> > > > > > > barrier assumption unnecessary and allows its removal.
> > > > > >
> > > > > > For the sake of clarity, can you outline what would be exact code
> > > > > > changes for
> > > > > > approach #2? Same as in that patch:
> > > > > >
> > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > > Or something different?
> > > > >
> > > > > Sorry, I missed the later half you your comment before.
> > > > > Yes, you have proposed the same solution there.
> > > >
> > > >
> > > > Ok, thanks for confirmation.
> > > >
> > > >
> > > > > >
> > > > > >
> > > > > > > 3. Retain partial ordering but ensure only safe partial orders
> > > > > > > are
> > > > > > > committed. This can be done by detecting underflow conditions
> > > > > > > (producer < consumer) and quashing the update in such cases.
> > > > > > > This approach makes the original barrier assumption
> > > > > > > unnecessary
> > > > > > > and allows its removal.
> > > > > >
> > > > > > > This patch implements solution (3) for performance reasons.
> > > > > > >
> > > > > > > > Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
> > > > > > > > Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > > > Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
> > > > > > > ---
> > > > > > > lib/ring/rte_ring_c11_pvt.h | 10 +++++++---
> > > > > > > 1 file changed, 7 insertions(+), 3 deletions(-)
> > > > > > >
> > > > > > > diff --git a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > index b9388af0da..e5ac1f6b9e 100644
> > > > > > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > @@ -83,9 +83,6 @@ __rte_ring_headtail_move_head(struct
> > > > > > > rte_ring_headtail
> > > > > > > *d,
> > > > > > > /* Reset n to the initial burst count */
> > > > > > > n = max;
> > > > > > >
> > > > > > > - /* Ensure the head is read before tail */
> > > > > > > - rte_atomic_thread_fence(rte_memory_order_acquire);
> > > > > > > -
> > > > > > > /* load-acquire synchronize with store-release of
> > > > > > > ht->tail
> > > > > > > * in update_tail.
> > > > > > > */
> > > > > >
> > > > > > But then cons.head can be read a before prod.tail (and visa-versa),
> > > > > > right?
> > > > >
> > > > > Right, we let it happen but eliminate any resulting states that are
> > > > > semantically incorrect at the end.
> > > >
> > > >
> > > > Two comments here:
> > > > 1) I think it is probably safer to do the check like that:
> > > > If (*entries > ring->capacity) ...
> > > Yes, this might be another way of handling underflow situations. We could
> study
> > > this.
> > >
> > > I have used the check for negative without problems in my ring buffer
> > > implementations
> > > https://github.com/ARM-software/progress64/blob/master/src/p64_ringbuf.c
> > > but can't say that has been battle-tested.
> >
> >
> > My thought was about the case (probably hypothetical) when the difference
> > between stale tail and head will be bigger then 2^31 + 1.
> >
> >
> > > > 2) My concern that without forcing a proper read ordering
> > > > (cons.head first then prod.tail) we re-introduce a window for all sorts of
> > > > ABA-like problems.
> > > Head and tail indexes are monotonically increasing so I don't see a risk for
> ABA-like
> > > problems.
> >
> >
> > I understand that, but with current CPU speeds it can take rte_ring just few
> seconds to
> > wrap around head/tail values. If user doing something really fancy - like using
> rte_ring ZC API
> > (i.e. just moving head/tail without reading actual objects) that can probably
> happen even
> > faster (less than a second?).
> > Are we sure that the stale tail value will never persist that long?
> > Let say user calling move_head() in a loop till it succeeds?
> >
> >
> > > Indeed, adding a monotonically increasing tag to pointers is the common way
> of
> > > avoiding ABA
> > > problems in lock-free designs.
> >
> >
> > Yep, using 64-bit values for head/tail counters will help to avoid these concerns.
> > But it will probably break HTS/RTS modes, plus it is an ABI change for sure.
> >
> >
> > Actually after another thought, I have one more concern here:
> >
> >
> > + /*
> > + * Ensure the entries calculation was not based on a stale
> > + * and unsafe stail observation that causes underflow.
> > + */
> > + if ((int)*entries < 0)
> > + *entries = 0;
> > +
> >
> >
> > With that change, it might return not-valid information back to the user
> > about number of free/occupied entries in the ring.
> > Plus rte_ring_enqueue() now might fail even when there are enough free
> entries
> > in the ring (same for dequeue).
> How do you (or the thread) know there are enough free (or used) entries? Do
> you
> assume sequentially consistent behaviour (a total order of memory accesses)?
> Otherwise, you would need to explicitly create a happens-before relation
> between threads, e.g. a consumer which made room in the ring buffer must
> synchronize-with the producer that there is now room for more elements. That
> synchronize-with edge will ensure the producer reads a fresh value of stail. But
> without it, how can a thread know the state of the ring buffer that is being
> manipulated by another thread?
> 
> > That looks like a change in our public API behavior that might break many
> things.
> > There are quite few places when caller expects enqueue/dequeue
> > operation to always succeed (let say there always should be enough free space
> in the ring).
> Single-threaded scenarios are not a problem. Do you have a multithreaded
> scenario where
> the caller expects enqueue/dequeue to always succeed? How are the threads
> involved in such
> a scenario synchronizing with each other?

Sure, I am talking about MT scenario.
I think I already provided an example: DPDK mempool library (see below).
In brief, It works like that:
At init it allocates ring of N memory buffers and ring big enough to hold all of them.
Then it enqueues all allocated memory buffers into the ring.
mempool_get - retrieves (dequeues) buffers from the ring.
mempool_put - puts them back (enqueues) to the ring
get() might fail (ENOMEM), while put is expected to always succeed. 
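
A minimal sketch of that pattern with plain rte_ring calls (error handling
and exact ring-size flags omitted; N, buf[] and obj are made-up names):

	struct rte_ring *pool = rte_ring_create("buf_pool", N,
			SOCKET_ID_ANY, 0);

	/* init: the ring is sized to hold all N buffers, fill it completely */
	for (i = 0; i != N; i++)
		rte_ring_enqueue(pool, buf[i]);

	/* "get": may legitimately fail when the pool is empty */
	if (rte_ring_dequeue(pool, &obj) != 0)
		return -ENOMEM;

	/* "put": only buffers previously taken from the pool come back,
	 * so there is always room; callers treat a failure here as
	 * "can't happen".
	 */
	rte_ring_enqueue(pool, obj);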

> 
> > For example: rte_mempool works like that.
> > I am pretty sure there are quite few other places like that inside DPDK,
> > not to mention third-party code.
> >
> >
> > Considering all of the above, I am actually more in favor
> > to combine approaches #2 and #3 for the final patch:
> > establish a safe partial order (#2) and keep the check from #3 (should it become
> an assert()/verify()?)
> I agree that using acquire/release for all prod/cons_head accesses will make it
> easier to
> reason about the ring buffer state. Sequential consistency (total order) is the
> easiest to
> reason about and often seems to be desired and expected by programmers (e.g.
> "I'll just
> add a barrier here to ensure A happens before B in this thread, now there is a
> total order...").
> 
> - Ola
> 
> >
> >
> > Another thing to note: whatever final approach we choose -
> > we need to make sure that the problem is addressed across all other
> > rte_ring flavors/modes too (generic implementation, rts/hts mode, soring).
> >
> >
> > Konstantin
> 
> 

^ permalink raw reply	[relevance 0%]

* RE: [PATCH 1/1] ring: safe partial ordering for head/tail update
  2025-09-24  6:56  0%               ` Konstantin Ananyev
@ 2025-09-24  7:50  0%                 ` Konstantin Ananyev
  0 siblings, 0 replies; 77+ results
From: Konstantin Ananyev @ 2025-09-24  7:50 UTC (permalink / raw)
  To: Konstantin Ananyev, Ola Liljedahl, Wathsala Vithanage,
	Honnappa Nagarahalli
  Cc: dev, Dhruv Tripathi, Bruce Richardson



> > > > > To avoid information loss I combined reply to two Wathsala replies into one.
> > > > >
> > > > >
> > > > > > > > The function __rte_ring_headtail_move_head() assumes that the
> > > > > > > > barrier
> > > > > > > (fence) between the load of the head and the load-acquire of the
> > > > > > > > opposing tail guarantees the following: if a first thread reads
> > > > > > > > tail
> > > > > > > > and then writes head and a second thread reads the new value of
> > > > > > > > head
> > > > > > > > and then reads tail, then it should observe the same (or a later)
> > > > > > > > value of tail.
> > > > > > > >
> > > > > > > > This assumption is incorrect under the C11 memory model. If the
> > > > > > > > barrier
> > > > > > > > (fence) is intended to establish a total ordering of ring
> > > > > > > > operations,
> > > > > > > > it fails to do so. Instead, the current implementation only
> > > > > > > > enforces a
> > > > > > > > partial ordering, which can lead to unsafe interleavings. In
> > > > > > > > particular,
> > > > > > > > some partial orders can cause underflows in free slot or available
> > > > > > > > element computations, potentially resulting in data corruption.
> > > > > > >
> > > > > > > Hmm... sounds exactly like the problem from the patch we discussed
> > > > > > > earlier that year:
> > > > > > >
> > > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > > > In two words:
> > > > > > > "... thread can see 'latest' 'cons.head' value, with 'previous' value
> > > > > > > for 'prod.tail' or visa-versa.
> > > > > > > In other words: 'cons.head' value depends on 'prod.tail', so before
> > > > > > > making latest 'cons.head'
> > > > > > > value visible to other threads, we need to ensure that latest
> > > > > > > 'prod.tail' is also visible."
> > > > > > > Is that the one?
> > > > >
> > > > >
> > > > > > Yes, the behavior occurs under RCpc (LDAPR) but not under RCsc (LDAR),
> > > > > > which is why we didn’t catch it earlier. A fuller explanation, with
> > > > > > Herd7 simulations, is in the blog post linked in the cover letter.
> > > > > >
> > > > > > https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-of-partial-order
> > > > >
> > > > >
> > > > > I see, so now it is reproducible with core rte_ring on real HW.
> > > > >
> > > > >
> > > > > > >
> > > > > > > > The issue manifests when a CPU first acts as a producer and later
> > > > > > > > as a
> > > > > > > > consumer. In this scenario, the barrier assumption may fail when
> > > > > > > > another
> > > > > > > > core takes the consumer role. A Herd7 litmus test in C11 can
> > > > > > > > demonstrate
> > > > > > > > this violation. The problem has not been widely observed so far
> > > > > > > > because:
> > > > > > > > (a) on strong memory models (e.g., x86-64) the assumption holds,
> > > > > > > > and
> > > > > > > > (b) on relaxed models with RCsc semantics the ordering is still
> > > > > > > > strong
> > > > > > > > enough to prevent hazards.
> > > > > > > > The problem becomes visible only on weaker models, when load-
> > > > > > > > acquire is
> > > > > > > > implemented with RCpc semantics (e.g. some AArch64 CPUs which
> > > > > > > > support
> > > > > > > > the LDAPR and LDAPUR instructions).
> > > > > > > >
> > > > > > > > Three possible solutions exist:
> > > > > > > > 1. Strengthen ordering by upgrading release/acquire semantics to
> > > > > > > > sequential consistency. This requires using seq-cst for
> > > > > > > > stores,
> > > > > > > > loads, and CAS operations. However, this approach introduces a
> > > > > > > > significant performance penalty on relaxed-memory
> > > > > > > > architectures.
> > > > > > > >
> > > > > > > > 2. Establish a safe partial order by enforcing a pair-wise
> > > > > > > > happens-before relationship between thread of same role by
> > > > > > > > changing
> > > > > > > > the CAS and the preceding load of the head by converting them
> > > > > > > > to
> > > > > > > > release and acquire respectively. This approach makes the
> > > > > > > > original
> > > > > > > > barrier assumption unnecessary and allows its removal.
> > > > > > >
> > > > > > > For the sake of clarity, can you outline what would be exact code
> > > > > > > changes for
> > > > > > > approach #2? Same as in that patch:
> > > > > > >
> > > > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > > > Or something different?
> > > > > >
> > > > > > Sorry, I missed the later half you your comment before.
> > > > > > Yes, you have proposed the same solution there.
> > > > >
> > > > >
> > > > > Ok, thanks for confirmation.
> > > > >
> > > > >
> > > > > > >
> > > > > > >
> > > > > > > > 3. Retain partial ordering but ensure only safe partial orders
> > > > > > > > are
> > > > > > > > committed. This can be done by detecting underflow conditions
> > > > > > > > (producer < consumer) and quashing the update in such cases.
> > > > > > > > This approach makes the original barrier assumption
> > > > > > > > unnecessary
> > > > > > > > and allows its removal.
> > > > > > >
> > > > > > > > This patch implements solution (3) for performance reasons.
> > > > > > > >
> > > > > > > > Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
> > > > > > > > Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > > > Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
> > > > > > > > ---
> > > > > > > > lib/ring/rte_ring_c11_pvt.h | 10 +++++++---
> > > > > > > > 1 file changed, 7 insertions(+), 3 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > > b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > > index b9388af0da..e5ac1f6b9e 100644
> > > > > > > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > > > > > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > > > > > > @@ -83,9 +83,6 @@ __rte_ring_headtail_move_head(struct
> > > > > > > > rte_ring_headtail
> > > > > > > > *d,
> > > > > > > > /* Reset n to the initial burst count */
> > > > > > > > n = max;
> > > > > > > >
> > > > > > > > - /* Ensure the head is read before tail */
> > > > > > > > - rte_atomic_thread_fence(rte_memory_order_acquire);
> > > > > > > > -
> > > > > > > > /* load-acquire synchronize with store-release of
> > > > > > > > ht->tail
> > > > > > > > * in update_tail.
> > > > > > > > */
> > > > > > >
> > > > > > > But then cons.head can be read a before prod.tail (and visa-versa),
> > > > > > > right?
> > > > > >
> > > > > > Right, we let it happen but eliminate any resulting states that are
> > > > > > semantically incorrect at the end.
> > > > >
> > > > >
> > > > > Two comments here:
> > > > > 1) I think it is probably safer to do the check like that:
> > > > > If (*entries > ring->capacity) ...
> > > > Yes, this might be another way of handling underflow situations. We could
> > study
> > > > this.
> > > >
> > > > I have used the check for negative without problems in my ring buffer
> > > > implementations
> > > > https://github.com/ARM-software/progress64/blob/master/src/p64_ringbuf.c
> > > > but can't say that has been battle-tested.
> > >
> > >
> > > My thought was about the case (probably hypothetical) when the difference
> > > between stale tail and head will be bigger then 2^31 + 1.
> > >
> > >
> > > > > 2) My concern that without forcing a proper read ordering
> > > > > (cons.head first then prod.tail) we re-introduce a window for all sorts of
> > > > > ABA-like problems.
> > > > Head and tail indexes are monotonically increasing so I don't see a risk for
> > ABA-like
> > > > problems.
> > >
> > >
> > > I understand that, but with current CPU speeds it can take rte_ring just few
> > seconds to
> > > wrap around head/tail values. If user doing something really fancy - like using
> > rte_ring ZC API
> > > (i.e. just moving head/tail without reading actual objects) that can probably
> > happen even
> > > faster (less than a second?).
> > > Are we sure that the stale tail value will never persist that long?
> > > Let say user calling move_head() in a loop till it succeeds?
> > >
> > >
> > > > Indeed, adding a monotonically increasing tag to pointers is the common way
> > of
> > > > avoiding ABA
> > > > problems in lock-free designs.
> > >
> > >
> > > Yep, using 64-bit values for head/tail counters will help to avoid these concerns.
> > > But it will probably break HTS/RTS modes, plus it is an ABI change for sure.
> > >
> > >
> > > Actually after another thought, I have one more concern here:
> > >
> > >
> > > + /*
> > > + * Ensure the entries calculation was not based on a stale
> > > + * and unsafe stail observation that causes underflow.
> > > + */
> > > + if ((int)*entries < 0)
> > > + *entries = 0;
> > > +
> > >
> > >
> > > With that change, it might return not-valid information back to the user
> > > about number of free/occupied entries in the ring.
> > > Plus rte_ring_enqueue() now might fail even when there are enough free
> > entries
> > > in the ring (same for dequeue).
> > How do you (or the thread) know there are enough free (or used) entries? Do
> > you
> > assume sequentially consistent behaviour (a total order of memory accesses)?
> > Otherwise, you would need to explicitly create a happens-before relation
> > between threads, e.g. a consumer which made room in the ring buffer must
> > synchronize-with the producer that there is now room for more elements. That
> > synchronize-with edge will ensure the producer reads a fresh value of stail. But
> > without it, how can a thread know the state of the ring buffer that is being
> > manipulated by another thread?
> >
> > > That looks like a change in our public API behavior that might break many
> > things.
> > > There are quite few places when caller expects enqueue/dequeue
> > > operation to always succeed (let say there always should be enough free space
> > in the ring).
> > Single-threaded scenarios are not a problem. Do you have a multithreaded
> > scenario where
> > the caller expects enqueue/dequeue to always succeed? How are the threads
> > involved in such
> > a scenario synchronizing with each other?
> 
> Sure, I am talking about MT scenario.
> I think I already provided an example: DPDK mempool library (see below).
> In brief, It works like that:
> At init it allocates ring of N memory buffers and ring big enough to hold all of them.

Sorry, I meant to say: "it allocates N memory buffers and ring big enough to hold all of them".
 
> Then it enqueues all allocated memory buffers into the ring.
> mempool_get - retrieves (dequeues) buffers from the ring.
> mempool_put - puts them back (enqueues) to the ring
> get() might fail (ENOMEM), while put is expected to always succeed.
> 
> >
> > > For example: rte_mempool works like that.
> > > I am pretty sure there are quite few other places like that inside DPDK,
> > > not to mention third-party code.
> > >
> > >
> > > Considering all of the above, I am actually more in favor
> > > to combine approaches #2 and #3 for the final patch:
> > > establish a safe partial order (#2) and keep the check from #3 (should it become
> > an assert()/verify()?)
> > I agree that using acquire/release for all prod/cons_head accesses will make it
> > easier to
> > reason about the ring buffer state. Sequential consistency (total order) is the
> > easiest to
> > reason about and often seems to be desired and expected by programmers (e.g.
> > "I'll just
> > add a barrier here to ensure A happens before B in this thread, now there is a
> > total order...").
> >
> > - Ola
> >
> > >
> > >
> > > Another thing to note: whatever final approach we choose -
> > > we need to make sure that the problem is addressed across all other
> > > rte_ring flavors/modes too (generic implementation, rts/hts mode, soring).
> > >
> > >
> > > Konstantin
> >
> >

^ permalink raw reply	[relevance 0%]

* Re: [PATCH 1/1] ring: safe partial ordering for head/tail update
  2025-09-20 12:01  3%           ` Konstantin Ananyev
       [not found]                 ` <cf7e14d4ba5e9d78fddf083b6c92d75942447931.camel@arm.com>
@ 2025-09-23 21:57  0%             ` Ola Liljedahl
  2025-09-24  6:56  0%               ` Konstantin Ananyev
  1 sibling, 1 reply; 77+ results
From: Ola Liljedahl @ 2025-09-23 21:57 UTC (permalink / raw)
  To: Konstantin Ananyev, Wathsala Vithanage, Honnappa Nagarahalli
  Cc: dev, Dhruv Tripathi, Bruce Richardson

>
>
> On 2025-09-20, 14:01, "Konstantin Ananyev" <konstantin.ananyev@huawei.com> wrote:
>
>
>
>
> > >
> > > To avoid information loss I combined reply to two Wathsala replies into one.
> > >
> > >
> > > > > > The function __rte_ring_headtail_move_head() assumes that the
> > > > > > barrier
> > > > > (fence) between the load of the head and the load-acquire of the
> > > > > > opposing tail guarantees the following: if a first thread reads
> > > > > > tail
> > > > > > and then writes head and a second thread reads the new value of
> > > > > > head
> > > > > > and then reads tail, then it should observe the same (or a later)
> > > > > > value of tail.
> > > > > >
> > > > > > This assumption is incorrect under the C11 memory model. If the
> > > > > > barrier
> > > > > > (fence) is intended to establish a total ordering of ring
> > > > > > operations,
> > > > > > it fails to do so. Instead, the current implementation only
> > > > > > enforces a
> > > > > > partial ordering, which can lead to unsafe interleavings. In
> > > > > > particular,
> > > > > > some partial orders can cause underflows in free slot or available
> > > > > > element computations, potentially resulting in data corruption.
> > > > >
> > > > > Hmm... sounds exactly like the problem from the patch we discussed
> > > > > earlier that year:
> > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > In two words:
> > > > > "... thread can see 'latest' 'cons.head' value, with 'previous' value
> > > > > for 'prod.tail' or visa-versa.
> > > > > In other words: 'cons.head' value depends on 'prod.tail', so before
> > > > > making latest 'cons.head'
> > > > > value visible to other threads, we need to ensure that latest
> > > > > 'prod.tail' is also visible."
> > > > > Is that the one?
> > >
> > >
> > > > Yes, the behavior occurs under RCpc (LDAPR) but not under RCsc (LDAR),
> > > > which is why we didn’t catch it earlier. A fuller explanation, with
> > > > Herd7 simulations, is in the blog post linked in the cover letter.
> > > >
> > > > https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/when-a-barrier-does-not-block-the-pitfalls-of-partial-order
> > >
> > >
> > > I see, so now it is reproducible with core rte_ring on real HW.
> > >
> > >
> > > > >
> > > > > > The issue manifests when a CPU first acts as a producer and later
> > > > > > as a
> > > > > > consumer. In this scenario, the barrier assumption may fail when
> > > > > > another
> > > > > > core takes the consumer role. A Herd7 litmus test in C11 can
> > > > > > demonstrate
> > > > > > this violation. The problem has not been widely observed so far
> > > > > > because:
> > > > > > (a) on strong memory models (e.g., x86-64) the assumption holds,
> > > > > > and
> > > > > > (b) on relaxed models with RCsc semantics the ordering is still
> > > > > > strong
> > > > > > enough to prevent hazards.
> > > > > > The problem becomes visible only on weaker models, when load-
> > > > > > acquire is
> > > > > > implemented with RCpc semantics (e.g. some AArch64 CPUs which
> > > > > > support
> > > > > > the LDAPR and LDAPUR instructions).
> > > > > >
> > > > > > Three possible solutions exist:
> > > > > > 1. Strengthen ordering by upgrading release/acquire semantics to
> > > > > > sequential consistency. This requires using seq-cst for
> > > > > > stores,
> > > > > > loads, and CAS operations. However, this approach introduces a
> > > > > > significant performance penalty on relaxed-memory
> > > > > > architectures.
> > > > > >
> > > > > > 2. Establish a safe partial order by enforcing a pair-wise
> > > > > > happens-before relationship between thread of same role by
> > > > > > changing
> > > > > > the CAS and the preceding load of the head by converting them
> > > > > > to
> > > > > > release and acquire respectively. This approach makes the
> > > > > > original
> > > > > > barrier assumption unnecessary and allows its removal.
> > > > >
> > > > > For the sake of clarity, can you outline what would be exact code
> > > > > changes for
> > > > > approach #2? Same as in that patch:
> > > > > https://patchwork.dpdk.org/project/dpdk/patch/20250521111432.207936-4-konstantin.ananyev@huawei.com/
> > > > > Or something different?
> > > >
> > > > Sorry, I missed the latter half of your comment before.
> > > > Yes, you have proposed the same solution there.
> > >
> > >
> > > Ok, thanks for confirmation.
> > >
> > >
> > > > >
> > > > >
> > > > > > 3. Retain partial ordering but ensure only safe partial orders
> > > > > > are
> > > > > > committed. This can be done by detecting underflow conditions
> > > > > > (producer < consumer) and quashing the update in such cases.
> > > > > > This approach makes the original barrier assumption
> > > > > > unnecessary
> > > > > > and allows its removal.
> > > > >
> > > > > > This patch implements solution (3) for performance reasons.
> > > > > >
> > > > > > Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
> > > > > > Signed-off-by: Ola Liljedahl <ola.liljedahl@arm.com>
> > > > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > > Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
> > > > > > ---
> > > > > > lib/ring/rte_ring_c11_pvt.h | 10 +++++++---
> > > > > > 1 file changed, 7 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/ring/rte_ring_c11_pvt.h
> > > > > > b/lib/ring/rte_ring_c11_pvt.h
> > > > > > index b9388af0da..e5ac1f6b9e 100644
> > > > > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > > > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > > > > @@ -83,9 +83,6 @@ __rte_ring_headtail_move_head(struct
> > > > > > rte_ring_headtail
> > > > > > *d,
> > > > > > /* Reset n to the initial burst count */
> > > > > > n = max;
> > > > > >
> > > > > > - /* Ensure the head is read before tail */
> > > > > > - rte_atomic_thread_fence(rte_memory_order_acquire);
> > > > > > -
> > > > > > /* load-acquire synchronize with store-release of
> > > > > > ht->tail
> > > > > > * in update_tail.
> > > > > > */
> > > > >
> > > > > But then cons.head can be read before prod.tail (and vice-versa),
> > > > > right?
> > > >
> > > > Right, we let it happen but eliminate any resulting states that are
> > > > semantically incorrect at the end.
> > >
> > >
> > > Two comments here:
> > > 1) I think it is probably safer to do the check like this:
> > > if (*entries > ring->capacity) ...
> > Yes, this might be another way of handling underflow situations. We could study
> > this.
> >
> > I have used the check for negative without problems in my ring buffer
> > implementations
> > https://github.com/ARM-software/progress64/blob/master/src/p64_ringbuf.c
> > but can't say that has been battle-tested.
>
>
> My thought was about the case (probably hypothetical) where the difference
> between the stale tail and head becomes bigger than 2^31 + 1.
>
>
> > > 2) My concern is that without forcing a proper read ordering
> > > (cons.head first then prod.tail) we re-introduce a window for all sorts of
> > > ABA-like problems.
> > Head and tail indexes are monotonically increasing so I don't see a risk for ABA-like
> > problems.
>
>
> I understand that, but with current CPU speeds it can take rte_ring just a few seconds to
> wrap around the head/tail values. If the user is doing something really fancy - like using the
> rte_ring ZC API (i.e. just moving head/tail without reading actual objects) - that can probably
> happen even faster (less than a second?).
> Are we sure that the stale tail value will never persist that long?
> Let's say the user is calling move_head() in a loop till it succeeds?
>
>
> > Indeed, adding a monotonically increasing tag to pointers is the common way of
> > avoiding ABA
> > problems in lock-free designs.
>
>
> Yep, using 64-bit values for head/tail counters will help to avoid these concerns.
> But it will probably break HTS/RTS modes, plus it is an ABI change for sure.
>
>
> Actually after another thought, I have one more concern here:
>
>
> + /*
> + * Ensure the entries calculation was not based on a stale
> + * and unsafe stail observation that causes underflow.
> + */
> + if ((int)*entries < 0)
> + *entries = 0;
> +
>
>
> With that change, it might return invalid information back to the user
> about the number of free/occupied entries in the ring.
> Plus, rte_ring_enqueue() might now fail even when there are enough free entries
> in the ring (same for dequeue).
How do you (or the thread) know there are enough free (or used) entries? Do you
assume sequentially consistent behaviour (a total order of memory accesses)?
Otherwise, you would need to explicitly create a happens-before relation
between threads, e.g. a consumer which made room in the ring buffer must
synchronize-with the producer that there is now room for more elements. That
synchronize-with edge will ensure the producer reads a fresh value of stail. But
without it, how can a thread know the state of the ring buffer that is being
manipulated by another thread?
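
(As a minimal illustration of such a synchronize-with edge - plain C11 atomics, not
the rte_ring code itself - the consumer's release store of the tail pairs with the
producer's acquire load, so whatever the consumer did before the store is visible
to the producer after the load:)

#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t cons_tail;

/* consumer: empty a slot, then publish the new tail with a release store */
static void consumer_free_slot(uint32_t new_tail)
{
	/* ... read the object out of the ring slot ... */
	atomic_store_explicit(&cons_tail, new_tail, memory_order_release);
}

/* producer: the acquire load synchronizes-with the release store above,
 * so the freed slot is guaranteed to be visible here.
 */
static uint32_t producer_observe_tail(void)
{
	return atomic_load_explicit(&cons_tail, memory_order_acquire);
}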

> That looks like a change in our public API behavior that might break many things.
> There are quite a few places where the caller expects the enqueue/dequeue
> operation to always succeed (let's say there should always be enough free space in the ring).
Single-threaded scenarios are not a problem. Do you have a multithreaded scenario where
the caller expects enqueue/dequeue to always succeed? How are the threads involved in such
a scenario synchronizing with each other?

> For example: rte_mempool works like that.
> I am pretty sure there are quite a few other places like that inside DPDK,
> not to mention third-party code.
>
>
> Considering all of the above, I am actually more in favor
> of combining approaches #2 and #3 for the final patch:
> establish a safe partial order (#2) and keep the check from #3 (should it become an assert()/verify()?)
I agree that using acquire/release for all prod/cons_head accesses will make it easier to
reason about the ring buffer state. Sequential consistency (total order) is the easiest to
reason about and often seems to be desired and expected by programmers (e.g. "I'll just
add a barrier here to ensure A happens before B in this thread, now there is a total order...").
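
(For illustration only - a self-contained sketch of what combining #2 and #3 could
look like on a toy head/tail pair; the names, layout and capacity check below are
assumptions for the example, not the actual DPDK patch:)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_headtail { _Atomic uint32_t head; _Atomic uint32_t tail; };

/* producer-side head move: acquire the old head (#2), acquire the opposing
 * tail, quash underflow caused by a stale tail (#3), and commit the new
 * head with a release CAS (#2).
 */
static uint32_t
toy_move_prod_head(struct toy_headtail *prod, const struct toy_headtail *cons,
		   uint32_t capacity, uint32_t num)
{
	uint32_t old_head, new_head, free_entries, n;
	bool ok;

	do {
		n = num;	/* reset to the requested burst on retry */

		old_head = atomic_load_explicit(&prod->head, memory_order_acquire);
		uint32_t cons_tail = atomic_load_explicit(&cons->tail,
							   memory_order_acquire);

		free_entries = capacity + cons_tail - old_head;
		if (free_entries > capacity)	/* stale tail observed: treat as full */
			free_entries = 0;
		if (n > free_entries)
			n = free_entries;
		if (n == 0)
			return 0;

		new_head = old_head + n;
		ok = atomic_compare_exchange_strong_explicit(&prod->head,
				&old_head, new_head,
				memory_order_release, memory_order_relaxed);
	} while (!ok);

	return n;
}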

- Ola

>
>
> Another thing to note: whatever final approach we choose -
> we need to make sure that the problem is addressed across all other
> rte_ring flavors/modes too (generic implementation, rts/hts mode, soring).
>
>
> Konstantin


^ permalink raw reply	[relevance 0%]

* Re: [PATCH 1/2] build: add backward compatibility for nested drivers
  2025-09-23 13:28  0%     ` Bruce Richardson
@ 2025-09-24  8:43  0%       ` Thomas Monjalon
  0 siblings, 0 replies; 77+ results
From: Thomas Monjalon @ 2025-09-24  8:43 UTC (permalink / raw)
  To: Kevin Traynor, Bruce Richardson; +Cc: dev, david.marchand

23/09/2025 15:28, Bruce Richardson:
> On Tue, Sep 23, 2025 at 02:08:35PM +0100, Kevin Traynor wrote:
> > Yes, that is a good point for discussion. Seeing as support for the legacy
> > names was already dropped and I wasn't aware of any ABI-like policy
> > about it, I thought there might be a preference for a deprecation
> > warning/continuing to move to the new name only.
> > 
> > I would be happy to keep the legacy name without a warning/deprecation
> > for a longer term and we could adopt this as a general guideline by
> > default too. It should not cost much effort to do this.
> 
> Agreed. If we do decide after a while to remove an old name, then we should
> do a deprecation notice first.

I don't think we should require a notice if there is no deprecation,
just an alias added.

> > Another minor point: does this need a Fixes tag? Yes, in the sense that it
> > feels like it added a banana skin for users (the patches are because I
> > hit this issue with 25.07). I didn't add it for now, as no guarantees
> > were broken and there isn't an upstream stable for backporting to anyway.
> 
> If there is no backporting, I'm not sure it matters. Maybe add one anyway
> to imply that this was something that should have been thought of in the
> original patch.

Backports are not only for upstream branches.
If someone wants to maintain 25.07 privately,
it is good to know what to backport.



^ permalink raw reply	[relevance 0%]

* [RFC 0/6] get rid of pthread_cancel
@ 2025-09-24 16:51  3% Stephen Hemminger
  0 siblings, 0 replies; 77+ results
From: Stephen Hemminger @ 2025-09-24 16:51 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The use of pthread_cancel in DPDK is problematic since cancel is implemented
as a signal on Linux and as TerminateThread on Windows. Using pthread_cancel
also exposes the internals of rte_thread_t to drivers, which limits potential
ABI changes.

This patchset shows how the same effect can be had in most places by using
either existing flag variables or the semantics of sockets/pipes used to
communicate with the thread.

This is an RFC because it doesn't cover all uses of pthread_cancel; in the final
version pthread_cancel will be gone and flagged as an error in checkpatch,
and the patches hit multiple drivers which require special hardware.
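
A minimal sketch of the flag-based pattern (illustrative only, not taken from the
patches below): the worker polls a stop flag and exits on its own, and the control
thread sets the flag and joins instead of cancelling.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

static atomic_bool stop_requested;

static void *worker(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&stop_requested, memory_order_acquire)) {
		/* ... poll hardware / service the event source ... */
		usleep(1000);
	}
	return NULL;	/* clean exit instead of being cancelled */
}

static void stop_worker(pthread_t tid)
{
	atomic_store_explicit(&stop_requested, true, memory_order_release);
	pthread_join(tid, NULL);	/* replaces pthread_cancel(tid) */
}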

Stephen Hemminger (6):
  eal: avoid using pthread_cancel
  eventdev: avoid use of pthread_cancel
  raw/ifpga: avoid use of pthread_cancel
  dma/skeleton: avoid use of pthread_cancel
  intel/ipn3ke: avoid use of pthread_cancel
  intel/iavf: remove use of pthread_cancel

 drivers/dma/skeleton/skeleton_dmadev.c        |  5 +---
 drivers/net/intel/iavf/iavf_vchnl.c           |  8 +++----
 drivers/net/intel/ipn3ke/ipn3ke_representor.c |  8 ++-----
 drivers/raw/ifpga/ifpga_rawdev.c              |  8 +------
 lib/eal/common/eal_common_proc.c              | 24 +++++--------------
 lib/eventdev/rte_event_eth_rx_adapter.c       | 21 ++++++++--------
 6 files changed, 25 insertions(+), 49 deletions(-)

-- 
2.47.3


^ permalink raw reply	[relevance 3%]

* [PATCH v2 6/6] doc: update docs for ethdev changes
  @ 2025-09-29 15:00  4%   ` Bruce Richardson
  0 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-09-29 15:00 UTC (permalink / raw)
  To: dev; +Cc: Bruce Richardson

Move text from deprecation notice to release note, and update.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rel_notes/deprecation.rst   | 7 -------
 doc/guides/rel_notes/release_25_11.rst | 6 ++++++
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 483030cda8..4b9da99484 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -98,13 +98,6 @@ Deprecation Notices
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
 
-* ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
-  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
-  ``q_errors``.
-  Instead queue stats will be received via xstats API. Current method support
-  will be limited to maximum 256 queues.
-  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
-
 * ethdev: Flow actions ``PF`` and ``VF`` have been deprecated since DPDK 21.11
   and are yet to be removed. That still has not happened because there are net
   drivers which support combined use of either action ``PF`` or action ``VF``
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..4b00d3ec9e 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -116,6 +116,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* ethdev: As previously announced in deprecation notes,
+  queue specific stats fields are now removed from ``struct rte_eth_stats``.
+  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``, ``q_errors``.
+  Instead queue stats will be received via xstats API.
+  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` is removed from public headers.
+
 
 ABI Changes
 -----------
-- 
2.48.1
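
As a reference for users of the removed fields, here is a minimal sketch of reading
per-queue counters through the xstats API instead (the exact xstat names are
generated by the ethdev layer/driver, so list them at runtime rather than
hard-coding them; the "rx_q"/"tx_q" prefixes below are an assumption):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static void print_queue_xstats(uint16_t port_id)
{
	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
	struct rte_eth_xstat *values = calloc(n, sizeof(*values));

	if (names != NULL && values != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, values, n) == n) {
		for (int i = 0; i < n; i++) {
			const char *name = names[values[i].id].name;
			if (strncmp(name, "rx_q", 4) == 0 ||
			    strncmp(name, "tx_q", 4) == 0)
				printf("%s: %" PRIu64 "\n", name, values[i].value);
		}
	}
	free(names);
	free(values);
}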


^ permalink raw reply	[relevance 4%]

* Re: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
  2025-09-19 16:00  0%         ` Sameer Vaze
@ 2025-09-30 15:27  0%           ` Sameer Vaze
  0 siblings, 0 replies; 77+ results
From: Sameer Vaze @ 2025-09-30 15:27 UTC (permalink / raw)
  To: Akhil Goyal, Sunila Sahu, Fan Zhang, Ashish Gupta; +Cc: dev

[-- Attachment #1: Type: text/plain, Size: 4384 bytes --]

Hey Akhil,

I was able to get the changes as a series. Looks like it is now passing the CI build and test steps after Patrick fixed the traffic generator and retriggered a build. Let me know if you have any comments for the changes:

https://patches.dpdk.org/project/dpdk/list/?series=36214


Thanks
Sameer Vaze
________________________________
From: Sameer Vaze <svaze@qti.qualcomm.com>
Sent: Friday, September 19, 2025 10:00 AM
To: Akhil Goyal <gakhil@marvell.com>; Sunila Sahu <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta <ashishg@marvell.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: Re: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files



I don't see anything specific about creating and pushing a series in 9. Contributing Code to DPDK — Data Plane Development Kit 25.07.0 documentation <https://doc.dpdk.org/guides/contributing/patches.html>.

The only mention of a series above seems to use the depends-on tag.

Thanks
Sameer Vaze
________________________________
From: Akhil Goyal <gakhil@marvell.com>
Sent: Thursday, September 18, 2025 11:08 PM
To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta <ashishg@marvell.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files


Hi Sameer,
> Hey Akhil,
>
> I attempted to split the changes into multiple patches and added a depends-on to
> the second patch. But automation does not seem to be picking up the patch as a
> dependency. Is there a process step I messed up:

When you have dependent patches, you should send them as a series.
Automation runs on the last patch in the series only.
Currently it does not handle the depends-on tag; it is there for reviewers for now.
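
For example (the directory and version number below are only illustrative), a
dependent pair of patches can be prepared and sent as one series with:

  git format-patch -2 -v2 --cover-letter -o outgoing/
  # edit outgoing/v2-0000-cover-letter.patch, then:
  git send-email --to=dev@dpdk.org outgoing/*.patch

Patchwork and the CI should then see both patches under a single series.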


>
>
> Patch 1: compress/zlib: support for dictionary and PDCP checksum - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918204411.1701035-1-svaze@qti.qualcomm.com/>
> Patch 2 with depends-on: app/compress-perf: support dictionary files - Patchwork
> <https://patches.dpdk.org/project/dpdk/patch/20250918210806.1709958-1-svaze@qti.qualcomm.com/>
>
> Thanks
> Sameer Vaze
> ________________________________
>
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Tuesday, June 17, 2025 3:34 PM
> To: Sameer Vaze <svaze@qti.qualcomm.com>; Sunila Sahu
> <ssahu@marvell.com>; Fan Zhang <fanzhang.oss@gmail.com>; Ashish Gupta
> <ashishg@marvell.com>
> Cc: dev@dpdk.org <dev@dpdk.org>
> Subject: RE: [EXTERNAL] [PATCH] app/compress-perf: support dictionary files
>
>
> > compress/zlib: support PDCP checksum
> >
> > compress/zlib: support zlib dictionary
> >
> > compressdev: add PDCP checksum
> >
> > compressdev: support zlib dictionary
> >
> > Adds support to provide predefined dictionaries to zlib. Handles setting
> > and getting of dictionaries using zlib apis. Also includes support to
> > read dictionary files
> >
> > Adds support for passing in and validating 3GPP PDCP spec-defined
> > checksums as defined under the Uplink Data Compression (UDC) feature.
> > Changes also include functions that do inflate or deflate specific
> > checksum operations.
> >
> > Introduces new members to compression api structures to allow setting
> > predefined dictionaries
> >
> > Signed-off-by: Sameer Vaze <svaze@qti.qualcomm.com>
>
> Seems like multiple patches are squashed into a single patch
>
> I see that this patch has ABI breaks.
> We need to defer this patch for next ABI break release.
> Please split the patch appropriately.
> The first patch should define the library changes,
> followed by logically separated PMD patches,
> and then the application patches.
> Ensure each patch is compilable.
>
> Since this patch is breaking ABI/API,
> please send a deprecation notice to be merged in this release and
> the implementation for the next release.
>
> Also avoid unnecessary and irrelevant code changes.
>


[-- Attachment #2: Type: text/html, Size: 8604 bytes --]

^ permalink raw reply	[relevance 0%]

* [PATCH v1 1/3] cryptodev: support PQC ML algorithms
  @ 2025-09-30 18:03  3%   ` Gowrishankar Muthukrishnan
    1 sibling, 0 replies; 77+ results
From: Gowrishankar Muthukrishnan @ 2025-09-30 18:03 UTC (permalink / raw)
  To: dev, Akhil Goyal, Fan Zhang, Kai Ji; +Cc: anoobj, Gowrishankar Muthukrishnan

Add support for PQC ML-KEM and ML-DSA algorithms.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |   2 +
 doc/guides/prog_guide/cryptodev_lib.rst    |   3 +-
 doc/guides/rel_notes/release_25_11.rst     |  11 +
 lib/cryptodev/rte_crypto_asym.h            | 306 +++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c              |  60 ++++
 lib/cryptodev/rte_cryptodev.h              |  15 +-
 6 files changed, 394 insertions(+), 3 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 116ffce249..64198f013a 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -134,6 +134,8 @@ ECPM                    =
 ECDH                    =
 SM2                     =
 EdDSA                   =
+ML-DSA                  =
+ML-KEM                  =
 
 ;
 ; Supported Operating systems of a default crypto driver.
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b54efcb74e..f0ee44eb54 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -928,7 +928,8 @@ Asymmetric Cryptography
 The cryptodev library currently provides support for the following asymmetric
 Crypto operations; RSA, Modular exponentiation and inversion, Diffie-Hellman and
 Elliptic Curve Diffie-Hellman public and/or private key generation and shared
-secret compute, DSA and EdDSA signature generation and verification.
+secret compute, DSA and EdDSA signature generation and verification,
+PQC ML-KEM and ML-DSA algorithms.
 
 Session and Session Management
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..9d47f762d7 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -76,6 +76,14 @@ New Features
   * Added multi-process per port.
   * Optimized code.
 
+* **Added PQC ML-KEM and ML-DSA support.**
+
+  * Added PQC ML-KEM support with reference to FIPS203.
+  * Added PQC ML-DSA support with reference to FIPS204.
+
+* **Updated openssl crypto driver.**
+
+  * Added support for PQC ML-KEM and ML-DSA algorithms.
 
 Removed Items
 -------------
@@ -138,6 +146,9 @@ ABI Changes
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
+* cryptodev: The enum ``rte_crypto_asym_xform_type``, struct ``rte_crypto_asym_xform``
+  and struct ``rte_crypto_asym_op`` are updated to include new values to support
+  ML-KEM and ML-DSA.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..14a0e57467 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -37,6 +37,20 @@ rte_crypto_asym_ke_strings[];
 extern const char *
 rte_crypto_asym_op_strings[];
 
+/** PQC ML crypto op parameters size */
+extern const uint16_t
+rte_crypto_ml_kem_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_cipher_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_sign_size[];
+
 #ifdef __cplusplus
 }
 #endif
@@ -144,6 +158,14 @@ enum rte_crypto_asym_xform_type {
 	/**< Edwards Curve Digital Signature Algorithm
 	 * Perform Signature Generation and Verification.
 	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_KEM,
+	/**< Module Lattice based Key Encapsulation Mechanism
+	 * Performs Key Pair Generation, Encapsulation and Decapsulation.
+	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_DSA
+	/**< Module Lattice based Digital Signature Algorithm
+	 * Performs Key Pair Generation, Signature Generation and Verification.
+	 */
 };
 
 /**
@@ -720,6 +742,282 @@ struct rte_crypto_sm2_op_param {
 	 */
 };
 
+/**
+ * PQC ML-KEM algorithms
+ *
+ * List of ML-KEM algorithms used in PQC
+ */
+enum rte_crypto_ml_kem_param_set {
+	RTE_CRYPTO_ML_KEM_PARAM_NONE,
+	RTE_CRYPTO_ML_KEM_PARAM_512,
+	RTE_CRYPTO_ML_KEM_PARAM_768,
+	RTE_CRYPTO_ML_KEM_PARAM_1024,
+};
+
+/**
+ * PQC ML-KEM op types
+ *
+ * List of ML-KEM op types in PQC
+ */
+enum rte_crypto_ml_kem_op_type {
+	RTE_CRYPTO_ML_KEM_OP_KEYGEN,
+	RTE_CRYPTO_ML_KEM_OP_KEYVER,
+	RTE_CRYPTO_ML_KEM_OP_ENCAP,
+	RTE_CRYPTO_ML_KEM_OP_DECAP,
+	RTE_CRYPTO_ML_KEM_OP_END
+};
+
+/**
+ * PQC ML-KEM transform data
+ *
+ * Structure describing ML-KEM xform params
+ */
+struct rte_crypto_ml_kem_xform {
+	enum rte_crypto_ml_kem_param_set param;
+};
+
+/**
+ * PQC ML-KEM KEYGEN op
+ *
+ * Parameters for PQC ML-KEM key generation operation
+ */
+struct rte_crypto_ml_kem_keygen_op {
+	rte_crypto_param d;
+	/**< The seed d value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param z;
+	/**< The seed z value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param ek;
+	/**<
+	 * Pointer to output data
+	 * - The computed encapsulation key.
+	 * - Refer `rte_crypto_ml_kem_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param dk;
+	/**<
+	 * Pointer to output data
+	 * - The computed decapsulation key.
+	 * - Refer `rte_crypto_ml_kem_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-KEM KEYVER op
+ *
+ * Parameters for PQC ML-KEM key verification operation
+ */
+struct rte_crypto_ml_kem_keyver_op {
+	enum rte_crypto_ml_kem_op_type op;
+	/**<
+	 * Op associated with key to be verified is one of below:
+	 * - Encapsulation op
+	 * - Decapsulation op
+	 */
+
+	rte_crypto_param key;
+	/**<
+	 * KEM key to check.
+	 * - ek in case of encapsulation op.
+	 * - dk in case of decapsulation op.
+	 */
+};
+
+/**
+ * PQC ML-KEM ENCAP op
+ *
+ * Parameters for PQC ML-KEM encapsulation operation
+ */
+struct rte_crypto_ml_kem_encap_op {
+	rte_crypto_param message;
+	/**< The message (of 32 bytes in length) for randomness.*/
+
+	rte_crypto_param ek;
+	/**< The encapsulation key.*/
+
+	rte_crypto_param cipher;
+	/**<
+	 * Pointer to output data
+	 * - The computed cipher.
+	 * - Refer `rte_crypto_ml_kem_cipher_size` for size of buffer.
+	 */
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM DECAP op
+ *
+ * Parameters for PQC ML-KEM decapsulation operation
+ */
+struct rte_crypto_ml_kem_decap_op {
+	rte_crypto_param cipher;
+	/**< The cipher to be decapsulated.*/
+
+	rte_crypto_param dk;
+	/**< The decapsulation key.*/
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM op
+ *
+ * Parameters for PQC ML-KEM operation
+ */
+struct rte_crypto_ml_kem_op {
+	enum rte_crypto_ml_kem_op_type op;
+	union {
+		struct rte_crypto_ml_kem_keygen_op keygen;
+		struct rte_crypto_ml_kem_keyver_op keyver;
+		struct rte_crypto_ml_kem_encap_op encap;
+		struct rte_crypto_ml_kem_decap_op decap;
+	};
+};
+
+/**
+ * PQC ML-DSA algorithms
+ *
+ * List of ML-DSA algorithms used in PQC
+ */
+enum rte_crypto_ml_dsa_param_set {
+	RTE_CRYPTO_ML_DSA_PARAM_NONE,
+	RTE_CRYPTO_ML_DSA_PARAM_44,
+	RTE_CRYPTO_ML_DSA_PARAM_65,
+	RTE_CRYPTO_ML_DSA_PARAM_87,
+};
+
+/**
+ * PQC ML-DSA op types
+ *
+ * List of ML-DSA op types in PQC
+ */
+enum rte_crypto_ml_dsa_op_type {
+	RTE_CRYPTO_ML_DSA_OP_KEYGEN,
+	RTE_CRYPTO_ML_DSA_OP_SIGN,
+	RTE_CRYPTO_ML_DSA_OP_VERIFY,
+	RTE_CRYPTO_ML_DSA_OP_END
+};
+
+/**
+ * PQC ML-DSA transform data
+ *
+ * Structure describing ML-DSA xform params
+ */
+struct rte_crypto_ml_dsa_xform {
+	enum rte_crypto_ml_dsa_param_set param;
+
+	bool sign_deterministic;
+	/**< The signature generated using deterministic method. */
+
+	bool sign_prehash;
+	/**< The signature generated using prehash or pure routine. */
+};
+
+/**
+ * PQC ML-DSA KEYGEN op
+ *
+ * Parameters for PQC ML-DSA key generation operation
+ */
+struct rte_crypto_ml_dsa_keygen_op {
+	rte_crypto_param seed;
+	/**< The random seed (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param pubkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed public key.
+	 * - Refer `rte_crypto_ml_dsa_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param privkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed secret key.
+	 * - Refer `rte_crypto_ml_dsa_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGGEN op
+ *
+ * Parameters for PQC ML-DSA sign operation
+ */
+struct rte_crypto_ml_dsa_siggen_op {
+	rte_crypto_param message;
+	/**< The message to generate signature.*/
+
+	rte_crypto_param mu;
+	/**< The mu to generate signature.*/
+
+	rte_crypto_param privkey;
+	/**< The secret key to generate signature.*/
+
+	rte_crypto_param seed;
+	/**< The seed to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+
+	rte_crypto_param sign;
+	/**<
+	 * Pointer to output data
+	 * - The computed signature.
+	 * - Refer `rte_crypto_ml_dsa_sign_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGVER op
+ *
+ * Parameters for PQC ML-DSA verify operation
+ */
+struct rte_crypto_ml_dsa_sigver_op {
+	rte_crypto_param pubkey;
+	/**< The public key to verify signature.*/
+
+	rte_crypto_param message;
+	/**< The message used to verify signature.*/
+
+	rte_crypto_param sign;
+	/**< The signature to verify.*/
+
+	rte_crypto_param mu;
+	/**< The mu used to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+};
+
+/**
+ * PQC ML-DSA op
+ *
+ * Parameters for PQC ML-DSA operation
+ */
+struct rte_crypto_ml_dsa_op {
+	enum rte_crypto_ml_dsa_op_type op;
+	union {
+		struct rte_crypto_ml_dsa_keygen_op keygen;
+		struct rte_crypto_ml_dsa_siggen_op siggen;
+		struct rte_crypto_ml_dsa_sigver_op sigver;
+	};
+};
+
 /**
  * Asymmetric crypto transform data
  *
@@ -751,6 +1049,12 @@ struct rte_crypto_asym_xform {
 		/**< EC xform parameters, used by elliptic curve based
 		 * operations.
 		 */
+
+		struct rte_crypto_ml_kem_xform mlkem;
+		/**< PQC ML-KEM xform parameters */
+
+		struct rte_crypto_ml_dsa_xform mldsa;
+		/**< PQC ML-DSA xform parameters */
 	};
 };
 
@@ -778,6 +1082,8 @@ struct rte_crypto_asym_op {
 		struct rte_crypto_ecpm_op_param ecpm;
 		struct rte_crypto_sm2_op_param sm2;
 		struct rte_crypto_eddsa_op_param eddsa;
+		struct rte_crypto_ml_kem_op mlkem;
+		struct rte_crypto_ml_dsa_op mldsa;
 	};
 	uint16_t flags;
 	/**<
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..fd40c8a64c 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -229,6 +229,66 @@ const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY] = "pub_ec_key_verify"
 };
 
+/**
+ * Public key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_pubkey_size)
+const uint16_t rte_crypto_ml_kem_pubkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 800,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1184,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Private key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_privkey_size)
+const uint16_t rte_crypto_ml_kem_privkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 1632,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 2400,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 3168,
+};
+
+/**
+ * Cipher size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_cipher_size)
+const uint16_t rte_crypto_ml_kem_cipher_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 768,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1088,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Public key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_pubkey_size)
+const uint16_t rte_crypto_ml_dsa_pubkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 1312,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 1952,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 2592,
+};
+
+/**
+ * Private key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_privkey_size)
+const uint16_t rte_crypto_ml_dsa_privkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2560,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 4032,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4896,
+};
+
+/**
+ * Sign size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_sign_size)
+const uint16_t rte_crypto_ml_dsa_sign_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2420,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 3309,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4627,
+};
+
 struct rte_cryptodev_sym_session_pool_private_data {
 	uint16_t sess_data_sz;
 	/**< driver session data size */
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index eaf0e50d37..37a6a5e49b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -167,10 +167,13 @@ struct rte_cryptodev_asymmetric_xform_capability {
 	uint32_t op_types;
 	/**<
 	 * Bitmask for supported rte_crypto_asym_op_type or
+	 * rte_crypto_ml_kem_op_type or rte_crypto_ml_dsa_op_type or
 	 * rte_crypto_asym_ke_type. Which enum is used is determined
 	 * by the rte_crypto_asym_xform_type. For key exchange algorithms
-	 * like Diffie-Hellman it is rte_crypto_asym_ke_type, for others
-	 * it is rte_crypto_asym_op_type.
+	 * like Diffie-Hellman it is rte_crypto_asym_ke_type,
+	 * for ML-KEM algorithms it is rte_crypto_ml_kem_op_type,
+	 * for ML-DSA algorithms it is rte_crypto_ml_dsa_op_type,
+	 * or others it is rte_crypto_asym_op_type.
 	 */
 
 	__extension__
@@ -188,6 +191,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
 
 		uint32_t op_capa[RTE_CRYPTO_ASYM_OP_LIST_END];
 		/**< Operation specific capabilities. */
+
+		uint32_t mlkem_capa[RTE_CRYPTO_ML_KEM_OP_END];
+		/**< Bitmask of supported ML-KEM parameter sets. */
+
+		uint32_t mldsa_capa[RTE_CRYPTO_ML_DSA_OP_END];
+		/**< Bitmask of supported ML-DSA parameter sets. */
 	};
 
 	uint64_t hash_algos;
@@ -577,6 +586,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support inner checksum computation/verification */
 #define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
 /**< Support Rx injection after security processing */
+#define RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH		(1ULL << 29)
+/**< Support Pre Hash ML-DSA Signature Generation */
 
 /**
  * Get the name of a crypto device feature flag
-- 
2.37.1
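
For reference, a rough usage sketch for the new ML-KEM keygen op (illustrative
only: device, session and queue-pair setup are omitted, the caller provides the
buffers, and rte_crypto_param is assumed to carry the usual data/length pair):

#include <rte_crypto_asym.h>

static void
fill_mlkem_keygen(struct rte_crypto_asym_op *asym_op,
		  uint8_t seed_d[32], uint8_t seed_z[32],
		  uint8_t *ek_buf, uint8_t *dk_buf)
{
	const enum rte_crypto_ml_kem_param_set p = RTE_CRYPTO_ML_KEM_PARAM_768;

	asym_op->mlkem.op = RTE_CRYPTO_ML_KEM_OP_KEYGEN;
	asym_op->mlkem.keygen.d.data = seed_d;		/* 32-byte seed d */
	asym_op->mlkem.keygen.d.length = 32;
	asym_op->mlkem.keygen.z.data = seed_z;		/* 32-byte seed z */
	asym_op->mlkem.keygen.z.length = 32;
	asym_op->mlkem.keygen.ek.data = ek_buf;		/* output: encapsulation key */
	asym_op->mlkem.keygen.ek.length = rte_crypto_ml_kem_pubkey_size[p];
	asym_op->mlkem.keygen.dk.data = dk_buf;		/* output: decapsulation key */
	asym_op->mlkem.keygen.dk.length = rte_crypto_ml_kem_privkey_size[p];
}

The corresponding xform would set xform_type = RTE_CRYPTO_ASYM_XFORM_ML_KEM and
mlkem.param = RTE_CRYPTO_ML_KEM_PARAM_768 when creating the asymmetric session.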


^ permalink raw reply	[relevance 3%]

* [PATCH v2 1/3] cryptodev: support PQC ML algorithms
  @ 2025-10-01  7:37  3%     ` Gowrishankar Muthukrishnan
    1 sibling, 0 replies; 77+ results
From: Gowrishankar Muthukrishnan @ 2025-10-01  7:37 UTC (permalink / raw)
  To: dev, Akhil Goyal, Fan Zhang, Kai Ji; +Cc: anoobj, Gowrishankar Muthukrishnan

Add support for PQC ML-KEM and ML-DSA algorithms.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |   2 +
 doc/guides/prog_guide/cryptodev_lib.rst    |   3 +-
 doc/guides/rel_notes/release_25_11.rst     |  11 +
 lib/cryptodev/rte_crypto_asym.h            | 306 +++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c              |  60 ++++
 lib/cryptodev/rte_cryptodev.h              |  15 +-
 6 files changed, 394 insertions(+), 3 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 116ffce249..64198f013a 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -134,6 +134,8 @@ ECPM                    =
 ECDH                    =
 SM2                     =
 EdDSA                   =
+ML-DSA                  =
+ML-KEM                  =
 
 ;
 ; Supported Operating systems of a default crypto driver.
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b54efcb74e..f0ee44eb54 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -928,7 +928,8 @@ Asymmetric Cryptography
 The cryptodev library currently provides support for the following asymmetric
 Crypto operations; RSA, Modular exponentiation and inversion, Diffie-Hellman and
 Elliptic Curve Diffie-Hellman public and/or private key generation and shared
-secret compute, DSA and EdDSA signature generation and verification.
+secret compute, DSA and EdDSA signature generation and verification,
+PQC ML-KEM and ML-DSA algorithms.
 
 Session and Session Management
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..9d47f762d7 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -76,6 +76,14 @@ New Features
   * Added multi-process per port.
   * Optimized code.
 
+* **Added PQC ML-KEM and ML-DSA support.**
+
+  * Added PQC ML-KEM support with reference to FIPS203.
+  * Added PQC ML-DSA support with reference to FIPS204.
+
+* **Updated openssl crypto driver.**
+
+  * Added support for PQC ML-KEM and ML-DSA algorithms.
 
 Removed Items
 -------------
@@ -138,6 +146,9 @@ ABI Changes
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
+* cryptodev: The enum ``rte_crypto_asym_xform_type``, struct ``rte_crypto_asym_xform``
+  and struct ``rte_crypto_asym_op`` are updated to include new values to support
+  ML-KEM and ML-DSA.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..14a0e57467 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -37,6 +37,20 @@ rte_crypto_asym_ke_strings[];
 extern const char *
 rte_crypto_asym_op_strings[];
 
+/** PQC ML crypto op parameters size */
+extern const uint16_t
+rte_crypto_ml_kem_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_cipher_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_sign_size[];
+
 #ifdef __cplusplus
 }
 #endif
@@ -144,6 +158,14 @@ enum rte_crypto_asym_xform_type {
 	/**< Edwards Curve Digital Signature Algorithm
 	 * Perform Signature Generation and Verification.
 	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_KEM,
+	/**< Module Lattice based Key Encapsulation Mechanism
+	 * Performs Key Pair Generation, Encapsulation and Decapsulation.
+	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_DSA
+	/**< Module Lattice based Digital Signature Algorithm
+	 * Performs Key Pair Generation, Signature Generation and Verification.
+	 */
 };
 
 /**
@@ -720,6 +742,282 @@ struct rte_crypto_sm2_op_param {
 	 */
 };
 
+/**
+ * PQC ML-KEM algorithms
+ *
+ * List of ML-KEM algorithms used in PQC
+ */
+enum rte_crypto_ml_kem_param_set {
+	RTE_CRYPTO_ML_KEM_PARAM_NONE,
+	RTE_CRYPTO_ML_KEM_PARAM_512,
+	RTE_CRYPTO_ML_KEM_PARAM_768,
+	RTE_CRYPTO_ML_KEM_PARAM_1024,
+};
+
+/**
+ * PQC ML-KEM op types
+ *
+ * List of ML-KEM op types in PQC
+ */
+enum rte_crypto_ml_kem_op_type {
+	RTE_CRYPTO_ML_KEM_OP_KEYGEN,
+	RTE_CRYPTO_ML_KEM_OP_KEYVER,
+	RTE_CRYPTO_ML_KEM_OP_ENCAP,
+	RTE_CRYPTO_ML_KEM_OP_DECAP,
+	RTE_CRYPTO_ML_KEM_OP_END
+};
+
+/**
+ * PQC ML-KEM transform data
+ *
+ * Structure describing ML-KEM xform params
+ */
+struct rte_crypto_ml_kem_xform {
+	enum rte_crypto_ml_kem_param_set param;
+};
+
+/**
+ * PQC ML-KEM KEYGEN op
+ *
+ * Parameters for PQC ML-KEM key generation operation
+ */
+struct rte_crypto_ml_kem_keygen_op {
+	rte_crypto_param d;
+	/**< The seed d value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param z;
+	/**< The seed z value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param ek;
+	/**<
+	 * Pointer to output data
+	 * - The computed encapsulation key.
+	 * - Refer `rte_crypto_ml_kem_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param dk;
+	/**<
+	 * Pointer to output data
+	 * - The computed decapsulation key.
+	 * - Refer `rte_crypto_ml_kem_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-KEM KEYVER op
+ *
+ * Parameters for PQC ML-KEM key verification operation
+ */
+struct rte_crypto_ml_kem_keyver_op {
+	enum rte_crypto_ml_kem_op_type op;
+	/**<
+	 * Op associated with key to be verified is one of below:
+	 * - Encapsulation op
+	 * - Decapsulation op
+	 */
+
+	rte_crypto_param key;
+	/**<
+	 * KEM key to check.
+	 * - ek in case of encapsulation op.
+	 * - dk in case of decapsulation op.
+	 */
+};
+
+/**
+ * PQC ML-KEM ENCAP op
+ *
+ * Parameters for PQC ML-KEM encapsulation operation
+ */
+struct rte_crypto_ml_kem_encap_op {
+	rte_crypto_param message;
+	/**< The message (of 32 bytes in length) for randomness.*/
+
+	rte_crypto_param ek;
+	/**< The encapsulation key.*/
+
+	rte_crypto_param cipher;
+	/**<
+	 * Pointer to output data
+	 * - The computed cipher.
+	 * - Refer `rte_crypto_ml_kem_cipher_size` for size of buffer.
+	 */
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM DECAP op
+ *
+ * Parameters for PQC ML-KEM decapsulation operation
+ */
+struct rte_crypto_ml_kem_decap_op {
+	rte_crypto_param cipher;
+	/**< The cipher to be decapsulated.*/
+
+	rte_crypto_param dk;
+	/**< The decapsulation key.*/
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM op
+ *
+ * Parameters for PQC ML-KEM operation
+ */
+struct rte_crypto_ml_kem_op {
+	enum rte_crypto_ml_kem_op_type op;
+	union {
+		struct rte_crypto_ml_kem_keygen_op keygen;
+		struct rte_crypto_ml_kem_keyver_op keyver;
+		struct rte_crypto_ml_kem_encap_op encap;
+		struct rte_crypto_ml_kem_decap_op decap;
+	};
+};
+
+/**
+ * PQC ML-DSA algorithms
+ *
+ * List of ML-DSA algorithms used in PQC
+ */
+enum rte_crypto_ml_dsa_param_set {
+	RTE_CRYPTO_ML_DSA_PARAM_NONE,
+	RTE_CRYPTO_ML_DSA_PARAM_44,
+	RTE_CRYPTO_ML_DSA_PARAM_65,
+	RTE_CRYPTO_ML_DSA_PARAM_87,
+};
+
+/**
+ * PQC ML-DSA op types
+ *
+ * List of ML-DSA op types in PQC
+ */
+enum rte_crypto_ml_dsa_op_type {
+	RTE_CRYPTO_ML_DSA_OP_KEYGEN,
+	RTE_CRYPTO_ML_DSA_OP_SIGN,
+	RTE_CRYPTO_ML_DSA_OP_VERIFY,
+	RTE_CRYPTO_ML_DSA_OP_END
+};
+
+/**
+ * PQC ML-DSA transform data
+ *
+ * Structure describing ML-DSA xform params
+ */
+struct rte_crypto_ml_dsa_xform {
+	enum rte_crypto_ml_dsa_param_set param;
+
+	bool sign_deterministic;
+	/**< The signature generated using deterministic method. */
+
+	bool sign_prehash;
+	/**< The signature generated using prehash or pure routine. */
+};
+
+/**
+ * PQC ML-DSA KEYGEN op
+ *
+ * Parameters for PQC ML-DSA key generation operation
+ */
+struct rte_crypto_ml_dsa_keygen_op {
+	rte_crypto_param seed;
+	/**< The random seed (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param pubkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed public key.
+	 * - Refer `rte_crypto_ml_dsa_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param privkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed secret key.
+	 * - Refer `rte_crypto_ml_dsa_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGGEN op
+ *
+ * Parameters for PQC ML-DSA sign operation
+ */
+struct rte_crypto_ml_dsa_siggen_op {
+	rte_crypto_param message;
+	/**< The message to generate signature.*/
+
+	rte_crypto_param mu;
+	/**< The mu to generate signature.*/
+
+	rte_crypto_param privkey;
+	/**< The secret key to generate signature.*/
+
+	rte_crypto_param seed;
+	/**< The seed to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+
+	rte_crypto_param sign;
+	/**<
+	 * Pointer to output data
+	 * - The computed signature.
+	 * - Refer `rte_crypto_ml_dsa_sign_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGVER op
+ *
+ * Parameters for PQC ML-DSA verify operation
+ */
+struct rte_crypto_ml_dsa_sigver_op {
+	rte_crypto_param pubkey;
+	/**< The public key to verify signature.*/
+
+	rte_crypto_param message;
+	/**< The message used to verify signature.*/
+
+	rte_crypto_param sign;
+	/**< The signature to verify.*/
+
+	rte_crypto_param mu;
+	/**< The mu used to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+};
+
+/**
+ * PQC ML-DSA op
+ *
+ * Parameters for PQC ML-DSA operation
+ */
+struct rte_crypto_ml_dsa_op {
+	enum rte_crypto_ml_dsa_op_type op;
+	union {
+		struct rte_crypto_ml_dsa_keygen_op keygen;
+		struct rte_crypto_ml_dsa_siggen_op siggen;
+		struct rte_crypto_ml_dsa_sigver_op sigver;
+	};
+};
+
 /**
  * Asymmetric crypto transform data
  *
@@ -751,6 +1049,12 @@ struct rte_crypto_asym_xform {
 		/**< EC xform parameters, used by elliptic curve based
 		 * operations.
 		 */
+
+		struct rte_crypto_ml_kem_xform mlkem;
+		/**< PQC ML-KEM xform parameters */
+
+		struct rte_crypto_ml_dsa_xform mldsa;
+		/**< PQC ML-DSA xform parameters */
 	};
 };
 
@@ -778,6 +1082,8 @@ struct rte_crypto_asym_op {
 		struct rte_crypto_ecpm_op_param ecpm;
 		struct rte_crypto_sm2_op_param sm2;
 		struct rte_crypto_eddsa_op_param eddsa;
+		struct rte_crypto_ml_kem_op mlkem;
+		struct rte_crypto_ml_dsa_op mldsa;
 	};
 	uint16_t flags;
 	/**<
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..fd40c8a64c 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -229,6 +229,66 @@ const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY] = "pub_ec_key_verify"
 };
 
+/**
+ * Public key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_pubkey_size)
+const uint16_t rte_crypto_ml_kem_pubkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 800,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1184,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Private key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_privkey_size)
+const uint16_t rte_crypto_ml_kem_privkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 1632,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 2400,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 3168,
+};
+
+/**
+ * Cipher size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_cipher_size)
+const uint16_t rte_crypto_ml_kem_cipher_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 768,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1088,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Public key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_pubkey_size)
+const uint16_t rte_crypto_ml_dsa_pubkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 1312,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 1952,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 2592,
+};
+
+/**
+ * Private key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_privkey_size)
+const uint16_t rte_crypto_ml_dsa_privkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2560,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 4032,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4896,
+};
+
+/**
+ * Sign size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_sign_size)
+const uint16_t rte_crypto_ml_dsa_sign_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2420,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 3309,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4627,
+};
+
 struct rte_cryptodev_sym_session_pool_private_data {
 	uint16_t sess_data_sz;
 	/**< driver session data size */
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index eaf0e50d37..37a6a5e49b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -167,10 +167,13 @@ struct rte_cryptodev_asymmetric_xform_capability {
 	uint32_t op_types;
 	/**<
 	 * Bitmask for supported rte_crypto_asym_op_type or
+	 * rte_crypto_ml_kem_op_type or rte_crypto_ml_dsa_op_type or
 	 * rte_crypto_asym_ke_type. Which enum is used is determined
 	 * by the rte_crypto_asym_xform_type. For key exchange algorithms
-	 * like Diffie-Hellman it is rte_crypto_asym_ke_type, for others
-	 * it is rte_crypto_asym_op_type.
+	 * like Diffie-Hellman it is rte_crypto_asym_ke_type,
+	 * for ML-KEM algorithms it is rte_crypto_ml_kem_op_type,
+	 * for ML-DSA algorithms it is rte_crypto_ml_dsa_op_type,
+	 * or others it is rte_crypto_asym_op_type.
 	 */
 
 	__extension__
@@ -188,6 +191,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
 
 		uint32_t op_capa[RTE_CRYPTO_ASYM_OP_LIST_END];
 		/**< Operation specific capabilities. */
+
+		uint32_t mlkem_capa[RTE_CRYPTO_ML_KEM_OP_END];
+		/**< Bitmask of supported ML-KEM parameter sets. */
+
+		uint32_t mldsa_capa[RTE_CRYPTO_ML_DSA_OP_END];
+		/**< Bitmask of supported ML-DSA parameter sets. */
 	};
 
 	uint64_t hash_algos;
@@ -577,6 +586,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support inner checksum computation/verification */
 #define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
 /**< Support Rx injection after security processing */
+#define RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH		(1ULL << 29)
+/**< Support Pre Hash ML-DSA Signature Generation */
 
 /**
  * Get the name of a crypto device feature flag
-- 
2.37.1
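
For reference, a rough usage sketch for the new ML-DSA sign op (illustrative only:
session setup is omitted, the caller provides the message/key/signature buffers,
the optional mu/seed/ctx inputs are left unset, and rte_crypto_param is assumed to
carry the usual data/length pair):

#include <rte_crypto_asym.h>

static void
fill_mldsa_sign(struct rte_crypto_asym_op *asym_op,
		uint8_t *msg, size_t msg_len,
		uint8_t *privkey, uint8_t *sig_buf)
{
	const enum rte_crypto_ml_dsa_param_set p = RTE_CRYPTO_ML_DSA_PARAM_65;

	asym_op->mldsa.op = RTE_CRYPTO_ML_DSA_OP_SIGN;
	asym_op->mldsa.siggen.message.data = msg;
	asym_op->mldsa.siggen.message.length = msg_len;
	asym_op->mldsa.siggen.privkey.data = privkey;
	asym_op->mldsa.siggen.privkey.length = rte_crypto_ml_dsa_privkey_size[p];
	asym_op->mldsa.siggen.sign.data = sig_buf;	/* output buffer */
	asym_op->mldsa.siggen.sign.length = rte_crypto_ml_dsa_sign_size[p];
}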


^ permalink raw reply	[relevance 3%]

* [PATCH v3 1/3] cryptodev: support PQC ML algorithms
  @ 2025-10-01 17:56  3%       ` Gowrishankar Muthukrishnan
  2025-10-03 14:24  0%         ` Akhil Goyal
    1 sibling, 1 reply; 77+ results
From: Gowrishankar Muthukrishnan @ 2025-10-01 17:56 UTC (permalink / raw)
  To: dev, Akhil Goyal, Fan Zhang, Kai Ji; +Cc: anoobj, Gowrishankar Muthukrishnan

Add support for PQC ML-KEM and ML-DSA algorithms.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |   2 +
 doc/guides/prog_guide/cryptodev_lib.rst    |   3 +-
 doc/guides/rel_notes/release_25_11.rst     |  11 +
 lib/cryptodev/rte_crypto_asym.h            | 306 +++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c              |  60 ++++
 lib/cryptodev/rte_cryptodev.h              |  15 +-
 6 files changed, 394 insertions(+), 3 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 116ffce249..64198f013a 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -134,6 +134,8 @@ ECPM                    =
 ECDH                    =
 SM2                     =
 EdDSA                   =
+ML-DSA                  =
+ML-KEM                  =
 
 ;
 ; Supported Operating systems of a default crypto driver.
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b54efcb74e..f0ee44eb54 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -928,7 +928,8 @@ Asymmetric Cryptography
 The cryptodev library currently provides support for the following asymmetric
 Crypto operations; RSA, Modular exponentiation and inversion, Diffie-Hellman and
 Elliptic Curve Diffie-Hellman public and/or private key generation and shared
-secret compute, DSA and EdDSA signature generation and verification.
+secret compute, DSA and EdDSA signature generation and verification,
+PQC ML-KEM and ML-DSA algorithms.
 
 Session and Session Management
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..9d47f762d7 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -76,6 +76,14 @@ New Features
   * Added multi-process per port.
   * Optimized code.
 
+* **Added PQC ML-KEM and ML-DSA support.**
+
+  * Added PQC ML-KEM support with reference to FIPS203.
+  * Added PQC ML-DSA support with reference to FIPS204.
+
+* **Updated openssl crypto driver.**
+
+  * Added support for PQC ML-KEM and ML-DSA algorithms.
 
 Removed Items
 -------------
@@ -138,6 +146,9 @@ ABI Changes
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
+* cryptodev: The enum ``rte_crypto_asym_xform_type``, struct ``rte_crypto_asym_xform``
+  and struct ``rte_crypto_asym_op`` are updated to include new values to support
+  ML-KEM and ML-DSA.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..14a0e57467 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -37,6 +37,20 @@ rte_crypto_asym_ke_strings[];
 extern const char *
 rte_crypto_asym_op_strings[];
 
+/** PQC ML crypto op parameters size */
+extern const uint16_t
+rte_crypto_ml_kem_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_cipher_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_sign_size[];
+
 #ifdef __cplusplus
 }
 #endif
@@ -144,6 +158,14 @@ enum rte_crypto_asym_xform_type {
 	/**< Edwards Curve Digital Signature Algorithm
 	 * Perform Signature Generation and Verification.
 	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_KEM,
+	/**< Module Lattice based Key Encapsulation Mechanism
+	 * Performs Key Pair Generation, Encapsulation and Decapsulation.
+	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_DSA
+	/**< Module Lattice based Digital Signature Algorithm
+	 * Performs Key Pair Generation, Signature Generation and Verification.
+	 */
 };
 
 /**
@@ -720,6 +742,282 @@ struct rte_crypto_sm2_op_param {
 	 */
 };
 
+/**
+ * PQC ML-KEM algorithms
+ *
+ * List of ML-KEM algorithms used in PQC
+ */
+enum rte_crypto_ml_kem_param_set {
+	RTE_CRYPTO_ML_KEM_PARAM_NONE,
+	RTE_CRYPTO_ML_KEM_PARAM_512,
+	RTE_CRYPTO_ML_KEM_PARAM_768,
+	RTE_CRYPTO_ML_KEM_PARAM_1024,
+};
+
+/**
+ * PQC ML-KEM op types
+ *
+ * List of ML-KEM op types in PQC
+ */
+enum rte_crypto_ml_kem_op_type {
+	RTE_CRYPTO_ML_KEM_OP_KEYGEN,
+	RTE_CRYPTO_ML_KEM_OP_KEYVER,
+	RTE_CRYPTO_ML_KEM_OP_ENCAP,
+	RTE_CRYPTO_ML_KEM_OP_DECAP,
+	RTE_CRYPTO_ML_KEM_OP_END
+};
+
+/**
+ * PQC ML-KEM transform data
+ *
+ * Structure describing ML-KEM xform params
+ */
+struct rte_crypto_ml_kem_xform {
+	enum rte_crypto_ml_kem_param_set param;
+};
+
+/**
+ * PQC ML-KEM KEYGEN op
+ *
+ * Parameters for PQC ML-KEM key generation operation
+ */
+struct rte_crypto_ml_kem_keygen_op {
+	rte_crypto_param d;
+	/**< The seed d value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param z;
+	/**< The seed z value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param ek;
+	/**<
+	 * Pointer to output data
+	 * - The computed encapsulation key.
+	 * - Refer `rte_crypto_ml_kem_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param dk;
+	/**<
+	 * Pointer to output data
+	 * - The computed decapsulation key.
+	 * - Refer `rte_crypto_ml_kem_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-KEM KEYVER op
+ *
+ * Parameters for PQC ML-KEM key verification operation
+ */
+struct rte_crypto_ml_kem_keyver_op {
+	enum rte_crypto_ml_kem_op_type op;
+	/**<
+	 * Op associated with key to be verified is one of below:
+	 * - Encapsulation op
+	 * - Decapsulation op
+	 */
+
+	rte_crypto_param key;
+	/**<
+	 * KEM key to check.
+	 * - ek in case of encapsulation op.
+	 * - dk in case of decapsulation op.
+	 */
+};
+
+/**
+ * PQC ML-KEM ENCAP op
+ *
+ * Parameters for PQC ML-KEM encapsulation operation
+ */
+struct rte_crypto_ml_kem_encap_op {
+	rte_crypto_param message;
+	/**< The message (of 32 bytes in length) for randomness.*/
+
+	rte_crypto_param ek;
+	/**< The encapsulation key.*/
+
+	rte_crypto_param cipher;
+	/**<
+	 * Pointer to output data
+	 * - The computed cipher.
+	 * - Refer `rte_crypto_ml_kem_cipher_size` for size of buffer.
+	 */
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM DECAP op
+ *
+ * Parameters for PQC ML-KEM decapsulation operation
+ */
+struct rte_crypto_ml_kem_decap_op {
+	rte_crypto_param cipher;
+	/**< The cipher to be decapsulated.*/
+
+	rte_crypto_param dk;
+	/**< The decapsulation key.*/
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM op
+ *
+ * Parameters for PQC ML-KEM operation
+ */
+struct rte_crypto_ml_kem_op {
+	enum rte_crypto_ml_kem_op_type op;
+	union {
+		struct rte_crypto_ml_kem_keygen_op keygen;
+		struct rte_crypto_ml_kem_keyver_op keyver;
+		struct rte_crypto_ml_kem_encap_op encap;
+		struct rte_crypto_ml_kem_decap_op decap;
+	};
+};
+
+/**
+ * PQC ML-DSA algorithms
+ *
+ * List of ML-DSA algorithms used in PQC
+ */
+enum rte_crypto_ml_dsa_param_set {
+	RTE_CRYPTO_ML_DSA_PARAM_NONE,
+	RTE_CRYPTO_ML_DSA_PARAM_44,
+	RTE_CRYPTO_ML_DSA_PARAM_65,
+	RTE_CRYPTO_ML_DSA_PARAM_87,
+};
+
+/**
+ * PQC ML-DSA op types
+ *
+ * List of ML-DSA op types in PQC
+ */
+enum rte_crypto_ml_dsa_op_type {
+	RTE_CRYPTO_ML_DSA_OP_KEYGEN,
+	RTE_CRYPTO_ML_DSA_OP_SIGN,
+	RTE_CRYPTO_ML_DSA_OP_VERIFY,
+	RTE_CRYPTO_ML_DSA_OP_END
+};
+
+/**
+ * PQC ML-DSA transform data
+ *
+ * Structure describing ML-DSA xform params
+ */
+struct rte_crypto_ml_dsa_xform {
+	enum rte_crypto_ml_dsa_param_set param;
+
+	bool sign_deterministic;
+	/**< The signature generated using deterministic method. */
+
+	bool sign_prehash;
+	/**< The signature generated using prehash or pure routine. */
+};
+
+/**
+ * PQC ML-DSA KEYGEN op
+ *
+ * Parameters for PQC ML-DSA key generation operation
+ */
+struct rte_crypto_ml_dsa_keygen_op {
+	rte_crypto_param seed;
+	/**< The random seed (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param pubkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed public key.
+	 * - Refer `rte_crypto_ml_dsa_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param privkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed secret key.
+	 * - Refer `rte_crypto_ml_dsa_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGGEN op
+ *
+ * Parameters for PQC ML-DSA sign operation
+ */
+struct rte_crypto_ml_dsa_siggen_op {
+	rte_crypto_param message;
+	/**< The message to generate signature.*/
+
+	rte_crypto_param mu;
+	/**< The mu to generate signature.*/
+
+	rte_crypto_param privkey;
+	/**< The secret key to generate signature.*/
+
+	rte_crypto_param seed;
+	/**< The seed to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+
+	rte_crypto_param sign;
+	/**<
+	 * Pointer to output data
+	 * - The computed signature.
+	 * - Refer `rte_crypto_ml_dsa_sign_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGVER op
+ *
+ * Parameters for PQC ML-DSA verify operation
+ */
+struct rte_crypto_ml_dsa_sigver_op {
+	rte_crypto_param pubkey;
+	/**< The public key to verify signature.*/
+
+	rte_crypto_param message;
+	/**< The message used to verify signature.*/
+
+	rte_crypto_param sign;
+	/**< The signature to verify.*/
+
+	rte_crypto_param mu;
+	/**< The mu used to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context string that was used when generating the signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function used to verify the signature. */
+};
+
+/**
+ * PQC ML-DSA op
+ *
+ * Parameters for PQC ML-DSA operation
+ */
+struct rte_crypto_ml_dsa_op {
+	enum rte_crypto_ml_dsa_op_type op;
+	union {
+		struct rte_crypto_ml_dsa_keygen_op keygen;
+		struct rte_crypto_ml_dsa_siggen_op siggen;
+		struct rte_crypto_ml_dsa_sigver_op sigver;
+	};
+};
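
(A similar sketch, again not taken from the patch, for signature generation; the output
buffer is sized from the exported table and the optional mu/seed/ctx/hash inputs are left
at their reset values.)

    #include <rte_crypto_asym.h>
    #include <rte_cryptodev.h>
    #include <rte_malloc.h>

    /* Sketch: prepare an ML-DSA-65 signature generation op (pure routine). */
    static struct rte_crypto_op *
    mldsa_sign_prepare(struct rte_mempool *op_pool, void *sess,
                    uint8_t *msg, size_t msg_len, uint8_t *privkey)
    {
            const enum rte_crypto_ml_dsa_param_set p = RTE_CRYPTO_ML_DSA_PARAM_65;
            struct rte_crypto_op *op;
            uint8_t *sig;

            op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
            if (op == NULL)
                    return NULL;
            sig = rte_malloc(NULL, rte_crypto_ml_dsa_sign_size[p], 0);
            if (sig == NULL) {
                    rte_crypto_op_free(op);
                    return NULL;
            }

            op->asym->mldsa.op = RTE_CRYPTO_ML_DSA_OP_SIGN;
            op->asym->mldsa.siggen.message.data = msg;
            op->asym->mldsa.siggen.message.length = msg_len;
            op->asym->mldsa.siggen.privkey.data = privkey;
            op->asym->mldsa.siggen.privkey.length = rte_crypto_ml_dsa_privkey_size[p];
            /* output signature buffer sized from the exported table */
            op->asym->mldsa.siggen.sign.data = sig;
            op->asym->mldsa.siggen.sign.length = rte_crypto_ml_dsa_sign_size[p];

            rte_crypto_op_attach_asym_session(op, sess);
            return op;
    }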
+
 /**
  * Asymmetric crypto transform data
  *
@@ -751,6 +1049,12 @@ struct rte_crypto_asym_xform {
 		/**< EC xform parameters, used by elliptic curve based
 		 * operations.
 		 */
+
+		struct rte_crypto_ml_kem_xform mlkem;
+		/**< PQC ML-KEM xform parameters */
+
+		struct rte_crypto_ml_dsa_xform mldsa;
+		/**< PQC ML-DSA xform parameters */
 	};
 };
 
@@ -778,6 +1082,8 @@ struct rte_crypto_asym_op {
 		struct rte_crypto_ecpm_op_param ecpm;
 		struct rte_crypto_sm2_op_param sm2;
 		struct rte_crypto_eddsa_op_param eddsa;
+		struct rte_crypto_ml_kem_op mlkem;
+		struct rte_crypto_ml_dsa_op mldsa;
 	};
 	uint16_t flags;
 	/**<
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..fd40c8a64c 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -229,6 +229,66 @@ const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY] = "pub_ec_key_verify"
 };
 
+/**
+ * Public key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_pubkey_size)
+const uint16_t rte_crypto_ml_kem_pubkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 800,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1184,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Private key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_privkey_size)
+const uint16_t rte_crypto_ml_kem_privkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 1632,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 2400,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 3168,
+};
+
+/**
+ * Cipher size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_cipher_size)
+const uint16_t rte_crypto_ml_kem_cipher_size[] = {
+	[RTE_CRYPTO_ML_KEM_PARAM_512] = 768,
+	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1088,
+	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
+};
+
+/**
+ * Public key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_pubkey_size)
+const uint16_t rte_crypto_ml_dsa_pubkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 1312,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 1952,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 2592,
+};
+
+/**
+ * Private key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_privkey_size)
+const uint16_t rte_crypto_ml_dsa_privkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2560,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 4032,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4896,
+};
+
+/**
+ * Sign size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_sign_size)
+const uint16_t rte_crypto_ml_dsa_sign_size[] = {
+	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2420,
+	[RTE_CRYPTO_ML_DSA_PARAM_65] = 3309,
+	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4627,
+};
+
 struct rte_cryptodev_sym_session_pool_private_data {
 	uint16_t sess_data_sz;
 	/**< driver session data size */
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index eaf0e50d37..37a6a5e49b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -167,10 +167,13 @@ struct rte_cryptodev_asymmetric_xform_capability {
 	uint32_t op_types;
 	/**<
 	 * Bitmask for supported rte_crypto_asym_op_type or
+	 * rte_crypto_ml_kem_op_type or rte_crypto_ml_dsa_op_type or
 	 * rte_crypto_asym_ke_type. Which enum is used is determined
 	 * by the rte_crypto_asym_xform_type. For key exchange algorithms
-	 * like Diffie-Hellman it is rte_crypto_asym_ke_type, for others
-	 * it is rte_crypto_asym_op_type.
+	 * like Diffie-Hellman it is rte_crypto_asym_ke_type,
+	 * for ML-KEM algorithms it is rte_crypto_ml_kem_op_type,
+	 * for ML-DSA algorithms it is rte_crypto_ml_dsa_op_type,
+	 * for others it is rte_crypto_asym_op_type.
 	 */
 
 	__extension__
@@ -188,6 +191,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
 
 		uint32_t op_capa[RTE_CRYPTO_ASYM_OP_LIST_END];
 		/**< Operation specific capabilities. */
+
+		uint32_t mlkem_capa[RTE_CRYPTO_ML_KEM_OP_END];
+		/**< Bitmask of supported ML-KEM parameter sets. */
+
+		uint32_t mldsa_capa[RTE_CRYPTO_ML_DSA_OP_END];
+		/**< Bitmask of supported ML-DSA parameter sets. */
 	};
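
(Sketch of a capability check, not from the patch: it assumes each bit of the per-op mask
maps to the matching parameter-set enum value, an encoding this hunk does not spell out.)

    #include <stdbool.h>
    #include <rte_bitops.h>
    #include <rte_cryptodev.h>

    /* Sketch: does this xform capability cover ML-KEM-768 encapsulation? */
    static bool
    mlkem_768_encap_supported(const struct rte_cryptodev_asymmetric_xform_capability *capa)
    {
            if (capa->xform_type != RTE_CRYPTO_ASYM_XFORM_ML_KEM)
                    return false;
            return (capa->mlkem_capa[RTE_CRYPTO_ML_KEM_OP_ENCAP] &
                    RTE_BIT32(RTE_CRYPTO_ML_KEM_PARAM_768)) != 0;
    }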
 
 	uint64_t hash_algos;
@@ -577,6 +586,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support inner checksum computation/verification */
 #define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
 /**< Support Rx injection after security processing */
+#define RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH		(1ULL << 29)
+/**< Support Pre Hash ML-DSA Signature Generation */
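
(The new flag would be queried through the existing device info path; a minimal sketch,
not part of the patch.)

    #include <stdbool.h>
    #include <rte_cryptodev.h>

    /* Sketch: check whether a device advertises pre-hash ML-DSA signing. */
    static bool
    dev_supports_mldsa_prehash(uint8_t dev_id)
    {
            struct rte_cryptodev_info info;

            rte_cryptodev_info_get(dev_id, &info);
            return (info.feature_flags & RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH) != 0;
    }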
 
 /**
  * Get the name of a crypto device feature flag
-- 
2.37.1


^ permalink raw reply	[relevance 3%]

* [PATCH v2 1/2] doc: remove unused anchors
  @ 2025-10-02 11:32  7% ` David Marchand
  0 siblings, 0 replies; 77+ results
From: David Marchand @ 2025-10-02 11:32 UTC (permalink / raw)
  To: dev
  Cc: Kai Ji, Julien Aube, John Daley, Hyong Youb Kim,
	Bruce Richardson, Anatoly Burakov, Wenbo Cao, Maxime Coquelin,
	Chenbo Xia, Jochen Behrens, Chengwen Feng, Kevin Laatz,
	Byron Marohn, Yipeng Wang, Tyler Retzlaff, Cristian Dumitrescu,
	Abhinandan Gujjar, Amit Prakash Shukla, Jerin Jacob,
	Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Sameh Gobriel,
	Srikanth Yalavarthi, Anoob Joseph, Volodymyr Fialko,
	Honnappa Nagarahalli, Konstantin Ananyev, David Hunt,
	Sivaprasad Tummala, Luca Vizzarro, Patrick Robb,
	Sunil Kumar Kori, Rakesh Kudurumalla

The documentation has unused anchors that were either left behind after
a documentation refactoring, or just unused since day 1.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/contributing/abi_policy.rst            |  2 --
 doc/guides/contributing/cheatsheet.rst            |  1 -
 doc/guides/contributing/patches.rst               |  1 -
 doc/guides/cryptodevs/qat.rst                     |  1 -
 doc/guides/howto/flow_bifurcation.rst             |  1 -
 doc/guides/howto/lm_bond_virtio_sriov.rst         |  1 -
 doc/guides/howto/lm_virtio_vhost_user.rst         |  1 -
 doc/guides/howto/pvp_reference_benchmark.rst      |  1 -
 doc/guides/linux_gsg/build_dpdk.rst               |  1 -
 doc/guides/nics/bnx2x.rst                         |  1 -
 doc/guides/nics/enic.rst                          |  4 ----
 doc/guides/nics/ice.rst                           |  1 -
 doc/guides/nics/rnp.rst                           |  1 -
 doc/guides/nics/virtio.rst                        |  1 -
 doc/guides/nics/vmxnet3.rst                       |  3 ---
 doc/guides/prog_guide/dmadev.rst                  |  1 -
 doc/guides/prog_guide/efd_lib.rst                 |  4 ----
 doc/guides/prog_guide/env_abstraction_layer.rst   |  2 --
 doc/guides/prog_guide/ethdev/qos_framework.rst    |  4 ----
 .../prog_guide/eventdev/event_crypto_adapter.rst  |  2 --
 .../prog_guide/eventdev/event_dma_adapter.rst     |  2 --
 doc/guides/prog_guide/eventdev/eventdev.rst       |  1 -
 doc/guides/prog_guide/graph_lib.rst               |  4 ----
 doc/guides/prog_guide/member_lib.rst              |  7 -------
 doc/guides/prog_guide/mldev.rst                   |  1 -
 doc/guides/prog_guide/multi_proc_support.rst      |  1 -
 doc/guides/prog_guide/overview.rst                |  1 -
 doc/guides/prog_guide/packet_framework.rst        |  1 -
 doc/guides/prog_guide/pdcp_lib.rst                |  1 -
 doc/guides/prog_guide/ring_lib.rst                | 15 ---------------
 doc/guides/rel_notes/release_20_02.rst            |  1 -
 doc/guides/sample_app_ug/dist_app.rst             |  1 -
 doc/guides/sample_app_ug/l2_forward_crypto.rst    |  1 -
 doc/guides/sample_app_ug/l3_forward.rst           |  1 -
 doc/guides/sample_app_ug/multi_process.rst        |  2 --
 doc/guides/sample_app_ug/ptpclient.rst            |  1 -
 doc/guides/sample_app_ug/qos_scheduler.rst        |  1 -
 doc/guides/sample_app_ug/test_pipeline.rst        |  1 -
 doc/guides/sample_app_ug/vm_power_management.rst  |  2 --
 doc/guides/tools/dts.rst                          |  1 -
 doc/guides/tools/graph.rst                        |  2 --
 doc/guides/tools/testeventdev.rst                 | 10 ----------
 doc/guides/tools/testmldev.rst                    |  6 ------
 43 files changed, 98 deletions(-)

diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index f03a7467ac..8288235921 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -53,7 +53,6 @@ Therefore, in the case of dynamic linking, it is critical that an ABI is
 preserved, or (when modified), done in such a way that the application is unable
 to behave improperly or in an unexpected fashion.
 
-.. _figure_what_is_an_abi:
 
 .. figure:: img/what_is_an_abi.*
 
@@ -104,7 +103,6 @@ An ABI version is supported in all new releases until the next major ABI version
 is declared. When changing the major ABI version, the release notes will detail
 all ABI changes.
 
-.. _figure_abi_stability_policy:
 
 .. figure:: img/abi_stability_policy.*
 
diff --git a/doc/guides/contributing/cheatsheet.rst b/doc/guides/contributing/cheatsheet.rst
index 0debd118d7..4b353d2d01 100644
--- a/doc/guides/contributing/cheatsheet.rst
+++ b/doc/guides/contributing/cheatsheet.rst
@@ -4,7 +4,6 @@
 Patch Cheatsheet
 ================
 
-.. _figure_patch_cheatsheet:
 
 .. figure:: img/patch_cheatsheet.*
 
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 069a18e4ec..663881a59b 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -452,7 +452,6 @@ For example::
      since 802.1AS can be supported through the same interfaces.
 
 
-.. _contrib_checkpatch:
 
 Checking the Patches
 --------------------
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 68d792e4cc..d1c71ce89f 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -227,7 +227,6 @@ Configuring and Building the DPDK QAT PMDs
 Further information on configuring, building and installing DPDK is described
 :doc:`here <../linux_gsg/build_dpdk>`.
 
-.. _building_qat_config:
 
 Build Configuration
 ~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/howto/flow_bifurcation.rst b/doc/guides/howto/flow_bifurcation.rst
index 5d2127bc31..3a3a779ad0 100644
--- a/doc/guides/howto/flow_bifurcation.rst
+++ b/doc/guides/howto/flow_bifurcation.rst
@@ -36,7 +36,6 @@ the kernel driver while a DPDK application can receive specific traffic
 bypassing the Linux kernel by using drivers like VFIO or the DPDK ``igb_uio``
 module.
 
-.. _figure_flow_bifurcation_overview:
 
 .. figure:: img/flow_bifurcation_overview.*
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index 1d46ebb27f..7fd54e8d91 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -40,7 +40,6 @@ The ip address of host_server_1 is 10.237.212.46
 
 The ip address of host_server_2 is 10.237.212.131
 
-.. _figure_lm_bond_virtio_sriov:
 
 .. figure:: img/lm_bond_virtio_sriov.*
 
diff --git a/doc/guides/howto/lm_virtio_vhost_user.rst b/doc/guides/howto/lm_virtio_vhost_user.rst
index 94ab71d653..63cfb84bf0 100644
--- a/doc/guides/howto/lm_virtio_vhost_user.rst
+++ b/doc/guides/howto/lm_virtio_vhost_user.rst
@@ -32,7 +32,6 @@ The ip address of host_server_1 is 10.237.212.46
 
 The ip address of host_server_2 is 10.237.212.131
 
-.. _figure_lm_vhost_user:
 
 .. figure:: img/lm_vhost_user.*
 
diff --git a/doc/guides/howto/pvp_reference_benchmark.rst b/doc/guides/howto/pvp_reference_benchmark.rst
index bec97b8675..6d2616e404 100644
--- a/doc/guides/howto/pvp_reference_benchmark.rst
+++ b/doc/guides/howto/pvp_reference_benchmark.rst
@@ -20,7 +20,6 @@ v16.11 using RHEL7 for both host and guest.
 Setup overview
 --------------
 
-.. _figure_pvp_2nics:
 
 .. figure:: img/pvp_2nics.*
 
diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index 2a983412dd..8d2b1708b8 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -82,7 +82,6 @@ and the last step causing the dynamic loader `ld.so` to update its cache to take
    distributions, `/usr/local/lib` and `/usr/local/lib64` should be added
    to a file in `/etc/ld.so.conf.d/` before running `ldconfig`.
 
-.. _adjusting_build_options:
 
 Adjusting Build Options
 ~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/bnx2x.rst b/doc/guides/nics/bnx2x.rst
index c24d32b9ab..fad62d2d52 100644
--- a/doc/guides/nics/bnx2x.rst
+++ b/doc/guides/nics/bnx2x.rst
@@ -99,7 +99,6 @@ enabling debugging options may affect system performance.
 
   Toggle display of register reads and writes.
 
-.. _bnx2x_driver-compilation:
 
 Driver compilation and testing
 ------------------------------
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index a400bbc4f7..77578b4913 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -219,7 +219,6 @@ There are two known limitations of the current SR-IOV implementation.
    and assign them to VMs as passthrough devices.
 
 
-.. _enic-generic-flow-api:
 
 Generic Flow API support
 ------------------------
@@ -279,8 +278,6 @@ the (stripped) VLAN header whether stripping is enabled or disabled.
 More features may be added in future firmware and new versions of the VIC.
 Please refer to the release notes.
 
-.. _overlay_offload:
-
 Overlay Offload
 ---------------
 
@@ -429,7 +426,6 @@ To verify the selected entry size, enable debug logging
     PMD: rte_enic_pmd: Supported CQ entry sizes: 16 32
     PMD: rte_enic_pmd: Using 16B CQ entry size
 
-.. _enic_limitations:
 
 Limitations
 -----------
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 7e9ba23102..7056d9709f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -557,7 +557,6 @@ Additional Options
 
     -a 18:01.0,cap=dcf,acl=off
 
-.. _figure_ice_dcf:
 
 .. figure:: img/ice_dcf.*
 
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 706cd04fa7..c4504e26f2 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -32,7 +32,6 @@ Chip Basic Overview
 N10 has two functions, each function support multiple ports (1 to 8),
 which is different of normal PCIe network card (one PF for each port).
 
-.. _figure_mucse_nic:
 
 .. figure:: img/mucse_nic_port.*
 
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index a7642d96ce..588ac41464 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -93,7 +93,6 @@ The following prerequisites apply:
 Virtio with qemu virtio Back End
 --------------------------------
 
-.. _figure_host_vm_comms_qemu:
 
 .. figure:: img/host_vm_comms_qemu.*
 
diff --git a/doc/guides/nics/vmxnet3.rst b/doc/guides/nics/vmxnet3.rst
index 3f498b905d..b3de27c36c 100644
--- a/doc/guides/nics/vmxnet3.rst
+++ b/doc/guides/nics/vmxnet3.rst
@@ -110,7 +110,6 @@ The following prerequisites apply:
 *   Before starting a VM, a VMXNET3 interface to a VM through VMware vSphere Client must be assigned.
     This is shown in the figure below.
 
-.. _figure_vmxnet3_int:
 
 .. figure:: img/vmxnet3_int.*
 
@@ -135,7 +134,6 @@ VMXNET3 with a Native NIC Connected to a vSwitch
 
 This section describes an example setup for Phy-vSwitch-VM-Phy communication.
 
-.. _figure_vswitch_vm:
 
 .. figure:: img/vswitch_vm.*
 
@@ -162,7 +160,6 @@ VMXNET3 Chaining VMs Connected to a vSwitch
 
 The following figure shows an example VM-to-VM communication over a Phy-VM-vSwitch-VM-Phy communication channel.
 
-.. _figure_vm_vm_comms:
 
 .. figure:: img/vm_vm_comms.*
 
diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst
index 67a62ff420..6860515292 100644
--- a/doc/guides/prog_guide/dmadev.rst
+++ b/doc/guides/prog_guide/dmadev.rst
@@ -17,7 +17,6 @@ physical (hardware) and virtual (software) DMA devices, as well as a generic DMA
 API which allows DMA devices to be managed and configured, and supports DMA
 operations to be provisioned on DMA poll mode driver.
 
-.. _figure_dmadev:
 
 .. figure:: img/dmadev.*
 
diff --git a/doc/guides/prog_guide/efd_lib.rst b/doc/guides/prog_guide/efd_lib.rst
index 68404d5f33..f91fd1c80a 100644
--- a/doc/guides/prog_guide/efd_lib.rst
+++ b/doc/guides/prog_guide/efd_lib.rst
@@ -155,7 +155,6 @@ In summary, EFD is a set separation data structure that supports millions of
 keys. It is used to distribute a given key to an intended target. By itself
 EFD is not a FIB data structure with an exact match the input flow key.
 
-.. _Efd_example:
 
 Example of EFD Library Usage
 ----------------------------
@@ -199,7 +198,6 @@ the flows served at each node is used and is
 exact matched with the input key to rule out new never seen before
 flows.
 
-.. _Efd_api:
 
 Library API Overview
 --------------------
@@ -281,7 +279,6 @@ in the prev_value argument.
    This function is not multi-thread safe and should only be called
    from one thread.
 
-.. _Efd_internals:
 
 Library Internals
 -----------------
@@ -414,7 +411,6 @@ balanced key distribution across these four is selected the mapping result
 is stored in these two bits.
 
 
-.. _Efd_references:
 
 References
 -----------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index d716895c1d..ce97d8551f 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -52,7 +52,6 @@ A check is also performed at initialization time to ensure that the micro archit
 Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
 It consist of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
 
-.. _figure_linux_launch:
 
 .. figure:: img/linuxapp_launch.*
 
@@ -1039,7 +1038,6 @@ The key fields of the heap structure and their function are described below
 
 *   last - this points to the last element in the heap.
 
-.. _figure_malloc_heap:
 
 .. figure:: img/malloc_heap.*
 
diff --git a/doc/guides/prog_guide/ethdev/qos_framework.rst b/doc/guides/prog_guide/ethdev/qos_framework.rst
index 1144037dfa..9d26e0478a 100644
--- a/doc/guides/prog_guide/ethdev/qos_framework.rst
+++ b/doc/guides/prog_guide/ethdev/qos_framework.rst
@@ -11,7 +11,6 @@ Packet Pipeline with QoS Support
 
 An example of a complex packet processing pipeline with QoS support is shown in the following figure.
 
-.. _figure_pkt_proc_pipeline_qos:
 
 .. figure:: ../img/pkt_proc_pipeline_qos.*
 
@@ -112,7 +111,6 @@ It typically acts like a buffer that is able to temporarily store a large number
 as the NIC TX is requesting more packets for transmission,
 these packets are later on removed and handed over to the NIC TX with the packet selection logic observing the predefined SLAs (dequeue operation).
 
-.. _figure_hier_sched_blk:
 
 .. figure:: ../img/hier_sched_blk.*
 
@@ -269,7 +267,6 @@ Internal Data Structures per Port
 
 A schematic of the internal data structures in shown in with details in.
 
-.. _figure_data_struct_per_port:
 
 .. figure:: ../img/data_struct_per_port.*
 
@@ -452,7 +449,6 @@ The dequeue pipe state machine exploits the data presence into the processor cac
 therefore it tries to send as many packets from the same pipe TC and pipe as possible (up to the available packets and credits) before
 moving to the next active TC from the same pipe (if any) or to another active pipe.
 
-.. _figure_pipe_prefetch_sm:
 
 .. figure:: ../img/pipe_prefetch_sm.*
 
diff --git a/doc/guides/prog_guide/eventdev/event_crypto_adapter.rst b/doc/guides/prog_guide/eventdev/event_crypto_adapter.rst
index e2481904b1..568280c0ee 100644
--- a/doc/guides/prog_guide/eventdev/event_crypto_adapter.rst
+++ b/doc/guides/prog_guide/eventdev/event_crypto_adapter.rst
@@ -45,7 +45,6 @@ In this mode, events dequeued from the adapter will be treated as new events.
 The application needs to specify event information (response information)
 which is needed to enqueue an event after the crypto operation is completed.
 
-.. _figure_event_crypto_adapter_op_new:
 
 .. figure:: ../img/event_crypto_adapter_op_new.*
 
@@ -72,7 +71,6 @@ to enqueue a crypto operation in addition to the event information (response
 information) needed to enqueue an event after the crypto operation has
 completed.
 
-.. _figure_event_crypto_adapter_op_forward:
 
 .. figure:: ../img/event_crypto_adapter_op_forward.*
 
diff --git a/doc/guides/prog_guide/eventdev/event_dma_adapter.rst b/doc/guides/prog_guide/eventdev/event_dma_adapter.rst
index e040d89e8b..2deda67c80 100644
--- a/doc/guides/prog_guide/eventdev/event_dma_adapter.rst
+++ b/doc/guides/prog_guide/eventdev/event_dma_adapter.rst
@@ -45,7 +45,6 @@ In this mode, events dequeued from the adapter are treated as new events.
 The application has to specify event information (response information)
 which is needed to enqueue an event after the DMA operation is completed.
 
-.. _figure_event_dma_adapter_op_new:
 
 .. figure:: ../img/event_dma_adapter_op_new.*
 
@@ -75,7 +74,6 @@ In this mode, events dequeued from the adapter will be treated as forwarded even
 Application has to specify event information (response information)
 needed to enqueue the event after the DMA operation has completed.
 
-.. _figure_event_dma_adapter_op_forward:
 
 .. figure:: ../img/event_dma_adapter_op_forward.*
 
diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst
index 5e49db8983..82d0124480 100644
--- a/doc/guides/prog_guide/eventdev/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev/eventdev.rst
@@ -167,7 +167,6 @@ illustration, refer to Eventdev Adapter documentation for further details.
 The diagram below shows the final state of the application after this
 walk-through:
 
-.. _figure_eventdev-usage1:
 
 .. figure:: ../img/eventdev_usage.*
 
diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst
index 8409e7666e..1d9c747e06 100644
--- a/doc/guides/prog_guide/graph_lib.rst
+++ b/doc/guides/prog_guide/graph_lib.rst
@@ -58,7 +58,6 @@ Programming model
 Anatomy of Node:
 ~~~~~~~~~~~~~~~~
 
-.. _figure_anatomy_of_a_node:
 
 .. figure:: img/anatomy_of_a_node.*
 
@@ -146,7 +145,6 @@ Node creation and registration
 
 Link the Nodes to create the graph topology
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. _figure_link_the_nodes:
 
 .. figure:: img/link_the_nodes.*
 
@@ -387,7 +385,6 @@ Example of intermediate node implementation with home run:
 
 Graph object memory layout
 --------------------------
-.. _figure_graph_mem_layout:
 
 .. figure:: img/graph_mem_layout.*
 
@@ -931,7 +928,6 @@ Inbuilt Nodes
 DPDK provides a set of nodes for data processing.
 The following diagram depicts inbuilt nodes data flow.
 
-.. _figure_graph_inbuit_node_flow:
 
 .. figure:: img/graph_inbuilt_node_flow.*
 
diff --git a/doc/guides/prog_guide/member_lib.rst b/doc/guides/prog_guide/member_lib.rst
index d2f76de35c..d21cf8563c 100644
--- a/doc/guides/prog_guide/member_lib.rst
+++ b/doc/guides/prog_guide/member_lib.rst
@@ -39,7 +39,6 @@ reduce space requirement and significantly improve the performance of set
 membership queries at the cost of introducing a very small membership test error
 probability.
 
-.. _figure_membership1:
 .. figure:: img/member_i1.*
 
   Example Usages of Membership Library
@@ -109,7 +108,6 @@ Y is a member of the set with certain false positive probability. As shown in
 the next equation, the false positive probability can be made arbitrarily small
 by changing the number of hash functions (``k``) and the vector length (``m``).
 
-.. _figure_membership2:
 .. figure:: img/member_i2.*
 
   Bloom Filter False Positive Probability
@@ -121,7 +119,6 @@ small bit-vector, which can be easily optimized. Hence the lookup throughput
 (set membership test) can be significantly faster than a normal hash table
 lookup with element comparison.
 
-.. _figure_membership3:
 .. figure:: img/member_i3.*
 
   Detecting Routing Loops Using BF
@@ -135,7 +132,6 @@ if the BF indicates that the current node is definitely not in the set then a
 loop-free route is guaranteed.
 
 
-.. _figure_membership4:
 .. figure:: img/member_i4.*
 
   Vector Bloom Filter (vBF) Overview
@@ -149,7 +145,6 @@ them. The basic idea of vBF is shown in the above figure where an element is
 used to address multiple bloom filters concurrently and the bloom filter
 index(es) with a hit is returned.
 
-.. _figure_membership5:
 .. figure:: img/member_i5.*
 
   vBF for Flow Scheduling to Worker Thread
@@ -184,7 +179,6 @@ requires testing a series of Bloom Filters each corresponding to one set.
 As a result, generally speaking vBF is more adequate for the case of a small limited number of sets
 while HTSS should be used with a larger number of sets.
 
-.. _figure_membership6:
 .. figure:: img/member_i6.*
 
   Using HTSS for Attack Signature Matching
@@ -237,7 +231,6 @@ set-summary. It is worth noting that the set-summary still has false positive
 probability, which means the application either can tolerate certain false positive
 or it has fall-back path when false positive happens.
 
-.. _figure_membership7:
 .. figure:: img/member_i7.*
 
   Using HTSS with False Negatives for Wild Card Classification
diff --git a/doc/guides/prog_guide/mldev.rst b/doc/guides/prog_guide/mldev.rst
index 61661b998b..4887fd0caf 100644
--- a/doc/guides/prog_guide/mldev.rst
+++ b/doc/guides/prog_guide/mldev.rst
@@ -12,7 +12,6 @@ The ML model creation and training is outside of the scope of this library.
 
 The ML framework is built on the following model:
 
-.. _figure_mldev_work_flow:
 
 .. figure:: img/mldev_flow.*
 
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a73918a5da..2108832342 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -65,7 +65,6 @@ and point to the same objects, in both processes.
     ``--single-file-segments`` switch, secondary processes must be run with the
     same switch specified. Otherwise, memory corruption may occur.
 
-.. _figure_multi_process_memory:
 
 .. figure:: img/multi_process_memory.*
 
diff --git a/doc/guides/prog_guide/overview.rst b/doc/guides/prog_guide/overview.rst
index c70023e8a1..942576707f 100644
--- a/doc/guides/prog_guide/overview.rst
+++ b/doc/guides/prog_guide/overview.rst
@@ -86,7 +86,6 @@ Core Components
 The *core components* are a set of libraries that provide all the elements needed
 for high-performance packet processing applications.
 
-.. _figure_architecture-overview:
 
 .. figure:: img/architecture-overview.*
 
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index 17010b07dc..9de922444b 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -885,7 +885,6 @@ and detail the bucket search pipeline used to implement 8-byte and 16-byte key h
 either with pre-computed signature or "do-sig").
 For each pipeline stage, the described operations are applied to each of the two packets handled by that stage.
 
-.. _figure_figure39:
 
 .. figure:: img/figure39.*
 
diff --git a/doc/guides/prog_guide/pdcp_lib.rst b/doc/guides/prog_guide/pdcp_lib.rst
index 266abb8574..235e84aebc 100644
--- a/doc/guides/prog_guide/pdcp_lib.rst
+++ b/doc/guides/prog_guide/pdcp_lib.rst
@@ -21,7 +21,6 @@ PDCP would involve the following operations:
 #. Uplink data compression
 #. Ciphering and integrity protection
 
-.. _figure_pdcp_functional_overview:
 
 .. figure:: img/pdcp_functional_overview.*
 
diff --git a/doc/guides/prog_guide/ring_lib.rst b/doc/guides/prog_guide/ring_lib.rst
index 98ef003aac..a95ff4ab95 100644
--- a/doc/guides/prog_guide/ring_lib.rst
+++ b/doc/guides/prog_guide/ring_lib.rst
@@ -45,7 +45,6 @@ The disadvantages:
 
 A simplified representation of a Ring is shown in with consumer and producer head and tail pointers to objects stored in the data structure.
 
-.. _figure_ring1:
 
 .. figure:: img/ring1.*
 
@@ -113,7 +112,6 @@ The prod_next local variable points to the next element of the table, or several
 If there is not enough room in the ring (this is detected by checking cons_tail), it returns an error.
 
 
-.. _figure_ring-enqueue1:
 
 .. figure:: img/ring-enqueue1.*
 
@@ -128,7 +126,6 @@ The second step is to modify *ring->prod_head* in ring structure to point to the
 The added object is copied in the ring (obj4).
 
 
-.. _figure_ring-enqueue2:
 
 .. figure:: img/ring-enqueue2.*
 
@@ -142,7 +139,6 @@ Once the object is added in the ring, ring->prod_tail in the ring structure is m
 The enqueue operation is finished.
 
 
-.. _figure_ring-enqueue3:
 
 .. figure:: img/ring-enqueue3.*
 
@@ -166,7 +162,6 @@ The cons_next local variable points to the next element of the table, or several
 If there are not enough objects in the ring (this is detected by checking prod_tail), it returns an error.
 
 
-.. _figure_ring-dequeue1:
 
 .. figure:: img/ring-dequeue1.*
 
@@ -181,7 +176,6 @@ The second step is to modify ring->cons_head in the ring structure to point to t
 The dequeued object (obj1) is copied in the pointer given by the user.
 
 
-.. _figure_ring-dequeue2:
 
 .. figure:: img/ring-dequeue2.*
 
@@ -195,7 +189,6 @@ Finally, ring->cons_tail in the ring structure is modified to point to the same
 The dequeue operation is finished.
 
 
-.. _figure_ring-dequeue3:
 
 .. figure:: img/ring-dequeue3.*
 
@@ -220,7 +213,6 @@ or several elements after in the case of bulk enqueue.
 If there is not enough room in the ring (this is detected by checking cons_tail), it returns an error.
 
 
-.. _figure_ring-mp-enqueue1:
 
 .. figure:: img/ring-mp-enqueue1.*
 
@@ -242,7 +234,6 @@ This operation is done using a Compare And Swap (CAS) instruction, which does th
 In the figure, the operation succeeded on core 1, and step one restarted on core 2.
 
 
-.. _figure_ring-mp-enqueue2:
 
 .. figure:: img/ring-mp-enqueue2.*
 
@@ -257,7 +248,6 @@ The CAS operation is retried on core 2 with success.
 The core 1 updates one element of the ring(obj4), and the core 2 updates another one (obj5).
 
 
-.. _figure_ring-mp-enqueue3:
 
 .. figure:: img/ring-mp-enqueue3.*
 
@@ -272,7 +262,6 @@ A core can only update it if ring->prod_tail is equal to the prod_head local var
 This is only true on core 1. The operation is finished on core 1.
 
 
-.. _figure_ring-mp-enqueue4:
 
 .. figure:: img/ring-mp-enqueue4.*
 
@@ -286,7 +275,6 @@ Once ring->prod_tail is updated by core 1, core 2 is allowed to update it too.
 The operation is also finished on core 2.
 
 
-.. _figure_ring-mp-enqueue5:
 
 .. figure:: img/ring-mp-enqueue5.*
 
@@ -311,7 +299,6 @@ The following are two examples that help to explain how indexes are used in a ri
     as opposed to unsigned 32-bit integers in the more realistic case.
 
 
-.. _figure_ring-modulo1:
 
 .. figure:: img/ring-modulo1.*
 
@@ -321,7 +308,6 @@ The following are two examples that help to explain how indexes are used in a ri
 This ring contains 11000 entries.
 
 
-.. _figure_ring-modulo2:
 
 .. figure:: img/ring-modulo2.*
 
@@ -536,7 +522,6 @@ On that picture ``obj5`` and ``obj4`` elements are acquired by stage 0,
 ``obj2`` and ``obj3`` are acquired by stage 1,
 while ``obj1`` was already released by stage 1 and is ready to be consumed.
 
-.. _figure_soring1:
 
 .. figure:: img/soring-pic1.*
 
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 925985b4f8..c207381f3d 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -230,7 +230,6 @@ API Changes
 * No change in this release.
 
 
-.. _20_02_abi_changes:
 
 ABI Changes
 -----------
diff --git a/doc/guides/sample_app_ug/dist_app.rst b/doc/guides/sample_app_ug/dist_app.rst
index 30b4184d40..8fc260e5b8 100644
--- a/doc/guides/sample_app_ug/dist_app.rst
+++ b/doc/guides/sample_app_ug/dist_app.rst
@@ -22,7 +22,6 @@ into each other.
 This application can be used to benchmark performance using the traffic
 generator as shown in the figure below.
 
-.. _figure_dist_perf:
 
 .. figure:: img/dist_perf.*
 
diff --git a/doc/guides/sample_app_ug/l2_forward_crypto.rst b/doc/guides/sample_app_ug/l2_forward_crypto.rst
index ba38d9f22e..e4c3022763 100644
--- a/doc/guides/sample_app_ug/l2_forward_crypto.rst
+++ b/doc/guides/sample_app_ug/l2_forward_crypto.rst
@@ -193,7 +193,6 @@ on a packet received on an RX PORT before forwarding it to a TX PORT.
 The following figure illustrates a sample flow of a packet in the application,
 from reception until transmission.
 
-.. _figure_l2_fwd_encrypt_flow:
 
 .. figure:: img/l2_fwd_encrypt_flow.*
 
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 9b0d0350aa..71d5342f77 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -282,7 +282,6 @@ R<destination_ip><source_ip><destination_port><source_port><protocol><output_por
 
 *   A typical IPv4 ACL rule line should have a format as shown below:
 
-.. _figure_ipv4_acl_rule:
 
 .. figure:: img/ipv4_acl_rule.*
 
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index 1bd858bfb5..444a86eb67 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -127,7 +127,6 @@ The symmetric multi process example demonstrates how a set of processes can run
 with each process performing the same set of packet- processing operations.
 The following diagram shows the data-flow through the application, using two processes.
 
-.. _figure_sym_multi_proc_app:
 
 .. figure:: img/sym_multi_proc_app.*
 
@@ -208,7 +207,6 @@ by sending each packet out on a different network port.
 
 The following diagram shows the data-flow through the application, using two client processes.
 
-.. _figure_client_svr_sym_multi_proc_app:
 
 .. figure:: img/client_svr_sym_multi_proc_app.*
 
diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
index 0df465bcb4..87e82b8695 100644
--- a/doc/guides/sample_app_ug/ptpclient.rst
+++ b/doc/guides/sample_app_ug/ptpclient.rst
@@ -30,7 +30,6 @@ In order to keep the application simple the following assumptions are made:
 How the Application Works
 -------------------------
 
-.. _figure_ptpclient_highlevel:
 
 .. figure:: img/ptpclient.*
 
diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index cd33beecb0..be7e78cc71 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -11,7 +11,6 @@ Overview
 
 The architecture of the QoS scheduler application is shown in the following figure.
 
-.. _figure_qos_sched_app_arch:
 
 .. figure:: img/qos_sched_app_arch.*
 
diff --git a/doc/guides/sample_app_ug/test_pipeline.rst b/doc/guides/sample_app_ug/test_pipeline.rst
index 818be93cd6..24f1f870e9 100644
--- a/doc/guides/sample_app_ug/test_pipeline.rst
+++ b/doc/guides/sample_app_ug/test_pipeline.rst
@@ -22,7 +22,6 @@ The application uses three CPU cores:
 
 *   Core C ("TX core") receives traffic from core B through software queues and sends it to the NIC ports for transmission.
 
-.. _figure_test_pipeline_app:
 
 .. figure:: img/test_pipeline_app.*
 
diff --git a/doc/guides/sample_app_ug/vm_power_management.rst b/doc/guides/sample_app_ug/vm_power_management.rst
index 1955140bb3..62d70c053a 100644
--- a/doc/guides/sample_app_ug/vm_power_management.rst
+++ b/doc/guides/sample_app_ug/vm_power_management.rst
@@ -54,7 +54,6 @@ directs frequency changes and policies to the host monitor rather than
 the APCI ``cpufreq`` ``sysfs`` interface used on the host in non-virtualised
 environments.
 
-.. _figure_vm_power_mgr_highlevel:
 
 .. figure:: img/vm_power_mgr_highlevel.*
 
@@ -109,7 +108,6 @@ receiving a request, the host translates the vCPU to a pCPU using the
 libvirt API before forwarding it to the host ``librte_power``.
 
 
-.. _figure_vm_power_mgr_vm_request_seq:
 
 .. figure:: img/vm_power_mgr_vm_request_seq.*
 
diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 016dc5e374..0bb8da3e46 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -526,7 +526,6 @@ The output is generated in ``build/doc/api/dts/html``.
 
    Make sure to fix any Sphinx warnings when adding or updating docstrings.
 
-.. _configuration_example:
 
 Configuration Example
 ---------------------
diff --git a/doc/guides/tools/graph.rst b/doc/guides/tools/graph.rst
index 0ffd29e41f..062062f68d 100644
--- a/doc/guides/tools/graph.rst
+++ b/doc/guides/tools/graph.rst
@@ -365,13 +365,11 @@ This section mentions the created graph for each use case.
 l3fwd
 ~~~~~
 
-.. _figure_l3fwd_graph:
 
 .. figure:: img/graph-usecase-l3fwd.*
 
 l2fwd
 ~~~~~
 
-.. _figure_l2fwd_graph:
 
 .. figure:: img/graph-usecase-l2fwd.*
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index cd367eb2a2..526a7b12c7 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -282,7 +282,6 @@ This is a functional test case that aims at testing the following:
    |   |              |                | port n                 |
    +---+--------------+----------------+------------------------+
 
-.. _figure_eventdev_order_queue_test:
 
 .. figure:: img/eventdev_order_queue_test.*
 
@@ -365,7 +364,6 @@ but differs in two critical ways:
    |   |              |                | port n.                   |
    +---+--------------+----------------+---------------------------+
 
-.. _figure_eventdev_atomic_queue_test:
 
 .. figure:: img/eventdev_atomic_queue_test.*
 
@@ -455,7 +453,6 @@ instead of two different queues for ordered and atomic.
    |   |              |                | port n.                |
    +---+--------------+----------------+------------------------+
 
-.. _figure_eventdev_order_atq_test:
 
 .. figure:: img/eventdev_order_atq_test.*
 
@@ -519,7 +516,6 @@ instead of two different atomic queues.
    |   |              |                | port n.                 |
    +---+--------------+----------------+-------------------------+
 
-.. _figure_eventdev_atomic_atq_test:
 
 .. figure:: img/eventdev_atomic_atq_test.*
 
@@ -582,7 +578,6 @@ This is a performance test case that aims at testing the following:
    |   |              | nb_producers   | Producers use port n to port p          |
    +---+--------------+----------------+-----------------------------------------+
 
-.. _figure_eventdev_perf_queue_test:
 
 .. figure:: img/eventdev_perf_queue_test.*
 
@@ -720,7 +715,6 @@ This is a performance test case that aims at testing the following with
    |   |              | nb_producers   | Producers use port n to port p          |
    +---+--------------+----------------+-----------------------------------------+
 
-.. _figure_eventdev_perf_atq_test:
 
 .. figure:: img/eventdev_perf_atq_test.*
 
@@ -833,11 +827,9 @@ This is a pipeline test case that aims at testing the following:
    |   |              |                | depending on the Tx adapter capability. |
    +---+--------------+----------------+-----------------------------------------+
 
-.. _figure_eventdev_pipeline_queue_test_generic:
 
 .. figure:: img/eventdev_pipeline_queue_test_generic.*
 
-.. _figure_eventdev_pipeline_queue_test_internal_port:
 
 .. figure:: img/eventdev_pipeline_queue_test_internal_port.*
 
@@ -962,11 +954,9 @@ This is a pipeline test case that aims at testing the following with
    |   |              |                | depending on the Tx adapter capability. |
    +---+--------------+----------------+-----------------------------------------+
 
-.. _figure_eventdev_pipeline_atq_test_generic:
 
 .. figure:: img/eventdev_pipeline_atq_test_generic.*
 
-.. _figure_eventdev_pipeline_atq_test_internal_port:
 
 .. figure:: img/eventdev_pipeline_atq_test_internal_port.*
 
diff --git a/doc/guides/tools/testmldev.rst b/doc/guides/tools/testmldev.rst
index e3182c960f..578a1f02e5 100644
--- a/doc/guides/tools/testmldev.rst
+++ b/doc/guides/tools/testmldev.rst
@@ -209,7 +209,6 @@ when handling with `N` number of models.
 executes the sequence of load / start / stop / unload for a model in order,
 followed by next model.
 
-.. _figure_mldev_model_ops_subtest_a:
 
 .. figure:: img/mldev_model_ops_subtest_a.*
 
@@ -219,7 +218,6 @@ followed by next model.
 executes load for all models, followed by a start for all models.
 Upon successful start of all models, stop is invoked for all models followed by unload.
 
-.. _figure_mldev_model_ops_subtest_b:
 
 .. figure:: img/mldev_model_ops_subtest_b.*
 
@@ -229,7 +227,6 @@ Upon successful start of all models, stop is invoked for all models followed by
 loads all models, followed by a start and stop of all models in order.
 Upon completion of stop, unload is invoked for all models.
 
-.. _figure_mldev_model_ops_subtest_c:
 
 .. figure:: img/mldev_model_ops_subtest_c.*
 
@@ -239,7 +236,6 @@ Upon completion of stop, unload is invoked for all models.
 executes load and start for all models available.
 Upon successful start of all models, stop is executed for the models.
 
-.. _figure_mldev_model_ops_subtest_d:
 
 .. figure:: img/mldev_model_ops_subtest_d.*
 
@@ -334,7 +330,6 @@ The model is unloaded upon completion of all inferences for the model.
 The test would continue loading and executing inference requests for all models
 specified through ``filelist`` option in an ordered manner.
 
-.. _figure_mldev_inference_ordered:
 
 .. figure:: img/mldev_inference_ordered.*
 
@@ -390,7 +385,6 @@ Total number of inferences enqueued for a model are equal to the repetitions spe
 An additional pool of threads would dequeue the inferences from the device.
 Models would be unloaded upon completion of inferences for all models loaded.
 
-.. _figure_mldev_inference_interleave:
 
 .. figure:: img/mldev_inference_interleave.*
 
-- 
2.51.0


^ permalink raw reply	[relevance 7%]

* [PATCH v3 6/7] doc: update docs for ethdev changes
  @ 2025-10-03 11:02  4%   ` Bruce Richardson
  0 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-10-03 11:02 UTC (permalink / raw)
  To: dev; +Cc: stephen, thomas, Bruce Richardson

Move text from deprecation notice to release note, and update.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/rel_notes/deprecation.rst   | 7 -------
 doc/guides/rel_notes/release_25_11.rst | 6 ++++++
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 483030cda8..4b9da99484 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -98,13 +98,6 @@ Deprecation Notices
   - ``rte_flow_item_pppoe``
   - ``rte_flow_item_pppoe_proto_id``
 
-* ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
-  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
-  ``q_errors``.
-  Instead queue stats will be received via xstats API. Current method support
-  will be limited to maximum 256 queues.
-  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
-
 * ethdev: Flow actions ``PF`` and ``VF`` have been deprecated since DPDK 21.11
   and are yet to be removed. That still has not happened because there are net
   drivers which support combined use of either action ``PF`` or action ``VF``
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..4b00d3ec9e 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -116,6 +116,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* ethdev: As previously announced in the deprecation notes,
+  the queue-specific stats fields have been removed from ``struct rte_eth_stats``.
+  The removed fields are ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes`` and ``q_errors``.
+  Queue statistics should instead be retrieved through the xstats API.
+  The compile-time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` is also removed from the public headers.
+
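
(As a migration hint, a sketch against the long-standing xstats calls; the rx_qN_/tx_qN_
name prefixes are the typical driver convention rather than something this note guarantees.)

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch: dump per-queue counters via xstats instead of the removed
     * q_* arrays of struct rte_eth_stats.
     */
    static void
    print_queue_xstats(uint16_t port_id)
    {
            int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);
            if (n <= 0)
                    return;

            struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
            struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));
            if (names != NULL && vals != NULL &&
                            rte_eth_xstats_get_names(port_id, names, n) == n &&
                            rte_eth_xstats_get(port_id, vals, n) == n) {
                    for (i = 0; i < n; i++) {
                            if (strncmp(names[i].name, "rx_q", 4) == 0 ||
                                            strncmp(names[i].name, "tx_q", 4) == 0)
                                    printf("%s: %" PRIu64 "\n",
                                           names[i].name, vals[i].value);
                    }
            }
            free(names);
            free(vals);
    }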
 
 ABI Changes
 -----------
-- 
2.48.1


^ permalink raw reply	[relevance 4%]

* RE: [PATCH v3 1/3] cryptodev: support PQC ML algorithms
  2025-10-01 17:56  3%       ` [PATCH v3 1/3] " Gowrishankar Muthukrishnan
@ 2025-10-03 14:24  0%         ` Akhil Goyal
  0 siblings, 0 replies; 77+ results
From: Akhil Goyal @ 2025-10-03 14:24 UTC (permalink / raw)
  To: Gowrishankar Muthukrishnan, dev, Fan Zhang, Kai Ji
  Cc: Anoob Joseph, Gowrishankar Muthukrishnan

> Subject: [PATCH v3 1/3] cryptodev: support PQC ML algorithms
> 
> Add support for PQC ML-KEM and ML-DSA algorithms.
> 
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
>  doc/guides/cryptodevs/features/default.ini |   2 +
>  doc/guides/prog_guide/cryptodev_lib.rst    |   3 +-
>  doc/guides/rel_notes/release_25_11.rst     |  11 +
>  lib/cryptodev/rte_crypto_asym.h            | 306 +++++++++++++++++++++
>  lib/cryptodev/rte_cryptodev.c              |  60 ++++
>  lib/cryptodev/rte_cryptodev.h              |  15 +-
>  6 files changed, 394 insertions(+), 3 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 116ffce249..64198f013a 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -134,6 +134,8 @@ ECPM                    =
>  ECDH                    =
>  SM2                     =
>  EdDSA                   =
> +ML-DSA                  =
> +ML-KEM                  =
> 
>  ;
>  ; Supported Operating systems of a default crypto driver.
> diff --git a/doc/guides/prog_guide/cryptodev_lib.rst
> b/doc/guides/prog_guide/cryptodev_lib.rst
> index b54efcb74e..f0ee44eb54 100644
> --- a/doc/guides/prog_guide/cryptodev_lib.rst
> +++ b/doc/guides/prog_guide/cryptodev_lib.rst
> @@ -928,7 +928,8 @@ Asymmetric Cryptography
>  The cryptodev library currently provides support for the following asymmetric
>  Crypto operations; RSA, Modular exponentiation and inversion, Diffie-Hellman
> and
>  Elliptic Curve Diffie-Hellman public and/or private key generation and shared
> -secret compute, DSA and EdDSA signature generation and verification.
> +secret compute, DSA and EdDSA signature generation and verification,
> +PQC ML-KEM and ML-DSA algorithms.
> 
>  Session and Session Management
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/rel_notes/release_25_11.rst
> b/doc/guides/rel_notes/release_25_11.rst
> index c3b94e1896..9d47f762d7 100644
> --- a/doc/guides/rel_notes/release_25_11.rst
> +++ b/doc/guides/rel_notes/release_25_11.rst
> @@ -76,6 +76,14 @@ New Features
>    * Added multi-process per port.
>    * Optimized code.
> 
> +* **Added PQC ML-KEM and ML-DSA support.**
> +
> +  * Added PQC ML-KEM support with reference to FIPS203.
> +  * Added PQC ML-DSA support with reference to FIPS204.
> +
> +* **Updated openssl crypto driver.**
> +
> +  * Added support for PQC ML-KEM and ML-DSA algorithms.

Split the openssl update into your 2/3 patch.
> 
>  Removed Items
>  -------------
> @@ -138,6 +146,9 @@ ABI Changes
>  * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16
> bytes
>    to avoid unaligned accesses.
> 
> +* cryptodev: The enum ``rte_crypto_asym_xform_type``, struct
> ``rte_crypto_asym_xform``
> +  and struct ``rte_crypto_asym_op`` are updated to include new values to
> support
> +  ML-KEM and ML-DSA.
> 
>  Known Issues
>  ------------
> diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
> index 9787b710e7..14a0e57467 100644
> --- a/lib/cryptodev/rte_crypto_asym.h
> +++ b/lib/cryptodev/rte_crypto_asym.h
> @@ -37,6 +37,20 @@ rte_crypto_asym_ke_strings[];
>  extern const char *
>  rte_crypto_asym_op_strings[];
> 
> +/** PQC ML crypto op parameters size */
> +extern const uint16_t
> +rte_crypto_ml_kem_pubkey_size[];
> +extern const uint16_t
> +rte_crypto_ml_kem_privkey_size[];
> +extern const uint16_t
> +rte_crypto_ml_kem_cipher_size[];
> +extern const uint16_t
> +rte_crypto_ml_dsa_pubkey_size[];
> +extern const uint16_t
> +rte_crypto_ml_dsa_privkey_size[];
> +extern const uint16_t
> +rte_crypto_ml_dsa_sign_size[];
> +
>  #ifdef __cplusplus
>  }
>  #endif
> @@ -144,6 +158,14 @@ enum rte_crypto_asym_xform_type {
>  	/**< Edwards Curve Digital Signature Algorithm
>  	 * Perform Signature Generation and Verification.
>  	 */
> +	RTE_CRYPTO_ASYM_XFORM_ML_KEM,
> +	/**< Module Lattice based Key Encapsulation Mechanism
> +	 * Performs Key Pair Generation, Encapsulation and Decapsulation.
> +	 */
> +	RTE_CRYPTO_ASYM_XFORM_ML_DSA
> +	/**< Module Lattice based Digital Signature Algorithm
> +	 * Performs Key Pair Generation, Signature Generation and Verification.
> +	 */
>  };
> 
>  /**
> @@ -720,6 +742,282 @@ struct rte_crypto_sm2_op_param {
>  	 */
>  };
> 
> +/**
> + * PQC ML-KEM algorithms
> + *
> + * List of ML-KEM algorithms used in PQC
> + */
> +enum rte_crypto_ml_kem_param_set {
> +	RTE_CRYPTO_ML_KEM_PARAM_NONE,
> +	RTE_CRYPTO_ML_KEM_PARAM_512,
> +	RTE_CRYPTO_ML_KEM_PARAM_768,
> +	RTE_CRYPTO_ML_KEM_PARAM_1024,
> +};

Can we drop PARAM and use RTE_CRYPTO_ML_KEM_512?

> +
> +/**
> + * PQC ML-KEM op types
> + *
> + * List of ML-KEM op types in PQC
> + */
> +enum rte_crypto_ml_kem_op_type {
> +	RTE_CRYPTO_ML_KEM_OP_KEYGEN,
> +	RTE_CRYPTO_ML_KEM_OP_KEYVER,
> +	RTE_CRYPTO_ML_KEM_OP_ENCAP,
> +	RTE_CRYPTO_ML_KEM_OP_DECAP,
> +	RTE_CRYPTO_ML_KEM_OP_END
> +};
> +
> +/**
> + * PQC ML-KEM transform data
> + *
> + * Structure describing ML-KEM xform params
> + */
> +struct rte_crypto_ml_kem_xform {
> +	enum rte_crypto_ml_kem_param_set param;
> +};
Doxygen comments missing.

> +
> +/**
> + * PQC ML-KEM KEYGEN op
> + *
> + * Parameters for PQC ML-KEM key generation operation
> + */
> +struct rte_crypto_ml_kem_keygen_op {
> +	rte_crypto_param d;
> +	/**< The seed d value (of 32 bytes in length) to generate key pair.*/
> +
> +	rte_crypto_param z;
> +	/**< The seed z value (of 32 bytes in length) to generate key pair.*/
> +
> +	rte_crypto_param ek;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed encapsulation key.
> +	 * - Refer `rte_crypto_ml_kem_pubkey_size` for size of buffer.
> +	 */
> +
> +	rte_crypto_param dk;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed decapsulation key.
> +	 * - Refer `rte_crypto_ml_kem_privkey_size` for size of buffer.
> +	 */
> +};
> +
> +/**
> + * PQC ML-KEM KEYVER op
> + *
> + * Parameters for PQC ML-KEM key verification operation
> + */
> +struct rte_crypto_ml_kem_keyver_op {
> +	enum rte_crypto_ml_kem_op_type op;
> +	/**<
> +	 * Op associated with key to be verified is one of below:
> +	 * - Encapsulation op
> +	 * - Decapsulation op
> +	 */
> +
> +	rte_crypto_param key;
> +	/**<
> +	 * KEM key to check.
> +	 * - ek in case of encapsulation op.
> +	 * - dk in case of decapsulation op.
> +	 */
> +};
> +
> +/**
> + * PQC ML-KEM ENCAP op
> + *
> + * Parameters for PQC ML-KEM encapsulation operation
> + */
> +struct rte_crypto_ml_kem_encap_op {
> +	rte_crypto_param message;
> +	/**< The message (of 32 bytes in length) for randomness.*/
> +
> +	rte_crypto_param ek;
> +	/**< The encapsulation key.*/
> +
> +	rte_crypto_param cipher;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed cipher.
> +	 * - Refer `rte_crypto_ml_kem_cipher_size` for size of buffer.
> +	 */
> +
> +	rte_crypto_param sk;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed shared secret key (32 bytes).
> +	 */
> +};
> +
> +/**
> + * PQC ML-KEM DECAP op
> + *
> + * Parameters for PQC ML-KEM decapsulation operation
> + */
> +struct rte_crypto_ml_kem_decap_op {
> +	rte_crypto_param cipher;
> +	/**< The cipher to be decapsulated.*/
> +
> +	rte_crypto_param dk;
> +	/**< The decapsulation key.*/
> +
> +	rte_crypto_param sk;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed shared secret key (32 bytes).
> +	 */
> +};
> +
> +/**
> + * PQC ML-KEM op
> + *
> + * Parameters for PQC ML-KEM operation
> + */
> +struct rte_crypto_ml_kem_op {
> +	enum rte_crypto_ml_kem_op_type op;
> +	union {
> +		struct rte_crypto_ml_kem_keygen_op keygen;
> +		struct rte_crypto_ml_kem_keyver_op keyver;
> +		struct rte_crypto_ml_kem_encap_op encap;
> +		struct rte_crypto_ml_kem_decap_op decap;
> +	};
> +};
> +
> +/**
> + * PQC ML-DSA algorithms
> + *
> + * List of ML-DSA algorithms used in PQC
> + */
> +enum rte_crypto_ml_dsa_param_set {
> +	RTE_CRYPTO_ML_DSA_PARAM_NONE,
> +	RTE_CRYPTO_ML_DSA_PARAM_44,
> +	RTE_CRYPTO_ML_DSA_PARAM_65,
> +	RTE_CRYPTO_ML_DSA_PARAM_87,
> +};
Can we drop PARAM?
> +
> +/**
> + * PQC ML-DSA op types
> + *
> + * List of ML-DSA op types in PQC
> + */
> +enum rte_crypto_ml_dsa_op_type {
> +	RTE_CRYPTO_ML_DSA_OP_KEYGEN,
> +	RTE_CRYPTO_ML_DSA_OP_SIGN,
> +	RTE_CRYPTO_ML_DSA_OP_VERIFY,
> +	RTE_CRYPTO_ML_DSA_OP_END
> +};
> +
> +/**
> + * PQC ML-DSA transform data
> + *
> + * Structure describing ML-DSA xform params
> + */
> +struct rte_crypto_ml_dsa_xform {
> +	enum rte_crypto_ml_dsa_param_set param;

Add missing doxygen comments.
> +
> +	bool sign_deterministic;
> +	/**< The signature generated using deterministic method. */
> +
> +	bool sign_prehash;
> +	/**< The signature generated using prehash or pure routine. */
> +};
> +
> +/**
> + * PQC ML-DSA KEYGEN op
> + *
> + * Parameters for PQC ML-DSA key generation operation
> + */
> +struct rte_crypto_ml_dsa_keygen_op {
> +	rte_crypto_param seed;
> +	/**< The random seed (of 32 bytes in length) to generate key pair.*/
> +
> +	rte_crypto_param pubkey;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed public key.
> +	 * - Refer `rte_crypto_ml_dsa_pubkey_size` for size of buffer.
> +	 */
> +
> +	rte_crypto_param privkey;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed secret key.
> +	 * - Refer `rte_crypto_ml_dsa_privkey_size` for size of buffer.
> +	 */
> +};
> +
> +/**
> + * PQC ML-DSA SIGGEN op
> + *
> + * Parameters for PQC ML-DSA sign operation
> + */
> +struct rte_crypto_ml_dsa_siggen_op {
> +	rte_crypto_param message;
> +	/**< The message to generate signature.*/
> +
> +	rte_crypto_param mu;
> +	/**< The mu to generate signature.*/
> +
> +	rte_crypto_param privkey;
> +	/**< The secret key to generate signature.*/
> +
> +	rte_crypto_param seed;
> +	/**< The seed to generate signature.*/
> +
> +	rte_crypto_param ctx;
> +	/**< The context key to generate signature.*/
> +
> +	enum rte_crypto_auth_algorithm hash;
> +	/**< Hash function to generate signature. */
> +
> +	rte_crypto_param sign;
> +	/**<
> +	 * Pointer to output data
> +	 * - The computed signature.
> +	 * - Refer `rte_crypto_ml_dsa_sign_size` for size of buffer.
> +	 */
> +};
> +
> +/**
> + * PQC ML-DSA SIGVER op
> + *
> + * Parameters for PQC ML-DSA verify operation
> + */
> +struct rte_crypto_ml_dsa_sigver_op {
> +	rte_crypto_param pubkey;
> +	/**< The public key to verify signature.*/
> +
> +	rte_crypto_param message;
> +	/**< The message used to verify signature.*/
> +
> +	rte_crypto_param sign;
> +	/**< The signature to verify.*/
> +
> +	rte_crypto_param mu;
> +	/**< The mu used to generate signature.*/
> +
> +	rte_crypto_param ctx;
> +	/**< The context key to generate signature.*/
> +
> +	enum rte_crypto_auth_algorithm hash;
> +	/**< Hash function to generate signature. */
> +};
> +
> +/**
> + * PQC ML-DSA op
> + *
> + * Parameters for PQC ML-DSA operation
> + */
> +struct rte_crypto_ml_dsa_op {
> +	enum rte_crypto_ml_dsa_op_type op;
> +	union {
> +		struct rte_crypto_ml_dsa_keygen_op keygen;
> +		struct rte_crypto_ml_dsa_siggen_op siggen;
> +		struct rte_crypto_ml_dsa_sigver_op sigver;
> +	};
> +};
> +
>  /**
>   * Asymmetric crypto transform data
>   *
> @@ -751,6 +1049,12 @@ struct rte_crypto_asym_xform {
>  		/**< EC xform parameters, used by elliptic curve based
>  		 * operations.
>  		 */
> +
> +		struct rte_crypto_ml_kem_xform mlkem;
> +		/**< PQC ML-KEM xform parameters */
> +
> +		struct rte_crypto_ml_dsa_xform mldsa;
> +		/**< PQC ML-DSA xform parameters */
>  	};
>  };
> 
> @@ -778,6 +1082,8 @@ struct rte_crypto_asym_op {
>  		struct rte_crypto_ecpm_op_param ecpm;
>  		struct rte_crypto_sm2_op_param sm2;
>  		struct rte_crypto_eddsa_op_param eddsa;
> +		struct rte_crypto_ml_kem_op mlkem;
> +		struct rte_crypto_ml_dsa_op mldsa;
>  	};
>  	uint16_t flags;
>  	/**<
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index bb7bab4dd5..fd40c8a64c 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -229,6 +229,66 @@ const char *rte_crypto_asym_ke_strings[] = {
>  	[RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY] = "pub_ec_key_verify"
>  };
> 
> +/**
> + * Public key size used in PQC ML-KEM based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_pubkey_size)
> +const uint16_t rte_crypto_ml_kem_pubkey_size[] = {
> +	[RTE_CRYPTO_ML_KEM_PARAM_512] = 800,
> +	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1184,
> +	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
> +};
> +
> +/**
> + * Private key size used in PQC ML-KEM based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_privkey_size)
> +const uint16_t rte_crypto_ml_kem_privkey_size[] = {
> +	[RTE_CRYPTO_ML_KEM_PARAM_512] = 1632,
> +	[RTE_CRYPTO_ML_KEM_PARAM_768] = 2400,
> +	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 3168,
> +};
> +
> +/**
> + * Cipher size used in PQC ML-KEM based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_cipher_size)
> +const uint16_t rte_crypto_ml_kem_cipher_size[] = {
> +	[RTE_CRYPTO_ML_KEM_PARAM_512] = 768,
> +	[RTE_CRYPTO_ML_KEM_PARAM_768] = 1088,
> +	[RTE_CRYPTO_ML_KEM_PARAM_1024] = 1568,
> +};
> +
> +/**
> + * Public key size used in PQC ML-DSA based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_pubkey_size)
> +const uint16_t rte_crypto_ml_dsa_pubkey_size[] = {
> +	[RTE_CRYPTO_ML_DSA_PARAM_44] = 1312,
> +	[RTE_CRYPTO_ML_DSA_PARAM_65] = 1952,
> +	[RTE_CRYPTO_ML_DSA_PARAM_87] = 2592,
> +};
> +
> +/**
> + * Private key size used in PQC ML-DSA based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_privkey_size)
> +const uint16_t rte_crypto_ml_dsa_privkey_size[] = {
> +	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2560,
> +	[RTE_CRYPTO_ML_DSA_PARAM_65] = 4032,
> +	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4896,
> +};
> +
> +/**
> + * Sign size used in PQC ML-DSA based crypto ops.
> + */
> +RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_sign_size)
> +const uint16_t rte_crypto_ml_dsa_sign_size[] = {
> +	[RTE_CRYPTO_ML_DSA_PARAM_44] = 2420,
> +	[RTE_CRYPTO_ML_DSA_PARAM_65] = 3309,
> +	[RTE_CRYPTO_ML_DSA_PARAM_87] = 4627,
> +};
> +
>  struct rte_cryptodev_sym_session_pool_private_data {
>  	uint16_t sess_data_sz;
>  	/**< driver session data size */
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index eaf0e50d37..37a6a5e49b 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -167,10 +167,13 @@ struct rte_cryptodev_asymmetric_xform_capability {
>  	uint32_t op_types;
>  	/**<
>  	 * Bitmask for supported rte_crypto_asym_op_type or
> +	 * rte_crypto_ml_kem_op_type or rte_crypto_ml_dsa_op_type or
>  	 * rte_crypto_asym_ke_type. Which enum is used is determined
>  	 * by the rte_crypto_asym_xform_type. For key exchange algorithms
> -	 * like Diffie-Hellman it is rte_crypto_asym_ke_type, for others
> -	 * it is rte_crypto_asym_op_type.
> +	 * like Diffie-Hellman it is rte_crypto_asym_ke_type,
> +	 * for ML-KEM algorithms it is rte_crypto_ml_kem_op_type,
> +	 * for ML-DSA algorithms it is rte_crypto_ml_dsa_op_type,
> +	 * for others it is rte_crypto_asym_op_type.
>  	 */
> 
>  	__extension__
> @@ -188,6 +191,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
> 
>  		uint32_t op_capa[RTE_CRYPTO_ASYM_OP_LIST_END];
>  		/**< Operation specific capabilities. */
> +
> +		uint32_t mlkem_capa[RTE_CRYPTO_ML_KEM_OP_END];
> +		/**< Bitmask of supported ML-KEM parameter sets. */
> +
> +		uint32_t mldsa_capa[RTE_CRYPTO_ML_DSA_OP_END];
> +		/**< Bitmask of supported ML-DSA parameter sets. */
>  	};
> 
>  	uint64_t hash_algos;
> @@ -577,6 +586,8 @@ rte_cryptodev_asym_get_xform_string(enum
> rte_crypto_asym_xform_type xform_enum);
>  /**< Support inner checksum computation/verification */
>  #define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
>  /**< Support Rx injection after security processing */
> +#define RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH		(1ULL << 29)
> +/**< Support Pre Hash ML-DSA Signature Generation */
> 
>  /**
>   * Get the name of a crypto device feature flag
> --
> 2.37.1


^ permalink raw reply	[relevance 0%]

* [PATCH v4 1/3] cryptodev: support PQC ML algorithms
  @ 2025-10-04  3:22  3%         ` Gowrishankar Muthukrishnan
  0 siblings, 0 replies; 77+ results
From: Gowrishankar Muthukrishnan @ 2025-10-04  3:22 UTC (permalink / raw)
  To: dev, Akhil Goyal, Fan Zhang, Kai Ji; +Cc: anoobj, Gowrishankar Muthukrishnan

Add support for PQC ML-KEM and ML-DSA algorithms.

Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
v4:
 - enum and var name changes
---
 doc/guides/cryptodevs/features/default.ini |   2 +
 doc/guides/prog_guide/cryptodev_lib.rst    |   3 +-
 doc/guides/rel_notes/release_25_11.rst     |   7 +
 lib/cryptodev/rte_crypto_asym.h            | 308 +++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c              |  60 ++++
 lib/cryptodev/rte_cryptodev.h              |  15 +-
 6 files changed, 392 insertions(+), 3 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 116ffce249..64198f013a 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -134,6 +134,8 @@ ECPM                    =
 ECDH                    =
 SM2                     =
 EdDSA                   =
+ML-DSA                  =
+ML-KEM                  =
 
 ;
 ; Supported Operating systems of a default crypto driver.
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b54efcb74e..f0ee44eb54 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -928,7 +928,8 @@ Asymmetric Cryptography
 The cryptodev library currently provides support for the following asymmetric
 Crypto operations; RSA, Modular exponentiation and inversion, Diffie-Hellman and
 Elliptic Curve Diffie-Hellman public and/or private key generation and shared
-secret compute, DSA and EdDSA signature generation and verification.
+secret compute, DSA and EdDSA signature generation and verification,
+PQC ML-KEM and ML-DSA algorithms.
 
 Session and Session Management
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_25_11.rst b/doc/guides/rel_notes/release_25_11.rst
index c3b94e1896..62b8767631 100644
--- a/doc/guides/rel_notes/release_25_11.rst
+++ b/doc/guides/rel_notes/release_25_11.rst
@@ -76,6 +76,10 @@ New Features
   * Added multi-process per port.
   * Optimized code.
 
+* **Added PQC ML-KEM and ML-DSA support.**
+
+  * Added PQC ML-KEM support with reference to FIPS203.
+  * Added PQC ML-DSA support with reference to FIPS204.
 
 Removed Items
 -------------
@@ -138,6 +142,9 @@ ABI Changes
 * stack: The structure ``rte_stack_lf_head`` alignment has been updated to 16 bytes
   to avoid unaligned accesses.
 
+* cryptodev: The enum ``rte_crypto_asym_xform_type``, struct ``rte_crypto_asym_xform``
+  and struct ``rte_crypto_asym_op`` are updated to include new values to support
+  ML-KEM and ML-DSA.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9787b710e7..7e066cdd54 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -37,6 +37,20 @@ rte_crypto_asym_ke_strings[];
 extern const char *
 rte_crypto_asym_op_strings[];
 
+/** PQC ML crypto op parameters size */
+extern const uint16_t
+rte_crypto_ml_kem_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_kem_cipher_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_pubkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_privkey_size[];
+extern const uint16_t
+rte_crypto_ml_dsa_sign_size[];
+
 #ifdef __cplusplus
 }
 #endif
@@ -144,6 +158,14 @@ enum rte_crypto_asym_xform_type {
 	/**< Edwards Curve Digital Signature Algorithm
 	 * Perform Signature Generation and Verification.
 	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_KEM,
+	/**< Module Lattice based Key Encapsulation Mechanism
+	 * Performs Key Pair Generation, Encapsulation and Decapsulation.
+	 */
+	RTE_CRYPTO_ASYM_XFORM_ML_DSA
+	/**< Module Lattice based Digital Signature Algorithm
+	 * Performs Key Pair Generation, Signature Generation and Verification.
+	 */
 };
 
 /**
@@ -720,6 +742,284 @@ struct rte_crypto_sm2_op_param {
 	 */
 };
 
+/**
+ * PQC ML-KEM parameter type
+ *
+ * List of ML-KEM parameter types used in PQC
+ */
+enum rte_crypto_ml_kem_type {
+	RTE_CRYPTO_ML_KEM_NONE,
+	RTE_CRYPTO_ML_KEM_512,
+	RTE_CRYPTO_ML_KEM_768,
+	RTE_CRYPTO_ML_KEM_1024,
+};
+
+/**
+ * PQC ML-KEM op type
+ *
+ * List of ML-KEM op types in PQC
+ */
+enum rte_crypto_ml_kem_op_type {
+	RTE_CRYPTO_ML_KEM_OP_KEYGEN,
+	RTE_CRYPTO_ML_KEM_OP_KEYVER,
+	RTE_CRYPTO_ML_KEM_OP_ENCAP,
+	RTE_CRYPTO_ML_KEM_OP_DECAP,
+	RTE_CRYPTO_ML_KEM_OP_END
+};
+
+/**
+ * PQC ML-KEM transform data
+ *
+ * Structure describing ML-KEM xform parameters
+ */
+struct rte_crypto_ml_kem_xform {
+	enum rte_crypto_ml_kem_type type;
+	/**< ML-KEM xform type */
+};
+
+/**
+ * PQC ML-KEM KEYGEN op
+ *
+ * Parameters for PQC ML-KEM key generation operation
+ */
+struct rte_crypto_ml_kem_keygen_op {
+	rte_crypto_param d;
+	/**< The seed d value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param z;
+	/**< The seed z value (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param ek;
+	/**<
+	 * Pointer to output data
+	 * - The computed encapsulation key.
+	 * - Refer `rte_crypto_ml_kem_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param dk;
+	/**<
+	 * Pointer to output data
+	 * - The computed decapsulation key.
+	 * - Refer `rte_crypto_ml_kem_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-KEM KEYVER op
+ *
+ * Parameters for PQC ML-KEM key verification operation
+ */
+struct rte_crypto_ml_kem_keyver_op {
+	enum rte_crypto_ml_kem_op_type op;
+	/**<
+	 * Op associated with key to be verified is one of below:
+	 * - Encapsulation op
+	 * - Decapsulation op
+	 */
+
+	rte_crypto_param key;
+	/**<
+	 * KEM key to check.
+	 * - ek in case of encapsulation op.
+	 * - dk in case of decapsulation op.
+	 */
+};
+
+/**
+ * PQC ML-KEM ENCAP op
+ *
+ * Parameters for PQC ML-KEM encapsulation operation
+ */
+struct rte_crypto_ml_kem_encap_op {
+	rte_crypto_param message;
+	/**< The message (of 32 bytes in length) for randomness.*/
+
+	rte_crypto_param ek;
+	/**< The encapsulation key.*/
+
+	rte_crypto_param cipher;
+	/**<
+	 * Pointer to output data
+	 * - The computed cipher.
+	 * - Refer `rte_crypto_ml_kem_cipher_size` for size of buffer.
+	 */
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM DECAP op
+ *
+ * Parameters for PQC ML-KEM decapsulation operation
+ */
+struct rte_crypto_ml_kem_decap_op {
+	rte_crypto_param cipher;
+	/**< The cipher to be decapsulated.*/
+
+	rte_crypto_param dk;
+	/**< The decapsulation key.*/
+
+	rte_crypto_param sk;
+	/**<
+	 * Pointer to output data
+	 * - The computed shared secret key (32 bytes).
+	 */
+};
+
+/**
+ * PQC ML-KEM op
+ *
+ * Parameters for PQC ML-KEM operation
+ */
+struct rte_crypto_ml_kem_op {
+	enum rte_crypto_ml_kem_op_type op;
+	union {
+		struct rte_crypto_ml_kem_keygen_op keygen;
+		struct rte_crypto_ml_kem_keyver_op keyver;
+		struct rte_crypto_ml_kem_encap_op encap;
+		struct rte_crypto_ml_kem_decap_op decap;
+	};
+};
+
+/**
+ * PQC ML-DSA parameter type
+ *
+ * List of ML-DSA parameter types used in PQC
+ */
+enum rte_crypto_ml_dsa_type {
+	RTE_CRYPTO_ML_DSA_NONE,
+	RTE_CRYPTO_ML_DSA_44,
+	RTE_CRYPTO_ML_DSA_65,
+	RTE_CRYPTO_ML_DSA_87,
+};
+
+/**
+ * PQC ML-DSA op type
+ *
+ * List of ML-DSA op types in PQC
+ */
+enum rte_crypto_ml_dsa_op_type {
+	RTE_CRYPTO_ML_DSA_OP_KEYGEN,
+	RTE_CRYPTO_ML_DSA_OP_SIGN,
+	RTE_CRYPTO_ML_DSA_OP_VERIFY,
+	RTE_CRYPTO_ML_DSA_OP_END
+};
+
+/**
+ * PQC ML-DSA transform data
+ *
+ * Structure describing ML-DSA xform parameters
+ */
+struct rte_crypto_ml_dsa_xform {
+	enum rte_crypto_ml_dsa_type type;
+	/**< ML-DSA xform type */
+
+	bool sign_deterministic;
+	/**< The signature generated using deterministic method. */
+
+	bool sign_prehash;
+	/**< The signature generated using prehash or pure routine. */
+};
+
+/**
+ * PQC ML-DSA KEYGEN op
+ *
+ * Parameters for PQC ML-DSA key generation operation
+ */
+struct rte_crypto_ml_dsa_keygen_op {
+	rte_crypto_param seed;
+	/**< The random seed (of 32 bytes in length) to generate key pair.*/
+
+	rte_crypto_param pubkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed public key.
+	 * - Refer `rte_crypto_ml_dsa_pubkey_size` for size of buffer.
+	 */
+
+	rte_crypto_param privkey;
+	/**<
+	 * Pointer to output data
+	 * - The computed secret key.
+	 * - Refer `rte_crypto_ml_dsa_privkey_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGGEN op
+ *
+ * Parameters for PQC ML-DSA sign operation
+ */
+struct rte_crypto_ml_dsa_siggen_op {
+	rte_crypto_param message;
+	/**< The message to generate signature.*/
+
+	rte_crypto_param mu;
+	/**< The mu to generate signature.*/
+
+	rte_crypto_param privkey;
+	/**< The secret key to generate signature.*/
+
+	rte_crypto_param seed;
+	/**< The seed to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+
+	rte_crypto_param sign;
+	/**<
+	 * Pointer to output data
+	 * - The computed signature.
+	 * - Refer `rte_crypto_ml_dsa_sign_size` for size of buffer.
+	 */
+};
+
+/**
+ * PQC ML-DSA SIGVER op
+ *
+ * Parameters for PQC ML-DSA verify operation
+ */
+struct rte_crypto_ml_dsa_sigver_op {
+	rte_crypto_param pubkey;
+	/**< The public key to verify signature.*/
+
+	rte_crypto_param message;
+	/**< The message used to verify signature.*/
+
+	rte_crypto_param sign;
+	/**< The signature to verify.*/
+
+	rte_crypto_param mu;
+	/**< The mu used to generate signature.*/
+
+	rte_crypto_param ctx;
+	/**< The context key to generate signature.*/
+
+	enum rte_crypto_auth_algorithm hash;
+	/**< Hash function to generate signature. */
+};
+
+/**
+ * PQC ML-DSA op
+ *
+ * Parameters for PQC ML-DSA operation
+ */
+struct rte_crypto_ml_dsa_op {
+	enum rte_crypto_ml_dsa_op_type op;
+	union {
+		struct rte_crypto_ml_dsa_keygen_op keygen;
+		struct rte_crypto_ml_dsa_siggen_op siggen;
+		struct rte_crypto_ml_dsa_sigver_op sigver;
+	};
+};
+
 /**
  * Asymmetric crypto transform data
  *
@@ -751,6 +1051,12 @@ struct rte_crypto_asym_xform {
 		/**< EC xform parameters, used by elliptic curve based
 		 * operations.
 		 */
+
+		struct rte_crypto_ml_kem_xform mlkem;
+		/**< PQC ML-KEM xform parameters */
+
+		struct rte_crypto_ml_dsa_xform mldsa;
+		/**< PQC ML-DSA xform parameters */
 	};
 };
 
@@ -778,6 +1084,8 @@ struct rte_crypto_asym_op {
 		struct rte_crypto_ecpm_op_param ecpm;
 		struct rte_crypto_sm2_op_param sm2;
 		struct rte_crypto_eddsa_op_param eddsa;
+		struct rte_crypto_ml_kem_op mlkem;
+		struct rte_crypto_ml_dsa_op mldsa;
 	};
 	uint16_t flags;
 	/**<
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index bb7bab4dd5..f4c6f692f0 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -229,6 +229,66 @@ const char *rte_crypto_asym_ke_strings[] = {
 	[RTE_CRYPTO_ASYM_KE_PUB_KEY_VERIFY] = "pub_ec_key_verify"
 };
 
+/**
+ * Public key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_pubkey_size)
+const uint16_t rte_crypto_ml_kem_pubkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_512] = 800,
+	[RTE_CRYPTO_ML_KEM_768] = 1184,
+	[RTE_CRYPTO_ML_KEM_1024] = 1568,
+};
+
+/**
+ * Private key size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_privkey_size)
+const uint16_t rte_crypto_ml_kem_privkey_size[] = {
+	[RTE_CRYPTO_ML_KEM_512] = 1632,
+	[RTE_CRYPTO_ML_KEM_768] = 2400,
+	[RTE_CRYPTO_ML_KEM_1024] = 3168,
+};
+
+/**
+ * Cipher size used in PQC ML-KEM based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_kem_cipher_size)
+const uint16_t rte_crypto_ml_kem_cipher_size[] = {
+	[RTE_CRYPTO_ML_KEM_512] = 768,
+	[RTE_CRYPTO_ML_KEM_768] = 1088,
+	[RTE_CRYPTO_ML_KEM_1024] = 1568,
+};
+
+/**
+ * Public key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_pubkey_size)
+const uint16_t rte_crypto_ml_dsa_pubkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_44] = 1312,
+	[RTE_CRYPTO_ML_DSA_65] = 1952,
+	[RTE_CRYPTO_ML_DSA_87] = 2592,
+};
+
+/**
+ * Private key size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_privkey_size)
+const uint16_t rte_crypto_ml_dsa_privkey_size[] = {
+	[RTE_CRYPTO_ML_DSA_44] = 2560,
+	[RTE_CRYPTO_ML_DSA_65] = 4032,
+	[RTE_CRYPTO_ML_DSA_87] = 4896,
+};
+
+/**
+ * Sign size used in PQC ML-DSA based crypto ops.
+ */
+RTE_EXPORT_SYMBOL(rte_crypto_ml_dsa_sign_size)
+const uint16_t rte_crypto_ml_dsa_sign_size[] = {
+	[RTE_CRYPTO_ML_DSA_44] = 2420,
+	[RTE_CRYPTO_ML_DSA_65] = 3309,
+	[RTE_CRYPTO_ML_DSA_87] = 4627,
+};
+
 struct rte_cryptodev_sym_session_pool_private_data {
 	uint16_t sess_data_sz;
 	/**< driver session data size */
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index eaf0e50d37..37a6a5e49b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -167,10 +167,13 @@ struct rte_cryptodev_asymmetric_xform_capability {
 	uint32_t op_types;
 	/**<
 	 * Bitmask for supported rte_crypto_asym_op_type or
+	 * rte_crypto_ml_kem_op_type or rte_crypto_ml_dsa_op_type or
 	 * rte_crypto_asym_ke_type. Which enum is used is determined
 	 * by the rte_crypto_asym_xform_type. For key exchange algorithms
-	 * like Diffie-Hellman it is rte_crypto_asym_ke_type, for others
-	 * it is rte_crypto_asym_op_type.
+	 * like Diffie-Hellman it is rte_crypto_asym_ke_type,
+	 * for ML-KEM algorithms it is rte_crypto_ml_kem_op_type,
+	 * for ML-DSA algorithms it is rte_crypto_ml_dsa_op_type,
+	 * for others it is rte_crypto_asym_op_type.
 	 */
 
 	__extension__
@@ -188,6 +191,12 @@ struct rte_cryptodev_asymmetric_xform_capability {
 
 		uint32_t op_capa[RTE_CRYPTO_ASYM_OP_LIST_END];
 		/**< Operation specific capabilities. */
+
+		uint32_t mlkem_capa[RTE_CRYPTO_ML_KEM_OP_END];
+		/**< Bitmask of supported ML-KEM parameter sets. */
+
+		uint32_t mldsa_capa[RTE_CRYPTO_ML_DSA_OP_END];
+		/**< Bitmask of supported ML-DSA parameter sets. */
 	};
 
 	uint64_t hash_algos;
@@ -577,6 +586,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support inner checksum computation/verification */
 #define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
 /**< Support Rx injection after security processing */
+#define RTE_CRYPTODEV_FF_MLDSA_SIGN_PREHASH		(1ULL << 29)
+/**< Support Pre Hash ML-DSA Signature Generation */
 
 /**
  * Get the name of a crypto device feature flag
-- 
2.37.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH] config/riscv: add rv64gcv cross compilation target
  2025-09-23 15:07 14% [PATCH] config/riscv: add rv64gcv cross compilation target sunyuechi
@ 2025-10-06 12:43  4% ` sunyuechi
  0 siblings, 0 replies; 77+ results
From: sunyuechi @ 2025-10-06 12:43 UTC (permalink / raw)
  To: dev; +Cc: Stanisław Kardach, Bruce Richardson

Hi, what is the status of this patch?


> -----Original Message-----
> From: sunyuechi@iscas.ac.cn
> Sent: 2025-09-23 23:07:34 (Tuesday)
> To: dev@dpdk.org
> Cc: "Sun Yuechi" <sunyuechi@iscas.ac.cn>, "Stanisław Kardach" <stanislaw.kardach@gmail.com>, "Bruce Richardson" <bruce.richardson@intel.com>
> Subject: [PATCH] config/riscv: add rv64gcv cross compilation target
> 
> From: Sun Yuechi <sunyuechi@iscas.ac.cn>
> 
> Add a cross file for rv64gcv, enable it in devtools/test-meson-builds.sh,
> and update the RISC-V cross-build guide to support the vector extension.
> 
> Signed-off-by: Sun Yuechi <sunyuechi@iscas.ac.cn>
> ---
>  config/riscv/meson.build                        |  3 ++-
>  config/riscv/riscv64_rv64gcv_linux_gcc          | 17 +++++++++++++++++
>  devtools/test-meson-builds.sh                   |  4 ++++
>  .../linux_gsg/cross_build_dpdk_for_riscv.rst    |  2 ++
>  4 files changed, 25 insertions(+), 1 deletion(-)
>  create mode 100644 config/riscv/riscv64_rv64gcv_linux_gcc
> 
> diff --git a/config/riscv/meson.build b/config/riscv/meson.build
> index f3daea0c0e..a06429a1e2 100644
> --- a/config/riscv/meson.build
> +++ b/config/riscv/meson.build
> @@ -43,7 +43,8 @@ vendor_generic = {
>          ['RTE_MAX_NUMA_NODES', 2]
>      ],
>      'arch_config': {
> -        'generic': {'machine_args': ['-march=rv64gc']}
> +        'generic': {'machine_args': ['-march=rv64gc']},
> +        'rv64gcv': {'machine_args': ['-march=rv64gcv']},
>      }
>  }
>  
> diff --git a/config/riscv/riscv64_rv64gcv_linux_gcc b/config/riscv/riscv64_rv64gcv_linux_gcc
> new file mode 100644
> index 0000000000..ccc5115dec
> --- /dev/null
> +++ b/config/riscv/riscv64_rv64gcv_linux_gcc
> @@ -0,0 +1,17 @@
> +[binaries]
> +c = ['ccache', 'riscv64-linux-gnu-gcc']
> +cpp = ['ccache', 'riscv64-linux-gnu-g++']
> +ar = 'riscv64-linux-gnu-ar'
> +strip = 'riscv64-linux-gnu-strip'
> +pcap-config = ''
> +
> +[host_machine]
> +system = 'linux'
> +cpu_family = 'riscv64'
> +cpu = 'rv64gcv'
> +endian = 'little'
> +
> +[properties]
> +vendor_id = 'generic'
> +arch_id = 'rv64gcv'
> +pkg_config_libdir = '/usr/lib/riscv64-linux-gnu/pkgconfig'
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index 4fff1f7177..4f07f84eb0 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -290,6 +290,10 @@ build build-ppc64-power8-gcc $f ABI $use_shared
>  f=$srcdir/config/riscv/riscv64_linux_gcc
>  build build-riscv64-generic-gcc $f ABI $use_shared
>  
> +# RISC-V vector (rv64gcv)
> +f=$srcdir/config/riscv/riscv64_rv64gcv_linux_gcc
> +build build-riscv64_rv64gcv_gcc $f ABI $use_shared
> +
>  # Test installation of the x86-generic target, to be used for checking
>  # the sample apps build using the pkg-config file for cflags and libs
>  load_env cc
> diff --git a/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst b/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
> index 7d7f7ac72b..bcba12a604 100644
> --- a/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
> +++ b/doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
> @@ -108,6 +108,8 @@ Currently the following targets are supported:
>  
>  * Generic rv64gc ISA: ``config/riscv/riscv64_linux_gcc``
>  
> +* RV64GCV ISA: ``config/riscv/riscv64_rv64gcv_linux_gcc``
>  
>  * SiFive U740 SoC: ``config/riscv/riscv64_sifive_u740_linux_gcc``
>  
>  To add a new target support, ``config/riscv/meson.build`` has to be modified by
> -- 
> 2.51.0

^ permalink raw reply	[relevance 4%]

* RE: [PATCH 3/3] vhost_user: support for memory regions
  2025-08-29 11:59  3%   ` Maxime Coquelin
@ 2025-10-08  9:23  0%     ` Bathija, Pravin
  0 siblings, 0 replies; 77+ results
From: Bathija, Pravin @ 2025-10-08  9:23 UTC (permalink / raw)
  To: Maxime Coquelin, dev; +Cc: pravin.m.bathija.dev

Dear Maxime,

I have made the changes you suggested, and I also have some queries inline in response to your comments.


> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, August 29, 2025 5:00 AM
> To: Bathija, Pravin <Pravin.Bathija@dell.com>; dev@dpdk.org
> Cc: pravin.m.bathija.dev@gmail.com
> Subject: Re: [PATCH 3/3] vhost_user: support for memory regions
>
>
> [EXTERNAL EMAIL]
>
> The title is not consistent with other commits in this library.
>
> On 8/12/25 4:33 AM, Pravin M Bathija wrote:
> > - modify data structures and add functions to support
> >    add and remove memory regions/slots
> > - define VHOST_MEMORY_MAX_NREGIONS & modify function
> >    vhost_user_set_mem_table accordingly
> > - dynamically add new memory slots via vhost_user_add_mem_reg
> > - remove unused memory slots via vhost_user_rem_mem_reg
> > - define data structure VhostUserSingleMemReg for single
> >    memory region
> > - modify data structures VhostUserRequest & VhostUserMsg
> >
>
> Please write full sentences, explaining the purpose of this change and not just
> listing the changes themselves.
I have done my best to address this in the new patch-set I just submitted.

>
> > Signed-off-by: Pravin M Bathija <pravin.bathija@dell.com>
> > ---
> >   lib/vhost/vhost_user.c | 325 +++++++++++++++++++++++++++++++++++---
> ---
> >   lib/vhost/vhost_user.h |  10 ++
> >   2 files changed, 291 insertions(+), 44 deletions(-)
> >
> > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index
> > b73dec6a22..6367f54b97 100644
> > --- a/lib/vhost/vhost_user.c
> > +++ b/lib/vhost/vhost_user.c
> > @@ -74,6 +74,9 @@
> VHOST_MESSAGE_HANDLER(VHOST_USER_SET_FEATURES,
> vhost_user_set_features, false, t
> >   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_OWNER,
> vhost_user_set_owner, false, true) \
> >   VHOST_MESSAGE_HANDLER(VHOST_USER_RESET_OWNER,
> vhost_user_reset_owner, false, false) \
> >   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_MEM_TABLE,
> > vhost_user_set_mem_table, true, true) \
> > +VHOST_MESSAGE_HANDLER(VHOST_USER_GET_MAX_MEM_SLOTS,
> > +vhost_user_get_max_mem_slots, false, false) \
> > +VHOST_MESSAGE_HANDLER(VHOST_USER_ADD_MEM_REG,
> vhost_user_add_mem_reg,
> > +true, true) \ VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG,
> > +vhost_user_rem_mem_reg, true, true) \
>
> Shouldn't it be:
> VHOST_MESSAGE_HANDLER(VHOST_USER_REM_MEM_REG,
> vhost_user_rem_mem_reg, false, true)
>
> And if not, aren't you not leaking FDs in vhost_user_rem_mem_reg?
>
Good catch. I have made the suggested change in the new patch-set.

> >   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_BASE,
> vhost_user_set_log_base, true, true) \
> >   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_LOG_FD,
> vhost_user_set_log_fd, true, true) \
> >   VHOST_MESSAGE_HANDLER(VHOST_USER_SET_VRING_NUM,
> > vhost_user_set_vring_num, false, true) \ @@ -228,7 +231,17 @@
> async_dma_map(struct virtio_net *dev, bool do_map)
> >   }
> >
> >   static void
> > -free_mem_region(struct virtio_net *dev)
> > +free_mem_region(struct rte_vhost_mem_region *reg) {
> > +   if (reg != NULL && reg->host_user_addr) {
> > +           munmap(reg->mmap_addr, reg->mmap_size);
> > +           close(reg->fd);
> > +           memset(reg, 0, sizeof(struct rte_vhost_mem_region));
> > +   }
> > +}
> > +
> > +static void
> > +free_all_mem_regions(struct virtio_net *dev)
> >   {
> >     uint32_t i;
> >     struct rte_vhost_mem_region *reg;
> > @@ -239,12 +252,10 @@ free_mem_region(struct virtio_net *dev)
> >     if (dev->async_copy && rte_vfio_is_enabled("vfio"))
> >             async_dma_map(dev, false);
> >
> > -   for (i = 0; i < dev->mem->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             reg = &dev->mem->regions[i];
> > -           if (reg->host_user_addr) {
> > -                   munmap(reg->mmap_addr, reg->mmap_size);
> > -                   close(reg->fd);
> > -           }
> > +           if (reg->mmap_addr)
> > +                   free_mem_region(reg);
>
> Please split this patch in multiple ones.
> Do the refactorings in dedicated patches.

I have split the original patch into multiple patches.

>
> >     }
> >   }
> >
> > @@ -258,7 +269,7 @@ vhost_backend_cleanup(struct virtio_net *dev)
> >             vdpa_dev->ops->dev_cleanup(dev->vid);
> >
> >     if (dev->mem) {
> > -           free_mem_region(dev);
> > +           free_all_mem_regions(dev);
> >             rte_free(dev->mem);
> >             dev->mem = NULL;
> >     }
> > @@ -707,7 +718,7 @@ numa_realloc(struct virtio_net **pdev, struct
> vhost_virtqueue **pvq)
> >     vhost_devices[dev->vid] = dev;
> >
> >     mem_size = sizeof(struct rte_vhost_memory) +
> > -           sizeof(struct rte_vhost_mem_region) * dev->mem->nregions;
> > +           sizeof(struct rte_vhost_mem_region) *
> VHOST_MEMORY_MAX_NREGIONS;
> >     mem = rte_realloc_socket(dev->mem, mem_size, 0, node);
> >     if (!mem) {
> >             VHOST_CONFIG_LOG(dev->ifname, ERR, @@ -811,8 +822,10
> @@
> > hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
> >     uint32_t i;
> >     uintptr_t hua = (uintptr_t)ptr;
> >
> > -   for (i = 0; i < mem->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             r = &mem->regions[i];
> > +           if (r->host_user_addr == 0)
> > +                   continue;
> >             if (hua >= r->host_user_addr &&
> >                     hua < r->host_user_addr + r->size) {
> >                     return get_blk_size(r->fd);
> > @@ -1250,9 +1263,13 @@ vhost_user_postcopy_register(struct virtio_net
> *dev, int main_fd,
> >      * retrieve the region offset when handling userfaults.
> >      */
> >     memory = &ctx->msg.payload.memory;
> > -   for (i = 0; i < memory->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> > +           int reg_msg_index = 0;
> >             reg = &dev->mem->regions[i];
> > -           memory->regions[i].userspace_addr = reg->host_user_addr;
> > +           if (reg->host_user_addr == 0)
> > +                   continue;
> > +           memory->regions[reg_msg_index].userspace_addr = reg-
> >host_user_addr;
> > +           reg_msg_index++;
> >     }
> >
> >     /* Send the addresses back to qemu */ @@ -1279,8 +1296,10 @@
> > vhost_user_postcopy_register(struct virtio_net *dev, int main_fd,
> >     }
> >
> >     /* Now userfault register and we can use the memory */
> > -   for (i = 0; i < memory->nregions; i++) {
> > +   for (i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> >             reg = &dev->mem->regions[i];
> > +           if (reg->host_user_addr == 0)
> > +                   continue;
> >             if (vhost_user_postcopy_region_register(dev, reg) < 0)
> >                     return -1;
> >     }
> > @@ -1385,6 +1404,46 @@ vhost_user_mmap_region(struct virtio_net *dev,
> >     return 0;
> >   }
> >
> > +static int
> > +vhost_user_initialize_memory(struct virtio_net **pdev) {
> > +   struct virtio_net *dev = *pdev;
> > +   int numa_node = SOCKET_ID_ANY;
> > +
> > +   /*
> > +    * If VQ 0 has already been allocated, try to allocate on the same
> > +    * NUMA node. It can be reallocated later in numa_realloc().
> > +    */
> > +   if (dev->nr_vring > 0)
> > +           numa_node = dev->virtqueue[0]->numa_node;
> > +
> > +   dev->nr_guest_pages = 0;
> > +   if (dev->guest_pages == NULL) {
> > +           dev->max_guest_pages = 8;
> > +           dev->guest_pages = rte_zmalloc_socket(NULL,
> > +                                   dev->max_guest_pages *
> > +                                   sizeof(struct guest_page),
> > +                                   RTE_CACHE_LINE_SIZE,
> > +                                   numa_node);
> > +           if (dev->guest_pages == NULL) {
> > +                   VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                           "failed to allocate memory for dev-
> >guest_pages");
> > +                   return -1;
> > +           }
> > +   }
> > +
> > +   dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct
> rte_vhost_memory) +
> > +           sizeof(struct rte_vhost_mem_region) *
> VHOST_MEMORY_MAX_NREGIONS, 0, numa_node);
> > +   if (dev->mem == NULL) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate
> memory for dev->mem");
> > +           rte_free(dev->guest_pages);
> > +           dev->guest_pages = NULL;
> > +           return -1;
> > +   }
> > +
> > +   return 0;
> > +}
> > +
> >   static int
> >   vhost_user_set_mem_table(struct virtio_net **pdev,
> >                     struct vhu_msg_context *ctx,
> > @@ -1393,7 +1452,6 @@ vhost_user_set_mem_table(struct virtio_net
> **pdev,
> >     struct virtio_net *dev = *pdev;
> >     struct VhostUserMemory *memory = &ctx->msg.payload.memory;
> >     struct rte_vhost_mem_region *reg;
> > -   int numa_node = SOCKET_ID_ANY;
> >     uint64_t mmap_offset;
> >     uint32_t i;
> >     bool async_notify = false;
> > @@ -1438,39 +1496,13 @@ vhost_user_set_mem_table(struct virtio_net
> **pdev,
> >             if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> >                     vhost_user_iotlb_flush_all(dev);
> >
> > -           free_mem_region(dev);
> > +           free_all_mem_regions(dev);
> >             rte_free(dev->mem);
> >             dev->mem = NULL;
> >     }
> >
> > -   /*
> > -    * If VQ 0 has already been allocated, try to allocate on the same
> > -    * NUMA node. It can be reallocated later in numa_realloc().
> > -    */
> > -   if (dev->nr_vring > 0)
> > -           numa_node = dev->virtqueue[0]->numa_node;
> > -
> > -   dev->nr_guest_pages = 0;
> > -   if (dev->guest_pages == NULL) {
> > -           dev->max_guest_pages = 8;
> > -           dev->guest_pages = rte_zmalloc_socket(NULL,
> > -                                   dev->max_guest_pages *
> > -                                   sizeof(struct guest_page),
> > -                                   RTE_CACHE_LINE_SIZE,
> > -                                   numa_node);
> > -           if (dev->guest_pages == NULL) {
> > -                   VHOST_CONFIG_LOG(dev->ifname, ERR,
> > -                           "failed to allocate memory for dev-
> >guest_pages");
> > -                   goto close_msg_fds;
> > -           }
> > -   }
> > -
> > -   dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct
> rte_vhost_memory) +
> > -           sizeof(struct rte_vhost_mem_region) * memory->nregions, 0,
> numa_node);
> > -   if (dev->mem == NULL) {
> > -           VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate
> memory for dev->mem");
> > -           goto free_guest_pages;
> > -   }
> > +   if (vhost_user_initialize_memory(pdev) < 0)
> > +           goto close_msg_fds;
>
> This part should be refactored into a dedicated preliminary patch.
The original patch has been divided into multiple patches.
>
> >
> >     for (i = 0; i < memory->nregions; i++) {
> >             reg = &dev->mem->regions[i];
> > @@ -1534,11 +1566,182 @@ vhost_user_set_mem_table(struct virtio_net
> **pdev,
> >     return RTE_VHOST_MSG_RESULT_OK;
> >
> >   free_mem_table:
> > -   free_mem_region(dev);
> > +   free_all_mem_regions(dev);
> >     rte_free(dev->mem);
> >     dev->mem = NULL;
> > +   rte_free(dev->guest_pages);
> > +   dev->guest_pages = NULL;
> > +close_msg_fds:
> > +   close_msg_fds(ctx);
> > +   return RTE_VHOST_MSG_RESULT_ERR;
> > +}
> > +
> > +
> > +static int
> > +vhost_user_get_max_mem_slots(struct virtio_net **pdev __rte_unused,
> > +                   struct vhu_msg_context *ctx,
> > +                   int main_fd __rte_unused)
> > +{
> > +   uint32_t max_mem_slots = VHOST_MEMORY_MAX_NREGIONS;
>
> This VHOST_MEMORY_MAX_NREGIONS value was hardcoded when only
> VHOST_USER_SET_MEM_TABLE was introduced.
>
> With this new features, my understanding is that we can get rid off this limit,
> right?
>
> The good news is increasing it should not break the DPDK ABI.
>
> Would it make sense to increase it?
I have increased VHOST_MEMORY_MAX_NREGIONS to 128 and tested with QEMU talking to vhost testpmd, adding and removing 128 memory regions.

> > +
> > +   ctx->msg.payload.u64 = (uint64_t)max_mem_slots;
> > +   ctx->msg.size = sizeof(ctx->msg.payload.u64);
> > +   ctx->fd_num = 0;
> >
> > -free_guest_pages:
> > +   return RTE_VHOST_MSG_RESULT_REPLY;
> > +}
> > +
> > +static int
> > +vhost_user_add_mem_reg(struct virtio_net **pdev,
> > +                   struct vhu_msg_context *ctx,
> > +                   int main_fd __rte_unused)
> > +{
> > +   struct virtio_net *dev = *pdev;
> > +   struct VhostUserMemoryRegion *region = &ctx-
> >msg.payload.memory_single.region;
> > +   uint32_t i;
> > +
> > +   /* make sure new region will fit */
> > +   if (dev->mem != NULL && dev->mem->nregions >=
> VHOST_MEMORY_MAX_NREGIONS) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                   "too many memory regions already (%u)",
> > +                   dev->mem->nregions);
> > +           goto close_msg_fds;
> > +   }
> > +
> > +   /* make sure supplied memory fd present */
> > +   if (ctx->fd_num != 1) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                   "fd count makes no sense (%u)",
> > +                   ctx->fd_num);
> > +           goto close_msg_fds;
> > +   }
>
> There is a lack of support for vDPA devices.
> My understanding here is that the vDPA device does not get the new table
> entry.
>
> In set_mem_table, we call its close callback, but that might be a bit too much
> for simple memory hotplug. we might need another mechanism.
Could you please suggest such a mechanism?

>
> > +
> > +   /* Make sure no overlap in guest virtual address space */
> > +   if (dev->mem != NULL && dev->mem->nregions > 0) {
> > +           for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++)
> {
> > +                   struct rte_vhost_mem_region *current_region =
> > +&dev->mem->regions[i];
> > +
> > +                   if (current_region->mmap_size == 0)
> > +                           continue;
> > +
> > +                   uint64_t current_region_guest_start = current_region-
> >guest_user_addr;
> > +                   uint64_t current_region_guest_end =
> current_region_guest_start
> > +                                                           +
> current_region->mmap_size - 1;
>
> Shouldn't it use size instead of mmap_size to check for overlaps?
>
> > +                   uint64_t proposed_region_guest_start = region-
> >userspace_addr;
> > +                   uint64_t proposed_region_guest_end =
> proposed_region_guest_start
> > +                                                           + region-
> >memory_size - 1;
> > +                   bool overlap = false;
> > +
> > +                   bool current_region_guest_start_overlap =
> > +                           current_region_guest_start >=
> proposed_region_guest_start
> > +                           && current_region_guest_start <=
> proposed_region_guest_end;
> > +                   bool current_region_guest_end_overlap =
> > +                           current_region_guest_end >=
> proposed_region_guest_start
> > +                           && current_region_guest_end <=
> proposed_region_guest_end;
> > +                   bool proposed_region_guest_start_overlap =
> > +                           proposed_region_guest_start >=
> current_region_guest_start
> > +                           && proposed_region_guest_start <=
> current_region_guest_end;
> > +                   bool proposed_region_guest_end_overlap =
> > +                           proposed_region_guest_end >=
> current_region_guest_start
> > +                           && proposed_region_guest_end <=
> current_region_guest_end;
> > +
> > +                   overlap = current_region_guest_start_overlap
> > +                           || current_region_guest_end_overlap
> > +                           || proposed_region_guest_start_overlap
> > +                           || proposed_region_guest_end_overlap;
> > +
> > +                   if (overlap) {
> > +                           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                                   "requested memory region overlaps
> with another region");
> > +                           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                                   "\tRequested region address:0x%"
> PRIx64,
> > +                                   region->userspace_addr);
> > +                           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                                   "\tRequested region size:0x%" PRIx64,
> > +                                   region->memory_size);
> > +                           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                                   "\tOverlapping region address:0x%"
> PRIx64,
> > +                                   current_region->guest_user_addr);
> > +                           VHOST_CONFIG_LOG(dev->ifname, ERR,
> > +                                   "\tOverlapping region size:0x%"
> PRIx64,
> > +                                   current_region->mmap_size);
> > +                           goto close_msg_fds;
> > +                   }
> > +
> > +           }
> > +   }
> > +
> > +   /* convert first region add to normal memory table set */
> > +   if (dev->mem == NULL) {
> > +           if (vhost_user_initialize_memory(pdev) < 0)
> > +                   goto close_msg_fds;
> > +   }
> > +
> > +   /* find a new region and set it like memory table set does */
> > +   struct rte_vhost_mem_region *reg = NULL;
> > +   uint64_t mmap_offset;
> > +
> > +   for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++) {
> > +           if (dev->mem->regions[i].guest_user_addr == 0) {
> > +                   reg = &dev->mem->regions[i];
> > +                   break;
> > +           }
> > +   }
> > +   if (reg == NULL) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR, "no free memory
> region");
> > +           goto close_msg_fds;
> > +   }
> > +
> > +   reg->guest_phys_addr = region->guest_phys_addr;
> > +   reg->guest_user_addr = region->userspace_addr;
> > +   reg->size            = region->memory_size;
> > +   reg->fd              = ctx->fds[0];
> > +
> > +   mmap_offset = region->mmap_offset;
> > +
> > +   if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) {
> > +           VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap
> region");
> > +           goto close_msg_fds;
> > +   }
> > +
> > +   dev->mem->nregions++;
> > +
> > +   if (dev->async_copy && rte_vfio_is_enabled("vfio"))
> > +           async_dma_map(dev, true);
> > +
> > +   if (vhost_user_postcopy_register(dev, main_fd, ctx) < 0)
> > +           goto free_mem_table;
> > +
> > +   for (i = 0; i < dev->nr_vring; i++) {
> > +           struct vhost_virtqueue *vq = dev->virtqueue[i];
> > +
> > +           if (!vq)
> > +                   continue;
> > +
> > +           if (vq->desc || vq->avail || vq->used) {
> > +                   /* vhost_user_lock_all_queue_pairs locked all qps */
> > +                   VHOST_USER_ASSERT_LOCK(dev, vq,
> VHOST_USER_SET_MEM_TABLE);
>
> VHOST_USER_ASSERT_LOCK(dev, vq, VHOST_USER_ADD_MEM_REG); ?
>
> > +
> > +                   /*
> > +                    * If the memory table got updated, the ring addresses
> > +                    * need to be translated again as virtual addresses have
> > +                    * changed.
> > +                    */
> > +                   vring_invalidate(dev, vq);
> > +
> > +                   translate_ring_addresses(&dev, &vq);
> > +                   *pdev = dev;
> > +           }
> > +   }
> > +
> > +   dump_guest_pages(dev);
> > +
> > +   return RTE_VHOST_MSG_RESULT_OK;
> > +
> > +free_mem_table:
> > +   free_all_mem_regions(dev);
> > +   rte_free(dev->mem);
> > +   dev->mem = NULL;
> >     rte_free(dev->guest_pages);
> >     dev->guest_pages = NULL;
> >   close_msg_fds:
> > @@ -1546,6 +1749,40 @@ vhost_user_set_mem_table(struct virtio_net
> **pdev,
> >     return RTE_VHOST_MSG_RESULT_ERR;
> >   }
> >
> > +static int
> > +vhost_user_rem_mem_reg(struct virtio_net **pdev __rte_unused,
> > +                   struct vhu_msg_context *ctx __rte_unused,
> > +                   int main_fd __rte_unused)
> > +{
> > +   struct virtio_net *dev = *pdev;
> > +   struct VhostUserMemoryRegion *region =
> > +&ctx->msg.payload.memory_single.region;
> > +
>
> It lacks support for vDPA devices.
> In set_mem_table, we call the vDPA close cb to ensure it is not actively
> accessing memory being unmapped.
>
> We need something similar here, otherwise the vDPA device is not aware of the
> memory being unplugged.

I have incorporated this change in the latest patch-set.
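
Roughly, what I added follows the existing set_mem_table handling. The snippet below is only a sketch: it assumes the same VIRTIO_DEV_VDPA_CONFIGURED flag and dev_close callback that path uses, and the exact code in the new patch-set may differ.

	/* Sketch: notify a configured vDPA device before the region is
	 * unmapped, so it stops using the old mappings.
	 */
	if (dev->flags & VIRTIO_DEV_VDPA_CONFIGURED) {
		struct rte_vdpa_device *vdpa_dev = dev->vdpa_dev;

		if (vdpa_dev != NULL && vdpa_dev->ops->dev_close != NULL)
			vdpa_dev->ops->dev_close(dev->vid);
		dev->flags &= ~VIRTIO_DEV_VDPA_CONFIGURED;
	}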

>
> > +   if (dev->mem != NULL && dev->mem->nregions > 0) {
> > +           for (uint32_t i = 0; i < VHOST_MEMORY_MAX_NREGIONS; i++)
> {
> > +                   struct rte_vhost_mem_region *current_region =
> > +&dev->mem->regions[i];
> > +
> > +                   if (current_region->guest_user_addr == 0)
> > +                           continue;
> > +
> > +                   /*
> > +                    * According to the vhost-user specification:
> > +                    * The memory region to be removed is identified by
> its guest address,
> > +                    * user address and size. The mmap offset is ignored.
> > +                    */
> > +                   if (region->userspace_addr == current_region-
> >guest_user_addr
> > +                           && region->guest_phys_addr ==
> current_region->guest_phys_addr
> > +                           && region->memory_size == current_region-
> >size) {
> > +                           free_mem_region(current_region);
> > +                           dev->mem->nregions--;
> > +                           return RTE_VHOST_MSG_RESULT_OK;
> > +                   }
>
> There is a lack of IOTLB entries invalidation here, as IOTLB entries in the cache
> could point to memory being unmapped in this function.
>
> Same comment for vring invalidation, as the vring adresses are not re-
> translated at each burst.
I will incorporate this in the next version.
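
Roughly what I have in mind for vhost_user_rem_mem_reg(), mirroring the invalidation already done on the add path in this series. This is a sketch only, reusing the existing helpers (vhost_user_iotlb_flush_all, vring_invalidate, translate_ring_addresses); the final patch may differ.

	/* After unmapping the region and before returning
	 * RTE_VHOST_MSG_RESULT_OK: drop cached IOTLB translations and
	 * force ring address re-translation.
	 */
	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
		vhost_user_iotlb_flush_all(dev);

	for (uint32_t i = 0; i < dev->nr_vring; i++) {
		struct vhost_virtqueue *vq = dev->virtqueue[i];

		if (!vq)
			continue;

		if (vq->desc || vq->avail || vq->used) {
			/* Virtual addresses may have changed, so invalidate
			 * and re-translate the ring addresses.
			 */
			vring_invalidate(dev, vq);
			translate_ring_addresses(&dev, &vq);
			*pdev = dev;
		}
	}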
>
> > +           }
> > +   }
> > +
> > +   VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to find region");
> > +   return RTE_VHOST_MSG_RESULT_ERR;
> > +}
> > +
> >   static bool
> >   vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
> >   {
> > diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h index
> > ef486545ba..5a0e747b58 100644
> > --- a/lib/vhost/vhost_user.h
> > +++ b/lib/vhost/vhost_user.h
> > @@ -32,6 +32,7 @@
> >                                      (1ULL <<
> VHOST_USER_PROTOCOL_F_BACKEND_SEND_FD) | \
> >                                      (1ULL <<
> VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
> >                                      (1ULL <<
> VHOST_USER_PROTOCOL_F_PAGEFAULT) | \
> > +                                    (1ULL <<
> VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS) | \
> >                                      (1ULL <<
> VHOST_USER_PROTOCOL_F_STATUS))
> >
> >   typedef enum VhostUserRequest {
> > @@ -67,6 +68,9 @@ typedef enum VhostUserRequest {
> >     VHOST_USER_POSTCOPY_END = 30,
> >     VHOST_USER_GET_INFLIGHT_FD = 31,
> >     VHOST_USER_SET_INFLIGHT_FD = 32,
> > +   VHOST_USER_GET_MAX_MEM_SLOTS = 36,
> > +   VHOST_USER_ADD_MEM_REG = 37,
> > +   VHOST_USER_REM_MEM_REG = 38,
> >     VHOST_USER_SET_STATUS = 39,
> >     VHOST_USER_GET_STATUS = 40,
> >   } VhostUserRequest;
> > @@ -91,6 +95,11 @@ typedef struct VhostUserMemory {
> >     VhostUserMemoryRegion
> regions[VHOST_MEMORY_MAX_NREGIONS];
> >   } VhostUserMemory;
> >
> > +typedef struct VhostUserSingleMemReg {
> > +   uint64_t padding;
> > +   VhostUserMemoryRegion region;
> > +} VhostUserSingleMemReg;
> > +
> >   typedef struct VhostUserLog {
> >     uint64_t mmap_size;
> >     uint64_t mmap_offset;
> > @@ -186,6 +195,7 @@ typedef struct __rte_packed_begin VhostUserMsg {
> >             struct vhost_vring_state state;
> >             struct vhost_vring_addr addr;
> >             VhostUserMemory memory;
> > +           VhostUserSingleMemReg memory_single;
> >             VhostUserLog    log;
> >             struct vhost_iotlb_msg iotlb;
> >             VhostUserCryptoSessionParam crypto_session;


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v11 00/21]
  @ 2025-10-15 16:10  3%   ` David Marchand
  2025-10-15 16:31  0%     ` Bruce Richardson
  0 siblings, 1 reply; 77+ results
From: David Marchand @ 2025-10-15 16:10 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Chengwen Feng, Thomas Monjalon

Hello Bruce,

On Thu, 9 Oct 2025 at 15:01, Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> The ultimate goal of this patchset is to make it easier to run on systems
> with large numbers of cores, by simplifying the process of using core
> numbers >RTE_MAX_LCORE. The new EAL arg ``-remap-lcore-ids``, also
> shortened to ``-R``, is added to DPDK to support this.
>
> However, in order to add this new flag easily, the first dozen or more
> patches rework the argument handling in EAL to simplify things, using
> the argparse library for argument handling.
>
> When processing cmdline arguments in DPDK, we always do so with very
> little context. So, for example, when processing the "-l" flag, we have
> no idea whether there will be later a --proc-type=secondary flag. We
> have all sorts of post-arg-processing checks in place to try and catch
> these scenarios.
>
> To improve this situation, this patchset tries to simplify the handling
> of argument processing, by explicitly doing an initial pass to collate
> all arguments into a structure. Thereafter, the actual arg parsing is
> done in a fixed order, meaning that e.g. when processing the
> --main-lcore flag, we have already processed the service core flags. We
> also can far quicker and easier check for conflicting options, since
> they can all be checked for NULL/non-NULL in the arg structure
> immediately after the struct has been populated.
>
> An additional benefit of this work is that the argument parsing for EAL
> is much more centralised into common options and the options list file.
> This single list with ifdefs makes it clear to the viewer what options
> are common across OS's, vs what are unix-only or linux-only.
>
> Once the cleanup and rework is done, adding the new options for
> remapping cores becomes a lot simpler, since we can very easily check
> for scenarios like multi-process and handle those appropriately.
>
>
> V11:
> * fix issues flagged by unit tests in CI and subsequent testing:
>   - when passing in an lcore >= MAX_LCORES, return error rather than
>     ignoring it. (compatibility issue)
>   - return error when an invalid lcore set of "1-3-5" is passed in,
>     rather than just treating it as "3-5".

I did some tweaking on the series (and put my sob for taking the
bullet if I broke something ;-)), namely:
- squashed the init arg list patch into the initial patch that
introduces eal_option_list.h,
- inverted order of the patch on coremask rework with the one
introducing lcore remapping,
- I updated patch 6 as Chengwen requested, and I fixed return codes
for parse_arg_corelist(),
- I updated the doc for patch 8 as Chengwen requested,

I fixed a few reintroductions of socket-mem (should be numa-mem) in
intermediate patches.

I noticed that the leak reported earlier on patch "eal: gather EAL
args before processing" is still present when stopping at this commit,
and it is fixed in the next commit.
We could have avoided this transient issue, but I did not spend time on a
fix as it is just a leak in the event that wrong EAL options are passed.

There were some little checkpatch issues I fixed (plus some spurious
empty lines/spaces).
But I left the options definitions as is (wrt line length warning).

Wrt storing the cores as a fixed size cpuset, this storage is internal
and we can change in the future (no ABI concern afaics).

Series applied.

Thanks Chengwen for the reviews on argparse.

Thanks Bruce, this was kind of an unexpectedly long road.
This is a nice cleanup and I like this auto magic option and the debug logs.


I have two questions which could be addressed in followup patches but
seem more risky than what I touched, and require a new round of CI:
- are we missing a build check on RTE_MAX_LCORE < CPU_SETSIZE?
- should eal_clean_saved_args() be called in rte_eal_cleanup()?
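
For the first question, I had something as small as this in mind (untested sketch; the exact bound and the file it should live in are open, and it assumes RTE_MAX_LCORE is visible via rte_config.h):

#include <sched.h>      /* CPU_SETSIZE */
#include <assert.h>     /* static_assert */
#include <rte_config.h> /* RTE_MAX_LCORE */

/* Fail the build if lcore ids cannot be represented in a cpu_set_t. */
static_assert(RTE_MAX_LCORE <= CPU_SETSIZE,
	"RTE_MAX_LCORE must not exceed CPU_SETSIZE");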


And finally my dumb idea:

Do you think it would be feasible to extend this remapping mechanism
for multi process?
I would like to start all processes with only the -R option (and each
application has a dedicated cpu affinity, set by an external mechanism
out of DPDK).
Then some exchanges between primary and secondary processes are done
at init, with secondary announcing a number of lcores it needs, and
the primary replying with a lcoreid base for remapping.
One problem is that it would require tracking life and death of the
secondary processes so that the primary can reallocate unused lcore
ranges.


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v11 00/21]
  2025-10-15 16:10  3%   ` David Marchand
@ 2025-10-15 16:31  0%     ` Bruce Richardson
  0 siblings, 0 replies; 77+ results
From: Bruce Richardson @ 2025-10-15 16:31 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Chengwen Feng, Thomas Monjalon

On Wed, Oct 15, 2025 at 06:10:59PM +0200, David Marchand wrote:
> Hello Bruce,
> 
> On Thu, 9 Oct 2025 at 15:01, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > The ultimate goal of this patchset is to make it easier to run on systems
> > with large numbers of cores, by simplifying the process of using core
> > numbers >RTE_MAX_LCORE. The new EAL arg ``-remap-lcore-ids``, also
> > shortened to ``-R``, is added to DPDK to support this.
> >
> > However, in order to add this new flag easily, the first dozen or more
> > patches rework the argument handling in EAL to simplify things, using
> > the argparse library for argument handling.
> >
> > When processing cmdline arguments in DPDK, we always do so with very
> > little context. So, for example, when processing the "-l" flag, we have
> > no idea whether there will be later a --proc-type=secondary flag. We
> > have all sorts of post-arg-processing checks in place to try and catch
> > these scenarios.
> >
> > To improve this situation, this patchset tries to simplify the handling
> > of argument processing, by explicitly doing an initial pass to collate
> > all arguments into a structure. Thereafter, the actual arg parsing is
> > done in a fixed order, meaning that e.g. when processing the
> > --main-lcore flag, we have already processed the service core flags. We
> > also can far quicker and easier check for conflicting options, since
> > they can all be checked for NULL/non-NULL in the arg structure
> > immediately after the struct has been populated.
> >
> > An additional benefit of this work is that the argument parsing for EAL
> > is much more centralised into common options and the options list file.
> > This single list with ifdefs makes it clear to the viewer what options
> > are common across OS's, vs what are unix-only or linux-only.
> >
> > Once the cleanup and rework is done, adding the new options for
> > remapping cores becomes a lot simpler, since we can very easily check
> > for scenarios like multi-process and handle those appropriately.
> >
> >
> > V11:
> > * fix issues flagged by unit tests in CI and subsequent testing:
> >   - when passing in an lcore >= MAX_LCORES, return error rather than
> >     ignoring it. (compatibility issue)
> >   - return error when an invalid lcore set of "1-3-5" is passed in,
> >     rather than just treating it as "3-5".
> 
> I did some tweaking on the series (and put my sob for taking the
> bullet if I broke something ;-)), namely:
> - squashed the init arg list patch into the initial patch that
> introduces eal_option_list.h,
> - inverted order of the patch on coremask rework with the one
> introducing lcore remapping,
> - I updated patch 6 as Chengwen requested, and I fixed return codes
> for parse_arg_corelist(),
> - I updated the doc for patch 8 as Chengwen requested,
> 

Thanks for the rework.

> I fixed a few reintroductions of socket-mem (should be numa-mem) in
> intermediate patches.
> 

Don't both need to be supported? With the current implementation I think
they are just aliases for one another. [Though maybe I'm misunderstanding the
rework here]

> I noticed that the leak reported earlier on patch "eal: gather EAL
> args before processing" is still present when stopping at this commit,
> and it is fixed in the next commit.
> We could have avoided this transient issue, but I did not spend time on a
> fix as it is just a leak in the event that wrong EAL options are passed.
> 
> There were some little checkpatch issues I fixed (plus some spurious
> empty lines/spaces).
> But I left the options definitions as is (wrt line length warning).
> 

Thanks. I think the options are better kept one-per-line irrespective of
checkpatch warnings.

> Wrt storing the cores as a fixed-size cpuset, this storage is internal
> and we can change it in the future (no ABI concern afaics).
> 
> Series applied.
> 
> Thanks Chengwen for the reviews on argparse.
> 
> Thanks Bruce, this was kind of an unexpectedly long road.
> This is a nice cleanup, and I like this automagic option and the debug logs.
> 

Thanks. It was indeed a long road, and far more churn than expected, since
I started out with only 5 or 7 patches in initial versions and ended up
with over 20!

> 
> I have two questions which could be addressed in followup patches but
> seem more risky than what I touched, and require a new round of CI:
> - are we missing a build check on RTE_MAX_LCORE < CPU_SETSIZE?

Maybe. However, I think the best way to fix that is as part of any cleanup
work to eliminate any dependency on CPU_SETSIZE. That will probably be its
own epic patchset!
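For reference, the kind of build check being asked about could be as small as the snippet below. This is only a sketch of the idea, not something in the applied series, and where such a check should live is a separate question.

/* Hypothetical compile-time guard, not part of the applied series.
 * CPU_SETSIZE comes from <sched.h> (glibc exposes it with _GNU_SOURCE,
 * which DPDK builds already define). */
#include <assert.h>
#include <sched.h>
#include <rte_config.h> /* RTE_MAX_LCORE */

static_assert(RTE_MAX_LCORE <= CPU_SETSIZE,
	"RTE_MAX_LCORE must not exceed CPU_SETSIZE");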

> - should eal_clean_saved_args() be called in rte_eal_cleanup()?
> 
> 
> And finally my dumb idea:
> 
> Do you think it would be feasible to extend this remapping mechanism
> to multi-process?
> I would like to start all processes with only the -R option (and each
> application has a dedicated cpu affinity, set by an external mechanism
> outside of DPDK).
> Then some exchanges between primary and secondary processes are done
> at init, with the secondary announcing the number of lcores it needs,
> and the primary replying with an lcore id base for remapping.
> One problem is that it would require tracking the life and death of the
> secondary processes so that the primary can reallocate unused lcore
> ranges.
> 
I think that is a tough ask. I don't think I'll attempt to implement that.
Having users of secondary processes manage each process's own starting
lcore id is probably not a massive ask.
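For anyone curious what the exchange David sketches above might look like, here is a rough, purely hypothetical secondary-side sketch using the existing rte_mp request/reply API. The message name and payload structs are invented; no such handler exists in DPDK today, and the primary side would need a matching rte_mp_action_register() handler that picks a free range and replies with its base.

/* Hypothetical lcore-base negotiation, secondary side only. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <rte_eal.h>
#include <rte_string_fns.h>

struct lcore_base_req {  /* invented payload */
	uint32_t nb_lcores;  /* how many lcore ids this secondary needs */
};

struct lcore_base_resp { /* invented payload */
	int32_t base;        /* first lcore id to remap onto, <0 on refusal */
};

static int
request_lcore_base(uint32_t nb_lcores)
{
	struct rte_mp_msg req;
	struct rte_mp_reply reply;
	struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
	struct lcore_base_req r = { .nb_lcores = nb_lcores };
	struct lcore_base_resp resp;

	memset(&req, 0, sizeof(req));
	rte_strlcpy(req.name, "lcore_base_req", sizeof(req.name));
	memcpy(req.param, &r, sizeof(r));
	req.len_param = sizeof(r);

	if (rte_mp_request_sync(&req, &reply, &ts) < 0)
		return -1;
	if (reply.nb_received != 1) {
		free(reply.msgs);
		return -1;
	}
	memcpy(&resp, reply.msgs[0].param, sizeof(resp));
	free(reply.msgs); /* reply messages are allocated by EAL */
	return resp.base; /* caller remaps its lcore ids from here */
}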

Thanks again for all the reviews.

/Bruce

^ permalink raw reply	[relevance 0%]

* RE: [EXTERNAL] Re: [PATCH] rawdev: fix device ID retrieval function prototype
  @ 2025-10-17 11:05  3%       ` Akhil Goyal
  2025-10-17 15:52  0%         ` Thomas Monjalon
  0 siblings, 1 reply; 77+ results
From: Akhil Goyal @ 2025-10-17 11:05 UTC (permalink / raw)
  To: Thomas Monjalon, Nawal Kishor
  Cc: dev, Sachin Saxena, Hemant Agrawal, Jerin Jacob, Ashwin Sekhar T K

Hi Thomas,
> 16/10/2025 08:56, Nawal Kishor:
> >
> > >> Fixed rte_rawdev_get_dev_id() function prototype and its usage.
> > >
> > >What? Why?
> >
> > >[...]
> > >> -uint16_t
> > >> +int
> > >>  rte_rawdev_get_dev_id(const char *name);
> >
> > >Other functions handle dev_id as uint16_t, so why change this function?
> >
> > The spec says that rte_rawdev_get_dev_id() returns a negative number
> > in case of failure.
> > But in the definition it returns uint16_t, which can never be negative,
> > hence it was changed to int.
> >
> > If this is not acceptable, what fix will you suggest?
> 
> You should change to int16_t for all rawdev id parameters.
> 
Won't that be an API/ABI break for all the APIs?
We have similar rte_cryptodev_get_dev_id() and rte_event_dev_get_dev_id() APIs, which
return a negative value in case of failure and a valid device id otherwise.
dev_id is defined as an unsigned value and is used everywhere in all APIs of cryptodev, eventdev and rawdev.
The negative value here just denotes that the API failed to retrieve the dev_id, and the application should take action and not proceed further with that value.
In my opinion, changing dev_id to a signed value is not necessary just because this API may return a negative value on failure.
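To make the convention concrete, this is roughly the calling pattern the existing cryptodev/eventdev lookups already imply, and what the proposed rawdev change would mirror. A sketch only, not code from any of the patches.

/* Lookup returns int so failure can be signalled; dev_id stays unsigned. */
#include <stdint.h>
#include <rte_cryptodev.h>

static int
lookup_crypto_dev(const char *name, uint16_t *dev_id)
{
	int ret = rte_cryptodev_get_dev_id(name);

	if (ret < 0)
		return ret;          /* not found: never use this as an id */

	*dev_id = (uint16_t)ret;     /* valid ids fit in uint16_t */
	return 0;
}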

^ permalink raw reply	[relevance 3%]

* Re: [EXTERNAL] Re: [PATCH] rawdev: fix device ID retrieval function prototype
  2025-10-17 11:05  3%       ` [EXTERNAL] " Akhil Goyal
@ 2025-10-17 15:52  0%         ` Thomas Monjalon
  0 siblings, 0 replies; 77+ results
From: Thomas Monjalon @ 2025-10-17 15:52 UTC (permalink / raw)
  To: Nawal Kishor, Akhil Goyal
  Cc: dev, Sachin Saxena, Hemant Agrawal, Jerin Jacob, Ashwin Sekhar T K

17/10/2025 13:05, Akhil Goyal:
> Hi Thomas,
> > 16/10/2025 08:56, Nawal Kishor:
> > >
> > > >> Fixed rte_rawdev_get_dev_id() function prototype and its usage.
> > > >
> > > >What? Why?
> > >
> > > >[...]
> > > >> -uint16_t
> > > >> +int
> > > >>  rte_rawdev_get_dev_id(const char *name);
> > >
> > > >Other functions handle dev_id as uint16_t, so why change this function?
> > >
> > > The spec says that rte_rawdev_get_dev_id() returns a negative number
> > > in case of failure.
> > > But in the definition it returns uint16_t, which can never be negative,
> > > hence it was changed to int.
> > >
> > > If this is not acceptable, what fix will you suggest?
> > 
> > You should change to int16_t for all rawdev id parameters.
> > 
> Won't that be an API/ABI break for all the APIs?
> We have similar rte_cryptodev_get_dev_id() and rte_event_dev_get_dev_id() APIs, which
> return a negative value in case of failure and a valid device id otherwise.
> dev_id is defined as an unsigned value and is used everywhere in all APIs of cryptodev, eventdev and rawdev.
> The negative value here just denotes that the API failed to retrieve the dev_id, and the application should take action and not proceed further with that value.
> In my opinion, changing dev_id to a signed value is not necessary just because this API may return a negative value on failure.

OK, but then it means that very high device IDs would be considered an error.
It is not realistic, but it is theoretically wrong.
OK to change only this function then.



^ permalink raw reply	[relevance 0%]

2024-05-29 23:33     [PATCH v10 00/20] Remove use of noninclusive term sanity Stephen Hemminger
2025-04-02 23:23     ` [PATCH v12 00/10] replace use of term sanity check Stephen Hemminger
2025-04-02 23:23       ` [PATCH v12 01/10] mbuf: replace " Stephen Hemminger
2025-08-11  9:55  0%     ` Morten Brørup
2025-08-11 15:20  0%       ` Stephen Hemminger
2024-11-04  9:36     [PATCH v8 1/3] cryptodev: add ec points to sm2 op Arkadiusz Kusztal
2025-08-22 11:13  4% ` [dpdk-dev v9 " Kai Ji
2025-04-16 11:05     [PATCH] doc: announce DMA configuration structure changes pbhagavatula
2025-06-24  6:22     ` [EXTERNAL] " Amit Prakash Shukla
2025-07-21 17:49       ` Thomas Monjalon
2025-07-25  6:04  0%     ` Pavan Nikhilesh Bhagavatula
2025-07-26  0:55  0%       ` fengchengwen
2025-07-28  5:11  4%         ` Pavan Nikhilesh Bhagavatula
2025-08-12 10:59  0%           ` Thomas Monjalon
2025-05-20 16:40     [RFC PATCH 0/7] rework EAL argument parsing in DPDK Bruce Richardson
2025-07-23 16:19     ` [PATCH v7 00/13] Simplify running with high-numbered CPUs Bruce Richardson
2025-08-29 14:39  3%   ` Bruce Richardson
2025-10-09 13:00     ` [PATCH v11 00/21] Bruce Richardson
2025-10-15 16:10  3%   ` David Marchand
2025-10-15 16:31  0%     ` Bruce Richardson
2025-05-27 14:04     [PATCH] doc: fix anchors namespace in guides Nandini Persad
2025-10-02 11:32  7% ` [PATCH v2 1/2] doc: remove unused anchors David Marchand
2025-05-30 21:09     [PATCH] app/compress-perf: support dictionary files Sameer Vaze
2025-06-04 16:41     ` Sameer Vaze
2025-06-17 21:34       ` [EXTERNAL] " Akhil Goyal
2025-09-18 21:18  4%     ` Sameer Vaze
2025-09-19  5:08  0%       ` Akhil Goyal
2025-09-19 16:00  0%         ` Sameer Vaze
2025-09-30 15:27  0%           ` Sameer Vaze
2025-06-05 11:31     [PATCH] ethdev: add support to provide link type skori
2025-06-06  9:28     ` [PATCH v2 1/1] " skori
2025-06-06  9:54       ` Morten Brørup
2025-06-06 15:23         ` Stephen Hemminger
2025-06-10  5:02           ` [EXTERNAL] " Sunil Kumar Kori
2025-06-10  6:45             ` Morten Brørup
2025-08-13  7:42  4%           ` Sunil Kumar Kori
2025-06-19  7:10     [PATCH 00/10] Run with UBSan in GHA David Marchand
2025-07-23 13:31     ` [PATCH v5 00/22] " David Marchand
2025-07-23 13:31  8%   ` [PATCH v5 12/22] ipc: fix mp message alignment for malloc David Marchand
2025-07-10  8:51     [PATCH v0 1/1] doc: announce inter-device DMA capability support in dmadev Vamsi Krishna
2025-07-15  0:59     ` fengchengwen
2025-07-15  5:35       ` [EXTERNAL] " Vamsi Krishna Attunuru
2025-07-16  4:14         ` fengchengwen
2025-07-16 10:59           ` Vamsi Krishna Attunuru
2025-07-17  1:40             ` fengchengwen
2025-07-18  2:29               ` Vamsi Krishna Attunuru
2025-07-28  5:35                 ` Vamsi Krishna Attunuru
2025-07-30  4:36                   ` Vamsi Krishna Attunuru
2025-08-13 16:46  3%                 ` Vamsi Krishna Attunuru
2025-08-14  0:44  0%                   ` fengchengwen
2025-07-22 13:24     [PATCH 1/2] version: 25.11.0-rc0 David Marchand
2025-07-22 13:24     ` [PATCH 2/2] net: remove v25 ABI compatibility David Marchand
2025-07-23 12:14       ` Bruce Richardson
2025-07-24 10:10  4%     ` Finn, Emma
2025-08-12  2:33     [PATCH 0/3] vhost_user: configure memory slots Pravin M Bathija
2025-08-12  2:33     ` [PATCH 3/3] vhost_user: support for memory regions Pravin M Bathija
2025-08-29 11:59  3%   ` Maxime Coquelin
2025-10-08  9:23  0%     ` Bathija, Pravin
2025-08-18 23:27     [RFC 00/47] resolve issues with sys/queue.h Stephen Hemminger
2025-08-26 14:48     ` [PATCH v3 0/4] Cuckoo hash cleanup and optimizations Stephen Hemminger
2025-08-26 14:48  7%   ` [PATCH v3 1/4] hash: move table of hash compare functions out of header Stephen Hemminger
2025-08-21  5:32     [PATCH v8 1/1] ethdev: add support to provide link type skori
2025-09-01  5:44  3% ` [PATCH v9 " skori
2025-09-08  8:51  3% ` [PATCH v10 " skori
2025-09-11  8:48  3%   ` [PATCH v11 1/1] ethdev: add link connector type skori
2025-09-11  9:41  0%     ` Morten Brørup
2025-09-11 10:37  0%       ` Sunil Kumar Kori
2025-09-11 10:34  3%     ` [PATCH v12 " skori
2025-08-21 20:35     [RFC 0/3] hash: optimize compare logic Stephen Hemminger
2025-08-21 20:35  7% ` [RFC 1/3] hash: move table of hash compare functions out of header Stephen Hemminger
2025-08-22  9:05  0%   ` Morten Brørup
2025-08-22 18:19     ` [PATCH v2 0/4] Cuckoo hash cleanup and optimizations Stephen Hemminger
2025-08-22 18:19  7%   ` [PATCH v2 1/4] hash: move table of hash compare functions out of header Stephen Hemminger
2025-08-27 15:38  3% [PATCH v1] pcapng: allow any protocol link type for the interface block Schneide
2025-08-27 22:32  3% ` [PATCH v2] " Schneide
2025-08-28  2:46  1% [PATCH] dpdk: support quick jump to API definition Chengwen Feng
2025-08-28  2:59  1% Chengwen Feng
2025-08-29  2:34  1% ` [PATCH v2 0/3] " Chengwen Feng
2025-08-29  2:34  9%   ` [PATCH v2 3/3] doc: update ABI versioning guide Chengwen Feng
2025-09-01  1:21  1% ` [PATCH v3 0/5] add semicolon when export any symbol Chengwen Feng
2025-09-01  1:21  9%   ` [PATCH v3 5/5] doc: update ABI versioning guide Chengwen Feng
2025-09-01 10:46  1% ` [PATCH v4 0/5] add semicolon when export any symbol Chengwen Feng
2025-09-01 10:46  9%   ` [PATCH v4 5/5] doc: update ABI versioning guide Chengwen Feng
2025-09-03  2:05  1% ` [PATCH v5 0/5] add semicolon when export any symbol Chengwen Feng
2025-09-03  2:05  9%   ` [PATCH v5 5/5] doc: update ABI versioning guide Chengwen Feng
2025-09-03  7:04  0%   ` [PATCH v5 0/5] add semicolon when export any symbol David Marchand
2025-09-04  0:24  0%     ` fengchengwen
2025-08-28  7:06     [RFC] cryptodev: support PQC ML algorithms Gowrishankar Muthukrishnan
2025-09-30 18:03     ` [PATCH v1 0/3] " Gowrishankar Muthukrishnan
2025-09-30 18:03  3%   ` [PATCH v1 1/3] " Gowrishankar Muthukrishnan
2025-10-01  7:37       ` [PATCH v2 0/3] " Gowrishankar Muthukrishnan
2025-10-01  7:37  3%     ` [PATCH v2 1/3] " Gowrishankar Muthukrishnan
2025-10-01 17:56         ` [PATCH v3 0/3] " Gowrishankar Muthukrishnan
2025-10-01 17:56  3%       ` [PATCH v3 1/3] " Gowrishankar Muthukrishnan
2025-10-03 14:24  0%         ` Akhil Goyal
2025-10-04  3:22           ` [PATCH v4 0/3] " Gowrishankar Muthukrishnan
2025-10-04  3:22  3%         ` [PATCH v4 1/3] " Gowrishankar Muthukrishnan
2025-09-03  7:28     [RFC 0/8] Cleanup VFIO API and import Linux uAPI header David Marchand
2025-09-03  7:28  1% ` [RFC 7/8] uapi: import VFIO header David Marchand
2025-09-03 15:17     ` [RFC v2 0/9] Cleanup VFIO API and import Linux uAPI header David Marchand
2025-09-03 15:17  1%   ` [RFC v2 8/9] uapi: import VFIO header David Marchand
2025-09-19  8:37     ` [PATCH v3 00/10] Cleanup VFIO API and import Linux uAPI header David Marchand
2025-09-19  8:38  1%   ` [PATCH v3 09/10] uapi: import VFIO header David Marchand
2025-09-12  9:28  4% [DPDK/meson Bug 1787] ARM toolchin prefix changed in newest toolchain bugzilla
2025-09-15 18:54     [PATCH 0/1] ring: correct ordering issue in head/tail update Wathsala Vithanage
2025-09-15 18:54     ` [PATCH 1/1] ring: safe partial ordering for " Wathsala Vithanage
2025-09-16 22:57       ` Konstantin Ananyev
     [not found]         ` <2a611c3cf926d752a54b7655c27d6df874a2d0de.camel@arm.com>
2025-09-17  7:58           ` Konstantin Ananyev
2025-09-17  9:05             ` Ola Liljedahl
2025-09-20 12:01  3%           ` Konstantin Ananyev
     [not found]                 ` <cf7e14d4ba5e9d78fddf083b6c92d75942447931.camel@arm.com>
2025-09-22  7:12  0%               ` Konstantin Ananyev
2025-09-23 21:57  0%             ` Ola Liljedahl
2025-09-24  6:56  0%               ` Konstantin Ananyev
2025-09-24  7:50  0%                 ` Konstantin Ananyev
     [not found]     <0250818233102.180207-1-stephen@networkplumber.org>
2025-09-16 15:00     ` [PATCH v4 0/4] Cuckoo hash optimization for small sizes Stephen Hemminger
2025-09-16 15:00  7%   ` [PATCH v4 1/4] hash: move table of hash compare functions out of header Stephen Hemminger
2025-09-18  7:28     [PATCH 0/3] lib: fix AVX2 checks and macro exposure Thomas Monjalon
2025-09-18  8:10  4% ` Thomas Monjalon
2025-09-18  8:59  0%   ` Bruce Richardson
2025-09-19  7:57  5% [PATCH] build: remove deprecated kmods option Bruce Richardson
2025-09-19  8:44  5% ` [PATCH v2] " Bruce Richardson
2025-09-23 14:40  4% ` [PATCH v3] " Bruce Richardson
2025-09-22 11:07     [PATCH 1/2] build: add backward compatibility for nested drivers Kevin Traynor
2025-09-22 15:51     ` Thomas Monjalon
2025-09-23 13:08  3%   ` Kevin Traynor
2025-09-23 13:28  0%     ` Bruce Richardson
2025-09-24  8:43  0%       ` Thomas Monjalon
2025-09-23 14:12     [RFC PATCH 0/6] remove deprecated queue stats Bruce Richardson
2025-09-23 14:12  4% ` [RFC PATCH 6/6] doc: update docs for ethdev changes Bruce Richardson
2025-09-29 15:00     ` [PATCH v2 0/6] remove deprecated queue stats Bruce Richardson
2025-09-29 15:00  4%   ` [PATCH v2 6/6] doc: update docs for ethdev changes Bruce Richardson
2025-10-03 11:01     ` [PATCH v3 0/7] remove deprecated queue stats Bruce Richardson
2025-10-03 11:02  4%   ` [PATCH v3 6/7] doc: update docs for ethdev changes Bruce Richardson
2025-09-23 15:07 14% [PATCH] config/riscv: add rv64gcv cross compilation target sunyuechi
2025-10-06 12:43  4% ` sunyuechi
2025-09-24  5:56     [PATCH] rawdev: fix device ID retrieval function prototype Nawal Kishor
2025-10-15 15:05     ` Thomas Monjalon
2025-10-16  6:56       ` Nawal Kishor
2025-10-16  8:08         ` Thomas Monjalon
2025-10-17 11:05  3%       ` [EXTERNAL] " Akhil Goyal
2025-10-17 15:52  0%         ` Thomas Monjalon
2025-09-24 16:51  3% [RFC 0/6] get rid of pthread_cancel Stephen Hemminger
