DPDK patches and discussions
* Re: [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  2020-06-21  6:26  0%         ` Matan Azrad
@ 2020-06-22  8:06  0%           ` Maxime Coquelin
  0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2020-06-22  8:06 UTC (permalink / raw)
  To: Matan Azrad, Xiao Wang; +Cc: dev



On 6/21/20 8:26 AM, Matan Azrad wrote:
> Hi Maxime
> 
> From: Maxime Coquelin:
>> On 6/19/20 3:28 PM, Matan Azrad wrote:
>>>
>>>
>>> From: Maxime Coquelin:
>>>> On 6/18/20 6:28 PM, Matan Azrad wrote:
>>>>> As a preparation for per-queue operations in the vDPA device, the
>>>>> following experimental API needs to change:
>>>>>
>>>>> The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
>>>>> instead of per device.
>>>>>
>>>>> A `qid` parameter was added to the API arguments list.
>>>>>
>>>>> Setting the parameter to the value VHOST_QUEUE_ALL will configure
>>>>> the host notifier for all the device queues, as done before this patch.
>>>>>
>>>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>>>>> ---
>>>>>  doc/guides/rel_notes/release_20_08.rst |  2 ++
>>>>>  drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
>>>>>  drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
>>>>>  lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
>>>>>  lib/librte_vhost/rte_vhost.h           |  2 ++
>>>>>  lib/librte_vhost/vhost.h               |  3 ---
>>>>>  lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
>>>>>  7 files changed, 30 insertions(+), 14 deletions(-)
>>>>>
>>>>> diff --git a/doc/guides/rel_notes/release_20_08.rst
>>>>> b/doc/guides/rel_notes/release_20_08.rst
>>>>> index ba16d3b..9732959 100644
>>>>> --- a/doc/guides/rel_notes/release_20_08.rst
>>>>> +++ b/doc/guides/rel_notes/release_20_08.rst
>>>>> @@ -111,6 +111,8 @@ API Changes
>>>>>     Also, make sure to start the actual text at the margin.
>>>>>
>>>>
>> =========================================================
>>>>>
>>>>> +* vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to
>>>>> +be per
>>>>> +  queue and not per device, a qid parameter was added to the
>>>>> +arguments
>>>> list.
>>>>>
>>>>>  ABI Changes
>>>>>  -----------
>>>>> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
>>>>> b/drivers/vdpa/ifc/ifcvf_vdpa.c index ec97178..336837a 100644
>>>>> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
>>>>> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
>>>>> @@ -839,7 +839,7 @@ struct internal_list {
>>>>>  	vdpa_ifcvf_stop(internal);
>>>>>  	vdpa_disable_vfio_intr(internal);
>>>>>
>>>>> -	ret = rte_vhost_host_notifier_ctrl(vid, false);
>>>>> +	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
>>>>>  	if (ret && ret != -ENOTSUP)
>>>>>  		goto error;
>>>>>
>>>>> @@ -858,7 +858,7 @@ struct internal_list {
>>>>>  	if (ret)
>>>>>  		goto stop_vf;
>>>>>
>>>>> -	rte_vhost_host_notifier_ctrl(vid, true);
>>>>> +	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
>>>>>
>>>>>  	internal->sw_fallback_running = true;
>>>>>
>>>>> @@ -893,7 +893,7 @@ struct internal_list {
>>>>>  	rte_atomic32_set(&internal->dev_attached, 1);
>>>>>  	update_datapath(internal);
>>>>>
>>>>> -	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
>>>>> +	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
>>>>>  		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
>>>>>
>>>>>  	return 0;
>>>>> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c
>>>>> b/drivers/vdpa/mlx5/mlx5_vdpa.c index 9e758b6..8ea1300 100644
>>>>> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
>>>>> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
>>>>> @@ -147,7 +147,8 @@
>>>>>  	int ret;
>>>>>
>>>>>  	if (priv->direct_notifier) {
>>>>> -		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
>>>>> +		ret = rte_vhost_host_notifier_ctrl(priv->vid,
>>>> VHOST_QUEUE_ALL,
>>>>> +						   false);
>>>>>  		if (ret != 0) {
>>>>>  			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
>>>>>  				"destroyed for device %d: %d.", priv->vid,
>>>> ret); @@ -155,7 +156,7
>>>>> @@
>>>>>  		}
>>>>>  		priv->direct_notifier = 0;
>>>>>  	}
>>>>> -	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
>>>>> +	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
>>>>> +true);
>>>>>  	if (ret != 0)
>>>>>  		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured
>>>> for"
>>>>>  			" device %d: %d.", priv->vid, ret); diff --git
>>>>> a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h index
>>>>> ecb3d91..2db536c 100644
>>>>> --- a/lib/librte_vhost/rte_vdpa.h
>>>>> +++ b/lib/librte_vhost/rte_vdpa.h
>>>>> @@ -202,22 +202,26 @@ struct rte_vdpa_device *  int
>>>>> rte_vdpa_get_device_num(void);
>>>>>
>>>>> +#define VHOST_QUEUE_ALL VHOST_MAX_VRING
>>>>> +
>>>>>  /**
>>>>>   * @warning
>>>>>   * @b EXPERIMENTAL: this API may change without prior notice
>>>>>   *
>>>>> - * Enable/Disable host notifier mapping for a vdpa port.
>>>>> + * Enable/Disable host notifier mapping for a vdpa queue.
>>>>>   *
>>>>>   * @param vid
>>>>>   *  vhost device id
>>>>>   * @param enable
>>>>>   *  true for host notifier map, false for host notifier unmap
>>>>> + * @param qid
>>>>> + *  vhost queue id, VHOST_QUEUE_ALL to configure all the device
>>>>> + queues
>>>> I would prefer two APIs over passing a special ID that means all queues:
>>>>
>>>> rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
>>>> rte_vhost_host_notifier_ctrl_all(int vid, bool enable);
>>>>
>>>> I think it is clearer for the user of the API.
>>>> Or if you think an extra API is overkill, just let the driver loop on
>>>> all the queues.
>>>
>>> We have a lot of options here with pros and cons.
>>> I took the rte_eth_dev_callback_register style.
>>
>> Ok, I didn't look at this code.
>>
>>> It is less intrusive with minimum code change.
>>>
>>> I'm not sure what is the clearest option but the current suggestion is
>>> well defined and allows configuring all the queues too.
>>>
>>> Let me know what you prefer....
>>
>> I personally don't like the style, but I can live with it if you prefer doing it like
>> that.
>>
>> If you do it that way, you will have to rename VHOST_QUEUE_ALL to
>> RTE_VHOST_QUEUE_ALL, VHOST_MAX_VRING  to RTE_VHOST_MAX_VRING
>> and VHOST_MAX_QUEUE_PAIRS to RTE_VHOST_MAX_QUEUE_PAIRS as it
>> will become part of the ABI.
>>
>> Note that it also means that we won't be able to increase the maximum
>> number of rings without breaking the ABI.
> 
> What about defining RTE_VHOST_QUEUE_ALL as UINT16_MAX?

I am not a fan, but it is better than basing it on VHOST_MAX_QUEUE_PAIRS.
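
For reference, the driver-side loop would look something like this (a
minimal sketch against the per-queue API from this patch; nr_vring is a
hypothetical driver-known queue count and the prototype comes from
rte_vdpa.h):

	#include <stdint.h>
	#include <stdbool.h>
	#include <errno.h>
	#include <rte_vdpa.h>

	/* Configure host notifiers one queue at a time instead of
	 * passing a special "all queues" id. */
	static int
	vdpa_notifier_ctrl_all(int vid, uint16_t nr_vring, bool enable)
	{
		uint16_t qid;
		int ret;

		for (qid = 0; qid < nr_vring; qid++) {
			ret = rte_vhost_host_notifier_ctrl(vid, qid, enable);
			if (ret != 0 && ret != -ENOTSUP)
				return ret;
		}
		return 0;
	}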

>>>>>   * @return
>>>>>   *  0 on success, -1 on failure
>>>>>   */
>>>>>  __rte_experimental
>>>>>  int
>>>>> -rte_vhost_host_notifier_ctrl(int vid, bool enable);
>>>>> +rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
>>>>>
>>>>>  /**
>>>
> 



* [dpdk-dev] [PATCH v8 9/9] build: generate version.map file for MinGW on Windows
  @ 2020-06-22  7:55  4%   ` talshn
  0 siblings, 0 replies; 200+ results
From: talshn @ 2020-06-22  7:55 UTC (permalink / raw)
  To: dev
  Cc: thomas, pallavi.kadam, dmitry.kozliuk, david.marchand, grive,
	ranjit.menon, navasile, harini.ramakrishnan, ocardona,
	anatoly.burakov, fady, bruce.richardson, Tal Shnaiderman

From: Tal Shnaiderman <talshn@mellanox.com>

The MinGW build for Windows has special cases where exported
functions contain an additional prefix:

__emutls_v.per_lcore__*

To avoid adding those prefixed functions to the version.map file,
the map_to_def.py script was modified to create a map file for MinGW
with the needed changes.

The file name was changed to map_to_win.py, and the lib/meson.build map
output was unified with the drivers/meson.build output.

Signed-off-by: Tal Shnaiderman <talshn@mellanox.com>
---
 buildtools/{map_to_def.py => map_to_win.py} | 11 ++++++++++-
 buildtools/meson.build                      |  4 ++--
 drivers/meson.build                         | 12 +++++++++---
 lib/meson.build                             | 19 ++++++++++++++-----
 4 files changed, 35 insertions(+), 11 deletions(-)
 rename buildtools/{map_to_def.py => map_to_win.py} (69%)
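
As an illustration (not part of the patch; per_lcore__thread_id is just an
example symbol), a version.map fragment such as:

	DPDK_21 {
		global:

		per_lcore__thread_id;
	} DPDK_20.0;

is emitted into the generated *_mingw.map as:

	DPDK_21 {
		global:

		__emutls_v.per_lcore__thread_id;
	} DPDK_20.0;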

diff --git a/buildtools/map_to_def.py b/buildtools/map_to_win.py
similarity index 69%
rename from buildtools/map_to_def.py
rename to buildtools/map_to_win.py
index 6775b54a9d..a539f2129c 100644
--- a/buildtools/map_to_def.py
+++ b/buildtools/map_to_win.py
@@ -10,12 +10,21 @@
 def is_function_line(ln):
     return ln.startswith('\t') and ln.endswith(';\n') and ":" not in ln
 
+# MinGW keeps the original .map file but replaces per_lcore__* to __emutls_v.per_lcore__*
+def create_mingw_map_file(input_map, output_map):
+    with open(input_map) as f_in, open(output_map, 'w') as f_out:
+        f_out.writelines([lines.replace('per_lcore__', '__emutls_v.per_lcore__') for lines in f_in.readlines()])
 
 def main(args):
     if not args[1].endswith('version.map') or \
-            not args[2].endswith('exports.def'):
+            not args[2].endswith('exports.def') and \
+            not args[2].endswith('mingw.map'):
         return 1
 
+    if args[2].endswith('mingw.map'):
+        create_mingw_map_file(args[1], args[2])
+        return 0
+
 # special case, allow override if an def file already exists alongside map file
     override_file = join(dirname(args[1]), basename(args[2]))
     if exists(override_file):
diff --git a/buildtools/meson.build b/buildtools/meson.build
index d5f8291beb..f9d2fdf74b 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -9,14 +9,14 @@ list_dir_globs = find_program('list-dir-globs.py')
 check_symbols = find_program('check-symbols.sh')
 ldflags_ibverbs_static = find_program('options-ibverbs-static.sh')
 
-# set up map-to-def script using python, either built-in or external
+# set up map-to-win script using python, either built-in or external
 python3 = import('python').find_installation(required: false)
 if python3.found()
 	py3 = [python3]
 else
 	py3 = ['meson', 'runpython']
 endif
-map_to_def_cmd = py3 + files('map_to_def.py')
+map_to_win_cmd = py3 + files('map_to_win.py')
 sphinx_wrapper = py3 + files('call-sphinx-build.py')
 
 # stable ABI always starts with "DPDK_"
diff --git a/drivers/meson.build b/drivers/meson.build
index 646a7d5eb5..2cd8505d10 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -152,16 +152,22 @@ foreach class:dpdk_driver_classes
 			implib = 'lib' + lib_name + '.dll.a'
 
 			def_file = custom_target(lib_name + '_def',
-				command: [map_to_def_cmd, '@INPUT@', '@OUTPUT@'],
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
 				input: version_map,
 				output: '@0@_exports.def'.format(lib_name))
-			lk_deps = [version_map, def_file]
+
+			mingw_map = custom_target(lib_name + '_mingw',
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
+				input: version_map,
+				output: '@0@_mingw.map'.format(lib_name))
+
+			lk_deps = [version_map, def_file, mingw_map]
 			if is_windows
 				if is_ms_linker
 					lk_args = ['-Wl,/def:' + def_file.full_path(),
 						'-Wl,/implib:drivers\\' + implib]
 				else
-					lk_args = []
+					lk_args = ['-Wl,--version-script=' + mingw_map.full_path()]
 				endif
 			else
 				lk_args = ['-Wl,--version-script=' + version_map]
diff --git a/lib/meson.build b/lib/meson.build
index a8fd317a18..af66610fcb 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -149,19 +149,28 @@ foreach l:libraries
 					meson.current_source_dir(), dir_name, name)
 			implib = dir_name + '.dll.a'
 
-			def_file = custom_target(name + '_def',
-				command: [map_to_def_cmd, '@INPUT@', '@OUTPUT@'],
+			def_file = custom_target(libname + '_def',
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
 				input: version_map,
-				output: 'rte_@0@_exports.def'.format(name))
+				output: '@0@_exports.def'.format(libname))
+
+			mingw_map = custom_target(libname + '_mingw',
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
+				input: version_map,
+				output: '@0@_mingw.map'.format(libname))
 
 			if is_ms_linker
 				lk_args = ['-Wl,/def:' + def_file.full_path(),
 					'-Wl,/implib:lib\\' + implib]
 			else
-				lk_args = ['-Wl,--version-script=' + version_map]
+				if is_windows
+					lk_args = ['-Wl,--version-script=' + mingw_map.full_path()]
+				else
+					lk_args = ['-Wl,--version-script=' + version_map]
+				endif
 			endif
 
-			lk_deps = [version_map, def_file]
+			lk_deps = [version_map, def_file, mingw_map]
 			if not is_windows
 				# on unix systems check the output of the
 				# check-symbols.sh script, using it as a
-- 
2.16.1.windows.4




* Re: [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  2020-06-19 14:01  4%       ` Maxime Coquelin
@ 2020-06-21  6:26  0%         ` Matan Azrad
  2020-06-22  8:06  0%           ` Maxime Coquelin
  0 siblings, 1 reply; 200+ results
From: Matan Azrad @ 2020-06-21  6:26 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

Hi Maxime

From: Maxime Coquelin:
> On 6/19/20 3:28 PM, Matan Azrad wrote:
> >
> >
> > From: Maxime Coquelin:
> >> On 6/18/20 6:28 PM, Matan Azrad wrote:
> >>> As a preparation for per-queue operations in the vDPA device, the
> >>> following experimental API needs to change:
> >>>
> >>> The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
> >>> instead of per device.
> >>>
> >>> A `qid` parameter was added to the API arguments list.
> >>>
> >>> Setting the parameter to the value VHOST_QUEUE_ALL will configure
> >>> the host notifier for all the device queues, as done before this patch.
> >>>
> >>> Signed-off-by: Matan Azrad <matan@mellanox.com>
> >>> ---
> >>>  doc/guides/rel_notes/release_20_08.rst |  2 ++
> >>>  drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
> >>>  drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
> >>>  lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
> >>>  lib/librte_vhost/rte_vhost.h           |  2 ++
> >>>  lib/librte_vhost/vhost.h               |  3 ---
> >>>  lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
> >>>  7 files changed, 30 insertions(+), 14 deletions(-)
> >>>
> >>> diff --git a/doc/guides/rel_notes/release_20_08.rst
> >>> b/doc/guides/rel_notes/release_20_08.rst
> >>> index ba16d3b..9732959 100644
> >>> --- a/doc/guides/rel_notes/release_20_08.rst
> >>> +++ b/doc/guides/rel_notes/release_20_08.rst
> >>> @@ -111,6 +111,8 @@ API Changes
> >>>     Also, make sure to start the actual text at the margin.
> >>>
> >>
> =========================================================
> >>>
> >>> +* vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to
> >>> +be per
> >>> +  queue and not per device, a qid parameter was added to the
> >>> +arguments
> >> list.
> >>>
> >>>  ABI Changes
> >>>  -----------
> >>> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
> >>> b/drivers/vdpa/ifc/ifcvf_vdpa.c index ec97178..336837a 100644
> >>> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> >>> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> >>> @@ -839,7 +839,7 @@ struct internal_list {
> >>>  	vdpa_ifcvf_stop(internal);
> >>>  	vdpa_disable_vfio_intr(internal);
> >>>
> >>> -	ret = rte_vhost_host_notifier_ctrl(vid, false);
> >>> +	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
> >>>  	if (ret && ret != -ENOTSUP)
> >>>  		goto error;
> >>>
> >>> @@ -858,7 +858,7 @@ struct internal_list {
> >>>  	if (ret)
> >>>  		goto stop_vf;
> >>>
> >>> -	rte_vhost_host_notifier_ctrl(vid, true);
> >>> +	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
> >>>
> >>>  	internal->sw_fallback_running = true;
> >>>
> >>> @@ -893,7 +893,7 @@ struct internal_list {
> >>>  	rte_atomic32_set(&internal->dev_attached, 1);
> >>>  	update_datapath(internal);
> >>>
> >>> -	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
> >>> +	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
> >>>  		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
> >>>
> >>>  	return 0;
> >>> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c
> >>> b/drivers/vdpa/mlx5/mlx5_vdpa.c index 9e758b6..8ea1300 100644
> >>> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> >>> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> >>> @@ -147,7 +147,8 @@
> >>>  	int ret;
> >>>
> >>>  	if (priv->direct_notifier) {
> >>> -		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
> >>> +		ret = rte_vhost_host_notifier_ctrl(priv->vid,
> >> VHOST_QUEUE_ALL,
> >>> +						   false);
> >>>  		if (ret != 0) {
> >>>  			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
> >>>  				"destroyed for device %d: %d.", priv->vid,
> >> ret); @@ -155,7 +156,7
> >>> @@
> >>>  		}
> >>>  		priv->direct_notifier = 0;
> >>>  	}
> >>> -	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
> >>> +	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
> >>> +true);
> >>>  	if (ret != 0)
> >>>  		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured
> >> for"
> >>>  			" device %d: %d.", priv->vid, ret); diff --git
> >>> a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h index
> >>> ecb3d91..2db536c 100644
> >>> --- a/lib/librte_vhost/rte_vdpa.h
> >>> +++ b/lib/librte_vhost/rte_vdpa.h
> >>> @@ -202,22 +202,26 @@ struct rte_vdpa_device *  int
> >>> rte_vdpa_get_device_num(void);
> >>>
> >>> +#define VHOST_QUEUE_ALL VHOST_MAX_VRING
> >>> +
> >>>  /**
> >>>   * @warning
> >>>   * @b EXPERIMENTAL: this API may change without prior notice
> >>>   *
> >>> - * Enable/Disable host notifier mapping for a vdpa port.
> >>> + * Enable/Disable host notifier mapping for a vdpa queue.
> >>>   *
> >>>   * @param vid
> >>>   *  vhost device id
> >>>   * @param enable
> >>>   *  true for host notifier map, false for host notifier unmap
> >>> + * @param qid
> >>> + *  vhost queue id, VHOST_QUEUE_ALL to configure all the device
> >>> + queues
> >> I would prefer two APIs over passing a special ID that means all queues:
> >>
> >> rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
> >> rte_vhost_host_notifier_ctrl_all(int vid, bool enable);
> >>
> >> I think it is clearer for the user of the API.
> >> Or if you think an extra API is overkill, just let the driver loop on
> >> all the queues.
> >
> > We have a lot of options here with pros and cons.
> > I took the rte_eth_dev_callback_register style.
> 
> Ok, I didn't look at this code.
> 
> > It is less intrusive with minimum code change.
> >
> > I'm not sure what is the clearest option but the current suggestion is
> > well defined and allows configuring all the queues too.
> >
> > Let me know what you prefer....
> 
> I personally don't like the style, but I can live with it if you prefer doing it like
> that.
> 
> If you do it that way, you will have to rename VHOST_QUEUE_ALL to
> RTE_VHOST_QUEUE_ALL, VHOST_MAX_VRING  to RTE_VHOST_MAX_VRING
> and VHOST_MAX_QUEUE_PAIRS to RTE_VHOST_MAX_QUEUE_PAIRS as it
> will become part of the ABI.
> 
> Note that it also means that we won't be able to increase the maximum
> number of rings without breaking the ABI.

What about defining RTE_VHOST_QUEUE_ALL as UINT16_MAX?

> >>>   * @return
> >>>   *  0 on success, -1 on failure
> >>>   */
> >>>  __rte_experimental
> >>>  int
> >>> -rte_vhost_host_notifier_ctrl(int vid, bool enable);
> >>> +rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
> >>>
> >>>  /**
> >



* [dpdk-dev] [PATCH v2 2/9] eal: fix multiple definition of per lcore thread id
  @ 2020-06-19 16:22  3%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-06-19 16:22 UTC (permalink / raw)
  To: dev
  Cc: jerinjacobk, bruce.richardson, mdr, ktraynor, ian.stokes,
	i.maximets, Neil Horman, Cunming Liang, Konstantin Ananyev,
	Olivier Matz

Because of the inline accessor + static declaration in rte_gettid(),
we end up with multiple symbols for RTE_PER_LCORE(_thread_id).
Each compilation unit will pay a cost when accessing this information
for the first time.

$ nm build/app/dpdk-testpmd | grep per_lcore__thread_id
0000000000000054 d per_lcore__thread_id.5037
0000000000000040 d per_lcore__thread_id.5103
0000000000000048 d per_lcore__thread_id.5259
000000000000004c d per_lcore__thread_id.5259
0000000000000044 d per_lcore__thread_id.5933
0000000000000058 d per_lcore__thread_id.6261
0000000000000050 d per_lcore__thread_id.7378
000000000000005c d per_lcore__thread_id.7496
000000000000000c d per_lcore__thread_id.8016
0000000000000010 d per_lcore__thread_id.8431

Make it global as part of the DPDK_21 stable ABI.

Fixes: ef76436c6834 ("eal: get unique thread id")

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 lib/librte_eal/common/eal_common_thread.c | 1 +
 lib/librte_eal/include/rte_eal.h          | 3 ++-
 lib/librte_eal/rte_eal_version.map        | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)
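
To illustrate (a minimal Linux-only sketch, not DPDK code; the names below
are hypothetical): a static TLS variable defined inside a static inline
function in a header gets one instance per compilation unit that uses it,
which is what the nm output above shows. Defining it once in a .c file and
declaring it extern in the header, as this patch does, leaves a single
shared instance:

	#include <unistd.h>
	#include <sys/syscall.h>

	/* Problematic header pattern: every .c file that calls get_tid()
	 * carries its own copy of 'tid'. */
	static inline int get_tid(void)
	{
		static __thread int tid = -1;
		if (tid == -1)
			tid = (int)syscall(SYS_gettid);
		return tid;
	}

	/* Fixed pattern: one definition in a single .c file ... */
	__thread int shared_tid = -1;
	/* ... with only a declaration in the header:
	 * extern __thread int shared_tid; */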

diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
index a5f67d811c..280c64bb76 100644
--- a/lib/librte_eal/common/eal_common_thread.c
+++ b/lib/librte_eal/common/eal_common_thread.c
@@ -22,6 +22,7 @@
 #include "eal_thread.h"
 
 RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
+RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 static RTE_DEFINE_PER_LCORE(unsigned int, _socket_id) =
 	(unsigned int)SOCKET_ID_ANY;
 static RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index 2f9ed298de..2edf8c6556 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -447,6 +447,8 @@ enum rte_intr_mode rte_eal_vfio_intr_mode(void);
  */
 int rte_sys_gettid(void);
 
+RTE_DECLARE_PER_LCORE(int, _thread_id);
+
 /**
  * Get system unique thread id.
  *
@@ -456,7 +458,6 @@ int rte_sys_gettid(void);
  */
 static inline int rte_gettid(void)
 {
-	static RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 	if (RTE_PER_LCORE(_thread_id) == -1)
 		RTE_PER_LCORE(_thread_id) = rte_sys_gettid();
 	return RTE_PER_LCORE(_thread_id);
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 196eef5afa..0d42d44ce9 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -221,6 +221,13 @@ DPDK_20.0 {
 	local: *;
 };
 
+DPDK_21 {
+	global:
+
+	per_lcore__thread_id;
+
+} DPDK_20.0;
+
 EXPERIMENTAL {
 	global:
 
-- 
2.23.0



* Re: [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  2020-06-19 13:28  0%     ` Matan Azrad
@ 2020-06-19 14:01  4%       ` Maxime Coquelin
  2020-06-21  6:26  0%         ` Matan Azrad
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2020-06-19 14:01 UTC (permalink / raw)
  To: Matan Azrad, Xiao Wang; +Cc: dev



On 6/19/20 3:28 PM, Matan Azrad wrote:
> 
> 
> From: Maxime Coquelin:
>> On 6/18/20 6:28 PM, Matan Azrad wrote:
>>> As a preparation for per-queue operations in the vDPA device, the
>>> following experimental API needs to change:
>>>
>>> The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
>>> instead of per device.
>>>
>>> A `qid` parameter was added to the API arguments list.
>>>
>>> Setting the parameter to the value VHOST_QUEUE_ALL will configure
>>> the host notifier for all the device queues, as done before this patch.
>>>
>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>>> ---
>>>  doc/guides/rel_notes/release_20_08.rst |  2 ++
>>>  drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
>>>  drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
>>>  lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
>>>  lib/librte_vhost/rte_vhost.h           |  2 ++
>>>  lib/librte_vhost/vhost.h               |  3 ---
>>>  lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
>>>  7 files changed, 30 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/doc/guides/rel_notes/release_20_08.rst
>>> b/doc/guides/rel_notes/release_20_08.rst
>>> index ba16d3b..9732959 100644
>>> --- a/doc/guides/rel_notes/release_20_08.rst
>>> +++ b/doc/guides/rel_notes/release_20_08.rst
>>> @@ -111,6 +111,8 @@ API Changes
>>>     Also, make sure to start the actual text at the margin.
>>>
>> =========================================================
>>>
>>> +* vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to
>>> +be per
>>> +  queue and not per device, a qid parameter was added to the arguments
>> list.
>>>
>>>  ABI Changes
>>>  -----------
>>> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
>>> b/drivers/vdpa/ifc/ifcvf_vdpa.c index ec97178..336837a 100644
>>> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
>>> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
>>> @@ -839,7 +839,7 @@ struct internal_list {
>>>  	vdpa_ifcvf_stop(internal);
>>>  	vdpa_disable_vfio_intr(internal);
>>>
>>> -	ret = rte_vhost_host_notifier_ctrl(vid, false);
>>> +	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
>>>  	if (ret && ret != -ENOTSUP)
>>>  		goto error;
>>>
>>> @@ -858,7 +858,7 @@ struct internal_list {
>>>  	if (ret)
>>>  		goto stop_vf;
>>>
>>> -	rte_vhost_host_notifier_ctrl(vid, true);
>>> +	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
>>>
>>>  	internal->sw_fallback_running = true;
>>>
>>> @@ -893,7 +893,7 @@ struct internal_list {
>>>  	rte_atomic32_set(&internal->dev_attached, 1);
>>>  	update_datapath(internal);
>>>
>>> -	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
>>> +	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
>>>  		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
>>>
>>>  	return 0;
>>> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c
>>> b/drivers/vdpa/mlx5/mlx5_vdpa.c index 9e758b6..8ea1300 100644
>>> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
>>> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
>>> @@ -147,7 +147,8 @@
>>>  	int ret;
>>>
>>>  	if (priv->direct_notifier) {
>>> -		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
>>> +		ret = rte_vhost_host_notifier_ctrl(priv->vid,
>> VHOST_QUEUE_ALL,
>>> +						   false);
>>>  		if (ret != 0) {
>>>  			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
>>>  				"destroyed for device %d: %d.", priv->vid,
>> ret); @@ -155,7 +156,7
>>> @@
>>>  		}
>>>  		priv->direct_notifier = 0;
>>>  	}
>>> -	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
>>> +	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
>>> +true);
>>>  	if (ret != 0)
>>>  		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured
>> for"
>>>  			" device %d: %d.", priv->vid, ret); diff --git
>>> a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h index
>>> ecb3d91..2db536c 100644
>>> --- a/lib/librte_vhost/rte_vdpa.h
>>> +++ b/lib/librte_vhost/rte_vdpa.h
>>> @@ -202,22 +202,26 @@ struct rte_vdpa_device *  int
>>> rte_vdpa_get_device_num(void);
>>>
>>> +#define VHOST_QUEUE_ALL VHOST_MAX_VRING
>>> +
>>>  /**
>>>   * @warning
>>>   * @b EXPERIMENTAL: this API may change without prior notice
>>>   *
>>> - * Enable/Disable host notifier mapping for a vdpa port.
>>> + * Enable/Disable host notifier mapping for a vdpa queue.
>>>   *
>>>   * @param vid
>>>   *  vhost device id
>>>   * @param enable
>>>   *  true for host notifier map, false for host notifier unmap
>>> + * @param qid
>>> + *  vhost queue id, VHOST_QUEUE_ALL to configure all the device
>>> + queues
>> I would prefer two APIs over passing a special ID that means all queues:
>>
>> rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
>> rte_vhost_host_notifier_ctrl_all(int vid, bool enable);
>>
>> I think it is clearer for the user of the API.
>> Or if you think an extra API is overkill, just let the driver loop on all the
>> queues.
> 
> We have a lot of options here with pros and cons.
> I took the rte_eth_dev_callback_register style.

Ok, I didn't look at this code.

> It is less intrusive with minimum code change.  
> 
> I'm not sure what is the clearest option but the current suggestion is well defined and 
> allows configuring all the queues too.
> 
> Let me know what you prefer....

I personally don't like the style, but I can live with it if you prefer
doing it like that.

If you do it that way, you will have to rename VHOST_QUEUE_ALL to
RTE_VHOST_QUEUE_ALL, VHOST_MAX_VRING  to RTE_VHOST_MAX_VRING and
VHOST_MAX_QUEUE_PAIRS to RTE_VHOST_MAX_QUEUE_PAIRS as it will become
part of the ABI.

Note that it also means that we won't be able to increase the maximum
number of rings without breaking the ABI.

>>>   * @return
>>>   *  0 on success, -1 on failure
>>>   */
>>>  __rte_experimental
>>>  int
>>> -rte_vhost_host_notifier_ctrl(int vid, bool enable);
>>> +rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
>>>
>>>  /**
> 



* Re: [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  2020-06-19  6:44  0%   ` Maxime Coquelin
@ 2020-06-19 13:28  0%     ` Matan Azrad
  2020-06-19 14:01  4%       ` Maxime Coquelin
  0 siblings, 1 reply; 200+ results
From: Matan Azrad @ 2020-06-19 13:28 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev



From: Maxime Coquelin:
> On 6/18/20 6:28 PM, Matan Azrad wrote:
> > As a preparation for per-queue operations in the vDPA device, the
> > following experimental API needs to change:
> >
> > The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
> > instead of per device.
> >
> > A `qid` parameter was added to the API arguments list.
> >
> > Setting the parameter to the value VHOST_QUEUE_ALL will configure
> > the host notifier for all the device queues, as done before this patch.
> >
> > Signed-off-by: Matan Azrad <matan@mellanox.com>
> > ---
> >  doc/guides/rel_notes/release_20_08.rst |  2 ++
> >  drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
> >  drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
> >  lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
> >  lib/librte_vhost/rte_vhost.h           |  2 ++
> >  lib/librte_vhost/vhost.h               |  3 ---
> >  lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
> >  7 files changed, 30 insertions(+), 14 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/release_20_08.rst
> > b/doc/guides/rel_notes/release_20_08.rst
> > index ba16d3b..9732959 100644
> > --- a/doc/guides/rel_notes/release_20_08.rst
> > +++ b/doc/guides/rel_notes/release_20_08.rst
> > @@ -111,6 +111,8 @@ API Changes
> >     Also, make sure to start the actual text at the margin.
> >
> =========================================================
> >
> > +* vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to
> > +be per
> > +  queue and not per device, a qid parameter was added to the arguments
> list.
> >
> >  ABI Changes
> >  -----------
> > diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > b/drivers/vdpa/ifc/ifcvf_vdpa.c index ec97178..336837a 100644
> > --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > @@ -839,7 +839,7 @@ struct internal_list {
> >  	vdpa_ifcvf_stop(internal);
> >  	vdpa_disable_vfio_intr(internal);
> >
> > -	ret = rte_vhost_host_notifier_ctrl(vid, false);
> > +	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
> >  	if (ret && ret != -ENOTSUP)
> >  		goto error;
> >
> > @@ -858,7 +858,7 @@ struct internal_list {
> >  	if (ret)
> >  		goto stop_vf;
> >
> > -	rte_vhost_host_notifier_ctrl(vid, true);
> > +	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
> >
> >  	internal->sw_fallback_running = true;
> >
> > @@ -893,7 +893,7 @@ struct internal_list {
> >  	rte_atomic32_set(&internal->dev_attached, 1);
> >  	update_datapath(internal);
> >
> > -	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
> > +	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
> >  		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
> >
> >  	return 0;
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c
> > b/drivers/vdpa/mlx5/mlx5_vdpa.c index 9e758b6..8ea1300 100644
> > --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> > @@ -147,7 +147,8 @@
> >  	int ret;
> >
> >  	if (priv->direct_notifier) {
> > -		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
> > +		ret = rte_vhost_host_notifier_ctrl(priv->vid,
> VHOST_QUEUE_ALL,
> > +						   false);
> >  		if (ret != 0) {
> >  			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
> >  				"destroyed for device %d: %d.", priv->vid,
> ret); @@ -155,7 +156,7
> > @@
> >  		}
> >  		priv->direct_notifier = 0;
> >  	}
> > -	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
> > +	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
> > +true);
> >  	if (ret != 0)
> >  		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured
> for"
> >  			" device %d: %d.", priv->vid, ret); diff --git
> > a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h index
> > ecb3d91..2db536c 100644
> > --- a/lib/librte_vhost/rte_vdpa.h
> > +++ b/lib/librte_vhost/rte_vdpa.h
> > @@ -202,22 +202,26 @@ struct rte_vdpa_device *  int
> > rte_vdpa_get_device_num(void);
> >
> > +#define VHOST_QUEUE_ALL VHOST_MAX_VRING
> > +
> >  /**
> >   * @warning
> >   * @b EXPERIMENTAL: this API may change without prior notice
> >   *
> > - * Enable/Disable host notifier mapping for a vdpa port.
> > + * Enable/Disable host notifier mapping for a vdpa queue.
> >   *
> >   * @param vid
> >   *  vhost device id
> >   * @param enable
> >   *  true for host notifier map, false for host notifier unmap
> > + * @param qid
> > + *  vhost queue id, VHOST_QUEUE_ALL to configure all the device
> > + queues
> I would prefer two APIs over passing a special ID that means all queues:
> 
> rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
> rte_vhost_host_notifier_ctrl_all(int vid, bool enable);
> 
> I think it is clearer for the user of the API.
> Or if you think an extra API is overkill, just let the driver loop on all the
> queues.

We have a lot of options here with pros and cons.
I took the rte_eth_dev_callback_register style.

It is less intrusive with minimum code change.  

I'm not sure what is the clearest option but the current suggestion is well defined and 
allows configuring all the queues too.

Let me know what you prefer....

> >   * @return
> >   *  0 on success, -1 on failure
> >   */
> >  __rte_experimental
> >  int
> > -rte_vhost_host_notifier_ctrl(int vid, bool enable);
> > +rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
> >
> >  /**



* [dpdk-dev] [dpdk-announce] DPDK 19.11.3 released
@ 2020-06-18 19:06  2% luca.boccassi
  0 siblings, 0 replies; 200+ results
From: luca.boccassi @ 2020-06-18 19:06 UTC (permalink / raw)
  To: announce

Hi all,

Here is a new stable release:
	https://fast.dpdk.org/rel/dpdk-19.11.3.tar.xz

The git tree is at:
	https://dpdk.org/browse/dpdk-stable/?h=19.11

Luca Boccassi

---
 .travis.yml                                        |   2 +-
 VERSION                                            |   2 +-
 app/pdump/main.c                                   |   2 +-
 app/test-acl/main.c                                |   2 +-
 app/test-crypto-perf/main.c                        |   3 +-
 app/test-eventdev/test_pipeline_common.c           |  10 +-
 app/test-pipeline/config.c                         |   2 -
 app/test-pmd/cmdline.c                             |   8 +-
 app/test-pmd/cmdline_flow.c                        |   8 +-
 app/test-pmd/config.c                              |  26 +-
 app/test-pmd/csumonly.c                            |  13 +-
 app/test-pmd/parameters.c                          |   2 +-
 app/test-pmd/testpmd.c                             |   4 +-
 app/test/meson.build                               |  34 +-
 app/test/test.h                                    |   2 -
 app/test/test_acl.c                                |  20 +-
 app/test/test_cryptodev.c                          |  13 +-
 app/test/test_cryptodev_blockcipher.c              |   2 +-
 app/test/test_cryptodev_hash_test_vectors.h        |  10 +
 app/test/test_fib_perf.c                           |   2 +-
 app/test/test_flow_classify.c                      |   2 +-
 app/test/test_hash.c                               |   7 +-
 app/test/test_ipsec.c                              |  33 +-
 app/test/test_kvargs.c                             |  40 +-
 app/test/test_lpm_perf.c                           |   2 +-
 app/test/test_malloc.c                             |  12 +
 app/test/test_mbuf.c                               |   2 +-
 app/test/test_pmd_perf.c                           |   2 +-
 app/test/test_table_pipeline.c                     |  12 +-
 buildtools/options-ibverbs-static.sh               |  11 +-
 config/common_base                                 |   1 -
 config/meson.build                                 |  30 +-
 devtools/check-symbol-change.sh                    |  10 +-
 devtools/checkpatches.sh                           |   8 +
 doc/api/doxy-api-index.md                          |   2 +-
 doc/api/doxy-api.conf.in                           |   1 +
 doc/guides/conf.py                                 |  22 +-
 doc/guides/contributing/abi_policy.rst             |  21 +-
 doc/guides/contributing/abi_versioning.rst         | 130 ++-
 doc/guides/contributing/documentation.rst          |  12 +-
 doc/guides/contributing/patches.rst                |  20 +-
 doc/guides/contributing/stable.rst                 |   8 +-
 doc/guides/contributing/vulnerability.rst          |   6 +-
 doc/guides/cryptodevs/aesni_gcm.rst                |  13 +
 doc/guides/cryptodevs/aesni_mb.rst                 |  13 +
 doc/guides/cryptodevs/features/qat.ini             |   5 +
 doc/guides/cryptodevs/qat.rst                      |   5 +
 doc/guides/eventdevs/index.rst                     |   2 +-
 doc/guides/freebsd_gsg/install_from_ports.rst      |   2 +-
 doc/guides/linux_gsg/eal_args.include.rst          |   2 +-
 doc/guides/linux_gsg/nic_perf_intel_platform.rst   |   2 +-
 doc/guides/nics/enic.rst                           |   2 +-
 doc/guides/nics/fail_safe.rst                      |   2 +-
 doc/guides/nics/features/hns3.ini                  |   1 +
 doc/guides/nics/features/hns3_vf.ini               |   1 +
 doc/guides/nics/features/i40e.ini                  |   1 -
 doc/guides/nics/features/iavf.ini                  |   1 -
 doc/guides/nics/features/ice.ini                   |   1 -
 doc/guides/nics/features/igb.ini                   |   1 +
 doc/guides/nics/features/ixgbe.ini                 |   1 +
 doc/guides/nics/hns3.rst                           |   1 +
 doc/guides/nics/i40e.rst                           |   9 +
 doc/guides/nics/ice.rst                            |   4 -
 doc/guides/nics/mlx5.rst                           |  48 +-
 doc/guides/prog_guide/cryptodev_lib.rst            |   2 +-
 doc/guides/prog_guide/lto.rst                      |   2 +-
 doc/guides/rel_notes/release_19_11.rst             | 512 ++++++++++++
 doc/guides/sample_app_ug/l2_forward_event.rst      |   8 -
 .../sample_app_ug/l2_forward_real_virtual.rst      |   9 -
 doc/guides/sample_app_ug/link_status_intr.rst      |   7 -
 doc/guides/sample_app_ug/multi_process.rst         |   2 +-
 doc/guides/testpmd_app_ug/testpmd_funcs.rst        |   2 +-
 doc/guides/windows_gsg/build_dpdk.rst              |  51 +-
 drivers/Makefile                                   |   2 +-
 drivers/baseband/turbo_sw/bbdev_turbo_software.c   |   2 +-
 drivers/bus/fslmc/qbman/qbman_debug.c              |   9 +-
 drivers/bus/ifpga/ifpga_bus.c                      |   1 +
 drivers/bus/ifpga/rte_bus_ifpga.h                  |   1 +
 drivers/bus/pci/linux/pci.c                        |   5 +
 drivers/bus/pci/pci_common.c                       |   6 +-
 drivers/bus/pci/pci_common_uio.c                   |   1 +
 drivers/bus/pci/private.h                          |  10 -
 drivers/bus/vmbus/linux/vmbus_uio.c                |   2 +-
 drivers/bus/vmbus/vmbus_common.c                   |   2 +-
 drivers/common/octeontx/octeontx_mbox.c            |  17 +-
 drivers/common/octeontx2/hw/otx2_npc.h             |   4 +-
 drivers/compress/octeontx/otx_zip_pmd.c            |   2 +-
 drivers/compress/zlib/zlib_pmd.c                   |   2 +
 drivers/compress/zlib/zlib_pmd_private.h           |   2 +-
 drivers/crypto/aesni_gcm/Makefile                  |   3 +-
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c           |   2 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h   |   2 +-
 drivers/crypto/aesni_mb/Makefile                   |   3 +-
 drivers/crypto/aesni_mb/aesni_mb_pmd_private.h     |   2 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c         |   2 +
 drivers/crypto/caam_jr/Makefile                    |   7 +
 drivers/crypto/caam_jr/caam_jr.c                   |  23 +-
 drivers/crypto/caam_jr/caam_jr_hw_specific.h       |   2 +-
 drivers/crypto/caam_jr/caam_jr_pvt.h               |   9 +-
 drivers/crypto/caam_jr/caam_jr_uio.c               |  34 +-
 drivers/crypto/caam_jr/meson.build                 |   5 +
 drivers/crypto/ccp/ccp_dev.c                       |   2 +-
 drivers/crypto/dpaa2_sec/Makefile                  |   7 +
 drivers/crypto/dpaa2_sec/meson.build               |   5 +
 drivers/crypto/dpaa_sec/Makefile                   |   7 +
 drivers/crypto/dpaa_sec/meson.build                |   5 +
 drivers/crypto/kasumi/kasumi_pmd_private.h         |   4 +-
 drivers/crypto/kasumi/rte_kasumi_pmd.c             |   1 +
 drivers/crypto/mvsam/mrvl_pmd_private.h            |   2 +-
 drivers/crypto/mvsam/rte_mrvl_pmd.c                |   1 +
 drivers/crypto/nitrox/nitrox_csr.h                 |  20 +-
 drivers/crypto/nitrox/nitrox_sym.c                 |   3 +-
 drivers/crypto/octeontx2/otx2_cryptodev.c          |   2 +
 drivers/crypto/octeontx2/otx2_cryptodev.h          |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.h      |   2 +-
 drivers/crypto/openssl/openssl_pmd_private.h       |   2 +-
 drivers/crypto/openssl/rte_openssl_pmd.c           |  24 +
 drivers/crypto/qat/qat_sym_capabilities.h          | 105 +++
 drivers/crypto/qat/qat_sym_session.c               | 122 ++-
 drivers/crypto/qat/qat_sym_session.h               |   1 +
 drivers/crypto/snow3g/rte_snow3g_pmd.c             |   1 +
 drivers/crypto/snow3g/snow3g_pmd_private.h         |   2 +-
 drivers/crypto/zuc/rte_zuc_pmd.c                   |   1 +
 drivers/crypto/zuc/zuc_pmd_private.h               |   4 +-
 drivers/event/dpaa2/dpaa2_eventdev.c               |   2 +-
 drivers/event/dsw/dsw_event.c                      |  15 +-
 drivers/event/octeontx2/otx2_evdev_adptr.c         |   4 +-
 drivers/event/octeontx2/otx2_evdev_stats.h         |   2 +-
 drivers/mempool/dpaa2/meson.build                  |   2 +
 drivers/mempool/octeontx2/otx2_mempool_ops.c       |   2 +-
 drivers/net/avp/avp_ethdev.c                       |   2 +-
 drivers/net/bnxt/bnxt.h                            |  13 +-
 drivers/net/bnxt/bnxt_ethdev.c                     |  58 +-
 drivers/net/bnxt/bnxt_hwrm.c                       |  29 +-
 drivers/net/bnxt/bnxt_ring.c                       |   2 +-
 drivers/net/bnxt/bnxt_rxq.c                        |   4 +-
 drivers/net/bnxt/bnxt_rxr.c                        |  36 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c               |   7 +-
 drivers/net/cxgbe/cxgbe_flow.c                     |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                     |  23 +-
 drivers/net/dpaa2/dpaa2_ethdev.c                   |   8 +-
 drivers/net/dpaa2/dpaa2_flow.c                     |   4 +-
 drivers/net/dpaa2/dpaa2_mux.c                      |   2 +-
 drivers/net/e1000/em_ethdev.c                      |   2 +-
 drivers/net/e1000/igb_ethdev.c                     |   4 +-
 drivers/net/ena/base/ena_com.c                     |  30 +-
 drivers/net/ena/base/ena_com.h                     |  32 +-
 drivers/net/ena/base/ena_plat_dpdk.h               |  39 +-
 drivers/net/ena/ena_ethdev.c                       |   7 +-
 drivers/net/enetc/base/enetc_hw.h                  |   3 +-
 drivers/net/enetc/enetc_ethdev.c                   |   5 +-
 drivers/net/enic/enic_fm_flow.c                    |  61 +-
 drivers/net/failsafe/failsafe.c                    |   1 +
 drivers/net/failsafe/failsafe_intr.c               |   2 +-
 drivers/net/failsafe/failsafe_ops.c                |   2 +-
 drivers/net/failsafe/failsafe_private.h            |   8 +
 drivers/net/hinic/base/hinic_compat.h              |  17 +-
 drivers/net/hinic/base/hinic_pmd_api_cmd.c         |   7 +-
 drivers/net/hinic/base/hinic_pmd_cmdq.c            |  12 +-
 drivers/net/hinic/base/hinic_pmd_cmdq.h            |   1 +
 drivers/net/hinic/base/hinic_pmd_eqs.c             |   2 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c           |  49 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.h           |   1 -
 drivers/net/hinic/base/hinic_pmd_mbox.c            |   8 +-
 drivers/net/hinic/base/hinic_pmd_mgmt.c            |  38 +-
 drivers/net/hinic/base/hinic_pmd_mgmt.h            |   2 +
 drivers/net/hinic/base/hinic_pmd_nicio.c           |  20 +-
 drivers/net/hinic/base/hinic_pmd_wq.c              |  11 +-
 drivers/net/hinic/base/hinic_pmd_wq.h              |   2 +-
 drivers/net/hinic/hinic_pmd_ethdev.c               |  24 +-
 drivers/net/hinic/hinic_pmd_rx.c                   |  73 +-
 drivers/net/hinic/hinic_pmd_rx.h                   |   5 +-
 drivers/net/hinic/hinic_pmd_tx.c                   |  24 +-
 drivers/net/hinic/hinic_pmd_tx.h                   |   4 +-
 drivers/net/hns3/hns3_cmd.c                        |  24 +-
 drivers/net/hns3/hns3_cmd.h                        |  49 +-
 drivers/net/hns3/hns3_dcb.c                        | 103 ++-
 drivers/net/hns3/hns3_dcb.h                        |   4 +-
 drivers/net/hns3/hns3_ethdev.c                     | 571 +++++++++++--
 drivers/net/hns3/hns3_ethdev.h                     |  18 +-
 drivers/net/hns3/hns3_ethdev_vf.c                  | 431 ++++++++--
 drivers/net/hns3/hns3_fdir.c                       |  21 +
 drivers/net/hns3/hns3_flow.c                       |  28 +-
 drivers/net/hns3/hns3_intr.c                       |   2 +
 drivers/net/hns3/hns3_mbx.c                        |  12 +-
 drivers/net/hns3/hns3_mbx.h                        |  13 +
 drivers/net/hns3/hns3_regs.h                       |  10 +
 drivers/net/hns3/hns3_rss.c                        |  35 +-
 drivers/net/hns3/hns3_rss.h                        |   2 +
 drivers/net/hns3/hns3_rxtx.c                       | 923 +++++++++++++++++----
 drivers/net/hns3/hns3_rxtx.h                       |  22 +-
 drivers/net/hns3/hns3_stats.c                      |  24 +-
 drivers/net/i40e/base/README                       |   2 +-
 drivers/net/i40e/base/i40e_adminq.c                |   2 +-
 drivers/net/i40e/base/i40e_adminq.h                |   2 +-
 drivers/net/i40e/base/i40e_adminq_cmd.h            |   2 +-
 drivers/net/i40e/base/i40e_alloc.h                 |   2 +-
 drivers/net/i40e/base/i40e_common.c                |   2 +-
 drivers/net/i40e/base/i40e_dcb.c                   |   2 +-
 drivers/net/i40e/base/i40e_dcb.h                   |   2 +-
 drivers/net/i40e/base/i40e_devids.h                |   2 +-
 drivers/net/i40e/base/i40e_diag.c                  |   2 +-
 drivers/net/i40e/base/i40e_diag.h                  |   2 +-
 drivers/net/i40e/base/i40e_hmc.c                   |   2 +-
 drivers/net/i40e/base/i40e_hmc.h                   |   2 +-
 drivers/net/i40e/base/i40e_lan_hmc.c               |   2 +-
 drivers/net/i40e/base/i40e_lan_hmc.h               |   2 +-
 drivers/net/i40e/base/i40e_nvm.c                   |   2 +-
 drivers/net/i40e/base/i40e_osdep.h                 |   2 +-
 drivers/net/i40e/base/i40e_prototype.h             |   2 +-
 drivers/net/i40e/base/i40e_register.h              |   2 +-
 drivers/net/i40e/base/i40e_status.h                |   2 +-
 drivers/net/i40e/base/i40e_type.h                  |   2 +-
 drivers/net/i40e/base/meson.build                  |   2 +-
 drivers/net/i40e/base/virtchnl.h                   |   2 +-
 drivers/net/i40e/i40e_ethdev.c                     | 131 +--
 drivers/net/i40e/i40e_ethdev_vf.c                  |   2 -
 drivers/net/i40e/i40e_fdir.c                       |   4 +-
 drivers/net/i40e/i40e_flow.c                       |  58 +-
 drivers/net/i40e/i40e_rxtx.c                       |  31 +-
 drivers/net/i40e/i40e_rxtx_vec_altivec.c           |   2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h            |   1 +
 drivers/net/i40e/i40e_rxtx_vec_neon.c              |   6 +-
 drivers/net/iavf/base/README                       |   2 +-
 drivers/net/iavf/base/iavf_adminq.c                |   2 +-
 drivers/net/iavf/base/iavf_adminq.h                |   2 +-
 drivers/net/iavf/base/iavf_alloc.h                 |   2 +-
 drivers/net/iavf/base/iavf_common.c                |   2 +-
 drivers/net/iavf/base/iavf_devids.h                |   2 +-
 drivers/net/iavf/base/iavf_osdep.h                 |   2 +-
 drivers/net/iavf/base/iavf_status.h                |   2 +-
 drivers/net/iavf/base/virtchnl.h                   |   2 +-
 drivers/net/iavf/iavf_ethdev.c                     |   2 +-
 drivers/net/iavf/iavf_rxtx_vec_common.h            |   1 +
 drivers/net/iavf/iavf_vchnl.c                      |  41 +-
 drivers/net/ice/base/ice_adminq_cmd.h              |  12 +-
 drivers/net/ice/base/ice_alloc.h                   |   2 +-
 drivers/net/ice/base/ice_bitops.h                  |   2 +-
 drivers/net/ice/base/ice_common.c                  |   8 +-
 drivers/net/ice/base/ice_common.h                  |   2 +-
 drivers/net/ice/base/ice_controlq.c                |   2 +-
 drivers/net/ice/base/ice_controlq.h                |   2 +-
 drivers/net/ice/base/ice_dcb.c                     |   2 +-
 drivers/net/ice/base/ice_dcb.h                     |   2 +-
 drivers/net/ice/base/ice_devids.h                  |   2 +-
 drivers/net/ice/base/ice_fdir.c                    |   2 +-
 drivers/net/ice/base/ice_fdir.h                    |   8 +-
 drivers/net/ice/base/ice_flex_pipe.c               |  54 +-
 drivers/net/ice/base/ice_flex_pipe.h               |   4 +-
 drivers/net/ice/base/ice_flex_type.h               |   2 +-
 drivers/net/ice/base/ice_flow.c                    |  51 +-
 drivers/net/ice/base/ice_flow.h                    |   4 +-
 drivers/net/ice/base/ice_hw_autogen.h              |   2 +-
 drivers/net/ice/base/ice_lan_tx_rx.h               |   2 +-
 drivers/net/ice/base/ice_nvm.c                     |   2 +-
 drivers/net/ice/base/ice_nvm.h                     |   2 +-
 drivers/net/ice/base/ice_osdep.h                   |  18 +-
 drivers/net/ice/base/ice_protocol_type.h           |   2 +-
 drivers/net/ice/base/ice_sbq_cmd.h                 |   2 +-
 drivers/net/ice/base/ice_sched.c                   |  61 +-
 drivers/net/ice/base/ice_sched.h                   |   9 +-
 drivers/net/ice/base/ice_status.h                  |   2 +-
 drivers/net/ice/base/ice_switch.c                  |  26 +-
 drivers/net/ice/base/ice_switch.h                  |   2 +-
 drivers/net/ice/base/ice_type.h                    |   6 +-
 drivers/net/ice/base/meson.build                   |   2 +-
 drivers/net/ice/ice_ethdev.c                       |  50 +-
 drivers/net/ice/ice_fdir_filter.c                  |  17 +-
 drivers/net/ice/ice_generic_flow.c                 |  31 +-
 drivers/net/ice/ice_hash.c                         |  27 +-
 drivers/net/ice/ice_rxtx.c                         |  59 +-
 drivers/net/ice/ice_rxtx_vec_common.h              |   1 +
 drivers/net/ice/ice_switch_filter.c                |  71 +-
 drivers/net/ipn3ke/ipn3ke_representor.c            |   3 +-
 drivers/net/ixgbe/base/README                      |   2 +-
 drivers/net/ixgbe/base/ixgbe_82598.c               |   2 +-
 drivers/net/ixgbe/base/ixgbe_82598.h               |   2 +-
 drivers/net/ixgbe/base/ixgbe_82599.c               |   2 +-
 drivers/net/ixgbe/base/ixgbe_82599.h               |   2 +-
 drivers/net/ixgbe/base/ixgbe_api.c                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_api.h                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_common.c              |   2 +-
 drivers/net/ixgbe/base/ixgbe_common.h              |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb.c                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb.h                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb_82598.c           |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb_82598.h           |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb_82599.c           |   2 +-
 drivers/net/ixgbe/base/ixgbe_dcb_82599.h           |   2 +-
 drivers/net/ixgbe/base/ixgbe_hv_vf.c               |   2 +-
 drivers/net/ixgbe/base/ixgbe_hv_vf.h               |   2 +-
 drivers/net/ixgbe/base/ixgbe_mbx.c                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_mbx.h                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_osdep.h               |   2 +-
 drivers/net/ixgbe/base/ixgbe_phy.c                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_phy.h                 |   2 +-
 drivers/net/ixgbe/base/ixgbe_type.h                |   2 +-
 drivers/net/ixgbe/base/ixgbe_vf.c                  |   2 +-
 drivers/net/ixgbe/base/ixgbe_vf.h                  |   2 +-
 drivers/net/ixgbe/base/ixgbe_x540.c                |   2 +-
 drivers/net/ixgbe/base/ixgbe_x540.h                |   2 +-
 drivers/net/ixgbe/base/ixgbe_x550.c                |   2 +-
 drivers/net/ixgbe/base/ixgbe_x550.h                |   2 +-
 drivers/net/ixgbe/base/meson.build                 |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.c                   |  58 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c                  |   6 +
 drivers/net/memif/memif_socket.c                   |  14 +-
 drivers/net/memif/rte_eth_memif.c                  |   2 +-
 drivers/net/mlx4/mlx4.c                            |   4 +
 drivers/net/mlx4/mlx4_flow.c                       |  11 +-
 drivers/net/mlx4/mlx4_glue.h                       |   2 +-
 drivers/net/mlx4/mlx4_rxtx.h                       |   2 +-
 drivers/net/mlx5/Makefile                          |   5 +
 drivers/net/mlx5/meson.build                       |   2 +
 drivers/net/mlx5/mlx5.c                            |  46 +-
 drivers/net/mlx5/mlx5.h                            |  13 +-
 drivers/net/mlx5/mlx5_defs.h                       |   3 +
 drivers/net/mlx5/mlx5_devx_cmds.c                  |   9 +-
 drivers/net/mlx5/mlx5_flow.c                       | 171 ++--
 drivers/net/mlx5/mlx5_flow.h                       |  32 +-
 drivers/net/mlx5/mlx5_flow_dv.c                    | 477 ++++++++---
 drivers/net/mlx5/mlx5_flow_verbs.c                 |  28 +-
 drivers/net/mlx5/mlx5_glue.c                       |   2 +-
 drivers/net/mlx5/mlx5_glue.h                       |   2 +-
 drivers/net/mlx5/mlx5_nl.c                         |  27 +-
 drivers/net/mlx5/mlx5_prm.h                        |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                        |  80 +-
 drivers/net/mlx5/mlx5_rxtx.c                       | 152 ++--
 drivers/net/mlx5/mlx5_rxtx.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h           |  27 +-
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h              |  47 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h               |  48 +-
 drivers/net/mlx5/mlx5_stats.c                      |  76 +-
 drivers/net/mlx5/mlx5_trigger.c                    |   2 +
 drivers/net/mlx5/mlx5_txq.c                        |   2 +-
 drivers/net/mlx5/mlx5_utils.h                      |  10 -
 drivers/net/mvneta/mvneta_ethdev.c                 |   2 +-
 drivers/net/mvpp2/mrvl_flow.c                      |   4 +-
 drivers/net/netvsc/hn_ethdev.c                     |  54 +-
 drivers/net/netvsc/hn_nvs.c                        |  41 +-
 drivers/net/netvsc/hn_nvs.h                        |   2 +-
 drivers/net/netvsc/hn_rxtx.c                       | 279 ++++---
 drivers/net/netvsc/hn_var.h                        |  12 +-
 drivers/net/netvsc/hn_vf.c                         |  13 +
 drivers/net/nfp/nfp_net.c                          |  25 +-
 drivers/net/null/rte_eth_null.c                    |  29 +-
 drivers/net/octeontx/base/meson.build              |   5 +-
 drivers/net/octeontx/octeontx_ethdev.c             |   1 +
 drivers/net/octeontx2/otx2_ethdev.c                |  24 +-
 drivers/net/octeontx2/otx2_ethdev.h                |   3 +
 drivers/net/octeontx2/otx2_ethdev_irq.c            |  38 +-
 drivers/net/octeontx2/otx2_link.c                  |  53 +-
 drivers/net/octeontx2/otx2_rss.c                   |   2 +-
 drivers/net/pfe/pfe_ethdev.c                       |   7 +-
 drivers/net/qede/qede_ethdev.c                     |  35 +-
 drivers/net/qede/qede_rxtx.c                       |   4 +-
 drivers/net/ring/rte_eth_ring.c                    |  29 +-
 drivers/net/sfc/base/ef10_evb.c                    |  28 +-
 drivers/net/sfc/base/ef10_filter.c                 | 564 +++++++++----
 drivers/net/sfc/base/ef10_impl.h                   |   4 +-
 drivers/net/sfc/base/ef10_nic.c                    |   4 +-
 drivers/net/sfc/base/ef10_proxy.c                  |   8 +-
 drivers/net/sfc/base/efx.h                         |  13 +-
 drivers/net/sfc/base/efx_evb.c                     |   4 +-
 drivers/net/sfc/base/efx_filter.c                  |  26 +-
 drivers/net/sfc/base/efx_impl.h                    |  21 +-
 drivers/net/sfc/base/efx_proxy.c                   |   4 +-
 drivers/net/sfc/sfc.c                              |   2 +-
 drivers/net/sfc/sfc_ethdev.c                       |  20 +-
 drivers/net/sfc/sfc_flow.c                         |   1 +
 drivers/net/sfc/sfc_rx.c                           |   6 +-
 drivers/net/softnic/rte_eth_softnic_thread.c       |  38 -
 drivers/net/tap/rte_eth_tap.c                      | 146 ++--
 drivers/net/tap/tap_flow.c                         |   8 +-
 drivers/net/tap/tap_intr.c                         |   3 +-
 drivers/net/thunderx/nicvf_ethdev.c                |  17 +-
 drivers/net/vhost/rte_eth_vhost.c                  |  16 +-
 drivers/net/virtio/virtio_ethdev.c                 |   6 +-
 drivers/net/virtio/virtio_rxtx.c                   |   6 +-
 drivers/net/virtio/virtio_rxtx_simple_altivec.c    |   3 +-
 drivers/net/virtio/virtio_user_ethdev.c            |  20 +-
 drivers/net/virtio/virtqueue.c                     |   2 +
 drivers/net/vmxnet3/vmxnet3_ethdev.c               |   3 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h               |   4 +
 drivers/net/vmxnet3/vmxnet3_rxtx.c                 |  14 +-
 examples/eventdev_pipeline/main.c                  |  17 +-
 examples/eventdev_pipeline/pipeline_common.h       |   4 +-
 examples/fips_validation/fips_validation.c         |  18 +
 examples/ioat/ioatfwd.c                            |   2 +-
 examples/ip_fragmentation/main.c                   |   2 +-
 examples/ip_pipeline/thread.c                      |  44 -
 examples/ip_reassembly/main.c                      |   2 +-
 examples/ipsec-secgw/ipsec-secgw.c                 |   2 +-
 examples/ipsec-secgw/ipsec_process.c               |   1 +
 examples/ipv4_multicast/main.c                     |   2 +-
 examples/kni/main.c                                |  32 +-
 examples/l2fwd-crypto/main.c                       |   2 +-
 examples/l2fwd-event/main.c                        |   2 +-
 examples/l2fwd-jobstats/main.c                     |   2 +-
 examples/l2fwd-keepalive/main.c                    |  20 +-
 examples/l2fwd/main.c                              |   2 +-
 examples/l3fwd-acl/main.c                          |   2 +-
 examples/l3fwd-power/main.c                        |   2 +-
 examples/l3fwd/main.c                              |   2 +-
 examples/link_status_interrupt/main.c              |   2 +-
 .../client_server_mp/mp_server/init.c              |   2 +-
 examples/multi_process/symmetric_mp/main.c         |   2 +-
 examples/performance-thread/l3fwd-thread/main.c    |   2 +-
 examples/qos_sched/cfg_file.c                      |   3 +
 examples/qos_sched/init.c                          |   2 +-
 examples/qos_sched/main.h                          |   4 +-
 examples/server_node_efd/server/init.c             |   2 +-
 examples/vhost_blk/vhost_blk.c                     |   2 +
 examples/vhost_blk/vhost_blk.h                     |   4 +-
 examples/vm_power_manager/channel_manager.c        |   3 +-
 examples/vm_power_manager/channel_manager.h        |   9 +-
 examples/vm_power_manager/main.c                   |   2 +-
 examples/vm_power_manager/power_manager.c          |   1 -
 examples/vmdq/main.c                               |  48 +-
 kernel/freebsd/contigmem/contigmem.c               |   4 +-
 lib/Makefile                                       |   2 +-
 lib/librte_bbdev/rte_bbdev.h                       |  16 +-
 lib/librte_bbdev/rte_bbdev_op.h                    |  16 +-
 lib/librte_bbdev/rte_bbdev_pmd.h                   |  14 +-
 lib/librte_cryptodev/rte_crypto_sym.h              |   7 +-
 lib/librte_cryptodev/rte_cryptodev.c               |  43 +-
 lib/librte_eal/common/eal_common_fbarray.c         |   2 +-
 lib/librte_eal/common/eal_common_log.c             |   2 +-
 lib/librte_eal/common/eal_common_memory.c          |   2 +-
 lib/librte_eal/common/eal_common_options.c         |   2 +-
 .../common/include/arch/arm/rte_cycles_32.h        |   2 +-
 .../common/include/arch/arm/rte_cycles_64.h        |   2 +-
 .../common/include/arch/ppc_64/meson.build         |   1 +
 .../common/include/arch/ppc_64/rte_altivec.h       |  22 +
 .../common/include/arch/ppc_64/rte_memcpy.h        |  15 +-
 .../common/include/arch/ppc_64/rte_vect.h          |   3 +-
 .../common/include/arch/x86/rte_atomic.h           |   2 +-
 .../common/include/arch/x86/rte_memcpy.h           |   9 +
 .../common/include/generic/rte_byteorder.h         |   6 +-
 lib/librte_eal/common/include/rte_common.h         |   4 +-
 lib/librte_eal/common/include/rte_service.h        |   8 +-
 .../common/include/rte_service_component.h         |   6 +-
 lib/librte_eal/common/malloc_elem.c                |   2 +-
 lib/librte_eal/common/malloc_heap.c                |   3 +
 lib/librte_eal/common/rte_random.c                 |   2 +-
 lib/librte_eal/common/rte_service.c                |  74 +-
 lib/librte_eal/freebsd/eal/eal_interrupts.c        |  79 +-
 lib/librte_eal/freebsd/eal/eal_memory.c            |   2 +-
 lib/librte_eal/linux/eal/eal.c                     |   2 +-
 lib/librte_eal/linux/eal/eal_memalloc.c            |   2 +-
 lib/librte_eal/linux/eal/eal_memory.c              |  24 +-
 lib/librte_eal/linux/eal/eal_vfio.c                |   6 +-
 lib/librte_ethdev/ethdev_profile.h                 |   9 +
 lib/librte_ethdev/rte_ethdev.c                     |  10 +-
 lib/librte_ethdev/rte_flow.c                       |   2 +-
 lib/librte_ethdev/rte_flow.h                       |   2 +-
 lib/librte_eventdev/rte_eventdev.c                 |  13 +-
 lib/librte_eventdev/rte_eventdev_pmd_pci.h         |   8 +-
 lib/librte_fib/rte_fib.h                           |   8 +
 lib/librte_fib/rte_fib6.h                          |   8 +
 lib/librte_ipsec/ipsec_sad.c                       |   2 +
 lib/librte_ipsec/sa.h                              |   2 +-
 lib/librte_kvargs/rte_kvargs.c                     |   2 +
 lib/librte_kvargs/rte_kvargs.h                     |   2 +-
 lib/librte_lpm/rte_lpm6.c                          |   9 +-
 lib/librte_mempool/rte_mempool_version.map         |   4 -
 lib/librte_pci/rte_pci.c                           |  17 +-
 lib/librte_pci/rte_pci.h                           |   6 -
 lib/librte_security/rte_security.c                 |  70 +-
 lib/librte_security/rte_security.h                 |   8 +-
 lib/librte_telemetry/rte_telemetry_parser.c        |   2 +-
 lib/librte_timer/rte_timer.c                       |  24 +-
 lib/librte_vhost/iotlb.c                           |   5 +-
 lib/librte_vhost/rte_vhost.h                       |   7 +-
 lib/librte_vhost/socket.c                          |   6 +
 lib/librte_vhost/vhost.h                           |   1 -
 lib/librte_vhost/vhost_crypto.c                    |   3 +-
 lib/librte_vhost/vhost_user.c                      |  10 +-
 lib/librte_vhost/virtio_net.c                      | 185 +++--
 lib/meson.build                                    |   8 +-
 mk/rte.app.mk                                      |   4 +
 mk/toolchain/gcc/rte.vars.mk                       |   5 +
 usertools/dpdk-pmdinfo.py                          |   5 +-
 483 files changed, 7009 insertions(+), 2821 deletions(-)
Adam Dybkowski (5):
      cryptodev: fix missing device id range checking
      common/qat: fix GEN3 marketing name
      app/crypto-perf: fix display of sample test vector
      crypto/qat: support plain SHA1..SHA512 hashes
      cryptodev: fix SHA-1 digest enum comment

Ajit Khaparde (3):
      net/bnxt: fix FW version query
      net/bnxt: fix error log for command timeout
      net/bnxt: fix using RSS config struct

Akhil Goyal (1):
      ipsec: fix build dependency on hash lib

Alex Kiselev (1):
      lpm6: fix size of tbl8 group

Alex Marginean (1):
      net/enetc: fix Rx lock-up

Alexander Kozyrev (9):
      net/mlx5: reduce Tx completion index memory loads
      net/mlx5: add device parameter for MPRQ stride size
      net/mlx5: enable MPRQ multi-stride operations
      net/mlx5: add multi-segment packets in MPRQ mode
      net/mlx5: set dynamic flow metadata in Rx queues
      net/mlx5: improve logging of MPRQ selection
      net/mlx5: fix assert in dynamic metadata handling
      net/mlx5: fix Tx queue release debug log timing
      net/mlx5: fix packet length assert in MPRQ

Alvin Zhang (2):
      net/iavf: fix link speed
      net/e1000: fix port hotplug for multi-process

Amit Gupta (1):
      net/octeontx: fix meson build for disabled drivers

Anatoly Burakov (1):
      mem: preallocate VA space in no-huge mode

Andrew Rybchenko (4):
      net/sfc: fix reported promiscuous/multicast mode
      net/sfc/base: use simpler EF10 family conditional check
      net/sfc/base: use simpler EF10 family run-time checks
      net/sfc/base: fix build when EVB is enabled

Andy Pei (1):
      net/ipn3ke: use control thread to check link status

Ankur Dwivedi (1):
      net/octeontx2: fix buffer size assignment

Apeksha Gupta (2):
      bus/fslmc: fix dereferencing null pointer
      test/crypto: fix statistics case

Archana Muniganti (1):
      examples/fips_validation: fix parsing of algorithms

Arek Kusztal (1):
      crypto/qat: fix cipher descriptor for ZUC and SNOW

Asaf Penso (2):
      net/mlx5: fix call to modify action without init item
      net/mlx5: fix assert in doorbell lookup

Ashish Gupta (1):
      net/octeontx2: fix link information for loopback port

Asim Jamshed (1):
      fib: fix headers for C++ support

Bernard Iremonger (1):
      net/i40e: fix flow director initialisation

Bing Zhao (6):
      net/mlx5: fix header modify action validation
      net/mlx5: fix actions validation on root table
      net/mlx5: fix assert in modify converting
      mk: fix static linkage of mlx dependency
      mem: fix overflow on allocation
      net/mlx5: fix doorbell bitmap management offsets

Bruce Richardson (3):
      pci: remove unneeded includes in public header file
      pci: fix build on FreeBSD
      drivers: fix log type variables for -fno-common

Cheng Peng (1):
      net/iavf: fix stats query error code

Chengchang Tang (3):
      net/hns3: fix promiscuous mode for PF
      net/hns3: fix default VLAN filter configuration for PF
      net/hns3: fix VLAN filter when setting promiscuous mode

Chengwen Feng (7):
      net/hns3: fix packets offload features flags in Rx
      net/hns3: fix default error code of command interface
      net/hns3: fix crash when flushing RSS flow rules with FLR
      net/hns3: fix return value of setting VLAN offload
      net/hns3: clear residual flow rules on init
      net/hns3: fix Rx interrupt after reset
      net/hns3: replace memory barrier with data dependency order

Ciara Power (1):
      telemetry: fix port stats retrieval

Darek Stojaczyk (1):
      pci: accept 32-bit domain numbers

David Christensen (2):
      pci: fix build on ppc
      eal/ppc: fix build with gcc 9.3

David Marchand (5):
      mem: mark pages as not accessed when reserving VA
      test: load drivers when required
      eal: fix typo in endian conversion macros
      remove references to private PCI probe function
      doc: prefer https when pointing to dpdk.org

Dekel Peled (7):
      net/mlx5: fix mask used for IPv6 item validation
      net/mlx5: fix CVLAN tag set in IP item translation
      net/mlx5: update VLAN and encap actions validation
      net/mlx5: fix match on empty VLAN item in DV mode
      common/mlx5: fix umem buffer alignment
      net/mlx5: fix VLAN flow action with wildcard VLAN item
      net/mlx5: fix RSS key copy to TIR context

Dmitry Kozlyuk (2):
      build: fix linker warnings with clang on Windows
      build: support MinGW-w64 with Meson

Eduard Serra (1):
      net/vmxnet3: fix RSS setting on v4

Eugeny Parshutin (1):
      ethdev: fix build when vtune profiling is on

Fady Bader (1):
      mempool: remove inline functions from export list

Fan Zhang (1):
      vhost/crypto: add missing user protocol flag

Ferruh Yigit (7):
      net/nfp: fix log format specifiers
      net/null: fix secondary burst function selection
      net/null: remove redundant check
      mempool/octeontx2: fix build for gcc O1 optimization
      net/ena: fix build for O1 optimization
      event/octeontx2: fix build for O1 optimization
      examples/kni: fix crash during MTU set

Gaetan Rivet (5):
      doc: fix number of failsafe sub-devices
      net/ring: fix device pointer on allocation
      pci: reject negative values in PCI id
      doc: fix typos in ABI policy
      kvargs: fix strcmp helper documentation

Gavin Hu (2):
      net/i40e: relax barrier in Tx
      net/i40e: relax barrier in Tx for NEON

Guinan Sun (2):
      net/ixgbe: fix statistics in flow control mode
      net/ixgbe: check driver type in MACsec API

Haifeng Lin (1):
      eal/arm64: fix precise TSC

Haiyue Wang (1):
      net/ice/base: check memory pointer before copying

Hao Chen (1):
      net/hns3: support Rx interrupt

Harry van Haaren (3):
      service: fix crash on exit
      examples/eventdev: fix crash on exit
      test/flow_classify: enable multi-sockets system

Hemant Agrawal (3):
      drivers: add crypto as dependency for event drivers
      bus/fslmc: fix size of qman fq descriptor
      mempool/dpaa2: install missing header with meson

Honnappa Nagarahalli (3):
      timer: protect initialization with lock
      service: fix race condition for MT unsafe service
      service: fix identification of service running on other lcore

Hyong Youb Kim (1):
      net/enic: fix flow action reordering

Igor Chauskin (2):
      net/ena/base: make allocation macros thread-safe
      net/ena/base: prevent allocation of zero sized memory

Igor Romanov (9):
      net/sfc: fix initialization error path
      net/sfc: fix Rx queue start failure path
      net/sfc: fix promiscuous and allmulticast toggles errors
      net/sfc: set priority of created filters to manual
      net/sfc/base: reduce filter priorities to implemented only
      net/sfc/base: reject automatic filter creation by users
      net/sfc/base: refactor filter lookup loop in EF10
      net/sfc/base: handle manual and auto filter clashes in EF10
      net/sfc/base: fix manual filter delete in EF10

Itsuro Oda (2):
      net/vhost: fix potential memory leak on close
      vhost: make IOTLB cache name unique among processes

Ivan Dyukov (3):
      net/virtio-user: fix devargs parsing
      app: remove extra new line after link duplex
      examples: remove extra new line after link duplex

Jasvinder Singh (3):
      net/softnic: fix memory leak for thread
      net/softnic: fix resource leak for pipeline
      examples/ip_pipeline: remove check of null response

Jeff Guo (3):
      net/i40e: fix setting L2TAG
      net/iavf: fix setting L2TAG
      net/ice: fix setting L2TAG

Jiawei Wang (1):
      net/mlx5: fix imissed counter overflow

Jim Harris (1):
      contigmem: cleanup properly when load fails

Jun Yang (1):
      net/dpaa2: fix congestion ID for multiple traffic classes

Junyu Jiang (4):
      examples/vmdq: fix output of pools/queues
      examples/vmdq: fix RSS configuration
      net/ice: fix RSS advanced rule
      net/ice: fix crash in switch filter

Juraj Linkeš (1):
      ci: fix telemetry dependency in Travis

Július Milan (1):
      net/memif: fix init when already connected

Kalesh AP (9):
      net/bnxt: fix HWRM command during FW reset
      net/bnxt: use true/false for bool types
      net/bnxt: fix port start failure handling
      net/bnxt: fix VLAN add when port is stopped
      net/bnxt: fix VNIC Rx queue count on VNIC free
      net/bnxt: fix number of TQM ring
      net/bnxt: fix TQM ring context memory size
      app/testpmd: fix memory failure handling for i40e DDP
      net/bnxt: fix storing MAC address twice

Kevin Traynor (9):
      net/hinic: fix snprintf length of cable info
      net/hinic: fix repeating cable log and length check
      net/avp: fix gcc 10 maybe-uninitialized warning
      examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
      eal/x86: ignore gcc 10 stringop-overflow warnings
      net/mlx5: fix gcc 10 enum-conversion warning
      crypto/kasumi: fix extern declaration
      drivers/crypto: disable gcc 10 no-common errors
      build: disable gcc 10 zero-length-bounds warning

Konstantin Ananyev (1):
      security: fix crash at accessing non-implemented ops

Li Feng (1):
      mem: mark pages as not accessed when freeing memory

Lijun Ou (4):
      net/hns3: fix configuring RSS hash when rules are flushed
      net/hns3: add RSS hash offload to capabilities
      net/hns3: fix RSS key length
      net/hns3: fix RSS indirection table configuration

Linsi Yuan (1):
      net/bnxt: fix possible stack smashing

Louise Kilheeney (1):
      examples/l2fwd-keepalive: fix mbuf pool size

Luca Boccassi (6):
      fix various typos found by Lintian
      usertools: check for pci.ids in /usr/share/misc
      Revert "net/bnxt: fix TQM ring context memory size"
      Revert "net/bnxt: fix number of TQM ring"
      version: 19.11.3-rc1
      version: 19.11.3

Lukasz Bartosik (1):
      event/octeontx2: fix queue removal from Rx adapter

Lukasz Wojciechowski (5):
      drivers/crypto: fix log type variables for -fno-common
      security: fix verification of parameters
      security: fix return types in documentation
      security: fix session counter
      test: remove redundant macro

Marvin Liu (5):
      vhost: fix packed ring zero-copy
      vhost: fix shadow update
      vhost: fix shadowed descriptors not flushed
      net/virtio: fix crash when device reconnecting
      net/virtio: fix unexpected event after reconnect

Matteo Croce (1):
      doc: fix LTO config option

Mattias Rönnblom (3):
      event/dsw: remove redundant control ring poll
      event/dsw: remove unnecessary read barrier
      event/dsw: avoid reusing previously recorded events

Michael Baum (2):
      net/mlx5: fix meter color register consideration
      net/mlx4: fix drop queue error handling

Michael Haeuptle (1):
      vfio: fix race condition with sysfs

Michal Krawczyk (5):
      net/ena/base: fix documentation of functions
      net/ena/base: fix indentation in CQ polling
      net/ena/base: fix indentation of multiple defines
      net/ena: set IO ring size to valid value
      net/ena/base: fix testing for supported hash function

Min Hu (Connor) (3):
      net/hns3: fix configuring illegal VLAN PVID
      net/hns3: fix mailbox opcode data type
      net/hns3: fix VLAN PVID when configuring device

Mit Matelske (1):
      eal/freebsd: fix queuing duplicate alarm callbacks

Mohsin Shaikh (1):
      net/mlx5: use open/read/close for ib stats query

Muhammad Bilal (2):
      fix same typo in multiple places
      doc: fix typo in contributors guide

Nagadheeraj Rottela (2):
      crypto/nitrox: fix CSR register address generation
      crypto/nitrox: fix oversized device name

Nicolas Chautru (2):
      baseband/turbo_sw: fix exposed LLR decimals assumption
      bbdev: fix doxygen comments

Nithin Dabilpuram (2):
      devtools: fix symbol map change check
      net/octeontx2: disable unnecessary error interrupts

Olivier Matz (3):
      test/kvargs: fix to consider empty elements as valid
      test/kvargs: fix invalid cases check
      kvargs: fix invalid token parsing on FreeBSD

Ophir Munk (1):
      net/mlx5: fix VLAN PCP item calculation

Ori Kam (1):
      eal/ppc: fix bool type after altivec include

Pablo de Lara (4):
      cryptodev: add asymmetric session-less feature name
      test/crypto: fix flag check
      crypto/openssl: fix out-of-place encryption
      doc: add NASM installation steps

Pavan Nikhilesh (4):
      net/octeontx2: fix device configuration sequence
      eventdev: fix probe and remove for secondary process
      common/octeontx: fix gcc 9.1 ABI break
      app/eventdev: check Tx adapter service ID

Phil Yang (2):
      service: remove rte prefix from static functions
      net/ixgbe: fix link state timing on fiber ports

Qi Zhang (10):
      net/ice: remove unnecessary variable
      net/ice: remove bulk alloc option
      net/ice/base: fix uninitialized stack variables
      net/ice/base: read PSM clock frequency from register
      net/ice/base: minor fixes
      net/ice/base: fix MAC write command
      net/ice/base: fix binary order for GTPU filter
      net/ice/base: remove unused code in switch rule
      net/ice: fix variable initialization
      net/ice: fix RSS for GTPU

Qiming Yang (3):
      net/i40e: fix X722 performance
      doc: fix multicast filter feature announcement
      net/i40e: fix queue related exception handling

Rahul Gupta (2):
      net/bnxt: fix memory leak during queue restart
      net/bnxt: fix Rx ring producer index

Rasesh Mody (3):
      net/qede: fix link state configuration
      net/qede: fix port reconfiguration
      examples/kni: fix MTU change to setup Tx queue

Raslan Darawsheh (4):
      net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
      app/testpmd: add parsing for QinQ VLAN headers
      net/mlx5: fix matching for UDP tunnels with Verbs
      doc: fix build issue in ABI guide

Ray Kinsella (1):
      doc: fix default symbol binding in ABI guide

Rohit Raj (1):
      net/dpaa2: fix 10G port negotiation

Roland Qi (1):
      vhost: fix peer close check

Ruifeng Wang (2):
      test: skip some subtests in no-huge mode
      test/ipsec: fix crash in session destroy

Sarosh Arif (1):
      doc: fix typo in contributors guide

Shougang Wang (2):
      net/ixgbe: fix link status after port reset
      net/i40e: fix queue region in RSS flow

Simei Su (1):
      net/ice: support mark only action for flow director

Sivaprasad Tummala (1):
      vhost: handle mbuf allocation failure

Somnath Kotur (2):
      bus/pci: fix devargs on probing again
      net/bnxt: fix max ring count

Stephen Hemminger (24):
      ethdev: fix spelling
      net/mvneta: do not use PMD log type
      net/virtio: do not use PMD log type
      net/tap: do not use PMD log type
      net/pfe: do not use PMD log type
      net/bnxt: do not use PMD log type
      net/dpaa: use dynamic log type
      net/thunderx: use dynamic log type
      net/netvsc: propagate descriptor limits from VF
      net/netvsc: handle Rx packets during multi-channel setup
      net/netvsc: split send buffers from Tx descriptors
      net/netvsc: fix memory free on device close
      net/netvsc: remove process event optimization
      net/netvsc: handle Tx completions based on burst size
      net/netvsc: avoid possible live lock
      lpm6: fix comments spelling
      eal: fix comments spelling
      net/netvsc: fix comment spelling
      bus/vmbus: fix comment spelling
      net/netvsc: do RSS across Rx queue only
      net/netvsc: do not configure RSS if disabled
      net/tap: fix crash in flow destroy
      eal: fix C++17 compilation
      net/vmxnet3: handle bad host framing

Suanming Mou (3):
      net/mlx5: fix counter container usage
      net/mlx5: fix meter suffix table leak
      net/mlx5: fix jump table leak

Sunil Kumar Kori (1):
      eal: fix log message print for regex

Tao Zhu (3):
      net/ice: fix hash flow crash
      net/ixgbe: fix link status inconsistencies
      net/ixgbe: fix resource leak after thread exits normally

Thomas Monjalon (16):
      drivers/crypto: fix build with make 4.3
      doc: fix sphinx compatibility
      log: fix level picked with globbing on type register
      doc: fix matrix CSS for recent sphinx
      common/mlx5: fix build with -fno-common
      net/mlx4: fix build with -fno-common
      common/mlx5: fix build with rdma-core 21
      app: fix usage help of options separated by dashes
      net/mvpp2: fix build with gcc 10
      examples/vm_power: fix build with -fno-common
      examples/vm_power: drop Unix path limit redefinition
      doc: fix build with doxygen 1.8.18
      doc: fix API index
      doc: fix reference in ABI guide
      net/mlx5: fix build with separate glue lib for dlopen
      buildtools: get static mlx dependencies for meson

Timothy Redaelli (6):
      crypto/octeontx2: fix build with gcc 10
      test: fix build with gcc 10
      app/pipeline: fix build with gcc 10
      examples/vhost_blk: fix build with gcc 10
      examples/eventdev: fix build with gcc 10
      examples/qos_sched: fix build with gcc 10

Ting Xu (1):
      app/testpmd: fix DCB set

Tonghao Zhang (2):
      eal: fix PRNG init with HPET enabled
      net/mlx5: fix crash when releasing meter table

Vadim Podovinnikov (1):
      net/memif: fix resource leak

Vamsi Attunuru (1):
      net/octeontx2: enable error and RAS interrupt in configure

Viacheslav Ovsiienko (2):
      net/mlx5: fix metadata for compressed Rx CQEs
      common/mlx5: fix netlink buffer allocation from stack

Vijaya Mohan Guvva (1):
      bus/pci: fix UIO resource access from secondary process

Vladimir Medvedkin (1):
      ipsec: check SAD lookup error

Wei Hu (Xavier) (10):
      vfio: fix use after free with multiprocess
      net/hns3: fix status after repeated resets
      net/hns3: fix return value when clearing statistics
      app/testpmd: fix statistics after reset
      net/hns3: support different numbers of Rx and Tx queues
      net/hns3: fix Tx interrupt when enabling Rx interrupt
      net/hns3: fix MSI-X interrupt during initialization
      net/hns3: remove unnecessary assignments in Tx
      net/hns3: remove one IO barrier in Rx
      net/hns3: add free threshold in Rx

Wei Zhao (8):
      net/ice: change default tunnel type
      net/ice: add action number check for switch
      net/ice: fix input set of VLAN item
      net/i40e: fix flow director for ARP packets
      doc: add i40e limitation for flow director
      net/i40e: fix flush of flow director filter
      net/i40e: fix wild pointer
      net/i40e: fix flow director enabling

Wisam Jaddo (3):
      net/mlx5: fix zero metadata action
      net/mlx5: fix zero value validation for metadata
      net/mlx5: fix VLAN ID check

Xiao Zhang (1):
      app/testpmd: fix PPPoE flow command

Xiaolong Ye (3):
      net/virtio: fix outdated comment
      vhost: remove unused variable
      doc: fix log level example in Linux guide

Xiaoyu Min (3):
      net/mlx5: fix push VLAN action to use item info
      net/mlx5: fix validation of push VLAN without full mask
      net/mlx5: fix RSS enablement

Xiaoyun Li (4):
      net/ixgbe/base: update copyright
      net/i40e/base: update copyright
      common/iavf: update copyright
      net/ice/base: update copyright

Xiaoyun Wang (7):
      net/hinic: allocate IO memory with socket id
      net/hinic: fix LRO
      net/hinic/base: fix port start during FW hot update
      net/hinic/base: fix PF firmware hot-active problem
      net/hinic: fix queues resource free
      net/hinic: fix Tx mbuf length while copying
      net/hinic: fix TSO

Xuan Ding (2):
      vhost: prevent zero-copy with incompatible client mode
      vhost: fix zero-copy server mode

Yisen Zhuang (1):
      net/hns3: reduce judgements of free Tx ring space

Yunjian Wang (16):
      kvargs: fix buffer overflow when parsing list
      net/tap: remove unused assert
      net/nfp: fix dangling pointer on probe failure
      net/pfe: fix double free of MAC address
      net/tap: fix mbuf double free when writev fails
      net/tap: fix mbuf and mem leak during queue release
      net/tap: fix check for mbuf number of segment
      net/tap: fix file close on remove
      net/tap: fix fd leak on creation failure
      net/tap: fix unexpected link handler
      net/tap: fix queues fd check before close
      net/octeontx: fix dangling pointer on init failure
      crypto/ccp: fix fd leak on probe failure
      net/failsafe: fix fd leak
      crypto/caam_jr: fix check of file descriptors
      crypto/caam_jr: fix IRQ functions return type

Yuri Chipchev (1):
      event/dsw: fix enqueue burst return value

Zhihong Peng (1):
      net/ixgbe: fix link status synchronization on BSD

^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [EXTERNAL] 19.11.3 patches review and test
  2020-06-03 19:43  3% [dpdk-dev] 19.11.3 patches review and test luca.boccassi
                   ` (2 preceding siblings ...)
  2020-06-16 14:20  0% ` Govindharajan, Hariprasad
@ 2020-06-18 18:11  3% ` Abhishek Marathe
  2020-06-18 18:17  0%   ` Luca Boccassi
  3 siblings, 1 reply; 200+ results
From: Abhishek Marathe @ 2020-06-18 18:11 UTC (permalink / raw)
  To: luca.boccassi, stable
  Cc: dev, Akhil Goyal, Ali Alnubani, benjamin.walker,
	David Christensen, Hemant Agrawal, Ian Stokes, Jerin Jacob,
	John McNamara, Ju-Hyoung Lee, Kevin Traynor, Pei Zhang, pingx.yu,
	qian.q.xu, Raslan Darawsheh, Thomas Monjalon, yuan.peng,
	zhaoyan.chen

Hi Luca,

All test cases pass for DPDK LTS 19.11.3. The failed test cases below were double-checked and no issues were found.

Test Report:

DPDK https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz was validated on Azure for Canonical UbuntuServer 16.04-LTS latest, Canonical UbuntuServer 18.04-DAILY-LTS latest, RedHat RHEL 7-RAW latest, RedHat RHEL 7.5 latest, Openlogic CentOS 7.5 latest, SUSE SLES-15-sp1 gen1 latest.
Tested with Mellanox and netvsc poll-mode drivers.
The tests were executed using LISAv2 framework (https://github.com/LIS/LISAv2).

Test case description:

* VERIFY-DPDK-COMPLIANCE - verifies kernel is supported and that the build is successful
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST - verifies using testpmd that packets can be sent from a VM to another VM
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK - disables/enables Accelerated Networking for the NICs under test and makes sure DPDK works in both scenarios (see the port-enumeration sketch after this group)
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC - disables/enables Accelerated Networking for the NICs while generating traffic using testpmd
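
As a minimal illustration of what the failsafe cases depend on (a hypothetical probe, not the LISAv2 test code): the failsafe/netvsc design keeps the DPDK port id valid while the underlying VF is removed and re-added, so an enumeration like the one below should report the same ports before and after the Accelerated Networking toggle.

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        uint16_t pid;
        RTE_ETH_FOREACH_DEV(pid) {
                struct rte_eth_dev_info info;

                /* returns int since 19.11; ignored here for brevity */
                rte_eth_dev_info_get(pid, &info);
                printf("port %u: driver %s\n", pid, info.driver_name);
        }
        rte_eal_cleanup();
        return 0;
}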

* PERF-DPDK-FWD-PPS-DS15 - verifies DPDK forwarding performance using testpmd on 2, 4, 8 cores, rx and io mode on size Standard_DS15_v2 (PPS sampling is sketched after this group)
* PERF-DPDK-SINGLE-CORE-PPS-DS4 - verifies DPDK performance using testpmd on 1 core, rx and io mode on size Standard_DS4_v2
* PERF-DPDK-SINGLE-CORE-PPS-DS15 - verifies DPDK performance using testpmd on 1 core, rx and io mode on size Standard_DS15_v2
* PERF-DPDK-MULTICORE-PPS-DS15 - verifies DPDK performance using testpmd on 2, 4, 8 cores, rx and io mode on size Standard_DS15_v2
* PERF-DPDK-MULTICORE-PPS-F32 - verifies DPDK performance using testpmd on 2, 4, 8, 16 cores, rx and io mode on size Standard_F32s_v2
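
The PERF-DPDK-*-PPS cases above report packets per second as their headline metric. Below is a minimal sketch of how such a rate can be sampled (illustrative only, not the LISAv2 harness itself; port 0 and the 10-second window are assumptions) using the stable 19.11 ethdev counters:

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        uint16_t port_id = 0;   /* assumes port 0 is already configured and started */
        struct rte_eth_stats start, end;

        rte_eth_stats_get(port_id, &start);
        rte_delay_ms(10 * 1000);        /* 10 second sampling window */
        rte_eth_stats_get(port_id, &end);

        printf("rx pps: %.0f\n", (double)(end.ipackets - start.ipackets) / 10.0);
        rte_eal_cleanup();
        return 0;
}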

* DPDK-RING-LATENCY - verifies DPDK CPU latency using https://github.com/shemminger/dpdk-ring-ping.git (the timing loop is sketched after this list)
* VERIFY-DPDK-PRIMARY-SECONDARY-PROCESSES - verifies primary / secondary processes support for DPDK. Runs only on RHEL and Ubuntu distros with Linux kernel >= 4.20
* VERIFY-DPDK-OVS - builds OVS with DPDK support and tests if the OVS DPDK ports can be created. Runs only on Ubuntu distro.
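
dpdk-ring-ping, referenced above, estimates CPU latency by bouncing tokens through rte_rings between cores. The single-core sketch below captures only the general timing idea (a sketch of the concept, not the tool's actual code): time the raw enqueue/dequeue round trip in TSC cycles, then watch for spikes caused by scheduling interference.

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>
#include <rte_cycles.h>
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        struct rte_ring *r = rte_ring_create("ping", 64, rte_socket_id(),
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL)
                return 1;

        void *token = (void *)(uintptr_t)1, *out;
        uint64_t start = rte_rdtsc();

        for (int i = 0; i < 1000000; i++) {
                rte_ring_enqueue(r, token);
                rte_ring_dequeue(r, &out);
        }

        printf("avg enqueue+dequeue: %.1f TSC cycles\n",
               (rte_rdtsc() - start) / 1e6);
        rte_eal_cleanup();
        return 0;
}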

DPDK job exited with status: UNSTABLE - https://linuxpipeline.westus2.cloudapp.azure.com/job/DPDK/job/pipeline-dpdk-validation/job/master/1027/.

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Canonical UbuntuServer 16.04-LTS latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: PASSED 
* PERF-DPDK-MULTICORE-PPS-F32: FAILED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: PASSED 
* PERF-DPDK-FWD-PPS-DS15: ABORTED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
* PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: PASSED 

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Canonical UbuntuServer 18.04-DAILY-LTS latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: PASSED 
* PERF-DPDK-MULTICORE-PPS-F32: PASSED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
* PERF-DPDK-FWD-PPS-DS15: PASSED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
* PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: PASSED 

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'RedHat RHEL 7-RAW latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: ABORTED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: SKIPPED 
* PERF-DPDK-MULTICORE-PPS-F32: FAILED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: PASSED 
* PERF-DPDK-FWD-PPS-DS15: FAILED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
* PERF-DPDK-MULTICORE-PPS-DS15: FAILED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: PASSED 

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'RedHat RHEL 7.5 latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: SKIPPED 
* PERF-DPDK-MULTICORE-PPS-F32: PASSED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
* PERF-DPDK-FWD-PPS-DS15: ABORTED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: PASSED 
* PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: ABORTED 

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Openlogic CentOS 7.5 latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: SKIPPED 
* PERF-DPDK-MULTICORE-PPS-F32: ABORTED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
* PERF-DPDK-FWD-PPS-DS15: ABORTED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: PASSED 
* PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: PASSED 

Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'SUSE SLES-15-sp1 gen1 latest':
 
* PERF-DPDK-SINGLE-CORE-PPS-DS4: FAILED 
* VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
* VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
* VERIFY-DPDK-OVS: SKIPPED 
* PERF-DPDK-MULTICORE-PPS-F32: ABORTED 
* VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
* PERF-DPDK-FWD-PPS-DS15: ABORTED 
* PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
* PERF-DPDK-MULTICORE-PPS-DS15: ABORTED 
* VERIFY-DPDK-COMPLIANCE: PASSED 
* VERIFY-DPDK-RING-LATENCY: PASSED 

Regards,
Abhishek

-----Original Message-----
From: luca.boccassi@gmail.com <luca.boccassi@gmail.com> 
Sent: Wednesday, June 3, 2020 12:44 PM
To: stable@dpdk.org
Cc: dev@dpdk.org; Abhishek Marathe <Abhishek.Marathe@microsoft.com>; Akhil Goyal <akhil.goyal@nxp.com>; Ali Alnubani <alialnu@mellanox.com>; benjamin.walker@intel.com; David Christensen <drc@linux.vnet.ibm.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Ian Stokes <ian.stokes@intel.com>; Jerin Jacob <jerinj@marvell.com>; John McNamara <john.mcnamara@intel.com>; Ju-Hyoung Lee <juhlee@microsoft.com>; Kevin Traynor <ktraynor@redhat.com>; Pei Zhang <pezhang@redhat.com>; pingx.yu@intel.com; qian.q.xu@intel.com; Raslan Darawsheh <rasland@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; yuan.peng@intel.com; zhaoyan.chen@intel.com
Subject: [EXTERNAL] 19.11.3 patches review and test

Hi all,

Here is a list of patches targeted for stable release 19.11.3.

The planned date for the final release is the 17th of June.

Please help with testing and validation of your use cases and report any issues/results with reply-all to this mail. For the final release the fixes and reported validations will be added to the release notes.

A release candidate tarball can be found at:

    https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1

These patches are located at branch 19.11 of dpdk-stable repo:
    https://dpdk.org/browse/dpdk-stable/

Thanks.

Luca Boccassi

---
Adam Dybkowski (5):
      cryptodev: fix missing device id range checking
      common/qat: fix GEN3 marketing name
      app/crypto-perf: fix display of sample test vector
      crypto/qat: support plain SHA1..SHA512 hashes
      cryptodev: fix SHA-1 digest enum comment

Ajit Khaparde (3):
      net/bnxt: fix FW version query
      net/bnxt: fix error log for command timeout
      net/bnxt: fix using RSS config struct

Akhil Goyal (1):
      ipsec: fix build dependency on hash lib

Alex Kiselev (1):
      lpm6: fix size of tbl8 group

Alex Marginean (1):
      net/enetc: fix Rx lock-up

Alexander Kozyrev (8):
      net/mlx5: reduce Tx completion index memory loads
      net/mlx5: add device parameter for MPRQ stride size
      net/mlx5: enable MPRQ multi-stride operations
      net/mlx5: add multi-segment packets in MPRQ mode
      net/mlx5: set dynamic flow metadata in Rx queues
      net/mlx5: improve logging of MPRQ selection
      net/mlx5: fix assert in dynamic metadata handling
      net/mlx5: fix Tx queue release debug log timing

Alvin Zhang (2):
      net/iavf: fix link speed
      net/e1000: fix port hotplug for multi-process

Amit Gupta (1):
      net/octeontx: fix meson build for disabled drivers

Anatoly Burakov (1):
      mem: preallocate VA space in no-huge mode

Andrew Rybchenko (4):
      net/sfc: fix reported promiscuous/multicast mode
      net/sfc/base: use simpler EF10 family conditional check
      net/sfc/base: use simpler EF10 family run-time checks
      net/sfc/base: fix build when EVB is enabled

Andy Pei (1):
      net/ipn3ke: use control thread to check link status

Ankur Dwivedi (1):
      net/octeontx2: fix buffer size assignment

Apeksha Gupta (2):
      bus/fslmc: fix dereferencing null pointer
      test/crypto: fix statistics case

Archana Muniganti (1):
      examples/fips_validation: fix parsing of algorithms

Arek Kusztal (1):
      crypto/qat: fix cipher descriptor for ZUC and SNOW

Asaf Penso (2):
      net/mlx5: fix call to modify action without init item
      net/mlx5: fix assert in doorbell lookup

Ashish Gupta (1):
      net/octeontx2: fix link information for loopback port

Asim Jamshed (1):
      fib: fix headers for C++ support

Bernard Iremonger (1):
      net/i40e: fix flow director initialisation

Bing Zhao (6):
      net/mlx5: fix header modify action validation
      net/mlx5: fix actions validation on root table
      net/mlx5: fix assert in modify converting
      mk: fix static linkage of mlx dependency
      mem: fix overflow on allocation
      net/mlx5: fix doorbell bitmap management offsets

Bruce Richardson (3):
      pci: remove unneeded includes in public header file
      pci: fix build on FreeBSD
      drivers: fix log type variables for -fno-common

Cheng Peng (1):
      net/iavf: fix stats query error code

Chengchang Tang (3):
      net/hns3: fix promiscuous mode for PF
      net/hns3: fix default VLAN filter configuration for PF
      net/hns3: fix VLAN filter when setting promiscuous mode

Chengwen Feng (7):
      net/hns3: fix packets offload features flags in Rx
      net/hns3: fix default error code of command interface
      net/hns3: fix crash when flushing RSS flow rules with FLR
      net/hns3: fix return value of setting VLAN offload
      net/hns3: clear residual flow rules on init
      net/hns3: fix Rx interrupt after reset
      net/hns3: replace memory barrier with data dependency order

Ciara Power (1):
      telemetry: fix port stats retrieval

Darek Stojaczyk (1):
      pci: accept 32-bit domain numbers

David Christensen (2):
      pci: fix build on ppc
      eal/ppc: fix build with gcc 9.3

David Marchand (5):
      mem: mark pages as not accessed when reserving VA
      test: load drivers when required
      eal: fix typo in endian conversion macros
      remove references to private PCI probe function
      doc: prefer https when pointing to dpdk.org

Dekel Peled (7):
      net/mlx5: fix mask used for IPv6 item validation
      net/mlx5: fix CVLAN tag set in IP item translation
      net/mlx5: update VLAN and encap actions validation
      net/mlx5: fix match on empty VLAN item in DV mode
      common/mlx5: fix umem buffer alignment
      net/mlx5: fix VLAN flow action with wildcard VLAN item
      net/mlx5: fix RSS key copy to TIR context

Dmitry Kozlyuk (2):
      build: fix linker warnings with clang on Windows
      build: support MinGW-w64 with Meson

Eduard Serra (1):
      net/vmxnet3: fix RSS setting on v4

Eugeny Parshutin (1):
      ethdev: fix build when vtune profiling is on

Fady Bader (1):
      mempool: remove inline functions from export list

Fan Zhang (1):
      vhost/crypto: add missing user protocol flag

Ferruh Yigit (7):
      net/nfp: fix log format specifiers
      net/null: fix secondary burst function selection
      net/null: remove redundant check
      mempool/octeontx2: fix build for gcc O1 optimization
      net/ena: fix build for O1 optimization
      event/octeontx2: fix build for O1 optimization
      examples/kni: fix crash during MTU set

Gaetan Rivet (5):
      doc: fix number of failsafe sub-devices
      net/ring: fix device pointer on allocation
      pci: reject negative values in PCI id
      doc: fix typos in ABI policy
      kvargs: fix strcmp helper documentation

Gavin Hu (2):
      net/i40e: relax barrier in Tx
      net/i40e: relax barrier in Tx for NEON

Guinan Sun (2):
      net/ixgbe: fix statistics in flow control mode
      net/ixgbe: check driver type in MACsec API

Haifeng Lin (1):
      eal/arm64: fix precise TSC

Haiyue Wang (1):
      net/ice/base: check memory pointer before copying

Hao Chen (1):
      net/hns3: support Rx interrupt

Harry van Haaren (3):
      service: fix crash on exit
      examples/eventdev: fix crash on exit
      test/flow_classify: enable multi-sockets system

Hemant Agrawal (3):
      drivers: add crypto as dependency for event drivers
      bus/fslmc: fix size of qman fq descriptor
      mempool/dpaa2: install missing header with meson

Honnappa Nagarahalli (3):
      timer: protect initialization with lock
      service: fix race condition for MT unsafe service
      service: fix identification of service running on other lcore

Hyong Youb Kim (1):
      net/enic: fix flow action reordering

Igor Chauskin (2):
      net/ena/base: make allocation macros thread-safe
      net/ena/base: prevent allocation of zero sized memory

Igor Romanov (9):
      net/sfc: fix initialization error path
      net/sfc: fix Rx queue start failure path
      net/sfc: fix promiscuous and allmulticast toggles errors
      net/sfc: set priority of created filters to manual
      net/sfc/base: reduce filter priorities to implemented only
      net/sfc/base: reject automatic filter creation by users
      net/sfc/base: refactor filter lookup loop in EF10
      net/sfc/base: handle manual and auto filter clashes in EF10
      net/sfc/base: fix manual filter delete in EF10

Itsuro Oda (2):
      net/vhost: fix potential memory leak on close
      vhost: make IOTLB cache name unique among processes

Ivan Dyukov (3):
      net/virtio-user: fix devargs parsing
      app: remove extra new line after link duplex
      examples: remove extra new line after link duplex

Jasvinder Singh (3):
      net/softnic: fix memory leak for thread
      net/softnic: fix resource leak for pipeline
      examples/ip_pipeline: remove check of null response

Jeff Guo (3):
      net/i40e: fix setting L2TAG
      net/iavf: fix setting L2TAG
      net/ice: fix setting L2TAG

Jiawei Wang (1):
      net/mlx5: fix imissed counter overflow

Jim Harris (1):
      contigmem: cleanup properly when load fails

Jun Yang (1):
      net/dpaa2: fix congestion ID for multiple traffic classes

Junyu Jiang (4):
      examples/vmdq: fix output of pools/queues
      examples/vmdq: fix RSS configuration
      net/ice: fix RSS advanced rule
      net/ice: fix crash in switch filter

Juraj Linkeš (1):
      ci: fix telemetry dependency in Travis

Július Milan (1):
      net/memif: fix init when already connected

Kalesh AP (9):
      net/bnxt: fix HWRM command during FW reset
      net/bnxt: use true/false for bool types
      net/bnxt: fix port start failure handling
      net/bnxt: fix VLAN add when port is stopped
      net/bnxt: fix VNIC Rx queue count on VNIC free
      net/bnxt: fix number of TQM ring
      net/bnxt: fix TQM ring context memory size
      app/testpmd: fix memory failure handling for i40e DDP
      net/bnxt: fix storing MAC address twice

Kevin Traynor (9):
      net/hinic: fix snprintf length of cable info
      net/hinic: fix repeating cable log and length check
      net/avp: fix gcc 10 maybe-uninitialized warning
      examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
      eal/x86: ignore gcc 10 stringop-overflow warnings
      net/mlx5: fix gcc 10 enum-conversion warning
      crypto/kasumi: fix extern declaration
      drivers/crypto: disable gcc 10 no-common errors
      build: disable gcc 10 zero-length-bounds warning

Konstantin Ananyev (1):
      security: fix crash at accessing non-implemented ops

Lijun Ou (4):
      net/hns3: fix configuring RSS hash when rules are flushed
      net/hns3: add RSS hash offload to capabilities
      net/hns3: fix RSS key length
      net/hns3: fix RSS indirection table configuration

Linsi Yuan (1):
      net/bnxt: fix possible stack smashing

Louise Kilheeney (1):
      examples/l2fwd-keepalive: fix mbuf pool size

Luca Boccassi (4):
      fix various typos found by Lintian
      usertools: check for pci.ids in /usr/share/misc
      Revert "net/bnxt: fix TQM ring context memory size"
      Revert "net/bnxt: fix number of TQM ring"

Lukasz Bartosik (1):
      event/octeontx2: fix queue removal from Rx adapter

Lukasz Wojciechowski (5):
      drivers/crypto: fix log type variables for -fno-common
      security: fix verification of parameters
      security: fix return types in documentation
      security: fix session counter
      test: remove redundant macro

Marvin Liu (5):
      vhost: fix packed ring zero-copy
      vhost: fix shadow update
      vhost: fix shadowed descriptors not flushed
      net/virtio: fix crash when device reconnecting
      net/virtio: fix unexpected event after reconnect

Matteo Croce (1):
      doc: fix LTO config option

Mattias Rönnblom (3):
      event/dsw: remove redundant control ring poll
      event/dsw: remove unnecessary read barrier
      event/dsw: avoid reusing previously recorded events

Michael Baum (2):
      net/mlx5: fix meter color register consideration
      net/mlx4: fix drop queue error handling

Michael Haeuptle (1):
      vfio: fix race condition with sysfs

Michal Krawczyk (5):
      net/ena/base: fix documentation of functions
      net/ena/base: fix indentation in CQ polling
      net/ena/base: fix indentation of multiple defines
      net/ena: set IO ring size to valid value
      net/ena/base: fix testing for supported hash function

Min Hu (Connor) (3):
      net/hns3: fix configuring illegal VLAN PVID
      net/hns3: fix mailbox opcode data type
      net/hns3: fix VLAN PVID when configuring device

Mit Matelske (1):
      eal/freebsd: fix queuing duplicate alarm callbacks

Mohsin Shaikh (1):
      net/mlx5: use open/read/close for ib stats query

Muhammad Bilal (2):
      fix same typo in multiple places
      doc: fix typo in contributors guide

Nagadheeraj Rottela (2):
      crypto/nitrox: fix CSR register address generation
      crypto/nitrox: fix oversized device name

Nicolas Chautru (2):
      baseband/turbo_sw: fix exposed LLR decimals assumption
      bbdev: fix doxygen comments

Nithin Dabilpuram (2):
      devtools: fix symbol map change check
      net/octeontx2: disable unnecessary error interrupts

Olivier Matz (3):
      test/kvargs: fix to consider empty elements as valid
      test/kvargs: fix invalid cases check
      kvargs: fix invalid token parsing on FreeBSD

Ophir Munk (1):
      net/mlx5: fix VLAN PCP item calculation

Ori Kam (1):
      eal/ppc: fix bool type after altivec include

Pablo de Lara (4):
      cryptodev: add asymmetric session-less feature name
      test/crypto: fix flag check
      crypto/openssl: fix out-of-place encryption
      doc: add NASM installation steps

Pavan Nikhilesh (4):
      net/octeontx2: fix device configuration sequence
      eventdev: fix probe and remove for secondary process
      common/octeontx: fix gcc 9.1 ABI break
      app/eventdev: check Tx adapter service ID

Phil Yang (2):
      service: remove rte prefix from static functions
      net/ixgbe: fix link state timing on fiber ports

Qi Zhang (10):
      net/ice: remove unnecessary variable
      net/ice: remove bulk alloc option
      net/ice/base: fix uninitialized stack variables
      net/ice/base: read PSM clock frequency from register
      net/ice/base: minor fixes
      net/ice/base: fix MAC write command
      net/ice/base: fix binary order for GTPU filter
      net/ice/base: remove unused code in switch rule
      net/ice: fix variable initialization
      net/ice: fix RSS for GTPU

Qiming Yang (3):
      net/i40e: fix X722 performance
      doc: fix multicast filter feature announcement
      net/i40e: fix queue related exception handling

Rahul Gupta (2):
      net/bnxt: fix memory leak during queue restart
      net/bnxt: fix Rx ring producer index

Rasesh Mody (3):
      net/qede: fix link state configuration
      net/qede: fix port reconfiguration
      examples/kni: fix MTU change to setup Tx queue

Raslan Darawsheh (4):
      net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
      app/testpmd: add parsing for QinQ VLAN headers
      net/mlx5: fix matching for UDP tunnels with Verbs
      doc: fix build issue in ABI guide

Ray Kinsella (1):
      doc: fix default symbol binding in ABI guide

Rohit Raj (1):
      net/dpaa2: fix 10G port negotiation

Roland Qi (1):
      vhost: fix peer close check

Ruifeng Wang (2):
      test: skip some subtests in no-huge mode
      test/ipsec: fix crash in session destroy

Sarosh Arif (1):
      doc: fix typo in contributors guide

Shougang Wang (2):
      net/ixgbe: fix link status after port reset
      net/i40e: fix queue region in RSS flow

Simei Su (1):
      net/ice: support mark only action for flow director

Sivaprasad Tummala (1):
      vhost: handle mbuf allocation failure

Somnath Kotur (2):
      bus/pci: fix devargs on probing again
      net/bnxt: fix max ring count

Stephen Hemminger (24):
      ethdev: fix spelling
      net/mvneta: do not use PMD log type
      net/virtio: do not use PMD log type
      net/tap: do not use PMD log type
      net/pfe: do not use PMD log type
      net/bnxt: do not use PMD log type
      net/dpaa: use dynamic log type
      net/thunderx: use dynamic log type
      net/netvsc: propagate descriptor limits from VF
      net/netvsc: handle Rx packets during multi-channel setup
      net/netvsc: split send buffers from Tx descriptors
      net/netvsc: fix memory free on device close
      net/netvsc: remove process event optimization
      net/netvsc: handle Tx completions based on burst size
      net/netvsc: avoid possible live lock
      lpm6: fix comments spelling
      eal: fix comments spelling
      net/netvsc: fix comment spelling
      bus/vmbus: fix comment spelling
      net/netvsc: do RSS across Rx queue only
      net/netvsc: do not configure RSS if disabled
      net/tap: fix crash in flow destroy
      eal: fix C++17 compilation
      net/vmxnet3: handle bad host framing

Suanming Mou (3):
      net/mlx5: fix counter container usage
      net/mlx5: fix meter suffix table leak
      net/mlx5: fix jump table leak

Sunil Kumar Kori (1):
      eal: fix log message print for regex

Tao Zhu (3):
      net/ice: fix hash flow crash
      net/ixgbe: fix link status inconsistencies
      net/ixgbe: fix resource leak after thread exits normally

Thomas Monjalon (13):
      drivers/crypto: fix build with make 4.3
      doc: fix sphinx compatibility
      log: fix level picked with globbing on type register
      doc: fix matrix CSS for recent sphinx
      common/mlx5: fix build with -fno-common
      net/mlx4: fix build with -fno-common
      common/mlx5: fix build with rdma-core 21
      app: fix usage help of options separated by dashes
      net/mvpp2: fix build with gcc 10
      examples/vm_power: fix build with -fno-common
      examples/vm_power: drop Unix path limit redefinition
      doc: fix build with doxygen 1.8.18
      doc: fix API index

Timothy Redaelli (6):
      crypto/octeontx2: fix build with gcc 10
      test: fix build with gcc 10
      app/pipeline: fix build with gcc 10
      examples/vhost_blk: fix build with gcc 10
      examples/eventdev: fix build with gcc 10
      examples/qos_sched: fix build with gcc 10

Ting Xu (1):
      app/testpmd: fix DCB set

Tonghao Zhang (2):
      eal: fix PRNG init with HPET enabled
      net/mlx5: fix crash when releasing meter table

Vadim Podovinnikov (1):
      net/memif: fix resource leak

Vamsi Attunuru (1):
      net/octeontx2: enable error and RAS interrupt in configure

Viacheslav Ovsiienko (2):
      net/mlx5: fix metadata for compressed Rx CQEs
      common/mlx5: fix netlink buffer allocation from stack

Vijaya Mohan Guvva (1):
      bus/pci: fix UIO resource access from secondary process

Vladimir Medvedkin (1):
      ipsec: check SAD lookup error

Wei Hu (Xavier) (10):
      vfio: fix use after free with multiprocess
      net/hns3: fix status after repeated resets
      net/hns3: fix return value when clearing statistics
      app/testpmd: fix statistics after reset
      net/hns3: support different numbers of Rx and Tx queues
      net/hns3: fix Tx interrupt when enabling Rx interrupt
      net/hns3: fix MSI-X interrupt during initialization
      net/hns3: remove unnecessary assignments in Tx
      net/hns3: remove one IO barrier in Rx
      net/hns3: add free threshold in Rx

Wei Zhao (8):
      net/ice: change default tunnel type
      net/ice: add action number check for switch
      net/ice: fix input set of VLAN item
      net/i40e: fix flow director for ARP packets
      doc: add i40e limitation for flow director
      net/i40e: fix flush of flow director filter
      net/i40e: fix wild pointer
      net/i40e: fix flow director enabling

Wisam Jaddo (3):
      net/mlx5: fix zero metadata action
      net/mlx5: fix zero value validation for metadata
      net/mlx5: fix VLAN ID check

Xiao Zhang (1):
      app/testpmd: fix PPPoE flow command

Xiaolong Ye (3):
      net/virtio: fix outdated comment
      vhost: remove unused variable
      doc: fix log level example in Linux guide

Xiaoyu Min (3):
      net/mlx5: fix push VLAN action to use item info
      net/mlx5: fix validation of push VLAN without full mask
      net/mlx5: fix RSS enablement

Xiaoyun Li (4):
      net/ixgbe/base: update copyright
      net/i40e/base: update copyright
      common/iavf: update copyright
      net/ice/base: update copyright

Xiaoyun Wang (7):
      net/hinic: allocate IO memory with socket id
      net/hinic: fix LRO
      net/hinic/base: fix port start during FW hot update
      net/hinic/base: fix PF firmware hot-active problem
      net/hinic: fix queues resource free
      net/hinic: fix Tx mbuf length while copying
      net/hinic: fix TSO

Xuan Ding (2):
      vhost: prevent zero-copy with incompatible client mode
      vhost: fix zero-copy server mode

Yisen Zhuang (1):
      net/hns3: reduce judgements of free Tx ring space

Yunjian Wang (16):
      kvargs: fix buffer overflow when parsing list
      net/tap: remove unused assert
      net/nfp: fix dangling pointer on probe failure
      net/pfe: fix double free of MAC address
      net/tap: fix mbuf double free when writev fails
      net/tap: fix mbuf and mem leak during queue release
      net/tap: fix check for mbuf number of segment
      net/tap: fix file close on remove
      net/tap: fix fd leak on creation failure
      net/tap: fix unexpected link handler
      net/tap: fix queues fd check before close
      net/octeontx: fix dangling pointer on init failure
      crypto/ccp: fix fd leak on probe failure
      net/failsafe: fix fd leak
      crypto/caam_jr: fix check of file descriptors
      crypto/caam_jr: fix IRQ functions return type

Yuri Chipchev (1):
      event/dsw: fix enqueue burst return value

Zhihong Peng (1):
      net/ixgbe: fix link status synchronization on BSD


* Re: [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  2020-06-18 16:28  4% ` [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration Matan Azrad
@ 2020-06-19  6:44  0%   ` Maxime Coquelin
  2020-06-19 13:28  0%     ` Matan Azrad
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2020-06-19  6:44 UTC (permalink / raw)
  To: Matan Azrad, Xiao Wang; +Cc: dev



On 6/18/20 6:28 PM, Matan Azrad wrote:
> As groundwork for per-queue operations in the vDPA device, the
> following experimental API needs to change:
> 
> The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
> instead of per device.
> 
> A `qid` parameter was added to the API arguments list.
> 
> Setting the parameter to the value VHOST_QUEUE_ALL will configure the
> host notifier to all the device queues as done before this patch.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> ---
>  doc/guides/rel_notes/release_20_08.rst |  2 ++
>  drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
>  drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
>  lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
>  lib/librte_vhost/rte_vhost.h           |  2 ++
>  lib/librte_vhost/vhost.h               |  3 ---
>  lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
>  7 files changed, 30 insertions(+), 14 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> index ba16d3b..9732959 100644
> --- a/doc/guides/rel_notes/release_20_08.rst
> +++ b/doc/guides/rel_notes/release_20_08.rst
> @@ -111,6 +111,8 @@ API Changes
>     Also, make sure to start the actual text at the margin.
>     =========================================================
>  
> +* vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to be per
> +  queue and not per device, a qid parameter was added to the arguments list.
>  
>  ABI Changes
>  -----------
> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> index ec97178..336837a 100644
> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> @@ -839,7 +839,7 @@ struct internal_list {
>  	vdpa_ifcvf_stop(internal);
>  	vdpa_disable_vfio_intr(internal);
>  
> -	ret = rte_vhost_host_notifier_ctrl(vid, false);
> +	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
>  	if (ret && ret != -ENOTSUP)
>  		goto error;
>  
> @@ -858,7 +858,7 @@ struct internal_list {
>  	if (ret)
>  		goto stop_vf;
>  
> -	rte_vhost_host_notifier_ctrl(vid, true);
> +	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
>  
>  	internal->sw_fallback_running = true;
>  
> @@ -893,7 +893,7 @@ struct internal_list {
>  	rte_atomic32_set(&internal->dev_attached, 1);
>  	update_datapath(internal);
>  
> -	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
> +	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
>  		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
>  
>  	return 0;
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
> index 9e758b6..8ea1300 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
> @@ -147,7 +147,8 @@
>  	int ret;
>  
>  	if (priv->direct_notifier) {
> -		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
> +		ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
> +						   false);
>  		if (ret != 0) {
>  			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
>  				"destroyed for device %d: %d.", priv->vid, ret);
> @@ -155,7 +156,7 @@
>  		}
>  		priv->direct_notifier = 0;
>  	}
> -	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
> +	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL, true);
>  	if (ret != 0)
>  		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured for"
>  			" device %d: %d.", priv->vid, ret);
> diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
> index ecb3d91..2db536c 100644
> --- a/lib/librte_vhost/rte_vdpa.h
> +++ b/lib/librte_vhost/rte_vdpa.h
> @@ -202,22 +202,26 @@ struct rte_vdpa_device *
>  int
>  rte_vdpa_get_device_num(void);
>  
> +#define VHOST_QUEUE_ALL VHOST_MAX_VRING
> +
>  /**
>   * @warning
>   * @b EXPERIMENTAL: this API may change without prior notice
>   *
> - * Enable/Disable host notifier mapping for a vdpa port.
> + * Enable/Disable host notifier mapping for a vdpa queue.
>   *
>   * @param vid
>   *  vhost device id
>   * @param enable
>   *  true for host notifier map, false for host notifier unmap
> + * @param qid
> + *  vhost queue id, VHOST_QUEUE_ALL to configure all the device queues
I would prefer two APIs rather than passing a special ID that means all queues:

rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
rte_vhost_host_notifier_ctrl_all(int vid, bool enable);

I think it is clearer for the user of the API.
Or if you think an extra API is overkill, just let the driver loop on
all the queues.
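
Something like this minimal sketch, assuming the per-queue variant
from this patch plus the existing rte_vhost_get_vring_num() helper
(the wrapper name and its error handling are illustrative only, not
part of the patch):

#include <rte_vhost.h>
#include <rte_vdpa.h>

static int
notifier_ctrl_all_queues(int vid, bool enable)
{
	uint16_t nr_vring = rte_vhost_get_vring_num(vid);
	uint16_t qid;
	int ret;

	/* Apply the host notifier setting to every queue in turn. */
	for (qid = 0; qid < nr_vring; qid++) {
		ret = rte_vhost_host_notifier_ctrl(vid, qid, enable);
		if (ret != 0)
			return ret;
	}
	return 0;
}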

>   * @return
>   *  0 on success, -1 on failure
>   */
>  __rte_experimental
>  int
> -rte_vhost_host_notifier_ctrl(int vid, bool enable);
> +rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
>  
>  /**



* [dpdk-dev] [PATCH v6 9/9] build: generate version.map file for MingW on Windows
  @ 2020-06-18 21:15  3%   ` talshn
  0 siblings, 0 replies; 200+ results
From: talshn @ 2020-06-18 21:15 UTC (permalink / raw)
  To: dev
  Cc: thomas, pallavi.kadam, dmitry.kozliuk, david.marchand, grive,
	ranjit.menon, navasile, harini.ramakrishnan, ocardona,
	anatoly.burakov, fady, bruce.richardson, Tal Shnaiderman

From: Tal Shnaiderman <talshn@mellanox.com>

The MinGW build for Windows has special cases where exported
functions contain an additional prefix:

__emutls_v.per_lcore__*

To avoid adding those prefixed functions to the version.map file,
the map_to_def.py script was modified to create a map file for MinGW
with the needed changes.

The file name was changed to map_to_win.py.

Signed-off-by: Tal Shnaiderman <talshn@mellanox.com>
---
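Note: for illustration (hypothetical symbol names, not taken from a
real map file), given a version.map fragment such as:

    DPDK_20.0 {
        global:
        per_lcore__lcore_id;
        rte_eal_init;
    };

the MSVC-style exports.def keeps only the sorted function names under
an EXPORTS header, while the generated *_mingw.map preserves the map
structure and only renames the TLS symbols:

    DPDK_20.0 {
        global:
        __emutls_v.per_lcore__lcore_id;
        rte_eal_init;
    };
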
 buildtools/{map_to_def.py => map_to_win.py} | 21 ++++++++++++++++-----
 buildtools/meson.build                      |  4 ++--
 drivers/meson.build                         | 12 +++++++++---
 lib/meson.build                             | 15 ++++++++++++---
 4 files changed, 39 insertions(+), 13 deletions(-)
 rename buildtools/{map_to_def.py => map_to_win.py} (51%)

diff --git a/buildtools/map_to_def.py b/buildtools/map_to_win.py
similarity index 51%
rename from buildtools/map_to_def.py
rename to buildtools/map_to_win.py
index 6775b54a9d..dfb0748159 100644
--- a/buildtools/map_to_def.py
+++ b/buildtools/map_to_win.py
@@ -13,23 +13,34 @@ def is_function_line(ln):
 
 def main(args):
     if not args[1].endswith('version.map') or \
-            not args[2].endswith('exports.def'):
+            not args[2].endswith('exports.def') and \
+            not args[2].endswith('mingw.map'):
         return 1
 
 # special case, allow override if an def file already exists alongside map file
+# for mingw also replace per_lcore__* to __emutls_v.per_lcore__*
     override_file = join(dirname(args[1]), basename(args[2]))
     if exists(override_file):
         with open(override_file) as f_in:
-            functions = f_in.readlines()
+            lines = f_in.readlines()
+            if args[2].endswith('mingw.map'):
+                lines = [l.replace('per_lcore__', '__emutls_v.per_lcore__') for l in lines]
+            functions = lines
 
 # generate def file from map file.
-# This works taking indented lines only which end with a ";" and which don't
+# For clang this works taking indented lines only which end with a ";" and which don't
 # have a colon in them, i.e. the lines defining functions only.
+# mingw keeps the original .map file but replaces per_lcore__* to __emutls_v.per_lcore__*
     else:
         with open(args[1]) as f_in:
-            functions = [ln[:-2] + '\n' for ln in sorted(f_in.readlines())
+            lines = f_in.readlines()
+            if args[2].endswith('mingw.map'):
+                lines = [l.replace('per_lcore__', '__emutls_v.per_lcore__') for l in lines]
+                functions = lines
+            else:
+                functions = [ln[:-2] + '\n' for ln in sorted(lines)
                          if is_function_line(ln)]
-            functions = ["EXPORTS\n"] + functions
+                functions = ["EXPORTS\n"] + functions
 
     with open(args[2], 'w') as f_out:
         f_out.writelines(functions)
diff --git a/buildtools/meson.build b/buildtools/meson.build
index d5f8291beb..f9d2fdf74b 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -9,14 +9,14 @@ list_dir_globs = find_program('list-dir-globs.py')
 check_symbols = find_program('check-symbols.sh')
 ldflags_ibverbs_static = find_program('options-ibverbs-static.sh')
 
-# set up map-to-def script using python, either built-in or external
+# set up map-to-win script using python, either built-in or external
 python3 = import('python').find_installation(required: false)
 if python3.found()
 	py3 = [python3]
 else
 	py3 = ['meson', 'runpython']
 endif
-map_to_def_cmd = py3 + files('map_to_def.py')
+map_to_win_cmd = py3 + files('map_to_win.py')
 sphinx_wrapper = py3 + files('call-sphinx-build.py')
 
 # stable ABI always starts with "DPDK_"
diff --git a/drivers/meson.build b/drivers/meson.build
index 646a7d5eb5..b25a368531 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -152,16 +152,22 @@ foreach class:dpdk_driver_classes
 			implib = 'lib' + lib_name + '.dll.a'
 
 			def_file = custom_target(lib_name + '_def',
-				command: [map_to_def_cmd, '@INPUT@', '@OUTPUT@'],
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
 				input: version_map,
 				output: '@0@_exports.def'.format(lib_name))
-			lk_deps = [version_map, def_file]
+
+			mingw_map = custom_target(name + '_mingw',
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
+				input: version_map,
+				output: '@0@_mingw.map'.format(name))
+
+			lk_deps = [version_map, def_file, mingw_map]
 			if is_windows
 				if is_ms_linker
 					lk_args = ['-Wl,/def:' + def_file.full_path(),
 						'-Wl,/implib:drivers\\' + implib]
 				else
-					lk_args = []
+					lk_args = ['-Wl,--version-script=' + mingw_map.full_path()]
 				endif
 			else
 				lk_args = ['-Wl,--version-script=' + version_map]
diff --git a/lib/meson.build b/lib/meson.build
index a8fd317a18..9f6c85a3e1 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -150,18 +150,27 @@ foreach l:libraries
 			implib = dir_name + '.dll.a'
 
 			def_file = custom_target(name + '_def',
-				command: [map_to_def_cmd, '@INPUT@', '@OUTPUT@'],
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
 				input: version_map,
 				output: 'rte_@0@_exports.def'.format(name))
 
+			mingw_map = custom_target(name + '_mingw',
+				command: [map_to_win_cmd, '@INPUT@', '@OUTPUT@'],
+				input: version_map,
+				output: 'rte_@0@_mingw.map'.format(name))
+
 			if is_ms_linker
 				lk_args = ['-Wl,/def:' + def_file.full_path(),
 					'-Wl,/implib:lib\\' + implib]
 			else
-				lk_args = ['-Wl,--version-script=' + version_map]
+				if is_windows
+					lk_args = ['-Wl,--version-script=' + mingw_map.full_path()]
+				else
+					lk_args = ['-Wl,--version-script=' + version_map]
+				endif
 			endif
 
-			lk_deps = [version_map, def_file]
+			lk_deps = [version_map, def_file, mingw_map]
 			if not is_windows
 				# on unix systems check the output of the
 				# check-symbols.sh script, using it as a
-- 
2.16.1.windows.4



* Re: [dpdk-dev] [EXTERNAL] 19.11.3 patches review and test
  2020-06-18 18:11  3% ` [dpdk-dev] [EXTERNAL] " Abhishek Marathe
@ 2020-06-18 18:17  0%   ` Luca Boccassi
  0 siblings, 0 replies; 200+ results
From: Luca Boccassi @ 2020-06-18 18:17 UTC (permalink / raw)
  To: Abhishek Marathe, stable
  Cc: dev, Akhil Goyal, Ali Alnubani, benjamin.walker,
	David Christensen, Hemant Agrawal, Ian Stokes, Jerin Jacob,
	John McNamara, Ju-Hyoung Lee, Kevin Traynor, Pei Zhang, pingx.yu,
	qian.q.xu, Raslan Darawsheh, Thomas Monjalon, yuan.peng,
	zhaoyan.chen

Thank you!

On Thu, 2020-06-18 at 18:11 +0000, Abhishek Marathe wrote:
> Hi Luca,
> 
> All test cases pass for DPDK LTS 19.11.3. The failed test cases below were double-checked and no issues were found.
> 
> Test Report:
> 
> DPDK https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz was validated on Azure for Canonical UbuntuServer 16.04-LTS latest, Canonical UbuntuServer 18.04-DAILY-LTS latest, RedHat RHEL 7-RAW latest, RedHat RHEL 7.5 latest, Openlogic CentOS 7.5 latest, SUSE SLES-15-sp1 gen1 latest.
> Tested with Mellanox and netvsc poll-mode drivers.
> The tests were executed using LISAv2 framework (https://github.com/LIS/LISAv2).
> 
> Test case description:
> 
> * VERIFY-DPDK-COMPLIANCE - verifies kernel is supported and that the build is successful
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST - verifies using testpmd that packets can be sent from a VM to another VM
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK - disables/enables Accelerated Networking for the NICs under test and makes sure DPDK works in both scenarios
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC - disables/enables Accelerated Networking for the NICs while generating traffic using testpmd
> 
> * PERF-DPDK-FWD-PPS-DS15 - verifies DPDK forwarding performance using testpmd on 2, 4, 8 cores, rx and io mode on size Standard_DS15_v2
> * PERF-DPDK-SINGLE-CORE-PPS-DS4 - verifies DPDK performance using testpmd on 1 core, rx and io mode on size Standard_DS4_v2
> * PERF-DPDK-SINGLE-CORE-PPS-DS15 - verifies DPDK performance using testpmd on 1 core, rx and io mode on size Standard_DS15_v2
> * PERF-DPDK-MULTICORE-PPS-DS15 - verifies DPDK performance using testpmd on 2, 4, 8 cores, rx and io mode on size Standard_DS15_v2
> * PERF-DPDK-MULTICORE-PPS-F32 - verifies DPDK performance using testpmd on 2, 4, 8, 16 cores, rx and io mode on size Standard_F32s_v2
> 
> * DPDK-RING-LATENCY - verifies DPDK CPU latency using https://github.com/shemminger/dpdk-ring-ping.git
> * VERIFY-DPDK-PRIMARY-SECONDARY-PROCESSES - verifies primary / secondary processes support for DPDK. Runs only on RHEL and Ubuntu distros with Linux kernel >= 4.20
> * VERIFY-DPDK-OVS - builds OVS with DPDK support and tests if the OVS DPDK ports can be created. Runs only on Ubuntu distro.
> 
>  DPDK job exited with status: UNSTABLE - https://linuxpipeline.westus2.cloudapp.azure.com/job/DPDK/job/pipeline-dpdk-validation/job/master/1027/.
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Canonical UbuntuServer 16.04-LTS latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: PASSED 
> * PERF-DPDK-MULTICORE-PPS-F32: FAILED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: PASSED 
> * PERF-DPDK-FWD-PPS-DS15: ABORTED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
> * PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: PASSED 
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Canonical UbuntuServer 18.04-DAILY-LTS latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: PASSED 
> * PERF-DPDK-MULTICORE-PPS-F32: PASSED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
> * PERF-DPDK-FWD-PPS-DS15: PASSED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
> * PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: PASSED 
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'RedHat RHEL 7-RAW latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: ABORTED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: SKIPPED 
> * PERF-DPDK-MULTICORE-PPS-F32: FAILED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: PASSED 
> * PERF-DPDK-FWD-PPS-DS15: FAILED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
> * PERF-DPDK-MULTICORE-PPS-DS15: FAILED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: PASSED 
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'RedHat RHEL 7.5 latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: SKIPPED 
> * PERF-DPDK-MULTICORE-PPS-F32: PASSED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
> * PERF-DPDK-FWD-PPS-DS15: ABORTED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: PASSED 
> * PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: ABORTED 
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'Openlogic CentOS 7.5 latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: PASSED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: SKIPPED 
> * PERF-DPDK-MULTICORE-PPS-F32: ABORTED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
> * PERF-DPDK-FWD-PPS-DS15: ABORTED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: PASSED 
> * PERF-DPDK-MULTICORE-PPS-DS15: PASSED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: PASSED 
> 
> Test results for DPDK 'https://git.dpdk.org/dpdk-stable/snapshot/dpdk-stable-19.11.3-rc1.tar.gz' and Azure image: 'SUSE SLES-15-sp1 gen1 latest':
>  
> * PERF-DPDK-SINGLE-CORE-PPS-DS4: FAILED 
> * VERIFY-DPDK-BUILD-AND-TESTPMD-TEST: PASSED 
> * VERIFY-SRIOV-FAILSAFE-FOR-DPDK: PASSED 
> * VERIFY-DPDK-OVS: SKIPPED 
> * PERF-DPDK-MULTICORE-PPS-F32: ABORTED 
> * VERIFY-DPDK-FAILSAFE-DURING-TRAFFIC: ABORTED 
> * PERF-DPDK-FWD-PPS-DS15: ABORTED 
> * PERF-DPDK-SINGLE-CORE-PPS-DS15: ABORTED 
> * PERF-DPDK-MULTICORE-PPS-DS15: ABORTED 
> * VERIFY-DPDK-COMPLIANCE: PASSED 
> * VERIFY-DPDK-RING-LATENCY: PASSED 
> 
> Regards,
> Abhishek
> 
> -----Original Message-----
> From: luca.boccassi@gmail.com <luca.boccassi@gmail.com> 
> Sent: Wednesday, June 3, 2020 12:44 PM
> To: stable@dpdk.org
> Cc: dev@dpdk.org; Abhishek Marathe <Abhishek.Marathe@microsoft.com>; Akhil Goyal <akhil.goyal@nxp.com>; Ali Alnubani <alialnu@mellanox.com>; benjamin.walker@intel.com; David Christensen <drc@linux.vnet.ibm.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Ian Stokes <ian.stokes@intel.com>; Jerin Jacob <jerinj@marvell.com>; John McNamara <john.mcnamara@intel.com>; Ju-Hyoung Lee <juhlee@microsoft.com>; Kevin Traynor <ktraynor@redhat.com>; Pei Zhang <pezhang@redhat.com>; pingx.yu@intel.com; qian.q.xu@intel.com; Raslan Darawsheh <rasland@mellanox.com>; Thomas Monjalon <thomas@monjalon.net>; yuan.peng@intel.com; zhaoyan.chen@intel.com
> Subject: [EXTERNAL] 19.11.3 patches review and test
> 
> Hi all,
> 
> Here is a list of patches targeted for stable release 19.11.3.
> 
> The planned date for the final release is the 17th of June.
> 
> Please help with testing and validation of your use cases and report any issues/results with reply-all to this mail. For the final release the fixes and reported validations will be added to the release notes.
> 
> A release candidate tarball can be found at:
> 
>     https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1
> 
> These patches are located at branch 19.11 of dpdk-stable repo:
>     https://dpdk.org/browse/dpdk-stable/
> 
> Thanks.
> 
> Luca Boccassi
> 
> ---
> <snip>


* Re: [dpdk-dev] [PATCH v4 1/3] lib/lpm: integrate RCU QSBR
  2020-06-08 18:46  3%     ` Honnappa Nagarahalli
@ 2020-06-18 17:36  0%       ` Medvedkin, Vladimir
  0 siblings, 0 replies; 200+ results
From: Medvedkin, Vladimir @ 2020-06-18 17:36 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Ruifeng Wang, Bruce Richardson,
	John McNamara, Marko Kovacevic, Ray Kinsella, Neil Horman
  Cc: dev, konstantin.ananyev, nd

Hi Honnappa,

On 08/06/2020 19:46, Honnappa Nagarahalli wrote:
> <snip>
>
>> Subject: [PATCH v4 1/3] lib/lpm: integrate RCU QSBR
>>
>> Currently, the tbl8 group is freed even though the readers might be using the
>> tbl8 group entries. The freed tbl8 group can be reallocated quickly. This
>> results in incorrect lookup results.
>>
>> RCU QSBR process is integrated for safe tbl8 group reclaim.
>> Refer to RCU documentation to understand various aspects of integrating
>> RCU library into other libraries.
>>
>> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
>> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>> ---
>>   doc/guides/prog_guide/lpm_lib.rst  |  32 ++++++++
>>   lib/librte_lpm/Makefile            |   2 +-
>>   lib/librte_lpm/meson.build         |   1 +
>>   lib/librte_lpm/rte_lpm.c           | 123 ++++++++++++++++++++++++++---
>>   lib/librte_lpm/rte_lpm.h           |  59 ++++++++++++++
>>   lib/librte_lpm/rte_lpm_version.map |   6 ++
>>   6 files changed, 211 insertions(+), 12 deletions(-)
>>
>> diff --git a/doc/guides/prog_guide/lpm_lib.rst
>> b/doc/guides/prog_guide/lpm_lib.rst
>> index 1609a57d0..7cc99044a 100644
>> --- a/doc/guides/prog_guide/lpm_lib.rst
>> +++ b/doc/guides/prog_guide/lpm_lib.rst
>> @@ -145,6 +145,38 @@ depending on whether we need to move to the next
>> table or not.
>>   Prefix expansion is one of the keys of this algorithm,  since it improves the
>> speed dramatically by adding redundancy.
>>
>> +Deletion
>> +~~~~~~~~
>> +
>> +When deleting a rule, a replacement rule is searched for. A replacement
>> +rule is an existing rule that has the longest prefix match with the rule
>> +to be deleted, but has a smaller depth.
>> +
>> +If a replacement rule is found, the target tbl24 and tbl8 entries are
>> +updated to have the same depth and next hop value as the replacement rule.
>> +
>> +If no replacement rule can be found, the target tbl24 and tbl8 entries are
>> +cleared.
>> +
>> +Prefix expansion is performed if the rule's depth is not exactly 24 bits
>> +or 32 bits.
>> +
>> +After deleting a rule, the group of tbl8s that belongs to the same tbl24
>> +entry is freed in the following cases:
>> +
>> +*   All tbl8s in the group are empty.
>> +
>> +*   All tbl8s in the group have the same values, with depth no greater
>> +    than 24.
>> +
>> +Freeing of tbl8s behaves differently:
>> +
>> +*   If RCU is not used, tbl8s are cleared and reclaimed immediately.
>> +
>> +*   If RCU is used, tbl8s are reclaimed only when readers are in a
>> +    quiescent state.
>> +
>> +When the LPM is not using RCU, a tbl8 group can be freed immediately even
>> +though readers might still be using the tbl8 group entries. This might
>> +result in incorrect lookups.
>> +
>> +The RCU QSBR process is integrated for safe tbl8 group reclamation.
>> +The application has certain responsibilities while using this feature.
>> +Please refer to the resource reclamation framework of the :ref:`RCU
>> +library <RCU_Library>` for more details.
>> +
>>   Lookup
>>   ~~~~~~
>>
>> diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
>> index d682785b6..6f06c5c03 100644
>> --- a/lib/librte_lpm/Makefile
>> +++ b/lib/librte_lpm/Makefile
>> @@ -8,7 +8,7 @@ LIB = librte_lpm.a
>>
>>   CFLAGS += -O3
>>   CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
>> -LDLIBS += -lrte_eal -lrte_hash
>> +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
>>
>>   EXPORT_MAP := rte_lpm_version.map
>>
>> diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
>> index 021ac6d8d..6cfc083c5 100644
>> --- a/lib/librte_lpm/meson.build
>> +++ b/lib/librte_lpm/meson.build
>> @@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')
>>  # without worrying about which architecture we actually need
>>  headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
>>  deps += ['hash']
>> +deps += ['rcu']
>> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
>> index 38ab512a4..30f541179 100644
>> --- a/lib/librte_lpm/rte_lpm.c
>> +++ b/lib/librte_lpm/rte_lpm.c
>> @@ -1,5 +1,6 @@
>>   /* SPDX-License-Identifier: BSD-3-Clause
>>    * Copyright(c) 2010-2014 Intel Corporation
>> + * Copyright(c) 2020 Arm Limited
>>    */
>>
>>   #include <string.h>
>> @@ -246,12 +247,85 @@ rte_lpm_free(struct rte_lpm *lpm)
>>
>>   	rte_mcfg_tailq_write_unlock();
>>
>> +	if (lpm->dq)
>> +		rte_rcu_qsbr_dq_delete(lpm->dq);
>>   	rte_free(lpm->tbl8);
>>   	rte_free(lpm->rules_tbl);
>>   	rte_free(lpm);
>>   	rte_free(te);
>>   }
>>
>> +static void
>> +__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)
>> +{
>> +	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
>> +	uint32_t tbl8_group_index = *(uint32_t *)data;
>> +	struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
>> +
>> +	RTE_SET_USED(n);
>> +	/* Set tbl8 group invalid */
>> +	__atomic_store(&tbl8[tbl8_group_index], &zero_tbl8_entry,
>> +		__ATOMIC_RELAXED);
>> +}
>> +
>> +/* Associate QSBR variable with an LPM object.
>> + */
>> +int
>> +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
>> +	struct rte_rcu_qsbr_dq **dq)
> I prefer not to return the defer queue to the user here. I see three different ways RCU can be integrated into the libraries:
>
> 1) The sync mode in which the defer queue is not created. The rte_rcu_qsbr_synchronize API is called after delete. The resource is freed after rte_rcu_qsbr_synchronize returns and the control is given back to the user.
>
> 2) The mode where the defer queue is created. There is a lot of flexibility provided now as the defer queue size, reclaim threshold and how many resources to reclaim are all configurable. IMO, this solves most of the use cases and helps the application integrate lock-less algorithms with minimal effort.
>
> 3) This is where the application has its own method of reclamation that does not fall under 1) or 2). To address this use case, I think we should make changes to the LPM library. Today, in LPM, delete and free are combined into a single API. We can split this single API into two separate APIs - delete and free (a similar thing was done in the rte_hash library) - without affecting the ABI. This should provide all the flexibility required for the application to implement any kind of reclamation algorithm it wants. Returning the defer queue to the user in the above API does not solve this use case.


Agreed, I don't see any case where the user will need the defer queue.
From my perspective, reclamation of tbl8 is entirely internal and the
user should not have to worry about it. So, in the case of LPM, I don't
see any real use case where we would need to enable the third way of
integrating RCU.
P.S. In the rte_fib case we don't even have the opportunity to reclaim
outside the library, because the rte_fib struct layout is hidden from
the user. Moreover, there may not be a tbl8 at all, since it supports
different algorithms.
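
For illustration, a minimal sketch (not part of the patch) of how an
application could attach a QSBR variable with the v4 API quoted above;
`lpm`, `v` and `handle_error()` are placeholders, and error handling is
abbreviated:

	struct rte_lpm_rcu_config cfg = {
		.v = v,				/* already-initialized QSBR variable */
		.mode = RTE_LPM_QSBR_MODE_DQ,	/* defer-queue reclamation */
		.dq_size = 0,		/* 0: defaults to lpm->number_tbl8s */
		.reclaim_thd = 0,	/* 0: defaults to RTE_LPM_RCU_DQ_RECLAIM_THD */
		.reclaim_max = 0,	/* 0: defaults to RTE_LPM_RCU_DQ_RECLAIM_MAX */
	};

	/* Pass NULL for the dq handle; as discussed above, the
	 * application has no use for it.
	 */
	if (rte_lpm_rcu_qsbr_add(lpm, &cfg, NULL) != 0)
		handle_error();	/* rte_errno: EINVAL, EEXIST or ENOMEM */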


>> +{
>> +	char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
>> +	struct rte_rcu_qsbr_dq_parameters params = {0};
>> +
>> +	if ((lpm == NULL) || (cfg == NULL)) {
>> +		rte_errno = EINVAL;
>> +		return 1;
>> +	}
>> +
>> +	if (lpm->v) {
>> +		rte_errno = EEXIST;
>> +		return 1;
>> +	}
>> +
>> +	if (cfg->mode == RTE_LPM_QSBR_MODE_SYNC) {
>> +		/* No other things to do. */
>> +	} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
>> +		/* Init QSBR defer queue. */
>> +		snprintf(rcu_dq_name, sizeof(rcu_dq_name),
>> +				"LPM_RCU_%s", lpm->name);
>> +		params.name = rcu_dq_name;
>> +		params.size = cfg->dq_size;
>> +		if (params.size == 0)
>> +			params.size = lpm->number_tbl8s;
>> +		params.trigger_reclaim_limit = cfg->reclaim_thd;
>> +		if (params.trigger_reclaim_limit == 0)
>> +			params.trigger_reclaim_limit =
>> +					RTE_LPM_RCU_DQ_RECLAIM_THD;
>> +		params.max_reclaim_size = cfg->reclaim_max;
>> +		if (params.max_reclaim_size == 0)
>> +			params.max_reclaim_size =
>> +				RTE_LPM_RCU_DQ_RECLAIM_MAX;
>> +		params.esize = sizeof(uint32_t);	/* tbl8 group index */
>> +		params.free_fn = __lpm_rcu_qsbr_free_resource;
>> +		params.p = lpm->tbl8;
>> +		params.v = cfg->v;
>> +		lpm->dq = rte_rcu_qsbr_dq_create(&params);
>> +		if (lpm->dq == NULL) {
>> +			RTE_LOG(ERR, LPM,
>> +				"LPM QS defer queue creation failed\n");
>> +			return 1;
>> +		}
>> +		if (dq)
>> +			*dq = lpm->dq;
>> +	} else {
>> +		rte_errno = EINVAL;
>> +		return 1;
>> +	}
>> +	lpm->rcu_mode = cfg->mode;
>> +	lpm->v = cfg->v;
>> +
>> +	return 0;
>> +}
>> +
>>   /*
>>    * Adds a rule to the rule table.
>>    *
>> @@ -394,14 +468,15 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
>>    * Find, clean and allocate a tbl8.
>>    */
>>   static int32_t
>> -tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
>> +_tbl8_alloc(struct rte_lpm *lpm)
>>   {
>>   	uint32_t group_idx; /* tbl8 group index. */
>>   	struct rte_lpm_tbl_entry *tbl8_entry;
>>
>>   	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
>> -	for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
>> -		tbl8_entry = &tbl8[group_idx *
>> RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
>> +	for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
>> +		tbl8_entry = &lpm->tbl8[group_idx *
>> +				RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
>>   		/* If a free tbl8 group is found clean it and set as VALID. */
>>   		if (!tbl8_entry->valid_group) {
>>   			struct rte_lpm_tbl_entry new_tbl8_entry = {
>> @@ -427,14 +502,40 @@ tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
>>   	return -ENOSPC;
>>   }
>>
>> +static int32_t
>> +tbl8_alloc(struct rte_lpm *lpm)
>> +{
>> +	int32_t group_idx; /* tbl8 group index. */
>> +
>> +	group_idx = _tbl8_alloc(lpm);
>> +	if ((group_idx < 0) && (lpm->dq != NULL)) {
>> +		/* If there are no tbl8 groups try to reclaim one. */
>> +		if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)
>> +			group_idx = _tbl8_alloc(lpm);
>> +	}
>> +
>> +	return group_idx;
>> +}
>> +
>>   static void
>> -tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
>> +tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>>   {
>> -	/* Set tbl8 group invalid*/
>>   	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
>>
>> -	__atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
>> -			__ATOMIC_RELAXED);
>> +	if (!lpm->v) {
>> +		/* Set tbl8 group invalid*/
>> +		__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
>> +				__ATOMIC_RELAXED);
>> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
>> +		/* Wait for quiescent state change. */
>> +		rte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);
>> +		/* Set tbl8 group invalid*/
>> +		__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,
>> +				__ATOMIC_RELAXED);
>> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
>> +		/* Push into QSBR defer queue. */
>> +		rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);
>> +	}
>>   }
>>
>>   static __rte_noinline int32_t
>> @@ -523,7 +624,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>>
>>   	if (!lpm->tbl24[tbl24_index].valid) {
>>   		/* Search for a free tbl8 group. */
>> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
>> +		tbl8_group_index = tbl8_alloc(lpm);
>>
>>   		/* Check tbl8 allocation was successful. */
>>   		if (tbl8_group_index < 0) {
>> @@ -569,7 +670,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>>   	} /* If valid entry but not extended calculate the index into Table8. */
>>   	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
>>   		/* Search for free tbl8 group. */
>> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
>> +		tbl8_group_index = tbl8_alloc(lpm);
>>
>>   		if (tbl8_group_index < 0) {
>>   			return tbl8_group_index;
>> @@ -977,7 +1078,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>>   		 */
>>   		lpm->tbl24[tbl24_index].valid = 0;
>>   		__atomic_thread_fence(__ATOMIC_RELEASE);
>> -		tbl8_free(lpm->tbl8, tbl8_group_start);
>> +		tbl8_free(lpm, tbl8_group_start);
>>   	} else if (tbl8_recycle_index > -1) {
>>   		/* Update tbl24 entry. */
>>   		struct rte_lpm_tbl_entry new_tbl24_entry = {
>> @@ -993,7 +1094,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>>   		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
>>   				__ATOMIC_RELAXED);
>>   		__atomic_thread_fence(__ATOMIC_RELEASE);
>> -		tbl8_free(lpm->tbl8, tbl8_group_start);
>> +		tbl8_free(lpm, tbl8_group_start);
>>   	}
>>   #undef group_idx
>>   	return 0;
>> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
>> index b9d49ac87..8c054509a 100644
>> --- a/lib/librte_lpm/rte_lpm.h
>> +++ b/lib/librte_lpm/rte_lpm.h
>> @@ -1,5 +1,6 @@
>>   /* SPDX-License-Identifier: BSD-3-Clause
>>    * Copyright(c) 2010-2014 Intel Corporation
>> + * Copyright(c) 2020 Arm Limited
>>    */
>>
>>   #ifndef _RTE_LPM_H_
>> @@ -20,6 +21,7 @@
>>   #include <rte_memory.h>
>>   #include <rte_common.h>
>>   #include <rte_vect.h>
>> +#include <rte_rcu_qsbr.h>
>>
>>   #ifdef __cplusplus
>>   extern "C" {
>> @@ -62,6 +64,17 @@ extern "C" {
>>   /** Bitmask used to indicate successful lookup */
>>   #define RTE_LPM_LOOKUP_SUCCESS          0x01000000
>>
>> +/** @internal Default threshold to trigger RCU defer queue reclamation. */
>> +#define RTE_LPM_RCU_DQ_RECLAIM_THD	32
>> +
>> +/** @internal Default RCU defer queue entries to reclaim in one go. */
>> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX	16
>> +
>> +/* Create defer queue for reclaim. */
>> +#define RTE_LPM_QSBR_MODE_DQ		0
>> +/* Use blocking mode reclaim. No defer queue created. */
>> +#define RTE_LPM_QSBR_MODE_SYNC		0x01
>> +
>>   #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>>   /** @internal Tbl24 entry structure. */  __extension__ @@ -130,6 +143,28
>>   /** @internal Tbl24 entry structure. */
>>   __extension__
>> @@ -130,6 +143,28 @@ struct rte_lpm {
>>   	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>>   	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
>> +
>> +	/* RCU config. */
>> +	struct rte_rcu_qsbr *v;		/* RCU QSBR variable. */
>> +	uint32_t rcu_mode;		/* Blocking, defer queue. */
>> +	struct rte_rcu_qsbr_dq *dq;	/* RCU QSBR defer queue. */
>> +};
>> +
>> +/** LPM RCU QSBR configuration structure. */
>> +struct rte_lpm_rcu_config {
>> +	struct rte_rcu_qsbr *v;	/* RCU QSBR variable. */
>> +	/* Mode of RCU QSBR. RTE_LPM_QSBR_MODE_xxx
>> +	 * '0' for default: create defer queue for reclaim.
>> +	 */
>> +	uint32_t mode;
>> +	/* RCU defer queue size. default: lpm->number_tbl8s. */
>> +	uint32_t dq_size;
>> +	uint32_t reclaim_thd;	/* Threshold to trigger auto reclaim.
>> +				 * default: RTE_LPM_RCU_DQ_RECLAIM_THD.
>> +				 */
>> +	uint32_t reclaim_max;	/* Max entries to reclaim in one go.
>> +				 * default: RTE_LPM_RCU_DQ_RECLAIM_MAX.
>> +				 */
>>   };
>>
>>   /**
>> @@ -179,6 +214,30 @@ rte_lpm_find_existing(const char *name);
>>   void
>>   rte_lpm_free(struct rte_lpm *lpm);
>>
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice
>> + *
>> + * Associate RCU QSBR variable with an LPM object.
>> + *
>> + * @param lpm
>> + *   the lpm object to add RCU QSBR
>> + * @param cfg
>> + *   RCU QSBR configuration
>> + * @param dq
>> + *   handle of the created RCU QSBR defer queue
>> + * @return
>> + *   On success - 0
>> + *   On error - 1 with error code set in rte_errno.
>> + *   Possible rte_errno codes are:
>> + *   - EINVAL - invalid pointer
>> + *   - EEXIST - already added QSBR
>> + *   - ENOMEM - memory allocation failure
>> + */
>> +__rte_experimental
>> +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
>> +	struct rte_rcu_qsbr_dq **dq);
>> +
>>   /**
>>    * Add a rule to the LPM table.
>>    *
>> diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
>> index 500f58b80..bfccd7eac 100644
>> --- a/lib/librte_lpm/rte_lpm_version.map
>> +++ b/lib/librte_lpm/rte_lpm_version.map
>> @@ -21,3 +21,9 @@ DPDK_20.0 {
>>
>>   	local: *;
>>   };
>> +
>> +EXPERIMENTAL {
>> +	global:
>> +
>> +	rte_lpm_rcu_qsbr_add;
>> +};
>> --
>> 2.17.1

-- 
Regards,
Vladimir
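
As context for the quiescent-state requirement discussed in this
thread, a reader (lookup) core would report quiescent states with the
existing rte_rcu_qsbr API, roughly as below; `v`, `lpm`, `lcore_id`,
`ip` and `quit` are placeholders:

	rte_rcu_qsbr_thread_register(v, lcore_id);
	rte_rcu_qsbr_thread_online(v, lcore_id);
	while (!quit) {
		uint32_t next_hop;

		(void)rte_lpm_lookup(lpm, ip, &next_hop);
		/* Report that this reader no longer holds references;
		 * only then can freed tbl8 groups be reclaimed.
		 */
		rte_rcu_qsbr_quiescent(v, lcore_id);
	}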


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration
  @ 2020-06-18 16:28  4% ` Matan Azrad
  2020-06-19  6:44  0%   ` Maxime Coquelin
  0 siblings, 1 reply; 200+ results
From: Matan Azrad @ 2020-06-18 16:28 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

In preparation for per-queue operations in the vDPA device, the
following experimental API needs to change:

The API ``rte_vhost_host_notifier_ctrl`` was changed to be per queue
instead of per device.

A `qid` parameter was added to the API arguments list.

Setting the parameter to the value VHOST_QUEUE_ALL configures the
host notifier for all the device queues, as was done before this patch.

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/rel_notes/release_20_08.rst |  2 ++
 drivers/vdpa/ifc/ifcvf_vdpa.c          |  6 +++---
 drivers/vdpa/mlx5/mlx5_vdpa.c          |  5 +++--
 lib/librte_vhost/rte_vdpa.h            |  8 ++++++--
 lib/librte_vhost/rte_vhost.h           |  2 ++
 lib/librte_vhost/vhost.h               |  3 ---
 lib/librte_vhost/vhost_user.c          | 18 ++++++++++++++----
 7 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index ba16d3b..9732959 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -111,6 +111,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* vhost: The API ``rte_vhost_host_notifier_ctrl`` was changed to be per
+  queue and not per device; a ``qid`` parameter was added to the arguments list.
 
 ABI Changes
 -----------
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index ec97178..336837a 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -839,7 +839,7 @@ struct internal_list {
 	vdpa_ifcvf_stop(internal);
 	vdpa_disable_vfio_intr(internal);
 
-	ret = rte_vhost_host_notifier_ctrl(vid, false);
+	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, false);
 	if (ret && ret != -ENOTSUP)
 		goto error;
 
@@ -858,7 +858,7 @@ struct internal_list {
 	if (ret)
 		goto stop_vf;
 
-	rte_vhost_host_notifier_ctrl(vid, true);
+	rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
 
 	internal->sw_fallback_running = true;
 
@@ -893,7 +893,7 @@ struct internal_list {
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_datapath(internal);
 
-	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
+	if (rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true) != 0)
 		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
 
 	return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 9e758b6..8ea1300 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -147,7 +147,8 @@
 	int ret;
 
 	if (priv->direct_notifier) {
-		ret = rte_vhost_host_notifier_ctrl(priv->vid, false);
+		ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL,
+						   false);
 		if (ret != 0) {
 			DRV_LOG(INFO, "Direct HW notifier FD cannot be "
 				"destroyed for device %d: %d.", priv->vid, ret);
@@ -155,7 +156,7 @@
 		}
 		priv->direct_notifier = 0;
 	}
-	ret = rte_vhost_host_notifier_ctrl(priv->vid, true);
+	ret = rte_vhost_host_notifier_ctrl(priv->vid, VHOST_QUEUE_ALL, true);
 	if (ret != 0)
 		DRV_LOG(INFO, "Direct HW notifier FD cannot be configured for"
 			" device %d: %d.", priv->vid, ret);
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index ecb3d91..2db536c 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -202,22 +202,26 @@ struct rte_vdpa_device *
 int
 rte_vdpa_get_device_num(void);
 
+#define VHOST_QUEUE_ALL VHOST_MAX_VRING
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
  *
- * Enable/Disable host notifier mapping for a vdpa port.
+ * Enable/Disable host notifier mapping for a vdpa queue.
  *
  * @param vid
  *  vhost device id
+ * @param qid
+ *  vhost queue id, VHOST_QUEUE_ALL to configure all the device queues
  * @param enable
  *  true for host notifier map, false for host notifier unmap
  * @return
  *  0 on success, -1 on failure
  */
 __rte_experimental
 int
-rte_vhost_host_notifier_ctrl(int vid, bool enable);
+rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable);
 
 /**
  * @warning
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 329ed8a..14bf7c2 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -107,6 +107,8 @@
 #define VHOST_USER_F_PROTOCOL_FEATURES	30
 #endif
 
+#define VHOST_MAX_VRING			0x100
+#define VHOST_MAX_QUEUE_PAIRS		0x80
 
 /**
  * Information relating to memory regions including offsets to
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 17f1e9a..28b991d 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -202,9 +202,6 @@ struct vhost_virtqueue {
 	TAILQ_HEAD(, vhost_iotlb_entry) iotlb_pending_list;
 } __rte_cache_aligned;
 
-#define VHOST_MAX_VRING			0x100
-#define VHOST_MAX_QUEUE_PAIRS		0x80
-
 /* Declare IOMMU related bits for older kernels */
 #ifndef VIRTIO_F_IOMMU_PLATFORM
 
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 84bebad..cddfa4b 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -2960,13 +2960,13 @@ static int vhost_user_slave_set_vring_host_notifier(struct virtio_net *dev,
 	return process_slave_message_reply(dev, &msg);
 }
 
-int rte_vhost_host_notifier_ctrl(int vid, bool enable)
+int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable)
 {
 	struct virtio_net *dev;
 	struct rte_vdpa_device *vdpa_dev;
 	int vfio_device_fd, did, ret = 0;
 	uint64_t offset, size;
-	unsigned int i;
+	unsigned int i, q_start, q_last;
 
 	dev = get_device(vid);
 	if (!dev)
@@ -2990,6 +2990,16 @@ int rte_vhost_host_notifier_ctrl(int vid, bool enable)
 	if (!vdpa_dev)
 		return -ENODEV;
 
+	if (qid == VHOST_QUEUE_ALL) {
+		q_start = 0;
+		q_last = dev->nr_vring - 1;
+	} else {
+		if (qid >= dev->nr_vring)
+			return -EINVAL;
+		q_start = qid;
+		q_last = qid;
+	}
+
 	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_vfio_device_fd, -ENOTSUP);
 	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_notify_area, -ENOTSUP);
 
@@ -2998,7 +3008,7 @@ int rte_vhost_host_notifier_ctrl(int vid, bool enable)
 		return -ENOTSUP;
 
 	if (enable) {
-		for (i = 0; i < dev->nr_vring; i++) {
+		for (i = q_start; i <= q_last; i++) {
 			if (vdpa_dev->ops->get_notify_area(vid, i, &offset,
 					&size) < 0) {
 				ret = -ENOTSUP;
@@ -3013,7 +3023,7 @@ int rte_vhost_host_notifier_ctrl(int vid, bool enable)
 		}
 	} else {
 disable:
-		for (i = 0; i < dev->nr_vring; i++) {
+		for (i = q_start; i <= q_last; i++) {
 			vhost_user_slave_set_vring_host_notifier(dev, i, -1,
 					0, 0);
 		}
-- 
1.8.3.1
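
For reference, a minimal sketch (not part of the patch) of how a vDPA
driver could use the per-queue form of the API introduced above; `vid`,
`qid` and `ret` are assumed to come from the driver's context:

	/* Map host notifiers for every queue, as the drivers above do. */
	ret = rte_vhost_host_notifier_ctrl(vid, VHOST_QUEUE_ALL, true);
	if (ret != 0 && ret != -ENOTSUP)
		return ret;

	/* The new per-queue granularity: unmap a single queue's
	 * notifier without touching the other queues.
	 */
	ret = rte_vhost_host_notifier_ctrl(vid, qid, false);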


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
  2020-06-13 10:43  0%     ` Mattias Rönnblom
@ 2020-06-18 15:51  0%       ` McDaniel, Timothy
  0 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-06-18 15:51 UTC (permalink / raw)
  To: Mattias Rönnblom, Jerin Jacob
  Cc: Jerin Jacob, dpdk-dev, Eads, Gage, Van Haaren, Harry,
	Ray Kinsella, Neil Horman

Hello Mattias,

Thank you for your review comments. I will incorporate the changes you have suggested in V2 of the patchset, which I am currently working on.

Thanks,
Tim

-----Original Message-----
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com> 
Sent: Saturday, June 13, 2020 5:44 AM
To: Jerin Jacob <jerinjacobk@gmail.com>; McDaniel, Timothy <timothy.mcdaniel@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>; dpdk-dev <dev@dpdk.org>; Eads, Gage <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites

On 2020-06-13 05:59, Jerin Jacob wrote:
> On Sat, Jun 13, 2020 at 2:56 AM McDaniel, Timothy
> <timothy.mcdaniel@intel.com> wrote:
>> The DLB hardware does not conform exactly to the eventdev interface.
>> 1) It has a limit on the number of queues that may be linked to a port.
>> 2) Some ports are further restricted to a maximum of 1 linked queue.
>> 3) It does not (currently) have the ability to carry the flow_id as part
>> of the event (QE) payload.
>>
>> Due to the above, we would like to propose the following enhancements.
>
> Thanks, McDaniel. Good to see a new HW PMD for eventdev.
>
> + Ray and Neil.
>
> Hello McDaniel,
> I assume this patchset is for v20.08. It is adding new elements in
> public structures. Have you checked for ABI breakage?
>
> I will review the rest of the series if there is NO ABI breakage, as we
> cannot have ABI breakage in the 20.08 version.
>
>
> ABI validator
> ~~~~~~~~~~~~~~
> 1. meson build
> 2. Compile and install known stable ABI libs, i.e. ToT.
>           DESTDIR=$PWD/install-meson-stable ninja -C build install
>       Compile and install with patches to be verified.
>           DESTDIR=$PWD/install-meson-new ninja -C build install
> 3. Gen ABI for both
>          devtools/gen-abi.sh install-meson-stable
>          devtools/gen-abi.sh install-meson-new
> 4. Run abi checker
>          devtools/check-abi.sh install-meson-stable install-meson-new
>
>
> DPDK_ABI_REF_DIR=/build/dpdk/reference/ DPDK_ABI_REF_VERSION=v20.02 \
>       ./devtools/test-meson-builds.sh
> DPDK_ABI_REF_DIR - needs an absolute path, for reasons that are still
> unclear to me.
> DPDK_ABI_REF_VERSION - you need to use the last DPDK release.
>
>> 1) Add new fields to the rte_event_dev_info struct. These fields allow
>> the device to advertise its capabilities so that applications can take
>> the appropriate actions based on those capabilities.
>>
>>      struct rte_event_dev_info {
>>          uint32_t max_event_port_links;
>>          /**< Maximum number of queues that can be linked to a single event
>>           * port by this device.
>>           */
>>
>>          uint8_t max_single_link_event_port_queue_pairs;
>>          /**< Maximum number of event ports and queues that are optimized for
>>           * (and only capable of) single-link configurations supported by this
>>           * device. These ports and queues are not accounted for in
>>           * max_event_ports or max_event_queues.
>>           */
>>      }
>>
>> 2) Add a new field to the rte_event_dev_config struct. This field allows the
>> application to specify how many of its ports are limited to a single link,
>> or will be used in single link mode.
>>
>>      /** Event device configuration structure */
>>      struct rte_event_dev_config {
>>          uint8_t nb_single_link_event_port_queues;
>>          /**< Number of event ports and queues that will be singly-linked to
>>           * each other. These are a subset of the overall event ports and
>>           * queues; this value cannot exceed *nb_event_ports* or
>>           * *nb_event_queues*. If the device has ports and queues that are
>>           * optimized for single-link usage, this field is a hint for how many
>>           * to allocate; otherwise, regular event ports and queues can be used.
>>           */
>>      }
>>
>> 3) Replace the dedicated implicit_release_disabled field with a bit field
>> of explicit port capabilities. The implicit_release_disable functionality
>> is assigned to one bit, and a port-is-single-link-only attribute is
>> assigned to another, with the remaining bits available for future assignment.
>>
>>          /* Event port configuration bitmap flags */
>>          #define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>>          /**< Configure the port not to release outstanding events in
>>           * rte_event_dev_dequeue_burst(). If set, all events received through
>>           * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>>           * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>>           * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>>           */
>>          #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>>
>>          /**< This event port links only to a single event queue.
>>           *
>>           *  @see rte_event_port_setup(), rte_event_port_link()
>>           */
>>
>>          #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>>          /**
>>           * The implicit release disable attribute of the port
>>           */
>>
>>          struct rte_event_port_conf {
>>                  uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>>          }
>>
>> 4) Add UMWAIT/UMONITOR bit to rte_cpuflags
>>
>> 5) Added a new API that is useful for probing PCI devices.
>>
>>          /**
>>           * @internal
>>           * Wrapper for use by pci drivers as a .probe function to attach to a event
>>           * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>>           * the name.
>>           */
>>          static inline int
>>          rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>>                                      struct rte_pci_device *pci_dev,
>>                                      size_t private_data_size,
>>                                      eventdev_pmd_pci_callback_t devinit,
>>                                      const char *name);
>>
>> Change-Id: I4cf00015296e2b3feca9886895765554730594be
>> Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
>> ---
>>   app/test-eventdev/evt_common.h                     |  1 +
>>   app/test-eventdev/test_order_atq.c                 |  4 ++
>>   app/test-eventdev/test_order_common.c              |  6 ++-
>>   app/test-eventdev/test_order_queue.c               |  4 ++
>>   app/test-eventdev/test_perf_atq.c                  |  1 +
>>   app/test-eventdev/test_perf_queue.c                |  1 +
>>   app/test-eventdev/test_pipeline_atq.c              |  1 +
>>   app/test-eventdev/test_pipeline_queue.c            |  1 +
>>   app/test/test_eventdev.c                           |  4 +-
>>   drivers/event/dpaa2/dpaa2_eventdev.c               |  2 +-
>>   drivers/event/octeontx/ssovf_evdev.c               |  2 +-
>>   drivers/event/skeleton/skeleton_eventdev.c         |  2 +-
>>   drivers/event/sw/sw_evdev.c                        |  5 +-
>>   drivers/event/sw/sw_evdev_selftest.c               |  9 ++--
>>   .../eventdev_pipeline/pipeline_worker_generic.c    |  8 ++-
>>   examples/eventdev_pipeline/pipeline_worker_tx.c    |  3 ++
>>   examples/l2fwd-event/l2fwd_event_generic.c         |  5 +-
>>   examples/l2fwd-event/l2fwd_event_internal_port.c   |  5 +-
>>   examples/l3fwd/l3fwd_event_generic.c               |  5 +-
>>   examples/l3fwd/l3fwd_event_internal_port.c         |  5 +-
>>   lib/librte_eal/x86/include/rte_cpuflags.h          |  1 +
>>   lib/librte_eal/x86/rte_cpuflags.c                  |  1 +
>>   lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
>>   lib/librte_eventdev/rte_eventdev.c                 | 62 +++++++++++++++++++---
>>   lib/librte_eventdev/rte_eventdev.h                 | 51 +++++++++++++++---
>>   lib/librte_eventdev/rte_eventdev_pmd_pci.h         | 54 +++++++++++++++++++
>>   26 files changed, 208 insertions(+), 37 deletions(-)
>>
>> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
>> index f9d7378d3..120c27b33 100644
>> --- a/app/test-eventdev/evt_common.h
>> +++ b/app/test-eventdev/evt_common.h
>> @@ -169,6 +169,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
>>                          .dequeue_timeout_ns = opt->deq_tmo_nsec,
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 0,
>>                          .nb_events_limit  = info.max_num_events,
>>                          .nb_event_queue_flows = opt->nb_flows,
>>                          .nb_event_port_dequeue_depth =
>> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
>> index 3366cfce9..8246b96f0 100644
>> --- a/app/test-eventdev/test_order_atq.c
>> +++ b/app/test-eventdev/test_order_atq.c
>> @@ -34,6 +34,8 @@ order_atq_worker(void *arg)
>>                          continue;
>>                  }
>>
>> +               ev.flow_id = ev.mbuf->udata64;
>> +
>>                  if (ev.sub_event_type == 0) { /* stage 0 from producer */
>>                          order_atq_process_stage_0(&ev);
>>                          while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
>> @@ -68,6 +70,8 @@ order_atq_worker_burst(void *arg)
>>                  }
>>
>>                  for (i = 0; i < nb_rx; i++) {
>> +                       ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>                          if (ev[i].sub_event_type == 0) { /*stage 0 */
>>                                  order_atq_process_stage_0(&ev[i]);
>>                          } else if (ev[i].sub_event_type == 1) { /* stage 1 */
>> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
>> index 4190f9ade..c6fcd0509 100644
>> --- a/app/test-eventdev/test_order_common.c
>> +++ b/app/test-eventdev/test_order_common.c
>> @@ -49,6 +49,7 @@ order_producer(void *arg)
>>                  const uint32_t flow = (uintptr_t)m % nb_flows;
>>                  /* Maintain seq number per flow */
>>                  m->seqn = producer_flow_seq[flow]++;
>> +               m->udata64 = flow;
>>
>>                  ev.flow_id = flow;
>>                  ev.mbuf = m;
>> @@ -318,10 +319,11 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>>                  opt->wkr_deq_dep = dev_info.max_event_port_dequeue_depth;
>>
>>          /* port configuration */
>> -       const struct rte_event_port_conf p_conf = {
>> +       struct rte_event_port_conf p_conf = {
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          /* setup one port per worker, linking to all queues */
>> @@ -351,6 +353,8 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>>          p->queue_id = 0;
>>          p->t = t;
>>
>> +       p_conf.new_event_threshold /= 2;
>> +
>>          ret = rte_event_port_setup(opt->dev_id, port, &p_conf);
>>          if (ret) {
>>                  evt_err("failed to setup producer port %d", port);
>> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
>> index 495efd92f..a0a2187a2 100644
>> --- a/app/test-eventdev/test_order_queue.c
>> +++ b/app/test-eventdev/test_order_queue.c
>> @@ -34,6 +34,8 @@ order_queue_worker(void *arg)
>>                          continue;
>>                  }
>>
>> +               ev.flow_id = ev.mbuf->udata64;
>> +
>>                  if (ev.queue_id == 0) { /* from ordered queue */
>>                          order_queue_process_stage_0(&ev);
>>                          while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
>> @@ -68,6 +70,8 @@ order_queue_worker_burst(void *arg)
>>                  }
>>
>>                  for (i = 0; i < nb_rx; i++) {
>> +                       ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>                          if (ev[i].queue_id == 0) { /* from ordered queue */
>>                                  order_queue_process_stage_0(&ev[i]);
>>                          } else if (ev[i].queue_id == 1) {/* from atomic queue */
>> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
>> index 8fd51004e..10846f202 100644
>> --- a/app/test-eventdev/test_perf_atq.c
>> +++ b/app/test-eventdev/test_perf_atq.c
>> @@ -204,6 +204,7 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues,
>> diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
>> index f4ea3a795..a0119da60 100644
>> --- a/app/test-eventdev/test_perf_queue.c
>> +++ b/app/test-eventdev/test_perf_queue.c
>> @@ -219,6 +219,7 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          ret = perf_event_dev_port_setup(test, opt, nb_stages /* stride */,
>> diff --git a/app/test-eventdev/test_pipeline_atq.c b/app/test-eventdev/test_pipeline_atq.c
>> index 8e8686c14..a95ec0aa5 100644
>> --- a/app/test-eventdev/test_pipeline_atq.c
>> +++ b/app/test-eventdev/test_pipeline_atq.c
>> @@ -356,6 +356,7 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                  .dequeue_depth = opt->wkr_deq_dep,
>>                  .enqueue_depth = info.max_event_port_dequeue_depth,
>>                  .new_event_threshold = info.max_num_events,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          if (!t->internal_port)
>> diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
>> index 7bebac34f..30817dc78 100644
>> --- a/app/test-eventdev/test_pipeline_queue.c
>> +++ b/app/test-eventdev/test_pipeline_queue.c
>> @@ -379,6 +379,7 @@ pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          if (!t->internal_port) {
>> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
>> index 43ccb1ce9..62019c185 100644
>> --- a/app/test/test_eventdev.c
>> +++ b/app/test/test_eventdev.c
>> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>>          if (!(info.event_dev_cap &
>>                RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>>                  pconf.enqueue_depth = info.max_event_port_enqueue_depth;
>> -               pconf.disable_implicit_release = 1;
>> +               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>                  ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>>                  TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
>> -               pconf.disable_implicit_release = 0;
>> +               pconf.event_port_cfg = 0;
>>          }
>>
>>          ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
>> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
>> index a196ad4c6..8568bfcfc 100644
>> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
>> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
>> @@ -537,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>                  DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
>>          port_conf->enqueue_depth =
>>                  DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static int
>> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
>> index 1b1a5d939..99c0b2efb 100644
>> --- a/drivers/event/octeontx/ssovf_evdev.c
>> +++ b/drivers/event/octeontx/ssovf_evdev.c
>> @@ -224,7 +224,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = edev->max_num_events;
>>          port_conf->dequeue_depth = 1;
>>          port_conf->enqueue_depth = 1;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static void
>> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
>> index c889220e0..37d569b8c 100644
>> --- a/drivers/event/skeleton/skeleton_eventdev.c
>> +++ b/drivers/event/skeleton/skeleton_eventdev.c
>> @@ -209,7 +209,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = 32 * 1024;
>>          port_conf->dequeue_depth = 16;
>>          port_conf->enqueue_depth = 16;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static void
>> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
>> index fb8e8bebb..0b3dd9c1c 100644
>> --- a/drivers/event/sw/sw_evdev.c
>> +++ b/drivers/event/sw/sw_evdev.c
>> @@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
>>          }
>>
>>          p->inflight_max = conf->new_event_threshold;
>> -       p->implicit_release = !conf->disable_implicit_release;
>> +       p->implicit_release = !(conf->event_port_cfg &
>> +                               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>>
>>          /* check if ring exists, same as rx_worker above */
>>          snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
>> @@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = 1024;
>>          port_conf->dequeue_depth = 16;
>>          port_conf->enqueue_depth = 16;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static int
>> diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
>> index 38c21fa0f..a78d6cd0d 100644
>> --- a/drivers/event/sw/sw_evdev_selftest.c
>> +++ b/drivers/event/sw/sw_evdev_selftest.c
>> @@ -172,7 +172,7 @@ create_ports(struct test *t, int num_ports)
>>                          .new_event_threshold = 1024,
>>                          .dequeue_depth = 32,
>>                          .enqueue_depth = 64,
>> -                       .disable_implicit_release = 0,
>> +                       .event_port_cfg = 0,
>>          };
>>          if (num_ports > MAX_PORTS)
>>                  return -1;
>> @@ -1227,7 +1227,7 @@ port_reconfig_credits(struct test *t)
>>                                  .new_event_threshold = 128,
>>                                  .dequeue_depth = 32,
>>                                  .enqueue_depth = 64,
>> -                               .disable_implicit_release = 0,
>> +                               .event_port_cfg = 0,
>>                  };
>>                  if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>>                          printf("%d Error setting up port\n", __LINE__);
>> @@ -1317,7 +1317,7 @@ port_single_lb_reconfig(struct test *t)
>>                  .new_event_threshold = 128,
>>                  .dequeue_depth = 32,
>>                  .enqueue_depth = 64,
>> -               .disable_implicit_release = 0,
>> +               .event_port_cfg = 0,
>>          };
>>          if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>>                  printf("%d Error setting up port\n", __LINE__);
>> @@ -3079,7 +3079,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
>>           * only be initialized once - and this needs to be set for multiple runs
>>           */
>>          conf.new_event_threshold = 512;
>> -       conf.disable_implicit_release = disable_implicit_release;
>> +       conf.event_port_cfg = disable_implicit_release ?
>> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>>
>>          if (rte_event_port_setup(evdev, 0, &conf) < 0) {
>>                  printf("Error setting up RX port\n");
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> index 42ff4eeb9..a091da3ba 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>          struct rte_event_dev_config config = {
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 1,
>>                          .nb_events_limit  = 4096,
>>                          .nb_event_queue_flows = 1024,
>>                          .nb_event_port_dequeue_depth = 128,
>> @@ -138,12 +139,13 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>                          .dequeue_depth = cdata.worker_cq_depth,
>>                          .enqueue_depth = 64,
>>                          .new_event_threshold = 4096,
>> +                       .event_port_cfg = 0,


No need to set this value; it's guaranteed to be 0 anyway. You might
argue you do it for readability, but two other fields of that struct are
already implicitly initialized.


This applies to several of your other changes as well.
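
As a minimal illustration of the point (not from the patch): in C,
members omitted from a designated initializer are zero-initialized, so
the explicit assignment adds nothing.

	/* .event_port_cfg, like any member not named here, is already 0. */
	struct rte_event_port_conf conf = {
		.dequeue_depth = 16,
		.enqueue_depth = 64,
		.new_event_threshold = 4096,
	};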


>>          };
>>          struct rte_event_queue_conf wkr_q_conf = {
>>                          .schedule_type = cdata.queue_type,
>>                          .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>>                          .nb_atomic_flows = 1024,
>> -               .nb_atomic_order_sequences = 1024,
>> +                       .nb_atomic_order_sequences = 1024,
>>          };
>>          struct rte_event_queue_conf tx_q_conf = {
>>                          .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
>> @@ -167,7 +169,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>          disable_implicit_release = (dev_info.event_dev_cap &
>>                          RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>>
>> -       wkr_p_conf.disable_implicit_release = disable_implicit_release;
>> +       wkr_p_conf.event_port_cfg = disable_implicit_release ?
>> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>>
>>          if (dev_info.max_num_events < config.nb_events_limit)
>>                  config.nb_events_limit = dev_info.max_num_events;
>> @@ -417,6 +420,7 @@ init_adapters(uint16_t nb_ports)
>>                  .dequeue_depth = cdata.worker_cq_depth,
>>                  .enqueue_depth = 64,
>>                  .new_event_threshold = 4096,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          if (adptr_p_conf.new_event_threshold > dev_info.max_num_events)
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> index 55bb2f762..e8a9652aa 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>>          struct rte_event_dev_config config = {
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 0,
>>                          .nb_events_limit  = 4096,
>>                          .nb_event_queue_flows = 1024,
>>                          .nb_event_port_dequeue_depth = 128,
>> @@ -445,6 +446,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>>                          .dequeue_depth = cdata.worker_cq_depth,
>>                          .enqueue_depth = 64,
>>                          .new_event_threshold = 4096,
>> +                       .event_port_cfg = 0,
>>          };
>>          struct rte_event_queue_conf wkr_q_conf = {
>>                          .schedule_type = cdata.queue_type,
>> @@ -746,6 +748,7 @@ init_adapters(uint16_t nb_ports)
>>                  .dequeue_depth = cdata.worker_cq_depth,
>>                  .enqueue_depth = 64,
>>                  .new_event_threshold = 4096,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          init_ports(nb_ports);
>> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
>> index 2dc95e5f7..e01df0435 100644
>> --- a/examples/l2fwd-event/l2fwd_event_generic.c
>> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
>> @@ -126,8 +126,9 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>          evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
>> index 63d57b46c..f54327b4f 100644
>> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
>> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
>> @@ -123,8 +123,9 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>                                                                  event_p_id++) {
>> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
>> index f8c98435d..409a4107e 100644
>> --- a/examples/l3fwd/l3fwd_event_generic.c
>> +++ b/examples/l3fwd/l3fwd_event_generic.c
>> @@ -115,8 +115,9 @@ l3fwd_event_port_setup_generic(void)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>          evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
>> index 03ac581d6..df410f10f 100644
>> --- a/examples/l3fwd/l3fwd_event_internal_port.c
>> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
>> @@ -113,8 +113,9 @@ l3fwd_event_port_setup_internal_port(void)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>                                                                  event_p_id++) {
>> diff --git a/lib/librte_eal/x86/include/rte_cpuflags.h b/lib/librte_eal/x86/include/rte_cpuflags.h
>> index c1d20364d..ab2c3b379 100644
>> --- a/lib/librte_eal/x86/include/rte_cpuflags.h
>> +++ b/lib/librte_eal/x86/include/rte_cpuflags.h
>> @@ -130,6 +130,7 @@ enum rte_cpu_flag_t {
>>          RTE_CPUFLAG_CLDEMOTE,               /**< Cache Line Demote */
>>          RTE_CPUFLAG_MOVDIRI,                /**< Direct Store Instructions */
>>          RTE_CPUFLAG_MOVDIR64B,              /**< Direct Store Instructions 64B */
>> +       RTE_CPUFLAG_UMWAIT,                 /**< UMONITOR/UMWAIT */
>>          RTE_CPUFLAG_AVX512VP2INTERSECT,     /**< AVX512 Two Register Intersection */
>>
>>          /* The last item */
>> diff --git a/lib/librte_eal/x86/rte_cpuflags.c b/lib/librte_eal/x86/rte_cpuflags.c
>> index 30439e795..69ac0dbce 100644
>> --- a/lib/librte_eal/x86/rte_cpuflags.c
>> +++ b/lib/librte_eal/x86/rte_cpuflags.c
>> @@ -137,6 +137,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
>>          FEAT_DEF(CLDEMOTE, 0x00000007, 0, RTE_REG_ECX, 25)
>>          FEAT_DEF(MOVDIRI, 0x00000007, 0, RTE_REG_ECX, 27)
>>          FEAT_DEF(MOVDIR64B, 0x00000007, 0, RTE_REG_ECX, 28)
>> +        FEAT_DEF(UMWAIT, 0x00000007, 0, RTE_REG_ECX, 5)
>>          FEAT_DEF(AVX512VP2INTERSECT, 0x00000007, 0, RTE_REG_EDX, 8)
>>   };
>>
>> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> index bb21dc407..8a72256de 100644
>> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> @@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
>>                  return ret;
>>          }
>>
>> -       pc->disable_implicit_release = 0;
>> +       pc->event_port_cfg = 0;
>>          ret = rte_event_port_setup(dev_id, port_id, pc);
>>          if (ret) {
>>                  RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
>> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>> index 82c177c73..4955ab1a0 100644
>> --- a/lib/librte_eventdev/rte_eventdev.c
>> +++ b/lib/librte_eventdev/rte_eventdev.c
>> @@ -437,9 +437,29 @@ rte_event_dev_configure(uint8_t dev_id,
>>                                          dev_id);
>>                  return -EINVAL;
>>          }
>> -       if (dev_conf->nb_event_queues > info.max_event_queues) {
>> -               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
>> -               dev_id, dev_conf->nb_event_queues, info.max_event_queues);
>> +       if (dev_conf->nb_event_queues > info.max_event_queues +
>> +                       info.max_single_link_event_port_queue_pairs) {
>> +               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
>> +                                dev_id, dev_conf->nb_event_queues,
>> +                                info.max_event_queues,
>> +                                info.max_single_link_event_port_queue_pairs);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_event_queues -
>> +                       dev_conf->nb_single_link_event_port_queues >
>> +                       info.max_event_queues) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
>> +                                dev_id, dev_conf->nb_event_queues,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                info.max_event_queues);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_single_link_event_port_queues >
>> +                       dev_conf->nb_event_queues) {
>> +               RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
>> +                                dev_id,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                dev_conf->nb_event_queues);
>>                  return -EINVAL;
>>          }
>>
>> @@ -448,9 +468,31 @@ rte_event_dev_configure(uint8_t dev_id,
>>                  RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
>>                  return -EINVAL;
>>          }
>> -       if (dev_conf->nb_event_ports > info.max_event_ports) {
>> -               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
>> -               dev_id, dev_conf->nb_event_ports, info.max_event_ports);
>> +       if (dev_conf->nb_event_ports > info.max_event_ports +
>> +                       info.max_single_link_event_port_queue_pairs) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
>> +                                dev_id, dev_conf->nb_event_ports,
>> +                                info.max_event_ports,
>> +                                info.max_single_link_event_port_queue_pairs);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_event_ports -
>> +                       dev_conf->nb_single_link_event_port_queues
>> +                       > info.max_event_ports) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
>> +                                dev_id, dev_conf->nb_event_ports,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                info.max_event_ports);
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (dev_conf->nb_single_link_event_port_queues >
>> +           dev_conf->nb_event_ports) {
>> +               RTE_EDEV_LOG_ERR(
>> +                                "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
>> +                                dev_id,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                dev_conf->nb_event_ports);
>>                  return -EINVAL;
>>          }
>>
>> @@ -737,7 +779,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>>                  return -EINVAL;
>>          }
>>
>> -       if (port_conf && port_conf->disable_implicit_release &&
>> +       if (port_conf &&
>> +           (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
>>              !(dev->data->event_dev_cap &
>>                RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>>                  RTE_EDEV_LOG_ERR(
>> @@ -809,6 +852,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>>                          uint32_t *attr_value)
>>   {
>>          struct rte_eventdev *dev;
>> +       uint32_t config;
>>
>>          if (!attr_value)
>>                  return -EINVAL;
>> @@ -830,6 +874,10 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>>          case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
>>                  *attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
>>                  break;
>> +       case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
>> +               config = dev->data->ports_cfg[port_id].event_port_cfg;
>> +               *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>> +               break;
>>          default:
>>                  return -EINVAL;
>>          };
>> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
>> index 7dc832353..7f7a8a275 100644
>> --- a/lib/librte_eventdev/rte_eventdev.h
>> +++ b/lib/librte_eventdev/rte_eventdev.h
>> @@ -291,6 +291,13 @@ struct rte_event;
>>    * single queue to each port or map a single queue to many port.
>>    */
>>
>> +#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
>> +/**< Event device is capable of carrying the flow ID from the enqueued
>> + * event to the dequeued event. If the flag is set, the dequeued event's flow
>> + * ID matches the corresponding enqueued event's flow ID. If the flag is not
>> + * set, the dequeued event's flow ID field is uninitialized.
>> + */
>> +

The dequeued event's flow ID should be documented as undefined rather than 
uninitialized, to let an implementation overwrite an existing value. Replace 
"is capable of carrying" with "carries".


Is "maintain" better than "carry"? Or "preserve". I don't know.

>>   /* Event device priority levels */
>>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>   /**< Highest priority expressed across eventdev subsystem
>> @@ -380,6 +387,10 @@ struct rte_event_dev_info {
>>           * event port by this device.
>>           * A device that does not support bulk enqueue will set this as 1.
>>           */
>> +       uint32_t max_event_port_links;
>> +       /**< Maximum number of queues that can be linked to a single event
>> +        * port by this device.
>> +        */


The eventdev API supports at most 255 queues, so you should use a uint8_t here.


>>          int32_t max_num_events;
>>          /**< A *closed system* event dev has a limit on the number of events it
>>           * can manage at a time. An *open system* event dev does not have a
>> @@ -387,6 +398,12 @@ struct rte_event_dev_info {
>>           */
>>          uint32_t event_dev_cap;
>>          /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
>> +       uint8_t max_single_link_event_port_queue_pairs;
>> +       /**< Maximum number of event ports and queues that are optimized for
>> +        * (and only capable of) single-link configurations supported by this
>> +        * device. These ports and queues are not accounted for in
>> +        * max_event_ports or max_event_queues.
>> +        */
>>   };
>>
>>   /**
>> @@ -494,6 +511,14 @@ struct rte_event_dev_config {
>>           */
>>          uint32_t event_dev_cfg;
>>          /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
>> +       uint8_t nb_single_link_event_port_queues;
>> +       /**< Number of event ports and queues that will be singly-linked to
>> +        * each other. These are a subset of the overall event ports and
>> +        * queues; this value cannot exceed *nb_event_ports* or
>> +        * *nb_event_queues*. If the device has ports and queues that are
>> +        * optimized for single-link usage, this field is a hint for how many
>> +        * to allocate; otherwise, regular event ports and queues can be used.
>> +        */
>>   };
>>
>>   /**
>> @@ -671,6 +696,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>
>>   /* Event port specific APIs */
>>
>> +/* Event port configuration bitmap flags */
>> +#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>> +/**< Configure the port not to release outstanding events in
>> + * rte_event_dev_dequeue_burst(). If set, all events received through
>> + * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>> + * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>> + * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>> + */
>> +#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>> +/**< This event port links only to a single event queue.
>> + *
>> + *  @see rte_event_port_setup(), rte_event_port_link()
>> + */
>> +
>>   /** Event port configuration structure */
>>   struct rte_event_port_conf {
>>          int32_t new_event_threshold;
>> @@ -698,13 +737,7 @@ struct rte_event_port_conf {
>>           * which previously supplied to rte_event_dev_configure().
>>           * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
>>           */
>> -       uint8_t disable_implicit_release;
>> -       /**< Configure the port not to release outstanding events in
>> -        * rte_event_dev_dequeue_burst(). If true, all events received through
>> -        * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>> -        * RTE_EVENT_OP_FORWARD. Must be false when the device is not
>> -        * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>> -        */
>> +       uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>>   };
>>
>>   /**
>> @@ -769,6 +802,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>>    * The new event threshold of the port
>>    */
>>   #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>> +/**
>> + * The implicit release disable attribute of the port
>> + */
>> +#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>>
>>   /**
>>    * Get an attribute from a port.
>> diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> index 443cd38c2..157299983 100644
>> --- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> +++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> @@ -88,6 +88,60 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>>          return -ENXIO;
>>   }
>>
>> +/**
>> + * @internal
>> + * Wrapper for use by pci drivers as a .probe function to attach to a event
>> + * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>> + * the name.
>> + */
>> +static inline int
>> +rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>> +                           struct rte_pci_device *pci_dev,
>> +                           size_t private_data_size,
>> +                           eventdev_pmd_pci_callback_t devinit,
>> +                           const char *name)
>> +{
>> +       struct rte_eventdev *eventdev;
>> +
>> +       int retval;
>> +
>> +       if (devinit == NULL)
>> +               return -EINVAL;
>> +
>> +       eventdev = rte_event_pmd_allocate(name,
>> +                        pci_dev->device.numa_node);
>> +       if (eventdev == NULL)
>> +               return -ENOMEM;
>> +
>> +       if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>> +               eventdev->data->dev_private =
>> +                               rte_zmalloc_socket(
>> +                                               "eventdev private structure",
>> +                                               private_data_size,
>> +                                               RTE_CACHE_LINE_SIZE,
>> +                                               rte_socket_id());
>> +
>> +               if (eventdev->data->dev_private == NULL)
>> +                       rte_panic("Cannot allocate memzone for private "
>> +                                       "device data");
>> +       }
>> +
>> +       eventdev->dev = &pci_dev->device;
>> +
>> +       /* Invoke PMD device initialization function */
>> +       retval = devinit(eventdev);
>> +       if (retval == 0)
>> +               return 0;
>> +
>> +       RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
>> +                       " failed", pci_drv->driver.name,
>> +                       (unsigned int) pci_dev->id.vendor_id,
>> +                       (unsigned int) pci_dev->id.device_id);
>> +
>> +       rte_event_pmd_release(eventdev);
>> +
>> +       return -ENXIO;
>> +}
>>
>>   /**
>>    * @internal
>> --
>> 2.13.6
>>
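
A minimal sketch of the migration this patch implies, from the removed
disable_implicit_release field to the event_port_cfg bit field; the depth
and threshold values are illustrative, the helper name is made up, and the
device is assumed to be RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
The readback uses the new port attribute added above:

    #include <rte_eventdev.h>

    static int
    setup_port_no_impl_release(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event_port_conf conf = {
                    .new_event_threshold = 4096,   /* illustrative */
                    .dequeue_depth = 16,           /* illustrative */
                    .enqueue_depth = 16,           /* illustrative */
                    /* replaces: conf.disable_implicit_release = 1 */
                    .event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL,
            };
            uint32_t attr;
            int ret;

            ret = rte_event_port_setup(dev_id, port_id, &conf);
            if (ret < 0)
                    return ret;

            /* Confirm the flag through the new port attribute. */
            ret = rte_event_port_attr_get(dev_id, port_id,
                            RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE,
                            &attr);
            if (ret < 0)
                    return ret;

            return attr == 1 ? 0 : -1;
    }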


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
  2020-06-13  3:59  5%   ` Jerin Jacob
  2020-06-13 10:43  0%     ` Mattias Rönnblom
@ 2020-06-18 15:44  5%     ` McDaniel, Timothy
  1 sibling, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-06-18 15:44 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Jerin Jacob, dpdk-dev, Eads, Gage, Van Haaren, Harry,
	Ray Kinsella, Neil Horman, Mattias Rönnblom

Hello Jerin,

I am working on V2 of the patchset, and the ABI breakage will be corrected in that version.

Thanks,
Tim

-----Original Message-----
From: Jerin Jacob <jerinjacobk@gmail.com> 
Sent: Friday, June 12, 2020 10:59 PM
To: McDaniel, Timothy <timothy.mcdaniel@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>; dpdk-dev <dev@dpdk.org>; Eads, Gage <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>; Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Subject: Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites

On Sat, Jun 13, 2020 at 2:56 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
> The DLB hardware does not conform exactly to the eventdev interface.
> 1) It has a limit on the number of queues that may be linked to a port.
> 2) Some ports are further restricted to a maximum of 1 linked queue.
> 3) It does not (currently) have the ability to carry the flow_id as part
> of the event (QE) payload.
>
> Due to the above, we would like to propose the following enhancements.


Thanks, McDaniel. Good to see a new HW PMD for eventdev.

+ Ray and Neil.

Hello McDaniel,
I assume this patchset is for v20.08. It is adding new elements in
public structures. Have you checked for ABI breakage?

I will review the rest of the series if there is NO ABI breakage, as we
cannot have ABI breakage in the 20.08 version.


ABI validator
~~~~~~~~~~~~~~
1. meson build
2. Compile and install the known stable ABI libs, i.e. ToT:
         DESTDIR=$PWD/install-meson-stable ninja -C build install
   Compile and install with the patches to be verified:
         DESTDIR=$PWD/install-meson-new ninja -C build install
3. Generate the ABI dumps for both:
        devtools/gen-abi.sh install-meson-stable
        devtools/gen-abi.sh install-meson-new
4. Run the ABI checker:
        devtools/check-abi.sh install-meson-stable install-meson-new


DPDK_ABI_REF_DIR=/build/dpdk/reference/ DPDK_ABI_REF_VERSION=v20.02
./devtools/test-meson-builds.sh
DPDK_ABI_REF_DIR - needs an absolute path, for reasons that are still
unclear to me.
DPDK_ABI_REF_VERSION - you need to use the last DPDK release.

>
> 1) Add new fields to the rte_event_dev_info struct. These fields allow
> the device to advertize its capabilities so that applications can take
> the appropriate actions based on those capabilities.
>
>     struct rte_event_dev_info {
>         uint32_t max_event_port_links;
>         /**< Maximum number of queues that can be linked to a single event
>          * port by this device.
>          */
>
>         uint8_t max_single_link_event_port_queue_pairs;
>         /**< Maximum number of event ports and queues that are optimized for
>          * (and only capable of) single-link configurations supported by this
>          * device. These ports and queues are not accounted for in
>          * max_event_ports or max_event_queues.
>          */
>     }
>
> 2) Add a new field to the rte_event_dev_config struct. This field allows the
> application to specify how many of its ports are limited to a single link,
> or will be used in single link mode.
>
>     /** Event device configuration structure */
>     struct rte_event_dev_config {
>         uint8_t nb_single_link_event_port_queues;
>         /**< Number of event ports and queues that will be singly-linked to
>          * each other. These are a subset of the overall event ports and
>          * queues; this value cannot exceed *nb_event_ports* or
>          * *nb_event_queues*. If the device has ports and queues that are
>          * optimized for single-link usage, this field is a hint for how many
>          * to allocate; otherwise, regular event ports and queues can be used.
>          */
>     }
>
> 3) Replace the dedicated disable_implicit_release field with a bit field
> of explicit port capabilities. The implicit release disable functionality
> is assigned to one bit, and a port-is-single-link-only attribute is
> assigned to another, with the remaining bits available for future assignment.
>
>         * Event port configuration bitmap flags */
>         #define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>         /**< Configure the port not to release outstanding events in
>          * rte_event_dev_dequeue_burst(). If set, all events received through
>          * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>          * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>          * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>          */
>         #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>
>         /**< This event port links only to a single event queue.
>          *
>          *  @see rte_event_port_setup(), rte_event_port_link()
>          */
>
>         #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>         /**
>          * The implicit release disable attribute of the port
>          */
>
>         struct rte_event_port_conf {
>                 uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>         }
>
> 4) Add UMWAIT/UMONITOR bit to rte_cpuflags
>
> 5) Added a new API that is useful for probing PCI devices.
>
>         /**
>          * @internal
>          * Wrapper for use by pci drivers as a .probe function to attach to a event
>          * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>          * the name.
>          */
>         static inline int
>         rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>                                     struct rte_pci_device *pci_dev,
>                                     size_t private_data_size,
>                                     eventdev_pmd_pci_callback_t devinit,
>                                     const char *name);
>
> Change-Id: I4cf00015296e2b3feca9886895765554730594be
> Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> ---
>  app/test-eventdev/evt_common.h                     |  1 +
>  app/test-eventdev/test_order_atq.c                 |  4 ++
>  app/test-eventdev/test_order_common.c              |  6 ++-
>  app/test-eventdev/test_order_queue.c               |  4 ++
>  app/test-eventdev/test_perf_atq.c                  |  1 +
>  app/test-eventdev/test_perf_queue.c                |  1 +
>  app/test-eventdev/test_pipeline_atq.c              |  1 +
>  app/test-eventdev/test_pipeline_queue.c            |  1 +
>  app/test/test_eventdev.c                           |  4 +-
>  drivers/event/dpaa2/dpaa2_eventdev.c               |  2 +-
>  drivers/event/octeontx/ssovf_evdev.c               |  2 +-
>  drivers/event/skeleton/skeleton_eventdev.c         |  2 +-
>  drivers/event/sw/sw_evdev.c                        |  5 +-
>  drivers/event/sw/sw_evdev_selftest.c               |  9 ++--
>  .../eventdev_pipeline/pipeline_worker_generic.c    |  8 ++-
>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  3 ++
>  examples/l2fwd-event/l2fwd_event_generic.c         |  5 +-
>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  5 +-
>  examples/l3fwd/l3fwd_event_generic.c               |  5 +-
>  examples/l3fwd/l3fwd_event_internal_port.c         |  5 +-
>  lib/librte_eal/x86/include/rte_cpuflags.h          |  1 +
>  lib/librte_eal/x86/rte_cpuflags.c                  |  1 +
>  lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
>  lib/librte_eventdev/rte_eventdev.c                 | 62 +++++++++++++++++++---
>  lib/librte_eventdev/rte_eventdev.h                 | 51 +++++++++++++++---
>  lib/librte_eventdev/rte_eventdev_pmd_pci.h         | 54 +++++++++++++++++++
>  26 files changed, 208 insertions(+), 37 deletions(-)
>
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f9d7378d3..120c27b33 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -169,6 +169,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
>                         .dequeue_timeout_ns = opt->deq_tmo_nsec,
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = info.max_num_events,
>                         .nb_event_queue_flows = opt->nb_flows,
>                         .nb_event_port_dequeue_depth =
> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
> index 3366cfce9..8246b96f0 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -34,6 +34,8 @@ order_atq_worker(void *arg)
>                         continue;
>                 }
>
> +               ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.sub_event_type == 0) { /* stage 0 from producer */
>                         order_atq_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -68,6 +70,8 @@ order_atq_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +                       ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].sub_event_type == 0) { /*stage 0 */
>                                 order_atq_process_stage_0(&ev[i]);
>                         } else if (ev[i].sub_event_type == 1) { /* stage 1 */
> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
> index 4190f9ade..c6fcd0509 100644
> --- a/app/test-eventdev/test_order_common.c
> +++ b/app/test-eventdev/test_order_common.c
> @@ -49,6 +49,7 @@ order_producer(void *arg)
>                 const uint32_t flow = (uintptr_t)m % nb_flows;
>                 /* Maintain seq number per flow */
>                 m->seqn = producer_flow_seq[flow]++;
> +               m->udata64 = flow;
>
>                 ev.flow_id = flow;
>                 ev.mbuf = m;
> @@ -318,10 +319,11 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>                 opt->wkr_deq_dep = dev_info.max_event_port_dequeue_depth;
>
>         /* port configuration */
> -       const struct rte_event_port_conf p_conf = {
> +       struct rte_event_port_conf p_conf = {
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         /* setup one port per worker, linking to all queues */
> @@ -351,6 +353,8 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>         p->queue_id = 0;
>         p->t = t;
>
> +       p_conf.new_event_threshold /= 2;
> +
>         ret = rte_event_port_setup(opt->dev_id, port, &p_conf);
>         if (ret) {
>                 evt_err("failed to setup producer port %d", port);
> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
> index 495efd92f..a0a2187a2 100644
> --- a/app/test-eventdev/test_order_queue.c
> +++ b/app/test-eventdev/test_order_queue.c
> @@ -34,6 +34,8 @@ order_queue_worker(void *arg)
>                         continue;
>                 }
>
> +               ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.queue_id == 0) { /* from ordered queue */
>                         order_queue_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -68,6 +70,8 @@ order_queue_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +                       ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].queue_id == 0) { /* from ordered queue */
>                                 order_queue_process_stage_0(&ev[i]);
>                         } else if (ev[i].queue_id == 1) {/* from atomic queue */
> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
> index 8fd51004e..10846f202 100644
> --- a/app/test-eventdev/test_perf_atq.c
> +++ b/app/test-eventdev/test_perf_atq.c
> @@ -204,6 +204,7 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues,
> diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
> index f4ea3a795..a0119da60 100644
> --- a/app/test-eventdev/test_perf_queue.c
> +++ b/app/test-eventdev/test_perf_queue.c
> @@ -219,6 +219,7 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         ret = perf_event_dev_port_setup(test, opt, nb_stages /* stride */,
> diff --git a/app/test-eventdev/test_pipeline_atq.c b/app/test-eventdev/test_pipeline_atq.c
> index 8e8686c14..a95ec0aa5 100644
> --- a/app/test-eventdev/test_pipeline_atq.c
> +++ b/app/test-eventdev/test_pipeline_atq.c
> @@ -356,6 +356,7 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                 .dequeue_depth = opt->wkr_deq_dep,
>                 .enqueue_depth = info.max_event_port_dequeue_depth,
>                 .new_event_threshold = info.max_num_events,
> +               .event_port_cfg = 0,
>         };
>
>         if (!t->internal_port)
> diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
> index 7bebac34f..30817dc78 100644
> --- a/app/test-eventdev/test_pipeline_queue.c
> +++ b/app/test-eventdev/test_pipeline_queue.c
> @@ -379,6 +379,7 @@ pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = info.max_event_port_dequeue_depth,
>                         .new_event_threshold = info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         if (!t->internal_port) {
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 43ccb1ce9..62019c185 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>         if (!(info.event_dev_cap &
>               RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>                 pconf.enqueue_depth = info.max_event_port_enqueue_depth;
> -               pconf.disable_implicit_release = 1;
> +               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>                 ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>                 TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
> -               pconf.disable_implicit_release = 0;
> +               pconf.event_port_cfg = 0;
>         }
>
>         ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
> index a196ad4c6..8568bfcfc 100644
> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
> @@ -537,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>                 DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
>         port_conf->enqueue_depth =
>                 DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static int
> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
> index 1b1a5d939..99c0b2efb 100644
> --- a/drivers/event/octeontx/ssovf_evdev.c
> +++ b/drivers/event/octeontx/ssovf_evdev.c
> @@ -224,7 +224,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = edev->max_num_events;
>         port_conf->dequeue_depth = 1;
>         port_conf->enqueue_depth = 1;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static void
> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
> index c889220e0..37d569b8c 100644
> --- a/drivers/event/skeleton/skeleton_eventdev.c
> +++ b/drivers/event/skeleton/skeleton_eventdev.c
> @@ -209,7 +209,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = 32 * 1024;
>         port_conf->dequeue_depth = 16;
>         port_conf->enqueue_depth = 16;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static void
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index fb8e8bebb..0b3dd9c1c 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
>         }
>
>         p->inflight_max = conf->new_event_threshold;
> -       p->implicit_release = !conf->disable_implicit_release;
> +       p->implicit_release = !(conf->event_port_cfg &
> +                               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>
>         /* check if ring exists, same as rx_worker above */
>         snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
> @@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = 1024;
>         port_conf->dequeue_depth = 16;
>         port_conf->enqueue_depth = 16;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static int
> diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
> index 38c21fa0f..a78d6cd0d 100644
> --- a/drivers/event/sw/sw_evdev_selftest.c
> +++ b/drivers/event/sw/sw_evdev_selftest.c
> @@ -172,7 +172,7 @@ create_ports(struct test *t, int num_ports)
>                         .new_event_threshold = 1024,
>                         .dequeue_depth = 32,
>                         .enqueue_depth = 64,
> -                       .disable_implicit_release = 0,
> +                       .event_port_cfg = 0,
>         };
>         if (num_ports > MAX_PORTS)
>                 return -1;
> @@ -1227,7 +1227,7 @@ port_reconfig_credits(struct test *t)
>                                 .new_event_threshold = 128,
>                                 .dequeue_depth = 32,
>                                 .enqueue_depth = 64,
> -                               .disable_implicit_release = 0,
> +                               .event_port_cfg = 0,
>                 };
>                 if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>                         printf("%d Error setting up port\n", __LINE__);
> @@ -1317,7 +1317,7 @@ port_single_lb_reconfig(struct test *t)
>                 .new_event_threshold = 128,
>                 .dequeue_depth = 32,
>                 .enqueue_depth = 64,
> -               .disable_implicit_release = 0,
> +               .event_port_cfg = 0,
>         };
>         if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>                 printf("%d Error setting up port\n", __LINE__);
> @@ -3079,7 +3079,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
>          * only be initialized once - and this needs to be set for multiple runs
>          */
>         conf.new_event_threshold = 512;
> -       conf.disable_implicit_release = disable_implicit_release;
> +       conf.event_port_cfg = disable_implicit_release ?
> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
>         if (rte_event_port_setup(evdev, 0, &conf) < 0) {
>                 printf("Error setting up RX port\n");
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 42ff4eeb9..a091da3ba 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 1,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> @@ -138,12 +139,13 @@ setup_eventdev_generic(struct worker_data *worker_data)
>                         .dequeue_depth = cdata.worker_cq_depth,
>                         .enqueue_depth = 64,
>                         .new_event_threshold = 4096,
> +                       .event_port_cfg = 0,
>         };
>         struct rte_event_queue_conf wkr_q_conf = {
>                         .schedule_type = cdata.queue_type,
>                         .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>                         .nb_atomic_flows = 1024,
> -               .nb_atomic_order_sequences = 1024,
> +                       .nb_atomic_order_sequences = 1024,
>         };
>         struct rte_event_queue_conf tx_q_conf = {
>                         .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> @@ -167,7 +169,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         disable_implicit_release = (dev_info.event_dev_cap &
>                         RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>
> -       wkr_p_conf.disable_implicit_release = disable_implicit_release;
> +       wkr_p_conf.event_port_cfg = disable_implicit_release ?
> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
>         if (dev_info.max_num_events < config.nb_events_limit)
>                 config.nb_events_limit = dev_info.max_num_events;
> @@ -417,6 +420,7 @@ init_adapters(uint16_t nb_ports)
>                 .dequeue_depth = cdata.worker_cq_depth,
>                 .enqueue_depth = 64,
>                 .new_event_threshold = 4096,
> +               .event_port_cfg = 0,
>         };
>
>         if (adptr_p_conf.new_event_threshold > dev_info.max_num_events)
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index 55bb2f762..e8a9652aa 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> @@ -445,6 +446,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>                         .dequeue_depth = cdata.worker_cq_depth,
>                         .enqueue_depth = 64,
>                         .new_event_threshold = 4096,
> +                       .event_port_cfg = 0,
>         };
>         struct rte_event_queue_conf wkr_q_conf = {
>                         .schedule_type = cdata.queue_type,
> @@ -746,6 +748,7 @@ init_adapters(uint16_t nb_ports)
>                 .dequeue_depth = cdata.worker_cq_depth,
>                 .enqueue_depth = 64,
>                 .new_event_threshold = 4096,
> +               .event_port_cfg = 0,
>         };
>
>         init_ports(nb_ports);
> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
> index 2dc95e5f7..e01df0435 100644
> --- a/examples/l2fwd-event/l2fwd_event_generic.c
> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> @@ -126,8 +126,9 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
> index 63d57b46c..f54327b4f 100644
> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> @@ -123,8 +123,9 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> index f8c98435d..409a4107e 100644
> --- a/examples/l3fwd/l3fwd_event_generic.c
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -115,8 +115,9 @@ l3fwd_event_port_setup_generic(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> index 03ac581d6..df410f10f 100644
> --- a/examples/l3fwd/l3fwd_event_internal_port.c
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -113,8 +113,9 @@ l3fwd_event_port_setup_internal_port(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> diff --git a/lib/librte_eal/x86/include/rte_cpuflags.h b/lib/librte_eal/x86/include/rte_cpuflags.h
> index c1d20364d..ab2c3b379 100644
> --- a/lib/librte_eal/x86/include/rte_cpuflags.h
> +++ b/lib/librte_eal/x86/include/rte_cpuflags.h
> @@ -130,6 +130,7 @@ enum rte_cpu_flag_t {
>         RTE_CPUFLAG_CLDEMOTE,               /**< Cache Line Demote */
>         RTE_CPUFLAG_MOVDIRI,                /**< Direct Store Instructions */
>         RTE_CPUFLAG_MOVDIR64B,              /**< Direct Store Instructions 64B */
> +       RTE_CPUFLAG_UMWAIT,                 /**< UMONITOR/UMWAIT */
>         RTE_CPUFLAG_AVX512VP2INTERSECT,     /**< AVX512 Two Register Intersection */
>
>         /* The last item */
> diff --git a/lib/librte_eal/x86/rte_cpuflags.c b/lib/librte_eal/x86/rte_cpuflags.c
> index 30439e795..69ac0dbce 100644
> --- a/lib/librte_eal/x86/rte_cpuflags.c
> +++ b/lib/librte_eal/x86/rte_cpuflags.c
> @@ -137,6 +137,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
>         FEAT_DEF(CLDEMOTE, 0x00000007, 0, RTE_REG_ECX, 25)
>         FEAT_DEF(MOVDIRI, 0x00000007, 0, RTE_REG_ECX, 27)
>         FEAT_DEF(MOVDIR64B, 0x00000007, 0, RTE_REG_ECX, 28)
> +        FEAT_DEF(UMWAIT, 0x00000007, 0, RTE_REG_ECX, 5)
>         FEAT_DEF(AVX512VP2INTERSECT, 0x00000007, 0, RTE_REG_EDX, 8)
>  };
>
> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> index bb21dc407..8a72256de 100644
> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> @@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
>                 return ret;
>         }
>
> -       pc->disable_implicit_release = 0;
> +       pc->event_port_cfg = 0;
>         ret = rte_event_port_setup(dev_id, port_id, pc);
>         if (ret) {
>                 RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index 82c177c73..4955ab1a0 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -437,9 +437,29 @@ rte_event_dev_configure(uint8_t dev_id,
>                                         dev_id);
>                 return -EINVAL;
>         }
> -       if (dev_conf->nb_event_queues > info.max_event_queues) {
> -               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
> -               dev_id, dev_conf->nb_event_queues, info.max_event_queues);
> +       if (dev_conf->nb_event_queues > info.max_event_queues +
> +                       info.max_single_link_event_port_queue_pairs) {
> +               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
> +                                dev_id, dev_conf->nb_event_queues,
> +                                info.max_event_queues,
> +                                info.max_single_link_event_port_queue_pairs);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_event_queues -
> +                       dev_conf->nb_single_link_event_port_queues >
> +                       info.max_event_queues) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
> +                                dev_id, dev_conf->nb_event_queues,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                info.max_event_queues);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_single_link_event_port_queues >
> +                       dev_conf->nb_event_queues) {
> +               RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
> +                                dev_id,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                dev_conf->nb_event_queues);
>                 return -EINVAL;
>         }
>
> @@ -448,9 +468,31 @@ rte_event_dev_configure(uint8_t dev_id,
>                 RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
>                 return -EINVAL;
>         }
> -       if (dev_conf->nb_event_ports > info.max_event_ports) {
> -               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
> -               dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> +       if (dev_conf->nb_event_ports > info.max_event_ports +
> +                       info.max_single_link_event_port_queue_pairs) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
> +                                dev_id, dev_conf->nb_event_ports,
> +                                info.max_event_ports,
> +                                info.max_single_link_event_port_queue_pairs);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_event_ports -
> +                       dev_conf->nb_single_link_event_port_queues
> +                       > info.max_event_ports) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
> +                                dev_id, dev_conf->nb_event_ports,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                info.max_event_ports);
> +               return -EINVAL;
> +       }
> +
> +       if (dev_conf->nb_single_link_event_port_queues >
> +           dev_conf->nb_event_ports) {
> +               RTE_EDEV_LOG_ERR(
> +                                "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
> +                                dev_id,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                dev_conf->nb_event_ports);
>                 return -EINVAL;
>         }
>
> @@ -737,7 +779,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>                 return -EINVAL;
>         }
>
> -       if (port_conf && port_conf->disable_implicit_release &&
> +       if (port_conf &&
> +           (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
>             !(dev->data->event_dev_cap &
>               RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>                 RTE_EDEV_LOG_ERR(
> @@ -809,6 +852,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>                         uint32_t *attr_value)
>  {
>         struct rte_eventdev *dev;
> +       uint32_t config;
>
>         if (!attr_value)
>                 return -EINVAL;
> @@ -830,6 +874,10 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>         case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
>                 *attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
>                 break;
> +       case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
> +               config = dev->data->ports_cfg[port_id].event_port_cfg;
> +               *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
> +               break;
>         default:
>                 return -EINVAL;
>         };
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index 7dc832353..7f7a8a275 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -291,6 +291,13 @@ struct rte_event;
>   * single queue to each port or map a single queue to many port.
>   */
>
> +#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
> +/**< Event device is capable of carrying the flow ID from the enqueued
> + * event to the dequeued event. If the flag is set, the dequeued event's flow
> + * ID matches the corresponding enqueued event's flow ID. If the flag is not
> + * set, the dequeued event's flow ID field is uninitialized.
> + */
> +
>  /* Event device priority levels */
>  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>  /**< Highest priority expressed across eventdev subsystem
> @@ -380,6 +387,10 @@ struct rte_event_dev_info {
>          * event port by this device.
>          * A device that does not support bulk enqueue will set this as 1.
>          */
> +       uint32_t max_event_port_links;
> +       /**< Maximum number of queues that can be linked to a single event
> +        * port by this device.
> +        */
>         int32_t max_num_events;
>         /**< A *closed system* event dev has a limit on the number of events it
>          * can manage at a time. An *open system* event dev does not have a
> @@ -387,6 +398,12 @@ struct rte_event_dev_info {
>          */
>         uint32_t event_dev_cap;
>         /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
> +       uint8_t max_single_link_event_port_queue_pairs;
> +       /**< Maximum number of event ports and queues that are optimized for
> +        * (and only capable of) single-link configurations supported by this
> +        * device. These ports and queues are not accounted for in
> +        * max_event_ports or max_event_queues.
> +        */
>  };
>
>  /**
> @@ -494,6 +511,14 @@ struct rte_event_dev_config {
>          */
>         uint32_t event_dev_cfg;
>         /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> +       uint8_t nb_single_link_event_port_queues;
> +       /**< Number of event ports and queues that will be singly-linked to
> +        * each other. These are a subset of the overall event ports and
> +        * queues; this value cannot exceed *nb_event_ports* or
> +        * *nb_event_queues*. If the device has ports and queues that are
> +        * optimized for single-link usage, this field is a hint for how many
> +        * to allocate; otherwise, regular event ports and queues can be used.
> +        */
>  };
>
>  /**
> @@ -671,6 +696,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>
>  /* Event port specific APIs */
>
> +/* Event port configuration bitmap flags */
> +#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
> +/**< Configure the port not to release outstanding events in
> + * rte_event_dev_dequeue_burst(). If set, all events received through
> + * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
> + * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
> + * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
> + */
> +#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
> +/**< This event port links only to a single event queue.
> + *
> + *  @see rte_event_port_setup(), rte_event_port_link()
> + */
> +
>  /** Event port configuration structure */
>  struct rte_event_port_conf {
>         int32_t new_event_threshold;
> @@ -698,13 +737,7 @@ struct rte_event_port_conf {
>          * which previously supplied to rte_event_dev_configure().
>          * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
>          */
> -       uint8_t disable_implicit_release;
> -       /**< Configure the port not to release outstanding events in
> -        * rte_event_dev_dequeue_burst(). If true, all events received through
> -        * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
> -        * RTE_EVENT_OP_FORWARD. Must be false when the device is not
> -        * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
> -        */
> +       uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>  };
>
>  /**
> @@ -769,6 +802,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>   * The new event threshold of the port
>   */
>  #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
> +/**
> + * The implicit release disable attribute of the port
> + */
> +#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>
>  /**
>   * Get an attribute from a port.
> diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> index 443cd38c2..157299983 100644
> --- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> +++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> @@ -88,6 +88,60 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>         return -ENXIO;
>  }
>
> +/**
> + * @internal
> + * Wrapper for use by pci drivers as a .probe function to attach to a event
> + * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
> + * the name.
> + */
> +static inline int
> +rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
> +                           struct rte_pci_device *pci_dev,
> +                           size_t private_data_size,
> +                           eventdev_pmd_pci_callback_t devinit,
> +                           const char *name)
> +{
> +       struct rte_eventdev *eventdev;
> +
> +       int retval;
> +
> +       if (devinit == NULL)
> +               return -EINVAL;
> +
> +       eventdev = rte_event_pmd_allocate(name,
> +                        pci_dev->device.numa_node);
> +       if (eventdev == NULL)
> +               return -ENOMEM;
> +
> +       if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +               eventdev->data->dev_private =
> +                               rte_zmalloc_socket(
> +                                               "eventdev private structure",
> +                                               private_data_size,
> +                                               RTE_CACHE_LINE_SIZE,
> +                                               rte_socket_id());
> +
> +               if (eventdev->data->dev_private == NULL)
> +                       rte_panic("Cannot allocate memzone for private "
> +                                       "device data");
> +       }
> +
> +       eventdev->dev = &pci_dev->device;
> +
> +       /* Invoke PMD device initialization function */
> +       retval = devinit(eventdev);
> +       if (retval == 0)
> +               return 0;
> +
> +       RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
> +                       " failed", pci_drv->driver.name,
> +                       (unsigned int) pci_dev->id.vendor_id,
> +                       (unsigned int) pci_dev->id.device_id);
> +
> +       rte_event_pmd_release(eventdev);
> +
> +       return -ENXIO;
> +}
>
>  /**
>   * @internal
> --
> 2.13.6
>
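
For illustration, a minimal sketch (under the proposal above, with
made-up counts and a hypothetical helper name) of reserving one
singly-linked port/queue pair; it follows the validation added to
rte_event_dev_configure() in this patch. Setup of the regular ports
and queues, including rte_event_queue_setup() for the single-link
queue, is omitted for brevity:

    #include <rte_eventdev.h>

    static int
    configure_single_link(uint8_t dev_id, struct rte_event_dev_config *cfg,
                          struct rte_event_port_conf *pconf)
    {
            /* Use the last port and queue as the single-link pair. */
            uint8_t sl_port = cfg->nb_event_ports - 1;
            uint8_t sl_queue = cfg->nb_event_queues - 1;
            int ret;

            cfg->nb_single_link_event_port_queues = 1;
            ret = rte_event_dev_configure(dev_id, cfg);
            if (ret < 0)
                    return ret;

            pconf->event_port_cfg |= RTE_EVENT_PORT_CFG_SINGLE_LINK;
            ret = rte_event_port_setup(dev_id, sl_port, pconf);
            if (ret < 0)
                    return ret;

            /* A single-link port is linked to exactly one queue. */
            ret = rte_event_port_link(dev_id, sl_port, &sl_queue, NULL, 1);
            return ret == 1 ? 0 : -1;
    }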

^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] DPDK Release Status Meeting 18/06/2020
  2020-06-18 15:26  3%   ` Ferruh Yigit
@ 2020-06-18 15:30  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-06-18 15:30 UTC (permalink / raw)
  To: Trahe, Fiona, Ferruh Yigit; +Cc: dev, Mcnamara, John

18/06/2020 17:26, Ferruh Yigit:
> On 6/18/2020 3:09 PM, Trahe, Fiona wrote:
> > Hi all,
> > 
> > If there's a cryptodev API change planned for 20.11 (an ABI breakage), is it necessary
> > to send a deprecation notice in 20.08?
> > Or can this just be worked during the normal 20.11 patch review cycle?
> 
> As far as I understand it, it needs to follow the regular deprecation process:
> a deprecation notice should be sent and merged (with 3 acks) in 20.08, so that
> the API can be changed in 20.11.
> 
> It would be good to highlight this, perhaps in the next release status meeting:
> there may be some ABI changes waiting for 20.11, and this release is the last
> chance for them to send a deprecation notice.

Yes
Good idea to highlight it.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] DPDK Release Status Meeting 18/06/2020
  2020-06-18 14:09  3% ` Trahe, Fiona
@ 2020-06-18 15:26  3%   ` Ferruh Yigit
  2020-06-18 15:30  0%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-06-18 15:26 UTC (permalink / raw)
  To: Trahe, Fiona, dev; +Cc: Thomas Monjalon, Mcnamara, John

On 6/18/2020 3:09 PM, Trahe, Fiona wrote:
> Hi all,
> 
> If there's a cryptodev API change planned for 20.11 (an ABI breakage), is it necessary
> to send a deprecation notice in 20.08?
> Or can this just be worked during the normal 20.11 patch review cycle?

As far as I understand it, it needs to follow the regular deprecation process:
a deprecation notice should be sent and merged (with 3 acks) in 20.08, so that
the API can be changed in 20.11.

It would be good to highlight this, perhaps in the next release status meeting:
there may be some ABI changes waiting for 20.11, and this release is the last
chance for them to send a deprecation notice.

> 
> Regards,
> Fiona
> 
> 
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
>> Sent: Thursday, June 18, 2020 11:26 AM
>> To: dev@dpdk.org
>> Cc: Thomas Monjalon <thomas@monjalon.net>
>> Subject: [dpdk-dev] DPDK Release Status Meeting 18/06/2020
>>
>> Minutes 18 June 2020
>> --------------------
>>
>> Agenda:
>> * Release Dates
>> * Subtrees
>> * LTS
>> * Opens
>>
>> Participants:
>> * Arm
>> * Debian/Microsoft
>> * Intel
>> * Marvell
>> * Mellanox
>> * NXP
>> * Red Hat
>>
>>
>> Release Dates
>> -------------
>>
>> * v20.08 dates:
>>   * Proposal/V1 deadline passed, it was on Friday 12 June 2020
>>   * -rc1:           Wednesday, 8 July   2020
>>   * -rc2:           Monday,   20 July   2020
>>   * Release:        Tuesday,   4 August 2020
>>
>>
>> Subtrees
>> --------
>>
>> * main
>>   * Need to update PMD docs to use meson instead of make
>>   * Pulling from next-net
>>   * Bitops series merged
>>   * Windows port progressing
>>   * If-proxy related, control path discussion not started
>>     * Need to have discussion for more generic control path
>>       * Telemetry, if-proxy and others can use this control path
>>     * Plan was to create a spec for the control path and if-proxy will comply with it
>>   * vfio token, new version is out with two acks
>>     * Marvell will test and send a tested-by tag
>>   * Roadmap is more complete in this release
>>     * Almost all major contributors shared the roadmap
>>       * 20.08 will be bigger than expected
>>     * Good indicator that shows project maturity is increasing
>>     * Will be good to get 20.11 roadmaps early
>>
>> * next-net
>>   * Pulled from vendor sub-trees for mlx & intel
>>     * brcm also has a big base code update not merged yet to vendor tree
>>   * Some driver patches merged
>>   * There are a few ethdev patches, discussions going on
>>
>> * next-crypto
>>   * Will start reviews next week
>>
>> * next-eventdev
>>   * There is a new PMD from Intel (DLB) which causes ABI breakage
>>     * There will be new versions to prevent ABI break
>>   * Will merge some patches next week
>>
>> * next-virtio
>>   * There are several series under review
>>   * Planning to send a pull request tomorrow
>>   * Big refactor on vdpa, there are API changes
>>   * Would like to get more reviews
>>     * Chenbo will support from Intel side
>>
>> * next-net-mlx
>>   * Some patches pulled to next-net, more expected for release
>>
>> * next-net-intel
>>   * Qi will take over the next-net-intel
>>
>> * next-net-mrvl
>>   * A few patches will be merged
>>     * Will check if mvneta & mvpp2 PMDs are still active
>>
>>
>> LTS
>> ---
>>
>> * v19.11.3-rc1 is out, please test
>>   * https://mails.dpdk.org/archives/dev/2020-June/169263.html
>>   * Test results from Intel, Mellanox, Red Hat & OvS
>>   * Waiting for results from Microsoft, expecting to have it today/tomorrow
>>   * Planning to release today/tomorrow after Microsoft test result
>>
>> * v18.11.9 in progress
>>   * Issues with gcc10 for KNI ethtool, which still exist in 18.11
>>   * -rc1 expected this week
>>
>>
>> Opens
>> -----
>>
>> * DPDK Userspace Summit, CFP is open
>>   * https://mails.dpdk.org/archives/announce/2020-April/000317.html
>>   * More to come related to the "DPDK Userspace Summit" soon
>>
>>
>>
>> DPDK Release Status Meetings
>> ============================
>>
>> The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
>> status of the master tree and sub-trees, and for project managers to track
>> progress or milestone dates.
>>
>> The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK
>>
>> If you wish to attend just send an email to
>> "John McNamara <john.mcnamara@intel.com>" for the invite.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] DPDK Release Status Meeting 18/06/2020
  2020-06-18 10:26  4% [dpdk-dev] DPDK Release Status Meeting 18/06/2020 Ferruh Yigit
@ 2020-06-18 14:09  3% ` Trahe, Fiona
  2020-06-18 15:26  3%   ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Trahe, Fiona @ 2020-06-18 14:09 UTC (permalink / raw)
  To: Yigit, Ferruh, dev; +Cc: Thomas Monjalon, Trahe, Fiona, Mcnamara, John

Hi all,

If there's a cryptodev API change planned for 20.11 (an ABI breakage), is it necessary
to send a deprecation notice in 20.08?
Or can this just be worked during the normal 20.11 patch review cycle?

Regards,
Fiona 


> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> Sent: Thursday, June 18, 2020 11:26 AM
> To: dev@dpdk.org
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Subject: [dpdk-dev] DPDK Release Status Meeting 18/06/2020
> 
> Minutes 18 June 2020
> --------------------
> 
> Agenda:
> * Release Dates
> * Subtrees
> * LTS
> * Opens
> 
> Participants:
> * Arm
> * Debian/Microsoft
> * Intel
> * Marvell
> * Mellanox
> * NXP
> * Red Hat
> 
> 
> Release Dates
> -------------
> 
> * v20.08 dates:
>   * Proposal/V1 deadline passed, it was on Friday 12 June 2020
>   * -rc1:           Wednesday, 8 July   2020
>   * -rc2:           Monday,   20 July   2020
>   * Release:        Tuesday,   4 August 2020
> 
> 
> Subtrees
> --------
> 
> * main
>   * Need to update PMD docs to use meson instead of make
>   * Pulling from next-net
>   * Bitops series merged
>   * Windows port progressing
>   * If-proxy related, control path discussion not started
>     * Need to have discussion for more generic control path
>       * Telemetry, if-proxy and others can use this control path
>     * Plan was to create a spec for the control path and if-proxy will comply with it
>   * vfio token, new version is out with two acks
>     * Marvell will test and send a tested-by tag
>   * Roadmap is more complete in this release
>     * Almost all major contributors shared the roadmap
>       * 20.08 will be bigger than expected
>     * Good indicator that shows project maturity is increasing
>     * Will be good to get 20.11 roadmaps early
> 
> * next-net
>   * Pulled from vendor sub-trees for mlx & intel
>     * brcm also has a big base code update not merged yet to vendor tree
>   * Some driver patches merged
>   * There are a few ethdev patches, discussions going on
> 
> * next-crypto
>   * Will start reviews next week
> 
> * next-eventdev
>   * There is a new PMD from Intel (DLB) which causes ABI breakage
>     * There will be new versions to prevent ABI break
>   * Will merge some patches next week
> 
> * next-virtio
>   * There are several series under review
>   * Planning to send a pull request tomorrow
>   * Big refactor on vdpa, there are API changes
>   * Would like to get more reviews
>     * Chenbo will support from Intel side
> 
> * next-net-mlx
>   * Some patches pulled to next-net, more expected for release
> 
> * next-net-intel
>   * Qi will take over the next-net-intel
> 
> * next-net-mrvl
>   * A few patches will be merged
>     * Will check if mvneta & mvpp2 PMDs are still active
> 
> 
> LTS
> ---
> 
> * v19.11.3-rc1 is out, please test
>   * https://mails.dpdk.org/archives/dev/2020-June/169263.html
>   * Test results from Intel, Mellanox, Red Hat & OvS
>   * Waiting for results from Microsoft, expecting to have it today/tomorrow
>   * Planning to release today/tomorrow after Microsoft test result
> 
> * v18.11.9 in progress
>   * Issues with gcc10 for KNI ethtool, which still exists in 18.11
>   * -rc1 expected this week
> 
> 
> Opens
> -----
> 
> * DPDK Userspace Summit, CFP is open
>   * https://mails.dpdk.org/archives/announce/2020-April/000317.html
>   * More to come related to the "DPDK Userspace Summit" soon
> 
> 
> 
> DPDK Release Status Meetings
> ============================
> 
> The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
> status of the master tree and sub-trees, and for project managers to track
> progress or milestone dates.
> 
> The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK
> 
> If you wish to attend, just send an email to
> "John McNamara <john.mcnamara@intel.com>" for the invite.


* [dpdk-dev] DPDK Release Status Meeting 18/06/2020
@ 2020-06-18 10:26  4% Ferruh Yigit
  2020-06-18 14:09  3% ` Trahe, Fiona
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-06-18 10:26 UTC (permalink / raw)
  To: dev; +Cc: Thomas Monjalon

Minutes 18 June 2020
--------------------

Agenda:
* Release Dates
* Subtrees
* LTS
* Opens

Participants:
* Arm
* Debian/Microsoft
* Intel
* Marvell
* Mellanox
* NXP
* Red Hat


Release Dates
-------------

* v20.08 dates:
  * Proposal/V1 deadline passed, it was on Friday 12 June 2020
  * -rc1:           Wednesday, 8 July   2020
  * -rc2:           Monday,   20 July   2020
  * Release:        Tuesday,   4 August 2020


Subtrees
--------

* main
  * Need to update PMD docs to use meson instead of make
  * Pulling from next-net
  * Bitops series merged
  * Windows port progressing
  * If-proxy related, control path discussion not started
    * Need to have discussion for more generic control path
      * Telemetry, if-proxy and others can use this control path
    * Plan was to create a spec for the control path and if-proxy will comply with it
  * vfio token, new version is out with two acks
    * Marvell will test and send a tested-by tag
  * Roadmap is more complete in this release
    * Almost all major contributors shared the roadmap
      * 20.08 will be bigger than expected
    * Good indicator that shows project maturity is increasing
    * Will be good to get 20.11 roadmaps early

* next-net
  * Pulled from vendor sub-trees for mlx & intel
    * brcm also has a big base code update not merged yet to vendor tree
  * Some driver patches merged
  * There are a few ethdev patches, discussions going on

* next-crypto
  * Will start reviews next week

* next-eventdev
  * There is a new PMD from Intel (DLB) which causes ABI breakage
    * There will be new versions to prevent ABI break
  * Will merge some patches next week

* next-virtio
  * There are several series under review
  * Planning to send a pull request tomorrow
  * Big refactor on vdpa, there are API changes
  * Would like to get more reviews
    * Chenbo will support from Intel side

* next-net-mlx
  * Some patches pulled to next-net, more expected for release

* next-net-intel
  * Qi will take over the next-net-intel

* next-net-mrvl
  * A few patches will be merged
    * Will check if mvneta & mvpp2 PMDs are still active


LTS
---

* v19.11.3-rc1 is out, please test
  * https://mails.dpdk.org/archives/dev/2020-June/169263.html
  * Test results from Intel, Mellanox, Red Hat & OvS
  * Waiting for results from Microsoft, expecting to have it today/tomorrow
  * Planning to release today/tomorrow after Microsoft test result

* v18.11.9 in progress
  * Issues with gcc10 for KNI ethtool, which still exists in 18.11
  * -rc1 expected this week


Opens
-----

* DPDK Userspace Summit, CFP is open
  * https://mails.dpdk.org/archives/announce/2020-April/000317.html
  * More to come related to the "DPDK Userspace Summit" soon



DPDK Release Status Meetings
============================

The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.

The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK

If you wish to attend, just send an email to
"John McNamara <john.mcnamara@intel.com>" for the invite.


* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-03 12:10  0%         ` Dekel Peled
@ 2020-06-18  6:58  0%           ` Dekel Peled
  0 siblings, 0 replies; 200+ results
From: Dekel Peled @ 2020-06-18  6:58 UTC (permalink / raw)
  To: Adrien Mazarguil, Ori Kam, Andrew Rybchenko
  Cc: ferruh.yigit, john.mcnamara, marko.kovacevic, Asaf Penso,
	Matan Azrad, Eli Britstein, dev, Ivan Malov

Hi,

Kind reminder: please respond to the recent correspondence so we can conclude this issue.

Regards,
Dekel

> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Wednesday, June 3, 2020 3:11 PM
> To: Ori Kam <orika@mellanox.com>; Adrien Mazarguil
> <adrien.mazarguil@6wind.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>;
> ferruh.yigit@intel.com; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; Asaf Penso <asafp@mellanox.com>; Matan
> Azrad <matan@mellanox.com>; Eli Britstein <elibr@mellanox.com>;
> dev@dpdk.org; Ivan Malov <Ivan.Malov@oktetlabs.ru>
> Subject: RE: [RFC] ethdev: add fragment attribute to IPv6 item
> 
> Hi, PSB.
> 
> > -----Original Message-----
> > From: Ori Kam <orika@mellanox.com>
> > Sent: Wednesday, June 3, 2020 11:16 AM
> > To: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Dekel Peled
> > <dekelp@mellanox.com>; ferruh.yigit@intel.com;
> > john.mcnamara@intel.com; marko.kovacevic@intel.com; Asaf Penso
> > <asafp@mellanox.com>; Matan Azrad <matan@mellanox.com>; Eli
> Britstein
> > <elibr@mellanox.com>; dev@dpdk.org; Ivan Malov
> > <Ivan.Malov@oktetlabs.ru>
> > Subject: RE: [RFC] ethdev: add fragment attribute to IPv6 item
> >
> > Hi Adrien,
> >
> > Great to hear from you again.
> >
> > > -----Original Message-----
> > > From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> > > Sent: Tuesday, June 2, 2020 10:04 PM
> > > To: Ori Kam <orika@mellanox.com>
> > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Dekel Peled
> > > <dekelp@mellanox.com>; ferruh.yigit@intel.com;
> > > john.mcnamara@intel.com; marko.kovacevic@intel.com; Asaf Penso
> > > <asafp@mellanox.com>; Matan Azrad <matan@mellanox.com>; Eli
> > Britstein
> > > <elibr@mellanox.com>; dev@dpdk.org; Ivan Malov
> > > <Ivan.Malov@oktetlabs.ru>
> > > Subject: Re: [RFC] ethdev: add fragment attribute to IPv6 item
> > >
> > > Hi Ori, Andrew, Delek,
> 
> It's Dekel, not Delek ;-)
> 
> > >
> > > (been a while eh?)
> > >
> > > On Tue, Jun 02, 2020 at 06:28:41PM +0000, Ori Kam wrote:
> > > > Hi Andrew,
> > > >
> > > > PSB,
> > > [...]
> > > > > > diff --git a/lib/librte_ethdev/rte_flow.h
> > > > > > b/lib/librte_ethdev/rte_flow.h index b0e4199..3bc8ce1 100644
> > > > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > > > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> > > > > >   */
> > > > > >  struct rte_flow_item_ipv6 {
> > > > > >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > > > > > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-
> > fragmented. */
> > > > > > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> > > > >
>>>>> The solution is simple, but hardly generic, and sets an example
>>>>> for future extensions. I doubt that it is the right way to go.
> > > > >
> > > > I agree with you that this is not the most generic way possible,
> > > > but the IPV6 extensions are very unique. So the solution is also unique.
> > > > In general, I'm always in favor of finding the most generic way,
> > > > but
> > > sometimes
> > > > it is better to keep things simple, and see how it goes.
> > >
> > > Same feeling here, it doesn't look right.
> > >
> > > > > Maybe we should add a 256-bit string with one bit for each IP
> > > > > protocol number and apply it to extension headers only?
> > > > > If bit A is set in the mask:
> > > > >  - if bit A is set in spec as well, an extension header with
> > > > >    IP protocol (1 << A) number must be present
> > > > >  - if bit A is clear in spec, an extension header with
> > > > >    IP protocol (1 << A) number must be absent
> > > > > If a bit is clear in the mask, the corresponding extension header
> > > > > may be present or absent (i.e. don't care).
> > > > >
> > > > There are only 12 possible extension headers and currently none of
> > > > them are supported in rte_flow. So adding logic to parse the 256
> > > > bits just to get a max of 12 possible values is overkill. Also, if
> > > > we disregard the case of the extension, the application must select
> > > > only one next proto. For example, the application can't select
> > > > udp + tcp. There is the option to add a flag for each of the
> > > > possible extensions, does it make more sense to you?
> > >
> > > Each of these extension headers has its own structure, we first need
> > > the ability to match them properly by adding the necessary pattern items.
> > >
> > > > > The RFC indirectly touches IPv6 proto (next header) matching
> > > > > logic.
> > > > >
> > > > > If logic used in ETH+VLAN is applied on IPv6 as well, it would
> > > > > make pattern specification and handling complicated. E.g.:
> > > > >   eth / ipv6 / udp / end
> > > > > should match UDP over IPv6 without any extension headers only.
> > > > >
> > > > The issue with VLAN I agree is different since by definition VLAN
> > > > is layer 2.5. We can add the same logic also to the VLAN case,
> > > > maybe it will be easier.
> > > > In any case, in your example above and according to the RFC we
> > > > will get all ipv6 udp traffic with and without extensions.
> > > >
> > > > And how to specify UDP over IPv6 regardless of extension headers?
> > > >
> > > > Please see above, the rule will be eth / ipv6 / udp.
> > > >
> > > > >   eth / ipv6 / ipv6_ext / udp / end with a convention that
> > > > > ipv6_ext is optional if spec and mask are NULL (or mask is empty).
> > > > >
> > > > I would guess that this flow should match all ipv6 that has one
> > > > ext and the
> > > next
> > > > proto is udp.
> > >
> > > In my opinion RTE_FLOW_ITEM_TYPE_IPV6_EXT is a bit useless on its
> own.
> > > It's only for matching packets that contain some kind of extension
> > > header, not a specific one, more about that below.
> > >
> > > > > I'm wondering if any driver treats it this way?
> > > > >
> > > > I'm not sure, we can support only the frag ext by default, but if
> > > > required we
> > > can support other
> > > > ext.
> > > >
> > > > > I agree that the problem really comes when we'd like to match
> > > > > IPv6 frags or, even worse, non-fragments.
> > > > >
> > > > > Two patterns for fragments:
> > > > >   eth / ipv6 (proto=FRAGMENT) / end
> > > > >   eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end
> > > > >
> > > > > Any sensible solution for not-fragments with any other extension
> > > > > headers?
> > > > >
> > > > The one proposed in this mail 😊
> > > >
> > > > > INVERT exists, but is hardly useful, since it simply says that
> > > > > packets which do not match the pattern without INVERT match the
> > > > > pattern with INVERT, and
> > > > >   invert / eth / ipv6 (proto=FRAGMENT) / end
> > > > > will match ARP, IPv4, IPv6 with an extension header before the
> > > > > fragment header, and so on.
> > > > >
> > > > I agree with you, INVERT in this case doesn't help.
> > > > We were considering adding some kind of NOT mask / item per item,
> > > > something along this line: the user requests IPv6 unfragmented UDP
> > > > packets. The flow would look something like this:
> > > > Eth / ipv6 / Not (Ipv6.proto = frag_proto) / udp
> > > > But it makes the rules much harder to use, and I don't think that
> > > > there is any HW that supports NOT, and adding such a feature to all
> > > > items is overkill.
> > > >
> > > >
> > > > > The bit string suggested above will allow to match:
> > > > >  - UDP over IPv6 with any extension headers:
> > > > >     eth / ipv6 (ext_hdrs mask empty) / udp / end
> > > > >  - UDP over IPv6 without any extension headers:
> > > > >     eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
> > > > >  - UDP over IPv6 without fragment header:
> > > > >     eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp
> > > > > / end
> > > > >  - UDP over IPv6 with fragment header
> > > > >     eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp
> > > > > / end
> > > > >
> > > > > where FRAGMENT is 1 << IPPROTO_FRAGMENT.
> > > > >
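For illustration, a minimal sketch of the matching semantics proposed above. The 256-bit set type and the helper names are hypothetical; nothing like them exists in rte_flow at the time of this discussion:

#include <stdint.h>
#include <netinet/in.h> /* IPPROTO_FRAGMENT */

/* Hypothetical 256-bit set: one bit per IP protocol number. */
struct ext_hdr_set {
	uint64_t w[4];
};

static inline void
ext_hdr_set_bit(struct ext_hdr_set *s, uint8_t proto)
{
	s->w[proto / 64] |= UINT64_C(1) << (proto % 64);
}

/*
 * "present" describes the extension headers actually found in the
 * packet. For every bit set in the mask the packet must agree with
 * the spec; bits clear in the mask are "don't care".
 */
static inline int
ext_hdrs_match(const struct ext_hdr_set *spec,
	       const struct ext_hdr_set *mask,
	       const struct ext_hdr_set *present)
{
	unsigned int i;

	for (i = 0; i < 4; i++)
		if ((spec->w[i] ^ present->w[i]) & mask->w[i])
			return 0;
	return 1;
}

Under these semantics, "UDP over IPv6 without fragment header" sets the IPPROTO_FRAGMENT bit in the mask and leaves it clear in the spec, exactly as in the examples above.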
> > > > Please see my response regarding this above.
> > > >
> > > > > Above I intentionally keep 'proto' unspecified in ipv6 since
> > > > > otherwise it would specify the next header after IPv6 header.
> > > > >
> > > > > Extension headers mask should be empty by default.
> > >
> > > This is a deliberate design choice/issue with rte_flow: an empty
> > > pattern matches everything; adding items only narrows the selection.
> > > As Andrew said there is currently no way to provide a specific item
> > > to reject, it can only be done globally on a pattern through INVERT
> > > that no
> > PMD implements so far.
> > >
> > > So we have two requirements here: the ability to specifically match
> > > IPv6 fragment headers and the ability to reject them.
> > >
> > > To match IPv6 fragment headers, we need a dedicated pattern item.
> > > The generic RTE_FLOW_ITEM_TYPE_IPV6_EXT is useless for that on its
> > > own, it must be completed with RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG
> and
> > associated
> > > object
> >
> > Yes, we must add EXT_FRAG to be able to match on the FRAG bits.
> >
> 
> Please see the previous RFC I sent.
> [RFC] ethdev: add IPv6 fragment extension header item
> http://mails.dpdk.org/archives/dev/2020-March/160255.html
> It is complemented by this RFC.
> 
> > > to match individual fields if needed (like all the other
> > > protocols/headers).
> > >
> > > Then to reject a pattern item... My preference goes to a new "NOT"
> > > meta item affecting the meaning of the item coming immediately after
> > > in the pattern list. That would be ultra generic, wouldn't break any
> > > ABI/API and like INVERT, wouldn't even require a new object
> > > associated
> > with it.
> > >
> > > To match UDPv6 traffic when there is no fragment header, one could
> > > then do something like:
> > >
> > >  eth / ipv6 / not / ipv6_ext_frag / udp
> > >
> > > PMD support would be trivial to implement (I'm sure!)
> > >
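For reference, the pattern sketched above would translate to something like the following; RTE_FLOW_ITEM_TYPE_NOT and RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG are the items proposed in this thread, not existing API:

#include <rte_flow.h>

/* eth / ipv6 / not / ipv6_ext_frag / udp */
static const struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
	{ .type = RTE_FLOW_ITEM_TYPE_NOT },           /* proposed meta item */
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG }, /* proposed frag item */
	{ .type = RTE_FLOW_ITEM_TYPE_UDP },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};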
> > I agree with you, as I said above. The issue is not the PMD; the issues are:
> > 1. Think about the rule you stated above: from a logic point of view there
> > is some contradiction, you are saying ipv6 next proto udp but you also
> > say not frag, and this logic applies only to the IPv6 ext.
> > 2. HW issue: I don't know of HW that knows how to support NOT on an item,
> > so adding something to all items for only one case is overkill.
> >
> >
> >
> > > We may later implement other kinds of "operator" items as Andrew
> > > suggested, for bit-wise stuff and so on. Let's keep adding features
> > > on a needed basis though.
> > >
> > > --
> > > Adrien Mazarguil
> > > 6WIND
> >
> > Best,
> > Ori


* Re: [dpdk-dev] [PATCH v2 1/4] devtools: shrink cross-compilation test definition
  2020-06-15 22:22  7%   ` [dpdk-dev] [PATCH v2 1/4] devtools: shrink cross-compilation test definition Thomas Monjalon
@ 2020-06-17 21:05  0%     ` David Christensen
  0 siblings, 0 replies; 200+ results
From: David Christensen @ 2020-06-17 21:05 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: david.marchand, bruce.richardson, dmitry.kozliuk

On 6/15/20 3:22 PM, Thomas Monjalon wrote:
> Each cross-compilation case needs to define the target compiler
> and the meson cross file.
> Given the compiler is already defined in the cross file,
> the latter is enough.
> 
> The function "build" is changed to accept a cross file as an alternative
> to the compiler name. In the case of a file (detected if readable),
> the compiler is extracted with sed and tr, and the option --cross-file
> is automatically added.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> v2: fix ABI check config (thanks David)

Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>


* Re: [dpdk-dev] [EXT] RE: [RFC] mbuf: accurate packet Tx scheduling
  2020-06-10 15:16  0%   ` Slava Ovsiienko
@ 2020-06-17 15:57  0%     ` Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2020-06-17 15:57 UTC (permalink / raw)
  To: Slava Ovsiienko
  Cc: dev, Thomas Monjalon, Matan Azrad, Raslan Darawsheh, Ori Kam,
	olivier.matz, Shahaf Shuler

On Wed, Jun 10, 2020 at 03:16:12PM +0000, Slava Ovsiienko wrote:

> External Email
> 
> ----------------------------------------------------------------------
> Hi, Harman
> 
> > -----Original Message-----
> > From: Harman Kalra <hkalra@marvell.com>
> > Sent: Wednesday, June 10, 2020 16:34
> > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Matan
> > Azrad <matan@mellanox.com>; Raslan Darawsheh
> > <rasland@mellanox.com>; Ori Kam <orika@mellanox.com>;
> > olivier.matz@6wind.com; Shahaf Shuler <shahafs@mellanox.com>
> > Subject: Re: [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
> > 
> > On Wed, Jun 10, 2020 at 06:38:05AM +0000, Viacheslav Ovsiienko wrote:
> > 
> > Hi Viacheslav,
> > 
> >    I have some queries below:
> > 
> > > There is a requirement on some networks for precise traffic timing
> > > management. The ability to send (and, generally speaking, receive)
> > > packets at a very precisely specified moment of time provides the
> > > opportunity to support connections with Time Division Multiplexing
> > > using a contemporary general purpose NIC without involving auxiliary
> > > hardware. For example, supporting the O-RAN Fronthaul interface is
> > > one of the promising features for potential usage of precise time
> > > management for egress packets.
> > >
> > > The main objective of this RFC is to specify the way applications can
> > > provide the moment of time at which the packet transmission must be
> > > started, and to describe preliminarily the support of this feature
> > > from the mlx5 PMD side.
> > >
> > > The new dynamic timestamp field is proposed, it provides some timing
> > > information, the units and time references (initial phase) are not
> > > explicitly defined but are maintained always the same for a given port.
> > > Some devices allow to query rte_eth_read_clock() that will return the
> > > current device timestamp. The dynamic timestamp flag tells whether the
> > > field contains actual timestamp value. For the packets being sent this
> > > value can be used by PMD to schedule packet sending.
> > >
> > > After PKT_RX_TIMESTAMP flag and fixed timestamp field deprecation and
> > > obsoleting, these dynamic flag and field will be used to manage the
> > > timestamps on receiving datapath as well.
> > >
> > > When the PMD sees "rte_dynfield_timestamp" set on the packet being
> > > sent, it tries to synchronize the time of the packet appearing on the
> > > wire with the specified packet timestamp. If the specified one is in
> > > the past it should be ignored; if one is in the distant future it
> > > should be capped with some reasonable value (in the range of seconds).
> > > These specific cases ("too late" and "distant future") can be
> > > optionally reported via device xstats to assist applications in
> > > detecting time-related problems.
> > >
> > > No packet reordering according to the timestamps is supposed, neither
> > > within a packet burst nor between packets; it is entirely the
> > > application's responsibility to generate packets and their timestamps
> > > in the desired order. The timestamps can be put only in the first
> > > packet in the burst, providing the entire burst scheduling.
> > 
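For illustration, a minimal sketch of how an application could use the proposed field through the existing mbuf dynamic field/flag API; the registration names reuse the ones introduced by this RFC:

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int ts_offset;    /* byte offset of the dynamic timestamp field */
static uint64_t ts_flag; /* ol_flags bit saying the field is valid */

static int
tx_timestamp_setup(void)
{
	static const struct rte_mbuf_dynfield field_desc = {
		.name = "rte_dynfield_timestamp",
		.size = sizeof(uint64_t),
		.align = __alignof__(uint64_t),
	};
	static const struct rte_mbuf_dynflag flag_desc = {
		.name = "rte_dynflag_timestamp",
	};
	int offset = rte_mbuf_dynfield_register(&field_desc);
	int bitnum = rte_mbuf_dynflag_register(&flag_desc);

	if (offset < 0 || bitnum < 0)
		return -1;
	ts_offset = offset;
	ts_flag = UINT64_C(1) << bitnum;
	return 0;
}

/* Ask the PMD to put the packet on the wire at device time "when". */
static void
tx_timestamp_set(struct rte_mbuf *m, uint64_t when)
{
	*RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = when;
	m->ol_flags |= ts_flag;
}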
> > Since it is the application's responsibility to take care of packet
> > reordering and many other parameters, why can't the application itself
> > take the responsibility for packet scheduling, i.e. the application can
> > hold for the required time before calling tx-burst? Why are we even
> > offloading this job to the PMD?
> > 
> - The scheduling is required to be very precise, within hundred(s) of nanoseconds.
> - It saves CPU cycles. The application should just prepare the packets, put in the
>   desired timestamps and call tx_burst(). A "shut-n-forget" approach.
> 
> A SW approach is potentially possible, the application can hold the time and schedule packets itself.
> But... Can we guarantee a stable delay between the tx_burst call and data on the wire?
> Should we waste CPU cycles waiting for the desired moment of time? Can we guarantee
> stable interrupt latency if we choose an interrupt-based scheduling approach?
> 
> This RFC splits the responsibility - the application should prepare the data and specify
> when it desires to send; the rest is on the PMD.

I agree with the fact that we cannot guarantee the delay between the tx
burst call and data on the wire, hence the PMD should take care of it.
Even if the PMD is holding, it is a waste of CPU cycles, or if we set up
an alarm then interrupt latency might also be a concern for achieving
precise timing. So how are you planning to address both of the above
issues in the PMD?

>  
> > >
> > > The PMD reports the ability to synchronize packet sending on a
> > > timestamp with a new offload flag:
> > >
> > > This is a palliative and is going to be replaced with a new eth_dev
> > > API for reporting/managing the supported dynamic flags and their
> > > related features. That API would break ABI compatibility and can't be
> > > introduced at the moment, so it is postponed to 20.11.
> > >
> > > For testing purposes it is proposed to update the testpmd "txonly"
> > > forwarding mode routine. With this update, the testpmd application
> > > generates the packets and sets the dynamic timestamps according to the
> > > specified time pattern if it sees that "rte_dynfield_timestamp" is registered.
> > 
> > So what I am understanding here is "rte_dynfield_timestamp" will provide
> > information about three parameters:
> > - timestamp at which TX should start
> > - intra packet gap
> > - intra burst gap.
> > 
> > If it's about the "intra packet gap" then the PMD can take care of it, but if it
> > is about the intra burst gap, the application can take care of it.
> 
> Not sure - the intra-burst gap might be pretty small.
> It is supposed to handle intra-burst in the same way - by specifying
> the timestamps. Waiting is supposed to be implemented on tx_burst() retry.
> Prepare the packets with timestamps, tx_burst - if not all packets are sent -
> it means the queue is waiting for the schedule, retry with the remaining packets.
> As an option - we can implement the intra-burst wait based on rte_eth_read_clock().
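A rough sketch of that retry scheme, assuming the PMD simply stops accepting packets whose scheduled time has not come yet:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Keep offering the remainder until the whole scheduled burst is taken. */
static void
tx_burst_scheduled(uint16_t port_id, uint16_t queue_id,
		   struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t done = 0;

	while (done < nb_pkts)
		done += rte_eth_tx_burst(port_id, queue_id,
					 pkts + done, nb_pkts - done);
}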

Yeah, I think the app can make use of rte_eth_read_clock() to implement
the intra-burst gap.
But my actual doubt was: what information will the app provide as part
of "rte_dynfield_timestamp"? One, I understand, will be the timestamp
at which packets should be sent out. What else? The intra-packet gap?


Thanks
Harman

> 
> > > The new testpmd command is proposed to configure sending pattern:
> > >
> > > set tx_times <intra_gap>,<burst_gap>
> > >
> > > <intra_gap> - the delay between the packets within the burst
> > >               specified in the device clock units. The number
> > >               of packets in the burst is defined by txburst parameter
> > >
> > > <burst_gap> - the delay between the bursts in the device clock units
> > >
> > > As a result, the bursts of packets will be transmitted with specific
> > > delays between the packets within the burst and a specific delay between
> > > the bursts. The rte_eth_get_clock is supposed to be engaged to get the
> > 
> > I think here you mean "rte_eth_read_clock".
> Yes, exactly. Thank you for the correction.
> 
> With best regards, Slava
> 
> > 
> > 
> > Thanks
> > Harman
> > 
> > > current device clock value and provide the reference for the timestamps.
> > >
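To make the pattern concrete, a sketch of how txonly mode could derive the timestamps; ts_offset and ts_flag are assumed to come from a dynamic field registration as sketched earlier, and intra_gap/burst_gap from the proposed command:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static void
set_burst_times(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkt,
		uint64_t intra_gap, uint64_t burst_gap)
{
	uint64_t now;
	uint16_t i;

	/* the device clock provides the reference for the timestamps */
	if (rte_eth_read_clock(port_id, &now) != 0)
		return;

	for (i = 0; i < nb_pkt; i++) {
		uint64_t t = now + burst_gap + (uint64_t)i * intra_gap;

		*RTE_MBUF_DYNFIELD(pkts[i], ts_offset, uint64_t *) = t;
		pkts[i]->ol_flags |= ts_flag;
	}
}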
> > > Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> > > ---
> > >  lib/librte_ethdev/rte_ethdev.h |  4 ++++
> > > lib/librte_mbuf/rte_mbuf_dyn.h | 16 ++++++++++++++++
> > >  2 files changed, 20 insertions(+)
> > >
> > > diff --git a/lib/librte_ethdev/rte_ethdev.h
> > > b/lib/librte_ethdev/rte_ethdev.h index a49242b..6f6454c 100644
> > > --- a/lib/librte_ethdev/rte_ethdev.h
> > > +++ b/lib/librte_ethdev/rte_ethdev.h
> > > @@ -1178,6 +1178,10 @@ struct rte_eth_conf {
> > >  /** Device supports outer UDP checksum */  #define
> > > DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
> > >
> > > +/** Device supports send on timestamp */ #define
> > > +DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
> > > +
> > > +
> > >  #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
> > /**<
> > > Device supports Rx queue setup after device started*/  #define
> > > RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002 diff --git
> > > a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> > > index 96c3631..fb5477c 100644
> > > --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > > @@ -250,4 +250,20 @@ int rte_mbuf_dynflag_lookup(const char *name,
> > > #define RTE_MBUF_DYNFIELD_METADATA_NAME
> > "rte_flow_dynfield_metadata"
> > >  #define RTE_MBUF_DYNFLAG_METADATA_NAME
> > "rte_flow_dynflag_metadata"
> > >
> > > +/*
> > > + * The timestamp dynamic field provides some timing information, the
> > > + * units and time references (initial phase) are not explicitly
> > > +defined
> > > + * but are maintained always the same for a given port. Some devices
> > > +allow
> > > + * to query rte_eth_read_clock() that will return the current device
> > > + * timestamp. The dynamic timestamp flag tells whether the field
> > > +contains
> > > + * actual timestamp value. For the packets being sent this value can
> > > +be
> > > + * used by PMD to schedule packet sending.
> > > + *
> > > + * After PKT_RX_TIMESTAMP flag and fixed timestamp field deprecation
> > > + * and obsoleting, these dynamic flag and field will be used to
> > > +manage
> > > + * the timestamps on receiving datapath as well.
> > > + */
> > > +#define RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > "rte_dynfield_timestamp"
> > > +#define RTE_MBUF_DYNFLAG_TIMESTAMP_NAME
> > "rte_dynflag_timestamp"
> > > +
> > >  #endif
> > > --
> > > 1.8.3.1
> > >


* [dpdk-dev] [PATCH v2 8/9] devtools: support python3 only
  @ 2020-06-17 15:10  4%   ` Louise Kilheeney
  0 siblings, 0 replies; 200+ results
From: Louise Kilheeney @ 2020-06-17 15:10 UTC (permalink / raw)
  To: dev
  Cc: robin.jarry, anatoly.burakov, bruce.richardson, Louise Kilheeney,
	Neil Horman, Ray Kinsella

Changed the script to explicitly use python3 only, to avoid
maintaining python 2.

Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Ray Kinsella <mdr@ashroe.eu>

Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/update_version_map_abi.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/devtools/update_version_map_abi.py b/devtools/update_version_map_abi.py
index e2104e61e..830e6c58c 100755
--- a/devtools/update_version_map_abi.py
+++ b/devtools/update_version_map_abi.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2019 Intel Corporation
 
@@ -9,7 +9,6 @@
 from the devtools/update-abi.sh utility.
 """
 
-from __future__ import print_function
 import argparse
 import sys
 import re
-- 
2.17.1



* [dpdk-dev] [PATCH v16 0/2] support for VFIO-PCI VF token interface
    2020-05-28  1:22  4% ` [dpdk-dev] [PATCH v14 0/2] support for VFIO-PCI VF token interface Haiyue Wang
  2020-05-29  1:37  4% ` [dpdk-dev] [PATCH v15 " Haiyue Wang
@ 2020-06-17  6:33  4% ` Haiyue Wang
  2 siblings, 0 replies; 200+ results
From: Haiyue Wang @ 2020-06-17  6:33 UTC (permalink / raw)
  To: dev, anatoly.burakov, thomas, jerinj, david.marchand, arybchenko,
	xiaolong.ye
  Cc: Haiyue Wang

v16: Rebase the patch for 20.08 release note.

v15: Add the missed EXPERIMENTAL warning for API doxgen.

v14: Rebase the patch for 20.08 release note.

v13: Rename the EAL get VF token function, and leave the freebsd type as empty.

v12: support to vfio devices with VF token and no token.

v11: Use the EAL parameter to pass the VF token, so that not every PCI
     device needs to be specified with this token. Also no ABI issue
     now.

v10: Use the __rte_internal to mark the internal API changing.

v9: Rewrite the document.

v8: Update the document.

v7: Add the Fixes tag in uuid, the release note and help
    document.

v6: Drop the Fixes tag in uuid, since the file has been
    moved to another place, not suitable to apply on stable.
    And this is not a bug, just some kind of enhancement.

v5: 1. Add the VF token parse error handling.
    2. Split into two patches for different logic module.
    3. Add more comments into the code for explaining the design.
    4. Drop the ABI change workaround, this patch set focuses on code review.

v4: 1. Ignore rte_vfio_setup_device ABI check since it is
       for Linux driver use.

v3: Fix the Travis build failed:
           (1). rte_uuid.h:97:55: error: unknown type name ‘size_t’
           (2). rte_uuid.h:58:2: error: implicit declaration of function ‘memcpy’

v2: Fix the FreeBSD build error.

v1: Update the commit message.

RFC v2:
         Based on Vamsi's RFC v1, and Alex's patch for Qemu
        [https://lore.kernel.org/lkml/20200204161737.34696b91@w520.home/]: 
       Use the devarg to pass-down the VF token.

RFC v1: https://patchwork.dpdk.org/patch/66281/ by Vamsi.

Haiyue Wang (2):
  eal: add uuid dependent header files explicitly
  eal: support for VFIO-PCI VF token

 doc/guides/linux_gsg/linux_drivers.rst        | 35 ++++++++++++++++++-
 doc/guides/linux_gsg/linux_eal_parameters.rst |  4 +++
 doc/guides/rel_notes/release_20_08.rst        |  5 +++
 lib/librte_eal/common/eal_common_options.c    |  2 ++
 lib/librte_eal/common/eal_internal_cfg.h      |  2 ++
 lib/librte_eal/common/eal_options.h           |  2 ++
 lib/librte_eal/freebsd/eal.c                  |  4 +++
 lib/librte_eal/include/rte_eal.h              | 15 ++++++++
 lib/librte_eal/include/rte_uuid.h             |  2 ++
 lib/librte_eal/linux/eal.c                    | 29 +++++++++++++++
 lib/librte_eal/linux/eal_vfio.c               | 19 ++++++++++
 lib/librte_eal/rte_eal_version.map            |  1 +
 12 files changed, 119 insertions(+), 1 deletion(-)
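For context, usage boils down to passing one UUID to EAL so that all vfio-pci devices of the process share it. A hedged sketch follows; the --vfio-vf-token option name is the one added by this series, and the UUID value is an arbitrary example:

#include <rte_eal.h>

int
main(int argc, char **argv)
{
	char *eal_argv[] = {
		argv[0],
		"--vfio-vf-token=14d63f20-8445-11ea-8900-1f9ce7d5650d",
	};

	(void)argc;
	if (rte_eal_init(2, eal_argv) < 0)
		return -1;
	/* PF and VF devices bound to vfio-pci now probe with the token */
	return 0;
}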

-- 
2.27.0



* Re: [dpdk-dev] Aligning DPDK Link bonding with current standards terminology
  2020-06-16 15:45  3%     ` Stephen Hemminger
@ 2020-06-16 20:27  3%       ` Chas Williams
  0 siblings, 0 replies; 200+ results
From: Chas Williams @ 2020-06-16 20:27 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev



On 6/16/20 11:45 AM, Stephen Hemminger wrote:
 > On Tue, 16 Jun 2020 09:52:01 -0400
 > Chas Williams <3chas3@gmail.com> wrote:
 >
 >> On 6/16/20 7:48 AM, Jay Rolette wrote:
 >>   > On Mon, Jun 15, 2020 at 5:52 PM Stephen Hemminger <
 >>   > stephen@networkplumber.org> wrote:
 >>   >
 >>   >> I am disturbed by the widespread use of master/slave in Ethernet
 >> bonding.
 >>   >> Asked the current IEEE chairs and it looks like it is already fixed
 >>   >> "upstream".
 >>   >>
 >>   >> The proper terminology is for Ethernet link aggregation in the
 >>   >> current standard 802.1AX 2020 revision (pay walled) for the parts
parts
 >>   >> formerly known as master and slave is now "Protocol Parser" and
 >> "Protocol
 >>   >> multiplexer".
 >>   >>
 >>   >> Also it is not called bonding anywhere; it uses LACP only.
 >>   >>
 >>   >
 >>   > LACP is only 1 of 5 bonding modes.
 >>   >
 >>   >
 >>   >> Given the large scope of the name changes, maybe it would be best to
best to
 >> just
 >>   >> convert the names
 >>   >> all of rte_eth_bond to rte_eth_lacp and fix the master/slave
 >> references at
 >>   >> the same time.
 >>   >>
 >>   >
 >>   > Why rename rte_eth_bond at all?
 >>
 >> If there is a strong desire to rename the PMD, I suggest using link
 >> aggregation group (LAG/lag) since that is a more accurate 
description of
 >> this feature. That's the terminology used in 802.1AX. This would make
 >> some of the internal name changes more natural as well.
 >
 > The words that matter most are getting rid of master/slave and
 > blacklist/whitelist.
 > The worst is "bonded slave". Luckily the master and slave are only
 > used internally

After looking at the specification, I might suggest renaming slaves to
links. Member would likely be fine as well if one is concerned about
confusion with link status. As for the container name, aggregator or
aggregate would be fine. aggport would be closer to the specification.

 > in the driver so no visible API/ABI with those terms.

I am not entirely convinced of that.

% egrep -i 'master|slave' rte_pmd_bond_version.map
	rte_eth_bond_8023ad_slave_info;
	rte_eth_bond_active_slaves_get;
	rte_eth_bond_slave_add;
	rte_eth_bond_slave_remove;
	rte_eth_bond_slaves_get;
%

 > One option would be to substitute slave with multiplexer in the comments
 > and a shorter term like mux in the variables. And replace master with
 > aggregator.

There are very few master references in the bonding PMD. Its usage
appears to be as an adjective and easily removed. The "master" port is
typically called bonded, which isn't fantastic since bond is often used
in a slightly different context.

 > You are right, the standard name is LACP; other names seem to be
 > viewed as early
 > historical alternatives:
 > 	Cisco - Etherchannel
 > 	Juniper - Aggregated Ethernet
 > 	Others - Multi-link
 > 	BSD - lagg
 > 	Linux bonding
 > 	Solaris aggr
 >
 > The point of this thread is to get consensus about best future naming.
 >


* Re: [dpdk-dev] [PATCH v3 0/4] Enforce checking on flag values in API's
  @ 2020-06-16 15:47  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-06-16 15:47 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Bruce Richardson, konstantin.ananyev

28/04/2020 12:28, Bruce Richardson:
> On Mon, Apr 27, 2020 at 04:16:21PM -0700, Stephen Hemminger wrote:
> > The DPDK APIs are lax about checking for undefined flag values.
> > This makes it impossible to add new bits to existing APIs
> > without causing ABI breakage. This means we end up doing unnecessary
> > symbol versioning just to work around applications that might
> > pass in naughty bits.
> > 
> > This is the DPDK analog of the Linux kernel openat() problem.
> > The openat API was added, but since the kernel did not check flags,
> > another syscall, openat2(), was necessary before the flags could
> > be used.
> > 
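The idea, in a minimal sketch (the names here are hypothetical; the actual patches derive the masks from each library's existing flag defines):

#include <errno.h>
#include <rte_errno.h>

/* only these bits are defined today; everything else stays reserved */
#define MY_OBJ_F_FEATURE_A (1u << 0)
#define MY_OBJ_F_FEATURE_B (1u << 1)
#define MY_OBJ_F_MASK (MY_OBJ_F_FEATURE_A | MY_OBJ_F_FEATURE_B)

struct my_obj; /* opaque */

struct my_obj *
my_obj_create(unsigned int flags)
{
	/* reject unknown bits now, so a later release can give them a
	 * meaning without silently changing old applications */
	if (flags & ~MY_OBJ_F_MASK) {
		rte_errno = EINVAL;
		return NULL;
	}
	return NULL; /* normal creation path elided in this sketch */
}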
> > v3 - define mask based on existing defines for ring and hash
> > 
> > Stephen Hemminger (4):
> >   ring: future proof flag settings
> >   hash: check flags on creation
> >   stack: check flags on creation
> >   cfgfile: check flags value
> > 
> I think this is a good idea to do in DPDK
> 
> Series-acked-by: Bruce Richardson <bruce.richardson@intel.com>

Applied, thanks





* Re: [dpdk-dev] Aligning DPDK Link bonding with current standards terminology
  @ 2020-06-16 15:45  3%     ` Stephen Hemminger
  2020-06-16 20:27  3%       ` Chas Williams
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2020-06-16 15:45 UTC (permalink / raw)
  To: Chas Williams; +Cc: dev

On Tue, 16 Jun 2020 09:52:01 -0400
Chas Williams <3chas3@gmail.com> wrote:

> On 6/16/20 7:48 AM, Jay Rolette wrote:
>  > On Mon, Jun 15, 2020 at 5:52 PM Stephen Hemminger <  
>  > stephen@networkplumber.org> wrote:  
>  >  
>  >> I am disturbed by the widespread use of master/slave in Ethernet
> bonding.
>  >> Asked the current IEEE chairs and it looks like it is already fixed
>  >> "upstream".
>  >>
>  >> The proper terminology is for Ethernet link aggregation in the
>  >> current standard 802.1AX 2020 revision (pay walled) for the parts
>  >> formerly known as master and slave are now "Protocol Parser" and
> "Protocol
>  >> multiplexer".
>  >>
>  >> Also it is not called bonding anywhere; it uses LACP only.
>  >>  
>  >
>  > LACP is only 1 of 5 bonding modes.
>  >
>  >  
>  >> Given the large scope of the name changes, maybe it would be best to
> just
>  >> convert the names
>  >> all of rte_eth_bond to rte_eth_lacp and fix the master/slave   
> references at
>  >> the same time.
>  >>  
>  >
>  > Why rename rte_eth_bond at all?  
> 
> If there is a strong desire to rename the PMD, I suggest using link
> aggregation group (LAG/lag) since that is a more accurate description of
> this feature. That's the terminology used in 802.1AX. This would make
> some of the internal name changes more natural as well.

The words that matter most are getting rid of master/slave and blacklist/whitelist.
The worst is "bonded slave". Luckily the master and slave are only used internally
in the driver so no visible API/ABI with those terms.

One option would be to substitute slave with multiplexer in the comments
and a shorter term like mux in the variables. And replace master with aggregator.


You are right, the standard name is LACP; other names seem to be viewed as early
historical alternatives:
	Cisco - Etherchannel
	Juniper - Aggregated Ethernet
	Others - Multi-link
	BSD - lagg
	Linux bonding
	Solaris aggr

The point of this thread is to get consensus about best future naming.



* Re: [dpdk-dev] 19.11.3 patches review and test
  2020-06-03 19:43  3% [dpdk-dev] 19.11.3 patches review and test luca.boccassi
  2020-06-10  7:19  0% ` Yu, PingX
  2020-06-15  2:05  3% ` Pei Zhang
@ 2020-06-16 14:20  0% ` Govindharajan, Hariprasad
  2020-06-18 18:11  3% ` [dpdk-dev] [EXTERNAL] " Abhishek Marathe
  3 siblings, 0 replies; 200+ results
From: Govindharajan, Hariprasad @ 2020-06-16 14:20 UTC (permalink / raw)
  To: luca.boccassi, stable
  Cc: dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani, Walker,
	Benjamin, David Christensen, Hemant Agrawal, Stokes, Ian,
	Jerin Jacob, Mcnamara, John, Ju-Hyoung Lee, Kevin Traynor,
	Pei Zhang, Yu, PingX, Xu, Qian Q, Raslan Darawsheh,
	Thomas Monjalon, Peng, Yuan, Chen, Zhaoyan



 
> Hi all,
> 
> Here is a list of patches targeted for stable release 19.11.3.
> 
> The planned date for the final release is the 17th of June.
> 
> Please help with testing and validation of your use cases and report any
> issues/results with reply-all to this mail. For the final release the fixes and
> reported validations will be added to the release notes.
> 
> A release candidate tarball can be found at:
> 
>     https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1
> 
> These patches are located at branch 19.11 of dpdk-stable repo:
>     https://dpdk.org/browse/dpdk-stable/
> 
> Thanks.
> 
> Luca Boccassi
[Govindharajan, Hariprasad] Hi Luca,

The following performance and functional tests were carried out with
ixgbe, i40e, ice and vhost devices:

DPDK 19.11.3 RC1 with OvS 2.13.0 and Master

[IS] no need to specify OVS 2.13.0, you can just say OVS branch 2.13 and OVS master.

P2P
PVP
PVPV
PVVP
Multi-queue RSS 
vHost reconnect
jumbo frames 1500, 6000, 9702 

Regards
G Hariprasad
> 
> ---
> Adam Dybkowski (5):
>       cryptodev: fix missing device id range checking
>       common/qat: fix GEN3 marketing name
>       app/crypto-perf: fix display of sample test vector
>       crypto/qat: support plain SHA1..SHA512 hashes
>       cryptodev: fix SHA-1 digest enum comment
> 
> Ajit Khaparde (3):
>       net/bnxt: fix FW version query
>       net/bnxt: fix error log for command timeout
>       net/bnxt: fix using RSS config struct
> 
> Akhil Goyal (1):
>       ipsec: fix build dependency on hash lib
> 
> Alex Kiselev (1):
>       lpm6: fix size of tbl8 group
> 
> Alex Marginean (1):
>       net/enetc: fix Rx lock-up
> 
> Alexander Kozyrev (8):
>       net/mlx5: reduce Tx completion index memory loads
>       net/mlx5: add device parameter for MPRQ stride size
>       net/mlx5: enable MPRQ multi-stride operations
>       net/mlx5: add multi-segment packets in MPRQ mode
>       net/mlx5: set dynamic flow metadata in Rx queues
>       net/mlx5: improve logging of MPRQ selection
>       net/mlx5: fix assert in dynamic metadata handling
>       net/mlx5: fix Tx queue release debug log timing
> 
> Alvin Zhang (2):
>       net/iavf: fix link speed
>       net/e1000: fix port hotplug for multi-process
> 
> Amit Gupta (1):
>       net/octeontx: fix meson build for disabled drivers
> 
> Anatoly Burakov (1):
>       mem: preallocate VA space in no-huge mode
> 
> Andrew Rybchenko (4):
>       net/sfc: fix reported promiscuous/multicast mode
>       net/sfc/base: use simpler EF10 family conditional check
>       net/sfc/base: use simpler EF10 family run-time checks
>       net/sfc/base: fix build when EVB is enabled
> 
> Andy Pei (1):
>       net/ipn3ke: use control thread to check link status
> 
> Ankur Dwivedi (1):
>       net/octeontx2: fix buffer size assignment
> 
> Apeksha Gupta (2):
>       bus/fslmc: fix dereferencing null pointer
>       test/crypto: fix statistics case
> 
> Archana Muniganti (1):
>       examples/fips_validation: fix parsing of algorithms
> 
> Arek Kusztal (1):
>       crypto/qat: fix cipher descriptor for ZUC and SNOW
> 
> Asaf Penso (2):
>       net/mlx5: fix call to modify action without init item
>       net/mlx5: fix assert in doorbell lookup
> 
> Ashish Gupta (1):
>       net/octeontx2: fix link information for loopback port
> 
> Asim Jamshed (1):
>       fib: fix headers for C++ support
> 
> Bernard Iremonger (1):
>       net/i40e: fix flow director initialisation
> 
> Bing Zhao (6):
>       net/mlx5: fix header modify action validation
>       net/mlx5: fix actions validation on root table
>       net/mlx5: fix assert in modify converting
>       mk: fix static linkage of mlx dependency
>       mem: fix overflow on allocation
>       net/mlx5: fix doorbell bitmap management offsets
> 
> Bruce Richardson (3):
>       pci: remove unneeded includes in public header file
>       pci: fix build on FreeBSD
>       drivers: fix log type variables for -fno-common
> 
> Cheng Peng (1):
>       net/iavf: fix stats query error code
> 
> Chengchang Tang (3):
>       net/hns3: fix promiscuous mode for PF
>       net/hns3: fix default VLAN filter configuration for PF
>       net/hns3: fix VLAN filter when setting promisucous mode
> 
> Chengwen Feng (7):
>       net/hns3: fix packets offload features flags in Rx
>       net/hns3: fix default error code of command interface
>       net/hns3: fix crash when flushing RSS flow rules with FLR
>       net/hns3: fix return value of setting VLAN offload
>       net/hns3: clear residual flow rules on init
>       net/hns3: fix Rx interrupt after reset
>       net/hns3: replace memory barrier with data dependency order
> 
> Ciara Power (1):
>       telemetry: fix port stats retrieval
> 
> Darek Stojaczyk (1):
>       pci: accept 32-bit domain numbers
> 
> David Christensen (2):
>       pci: fix build on ppc
>       eal/ppc: fix build with gcc 9.3
> 
> David Marchand (5):
>       mem: mark pages as not accessed when reserving VA
>       test: load drivers when required
>       eal: fix typo in endian conversion macros
>       remove references to private PCI probe function
>       doc: prefer https when pointing to dpdk.org
> 
> Dekel Peled (7):
>       net/mlx5: fix mask used for IPv6 item validation
>       net/mlx5: fix CVLAN tag set in IP item translation
>       net/mlx5: update VLAN and encap actions validation
>       net/mlx5: fix match on empty VLAN item in DV mode
>       common/mlx5: fix umem buffer alignment
>       net/mlx5: fix VLAN flow action with wildcard VLAN item
>       net/mlx5: fix RSS key copy to TIR context
> 
> Dmitry Kozlyuk (2):
>       build: fix linker warnings with clang on Windows
>       build: support MinGW-w64 with Meson
> 
> Eduard Serra (1):
>       net/vmxnet3: fix RSS setting on v4
> 
> Eugeny Parshutin (1):
>       ethdev: fix build when vtune profiling is on
> 
> Fady Bader (1):
>       mempool: remove inline functions from export list
> 
> Fan Zhang (1):
>       vhost/crypto: add missing user protocol flag
> 
> Ferruh Yigit (7):
>       net/nfp: fix log format specifiers
>       net/null: fix secondary burst function selection
>       net/null: remove redundant check
>       mempool/octeontx2: fix build for gcc O1 optimization
>       net/ena: fix build for O1 optimization
>       event/octeontx2: fix build for O1 optimization
>       examples/kni: fix crash during MTU set
> 
> Gaetan Rivet (5):
>       doc: fix number of failsafe sub-devices
>       net/ring: fix device pointer on allocation
>       pci: reject negative values in PCI id
>       doc: fix typos in ABI policy
>       kvargs: fix strcmp helper documentation
> 
> Gavin Hu (2):
>       net/i40e: relax barrier in Tx
>       net/i40e: relax barrier in Tx for NEON
> 
> Guinan Sun (2):
>       net/ixgbe: fix statistics in flow control mode
>       net/ixgbe: check driver type in MACsec API
> 
> Haifeng Lin (1):
>       eal/arm64: fix precise TSC
> 
> Haiyue Wang (1):
>       net/ice/base: check memory pointer before copying
> 
> Hao Chen (1):
>       net/hns3: support Rx interrupt
> 
> Harry van Haaren (3):
>       service: fix crash on exit
>       examples/eventdev: fix crash on exit
>       test/flow_classify: enable multi-sockets system
> 
> Hemant Agrawal (3):
>       drivers: add crypto as dependency for event drivers
>       bus/fslmc: fix size of qman fq descriptor
>       mempool/dpaa2: install missing header with meson
> 
> Honnappa Nagarahalli (3):
>       timer: protect initialization with lock
>       service: fix race condition for MT unsafe service
>       service: fix identification of service running on other lcore
> 
> Hyong Youb Kim (1):
>       net/enic: fix flow action reordering
> 
> Igor Chauskin (2):
>       net/ena/base: make allocation macros thread-safe
>       net/ena/base: prevent allocation of zero sized memory
> 
> Igor Romanov (9):
>       net/sfc: fix initialization error path
>       net/sfc: fix Rx queue start failure path
>       net/sfc: fix promiscuous and allmulticast toggles errors
>       net/sfc: set priority of created filters to manual
>       net/sfc/base: reduce filter priorities to implemented only
>       net/sfc/base: reject automatic filter creation by users
>       net/sfc/base: refactor filter lookup loop in EF10
>       net/sfc/base: handle manual and auto filter clashes in EF10
>       net/sfc/base: fix manual filter delete in EF10
> 
> Itsuro Oda (2):
>       net/vhost: fix potential memory leak on close
>       vhost: make IOTLB cache name unique among processes
> 
> Ivan Dyukov (3):
>       net/virtio-user: fix devargs parsing
>       app: remove extra new line after link duplex
>       examples: remove extra new line after link duplex
> 
> Jasvinder Singh (3):
>       net/softnic: fix memory leak for thread
>       net/softnic: fix resource leak for pipeline
>       examples/ip_pipeline: remove check of null response
> 
> Jeff Guo (3):
>       net/i40e: fix setting L2TAG
>       net/iavf: fix setting L2TAG
>       net/ice: fix setting L2TAG
> 
> Jiawei Wang (1):
>       net/mlx5: fix imissed counter overflow
> 
> Jim Harris (1):
>       contigmem: cleanup properly when load fails
> 
> Jun Yang (1):
>       net/dpaa2: fix congestion ID for multiple traffic classes
> 
> Junyu Jiang (4):
>       examples/vmdq: fix output of pools/queues
>       examples/vmdq: fix RSS configuration
>       net/ice: fix RSS advanced rule
>       net/ice: fix crash in switch filter
> 
> Juraj Linkeš (1):
>       ci: fix telemetry dependency in Travis
> 
> Július Milan (1):
>       net/memif: fix init when already connected
> 
> Kalesh AP (9):
>       net/bnxt: fix HWRM command during FW reset
>       net/bnxt: use true/false for bool types
>       net/bnxt: fix port start failure handling
>       net/bnxt: fix VLAN add when port is stopped
>       net/bnxt: fix VNIC Rx queue count on VNIC free
>       net/bnxt: fix number of TQM ring
>       net/bnxt: fix TQM ring context memory size
>       app/testpmd: fix memory failure handling for i40e DDP
>       net/bnxt: fix storing MAC address twice
> 
> Kevin Traynor (9):
>       net/hinic: fix snprintf length of cable info
>       net/hinic: fix repeating cable log and length check
>       net/avp: fix gcc 10 maybe-uninitialized warning
>       examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
>       eal/x86: ignore gcc 10 stringop-overflow warnings
>       net/mlx5: fix gcc 10 enum-conversion warning
>       crypto/kasumi: fix extern declaration
>       drivers/crypto: disable gcc 10 no-common errors
>       build: disable gcc 10 zero-length-bounds warning
> 
> Konstantin Ananyev (1):
>       security: fix crash at accessing non-implemented ops
> 
> Lijun Ou (4):
>       net/hns3: fix configuring RSS hash when rules are flushed
>       net/hns3: add RSS hash offload to capabilities
>       net/hns3: fix RSS key length
>       net/hns3: fix RSS indirection table configuration
> 
> Linsi Yuan (1):
>       net/bnxt: fix possible stack smashing
> 
> Louise Kilheeney (1):
>       examples/l2fwd-keepalive: fix mbuf pool size
> 
> Luca Boccassi (4):
>       fix various typos found by Lintian
>       usertools: check for pci.ids in /usr/share/misc
>       Revert "net/bnxt: fix TQM ring context memory size"
>       Revert "net/bnxt: fix number of TQM ring"
> 
> Lukasz Bartosik (1):
>       event/octeontx2: fix queue removal from Rx adapter
> 
> Lukasz Wojciechowski (5):
>       drivers/crypto: fix log type variables for -fno-common
>       security: fix verification of parameters
>       security: fix return types in documentation
>       security: fix session counter
>       test: remove redundant macro
> 
> Marvin Liu (5):
>       vhost: fix packed ring zero-copy
>       vhost: fix shadow update
>       vhost: fix shadowed descriptors not flushed
>       net/virtio: fix crash when device reconnecting
>       net/virtio: fix unexpected event after reconnect
> 
> Matteo Croce (1):
>       doc: fix LTO config option
> 
> Mattias Rönnblom (3):
>       event/dsw: remove redundant control ring poll
>       event/dsw: remove unnecessary read barrier
>       event/dsw: avoid reusing previously recorded events
> 
> Michael Baum (2):
>       net/mlx5: fix meter color register consideration
>       net/mlx4: fix drop queue error handling
> 
> Michael Haeuptle (1):
>       vfio: fix race condition with sysfs
> 
> Michal Krawczyk (5):
>       net/ena/base: fix documentation of functions
>       net/ena/base: fix indentation in CQ polling
>       net/ena/base: fix indentation of multiple defines
>       net/ena: set IO ring size to valid value
>       net/ena/base: fix testing for supported hash function
> 
> Min Hu (Connor) (3):
>       net/hns3: fix configuring illegal VLAN PVID
>       net/hns3: fix mailbox opcode data type
>       net/hns3: fix VLAN PVID when configuring device
> 
> Mit Matelske (1):
>       eal/freebsd: fix queuing duplicate alarm callbacks
> 
> Mohsin Shaikh (1):
>       net/mlx5: use open/read/close for ib stats query
> 
> Muhammad Bilal (2):
>       fix same typo in multiple places
>       doc: fix typo in contributors guide
> 
> Nagadheeraj Rottela (2):
>       crypto/nitrox: fix CSR register address generation
>       crypto/nitrox: fix oversized device name
> 
> Nicolas Chautru (2):
>       baseband/turbo_sw: fix exposed LLR decimals assumption
>       bbdev: fix doxygen comments
> 
> Nithin Dabilpuram (2):
>       devtools: fix symbol map change check
>       net/octeontx2: disable unnecessary error interrupts
> 
> Olivier Matz (3):
>       test/kvargs: fix to consider empty elements as valid
>       test/kvargs: fix invalid cases check
>       kvargs: fix invalid token parsing on FreeBSD
> 
> Ophir Munk (1):
>       net/mlx5: fix VLAN PCP item calculation
> 
> Ori Kam (1):
>       eal/ppc: fix bool type after altivec include
> 
> Pablo de Lara (4):
>       cryptodev: add asymmetric session-less feature name
>       test/crypto: fix flag check
>       crypto/openssl: fix out-of-place encryption
>       doc: add NASM installation steps
> 
> Pavan Nikhilesh (4):
>       net/octeontx2: fix device configuration sequence
>       eventdev: fix probe and remove for secondary process
>       common/octeontx: fix gcc 9.1 ABI break
>       app/eventdev: check Tx adapter service ID
> 
> Phil Yang (2):
>       service: remove rte prefix from static functions
>       net/ixgbe: fix link state timing on fiber ports
> 
> Qi Zhang (10):
>       net/ice: remove unnecessary variable
>       net/ice: remove bulk alloc option
>       net/ice/base: fix uninitialized stack variables
>       net/ice/base: read PSM clock frequency from register
>       net/ice/base: minor fixes
>       net/ice/base: fix MAC write command
>       net/ice/base: fix binary order for GTPU filter
>       net/ice/base: remove unused code in switch rule
>       net/ice: fix variable initialization
>       net/ice: fix RSS for GTPU
> 
> Qiming Yang (3):
>       net/i40e: fix X722 performance
>       doc: fix multicast filter feature announcement
>       net/i40e: fix queue related exception handling
> 
> Rahul Gupta (2):
>       net/bnxt: fix memory leak during queue restart
>       net/bnxt: fix Rx ring producer index
> 
> Rasesh Mody (3):
>       net/qede: fix link state configuration
>       net/qede: fix port reconfiguration
>       examples/kni: fix MTU change to setup Tx queue
> 
> Raslan Darawsheh (4):
>       net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
>       app/testpmd: add parsing for QinQ VLAN headers
>       net/mlx5: fix matching for UDP tunnels with Verbs
>       doc: fix build issue in ABI guide
> 
> Ray Kinsella (1):
>       doc: fix default symbol binding in ABI guide
> 
> Rohit Raj (1):
>       net/dpaa2: fix 10G port negotiation
> 
> Roland Qi (1):
>       vhost: fix peer close check
> 
> Ruifeng Wang (2):
>       test: skip some subtests in no-huge mode
>       test/ipsec: fix crash in session destroy
> 
> Sarosh Arif (1):
>       doc: fix typo in contributors guide
> 
> Shougang Wang (2):
>       net/ixgbe: fix link status after port reset
>       net/i40e: fix queue region in RSS flow
> 
> Simei Su (1):
>       net/ice: support mark only action for flow director
> 
> Sivaprasad Tummala (1):
>       vhost: handle mbuf allocation failure
> 
> Somnath Kotur (2):
>       bus/pci: fix devargs on probing again
>       net/bnxt: fix max ring count
> 
> Stephen Hemminger (24):
>       ethdev: fix spelling
>       net/mvneta: do not use PMD log type
>       net/virtio: do not use PMD log type
>       net/tap: do not use PMD log type
>       net/pfe: do not use PMD log type
>       net/bnxt: do not use PMD log type
>       net/dpaa: use dynamic log type
>       net/thunderx: use dynamic log type
>       net/netvsc: propagate descriptor limits from VF
>       net/netvsc: handle Rx packets during multi-channel setup
>       net/netvsc: split send buffers from Tx descriptors
>       net/netvsc: fix memory free on device close
>       net/netvsc: remove process event optimization
>       net/netvsc: handle Tx completions based on burst size
>       net/netvsc: avoid possible live lock
>       lpm6: fix comments spelling
>       eal: fix comments spelling
>       net/netvsc: fix comment spelling
>       bus/vmbus: fix comment spelling
>       net/netvsc: do RSS across Rx queue only
>       net/netvsc: do not configure RSS if disabled
>       net/tap: fix crash in flow destroy
>       eal: fix C++17 compilation
>       net/vmxnet3: handle bad host framing
> 
> Suanming Mou (3):
>       net/mlx5: fix counter container usage
>       net/mlx5: fix meter suffix table leak
>       net/mlx5: fix jump table leak
> 
> Sunil Kumar Kori (1):
>       eal: fix log message print for regex
> 
> Tao Zhu (3):
>       net/ice: fix hash flow crash
>       net/ixgbe: fix link status inconsistencies
>       net/ixgbe: fix resource leak after thread exits normally
> 
> Thomas Monjalon (13):
>       drivers/crypto: fix build with make 4.3
>       doc: fix sphinx compatibility
>       log: fix level picked with globbing on type register
>       doc: fix matrix CSS for recent sphinx
>       common/mlx5: fix build with -fno-common
>       net/mlx4: fix build with -fno-common
>       common/mlx5: fix build with rdma-core 21
>       app: fix usage help of options separated by dashes
>       net/mvpp2: fix build with gcc 10
>       examples/vm_power: fix build with -fno-common
>       examples/vm_power: drop Unix path limit redefinition
>       doc: fix build with doxygen 1.8.18
>       doc: fix API index
> 
> Timothy Redaelli (6):
>       crypto/octeontx2: fix build with gcc 10
>       test: fix build with gcc 10
>       app/pipeline: fix build with gcc 10
>       examples/vhost_blk: fix build with gcc 10
>       examples/eventdev: fix build with gcc 10
>       examples/qos_sched: fix build with gcc 10
> 
> Ting Xu (1):
>       app/testpmd: fix DCB set
> 
> Tonghao Zhang (2):
>       eal: fix PRNG init with HPET enabled
>       net/mlx5: fix crash when releasing meter table
> 
> Vadim Podovinnikov (1):
>       net/memif: fix resource leak
> 
> Vamsi Attunuru (1):
>       net/octeontx2: enable error and RAS interrupt in configure
> 
> Viacheslav Ovsiienko (2):
>       net/mlx5: fix metadata for compressed Rx CQEs
>       common/mlx5: fix netlink buffer allocation from stack
> 
> Vijaya Mohan Guvva (1):
>       bus/pci: fix UIO resource access from secondary process
> 
> Vladimir Medvedkin (1):
>       ipsec: check SAD lookup error
> 
> Wei Hu (Xavier) (10):
>       vfio: fix use after free with multiprocess
>       net/hns3: fix status after repeated resets
>       net/hns3: fix return value when clearing statistics
>       app/testpmd: fix statistics after reset
>       net/hns3: support different numbers of Rx and Tx queues
>       net/hns3: fix Tx interrupt when enabling Rx interrupt
>       net/hns3: fix MSI-X interrupt during initialization
>       net/hns3: remove unnecessary assignments in Tx
>       net/hns3: remove one IO barrier in Rx
>       net/hns3: add free threshold in Rx
> 
> Wei Zhao (8):
>       net/ice: change default tunnel type
>       net/ice: add action number check for switch
>       net/ice: fix input set of VLAN item
>       net/i40e: fix flow director for ARP packets
>       doc: add i40e limitation for flow director
>       net/i40e: fix flush of flow director filter
>       net/i40e: fix wild pointer
>       net/i40e: fix flow director enabling
> 
> Wisam Jaddo (3):
>       net/mlx5: fix zero metadata action
>       net/mlx5: fix zero value validation for metadata
>       net/mlx5: fix VLAN ID check
> 
> Xiao Zhang (1):
>       app/testpmd: fix PPPoE flow command
> 
> Xiaolong Ye (3):
>       net/virtio: fix outdated comment
>       vhost: remove unused variable
>       doc: fix log level example in Linux guide
> 
> Xiaoyu Min (3):
>       net/mlx5: fix push VLAN action to use item info
>       net/mlx5: fix validation of push VLAN without full mask
>       net/mlx5: fix RSS enablement
> 
> Xiaoyun Li (4):
>       net/ixgbe/base: update copyright
>       net/i40e/base: update copyright
>       common/iavf: update copyright
>       net/ice/base: update copyright
> 
> Xiaoyun Wang (7):
>       net/hinic: allocate IO memory with socket id
>       net/hinic: fix LRO
>       net/hinic/base: fix port start during FW hot update
>       net/hinic/base: fix PF firmware hot-active problem
>       net/hinic: fix queues resource free
>       net/hinic: fix Tx mbuf length while copying
>       net/hinic: fix TSO
> 
> Xuan Ding (2):
>       vhost: prevent zero-copy with incompatible client mode
>       vhost: fix zero-copy server mode
> 
> Yisen Zhuang (1):
>       net/hns3: reduce judgements of free Tx ring space
> 
> Yunjian Wang (16):
>       kvargs: fix buffer overflow when parsing list
>       net/tap: remove unused assert
>       net/nfp: fix dangling pointer on probe failure
>       net/pfe: fix double free of MAC address
>       net/tap: fix mbuf double free when writev fails
>       net/tap: fix mbuf and mem leak during queue release
>       net/tap: fix check for mbuf number of segment
>       net/tap: fix file close on remove
>       net/tap: fix fd leak on creation failure
>       net/tap: fix unexpected link handler
>       net/tap: fix queues fd check before close
>       net/octeontx: fix dangling pointer on init failure
>       crypto/ccp: fix fd leak on probe failure
>       net/failsafe: fix fd leak
>       crypto/caam_jr: fix check of file descriptors
>       crypto/caam_jr: fix IRQ functions return type
> 
> Yuri Chipchev (1):
>       event/dsw: fix enqueue burst return value
> 
> Zhihong Peng (1):
>       net/ixgbe: fix link status synchronization on BSD

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 1/4] devtools: shrink cross-compilation test definition
  2020-06-15 22:22  3% ` [dpdk-dev] [PATCH v2 0/4] add PPC and Windows cross-compilation " Thomas Monjalon
@ 2020-06-15 22:22  7%   ` Thomas Monjalon
  2020-06-17 21:05  0%     ` David Christensen
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-06-15 22:22 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson, drc, dmitry.kozliuk

Each cross-compilation case needs to define the target compiler
and the meson cross file.
Since the compiler is already defined in the cross file,
the latter is enough.

The function "build" is changed to accept a cross file as an
alternative to the compiler name. In the case of a file (detected if
readable), the compiler is extracted with sed and tr, and the option
--cross-file is automatically added.
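
For example, assuming a typical meson cross file such as
config/arm/arm64_armv8_linux_gcc, whose [binaries] section contains a
line like c = 'aarch64-linux-gnu-gcc', the sed expression keeps only
what follows "c =" and the tr calls strip the quotes, so
aarch64-linux-gnu-gcc is the compiler passed to load_env, and
--cross-file <file> is appended to the meson options.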

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v2: fix ABI check config (thanks David)
---
 devtools/test-meson-builds.sh | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 18b874fac5..bee55ec038 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -117,16 +117,24 @@ install_target () # <builddir> <installdir>
 	fi
 }
 
-build () # <directory> <target compiler> <meson options>
+build () # <directory> <target compiler | cross file> <meson options>
 {
 	targetdir=$1
 	shift
-	targetcc=$1
+	crossfile=
+	[ -r $1 ] && crossfile=$1 || targetcc=$1
 	shift
 	# skip build if compiler not available
 	command -v ${CC##* } >/dev/null 2>&1 || return 0
+	if [ -n "$crossfile" ] ; then
+		cross="--cross-file $crossfile"
+		targetcc=$(sed -n 's,^c[[:space:]]*=[[:space:]]*,,p' \
+			$crossfile | tr -d "'" | tr -d '"')
+	else
+		cross=
+	fi
 	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir --werror $*
+	config $srcdir $builds_dir/$targetdir $cross --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
@@ -140,7 +148,7 @@ build () # <directory> <target compiler> <meson options>
 			fi
 
 			rm -rf $abirefdir/build
-			config $abirefdir/src $abirefdir/build $*
+			config $abirefdir/src $abirefdir/build $cross $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
 			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
@@ -186,17 +194,15 @@ if [ "$ok" = "false" ] ; then
 fi
 build build-x86-default cc -Dlibdir=lib -Dmachine=$default_machine $use_shared
 
-c=aarch64-linux-gnu-gcc
 # generic armv8a with clang as host compiler
+f=$srcdir/config/arm/arm64_armv8_linux_gcc
 export CC="clang"
-build build-arm64-host-clang $c $use_shared \
-	--cross-file $srcdir/config/arm/arm64_armv8_linux_gcc
+build build-arm64-host-clang $f $use_shared
 unset CC
-# all gcc/arm configurations
+# some gcc/arm configurations
 for f in $srcdir/config/arm/arm64_[bdo]*gcc ; do
 	export CC="$CCACHE gcc"
-	build build-$(basename $f | tr '_' '-' | cut -d'-' -f-2) $c \
-		$use_shared --cross-file $f
+	build build-$(basename $f | tr '_' '-' | cut -d'-' -f-2) $f $use_shared
 	unset CC
 done
 
-- 
2.26.2


^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH v2 0/4] add PPC and Windows cross-compilation to meson test
  @ 2020-06-15 22:22  3% ` Thomas Monjalon
  2020-06-15 22:22  7%   ` [dpdk-dev] [PATCH v2 1/4] devtools: shrink cross-compilation test definition Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-06-15 22:22 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson, drc, dmitry.kozliuk

In order to better support PPC and Windows,
their compilation is tested on Linux with Meson
via the script test-meson-builds.sh,
which is supposed to be called in every CI lab.
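
For example, with the powerpc64le and MinGW-w64 cross toolchains
installed, running devtools/test-meson-builds.sh locally should cover
these new targets along with the existing ones; targets whose cross
compiler is not found are simply skipped, as the build function
already does for missing compilers.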


Thomas Monjalon (4):
  devtools: shrink cross-compilation test definition
  devtools: allow non-standard toolchain in meson test
  devtools: add ppc64 in meson build test
  devtools: add Windows cross-build test with MinGW


v2: update some explanations and fix ABI check


 config/ppc/ppc64le-power8-linux-gcc         | 11 ++++++
 config/x86/{meson_mingw.txt => cross-mingw} |  0
 devtools/test-meson-builds.sh               | 44 +++++++++++++++------
 doc/guides/windows_gsg/build_dpdk.rst       |  2 +-
 4 files changed, 44 insertions(+), 13 deletions(-)
 create mode 100644 config/ppc/ppc64le-power8-linux-gcc
 rename config/x86/{meson_mingw.txt => cross-mingw} (100%)

-- 
2.26.2


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] 19.11.3 patches review and test
  2020-06-15  2:05  3% ` Pei Zhang
@ 2020-06-15  9:09  0%   ` Luca Boccassi
  0 siblings, 0 replies; 200+ results
From: Luca Boccassi @ 2020-06-15  9:09 UTC (permalink / raw)
  To: Pei Zhang
  Cc: stable, dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani,
	benjamin walker, David Christensen, Hemant Agrawal, Ian Stokes,
	Jerin Jacob, John McNamara, Ju-Hyoung Lee, Kevin Traynor,
	pingx yu, qian q xu, Raslan Darawsheh, Thomas Monjalon,
	yuan peng, zhaoyan chen

On Sun, 2020-06-14 at 22:05 -0400, Pei Zhang wrote:
> Hi Luca,
> 
> Testing with dpdk v19.11.3-rc1 from Red Hat looks good.
> 
> We cover the 14 scenarios below and all get PASS on RHEL8 testing:
> 
> (1)Guest with device assignment (PF) throughput testing (1G hugepage size): PASS
> (2)Guest with device assignment (PF) throughput testing (2M hugepage size): PASS
> (3)Guest with device assignment (VF) throughput testing: PASS
> (4)PVP (host dpdk testpmd as vswitch) 1Q throughput testing: PASS
> (5)PVP vhost-user 2Q throughput testing: PASS
> (6)PVP vhost-user 1Q - cross numa node throughput testing: PASS
> (7)Guest with vhost-user 2 queues throughput testing: PASS
> (8)vhost-user reconnect with dpdk-client, qemu-server: qemu reconnect: PASS
> (9)PVP 1Q live migration testing: PASS
> (10)PVP 1Q cross numa node live migration testing: PASS
> (11)Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
> (12)Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
> (13)Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
> (14)Allocate memory from the NUMA node which Virtio device locates: PASS
> 
> Versions:
> 
> kernel 4.18
> qemu 4.2
> dpdk: git://dpdk.org/dpdk-stable remotes/origin/19.11
> 
> # git log -1
> commit d764b2cd1f357cbba18148488c144f5c30c53ae0 (HEAD, tag: v19.11.3-rc1)
> Author: Apeksha Gupta <apeksha.gupta@nxp.com>
> Date:   Wed Jun 3 18:47:07 2020 +0530
> 
>     test/crypto: fix statistics case
>     
>     [ upstream commit 29fdc5bf4555e16e866188dd9fe95f9bab01404a ]
>     
>     The test case - test_stats is directly accessing the
>     cryptodev and its dev_ops which are internal to library
>     and should not be used directly by the application.
>     However, the test case also fails to check for the
>     error ENOTSUP. It should skip the case if the API returns
>     ENOTSUP. This patch fixes these two issues.
>     
>     Fixes: 202d375c60bc ("app/test: add cryptodev unit and performance tests")
>     
>     Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
>     Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> 
> NICs: X540-AT2 NIC (ixgbe, 10G)
> 
> Best regards,
> 
> Pei

Great, thank you!

> [quoted original 19.11.3 announcement snipped; the full message,
> including the complete patch list, is archived below in this thread]

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 7/7] eal: add lcore hotplug notifications
  2020-06-15  6:34  3%   ` Kinsella, Ray
@ 2020-06-15  7:13  0%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-06-15  7:13 UTC (permalink / raw)
  To: Kinsella, Ray; +Cc: dev, Neil Horman, Richardson, Bruce, Thomas Monjalon

On Mon, Jun 15, 2020 at 8:34 AM Kinsella, Ray <mdr@ashroe.eu> wrote:
>
> From an ABI PoV, you are 100% correct.
>
> Is the agreed term 'callback', not 'notifier'? For example,
> rte_dev_event_callback_register and rte_mem_event_callback_register.
>
> I did wonder, however, if all these callbacks would be better handled
> through an EventDev-style event notification approach.

I am reconsidering the term.
Callback seems better, yes; and actually there is no need for a lcore
event framework.
Cooking a v2 for this week.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 2/7] eal: fix multiple definition of per lcore thread id
  2020-06-10 14:45  3% ` [dpdk-dev] [PATCH 2/7] eal: fix multiple definition of per lcore thread id David Marchand
@ 2020-06-15  6:46  0%   ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-06-15  6:46 UTC (permalink / raw)
  To: David Marchand, dev
  Cc: Neil Horman, Cunming Liang, Konstantin Ananyev, Olivier Matz


On 10/06/2020 15:45, David Marchand wrote:
> Because of the inline accessor + static declaration in rte_gettid(),
> we end up with multiple symbols for RTE_PER_LCORE(_thread_id).
> Each compilation unit will pay a cost when accessing this information
> for the first time.
>
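(Illustration of the cause -- a minimal sketch, not DPDK code: a static
TLS variable defined inside a static inline function in a header gets
one instance per compilation unit that uses it.)

	/* hypothetical header, included by many .c files */
	static inline int get_id(void)
	{
		/* each compilation unit gets its own private copy: */
		static __thread int id = -1;

		if (id == -1)
			id = compute_id(); /* hypothetical helper, re-run per unit */
		return id;
	}

Moving the definition into a single .c file and keeping only the
declaration in the header, as this patch does, leaves one symbol and a
single first-access cost.
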
> $ nm build/app/dpdk-testpmd | grep per_lcore__thread_id
> 0000000000000054 d per_lcore__thread_id.5037
> 0000000000000040 d per_lcore__thread_id.5103
> 0000000000000048 d per_lcore__thread_id.5259
> 000000000000004c d per_lcore__thread_id.5259
> 0000000000000044 d per_lcore__thread_id.5933
> 0000000000000058 d per_lcore__thread_id.6261
> 0000000000000050 d per_lcore__thread_id.7378
> 000000000000005c d per_lcore__thread_id.7496
> 000000000000000c d per_lcore__thread_id.8016
> 0000000000000010 d per_lcore__thread_id.8431
>
> Make it global as part of the DPDK_21 stable ABI.
>
> Fixes: ef76436c6834 ("eal: get unique thread id")
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  lib/librte_eal/common/eal_common_thread.c | 1 +
>  lib/librte_eal/include/rte_eal.h          | 3 ++-
>  lib/librte_eal/rte_eal_version.map        | 7 +++++++
>  3 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
> index 25200e5a99..f04d880880 100644
> --- a/lib/librte_eal/common/eal_common_thread.c
> +++ b/lib/librte_eal/common/eal_common_thread.c
> @@ -24,6 +24,7 @@
>  #include "eal_thread.h"
>  
>  RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
> +RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
>  static RTE_DEFINE_PER_LCORE(unsigned int, _socket_id) =
>  	(unsigned int)SOCKET_ID_ANY;
>  static RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
> diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
> index 2f9ed298de..2edf8c6556 100644
> --- a/lib/librte_eal/include/rte_eal.h
> +++ b/lib/librte_eal/include/rte_eal.h
> @@ -447,6 +447,8 @@ enum rte_intr_mode rte_eal_vfio_intr_mode(void);
>   */
>  int rte_sys_gettid(void);
>  
> +RTE_DECLARE_PER_LCORE(int, _thread_id);
> +
>  /**
>   * Get system unique thread id.
>   *
> @@ -456,7 +458,6 @@ int rte_sys_gettid(void);
>   */
>  static inline int rte_gettid(void)
>  {
> -	static RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
>  	if (RTE_PER_LCORE(_thread_id) == -1)
>  		RTE_PER_LCORE(_thread_id) = rte_sys_gettid();
>  	return RTE_PER_LCORE(_thread_id);
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index d8038749a4..fdfc3f1a88 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -221,6 +221,13 @@ DPDK_20.0 {
>  	local: *;
>  };
>  
> +DPDK_21 {
> +	global:
> +
> +	per_lcore__thread_id;
> +
> +} DPDK_20.0;
> +
>  EXPERIMENTAL {
>  	global: 

Acked-by: Ray Kinsella <mdr@ashroe.eu>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 7/7] eal: add lcore hotplug notifications
  @ 2020-06-15  6:34  3%   ` Kinsella, Ray
  2020-06-15  7:13  0%     ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2020-06-15  6:34 UTC (permalink / raw)
  To: David Marchand, dev; +Cc: Neil Horman, Richardson, Bruce, Thomas Monjalon

From an ABI PoV, you are 100% correct.

Is the agreed term 'callback', not 'notifier'? For example,
rte_dev_event_callback_register and rte_mem_event_callback_register.

I did wonder, however, if all these callbacks would be better handled
through an EventDev-style event notification approach.

Ray K

On 10/06/2020 15:45, David Marchand wrote:
> Now that lcores can be dynamically allocated/freed, we will have to
> notify DPDK components and applications of such events for cases where
> per lcore context must be allocated/initialised.
>
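(A usage sketch based only on the prototypes proposed in this patch;
every name outside the rte_* API is hypothetical.)

	static int
	my_lcore_cb(unsigned int lcore_id, enum rte_lcore_event_type event,
		void *arg)
	{
		struct my_ctx *ctx = arg; /* hypothetical application context */

		switch (event) {
		case RTE_LCORE_EVENT_NEW_EXTERNAL:
			/* returning -1 refuses the new lcore */
			return my_ctx_alloc(ctx, lcore_id);
		case RTE_LCORE_EVENT_RELEASE_EXTERNAL:
			my_ctx_free(ctx, lcore_id);
			break;
		}
		return 0;
	}

	/* at init: */
	handle = rte_lcore_notifier_register(my_lcore_cb, &ctx);
	/* at teardown: */
	rte_lcore_notifier_unregister(handle);
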
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  lib/librte_eal/common/eal_common_lcore.c  | 91 +++++++++++++++++++++++
>  lib/librte_eal/common/eal_common_thread.c | 11 ++-
>  lib/librte_eal/common/eal_private.h       | 26 +++++++
>  lib/librte_eal/include/rte_lcore.h        | 49 ++++++++++++
>  lib/librte_eal/rte_eal_version.map        |  2 +
>  5 files changed, 178 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
> index 6aca1b2fee..3a997d8115 100644
> --- a/lib/librte_eal/common/eal_common_lcore.c
> +++ b/lib/librte_eal/common/eal_common_lcore.c
> @@ -212,6 +212,47 @@ rte_socket_id_by_idx(unsigned int idx)
>  	return config->numa_nodes[idx];
>  }
>  
> +struct lcore_notifier {
> +	TAILQ_ENTRY(lcore_notifier) next;
> +	rte_lcore_notifier_cb cb;
> +	void *arg;
> +};
> +static TAILQ_HEAD(lcore_notifiers_head, lcore_notifier) lcore_notifiers =
> +	TAILQ_HEAD_INITIALIZER(lcore_notifiers);
> +static rte_spinlock_t lcore_notifiers_lock = RTE_SPINLOCK_INITIALIZER;
> +
> +void *
> +rte_lcore_notifier_register(rte_lcore_notifier_cb cb, void *arg)
> +{
> +	struct lcore_notifier *notifier;
> +
> +	if (cb == NULL)
> +		return NULL;
> +
> +	notifier = calloc(1, sizeof(*notifier));
> +	if (notifier == NULL)
> +		return NULL;
> +
> +	notifier->cb = cb;
> +	notifier->arg = arg;
> +	rte_spinlock_lock(&lcore_notifiers_lock);
> +	TAILQ_INSERT_TAIL(&lcore_notifiers, notifier, next);
> +	rte_spinlock_unlock(&lcore_notifiers_lock);
> +
> +	return notifier;
> +}
> +
> +void
> +rte_lcore_notifier_unregister(void *handle)
> +{
> +	struct lcore_notifier *notifier = handle;
> +
> +	rte_spinlock_lock(&lcore_notifiers_lock);
> +	TAILQ_REMOVE(&lcore_notifiers, notifier, next);
> +	rte_spinlock_unlock(&lcore_notifiers_lock);
> +	free(notifier);
> +}
> +
>  rte_spinlock_t external_lcore_lock = RTE_SPINLOCK_INITIALIZER;
>  
>  unsigned int
> @@ -277,3 +318,53 @@ rte_lcore_dump(FILE *f)
>  	}
>  	rte_spinlock_unlock(&external_lcore_lock);
>  }
> +
> +int
> +eal_lcore_external_notify_allocated(unsigned int lcore_id)
> +{
> +	struct lcore_notifier *notifier;
> +	int ret = 0;
> +
> +	RTE_LOG(DEBUG, EAL, "New lcore %u.\n", lcore_id);
> +	rte_spinlock_lock(&lcore_notifiers_lock);
> +	TAILQ_FOREACH(notifier, &lcore_notifiers, next) {
> +		if (notifier->cb(lcore_id, RTE_LCORE_EVENT_NEW_EXTERNAL,
> +				notifier->arg) == 0)
> +			continue;
> +
> +		/* Some notifier refused the new lcore, inform all notifiers
> +		 * that acked it.
> +		 */
> +		RTE_LOG(DEBUG, EAL, "A lcore notifier refused new lcore %u.\n",
> +			lcore_id);
> +
> +		notifier = TAILQ_PREV(notifier, lcore_notifiers_head, next);
> +		while (notifier != NULL) {
> +			notifier->cb(lcore_id,
> +				RTE_LCORE_EVENT_RELEASE_EXTERNAL,
> +				notifier->arg);
> +			notifier = TAILQ_PREV(notifier, lcore_notifiers_head,
> +				next);
> +		}
> +		ret = -1;
> +		break;
> +	}
> +	rte_spinlock_unlock(&lcore_notifiers_lock);
> +
> +	return ret;
> +}
> +
> +void
> +eal_lcore_external_notify_removed(unsigned int lcore_id)
> +{
> +	struct lcore_notifier *notifier;
> +
> +	RTE_LOG(DEBUG, EAL, "Released lcore %u.\n", lcore_id);
> +	rte_spinlock_lock(&lcore_notifiers_lock);
> +	TAILQ_FOREACH_REVERSE(notifier, &lcore_notifiers, lcore_notifiers_head,
> +			next) {
> +		notifier->cb(lcore_id, RTE_LCORE_EVENT_RELEASE_EXTERNAL,
> +			notifier->arg);
> +	}
> +	rte_spinlock_unlock(&lcore_notifiers_lock);
> +}
> diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
> index a81b192ff3..f66d1ccaef 100644
> --- a/lib/librte_eal/common/eal_common_thread.c
> +++ b/lib/librte_eal/common/eal_common_thread.c
> @@ -285,6 +285,12 @@ rte_thread_register(void)
>  
>  	rte_thread_init(lcore_id, &cpuset);
>  
> +	if (lcore_id != LCORE_ID_ANY &&
> +			eal_lcore_external_notify_allocated(lcore_id) < 0) {
> +		eal_lcore_external_release(lcore_id);
> +		RTE_PER_LCORE(_lcore_id) = lcore_id = LCORE_ID_ANY;
> +	}
> +
>  	RTE_LOG(DEBUG, EAL, "Registered thread as lcore %u.\n", lcore_id);
>  	RTE_PER_LCORE(thread_registered) = true;
>  }
> @@ -298,8 +304,11 @@ rte_thread_unregister(void)
>  		return;
>  
>  	lcore_id = RTE_PER_LCORE(_lcore_id);
> -	if (lcore_id != LCORE_ID_ANY)
> +	if (lcore_id != LCORE_ID_ANY) {
> +		eal_lcore_external_notify_removed(lcore_id);
>  		eal_lcore_external_release(lcore_id);
> +		RTE_PER_LCORE(_lcore_id) = LCORE_ID_ANY;
> +	}
>  
>  	rte_thread_uninit();
>  
> diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
> index 8dd850f68a..649697c368 100644
> --- a/lib/librte_eal/common/eal_private.h
> +++ b/lib/librte_eal/common/eal_private.h
> @@ -283,6 +283,21 @@ uint64_t get_tsc_freq_arch(void);
>   */
>  unsigned int eal_lcore_external_reserve(void);
>  
> +/**
> + * Evaluate all lcore notifiers with a RTE_LCORE_EVENT_NEW_EXTERNAL event for
> + * the passed lcore.
> + * If an error is returned by one of them, then this change is rolled back:
> + * all previous lcore notifiers that had acked the RTE_LCORE_EVENT_NEW_EXTERNAL
> + * event receive a RTE_LCORE_EVENT_RELEASE_EXTERNAL event for the passed lcore.
> + *
> + * @param lcore_id
> + *   The lcore to consider.
> + * @return
> + *   - 0 if all notifiers agreed on the new lcore
> + *   - -1 if one of them refused
> + */
> +int eal_lcore_external_notify_allocated(unsigned int lcore_id);
> +
>  /**
>   * Release an external lcore.
>   *
> @@ -291,6 +306,17 @@ unsigned int eal_lcore_external_reserve(void);
>   */
>  void eal_lcore_external_release(unsigned int lcore_id);
>  
> +/**
> + * Evaluate all lcore notifiers with a RTE_LCORE_EVENT_RELEASE_EXTERNAL event
> + * for the passed lcore.
> + * This function must be called with a lcore that successfully passed
> + * eal_lcore_external_notify_allocated().
> + *
> + * @param lcore_id
> + *   The lcore with role ROLE_EXTERNAL to release.
> + */
> +void eal_lcore_external_notify_removed(unsigned int lcore_id);
> +
>  /**
>   * Prepare physical memory mapping
>   * i.e. hugepages on Linux and
> diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h
> index 9cf34efef4..e0fec33d5a 100644
> --- a/lib/librte_eal/include/rte_lcore.h
> +++ b/lib/librte_eal/include/rte_lcore.h
> @@ -238,6 +238,55 @@ __rte_experimental
>  void
>  rte_lcore_dump(FILE *f);
>  
> +enum rte_lcore_event_type {
> +	RTE_LCORE_EVENT_NEW_EXTERNAL,
> +	RTE_LCORE_EVENT_RELEASE_EXTERNAL,
> +};
> +
> +/**
> + * Callback prototype for getting lcore events.
> + *
> + * @param lcore_id
> + *   The lcore to consider for this event.
> + * @param event
> + *   The type of event on the lcore.
> + * @param arg
> + *   An opaque pointer passed at notifier registration.
> + * @return
> + *   - -1 when refusing this event,
> + *   - 0 otherwise.
> + */
> +typedef int (*rte_lcore_notifier_cb)(unsigned int lcore_id,
> +	enum rte_lcore_event_type event, void *arg);
> +
> +/**
> + * Register a lcore notifier.
> + *
> + * @param cb
> + *   The callback invoked for each lcore event with the arg argument.
> + *   See rte_lcore_notifier_cb description.
> + * @param arg
> + *   An optional argument that gets passed to the callback when it gets
> + *   invoked.
> + * @return
> + *   On success, returns an opaque pointer for the created notifier.
> + *   NULL on failure.
> + */
> +__rte_experimental
> +void *
> +rte_lcore_notifier_register(rte_lcore_notifier_cb cb, void *arg);
> +
> +/**
> + * Unregister a lcore notifier.
> + *
> + * @param handle
> + *   The handle pointer returned by a former successful call to
> + *   rte_lcore_notifier_register.
> + */
> +__rte_experimental
> +void
> +rte_lcore_notifier_unregister(void *handle);
> +
>  /**
>   * Set core affinity of the current thread.
>   * Support both EAL and non-EAL thread and update TLS.
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index 6754d52543..1e6f2aaacc 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -396,6 +396,8 @@ EXPERIMENTAL {
>  
>  	# added in 20.08
>  	rte_lcore_dump;
> +	rte_lcore_notifier_register;
> +	rte_lcore_notifier_unregister;
>  	rte_thread_register;
>  	rte_thread_unregister;
>  };


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] 19.11.3 patches review and test
  2020-06-03 19:43  3% [dpdk-dev] 19.11.3 patches review and test luca.boccassi
  2020-06-10  7:19  0% ` Yu, PingX
@ 2020-06-15  2:05  3% ` Pei Zhang
  2020-06-15  9:09  0%   ` Luca Boccassi
  2020-06-16 14:20  0% ` Govindharajan, Hariprasad
  2020-06-18 18:11  3% ` [dpdk-dev] [EXTERNAL] " Abhishek Marathe
  3 siblings, 1 reply; 200+ results
From: Pei Zhang @ 2020-06-15  2:05 UTC (permalink / raw)
  To: luca boccassi
  Cc: stable, dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani,
	benjamin walker, David Christensen, Hemant Agrawal, Ian Stokes,
	Jerin Jacob, John McNamara, Ju-Hyoung Lee, Kevin Traynor,
	pingx yu, qian q xu, Raslan Darawsheh, Thomas Monjalon,
	yuan peng, zhaoyan chen

Hi Luca,

Testing with dpdk v19.11.3-rc1 from Red Hat looks good.

We cover the 14 scenarios below and all get PASS on RHEL8 testing:

(1)Guest with device assignment (PF) throughput testing (1G hugepage size): PASS
(2)Guest with device assignment (PF) throughput testing (2M hugepage size): PASS
(3)Guest with device assignment (VF) throughput testing: PASS
(4)PVP (host dpdk testpmd as vswitch) 1Q throughput testing: PASS
(5)PVP vhost-user 2Q throughput testing: PASS
(6)PVP vhost-user 1Q - cross numa node throughput testing: PASS
(7)Guest with vhost-user 2 queues throughput testing: PASS
(8)vhost-user reconnect with dpdk-client, qemu-server: qemu reconnect: PASS
(9)PVP 1Q live migration testing: PASS
(10)PVP 1Q cross numa node live migration testing: PASS
(11)Guest with ovs+dpdk+vhost-user 1Q live migration testing: PASS
(12)Guest with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
(13)Guest with ovs+dpdk+vhost-user 2Q live migration testing: PASS
(14)Allocate memory from the NUMA node which Virtio device locates: PASS

Versions:

kernel 4.18
qemu 4.2
dpdk: git://dpdk.org/dpdk-stable remotes/origin/19.11

# git log -1
commit d764b2cd1f357cbba18148488c144f5c30c53ae0 (HEAD, tag: v19.11.3-rc1)
Author: Apeksha Gupta <apeksha.gupta@nxp.com>
Date:   Wed Jun 3 18:47:07 2020 +0530

    test/crypto: fix statistics case
    
    [ upstream commit 29fdc5bf4555e16e866188dd9fe95f9bab01404a ]
    
    The test case - test_stats is directly accessing the
    cryptodev and its dev_ops which are internal to library
    and should not be used directly by the application.
    However, the test case also fails to check for the
    error ENOTSUP. It should skip the case if the API returns
    ENOTSUP. This patch fixes these two issues.
    
    Fixes: 202d375c60bc ("app/test: add cryptodev unit and performance tests")
    
    Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
    Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
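
(A sketch of the skip pattern described in this commit message,
assuming the app/test conventions; it is not the literal patch content.)

	ret = rte_cryptodev_stats_get(dev_id, &stats);
	if (ret == -ENOTSUP)
		return TEST_SKIPPED; /* API unsupported: skip, do not fail */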

NICs: X540-AT2 NIC (ixgbe, 10G)

Best regards,

Pei

----- Original Message -----
From: "luca boccassi" <luca.boccassi@gmail.com>
To: stable@dpdk.org
Cc: dev@dpdk.org, "Abhishek Marathe" <Abhishek.Marathe@microsoft.com>, "Akhil Goyal" <akhil.goyal@nxp.com>, "Ali Alnubani" <alialnu@mellanox.com>, "benjamin walker" <benjamin.walker@intel.com>, "David Christensen" <drc@linux.vnet.ibm.com>, "Hemant Agrawal" <hemant.agrawal@nxp.com>, "Ian Stokes" <ian.stokes@intel.com>, "Jerin Jacob" <jerinj@marvell.com>, "John McNamara" <john.mcnamara@intel.com>, "Ju-Hyoung Lee" <juhlee@microsoft.com>, "Kevin Traynor" <ktraynor@redhat.com>, "Pei Zhang" <pezhang@redhat.com>, "pingx yu" <pingx.yu@intel.com>, "qian q xu" <qian.q.xu@intel.com>, "Raslan Darawsheh" <rasland@mellanox.com>, "Thomas Monjalon" <thomas@monjalon.net>, "yuan peng" <yuan.peng@intel.com>, "zhaoyan chen" <zhaoyan.chen@intel.com>
Sent: Thursday, June 4, 2020 3:43:49 AM
Subject: 19.11.3 patches review and test

Hi all,

Here is a list of patches targeted for stable release 19.11.3.

The planned date for the final release is the 17th of June.

Please help with testing and validation of your use cases and report
any issues/results with reply-all to this mail. For the final release
the fixes and reported validations will be added to the release notes.

A release candidate tarball can be found at:

    https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1

These patches are located at branch 19.11 of dpdk-stable repo:
    https://dpdk.org/browse/dpdk-stable/

Thanks.

Luca Boccassi

---
Adam Dybkowski (5):
      cryptodev: fix missing device id range checking
      common/qat: fix GEN3 marketing name
      app/crypto-perf: fix display of sample test vector
      crypto/qat: support plain SHA1..SHA512 hashes
      cryptodev: fix SHA-1 digest enum comment

Ajit Khaparde (3):
      net/bnxt: fix FW version query
      net/bnxt: fix error log for command timeout
      net/bnxt: fix using RSS config struct

Akhil Goyal (1):
      ipsec: fix build dependency on hash lib

Alex Kiselev (1):
      lpm6: fix size of tbl8 group

Alex Marginean (1):
      net/enetc: fix Rx lock-up

Alexander Kozyrev (8):
      net/mlx5: reduce Tx completion index memory loads
      net/mlx5: add device parameter for MPRQ stride size
      net/mlx5: enable MPRQ multi-stride operations
      net/mlx5: add multi-segment packets in MPRQ mode
      net/mlx5: set dynamic flow metadata in Rx queues
      net/mlx5: improve logging of MPRQ selection
      net/mlx5: fix assert in dynamic metadata handling
      net/mlx5: fix Tx queue release debug log timing

Alvin Zhang (2):
      net/iavf: fix link speed
      net/e1000: fix port hotplug for multi-process

Amit Gupta (1):
      net/octeontx: fix meson build for disabled drivers

Anatoly Burakov (1):
      mem: preallocate VA space in no-huge mode

Andrew Rybchenko (4):
      net/sfc: fix reported promiscuous/multicast mode
      net/sfc/base: use simpler EF10 family conditional check
      net/sfc/base: use simpler EF10 family run-time checks
      net/sfc/base: fix build when EVB is enabled

Andy Pei (1):
      net/ipn3ke: use control thread to check link status

Ankur Dwivedi (1):
      net/octeontx2: fix buffer size assignment

Apeksha Gupta (2):
      bus/fslmc: fix dereferencing null pointer
      test/crypto: fix statistics case

Archana Muniganti (1):
      examples/fips_validation: fix parsing of algorithms

Arek Kusztal (1):
      crypto/qat: fix cipher descriptor for ZUC and SNOW

Asaf Penso (2):
      net/mlx5: fix call to modify action without init item
      net/mlx5: fix assert in doorbell lookup

Ashish Gupta (1):
      net/octeontx2: fix link information for loopback port

Asim Jamshed (1):
      fib: fix headers for C++ support

Bernard Iremonger (1):
      net/i40e: fix flow director initialisation

Bing Zhao (6):
      net/mlx5: fix header modify action validation
      net/mlx5: fix actions validation on root table
      net/mlx5: fix assert in modify converting
      mk: fix static linkage of mlx dependency
      mem: fix overflow on allocation
      net/mlx5: fix doorbell bitmap management offsets

Bruce Richardson (3):
      pci: remove unneeded includes in public header file
      pci: fix build on FreeBSD
      drivers: fix log type variables for -fno-common

Cheng Peng (1):
      net/iavf: fix stats query error code

Chengchang Tang (3):
      net/hns3: fix promiscuous mode for PF
      net/hns3: fix default VLAN filter configuration for PF
      net/hns3: fix VLAN filter when setting promiscuous mode

Chengwen Feng (7):
      net/hns3: fix packets offload features flags in Rx
      net/hns3: fix default error code of command interface
      net/hns3: fix crash when flushing RSS flow rules with FLR
      net/hns3: fix return value of setting VLAN offload
      net/hns3: clear residual flow rules on init
      net/hns3: fix Rx interrupt after reset
      net/hns3: replace memory barrier with data dependency order

Ciara Power (1):
      telemetry: fix port stats retrieval

Darek Stojaczyk (1):
      pci: accept 32-bit domain numbers

David Christensen (2):
      pci: fix build on ppc
      eal/ppc: fix build with gcc 9.3

David Marchand (5):
      mem: mark pages as not accessed when reserving VA
      test: load drivers when required
      eal: fix typo in endian conversion macros
      remove references to private PCI probe function
      doc: prefer https when pointing to dpdk.org

Dekel Peled (7):
      net/mlx5: fix mask used for IPv6 item validation
      net/mlx5: fix CVLAN tag set in IP item translation
      net/mlx5: update VLAN and encap actions validation
      net/mlx5: fix match on empty VLAN item in DV mode
      common/mlx5: fix umem buffer alignment
      net/mlx5: fix VLAN flow action with wildcard VLAN item
      net/mlx5: fix RSS key copy to TIR context

Dmitry Kozlyuk (2):
      build: fix linker warnings with clang on Windows
      build: support MinGW-w64 with Meson

Eduard Serra (1):
      net/vmxnet3: fix RSS setting on v4

Eugeny Parshutin (1):
      ethdev: fix build when vtune profiling is on

Fady Bader (1):
      mempool: remove inline functions from export list

Fan Zhang (1):
      vhost/crypto: add missing user protocol flag

Ferruh Yigit (7):
      net/nfp: fix log format specifiers
      net/null: fix secondary burst function selection
      net/null: remove redundant check
      mempool/octeontx2: fix build for gcc O1 optimization
      net/ena: fix build for O1 optimization
      event/octeontx2: fix build for O1 optimization
      examples/kni: fix crash during MTU set

Gaetan Rivet (5):
      doc: fix number of failsafe sub-devices
      net/ring: fix device pointer on allocation
      pci: reject negative values in PCI id
      doc: fix typos in ABI policy
      kvargs: fix strcmp helper documentation

Gavin Hu (2):
      net/i40e: relax barrier in Tx
      net/i40e: relax barrier in Tx for NEON

Guinan Sun (2):
      net/ixgbe: fix statistics in flow control mode
      net/ixgbe: check driver type in MACsec API

Haifeng Lin (1):
      eal/arm64: fix precise TSC

Haiyue Wang (1):
      net/ice/base: check memory pointer before copying

Hao Chen (1):
      net/hns3: support Rx interrupt

Harry van Haaren (3):
      service: fix crash on exit
      examples/eventdev: fix crash on exit
      test/flow_classify: enable multi-sockets system

Hemant Agrawal (3):
      drivers: add crypto as dependency for event drivers
      bus/fslmc: fix size of qman fq descriptor
      mempool/dpaa2: install missing header with meson

Honnappa Nagarahalli (3):
      timer: protect initialization with lock
      service: fix race condition for MT unsafe service
      service: fix identification of service running on other lcore

Hyong Youb Kim (1):
      net/enic: fix flow action reordering

Igor Chauskin (2):
      net/ena/base: make allocation macros thread-safe
      net/ena/base: prevent allocation of zero sized memory

Igor Romanov (9):
      net/sfc: fix initialization error path
      net/sfc: fix Rx queue start failure path
      net/sfc: fix promiscuous and allmulticast toggles errors
      net/sfc: set priority of created filters to manual
      net/sfc/base: reduce filter priorities to implemented only
      net/sfc/base: reject automatic filter creation by users
      net/sfc/base: refactor filter lookup loop in EF10
      net/sfc/base: handle manual and auto filter clashes in EF10
      net/sfc/base: fix manual filter delete in EF10

Itsuro Oda (2):
      net/vhost: fix potential memory leak on close
      vhost: make IOTLB cache name unique among processes

Ivan Dyukov (3):
      net/virtio-user: fix devargs parsing
      app: remove extra new line after link duplex
      examples: remove extra new line after link duplex

Jasvinder Singh (3):
      net/softnic: fix memory leak for thread
      net/softnic: fix resource leak for pipeline
      examples/ip_pipeline: remove check of null response

Jeff Guo (3):
      net/i40e: fix setting L2TAG
      net/iavf: fix setting L2TAG
      net/ice: fix setting L2TAG

Jiawei Wang (1):
      net/mlx5: fix imissed counter overflow

Jim Harris (1):
      contigmem: cleanup properly when load fails

Jun Yang (1):
      net/dpaa2: fix congestion ID for multiple traffic classes

Junyu Jiang (4):
      examples/vmdq: fix output of pools/queues
      examples/vmdq: fix RSS configuration
      net/ice: fix RSS advanced rule
      net/ice: fix crash in switch filter

Juraj Linkeš (1):
      ci: fix telemetry dependency in Travis

Július Milan (1):
      net/memif: fix init when already connected

Kalesh AP (9):
      net/bnxt: fix HWRM command during FW reset
      net/bnxt: use true/false for bool types
      net/bnxt: fix port start failure handling
      net/bnxt: fix VLAN add when port is stopped
      net/bnxt: fix VNIC Rx queue count on VNIC free
      net/bnxt: fix number of TQM ring
      net/bnxt: fix TQM ring context memory size
      app/testpmd: fix memory failure handling for i40e DDP
      net/bnxt: fix storing MAC address twice

Kevin Traynor (9):
      net/hinic: fix snprintf length of cable info
      net/hinic: fix repeating cable log and length check
      net/avp: fix gcc 10 maybe-uninitialized warning
      examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
      eal/x86: ignore gcc 10 stringop-overflow warnings
      net/mlx5: fix gcc 10 enum-conversion warning
      crypto/kasumi: fix extern declaration
      drivers/crypto: disable gcc 10 no-common errors
      build: disable gcc 10 zero-length-bounds warning

Konstantin Ananyev (1):
      security: fix crash at accessing non-implemented ops

Lijun Ou (4):
      net/hns3: fix configuring RSS hash when rules are flushed
      net/hns3: add RSS hash offload to capabilities
      net/hns3: fix RSS key length
      net/hns3: fix RSS indirection table configuration

Linsi Yuan (1):
      net/bnxt: fix possible stack smashing

Louise Kilheeney (1):
      examples/l2fwd-keepalive: fix mbuf pool size

Luca Boccassi (4):
      fix various typos found by Lintian
      usertools: check for pci.ids in /usr/share/misc
      Revert "net/bnxt: fix TQM ring context memory size"
      Revert "net/bnxt: fix number of TQM ring"

Lukasz Bartosik (1):
      event/octeontx2: fix queue removal from Rx adapter

Lukasz Wojciechowski (5):
      drivers/crypto: fix log type variables for -fno-common
      security: fix verification of parameters
      security: fix return types in documentation
      security: fix session counter
      test: remove redundant macro

Marvin Liu (5):
      vhost: fix packed ring zero-copy
      vhost: fix shadow update
      vhost: fix shadowed descriptors not flushed
      net/virtio: fix crash when device reconnecting
      net/virtio: fix unexpected event after reconnect

Matteo Croce (1):
      doc: fix LTO config option

Mattias Rönnblom (3):
      event/dsw: remove redundant control ring poll
      event/dsw: remove unnecessary read barrier
      event/dsw: avoid reusing previously recorded events

Michael Baum (2):
      net/mlx5: fix meter color register consideration
      net/mlx4: fix drop queue error handling

Michael Haeuptle (1):
      vfio: fix race condition with sysfs

Michal Krawczyk (5):
      net/ena/base: fix documentation of functions
      net/ena/base: fix indentation in CQ polling
      net/ena/base: fix indentation of multiple defines
      net/ena: set IO ring size to valid value
      net/ena/base: fix testing for supported hash function

Min Hu (Connor) (3):
      net/hns3: fix configuring illegal VLAN PVID
      net/hns3: fix mailbox opcode data type
      net/hns3: fix VLAN PVID when configuring device

Mit Matelske (1):
      eal/freebsd: fix queuing duplicate alarm callbacks

Mohsin Shaikh (1):
      net/mlx5: use open/read/close for ib stats query

Muhammad Bilal (2):
      fix same typo in multiple places
      doc: fix typo in contributors guide

Nagadheeraj Rottela (2):
      crypto/nitrox: fix CSR register address generation
      crypto/nitrox: fix oversized device name

Nicolas Chautru (2):
      baseband/turbo_sw: fix exposed LLR decimals assumption
      bbdev: fix doxygen comments

Nithin Dabilpuram (2):
      devtools: fix symbol map change check
      net/octeontx2: disable unnecessary error interrupts

Olivier Matz (3):
      test/kvargs: fix to consider empty elements as valid
      test/kvargs: fix invalid cases check
      kvargs: fix invalid token parsing on FreeBSD

Ophir Munk (1):
      net/mlx5: fix VLAN PCP item calculation

Ori Kam (1):
      eal/ppc: fix bool type after altivec include

Pablo de Lara (4):
      cryptodev: add asymmetric session-less feature name
      test/crypto: fix flag check
      crypto/openssl: fix out-of-place encryption
      doc: add NASM installation steps

Pavan Nikhilesh (4):
      net/octeontx2: fix device configuration sequence
      eventdev: fix probe and remove for secondary process
      common/octeontx: fix gcc 9.1 ABI break
      app/eventdev: check Tx adapter service ID

Phil Yang (2):
      service: remove rte prefix from static functions
      net/ixgbe: fix link state timing on fiber ports

Qi Zhang (10):
      net/ice: remove unnecessary variable
      net/ice: remove bulk alloc option
      net/ice/base: fix uninitialized stack variables
      net/ice/base: read PSM clock frequency from register
      net/ice/base: minor fixes
      net/ice/base: fix MAC write command
      net/ice/base: fix binary order for GTPU filter
      net/ice/base: remove unused code in switch rule
      net/ice: fix variable initialization
      net/ice: fix RSS for GTPU

Qiming Yang (3):
      net/i40e: fix X722 performance
      doc: fix multicast filter feature announcement
      net/i40e: fix queue related exception handling

Rahul Gupta (2):
      net/bnxt: fix memory leak during queue restart
      net/bnxt: fix Rx ring producer index

Rasesh Mody (3):
      net/qede: fix link state configuration
      net/qede: fix port reconfiguration
      examples/kni: fix MTU change to setup Tx queue

Raslan Darawsheh (4):
      net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
      app/testpmd: add parsing for QinQ VLAN headers
      net/mlx5: fix matching for UDP tunnels with Verbs
      doc: fix build issue in ABI guide

Ray Kinsella (1):
      doc: fix default symbol binding in ABI guide

Rohit Raj (1):
      net/dpaa2: fix 10G port negotiation

Roland Qi (1):
      vhost: fix peer close check

Ruifeng Wang (2):
      test: skip some subtests in no-huge mode
      test/ipsec: fix crash in session destroy

Sarosh Arif (1):
      doc: fix typo in contributors guide

Shougang Wang (2):
      net/ixgbe: fix link status after port reset
      net/i40e: fix queue region in RSS flow

Simei Su (1):
      net/ice: support mark only action for flow director

Sivaprasad Tummala (1):
      vhost: handle mbuf allocation failure

Somnath Kotur (2):
      bus/pci: fix devargs on probing again
      net/bnxt: fix max ring count

Stephen Hemminger (24):
      ethdev: fix spelling
      net/mvneta: do not use PMD log type
      net/virtio: do not use PMD log type
      net/tap: do not use PMD log type
      net/pfe: do not use PMD log type
      net/bnxt: do not use PMD log type
      net/dpaa: use dynamic log type
      net/thunderx: use dynamic log type
      net/netvsc: propagate descriptor limits from VF
      net/netvsc: handle Rx packets during multi-channel setup
      net/netvsc: split send buffers from Tx descriptors
      net/netvsc: fix memory free on device close
      net/netvsc: remove process event optimization
      net/netvsc: handle Tx completions based on burst size
      net/netvsc: avoid possible live lock
      lpm6: fix comments spelling
      eal: fix comments spelling
      net/netvsc: fix comment spelling
      bus/vmbus: fix comment spelling
      net/netvsc: do RSS across Rx queue only
      net/netvsc: do not configure RSS if disabled
      net/tap: fix crash in flow destroy
      eal: fix C++17 compilation
      net/vmxnet3: handle bad host framing

Suanming Mou (3):
      net/mlx5: fix counter container usage
      net/mlx5: fix meter suffix table leak
      net/mlx5: fix jump table leak

Sunil Kumar Kori (1):
      eal: fix log message print for regex

Tao Zhu (3):
      net/ice: fix hash flow crash
      net/ixgbe: fix link status inconsistencies
      net/ixgbe: fix resource leak after thread exits normally

Thomas Monjalon (13):
      drivers/crypto: fix build with make 4.3
      doc: fix sphinx compatibility
      log: fix level picked with globbing on type register
      doc: fix matrix CSS for recent sphinx
      common/mlx5: fix build with -fno-common
      net/mlx4: fix build with -fno-common
      common/mlx5: fix build with rdma-core 21
      app: fix usage help of options separated by dashes
      net/mvpp2: fix build with gcc 10
      examples/vm_power: fix build with -fno-common
      examples/vm_power: drop Unix path limit redefinition
      doc: fix build with doxygen 1.8.18
      doc: fix API index

Timothy Redaelli (6):
      crypto/octeontx2: fix build with gcc 10
      test: fix build with gcc 10
      app/pipeline: fix build with gcc 10
      examples/vhost_blk: fix build with gcc 10
      examples/eventdev: fix build with gcc 10
      examples/qos_sched: fix build with gcc 10

Ting Xu (1):
      app/testpmd: fix DCB set

Tonghao Zhang (2):
      eal: fix PRNG init with HPET enabled
      net/mlx5: fix crash when releasing meter table

Vadim Podovinnikov (1):
      net/memif: fix resource leak

Vamsi Attunuru (1):
      net/octeontx2: enable error and RAS interrupt in configure

Viacheslav Ovsiienko (2):
      net/mlx5: fix metadata for compressed Rx CQEs
      common/mlx5: fix netlink buffer allocation from stack

Vijaya Mohan Guvva (1):
      bus/pci: fix UIO resource access from secondary process

Vladimir Medvedkin (1):
      ipsec: check SAD lookup error

Wei Hu (Xavier) (10):
      vfio: fix use after free with multiprocess
      net/hns3: fix status after repeated resets
      net/hns3: fix return value when clearing statistics
      app/testpmd: fix statistics after reset
      net/hns3: support different numbers of Rx and Tx queues
      net/hns3: fix Tx interrupt when enabling Rx interrupt
      net/hns3: fix MSI-X interrupt during initialization
      net/hns3: remove unnecessary assignments in Tx
      net/hns3: remove one IO barrier in Rx
      net/hns3: add free threshold in Rx

Wei Zhao (8):
      net/ice: change default tunnel type
      net/ice: add action number check for switch
      net/ice: fix input set of VLAN item
      net/i40e: fix flow director for ARP packets
      doc: add i40e limitation for flow director
      net/i40e: fix flush of flow director filter
      net/i40e: fix wild pointer
      net/i40e: fix flow director enabling

Wisam Jaddo (3):
      net/mlx5: fix zero metadata action
      net/mlx5: fix zero value validation for metadata
      net/mlx5: fix VLAN ID check

Xiao Zhang (1):
      app/testpmd: fix PPPoE flow command

Xiaolong Ye (3):
      net/virtio: fix outdated comment
      vhost: remove unused variable
      doc: fix log level example in Linux guide

Xiaoyu Min (3):
      net/mlx5: fix push VLAN action to use item info
      net/mlx5: fix validation of push VLAN without full mask
      net/mlx5: fix RSS enablement

Xiaoyun Li (4):
      net/ixgbe/base: update copyright
      net/i40e/base: update copyright
      common/iavf: update copyright
      net/ice/base: update copyright

Xiaoyun Wang (7):
      net/hinic: allocate IO memory with socket id
      net/hinic: fix LRO
      net/hinic/base: fix port start during FW hot update
      net/hinic/base: fix PF firmware hot-active problem
      net/hinic: fix queues resource free
      net/hinic: fix Tx mbuf length while copying
      net/hinic: fix TSO

Xuan Ding (2):
      vhost: prevent zero-copy with incompatible client mode
      vhost: fix zero-copy server mode

Yisen Zhuang (1):
      net/hns3: reduce judgements of free Tx ring space

Yunjian Wang (16):
      kvargs: fix buffer overflow when parsing list
      net/tap: remove unused assert
      net/nfp: fix dangling pointer on probe failure
      net/pfe: fix double free of MAC address
      net/tap: fix mbuf double free when writev fails
      net/tap: fix mbuf and mem leak during queue release
      net/tap: fix check for mbuf number of segment
      net/tap: fix file close on remove
      net/tap: fix fd leak on creation failure
      net/tap: fix unexpected link handler
      net/tap: fix queues fd check before close
      net/octeontx: fix dangling pointer on init failure
      crypto/ccp: fix fd leak on probe failure
      net/failsafe: fix fd leak
      crypto/caam_jr: fix check of file descriptors
      crypto/caam_jr: fix IRQ functions return type

Yuri Chipchev (1):
      event/dsw: fix enqueue burst return value

Zhihong Peng (1):
      net/ixgbe: fix link status synchronization on BSD


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v9 01/12] eal: replace rte_page_sizes with a set of constants
  @ 2020-06-15  0:43  9%           ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2020-06-15  0:43 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Dmitry Kozlyuk, Jerin Jacob, John McNamara,
	Marko Kovacevic, Anatoly Burakov

Clang on Windows follows MS ABI where enum values are limited to 2^31-1.
Enum rte_page_sizes has members valued above this limit, which get
wrapped to zero, resulting in compilation error (duplicate values in
enum). Using MS ABI is mandatory for Windows EAL to call Win32 APIs.

Remove rte_page_sizes and replace its values with #define's.
This enumeration is not used in public API, so there's no ABI breakage.
Announce API changes for 20.08 in documentation.
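
A minimal standalone sketch of the contrast, assuming a target where the
enum base type is a 32-bit int (as under the MS ABI); the names here are
illustrative, not part of the patch:

    #include <inttypes.h>
    #include <stdio.h>

    /* A macro keeps the full 64-bit constant on every ABI. */
    #define PGSIZE_4G (1ULL << 32)

    /*
     * The enum form is what breaks: per the description above, with a
     * 32-bit enum base both 1ULL << 32 and 1ULL << 34 wrap to zero,
     * and clang reports a compilation error (duplicate values):
     *
     * enum page_sizes { PG_4G = 1ULL << 32, PG_16G = 1ULL << 34 };
     */

    int main(void)
    {
        printf("4G page: %" PRIu64 " bytes\n", (uint64_t)PGSIZE_4G);
        return 0;
    }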

Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 doc/guides/rel_notes/release_20_08.rst |  2 ++
 lib/librte_eal/include/rte_memory.h    | 23 ++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5..86d240213 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* ``rte_page_sizes`` enumeration is replaced with ``RTE_PGSIZE_xxx`` defines.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/include/rte_memory.h b/lib/librte_eal/include/rte_memory.h
index 3d8d0bd69..65374d53a 100644
--- a/lib/librte_eal/include/rte_memory.h
+++ b/lib/librte_eal/include/rte_memory.h
@@ -24,19 +24,16 @@ extern "C" {
 #include <rte_config.h>
 #include <rte_fbarray.h>
 
-__extension__
-enum rte_page_sizes {
-	RTE_PGSIZE_4K    = 1ULL << 12,
-	RTE_PGSIZE_64K   = 1ULL << 16,
-	RTE_PGSIZE_256K  = 1ULL << 18,
-	RTE_PGSIZE_2M    = 1ULL << 21,
-	RTE_PGSIZE_16M   = 1ULL << 24,
-	RTE_PGSIZE_256M  = 1ULL << 28,
-	RTE_PGSIZE_512M  = 1ULL << 29,
-	RTE_PGSIZE_1G    = 1ULL << 30,
-	RTE_PGSIZE_4G    = 1ULL << 32,
-	RTE_PGSIZE_16G   = 1ULL << 34,
-};
+#define RTE_PGSIZE_4K   (1ULL << 12)
+#define RTE_PGSIZE_64K  (1ULL << 16)
+#define RTE_PGSIZE_256K (1ULL << 18)
+#define RTE_PGSIZE_2M   (1ULL << 21)
+#define RTE_PGSIZE_16M  (1ULL << 24)
+#define RTE_PGSIZE_256M (1ULL << 28)
+#define RTE_PGSIZE_512M (1ULL << 29)
+#define RTE_PGSIZE_1G   (1ULL << 30)
+#define RTE_PGSIZE_4G   (1ULL << 32)
+#define RTE_PGSIZE_16G  (1ULL << 34)
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
 
-- 
2.25.4


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
  2020-06-13  3:59  5%   ` Jerin Jacob
@ 2020-06-13 10:43  0%     ` Mattias Rönnblom
  2020-06-18 15:51  0%       ` McDaniel, Timothy
  2020-06-18 15:44  5%     ` McDaniel, Timothy
  1 sibling, 1 reply; 200+ results
From: Mattias Rönnblom @ 2020-06-13 10:43 UTC (permalink / raw)
  To: Jerin Jacob, McDaniel, Timothy
  Cc: Jerin Jacob, dpdk-dev, Gage Eads, Van Haaren, Harry,
	Ray Kinsella, Neil Horman

On 2020-06-13 05:59, Jerin Jacob wrote:
> On Sat, Jun 13, 2020 at 2:56 AM McDaniel, Timothy
> <timothy.mcdaniel@intel.com> wrote:
>> The DLB hardware does not conform exactly to the eventdev interface.
>> 1) It has a limit on the number of queues that may be linked to a port.
>> 2) Some ports are further restricted to a maximum of 1 linked queue.
>> 3) It does not (currently) have the ability to carry the flow_id as part
>> of the event (QE) payload.
>>
>> Due to the above, we would like to propose the following enhancements.
>
> Thanks, McDaniel. Good to see a new HW PMD for eventdev.
>
> + Ray and Neil.
>
> Hello McDaniel,
> I assume this patchset is for v20.08. It is adding new elements in
> public structures. Have you checked for ABI breakage?
>
> I will review the rest of the series if there is NO ABI breakage, as we
> cannot have ABI breakage in the 20.08 version.
>
>
> ABI validator
> ~~~~~~~~~~~~~~
> 1. meson build
> 2.  Compile and install known stable abi libs i.e ToT.
>           DESTDIR=$PWD/install-meson-stable ninja -C build install
>       Compile and install with patches to be verified.
>           DESTDIR=$PWD/install-meson-new ninja -C build install
> 3. Gen ABI for both
>          devtools/gen-abi.sh install-meson-stable
>          devtools/gen-abi.sh install-meson-new
> 4. Run abi checker
>          devtools/check-abi.sh install-meson-stable install-meson-new
>
>
> DPDK_ABI_REF_DIR=/build/dpdk/reference/ DPDK_ABI_REF_VERSION=v20.02
> ./devtools/test-meson-builds.sh
> DPDK_ABI_REF_DIR - needs an absolute path, for reasons that are still
> unclear to me.
> DPDK_ABI_REF_VERSION - you need to use the last DPDK release.
>
>> 1) Add new fields to the rte_event_dev_info struct. These fields allow
>> the device to advertise its capabilities so that applications can take
>> the appropriate actions based on those capabilities.
>>
>>      struct rte_event_dev_info {
>>          uint32_t max_event_port_links;
>>          /**< Maximum number of queues that can be linked to a single event
>>           * port by this device.
>>           */
>>
>>          uint8_t max_single_link_event_port_queue_pairs;
>>          /**< Maximum number of event ports and queues that are optimized for
>>           * (and only capable of) single-link configurations supported by this
>>           * device. These ports and queues are not accounted for in
>>           * max_event_ports or max_event_queues.
>>           */
>>      }
>>
>> 2) Add a new field to the rte_event_dev_config struct. This field allows the
>> application to specify how many of its ports are limited to a single link,
>> or will be used in single link mode.
>>
>>      /** Event device configuration structure */
>>      struct rte_event_dev_config {
>>          uint8_t nb_single_link_event_port_queues;
>>          /**< Number of event ports and queues that will be singly-linked to
>>           * each other. These are a subset of the overall event ports and
>>           * queues; this value cannot exceed *nb_event_ports* or
>>           * *nb_event_queues*. If the device has ports and queues that are
>>           * optimized for single-link usage, this field is a hint for how many
>>           * to allocate; otherwise, regular event ports and queues can be used.
>>           */
>>      }
>>
>> 3) Replace the dedicated implicit_release_disabled field with a bit field
>> of explicit port capabilities. The implicit_release_disable functionality
>> is assigned to one bit, and a port-is-single-link-only attribute is
>> assigned to another, with the remaining bits available for future assignment.
>>
>>          /* Event port configuration bitmap flags */
>>          #define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>>          /**< Configure the port not to release outstanding events in
>>           * rte_event_dev_dequeue_burst(). If set, all events received through
>>           * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>>           * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>>           * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>>           */
>>          #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>>
>>          /**< This event port links only to a single event queue.
>>           *
>>           *  @see rte_event_port_setup(), rte_event_port_link()
>>           */
>>
>>          #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>>          /**
>>           * The implicit release disable attribute of the port
>>           */
>>
>>          struct rte_event_port_conf {
>>                  uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>>          }
>>
>> 4) Add UMWAIT/UMONITOR bit to rte_cpuflags
>>
>> 5) Add a new API that is useful for probing PCI devices.
>>
>>          /**
>>           * @internal
>>           * Wrapper for use by pci drivers as a .probe function to attach to an event
>>           * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>>           * the name.
>>           */
>>          static inline int
>>          rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>>                                      struct rte_pci_device *pci_dev,
>>                                      size_t private_data_size,
>>                                      eventdev_pmd_pci_callback_t devinit,
>>                                      const char *name);
>>
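
Taken together, a sketch of how an application might consume items 1)-3)
above, assuming the proposed fields and flags land as written; the device
ID, port/queue counts, and the elided mandatory fields are illustrative
only:

    #include <rte_eventdev.h>

    static int
    configure_sketch(uint8_t dev_id)
    {
        struct rte_event_dev_info info;
        struct rte_event_dev_config cfg = {0};
        struct rte_event_port_conf pconf = {0};

        if (rte_event_dev_info_get(dev_id, &info) < 0)
            return -1;

        /* Budget regular ports/queues plus the single-link pairs the
         * device advertises separately (items 1 and 2). */
        cfg.nb_event_ports = 2 + info.max_single_link_event_port_queue_pairs;
        cfg.nb_event_queues = 2 + info.max_single_link_event_port_queue_pairs;
        cfg.nb_single_link_event_port_queues =
            info.max_single_link_event_port_queue_pairs;
        /* Remaining mandatory fields (nb_events_limit, depths, ...) elided. */
        if (rte_event_dev_configure(dev_id, &cfg) < 0)
            return -1;

        /* Flag a port as single-link via the new bit field (item 3). */
        rte_event_port_default_conf_get(dev_id, 0, &pconf);
        pconf.event_port_cfg |= RTE_EVENT_PORT_CFG_SINGLE_LINK;
        return rte_event_port_setup(dev_id, 0, &pconf);
    }
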
>> Change-Id: I4cf00015296e2b3feca9886895765554730594be
>> Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
>> ---
>>   app/test-eventdev/evt_common.h                     |  1 +
>>   app/test-eventdev/test_order_atq.c                 |  4 ++
>>   app/test-eventdev/test_order_common.c              |  6 ++-
>>   app/test-eventdev/test_order_queue.c               |  4 ++
>>   app/test-eventdev/test_perf_atq.c                  |  1 +
>>   app/test-eventdev/test_perf_queue.c                |  1 +
>>   app/test-eventdev/test_pipeline_atq.c              |  1 +
>>   app/test-eventdev/test_pipeline_queue.c            |  1 +
>>   app/test/test_eventdev.c                           |  4 +-
>>   drivers/event/dpaa2/dpaa2_eventdev.c               |  2 +-
>>   drivers/event/octeontx/ssovf_evdev.c               |  2 +-
>>   drivers/event/skeleton/skeleton_eventdev.c         |  2 +-
>>   drivers/event/sw/sw_evdev.c                        |  5 +-
>>   drivers/event/sw/sw_evdev_selftest.c               |  9 ++--
>>   .../eventdev_pipeline/pipeline_worker_generic.c    |  8 ++-
>>   examples/eventdev_pipeline/pipeline_worker_tx.c    |  3 ++
>>   examples/l2fwd-event/l2fwd_event_generic.c         |  5 +-
>>   examples/l2fwd-event/l2fwd_event_internal_port.c   |  5 +-
>>   examples/l3fwd/l3fwd_event_generic.c               |  5 +-
>>   examples/l3fwd/l3fwd_event_internal_port.c         |  5 +-
>>   lib/librte_eal/x86/include/rte_cpuflags.h          |  1 +
>>   lib/librte_eal/x86/rte_cpuflags.c                  |  1 +
>>   lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
>>   lib/librte_eventdev/rte_eventdev.c                 | 62 +++++++++++++++++++---
>>   lib/librte_eventdev/rte_eventdev.h                 | 51 +++++++++++++++---
>>   lib/librte_eventdev/rte_eventdev_pmd_pci.h         | 54 +++++++++++++++++++
>>   26 files changed, 208 insertions(+), 37 deletions(-)
>>
>> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
>> index f9d7378d3..120c27b33 100644
>> --- a/app/test-eventdev/evt_common.h
>> +++ b/app/test-eventdev/evt_common.h
>> @@ -169,6 +169,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
>>                          .dequeue_timeout_ns = opt->deq_tmo_nsec,
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 0,
>>                          .nb_events_limit  = info.max_num_events,
>>                          .nb_event_queue_flows = opt->nb_flows,
>>                          .nb_event_port_dequeue_depth =
>> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
>> index 3366cfce9..8246b96f0 100644
>> --- a/app/test-eventdev/test_order_atq.c
>> +++ b/app/test-eventdev/test_order_atq.c
>> @@ -34,6 +34,8 @@ order_atq_worker(void *arg)
>>                          continue;
>>                  }
>>
>> +               ev.flow_id = ev.mbuf->udata64;
>> +
>>                  if (ev.sub_event_type == 0) { /* stage 0 from producer */
>>                          order_atq_process_stage_0(&ev);
>>                          while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
>> @@ -68,6 +70,8 @@ order_atq_worker_burst(void *arg)
>>                  }
>>
>>                  for (i = 0; i < nb_rx; i++) {
>> +                       ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>                          if (ev[i].sub_event_type == 0) { /*stage 0 */
>>                                  order_atq_process_stage_0(&ev[i]);
>>                          } else if (ev[i].sub_event_type == 1) { /* stage 1 */
>> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
>> index 4190f9ade..c6fcd0509 100644
>> --- a/app/test-eventdev/test_order_common.c
>> +++ b/app/test-eventdev/test_order_common.c
>> @@ -49,6 +49,7 @@ order_producer(void *arg)
>>                  const uint32_t flow = (uintptr_t)m % nb_flows;
>>                  /* Maintain seq number per flow */
>>                  m->seqn = producer_flow_seq[flow]++;
>> +               m->udata64 = flow;
>>
>>                  ev.flow_id = flow;
>>                  ev.mbuf = m;
>> @@ -318,10 +319,11 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>>                  opt->wkr_deq_dep = dev_info.max_event_port_dequeue_depth;
>>
>>          /* port configuration */
>> -       const struct rte_event_port_conf p_conf = {
>> +       struct rte_event_port_conf p_conf = {
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          /* setup one port per worker, linking to all queues */
>> @@ -351,6 +353,8 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>>          p->queue_id = 0;
>>          p->t = t;
>>
>> +       p_conf.new_event_threshold /= 2;
>> +
>>          ret = rte_event_port_setup(opt->dev_id, port, &p_conf);
>>          if (ret) {
>>                  evt_err("failed to setup producer port %d", port);
>> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
>> index 495efd92f..a0a2187a2 100644
>> --- a/app/test-eventdev/test_order_queue.c
>> +++ b/app/test-eventdev/test_order_queue.c
>> @@ -34,6 +34,8 @@ order_queue_worker(void *arg)
>>                          continue;
>>                  }
>>
>> +               ev.flow_id = ev.mbuf->udata64;
>> +
>>                  if (ev.queue_id == 0) { /* from ordered queue */
>>                          order_queue_process_stage_0(&ev);
>>                          while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
>> @@ -68,6 +70,8 @@ order_queue_worker_burst(void *arg)
>>                  }
>>
>>                  for (i = 0; i < nb_rx; i++) {
>> +                       ev[i].flow_id = ev[i].mbuf->udata64;
>> +
>>                          if (ev[i].queue_id == 0) { /* from ordered queue */
>>                                  order_queue_process_stage_0(&ev[i]);
>>                          } else if (ev[i].queue_id == 1) {/* from atomic queue */
>> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
>> index 8fd51004e..10846f202 100644
>> --- a/app/test-eventdev/test_perf_atq.c
>> +++ b/app/test-eventdev/test_perf_atq.c
>> @@ -204,6 +204,7 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues,
>> diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
>> index f4ea3a795..a0119da60 100644
>> --- a/app/test-eventdev/test_perf_queue.c
>> +++ b/app/test-eventdev/test_perf_queue.c
>> @@ -219,6 +219,7 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = dev_info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          ret = perf_event_dev_port_setup(test, opt, nb_stages /* stride */,
>> diff --git a/app/test-eventdev/test_pipeline_atq.c b/app/test-eventdev/test_pipeline_atq.c
>> index 8e8686c14..a95ec0aa5 100644
>> --- a/app/test-eventdev/test_pipeline_atq.c
>> +++ b/app/test-eventdev/test_pipeline_atq.c
>> @@ -356,6 +356,7 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                  .dequeue_depth = opt->wkr_deq_dep,
>>                  .enqueue_depth = info.max_event_port_dequeue_depth,
>>                  .new_event_threshold = info.max_num_events,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          if (!t->internal_port)
>> diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
>> index 7bebac34f..30817dc78 100644
>> --- a/app/test-eventdev/test_pipeline_queue.c
>> +++ b/app/test-eventdev/test_pipeline_queue.c
>> @@ -379,6 +379,7 @@ pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>>                          .dequeue_depth = opt->wkr_deq_dep,
>>                          .enqueue_depth = info.max_event_port_dequeue_depth,
>>                          .new_event_threshold = info.max_num_events,
>> +                       .event_port_cfg = 0,
>>          };
>>
>>          if (!t->internal_port) {
>> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
>> index 43ccb1ce9..62019c185 100644
>> --- a/app/test/test_eventdev.c
>> +++ b/app/test/test_eventdev.c
>> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>>          if (!(info.event_dev_cap &
>>                RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>>                  pconf.enqueue_depth = info.max_event_port_enqueue_depth;
>> -               pconf.disable_implicit_release = 1;
>> +               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>                  ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>>                  TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
>> -               pconf.disable_implicit_release = 0;
>> +               pconf.event_port_cfg = 0;
>>          }
>>
>>          ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
>> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
>> index a196ad4c6..8568bfcfc 100644
>> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
>> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
>> @@ -537,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>                  DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
>>          port_conf->enqueue_depth =
>>                  DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static int
>> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
>> index 1b1a5d939..99c0b2efb 100644
>> --- a/drivers/event/octeontx/ssovf_evdev.c
>> +++ b/drivers/event/octeontx/ssovf_evdev.c
>> @@ -224,7 +224,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = edev->max_num_events;
>>          port_conf->dequeue_depth = 1;
>>          port_conf->enqueue_depth = 1;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static void
>> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
>> index c889220e0..37d569b8c 100644
>> --- a/drivers/event/skeleton/skeleton_eventdev.c
>> +++ b/drivers/event/skeleton/skeleton_eventdev.c
>> @@ -209,7 +209,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = 32 * 1024;
>>          port_conf->dequeue_depth = 16;
>>          port_conf->enqueue_depth = 16;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static void
>> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
>> index fb8e8bebb..0b3dd9c1c 100644
>> --- a/drivers/event/sw/sw_evdev.c
>> +++ b/drivers/event/sw/sw_evdev.c
>> @@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
>>          }
>>
>>          p->inflight_max = conf->new_event_threshold;
>> -       p->implicit_release = !conf->disable_implicit_release;
>> +       p->implicit_release = !(conf->event_port_cfg &
>> +                               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>>
>>          /* check if ring exists, same as rx_worker above */
>>          snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
>> @@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>>          port_conf->new_event_threshold = 1024;
>>          port_conf->dequeue_depth = 16;
>>          port_conf->enqueue_depth = 16;
>> -       port_conf->disable_implicit_release = 0;
>> +       port_conf->event_port_cfg = 0;
>>   }
>>
>>   static int
>> diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
>> index 38c21fa0f..a78d6cd0d 100644
>> --- a/drivers/event/sw/sw_evdev_selftest.c
>> +++ b/drivers/event/sw/sw_evdev_selftest.c
>> @@ -172,7 +172,7 @@ create_ports(struct test *t, int num_ports)
>>                          .new_event_threshold = 1024,
>>                          .dequeue_depth = 32,
>>                          .enqueue_depth = 64,
>> -                       .disable_implicit_release = 0,
>> +                       .event_port_cfg = 0,
>>          };
>>          if (num_ports > MAX_PORTS)
>>                  return -1;
>> @@ -1227,7 +1227,7 @@ port_reconfig_credits(struct test *t)
>>                                  .new_event_threshold = 128,
>>                                  .dequeue_depth = 32,
>>                                  .enqueue_depth = 64,
>> -                               .disable_implicit_release = 0,
>> +                               .event_port_cfg = 0,
>>                  };
>>                  if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>>                          printf("%d Error setting up port\n", __LINE__);
>> @@ -1317,7 +1317,7 @@ port_single_lb_reconfig(struct test *t)
>>                  .new_event_threshold = 128,
>>                  .dequeue_depth = 32,
>>                  .enqueue_depth = 64,
>> -               .disable_implicit_release = 0,
>> +               .event_port_cfg = 0,
>>          };
>>          if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>>                  printf("%d Error setting up port\n", __LINE__);
>> @@ -3079,7 +3079,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
>>           * only be initialized once - and this needs to be set for multiple runs
>>           */
>>          conf.new_event_threshold = 512;
>> -       conf.disable_implicit_release = disable_implicit_release;
>> +       conf.event_port_cfg = disable_implicit_release ?
>> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>>
>>          if (rte_event_port_setup(evdev, 0, &conf) < 0) {
>>                  printf("Error setting up RX port\n");
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> index 42ff4eeb9..a091da3ba 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
>> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>          struct rte_event_dev_config config = {
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 1,
>>                          .nb_events_limit  = 4096,
>>                          .nb_event_queue_flows = 1024,
>>                          .nb_event_port_dequeue_depth = 128,
>> @@ -138,12 +139,13 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>                          .dequeue_depth = cdata.worker_cq_depth,
>>                          .enqueue_depth = 64,
>>                          .new_event_threshold = 4096,
>> +                       .event_port_cfg = 0,


No need to set this value; it's guaranteed to be 0 anyways. You might 
argue you do it for readability, but two other fields of that struct are 
already implicitly initialized.


This would apply to several of your other changes as well.
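
For reference, a minimal sketch of the C rule being invoked: members
omitted from a designated initializer are zero-initialized (C99 6.7.8):

    #include <assert.h>

    struct pconf {
        int dequeue_depth;
        int enqueue_depth;
        int event_port_cfg;
    };

    int main(void)
    {
        /* Only one member is named; the others are implicitly zeroed,
         * so an explicit .event_port_cfg = 0 adds nothing. */
        struct pconf c = { .dequeue_depth = 16 };

        assert(c.enqueue_depth == 0 && c.event_port_cfg == 0);
        return 0;
    }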


>>          };
>>          struct rte_event_queue_conf wkr_q_conf = {
>>                          .schedule_type = cdata.queue_type,
>>                          .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>>                          .nb_atomic_flows = 1024,
>> -               .nb_atomic_order_sequences = 1024,
>> +                       .nb_atomic_order_sequences = 1024,
>>          };
>>          struct rte_event_queue_conf tx_q_conf = {
>>                          .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
>> @@ -167,7 +169,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>>          disable_implicit_release = (dev_info.event_dev_cap &
>>                          RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>>
>> -       wkr_p_conf.disable_implicit_release = disable_implicit_release;
>> +       wkr_p_conf.event_port_cfg = disable_implicit_release ?
>> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>>
>>          if (dev_info.max_num_events < config.nb_events_limit)
>>                  config.nb_events_limit = dev_info.max_num_events;
>> @@ -417,6 +420,7 @@ init_adapters(uint16_t nb_ports)
>>                  .dequeue_depth = cdata.worker_cq_depth,
>>                  .enqueue_depth = 64,
>>                  .new_event_threshold = 4096,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          if (adptr_p_conf.new_event_threshold > dev_info.max_num_events)
>> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> index 55bb2f762..e8a9652aa 100644
>> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
>> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
>> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>>          struct rte_event_dev_config config = {
>>                          .nb_event_queues = nb_queues,
>>                          .nb_event_ports = nb_ports,
>> +                       .nb_single_link_event_port_queues = 0,
>>                          .nb_events_limit  = 4096,
>>                          .nb_event_queue_flows = 1024,
>>                          .nb_event_port_dequeue_depth = 128,
>> @@ -445,6 +446,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>>                          .dequeue_depth = cdata.worker_cq_depth,
>>                          .enqueue_depth = 64,
>>                          .new_event_threshold = 4096,
>> +                       .event_port_cfg = 0,
>>          };
>>          struct rte_event_queue_conf wkr_q_conf = {
>>                          .schedule_type = cdata.queue_type,
>> @@ -746,6 +748,7 @@ init_adapters(uint16_t nb_ports)
>>                  .dequeue_depth = cdata.worker_cq_depth,
>>                  .enqueue_depth = 64,
>>                  .new_event_threshold = 4096,
>> +               .event_port_cfg = 0,
>>          };
>>
>>          init_ports(nb_ports);
>> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
>> index 2dc95e5f7..e01df0435 100644
>> --- a/examples/l2fwd-event/l2fwd_event_generic.c
>> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
>> @@ -126,8 +126,9 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>          evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
>> index 63d57b46c..f54327b4f 100644
>> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
>> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
>> @@ -123,8 +123,9 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>                                                                  event_p_id++) {
>> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
>> index f8c98435d..409a4107e 100644
>> --- a/examples/l3fwd/l3fwd_event_generic.c
>> +++ b/examples/l3fwd/l3fwd_event_generic.c
>> @@ -115,8 +115,9 @@ l3fwd_event_port_setup_generic(void)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>          evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
>> index 03ac581d6..df410f10f 100644
>> --- a/examples/l3fwd/l3fwd_event_internal_port.c
>> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
>> @@ -113,8 +113,9 @@ l3fwd_event_port_setup_internal_port(void)
>>          if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>>                  event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>>
>> -       event_p_conf.disable_implicit_release =
>> -               evt_rsrc->disable_implicit_release;
>> +       event_p_conf.event_port_cfg = 0;
>> +       if (evt_rsrc->disable_implicit_release)
>> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>>
>>          for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>>                                                                  event_p_id++) {
>> diff --git a/lib/librte_eal/x86/include/rte_cpuflags.h b/lib/librte_eal/x86/include/rte_cpuflags.h
>> index c1d20364d..ab2c3b379 100644
>> --- a/lib/librte_eal/x86/include/rte_cpuflags.h
>> +++ b/lib/librte_eal/x86/include/rte_cpuflags.h
>> @@ -130,6 +130,7 @@ enum rte_cpu_flag_t {
>>          RTE_CPUFLAG_CLDEMOTE,               /**< Cache Line Demote */
>>          RTE_CPUFLAG_MOVDIRI,                /**< Direct Store Instructions */
>>          RTE_CPUFLAG_MOVDIR64B,              /**< Direct Store Instructions 64B */
>> +       RTE_CPUFLAG_UMWAIT,                 /**< UMONITOR/UMWAIT */
>>          RTE_CPUFLAG_AVX512VP2INTERSECT,     /**< AVX512 Two Register Intersection */
>>
>>          /* The last item */
>> diff --git a/lib/librte_eal/x86/rte_cpuflags.c b/lib/librte_eal/x86/rte_cpuflags.c
>> index 30439e795..69ac0dbce 100644
>> --- a/lib/librte_eal/x86/rte_cpuflags.c
>> +++ b/lib/librte_eal/x86/rte_cpuflags.c
>> @@ -137,6 +137,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
>>          FEAT_DEF(CLDEMOTE, 0x00000007, 0, RTE_REG_ECX, 25)
>>          FEAT_DEF(MOVDIRI, 0x00000007, 0, RTE_REG_ECX, 27)
>>          FEAT_DEF(MOVDIR64B, 0x00000007, 0, RTE_REG_ECX, 28)
>> +        FEAT_DEF(UMWAIT, 0x00000007, 0, RTE_REG_ECX, 5)
>>          FEAT_DEF(AVX512VP2INTERSECT, 0x00000007, 0, RTE_REG_EDX, 8)
>>   };
>>
>> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> index bb21dc407..8a72256de 100644
>> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>> @@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
>>                  return ret;
>>          }
>>
>> -       pc->disable_implicit_release = 0;
>> +       pc->event_port_cfg = 0;
>>          ret = rte_event_port_setup(dev_id, port_id, pc);
>>          if (ret) {
>>                  RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
>> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>> index 82c177c73..4955ab1a0 100644
>> --- a/lib/librte_eventdev/rte_eventdev.c
>> +++ b/lib/librte_eventdev/rte_eventdev.c
>> @@ -437,9 +437,29 @@ rte_event_dev_configure(uint8_t dev_id,
>>                                          dev_id);
>>                  return -EINVAL;
>>          }
>> -       if (dev_conf->nb_event_queues > info.max_event_queues) {
>> -               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
>> -               dev_id, dev_conf->nb_event_queues, info.max_event_queues);
>> +       if (dev_conf->nb_event_queues > info.max_event_queues +
>> +                       info.max_single_link_event_port_queue_pairs) {
>> +               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
>> +                                dev_id, dev_conf->nb_event_queues,
>> +                                info.max_event_queues,
>> +                                info.max_single_link_event_port_queue_pairs);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_event_queues -
>> +                       dev_conf->nb_single_link_event_port_queues >
>> +                       info.max_event_queues) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
>> +                                dev_id, dev_conf->nb_event_queues,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                info.max_event_queues);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_single_link_event_port_queues >
>> +                       dev_conf->nb_event_queues) {
>> +               RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
>> +                                dev_id,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                dev_conf->nb_event_queues);
>>                  return -EINVAL;
>>          }
>>
>> @@ -448,9 +468,31 @@ rte_event_dev_configure(uint8_t dev_id,
>>                  RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
>>                  return -EINVAL;
>>          }
>> -       if (dev_conf->nb_event_ports > info.max_event_ports) {
>> -               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
>> -               dev_id, dev_conf->nb_event_ports, info.max_event_ports);
>> +       if (dev_conf->nb_event_ports > info.max_event_ports +
>> +                       info.max_single_link_event_port_queue_pairs) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
>> +                                dev_id, dev_conf->nb_event_ports,
>> +                                info.max_event_ports,
>> +                                info.max_single_link_event_port_queue_pairs);
>> +               return -EINVAL;
>> +       }
>> +       if (dev_conf->nb_event_ports -
>> +                       dev_conf->nb_single_link_event_port_queues
>> +                       > info.max_event_ports) {
>> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
>> +                                dev_id, dev_conf->nb_event_ports,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                info.max_event_ports);
>> +               return -EINVAL;
>> +       }
>> +
>> +       if (dev_conf->nb_single_link_event_port_queues >
>> +           dev_conf->nb_event_ports) {
>> +               RTE_EDEV_LOG_ERR(
>> +                                "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
>> +                                dev_id,
>> +                                dev_conf->nb_single_link_event_port_queues,
>> +                                dev_conf->nb_event_ports);
>>                  return -EINVAL;
>>          }
>>
>> @@ -737,7 +779,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>>                  return -EINVAL;
>>          }
>>
>> -       if (port_conf && port_conf->disable_implicit_release &&
>> +       if (port_conf &&
>> +           (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
>>              !(dev->data->event_dev_cap &
>>                RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>>                  RTE_EDEV_LOG_ERR(
>> @@ -809,6 +852,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>>                          uint32_t *attr_value)
>>   {
>>          struct rte_eventdev *dev;
>> +       uint32_t config;
>>
>>          if (!attr_value)
>>                  return -EINVAL;
>> @@ -830,6 +874,10 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>>          case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
>>                  *attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
>>                  break;
>> +       case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
>> +               config = dev->data->ports_cfg[port_id].event_port_cfg;
>> +               *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>> +               break;
>>          default:
>>                  return -EINVAL;
>>          };
>> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
>> index 7dc832353..7f7a8a275 100644
>> --- a/lib/librte_eventdev/rte_eventdev.h
>> +++ b/lib/librte_eventdev/rte_eventdev.h
>> @@ -291,6 +291,13 @@ struct rte_event;
>>    * single queue to each port or map a single queue to many port.
>>    */
>>
>> +#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
>> +/**< Event device is capable of carrying the flow ID from the enqueued
>> + * event to the dequeued event. If the flag is set, the dequeued event's flow
>> + * ID matches the corresponding enqueued event's flow ID. If the flag is not
>> + * set, the dequeued event's flow ID field is uninitialized.
>> + */
>> +

The dequeued event's flow ID should be documented as undefined rather 
than uninitialized, to let an implementation overwrite an existing 
value. Replace "is capable of carrying" with "carries".


Is "maintain" better than "carry"? Or "preserve". I don't know.

>>   /* Event device priority levels */
>>   #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>>   /**< Highest priority expressed across eventdev subsystem
>> @@ -380,6 +387,10 @@ struct rte_event_dev_info {
>>           * event port by this device.
>>           * A device that does not support bulk enqueue will set this as 1.
>>           */
>> +       uint32_t max_event_port_links;
>> +       /**< Maximum number of queues that can be linked to a single event
>> +        * port by this device.
>> +        */


The eventdev API supports 255 queues, so you should use a uint8_t here.
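
For reference, a caller-side sketch (the helper name is hypothetical)
that caps the number of linked queues at this limit; passing NULL
priorities selects RTE_EVENT_DEV_PRIORITY_NORMAL:

#include <rte_eventdev.h>

/* Link at most max_event_port_links queues to one port. */
static int
app_link_port(uint8_t dev_id, uint8_t port_id,
	      const uint8_t queues[], uint16_t nb_queues)
{
	struct rte_event_dev_info info;

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return -1;

	if (nb_queues > info.max_event_port_links)
		nb_queues = (uint16_t)info.max_event_port_links;

	return rte_event_port_link(dev_id, port_id, queues, NULL, nb_queues);
}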


>>          int32_t max_num_events;
>>          /**< A *closed system* event dev has a limit on the number of events it
>>           * can manage at a time. An *open system* event dev does not have a
>> @@ -387,6 +398,12 @@ struct rte_event_dev_info {
>>           */
>>          uint32_t event_dev_cap;
>>          /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
>> +       uint8_t max_single_link_event_port_queue_pairs;
>> +       /**< Maximum number of event ports and queues that are optimized for
>> +        * (and only capable of) single-link configurations supported by this
>> +        * device. These ports and queues are not accounted for in
>> +        * max_event_ports or max_event_queues.
>> +        */
>>   };
>>
>>   /**
>> @@ -494,6 +511,14 @@ struct rte_event_dev_config {
>>           */
>>          uint32_t event_dev_cfg;
>>          /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
>> +       uint8_t nb_single_link_event_port_queues;
>> +       /**< Number of event ports and queues that will be singly-linked to
>> +        * each other. These are a subset of the overall event ports and
>> +        * queues; this value cannot exceed *nb_event_ports* or
>> +        * *nb_event_queues*. If the device has ports and queues that are
>> +        * optimized for single-link usage, this field is a hint for how many
>> +        * to allocate; otherwise, regular event ports and queues can be used.
>> +        */
>>   };
>>
>>   /**
>> @@ -671,6 +696,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>>
>>   /* Event port specific APIs */
>>
>> +/* Event port configuration bitmap flags */
>> +#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>> +/**< Configure the port not to release outstanding events in
>> + * rte_event_dev_dequeue_burst(). If set, all events received through
>> + * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>> + * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>> + * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>> + */
>> +#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>> +/**< This event port links only to a single event queue.
>> + *
>> + *  @see rte_event_port_setup(), rte_event_port_link()
>> + */
>> +
>>   /** Event port configuration structure */
>>   struct rte_event_port_conf {
>>          int32_t new_event_threshold;
>> @@ -698,13 +737,7 @@ struct rte_event_port_conf {
>>           * which previously supplied to rte_event_dev_configure().
>>           * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
>>           */
>> -       uint8_t disable_implicit_release;
>> -       /**< Configure the port not to release outstanding events in
>> -        * rte_event_dev_dequeue_burst(). If true, all events received through
>> -        * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>> -        * RTE_EVENT_OP_FORWARD. Must be false when the device is not
>> -        * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>> -        */
>> +       uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>>   };
>>
>>   /**
>> @@ -769,6 +802,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>>    * The new event threshold of the port
>>    */
>>   #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>> +/**
>> + * The implicit release disable attribute of the port
>> + */
>> +#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>>
>>   /**
>>    * Get an attribute from a port.
>> diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> index 443cd38c2..157299983 100644
>> --- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> +++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>> @@ -88,6 +88,60 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>>          return -ENXIO;
>>   }
>>
>> +/**
>> + * @internal
>> + * Wrapper for use by pci drivers as a .probe function to attach to an event
>> + * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>> + * the name.
>> + */
>> +static inline int
>> +rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>> +                           struct rte_pci_device *pci_dev,
>> +                           size_t private_data_size,
>> +                           eventdev_pmd_pci_callback_t devinit,
>> +                           const char *name)
>> +{
>> +       struct rte_eventdev *eventdev;
>> +
>> +       int retval;
>> +
>> +       if (devinit == NULL)
>> +               return -EINVAL;
>> +
>> +       eventdev = rte_event_pmd_allocate(name,
>> +                        pci_dev->device.numa_node);
>> +       if (eventdev == NULL)
>> +               return -ENOMEM;
>> +
>> +       if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>> +               eventdev->data->dev_private =
>> +                               rte_zmalloc_socket(
>> +                                               "eventdev private structure",
>> +                                               private_data_size,
>> +                                               RTE_CACHE_LINE_SIZE,
>> +                                               rte_socket_id());
>> +
>> +               if (eventdev->data->dev_private == NULL)
>> +                       rte_panic("Cannot allocate memzone for private "
>> +                                       "device data");
>> +       }
>> +
>> +       eventdev->dev = &pci_dev->device;
>> +
>> +       /* Invoke PMD device initialization function */
>> +       retval = devinit(eventdev);
>> +       if (retval == 0)
>> +               return 0;
>> +
>> +       RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
>> +                       " failed", pci_drv->driver.name,
>> +                       (unsigned int) pci_dev->id.vendor_id,
>> +                       (unsigned int) pci_dev->id.device_id);
>> +
>> +       rte_event_pmd_release(eventdev);
>> +
>> +       return -ENXIO;
>> +}
>>
>>   /**
>>    * @internal
>> --
>> 2.13.6
>>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites
  @ 2020-06-13  3:59  5%   ` Jerin Jacob
  2020-06-13 10:43  0%     ` Mattias Rönnblom
  2020-06-18 15:44  5%     ` McDaniel, Timothy
  0 siblings, 2 replies; 200+ results
From: Jerin Jacob @ 2020-06-13  3:59 UTC (permalink / raw)
  To: McDaniel, Timothy
  Cc: Jerin Jacob, dpdk-dev, Gage Eads, Van Haaren, Harry,
	Ray Kinsella, Neil Horman, Mattias Rönnblom

On Sat, Jun 13, 2020 at 2:56 AM McDaniel, Timothy
<timothy.mcdaniel@intel.com> wrote:
>
> The DLB hardware does not conform exactly to the eventdev interface.
> 1) It has a limit on the number of queues that may be linked to a port.
> 2) Some ports are further restricted to a maximum of 1 linked queue.
> 3) It does not (currently) have the ability to carry the flow_id as part
> of the event (QE) payload.
>
> Due to the above, we would like to propose the following enhancements.


Thanks, McDaniel. Good to see a new HW PMD for eventdev.

+ Ray and Neil.

Hello McDaniel,
I assume this patchset is for v20.08. It is adding new elements in
public structures. Have you checked for ABI breakage?

I will review the rest of the series if there is NO ABI breakage, as we
cannot have an ABI breakage in the 20.08 version.


ABI validator
~~~~~~~~~~~~~~
1. meson build
2. Compile and install the known-stable ABI libs, i.e. ToT.
         DESTDIR=$PWD/install-meson-stable ninja -C build install
     Compile and install with patches to be verified.
         DESTDIR=$PWD/install-meson-new ninja -C build install
3. Gen ABI for both
        devtools/gen-abi.sh install-meson-stable
        devtools/gen-abi.sh install-meson-new
4. Run abi checker
        devtools/check-abi.sh install-meson-stable install-meson-new


DPDK_ABI_REF_DIR=/build/dpdk/reference/ DPDK_ABI_REF_VERSION=v20.02
./devtools/test-meson-builds.sh
DPDK_ABI_REF_DIR - needs an absolute path, for reasons that are still
unclear to me.
DPDK_ABI_REF_VERSION - you need to use the last DPDK release.

>
> 1) Add new fields to the rte_event_dev_info struct. These fields allow
> the device to advertize its capabilities so that applications can take
> the appropriate actions based on those capabilities.
>
>     struct rte_event_dev_info {
>         uint32_t max_event_port_links;
>         /**< Maximum number of queues that can be linked to a single event
>          * port by this device.
>          */
>
>         uint8_t max_single_link_event_port_queue_pairs;
>         /**< Maximum number of event ports and queues that are optimized for
>          * (and only capable of) single-link configurations supported by this
>          * device. These ports and queues are not accounted for in
>          * max_event_ports or max_event_queues.
>          */
>     }
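
As an illustration (a sketch assuming the fields land as proposed), an
application could size its port/queue layout from these two fields:

#include <stdio.h>
#include <rte_eventdev.h>

/* Report the two proposed limits. */
static void
app_dump_link_limits(uint8_t dev_id)
{
	struct rte_event_dev_info info;

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return;

	printf("max links/port: %u, single-link port/queue pairs: %u\n",
	       info.max_event_port_links,
	       info.max_single_link_event_port_queue_pairs);
}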
>
> 2) Add a new field to the rte_event_dev_config struct. This field allows the
> application to specify how many of its ports are limited to a single link,
> or will be used in single link mode.
>
>     /** Event device configuration structure */
>     struct rte_event_dev_config {
>         uint8_t nb_single_link_event_port_queues;
>         /**< Number of event ports and queues that will be singly-linked to
>          * each other. These are a subset of the overall event ports and
>          * queues; this value cannot exceed *nb_event_ports* or
>          * *nb_event_queues*. If the device has ports and queues that are
>          * optimized for single-link usage, this field is a hint for how many
>          * to allocate; otherwise, regular event ports and queues can be used.
>          */
>     }
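
A configuration sketch under the proposed semantics (the worker counts
are illustrative), reserving one single-link port/queue pair for a TX
stage, as the eventdev_pipeline changes in this patch do:

#include <rte_eventdev.h>

/* Reserve one single-link port/queue pair for a TX stage. */
static int
app_configure(uint8_t dev_id, uint8_t nb_workers, uint8_t nb_worker_queues)
{
	struct rte_event_dev_config config = {
		.nb_event_queues = (uint8_t)(nb_worker_queues + 1),
		.nb_event_ports = (uint8_t)(nb_workers + 1),
		.nb_single_link_event_port_queues = 1,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 128,
		.nb_event_port_enqueue_depth = 128,
	};

	return rte_event_dev_configure(dev_id, &config);
}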
>
> 3) Replace the dedicated disable_implicit_release field with a bit field
> of explicit port capabilities. The implicit-release-disable functionality
> is assigned to one bit, and a port-is-single-link-only attribute is
> assigned to another, with the remaining bits available for future assignment.
>
>         * Event port configuration bitmap flags */
>         #define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
>         /**< Configure the port not to release outstanding events in
>          * rte_event_dev_dequeue_burst(). If set, all events received through
>          * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>          * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>          * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>          */
>         #define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
>
>         /**< This event port links only to a single event queue.
>          *
>          *  @see rte_event_port_setup(), rte_event_port_link()
>          */
>
>         #define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>         /**
>          * The implicit release disable attribute of the port
>          */
>
>         struct rte_event_port_conf {
>                 uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>         }
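
A usage sketch of the replacement flags (the helper name is
hypothetical), including reading the setting back through the new port
attribute:

#include <rte_eventdev.h>

/* Request explicit releases only when the device can disable the
 * implicit release, then read the resulting setting back.
 */
static int
app_setup_port(uint8_t dev_id, uint8_t port_id,
	       const struct rte_event_dev_info *info)
{
	struct rte_event_port_conf conf;
	uint32_t impl_rel_disabled;
	int ret;

	ret = rte_event_port_default_conf_get(dev_id, port_id, &conf);
	if (ret < 0)
		return ret;

	if (info->event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
		conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

	ret = rte_event_port_setup(dev_id, port_id, &conf);
	if (ret < 0)
		return ret;

	return rte_event_port_attr_get(dev_id, port_id,
			RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE,
			&impl_rel_disabled);
}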
>
> 4) Add UMWAIT/UMONITOR bit to rte_cpuflags
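
The runtime check for the new CPU flag is the usual one-liner:

#include <rte_cpuflags.h>

/* Detect UMONITOR/UMWAIT support at runtime. */
static int
app_has_umwait(void)
{
	return rte_cpu_get_flag_enabled(RTE_CPUFLAG_UMWAIT);
}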
>
> 5) Added a new API that is useful for probing PCI devices.
>
>         /**
>          * @internal
>          * Wrapper for use by pci drivers as a .probe function to attach to an event
>          * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
>          * the name.
>          */
>         static inline int
>         rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
>                                     struct rte_pci_device *pci_dev,
>                                     size_t private_data_size,
>                                     eventdev_pmd_pci_callback_t devinit,
>                                     const char *name);
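
A hypothetical PMD-side sketch (the my_pmd_* names are illustrative,
not part of this patch) of how a driver would plug this into its
rte_pci_driver probe callback:

#include <rte_common.h>
#include <rte_eventdev_pmd_pci.h>

struct my_pmd_private { int dummy; };	/* illustrative private data */

static int
my_pmd_init(struct rte_eventdev *eventdev)
{
	RTE_SET_USED(eventdev);
	return 0;	/* device-specific init would go here */
}

static int
my_pmd_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
	return rte_event_pmd_pci_probe_named(pci_drv, pci_dev,
			sizeof(struct my_pmd_private),
			my_pmd_init, "my_eventdev");
}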
>
> Change-Id: I4cf00015296e2b3feca9886895765554730594be
> Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> ---
>  app/test-eventdev/evt_common.h                     |  1 +
>  app/test-eventdev/test_order_atq.c                 |  4 ++
>  app/test-eventdev/test_order_common.c              |  6 ++-
>  app/test-eventdev/test_order_queue.c               |  4 ++
>  app/test-eventdev/test_perf_atq.c                  |  1 +
>  app/test-eventdev/test_perf_queue.c                |  1 +
>  app/test-eventdev/test_pipeline_atq.c              |  1 +
>  app/test-eventdev/test_pipeline_queue.c            |  1 +
>  app/test/test_eventdev.c                           |  4 +-
>  drivers/event/dpaa2/dpaa2_eventdev.c               |  2 +-
>  drivers/event/octeontx/ssovf_evdev.c               |  2 +-
>  drivers/event/skeleton/skeleton_eventdev.c         |  2 +-
>  drivers/event/sw/sw_evdev.c                        |  5 +-
>  drivers/event/sw/sw_evdev_selftest.c               |  9 ++--
>  .../eventdev_pipeline/pipeline_worker_generic.c    |  8 ++-
>  examples/eventdev_pipeline/pipeline_worker_tx.c    |  3 ++
>  examples/l2fwd-event/l2fwd_event_generic.c         |  5 +-
>  examples/l2fwd-event/l2fwd_event_internal_port.c   |  5 +-
>  examples/l3fwd/l3fwd_event_generic.c               |  5 +-
>  examples/l3fwd/l3fwd_event_internal_port.c         |  5 +-
>  lib/librte_eal/x86/include/rte_cpuflags.h          |  1 +
>  lib/librte_eal/x86/rte_cpuflags.c                  |  1 +
>  lib/librte_eventdev/rte_event_eth_tx_adapter.c     |  2 +-
>  lib/librte_eventdev/rte_eventdev.c                 | 62 +++++++++++++++++++---
>  lib/librte_eventdev/rte_eventdev.h                 | 51 +++++++++++++++---
>  lib/librte_eventdev/rte_eventdev_pmd_pci.h         | 54 +++++++++++++++++++
>  26 files changed, 208 insertions(+), 37 deletions(-)
>
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f9d7378d3..120c27b33 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -169,6 +169,7 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues,
>                         .dequeue_timeout_ns = opt->deq_tmo_nsec,
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = info.max_num_events,
>                         .nb_event_queue_flows = opt->nb_flows,
>                         .nb_event_port_dequeue_depth =
> diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
> index 3366cfce9..8246b96f0 100644
> --- a/app/test-eventdev/test_order_atq.c
> +++ b/app/test-eventdev/test_order_atq.c
> @@ -34,6 +34,8 @@ order_atq_worker(void *arg)
>                         continue;
>                 }
>
> +               ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.sub_event_type == 0) { /* stage 0 from producer */
>                         order_atq_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -68,6 +70,8 @@ order_atq_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +                       ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].sub_event_type == 0) { /*stage 0 */
>                                 order_atq_process_stage_0(&ev[i]);
>                         } else if (ev[i].sub_event_type == 1) { /* stage 1 */
> diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
> index 4190f9ade..c6fcd0509 100644
> --- a/app/test-eventdev/test_order_common.c
> +++ b/app/test-eventdev/test_order_common.c
> @@ -49,6 +49,7 @@ order_producer(void *arg)
>                 const uint32_t flow = (uintptr_t)m % nb_flows;
>                 /* Maintain seq number per flow */
>                 m->seqn = producer_flow_seq[flow]++;
> +               m->udata64 = flow;
>
>                 ev.flow_id = flow;
>                 ev.mbuf = m;
> @@ -318,10 +319,11 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>                 opt->wkr_deq_dep = dev_info.max_event_port_dequeue_depth;
>
>         /* port configuration */
> -       const struct rte_event_port_conf p_conf = {
> +       struct rte_event_port_conf p_conf = {
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         /* setup one port per worker, linking to all queues */
> @@ -351,6 +353,8 @@ order_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>         p->queue_id = 0;
>         p->t = t;
>
> +       p_conf.new_event_threshold /= 2;
> +
>         ret = rte_event_port_setup(opt->dev_id, port, &p_conf);
>         if (ret) {
>                 evt_err("failed to setup producer port %d", port);
> diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
> index 495efd92f..a0a2187a2 100644
> --- a/app/test-eventdev/test_order_queue.c
> +++ b/app/test-eventdev/test_order_queue.c
> @@ -34,6 +34,8 @@ order_queue_worker(void *arg)
>                         continue;
>                 }
>
> +               ev.flow_id = ev.mbuf->udata64;
> +
>                 if (ev.queue_id == 0) { /* from ordered queue */
>                         order_queue_process_stage_0(&ev);
>                         while (rte_event_enqueue_burst(dev_id, port, &ev, 1)
> @@ -68,6 +70,8 @@ order_queue_worker_burst(void *arg)
>                 }
>
>                 for (i = 0; i < nb_rx; i++) {
> +                       ev[i].flow_id = ev[i].mbuf->udata64;
> +
>                         if (ev[i].queue_id == 0) { /* from ordered queue */
>                                 order_queue_process_stage_0(&ev[i]);
>                         } else if (ev[i].queue_id == 1) {/* from atomic queue */
> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
> index 8fd51004e..10846f202 100644
> --- a/app/test-eventdev/test_perf_atq.c
> +++ b/app/test-eventdev/test_perf_atq.c
> @@ -204,6 +204,7 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues,
> diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
> index f4ea3a795..a0119da60 100644
> --- a/app/test-eventdev/test_perf_queue.c
> +++ b/app/test-eventdev/test_perf_queue.c
> @@ -219,6 +219,7 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = dev_info.max_event_port_dequeue_depth,
>                         .new_event_threshold = dev_info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         ret = perf_event_dev_port_setup(test, opt, nb_stages /* stride */,
> diff --git a/app/test-eventdev/test_pipeline_atq.c b/app/test-eventdev/test_pipeline_atq.c
> index 8e8686c14..a95ec0aa5 100644
> --- a/app/test-eventdev/test_pipeline_atq.c
> +++ b/app/test-eventdev/test_pipeline_atq.c
> @@ -356,6 +356,7 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                 .dequeue_depth = opt->wkr_deq_dep,
>                 .enqueue_depth = info.max_event_port_dequeue_depth,
>                 .new_event_threshold = info.max_num_events,
> +               .event_port_cfg = 0,
>         };
>
>         if (!t->internal_port)
> diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
> index 7bebac34f..30817dc78 100644
> --- a/app/test-eventdev/test_pipeline_queue.c
> +++ b/app/test-eventdev/test_pipeline_queue.c
> @@ -379,6 +379,7 @@ pipeline_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>                         .dequeue_depth = opt->wkr_deq_dep,
>                         .enqueue_depth = info.max_event_port_dequeue_depth,
>                         .new_event_threshold = info.max_num_events,
> +                       .event_port_cfg = 0,
>         };
>
>         if (!t->internal_port) {
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 43ccb1ce9..62019c185 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -559,10 +559,10 @@ test_eventdev_port_setup(void)
>         if (!(info.event_dev_cap &
>               RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>                 pconf.enqueue_depth = info.max_event_port_enqueue_depth;
> -               pconf.disable_implicit_release = 1;
> +               pconf.event_port_cfg = RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>                 ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
>                 TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
> -               pconf.disable_implicit_release = 0;
> +               pconf.event_port_cfg = 0;
>         }
>
>         ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
> diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
> index a196ad4c6..8568bfcfc 100644
> --- a/drivers/event/dpaa2/dpaa2_eventdev.c
> +++ b/drivers/event/dpaa2/dpaa2_eventdev.c
> @@ -537,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>                 DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
>         port_conf->enqueue_depth =
>                 DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static int
> diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
> index 1b1a5d939..99c0b2efb 100644
> --- a/drivers/event/octeontx/ssovf_evdev.c
> +++ b/drivers/event/octeontx/ssovf_evdev.c
> @@ -224,7 +224,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = edev->max_num_events;
>         port_conf->dequeue_depth = 1;
>         port_conf->enqueue_depth = 1;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static void
> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
> index c889220e0..37d569b8c 100644
> --- a/drivers/event/skeleton/skeleton_eventdev.c
> +++ b/drivers/event/skeleton/skeleton_eventdev.c
> @@ -209,7 +209,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = 32 * 1024;
>         port_conf->dequeue_depth = 16;
>         port_conf->enqueue_depth = 16;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static void
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index fb8e8bebb..0b3dd9c1c 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
>         }
>
>         p->inflight_max = conf->new_event_threshold;
> -       p->implicit_release = !conf->disable_implicit_release;
> +       p->implicit_release = !(conf->event_port_cfg &
> +                               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>
>         /* check if ring exists, same as rx_worker above */
>         snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
> @@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
>         port_conf->new_event_threshold = 1024;
>         port_conf->dequeue_depth = 16;
>         port_conf->enqueue_depth = 16;
> -       port_conf->disable_implicit_release = 0;
> +       port_conf->event_port_cfg = 0;
>  }
>
>  static int
> diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
> index 38c21fa0f..a78d6cd0d 100644
> --- a/drivers/event/sw/sw_evdev_selftest.c
> +++ b/drivers/event/sw/sw_evdev_selftest.c
> @@ -172,7 +172,7 @@ create_ports(struct test *t, int num_ports)
>                         .new_event_threshold = 1024,
>                         .dequeue_depth = 32,
>                         .enqueue_depth = 64,
> -                       .disable_implicit_release = 0,
> +                       .event_port_cfg = 0,
>         };
>         if (num_ports > MAX_PORTS)
>                 return -1;
> @@ -1227,7 +1227,7 @@ port_reconfig_credits(struct test *t)
>                                 .new_event_threshold = 128,
>                                 .dequeue_depth = 32,
>                                 .enqueue_depth = 64,
> -                               .disable_implicit_release = 0,
> +                               .event_port_cfg = 0,
>                 };
>                 if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>                         printf("%d Error setting up port\n", __LINE__);
> @@ -1317,7 +1317,7 @@ port_single_lb_reconfig(struct test *t)
>                 .new_event_threshold = 128,
>                 .dequeue_depth = 32,
>                 .enqueue_depth = 64,
> -               .disable_implicit_release = 0,
> +               .event_port_cfg = 0,
>         };
>         if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
>                 printf("%d Error setting up port\n", __LINE__);
> @@ -3079,7 +3079,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
>          * only be initialized once - and this needs to be set for multiple runs
>          */
>         conf.new_event_threshold = 512;
> -       conf.disable_implicit_release = disable_implicit_release;
> +       conf.event_port_cfg = disable_implicit_release ?
> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
>         if (rte_event_port_setup(evdev, 0, &conf) < 0) {
>                 printf("Error setting up RX port\n");
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 42ff4eeb9..a091da3ba 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -129,6 +129,7 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 1,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> @@ -138,12 +139,13 @@ setup_eventdev_generic(struct worker_data *worker_data)
>                         .dequeue_depth = cdata.worker_cq_depth,
>                         .enqueue_depth = 64,
>                         .new_event_threshold = 4096,
> +                       .event_port_cfg = 0,
>         };
>         struct rte_event_queue_conf wkr_q_conf = {
>                         .schedule_type = cdata.queue_type,
>                         .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>                         .nb_atomic_flows = 1024,
> -               .nb_atomic_order_sequences = 1024,
> +                       .nb_atomic_order_sequences = 1024,
>         };
>         struct rte_event_queue_conf tx_q_conf = {
>                         .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> @@ -167,7 +169,8 @@ setup_eventdev_generic(struct worker_data *worker_data)
>         disable_implicit_release = (dev_info.event_dev_cap &
>                         RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>
> -       wkr_p_conf.disable_implicit_release = disable_implicit_release;
> +       wkr_p_conf.event_port_cfg = disable_implicit_release ?
> +               RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
>         if (dev_info.max_num_events < config.nb_events_limit)
>                 config.nb_events_limit = dev_info.max_num_events;
> @@ -417,6 +420,7 @@ init_adapters(uint16_t nb_ports)
>                 .dequeue_depth = cdata.worker_cq_depth,
>                 .enqueue_depth = 64,
>                 .new_event_threshold = 4096,
> +               .event_port_cfg = 0,
>         };
>
>         if (adptr_p_conf.new_event_threshold > dev_info.max_num_events)
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index 55bb2f762..e8a9652aa 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -436,6 +436,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>         struct rte_event_dev_config config = {
>                         .nb_event_queues = nb_queues,
>                         .nb_event_ports = nb_ports,
> +                       .nb_single_link_event_port_queues = 0,
>                         .nb_events_limit  = 4096,
>                         .nb_event_queue_flows = 1024,
>                         .nb_event_port_dequeue_depth = 128,
> @@ -445,6 +446,7 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
>                         .dequeue_depth = cdata.worker_cq_depth,
>                         .enqueue_depth = 64,
>                         .new_event_threshold = 4096,
> +                       .event_port_cfg = 0,
>         };
>         struct rte_event_queue_conf wkr_q_conf = {
>                         .schedule_type = cdata.queue_type,
> @@ -746,6 +748,7 @@ init_adapters(uint16_t nb_ports)
>                 .dequeue_depth = cdata.worker_cq_depth,
>                 .enqueue_depth = 64,
>                 .new_event_threshold = 4096,
> +               .event_port_cfg = 0,
>         };
>
>         init_ports(nb_ports);
> diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
> index 2dc95e5f7..e01df0435 100644
> --- a/examples/l2fwd-event/l2fwd_event_generic.c
> +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> @@ -126,8 +126,9 @@ l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
> index 63d57b46c..f54327b4f 100644
> --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> @@ -123,8 +123,9 @@ l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> index f8c98435d..409a4107e 100644
> --- a/examples/l3fwd/l3fwd_event_generic.c
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -115,8 +115,9 @@ l3fwd_event_port_setup_generic(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>         evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> index 03ac581d6..df410f10f 100644
> --- a/examples/l3fwd/l3fwd_event_internal_port.c
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -113,8 +113,9 @@ l3fwd_event_port_setup_internal_port(void)
>         if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
>                 event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
>
> -       event_p_conf.disable_implicit_release =
> -               evt_rsrc->disable_implicit_release;
> +       event_p_conf.event_port_cfg = 0;
> +       if (evt_rsrc->disable_implicit_release)
> +               event_p_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;
>
>         for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
>                                                                 event_p_id++) {
> diff --git a/lib/librte_eal/x86/include/rte_cpuflags.h b/lib/librte_eal/x86/include/rte_cpuflags.h
> index c1d20364d..ab2c3b379 100644
> --- a/lib/librte_eal/x86/include/rte_cpuflags.h
> +++ b/lib/librte_eal/x86/include/rte_cpuflags.h
> @@ -130,6 +130,7 @@ enum rte_cpu_flag_t {
>         RTE_CPUFLAG_CLDEMOTE,               /**< Cache Line Demote */
>         RTE_CPUFLAG_MOVDIRI,                /**< Direct Store Instructions */
>         RTE_CPUFLAG_MOVDIR64B,              /**< Direct Store Instructions 64B */
> +       RTE_CPUFLAG_UMWAIT,                 /**< UMONITOR/UMWAIT */
>         RTE_CPUFLAG_AVX512VP2INTERSECT,     /**< AVX512 Two Register Intersection */
>
>         /* The last item */
> diff --git a/lib/librte_eal/x86/rte_cpuflags.c b/lib/librte_eal/x86/rte_cpuflags.c
> index 30439e795..69ac0dbce 100644
> --- a/lib/librte_eal/x86/rte_cpuflags.c
> +++ b/lib/librte_eal/x86/rte_cpuflags.c
> @@ -137,6 +137,7 @@ const struct feature_entry rte_cpu_feature_table[] = {
>         FEAT_DEF(CLDEMOTE, 0x00000007, 0, RTE_REG_ECX, 25)
>         FEAT_DEF(MOVDIRI, 0x00000007, 0, RTE_REG_ECX, 27)
>         FEAT_DEF(MOVDIR64B, 0x00000007, 0, RTE_REG_ECX, 28)
> +        FEAT_DEF(UMWAIT, 0x00000007, 0, RTE_REG_ECX, 5)
>         FEAT_DEF(AVX512VP2INTERSECT, 0x00000007, 0, RTE_REG_EDX, 8)
>  };
>
> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> index bb21dc407..8a72256de 100644
> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
> @@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
>                 return ret;
>         }
>
> -       pc->disable_implicit_release = 0;
> +       pc->event_port_cfg = 0;
>         ret = rte_event_port_setup(dev_id, port_id, pc);
>         if (ret) {
>                 RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index 82c177c73..4955ab1a0 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -437,9 +437,29 @@ rte_event_dev_configure(uint8_t dev_id,
>                                         dev_id);
>                 return -EINVAL;
>         }
> -       if (dev_conf->nb_event_queues > info.max_event_queues) {
> -               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
> -               dev_id, dev_conf->nb_event_queues, info.max_event_queues);
> +       if (dev_conf->nb_event_queues > info.max_event_queues +
> +                       info.max_single_link_event_port_queue_pairs) {
> +               RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
> +                                dev_id, dev_conf->nb_event_queues,
> +                                info.max_event_queues,
> +                                info.max_single_link_event_port_queue_pairs);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_event_queues -
> +                       dev_conf->nb_single_link_event_port_queues >
> +                       info.max_event_queues) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
> +                                dev_id, dev_conf->nb_event_queues,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                info.max_event_queues);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_single_link_event_port_queues >
> +                       dev_conf->nb_event_queues) {
> +               RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
> +                                dev_id,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                dev_conf->nb_event_queues);
>                 return -EINVAL;
>         }
>
> @@ -448,9 +468,31 @@ rte_event_dev_configure(uint8_t dev_id,
>                 RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
>                 return -EINVAL;
>         }
> -       if (dev_conf->nb_event_ports > info.max_event_ports) {
> -               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
> -               dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> +       if (dev_conf->nb_event_ports > info.max_event_ports +
> +                       info.max_single_link_event_port_queue_pairs) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
> +                                dev_id, dev_conf->nb_event_ports,
> +                                info.max_event_ports,
> +                                info.max_single_link_event_port_queue_pairs);
> +               return -EINVAL;
> +       }
> +       if (dev_conf->nb_event_ports -
> +                       dev_conf->nb_single_link_event_port_queues
> +                       > info.max_event_ports) {
> +               RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
> +                                dev_id, dev_conf->nb_event_ports,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                info.max_event_ports);
> +               return -EINVAL;
> +       }
> +
> +       if (dev_conf->nb_single_link_event_port_queues >
> +           dev_conf->nb_event_ports) {
> +               RTE_EDEV_LOG_ERR(
> +                                "dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
> +                                dev_id,
> +                                dev_conf->nb_single_link_event_port_queues,
> +                                dev_conf->nb_event_ports);
>                 return -EINVAL;
>         }
>
> @@ -737,7 +779,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>                 return -EINVAL;
>         }
>
> -       if (port_conf && port_conf->disable_implicit_release &&
> +       if (port_conf &&
> +           (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
>             !(dev->data->event_dev_cap &
>               RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
>                 RTE_EDEV_LOG_ERR(
> @@ -809,6 +852,7 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>                         uint32_t *attr_value)
>  {
>         struct rte_eventdev *dev;
> +       uint32_t config;
>
>         if (!attr_value)
>                 return -EINVAL;
> @@ -830,6 +874,10 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
>         case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
>                 *attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
>                 break;
> +       case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
> +               config = dev->data->ports_cfg[port_id].event_port_cfg;
> +               *attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
> +               break;
>         default:
>                 return -EINVAL;
>         };
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index 7dc832353..7f7a8a275 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -291,6 +291,13 @@ struct rte_event;
>   * single queue to each port or map a single queue to many port.
>   */
>
> +#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
> +/**< Event device is capable of carrying the flow ID from the enqueued
> + * event to the dequeued event. If the flag is set, the dequeued event's flow
> + * ID matches the corresponding enqueued event's flow ID. If the flag is not
> + * set, the dequeued event's flow ID field is uninitialized.
> + */
> +
>  /* Event device priority levels */
>  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>  /**< Highest priority expressed across eventdev subsystem
> @@ -380,6 +387,10 @@ struct rte_event_dev_info {
>          * event port by this device.
>          * A device that does not support bulk enqueue will set this as 1.
>          */
> +       uint32_t max_event_port_links;
> +       /**< Maximum number of queues that can be linked to a single event
> +        * port by this device.
> +        */
>         int32_t max_num_events;
>         /**< A *closed system* event dev has a limit on the number of events it
>          * can manage at a time. An *open system* event dev does not have a
> @@ -387,6 +398,12 @@ struct rte_event_dev_info {
>          */
>         uint32_t event_dev_cap;
>         /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
> +       uint8_t max_single_link_event_port_queue_pairs;
> +       /**< Maximum number of event ports and queues that are optimized for
> +        * (and only capable of) single-link configurations supported by this
> +        * device. These ports and queues are not accounted for in
> +        * max_event_ports or max_event_queues.
> +        */
>  };
>
>  /**
> @@ -494,6 +511,14 @@ struct rte_event_dev_config {
>          */
>         uint32_t event_dev_cfg;
>         /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> +       uint8_t nb_single_link_event_port_queues;
> +       /**< Number of event ports and queues that will be singly-linked to
> +        * each other. These are a subset of the overall event ports and
> +        * queues; this value cannot exceed *nb_event_ports* or
> +        * *nb_event_queues*. If the device has ports and queues that are
> +        * optimized for single-link usage, this field is a hint for how many
> +        * to allocate; otherwise, regular event ports and queues can be used.
> +        */
>  };
>
>  /**
> @@ -671,6 +696,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>
>  /* Event port specific APIs */
>
> +/* Event port configuration bitmap flags */
> +#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL    (1ULL << 0)
> +/**< Configure the port not to release outstanding events in
> + * rte_event_dev_dequeue_burst(). If set, all events received through
> + * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
> + * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
> + * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
> + */
> +#define RTE_EVENT_PORT_CFG_SINGLE_LINK         (1ULL << 1)
> +/**< This event port links only to a single event queue.
> + *
> + *  @see rte_event_port_setup(), rte_event_port_link()
> + */
> +
>  /** Event port configuration structure */
>  struct rte_event_port_conf {
>         int32_t new_event_threshold;
> @@ -698,13 +737,7 @@ struct rte_event_port_conf {
>          * which previously supplied to rte_event_dev_configure().
>          * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
>          */
> -       uint8_t disable_implicit_release;
> -       /**< Configure the port not to release outstanding events in
> -        * rte_event_dev_dequeue_burst(). If true, all events received through
> -        * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
> -        * RTE_EVENT_OP_FORWARD. Must be false when the device is not
> -        * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
> -        */
> +       uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
>  };
>
>  /**
> @@ -769,6 +802,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>   * The new event threshold of the port
>   */
>  #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
> +/**
> + * The implicit release disable attribute of the port
> + */
> +#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>
>  /**
>   * Get an attribute from a port.
> diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> index 443cd38c2..157299983 100644
> --- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> +++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
> @@ -88,6 +88,60 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>         return -ENXIO;
>  }
>
> +/**
> + * @internal
> + * Wrapper for use by pci drivers as a .probe function to attach to an event
> + * interface.  Same as rte_event_pmd_pci_probe, except caller can specify
> + * the name.
> + */
> +static inline int
> +rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
> +                           struct rte_pci_device *pci_dev,
> +                           size_t private_data_size,
> +                           eventdev_pmd_pci_callback_t devinit,
> +                           const char *name)
> +{
> +       struct rte_eventdev *eventdev;
> +
> +       int retval;
> +
> +       if (devinit == NULL)
> +               return -EINVAL;
> +
> +       eventdev = rte_event_pmd_allocate(name,
> +                        pci_dev->device.numa_node);
> +       if (eventdev == NULL)
> +               return -ENOMEM;
> +
> +       if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +               eventdev->data->dev_private =
> +                               rte_zmalloc_socket(
> +                                               "eventdev private structure",
> +                                               private_data_size,
> +                                               RTE_CACHE_LINE_SIZE,
> +                                               rte_socket_id());
> +
> +               if (eventdev->data->dev_private == NULL)
> +                       rte_panic("Cannot allocate memzone for private "
> +                                       "device data");
> +       }
> +
> +       eventdev->dev = &pci_dev->device;
> +
> +       /* Invoke PMD device initialization function */
> +       retval = devinit(eventdev);
> +       if (retval == 0)
> +               return 0;
> +
> +       RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
> +                       " failed", pci_drv->driver.name,
> +                       (unsigned int) pci_dev->id.vendor_id,
> +                       (unsigned int) pci_dev->id.device_id);
> +
> +       rte_event_pmd_release(eventdev);
> +
> +       return -ENXIO;
> +}
>
>  /**
>   * @internal
> --
> 2.13.6
>

^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v3 09/10] doc: add note about blacklist/whitelist changes
  @ 2020-06-13  0:00  4% ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-06-13  0:00 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Luca Boccassi

The blacklist/whitelist API changes will not be a breaking
change for applications in this release, but it is worth adding
a note to encourage migration.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Luca Boccassi <bluca@debian.org>
---
 doc/guides/rel_notes/release_20_08.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5887..9e68544e7920 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* eal: The definitions related to including and excluding devices
+  have been changed from blacklist/whitelist to blocklist/allowlist.
+  There are compatibility macros and command-line mappings to accept
+  the old values, but applications and scripts are strongly encouraged
+  to migrate to the new names.
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 08/27] event/dlb: add definitions shared with LKM or shared code
      2020-06-12 21:24  1% ` [dpdk-dev] [PATCH 03/27] event/dlb: add shared code version 10.7.9 McDaniel, Timothy
@ 2020-06-12 21:24  1% ` McDaniel, Timothy
  2 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-06-12 21:24 UTC (permalink / raw)
  To: jerinj; +Cc: dev, gage.eads, harry.van.haaren

Change-Id: Ie39013936676771d096c2166b2a4745cdeb772b0
Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb/dlb_user.h | 1351 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1351 insertions(+)
 create mode 100644 drivers/event/dlb/dlb_user.h

diff --git a/drivers/event/dlb/dlb_user.h b/drivers/event/dlb/dlb_user.h
new file mode 100644
index 000000000..f2dcee190
--- /dev/null
+++ b/drivers/event/dlb/dlb_user.h
@@ -0,0 +1,1351 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_USER_H
+#define __DLB_USER_H
+
+#define DLB_MAX_NAME_LEN 64
+
+#include <linux/types.h>
+
+enum dlb_error {
+	DLB_ST_SUCCESS = 0,
+	DLB_ST_NAME_EXISTS,
+	DLB_ST_DOMAIN_UNAVAILABLE,
+	DLB_ST_LDB_PORTS_UNAVAILABLE,
+	DLB_ST_DIR_PORTS_UNAVAILABLE,
+	DLB_ST_LDB_QUEUES_UNAVAILABLE,
+	DLB_ST_LDB_CREDITS_UNAVAILABLE,
+	DLB_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE,
+	DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE,
+	DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
+	DLB_ST_INVALID_DOMAIN_ID,
+	DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION,
+	DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE,
+	DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_INVALID_LDB_CREDIT_POOL_ID,
+	DLB_ST_INVALID_DIR_CREDIT_POOL_ID,
+	DLB_ST_INVALID_POP_COUNT_VIRT_ADDR,
+	DLB_ST_INVALID_LDB_QUEUE_ID,
+	DLB_ST_INVALID_CQ_DEPTH,
+	DLB_ST_INVALID_CQ_VIRT_ADDR,
+	DLB_ST_INVALID_PORT_ID,
+	DLB_ST_INVALID_QID,
+	DLB_ST_INVALID_PRIORITY,
+	DLB_ST_NO_QID_SLOTS_AVAILABLE,
+	DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_INVALID_DIR_QUEUE_ID,
+	DLB_ST_DIR_QUEUES_UNAVAILABLE,
+	DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK,
+	DLB_ST_INVALID_LDB_CREDIT_QUANTUM,
+	DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK,
+	DLB_ST_INVALID_DIR_CREDIT_QUANTUM,
+	DLB_ST_DOMAIN_NOT_CONFIGURED,
+	DLB_ST_PID_ALREADY_ATTACHED,
+	DLB_ST_PID_NOT_ATTACHED,
+	DLB_ST_INTERNAL_ERROR,
+	DLB_ST_DOMAIN_IN_USE,
+	DLB_ST_IOMMU_MAPPING_ERROR,
+	DLB_ST_FAIL_TO_PIN_MEMORY_PAGE,
+	DLB_ST_UNABLE_TO_PIN_POPCOUNT_PAGES,
+	DLB_ST_UNABLE_TO_PIN_CQ_PAGES,
+	DLB_ST_DISCONTIGUOUS_CQ_MEMORY,
+	DLB_ST_DISCONTIGUOUS_POP_COUNT_MEMORY,
+	DLB_ST_DOMAIN_STARTED,
+	DLB_ST_LARGE_POOL_NOT_SPECIFIED,
+	DLB_ST_SMALL_POOL_NOT_SPECIFIED,
+	DLB_ST_NEITHER_POOL_SPECIFIED,
+	DLB_ST_DOMAIN_NOT_STARTED,
+	DLB_ST_INVALID_MEASUREMENT_DURATION,
+	DLB_ST_INVALID_PERF_METRIC_GROUP_ID,
+	DLB_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES,
+	DLB_ST_DOMAIN_RESET_FAILED,
+	DLB_ST_MBOX_ERROR,
+	DLB_ST_INVALID_HIST_LIST_DEPTH,
+	DLB_ST_NO_MEMORY,
+};
+
+static const char dlb_error_strings[][128] = {
+	"DLB_ST_SUCCESS",
+	"DLB_ST_NAME_EXISTS",
+	"DLB_ST_DOMAIN_UNAVAILABLE",
+	"DLB_ST_LDB_PORTS_UNAVAILABLE",
+	"DLB_ST_DIR_PORTS_UNAVAILABLE",
+	"DLB_ST_LDB_QUEUES_UNAVAILABLE",
+	"DLB_ST_LDB_CREDITS_UNAVAILABLE",
+	"DLB_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE",
+	"DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE",
+	"DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
+	"DLB_ST_INVALID_DOMAIN_ID",
+	"DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION",
+	"DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE",
+	"DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_INVALID_LDB_CREDIT_POOL_ID",
+	"DLB_ST_INVALID_DIR_CREDIT_POOL_ID",
+	"DLB_ST_INVALID_POP_COUNT_VIRT_ADDR",
+	"DLB_ST_INVALID_LDB_QUEUE_ID",
+	"DLB_ST_INVALID_CQ_DEPTH",
+	"DLB_ST_INVALID_CQ_VIRT_ADDR",
+	"DLB_ST_INVALID_PORT_ID",
+	"DLB_ST_INVALID_QID",
+	"DLB_ST_INVALID_PRIORITY",
+	"DLB_ST_NO_QID_SLOTS_AVAILABLE",
+	"DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_INVALID_DIR_QUEUE_ID",
+	"DLB_ST_DIR_QUEUES_UNAVAILABLE",
+	"DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK",
+	"DLB_ST_INVALID_LDB_CREDIT_QUANTUM",
+	"DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK",
+	"DLB_ST_INVALID_DIR_CREDIT_QUANTUM",
+	"DLB_ST_DOMAIN_NOT_CONFIGURED",
+	"DLB_ST_PID_ALREADY_ATTACHED",
+	"DLB_ST_PID_NOT_ATTACHED",
+	"DLB_ST_INTERNAL_ERROR",
+	"DLB_ST_DOMAIN_IN_USE",
+	"DLB_ST_IOMMU_MAPPING_ERROR",
+	"DLB_ST_FAIL_TO_PIN_MEMORY_PAGE",
+	"DLB_ST_UNABLE_TO_PIN_POPCOUNT_PAGES",
+	"DLB_ST_UNABLE_TO_PIN_CQ_PAGES",
+	"DLB_ST_DISCONTIGUOUS_CQ_MEMORY",
+	"DLB_ST_DISCONTIGUOUS_POP_COUNT_MEMORY",
+	"DLB_ST_DOMAIN_STARTED",
+	"DLB_ST_LARGE_POOL_NOT_SPECIFIED",
+	"DLB_ST_SMALL_POOL_NOT_SPECIFIED",
+	"DLB_ST_NEITHER_POOL_SPECIFIED",
+	"DLB_ST_DOMAIN_NOT_STARTED",
+	"DLB_ST_INVALID_MEASUREMENT_DURATION",
+	"DLB_ST_INVALID_PERF_METRIC_GROUP_ID",
+	"DLB_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES",
+	"DLB_ST_DOMAIN_RESET_FAILED",
+	"DLB_ST_MBOX_ERROR",
+	"DLB_ST_INVALID_HIST_LIST_DEPTH",
+	"DLB_ST_NO_MEMORY",
+};
+
+struct dlb_cmd_response {
+	__u32 status; /* Interpret using enum dlb_error */
+	__u32 id;
+};
+
+/******************************/
+/* 'dlb' device file commands */
+/******************************/
+
+#define DLB_DEVICE_VERSION(x) (((x) >> 8) & 0xFF)
+#define DLB_DEVICE_REVISION(x) ((x) & 0xFF)
+
+enum dlb_revisions {
+	DLB_REV_A0 = 0,
+	DLB_REV_A1 = 1,
+	DLB_REV_A2 = 2,
+	DLB_REV_A3 = 3,
+	DLB_REV_B0 = 4,
+};
+
+/*
+ * DLB_CMD_GET_DEVICE_VERSION: Query the DLB device version.
+ *
+ *	This ioctl interface is the same in all driver versions and is always
+ *	the first ioctl.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id[7:0]: Device revision.
+ *	response.id[15:8]: Device version.
+ */
+
+struct dlb_get_device_version_args {
+	/* Output parameters */
+	__u64 response;
+};
+
+#define DLB_VERSION_MAJOR_NUMBER 10
+#define DLB_VERSION_MINOR_NUMBER 7
+#define DLB_VERSION_REVISION_NUMBER 9
+#define DLB_VERSION (DLB_VERSION_MAJOR_NUMBER << 24 | \
+		     DLB_VERSION_MINOR_NUMBER << 16 | \
+		     DLB_VERSION_REVISION_NUMBER)
+
+#define DLB_VERSION_GET_MAJOR_NUMBER(x) (((x) >> 24) & 0xFF)
+#define DLB_VERSION_GET_MINOR_NUMBER(x) (((x) >> 16) & 0xFF)
+#define DLB_VERSION_GET_REVISION_NUMBER(x) ((x) & 0xFFFF)
+
+static inline __u8 dlb_version_incompatible(__u32 version)
+{
+	__u8 inc;
+
+	inc = DLB_VERSION_GET_MAJOR_NUMBER(version) != DLB_VERSION_MAJOR_NUMBER;
+	inc |= (int)DLB_VERSION_GET_MINOR_NUMBER(version) <
+		DLB_VERSION_MINOR_NUMBER;
+
+	return inc;
+}
+
+/*
+ * DLB_CMD_GET_DRIVER_VERSION: Query the DLB driver version. The major number
+ *	is changed when there is an ABI-breaking change, the minor number is
+ *	changed if the API is changed in a backwards-compatible way, and the
+ *	revision number is changed for fixes that don't affect the API.
+ *
+ *	If the kernel driver's API major version differs from the header's
+ *	DLB_VERSION_MAJOR_NUMBER, the two are incompatible. They are also
+ *	incompatible if the major numbers match but the kernel driver's minor
+ *	number is less than the header file's. The dlb_version_incompatible()
+ *	helper above should be used to check for compatibility.
+ *
+ *	This ioctl interface is the same in all driver versions. Applications
+ *	should check the driver version before performing any other ioctl
+ *	operations.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Driver API version. Use the DLB_VERSION_GET_MAJOR_NUMBER,
+ *		DLB_VERSION_GET_MINOR_NUMBER, and
+ *		DLB_VERSION_GET_REVISION_NUMBER macros to interpret the field.
+ */
+
+struct dlb_get_driver_version_args {
+	/* Output parameters */
+	__u64 response;
+};
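
As a usage illustration, a hedged sketch of the recommended first step for an application; the fd is assumed to be open on the 'dlb' device file and this header is assumed to be included as dlb_user.h:

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: verify driver/header compatibility before any other ioctl. */
	static int dlb_check_driver_version(int fd)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_get_driver_version_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;

		if (ioctl(fd, DLB_IOC_GET_DRIVER_VERSION, &args) == -1)
			return -1;

		if (dlb_version_incompatible(resp.id)) {
			fprintf(stderr, "driver API %u.%u vs header %u.%u\n",
				DLB_VERSION_GET_MAJOR_NUMBER(resp.id),
				DLB_VERSION_GET_MINOR_NUMBER(resp.id),
				DLB_VERSION_MAJOR_NUMBER,
				DLB_VERSION_MINOR_NUMBER);
			return -1;
		}

		return 0;
	}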
+
+/*
+ * DLB_CMD_CREATE_SCHED_DOMAIN: Create a DLB scheduling domain and reserve the
+ *	resources (queues, ports, etc.) that it contains.
+ *
+ * Input parameters:
+ * - num_ldb_queues: Number of load-balanced queues.
+ * - num_ldb_ports: Number of load-balanced ports.
+ * - num_dir_ports: Number of directed ports. A directed port has one directed
+ *	queue, so no num_dir_queues argument is necessary.
+ * - num_atomic_inflights: This specifies the amount of temporary atomic QE
+ *	storage for the domain. This storage is divided among the domain's
+ *	load-balanced queues that are configured for atomic scheduling.
+ * - num_hist_list_entries: Amount of history list storage. This is divided
+ *	among the domain's CQs.
+ * - num_ldb_credits: Amount of load-balanced QE storage (QED). QEs occupy this
+ *	space until they are scheduled to a load-balanced CQ. One credit
+ *	represents the storage for one QE.
+ * - num_dir_credits: Amount of directed QE storage (DQED). QEs occupy this
+ *	space until they are scheduled to a directed CQ. One credit represents
+ *	the storage for one QE.
+ * - num_ldb_credit_pools: Number of pools into which the load-balanced credits
+ *	are placed.
+ * - num_dir_credit_pools: Number of pools into which the directed credits are
+ *	placed.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: domain ID.
+ */
+struct dlb_create_sched_domain_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_ldb_queues;
+	__u32 num_ldb_ports;
+	__u32 num_dir_ports;
+	__u32 num_atomic_inflights;
+	__u32 num_hist_list_entries;
+	__u32 num_ldb_credits;
+	__u32 num_dir_credits;
+	__u32 num_ldb_credit_pools;
+	__u32 num_dir_credit_pools;
+};
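
To illustrate the command convention (input fields plus a user-space response pointer the driver writes back through), a hedged sketch with arbitrary example resource counts; on success the new domain ID comes back in response.id:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: create a small domain; all counts are arbitrary examples. */
	static int dlb_create_domain_example(int fd)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_create_sched_domain_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;
		args.num_ldb_queues = 2;
		args.num_ldb_ports = 2;
		args.num_dir_ports = 1;
		args.num_atomic_inflights = 1024;
		args.num_hist_list_entries = 128; /* divided among the CQs */
		args.num_ldb_credits = 1024;
		args.num_dir_credits = 128;
		args.num_ldb_credit_pools = 1;
		args.num_dir_credit_pools = 1;

		if (ioctl(fd, DLB_IOC_CREATE_SCHED_DOMAIN, &args) == -1 ||
		    resp.status != DLB_ST_SUCCESS)
			return -1;

		return (int)resp.id; /* domain ID */
	}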
+
+/*
+ * DLB_CMD_GET_NUM_RESOURCES: Return the number of available resources
+ *	(queues, ports, etc.) that this device owns.
+ *
+ * Output parameters:
+ * - num_sched_domains: Number of available scheduling domains.
+ * - num_ldb_queues: Number of available load-balanced queues.
+ * - num_ldb_ports: Number of available load-balanced ports.
+ * - num_dir_ports: Number of available directed ports. There is one directed
+ *	queue for every directed port.
+ * - num_atomic_inflights: Amount of available temporary atomic QE storage.
+ * - max_contiguous_atomic_inflights: When a domain is created, the temporary
+ *	atomic QE storage is allocated in a contiguous chunk. This return value
+ *	is the longest available contiguous range of atomic QE storage.
+ * - num_hist_list_entries: Amount of history list storage.
+ * - max_contiguous_hist_list_entries: History list storage is allocated in
+ *	a contiguous chunk, and this return value is the longest available
+ *	contiguous range of history list entries.
+ * - num_ldb_credits: Amount of available load-balanced QE storage.
+ * - max_contiguous_ldb_credits: QED storage is allocated in a contiguous
+ *	chunk, and this return value is the longest available contiguous range
+ *	of load-balanced credit storage.
+ * - num_dir_credits: Amount of available directed QE storage.
+ * - max_contiguous_dir_credits: DQED storage is allocated in a contiguous
+ *	chunk, and this return value is the longest available contiguous range
+ *	of directed credit storage.
+ * - num_ldb_credit_pools: Number of available load-balanced credit pools.
+ * - num_dir_credit_pools: Number of available directed credit pools.
+ * - padding0: Reserved for future use.
+ */
+struct dlb_get_num_resources_args {
+	/* Output parameters */
+	__u32 num_sched_domains;
+	__u32 num_ldb_queues;
+	__u32 num_ldb_ports;
+	__u32 num_dir_ports;
+	__u32 num_atomic_inflights;
+	__u32 max_contiguous_atomic_inflights;
+	__u32 num_hist_list_entries;
+	__u32 max_contiguous_hist_list_entries;
+	__u32 num_ldb_credits;
+	__u32 max_contiguous_ldb_credits;
+	__u32 num_dir_credits;
+	__u32 max_contiguous_dir_credits;
+	__u32 num_ldb_credit_pools;
+	__u32 num_dir_credit_pools;
+	__u32 padding0;
+};
+
+/*
+ * DLB_CMD_SAMPLE_PERF_COUNTERS: Gather a set of DLB performance data by
+ *	enabling performance counters for a user-specified measurement duration.
+ *	This ioctl is blocking; the calling thread sleeps in the kernel driver
+ *	for the duration of the measurement, then writes the data to user
+ *	memory before returning.
+ *
+ *	Certain metrics cannot be measured simultaneously, so multiple
+ *	invocations of this command are necessary to gather all metrics.
+ *	Metrics that can be collected simultaneously are grouped together in
+ *	struct dlb_perf_metric_group_X.
+ *
+ *	The driver allows only one active measurement at a time. If a thread
+ *	calls this command while a measurement is ongoing, the thread will
+ *	block until the original measurement completes.
+ *
+ *	This ioctl is not supported for VF devices.
+ *
+ * Input parameters:
+ * - measurement_duration_us: Duration, in microseconds, of the
+ *	measurement period. The duration must be between 1us and 60s,
+ *	inclusive.
+ * - perf_metric_group_id: ID of the metric group to measure.
+ * - perf_metric_group_data: Pointer to union dlb_perf_metric_group_data
+ *	structure. The driver will interpret the union according to
+ *	perf_metric_group_id.
+ *
+ * Output parameters:
+ * - elapsed_time_us: Elapsed time, in microseconds, of the measurement.
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_perf_metric_group_0 {
+	__u32 dlb_iosf_to_sys_enq_count;
+	__u32 dlb_sys_to_iosf_deq_count;
+	__u32 dlb_sys_to_dlb_enq_count;
+	__u32 dlb_dlb_to_sys_deq_count;
+};
+
+struct dlb_perf_metric_group_1 {
+	__u32 dlb_push_ptr_update_count;
+};
+
+struct dlb_perf_metric_group_2 {
+	__u32 dlb_avg_hist_list_depth;
+};
+
+struct dlb_perf_metric_group_3 {
+	__u32 dlb_avg_qed_depth;
+};
+
+struct dlb_perf_metric_group_4 {
+	__u32 dlb_avg_dqed_depth;
+};
+
+struct dlb_perf_metric_group_5 {
+	__u32 dlb_noop_hcw_count;
+	__u32 dlb_bat_t_hcw_count;
+};
+
+struct dlb_perf_metric_group_6 {
+	__u32 dlb_comp_hcw_count;
+	__u32 dlb_comp_t_hcw_count;
+};
+
+struct dlb_perf_metric_group_7 {
+	__u32 dlb_enq_hcw_count;
+	__u32 dlb_enq_t_hcw_count;
+};
+
+struct dlb_perf_metric_group_8 {
+	__u32 dlb_renq_hcw_count;
+	__u32 dlb_renq_t_hcw_count;
+};
+
+struct dlb_perf_metric_group_9 {
+	__u32 dlb_rel_hcw_count;
+};
+
+struct dlb_perf_metric_group_10 {
+	__u32 dlb_frag_hcw_count;
+	__u32 dlb_frag_t_hcw_count;
+};
+
+union dlb_perf_metric_group_data {
+	struct dlb_perf_metric_group_0 group_0;
+	struct dlb_perf_metric_group_1 group_1;
+	struct dlb_perf_metric_group_2 group_2;
+	struct dlb_perf_metric_group_3 group_3;
+	struct dlb_perf_metric_group_4 group_4;
+	struct dlb_perf_metric_group_5 group_5;
+	struct dlb_perf_metric_group_6 group_6;
+	struct dlb_perf_metric_group_7 group_7;
+	struct dlb_perf_metric_group_8 group_8;
+	struct dlb_perf_metric_group_9 group_9;
+	struct dlb_perf_metric_group_10 group_10;
+};
+
+struct dlb_sample_perf_counters_args {
+	/* Output parameters */
+	__u64 elapsed_time_us;
+	__u64 response;
+	/* Input parameters */
+	__u32 measurement_duration_us;
+	__u32 perf_metric_group_id;
+	__u64 perf_metric_group_data;
+};
+
+/*
+ * DLB_CMD_MEASURE_SCHED_COUNTS: Measure the DLB scheduling activity for a
+ *	user-specified measurement duration. This ioctl is blocking; the
+ *	calling thread sleeps in the kernel driver for the duration of the
+ *	measurement, then writes the result to user memory before returning.
+ *
+ *	Unlike the DLB_CMD_SAMPLE_PERF_COUNTERS ioctl, multiple threads can
+ *	measure scheduling counts simultaneously.
+ *
+ *	Note: VF devices can only measure the scheduling counts of their CQs;
+ *	all other counts will be set to 0.
+ *
+ * Input parameters:
+ * - measurement_duration_us: Duration, in microseconds, of the
+ *	measurement period. The duration must be between 1us and 60s,
+ *	inclusive.
+ * - padding0: Reserved for future use.
+ * - sched_count_data: Pointer to a struct dlb_sched_counts data structure.
+ *
+ * Output parameters:
+ * - elapsed_time_us: Elapsed time, in microseconds, of the measurement.
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_sched_counts {
+	__u64 ldb_sched_count;
+	__u64 dir_sched_count;
+	__u64 ldb_cq_sched_count[64];
+	__u64 dir_cq_sched_count[128];
+};
+
+struct dlb_measure_sched_count_args {
+	/* Output parameters */
+	__u64 elapsed_time_us;
+	__u64 response;
+	/* Input parameters */
+	__u32 measurement_duration_us;
+	__u32 padding0;
+	__u64 sched_count_data;
+};
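
As an illustration of the blocking measurement flow, a sketch that samples scheduling counts over a 1 ms window; the fd is assumed to be the 'dlb' device file:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: count LDB/DIR schedules over 1 ms (range is 1 us to 60 s). */
	static int dlb_sample_sched_counts(int fd, struct dlb_sched_counts *counts)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_measure_sched_count_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;
		args.measurement_duration_us = 1000;
		args.sched_count_data = (__u64)(uintptr_t)counts;

		return ioctl(fd, DLB_IOC_MEASURE_SCHED_COUNTS, &args);
	}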
+
+/*
+ * DLB_CMD_SET_SN_ALLOCATION: Configure a sequence number group
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - num: Number of sequence numbers per queue.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_set_sn_allocation_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 num;
+};
+
+/*
+ * DLB_CMD_GET_SN_ALLOCATION: Get a sequence number group's configuration
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Specified group's number of sequence numbers per queue.
+ */
+struct dlb_get_sn_allocation_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 padding0;
+};
+
+enum dlb_cq_poll_modes {
+	DLB_CQ_POLL_MODE_STD,
+	DLB_CQ_POLL_MODE_SPARSE,
+
+	/* NUM_DLB_CQ_POLL_MODE must be last */
+	NUM_DLB_CQ_POLL_MODE,
+};
+
+/*
+ * DLB_CMD_QUERY_CQ_POLL_MODE: Query the CQ poll mode the kernel driver is using
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: CQ poll mode (see enum dlb_cq_poll_modes).
+ */
+struct dlb_query_cq_poll_mode_args {
+	/* Output parameters */
+	__u64 response;
+};
+
+/*
+ * DLB_CMD_GET_SN_OCCUPANCY: Get a sequence number group's occupancy
+ *
+ * Each sequence number group has one or more slots, depending on its
+ * configuration. I.e.:
+ * - If configured for 1024 sequence numbers per queue, the group has 1 slot
+ * - If configured for 512 sequence numbers per queue, the group has 2 slots
+ *   ...
+ * - If configured for 32 sequence numbers per queue, the group has 32 slots
+ *
+ * This ioctl returns the group's number of in-use slots. If its occupancy is
+ * 0, the group's sequence number allocation can be reconfigured.
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Specified group's number of used slots.
+ */
+struct dlb_get_sn_occupancy_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 padding0;
+};
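
The slot counts enumerated above imply that a group always holds 1024 sequence numbers in total, so the occupancy limit follows directly, as in this sketch (the helper name is hypothetical):

	/* Sketch: slots in a group = 1024 total SNs / SNs allocated per queue. */
	static inline unsigned int dlb_sn_group_num_slots(unsigned int sn_per_queue)
	{
		return 1024 / sn_per_queue; /* e.g. 512 SNs/queue -> 2 slots */
	}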
+
+enum dlb_user_interface_commands {
+	DLB_CMD_GET_DEVICE_VERSION,
+	DLB_CMD_CREATE_SCHED_DOMAIN,
+	DLB_CMD_GET_NUM_RESOURCES,
+	DLB_CMD_GET_DRIVER_VERSION,
+	DLB_CMD_SAMPLE_PERF_COUNTERS,
+	DLB_CMD_SET_SN_ALLOCATION,
+	DLB_CMD_GET_SN_ALLOCATION,
+	DLB_CMD_MEASURE_SCHED_COUNTS,
+	DLB_CMD_QUERY_CQ_POLL_MODE,
+	DLB_CMD_GET_SN_OCCUPANCY,
+
+	/* NUM_DLB_CMD must be last */
+	NUM_DLB_CMD,
+};
+
+/*******************************/
+/* 'domain' device file alerts */
+/*******************************/
+
+/* Scheduling domain device files can be read to receive domain-specific
+ * notifications, for alerts such as hardware errors.
+ *
+ * Each alert is encoded in a 16B message. The first 8B contains the alert ID,
+ * and the second 8B is optional and contains additional information.
+ * Applications should cast read data to a struct dlb_domain_alert, and
+ * interpret the struct's alert_id according to enum dlb_domain_alert_id. The
+ * read length must be 16B, or the read will fail with -EINVAL.
+ *
+ * Reads are destructive, and in the case of multiple file descriptors for the
+ * same domain device file, an alert will be read by only one of the file
+ * descriptors.
+ *
+ * The driver stores alerts in a fixed-size alert ring until they are read. If
+ * the alert ring fills completely, subsequent alerts will be dropped. It is
+ * recommended that DLB applications dedicate a thread to perform blocking
+ * reads on the device file.
+ */
+enum dlb_domain_alert_id {
+	/* A destination queue in another domain that this domain was
+	 * connected to has been unregistered and can no longer be sent to.
+	 * The aux alert data contains the queue ID.
+	 */
+	DLB_DOMAIN_ALERT_REMOTE_QUEUE_UNREGISTER,
+	/* A producer port in this domain attempted to send a QE without a
+	 * credit. aux_alert_data[7:0] contains the port ID, and
+	 * aux_alert_data[15:8] contains a flag indicating whether the port is
+	 * load-balanced (1) or directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS,
+	/* Software issued an illegal enqueue for a port in this domain. An
+	 * illegal enqueue could be:
+	 * - Illegal (excess) completion
+	 * - Illegal fragment
+	 * - Illegal enqueue command
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ,
+	/* Software issued excess CQ token pops for a port in this domain.
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS,
+	/* An enqueue contained either an invalid command encoding or a REL,
+	 * REL_T, RLS, FWD, FWD_T, FRAG, or FRAG_T from a directed port.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_ILLEGAL_HCW,
+	/* An enqueue targeted an illegal QID; the QID must be valid and less
+	 * than 128.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_ILLEGAL_QID,
+	/* An enqueue went to a disabled QID.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_DISABLED_QID,
+	/* The device containing this domain was reset. All applications using
+	 * the device need to exit for the driver to complete the reset
+	 * procedure.
+	 *
+	 * aux_alert_data doesn't contain any information for this alert.
+	 */
+	DLB_DOMAIN_ALERT_DEVICE_RESET,
+	/* User-space has enqueued an alert.
+	 *
+	 * aux_alert_data contains user-provided data.
+	 */
+	DLB_DOMAIN_ALERT_USER,
+
+	/* Number of DLB domain alerts */
+	NUM_DLB_DOMAIN_ALERTS
+};
+
+static const char dlb_domain_alert_strings[][128] = {
+	"DLB_DOMAIN_ALERT_REMOTE_QUEUE_UNREGISTER",
+	"DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS",
+	"DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ",
+	"DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS",
+	"DLB_DOMAIN_ALERT_ILLEGAL_HCW",
+	"DLB_DOMAIN_ALERT_ILLEGAL_QID",
+	"DLB_DOMAIN_ALERT_DISABLED_QID",
+	"DLB_DOMAIN_ALERT_DEVICE_RESET",
+	"DLB_DOMAIN_ALERT_USER",
+};
+
+struct dlb_domain_alert {
+	__u64 alert_id;
+	__u64 aux_alert_data;
+};
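
Putting the rules above together, a sketch of a dedicated reader thread's inner loop; domain_fd is assumed to be open on a scheduling domain device file:

	#include <stdio.h>
	#include <unistd.h>
	#include "dlb_user.h"

	/* Sketch: destructively read one 16B alert; other sizes fail. */
	static int dlb_read_one_alert(int domain_fd)
	{
		struct dlb_domain_alert alert;

		if (read(domain_fd, &alert, sizeof(alert)) !=
		    (ssize_t)sizeof(alert))
			return -1;

		if (alert.alert_id < NUM_DLB_DOMAIN_ALERTS)
			printf("%s: aux data 0x%llx\n",
			       dlb_domain_alert_strings[alert.alert_id],
			       (unsigned long long)alert.aux_alert_data);

		return 0;
	}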
+
+/*********************************/
+/* 'domain' device file commands */
+/*********************************/
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_POOL: Configure a load-balanced credit pool.
+ * Input parameters:
+ * - num_ldb_credits: Number of load-balanced credits (QED space) for this
+ *	pool.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: pool ID.
+ */
+struct dlb_create_ldb_pool_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_ldb_credits;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_POOL: Configure a directed credit pool.
+ * Input parameters:
+ * - num_dir_credits: Number of directed credits (DQED space) for this pool.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Pool ID.
+ */
+struct dlb_create_dir_pool_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_dir_credits;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_QUEUE: Configure a load-balanced queue.
+ * Input parameters:
+ * - num_atomic_inflights: This specifies the amount of temporary atomic QE
+ *	storage for this queue. If zero, the queue will not support atomic
+ *	scheduling.
+ * - num_sequence_numbers: This specifies the number of sequence numbers used
+ *	by this queue. If zero, the queue will not support ordered scheduling.
+ *	If non-zero, the queue will not support unordered scheduling.
+ * - num_qid_inflights: The maximum number of QEs that can be inflight
+ *	(scheduled to a CQ but not completed) at any time. If
+ *	num_sequence_numbers is non-zero, num_qid_inflights must be set equal
+ *	to num_sequence_numbers.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Queue ID.
+ */
+struct dlb_create_ldb_queue_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_sequence_numbers;
+	__u32 num_qid_inflights;
+	__u32 num_atomic_inflights;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_QUEUE: Configure a directed queue.
+ * Input parameters:
+ * - port_id: Port ID. If the corresponding directed port is already created,
+ *	specify its ID here. Else this argument must be 0xFFFFFFFF to indicate
+ *	that the queue is being created before the port.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Queue ID.
+ */
+struct dlb_create_dir_queue_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__s32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_PORT: Configure a load-balanced port.
+ * Input parameters:
+ * - ldb_credit_pool_id: Load-balanced credit pool this port will belong to.
+ * - dir_credit_pool_id: Directed credit pool this port will belong to.
+ * - ldb_credit_high_watermark: Number of load-balanced credits from the pool
+ *	that this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_high_watermark: Number of directed credits from the pool that
+ *	this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - ldb_credit_low_watermark: Load-balanced credit low watermark. When the
+ *	port's credits reach this watermark, they become eligible to be
+ *	refilled by the DLB as credits until the high watermark
+ *	(num_ldb_credits) is reached.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_low_watermark: Directed credit low watermark. When the port's
+ *	credits reach this watermark, they become eligible to be refilled by
+ *	the DLB as credits until the high watermark (num_dir_credits) is
+ *	reached.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - ldb_credit_quantum: Number of load-balanced credits for the DLB to refill
+ *	per refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_quantum: Number of directed credits for the DLB to refill per
+ *	refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - padding0: Reserved for future use.
+ * - cq_depth: Depth of the port's CQ. Must be a power-of-two between 8 and
+ *	1024, inclusive.
+ * - cq_depth_threshold: CQ depth interrupt threshold. A value of N means that
+ *	the CQ interrupt won't fire until there are N or more outstanding CQ
+ *	tokens.
+ * - cq_history_list_size: Number of history list entries. This must be greater
+ *	than or equal to cq_depth.
+ * - padding1: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: port ID.
+ */
+struct dlb_create_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 ldb_credit_pool_id;
+	__u32 dir_credit_pool_id;
+	__u16 ldb_credit_high_watermark;
+	__u16 ldb_credit_low_watermark;
+	__u16 ldb_credit_quantum;
+	__u16 dir_credit_high_watermark;
+	__u16 dir_credit_low_watermark;
+	__u16 dir_credit_quantum;
+	__u16 padding0;
+	__u16 cq_depth;
+	__u16 cq_depth_threshold;
+	__u16 cq_history_list_size;
+	__u32 padding1;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_PORT: Configure a directed port.
+ * Input parameters:
+ * - ldb_credit_pool_id: Load-balanced credit pool this port will belong to.
+ * - dir_credit_pool_id: Directed credit pool this port will belong to.
+ * - ldb_credit_high_watermark: Number of load-balanced credits from the pool
+ *	that this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_high_watermark: Number of directed credits from the pool that
+ *	this port will own.
+ * - ldb_credit_low_watermark: Load-balanced credit low watermark. When the
+ *	port's credits reach this watermark, they become eligible to be
+ *	refilled by the DLB as credits until the high watermark
+ *	(num_ldb_credits) is reached.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_low_watermark: Directed credit low watermark. When the port's
+ *	credits reach this watermark, they become eligible to be refilled by
+ *	the DLB as credits until the high watermark (num_dir_credits) is
+ *	reached.
+ * - ldb_credit_quantum: Number of load-balanced credits for the DLB to refill
+ *	per refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_quantum: Number of directed credits for the DLB to refill per
+ *	refill operation.
+ * - cq_depth: Depth of the port's CQ. Must be a power-of-two between 8 and
+ *	1024, inclusive.
+ * - cq_depth_threshold: CQ depth interrupt threshold. A value of N means that
+ *	the CQ interrupt won't fire until there are N or more outstanding CQ
+ *	tokens.
+ * - queue_id: Queue ID. If the corresponding directed queue is already
+ *	created, specify its ID here. Else this argument must be 0xFFFFFFFF to
+ *	indicate that the port is being created before the queue.
+ * - padding1: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Port ID.
+ */
+struct dlb_create_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 ldb_credit_pool_id;
+	__u32 dir_credit_pool_id;
+	__u16 ldb_credit_high_watermark;
+	__u16 ldb_credit_low_watermark;
+	__u16 ldb_credit_quantum;
+	__u16 dir_credit_high_watermark;
+	__u16 dir_credit_low_watermark;
+	__u16 dir_credit_quantum;
+	__u16 cq_depth;
+	__u16 cq_depth_threshold;
+	__s32 queue_id;
+	__u32 padding1;
+};
+
+/*
+ * DLB_DOMAIN_CMD_START_DOMAIN: Mark the end of the domain configuration. This
+ *	must be called before passing QEs into the device, and no configuration
+ *	ioctls can be issued once the domain has started. Sending QEs into the
+ *	device before calling this ioctl will result in undefined behavior.
+ * Input parameters:
+ * - (None)
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_start_domain_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+};
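
For illustration, the final configuration step might look like this sketch; domain_fd is assumed to be the domain device file, and no QEs may be sent before the call succeeds:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: end configuration; no config ioctls are legal afterward. */
	static int dlb_start_domain(int domain_fd)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_start_domain_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;

		if (ioctl(domain_fd, DLB_IOC_START_DOMAIN, &args) == -1)
			return -1;

		return resp.status == DLB_ST_SUCCESS ? 0 : -1;
	}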
+
+/*
+ * DLB_DOMAIN_CMD_MAP_QID: Map a load-balanced queue to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - qid: Load-balanced queue ID.
+ * - priority: Queue->port service priority.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_map_qid_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 qid;
+	__u32 priority;
+	__u32 padding0;
+};
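
As a usage illustration, a sketch mapping a load-balanced queue to a load-balanced port; the priority value is an arbitrary example:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: map LDB queue 'qid' to LDB port 'port_id'. */
	static int dlb_map_queue(int domain_fd, __u32 port_id, __u32 qid)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_map_qid_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;
		args.port_id = port_id;
		args.qid = qid;
		args.priority = 0; /* example queue->port service priority */

		if (ioctl(domain_fd, DLB_IOC_MAP_QID, &args) == -1)
			return -1;

		return resp.status == DLB_ST_SUCCESS ? 0 : -1;
	}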
+
+/*
+ * DLB_DOMAIN_CMD_UNMAP_QID: Unmap a load-balanced queue from a load-balanced
+ *	port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - qid: Load-balanced queue ID.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_unmap_qid_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 qid;
+};
+
+/*
+ * DLB_DOMAIN_CMD_ENABLE_LDB_PORT: Enable scheduling to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enable_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_ENABLE_DIR_PORT: Enable scheduling to a directed port.
+ * Input parameters:
+ * - port_id: Directed port ID.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enable_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+};
+
+/*
+ * DLB_DOMAIN_CMD_DISABLE_LDB_PORT: Disable scheduling to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_disable_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_DISABLE_DIR_PORT: Disable scheduling to a directed port.
+ * Input parameters:
+ * - port_id: Directed port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_disable_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_BLOCK_ON_CQ_INTERRUPT: Block on a CQ interrupt until a QE
+ *	arrives for the specified port. If a QE is already present, the ioctl
+ *	will immediately return.
+ *
+ *	Note: Only one thread can block on a CQ's interrupt at a time. Doing
+ *	otherwise can result in hung threads.
+ *
+ * Input parameters:
+ * - port_id: Port ID.
+ * - is_ldb: True if the port is load-balanced, false otherwise.
+ * - arm: Tell the driver to arm the interrupt.
+ * - cq_gen: Current CQ generation bit.
+ * - padding0: Reserved for future use.
+ * - cq_va: VA of the CQ entry where the next QE will be placed.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_block_on_cq_interrupt_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u8 is_ldb;
+	__u8 arm;
+	__u8 cq_gen;
+	__u8 padding0;
+	__u64 cq_va;
+};
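
To show how the fields fit together, a sketch of a blocking wait on a load-balanced port's CQ; cq_gen and cq_slot track the caller's CQ state and are assumptions about the surrounding dequeue loop:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "dlb_user.h"

	/* Sketch: sleep until a QE lands in the next CQ slot. Only one
	 * thread may block on a given CQ's interrupt at a time.
	 */
	static int dlb_wait_for_cq(int domain_fd, __u32 port_id, __u8 cq_gen,
				   volatile void *cq_slot)
	{
		struct dlb_cmd_response resp = {0};
		struct dlb_block_on_cq_interrupt_args args = {0};

		args.response = (__u64)(uintptr_t)&resp;
		args.port_id = port_id;
		args.is_ldb = 1;	/* load-balanced port */
		args.arm = 1;		/* ask the driver to arm the interrupt */
		args.cq_gen = cq_gen;
		args.cq_va = (__u64)(uintptr_t)cq_slot;

		return ioctl(domain_fd, DLB_IOC_BLOCK_ON_CQ_INTERRUPT, &args);
	}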
+
+/*
+ * DLB_DOMAIN_CMD_ENQUEUE_DOMAIN_ALERT: Enqueue a domain alert that will be
+ *	read by one reader thread.
+ *
+ * Input parameters:
+ * - aux_alert_data: user-defined auxiliary data.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enqueue_domain_alert_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u64 aux_alert_data;
+};
+
+/*
+ * DLB_DOMAIN_CMD_GET_LDB_QUEUE_DEPTH: Get a load-balanced queue's depth.
+ * Input parameters:
+ * - queue_id: The load-balanced queue ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: queue depth.
+ */
+struct dlb_get_ldb_queue_depth_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 queue_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_GET_DIR_QUEUE_DEPTH: Get a directed queue's depth.
+ * Input parameters:
+ * - queue_id: The directed queue ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: queue depth.
+ */
+struct dlb_get_dir_queue_depth_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 queue_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_PENDING_PORT_UNMAPS: Get number of queue unmap operations in
+ *	progress for a load-balanced port.
+ *
+ *	Note: This is a snapshot; the number of unmap operations in progress
+ *	is subject to change at any time.
+ *
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: number of unmaps in progress.
+ */
+struct dlb_pending_port_unmaps_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+enum dlb_domain_user_interface_commands {
+	DLB_DOMAIN_CMD_CREATE_LDB_POOL,
+	DLB_DOMAIN_CMD_CREATE_DIR_POOL,
+	DLB_DOMAIN_CMD_CREATE_LDB_QUEUE,
+	DLB_DOMAIN_CMD_CREATE_DIR_QUEUE,
+	DLB_DOMAIN_CMD_CREATE_LDB_PORT,
+	DLB_DOMAIN_CMD_CREATE_DIR_PORT,
+	DLB_DOMAIN_CMD_START_DOMAIN,
+	DLB_DOMAIN_CMD_MAP_QID,
+	DLB_DOMAIN_CMD_UNMAP_QID,
+	DLB_DOMAIN_CMD_ENABLE_LDB_PORT,
+	DLB_DOMAIN_CMD_ENABLE_DIR_PORT,
+	DLB_DOMAIN_CMD_DISABLE_LDB_PORT,
+	DLB_DOMAIN_CMD_DISABLE_DIR_PORT,
+	DLB_DOMAIN_CMD_BLOCK_ON_CQ_INTERRUPT,
+	DLB_DOMAIN_CMD_ENQUEUE_DOMAIN_ALERT,
+	DLB_DOMAIN_CMD_GET_LDB_QUEUE_DEPTH,
+	DLB_DOMAIN_CMD_GET_DIR_QUEUE_DEPTH,
+	DLB_DOMAIN_CMD_PENDING_PORT_UNMAPS,
+
+	/* NUM_DLB_DOMAIN_CMD must be last */
+	NUM_DLB_DOMAIN_CMD,
+};
+
+/*
+ * Base addresses for memory mapping the consumer queue (CQ) and popcount (PC)
+ * memory space, and producer port (PP) MMIO space. The CQ, PC, and PP
+ * addresses are per-port, and consecutive ports' regions are separated by
+ * that region's maximum size (e.g. LDB PP 0 is at 0x2100000 and LDB PP 1 is
+ * at 0x2101000).
+ */
+#define DLB_LDB_CQ_BASE 0x3000000
+#define DLB_LDB_CQ_MAX_SIZE 65536
+#define DLB_LDB_CQ_OFFS(id) (DLB_LDB_CQ_BASE + (id) * DLB_LDB_CQ_MAX_SIZE)
+
+#define DLB_DIR_CQ_BASE 0x3800000
+#define DLB_DIR_CQ_MAX_SIZE 65536
+#define DLB_DIR_CQ_OFFS(id) (DLB_DIR_CQ_BASE + (id) * DLB_DIR_CQ_MAX_SIZE)
+
+#define DLB_LDB_PC_BASE 0x2300000
+#define DLB_LDB_PC_MAX_SIZE 4096
+#define DLB_LDB_PC_OFFS(id) (DLB_LDB_PC_BASE + (id) * DLB_LDB_PC_MAX_SIZE)
+
+#define DLB_DIR_PC_BASE 0x2200000
+#define DLB_DIR_PC_MAX_SIZE 4096
+#define DLB_DIR_PC_OFFS(id) (DLB_DIR_PC_BASE + (id) * DLB_DIR_PC_MAX_SIZE)
+
+#define DLB_LDB_PP_BASE 0x2100000
+#define DLB_LDB_PP_MAX_SIZE 4096
+#define DLB_LDB_PP_OFFS(id) (DLB_LDB_PP_BASE + (id) * DLB_LDB_PP_MAX_SIZE)
+
+#define DLB_DIR_PP_BASE 0x2000000
+#define DLB_DIR_PP_MAX_SIZE 4096
+#define DLB_DIR_PP_OFFS(id) (DLB_DIR_PP_BASE + (id) * DLB_DIR_PP_MAX_SIZE)
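
For illustration, a sketch of mapping one LDB producer port's MMIO page using these offsets; whether the mapping is made against the device or the domain file descriptor is an assumption here:

	#include <sys/mman.h>
	#include "dlb_user.h"

	/* Sketch: map LDB producer port 'id' (one page, write-only MMIO). */
	static void *dlb_map_ldb_pp(int fd, int id)
	{
		void *addr = mmap(NULL, DLB_LDB_PP_MAX_SIZE, PROT_WRITE,
				  MAP_SHARED, fd, DLB_LDB_PP_OFFS(id));

		return addr == MAP_FAILED ? NULL : addr;
	}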
+
+/*******************/
+/* dlb ioctl codes */
+/*******************/
+
+#define DLB_IOC_MAGIC  'h'
+
+#define DLB_IOC_GET_DEVICE_VERSION				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_GET_DEVICE_VERSION,		\
+		      struct dlb_get_device_version_args)
+#define DLB_IOC_CREATE_SCHED_DOMAIN				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_CREATE_SCHED_DOMAIN,		\
+		      struct dlb_create_sched_domain_args)
+#define DLB_IOC_GET_NUM_RESOURCES				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_GET_NUM_RESOURCES,		\
+		      struct dlb_get_num_resources_args)
+#define DLB_IOC_GET_DRIVER_VERSION				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_GET_DRIVER_VERSION,		\
+		      struct dlb_get_driver_version_args)
+#define DLB_IOC_SAMPLE_PERF_COUNTERS				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_SAMPLE_PERF_COUNTERS,		\
+		      struct dlb_sample_perf_counters_args)
+#define DLB_IOC_SET_SN_ALLOCATION				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_SET_SN_ALLOCATION,		\
+		      struct dlb_set_sn_allocation_args)
+#define DLB_IOC_GET_SN_ALLOCATION				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_GET_SN_ALLOCATION,		\
+		      struct dlb_get_sn_allocation_args)
+#define DLB_IOC_MEASURE_SCHED_COUNTS				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_MEASURE_SCHED_COUNTS,		\
+		      struct dlb_measure_sched_count_args)
+#define DLB_IOC_QUERY_CQ_POLL_MODE				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_QUERY_CQ_POLL_MODE,		\
+		      struct dlb_query_cq_poll_mode_args)
+#define DLB_IOC_GET_SN_OCCUPANCY				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_CMD_GET_SN_OCCUPANCY,			\
+		      struct dlb_get_sn_occupancy_args)
+#define DLB_IOC_CREATE_LDB_POOL					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_LDB_POOL,		\
+		      struct dlb_create_ldb_pool_args)
+#define DLB_IOC_CREATE_DIR_POOL					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_DIR_POOL,		\
+		      struct dlb_create_dir_pool_args)
+#define DLB_IOC_CREATE_LDB_QUEUE				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_LDB_QUEUE,		\
+		      struct dlb_create_ldb_queue_args)
+#define DLB_IOC_CREATE_DIR_QUEUE				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_DIR_QUEUE,		\
+		      struct dlb_create_dir_queue_args)
+#define DLB_IOC_CREATE_LDB_PORT					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_LDB_PORT,		\
+		      struct dlb_create_ldb_port_args)
+#define DLB_IOC_CREATE_DIR_PORT					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_CREATE_DIR_PORT,		\
+		      struct dlb_create_dir_port_args)
+#define DLB_IOC_START_DOMAIN					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_START_DOMAIN,		\
+		      struct dlb_start_domain_args)
+#define DLB_IOC_MAP_QID						\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_MAP_QID,			\
+		      struct dlb_map_qid_args)
+#define DLB_IOC_UNMAP_QID					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_UNMAP_QID,			\
+		      struct dlb_unmap_qid_args)
+#define DLB_IOC_ENABLE_LDB_PORT					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_ENABLE_LDB_PORT,		\
+		      struct dlb_enable_ldb_port_args)
+#define DLB_IOC_ENABLE_DIR_PORT					\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_ENABLE_DIR_PORT,		\
+		      struct dlb_enable_dir_port_args)
+#define DLB_IOC_DISABLE_LDB_PORT				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_DISABLE_LDB_PORT,		\
+		      struct dlb_disable_ldb_port_args)
+#define DLB_IOC_DISABLE_DIR_PORT				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_DISABLE_DIR_PORT,		\
+		      struct dlb_disable_dir_port_args)
+#define DLB_IOC_BLOCK_ON_CQ_INTERRUPT				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_BLOCK_ON_CQ_INTERRUPT,	\
+		      struct dlb_block_on_cq_interrupt_args)
+#define DLB_IOC_ENQUEUE_DOMAIN_ALERT				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_ENQUEUE_DOMAIN_ALERT,	\
+		      struct dlb_enqueue_domain_alert_args)
+#define DLB_IOC_GET_LDB_QUEUE_DEPTH				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_GET_LDB_QUEUE_DEPTH,	\
+		      struct dlb_get_ldb_queue_depth_args)
+#define DLB_IOC_GET_DIR_QUEUE_DEPTH				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_GET_DIR_QUEUE_DEPTH,	\
+		      struct dlb_get_dir_queue_depth_args)
+#define DLB_IOC_PENDING_PORT_UNMAPS				\
+		_IOWR(DLB_IOC_MAGIC,				\
+		      DLB_DOMAIN_CMD_PENDING_PORT_UNMAPS,	\
+		      struct dlb_pending_port_unmaps_args)
+
+#endif /* __DLB_USER_H */
-- 
2.13.6


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH 03/27] event/dlb: add shared code version 10.7.9
    @ 2020-06-12 21:24  1% ` McDaniel, Timothy
  2020-06-12 21:24  1% ` [dpdk-dev] [PATCH 08/27] event/dlb: add definitions shared with LKM or shared code McDaniel, Timothy
  2 siblings, 0 replies; 200+ results
From: McDaniel, Timothy @ 2020-06-12 21:24 UTC (permalink / raw)
  To: jerinj; +Cc: dev, gage.eads, harry.van.haaren

The DLB shared code is auto-generated by Intel and is being committed
here so that it can be built in the DPDK environment. The shared code
should not be modified, and it must be present in order to successfully
build the DLB PMD.

Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb/pf/base/dlb_hw_types.h     |  360 +
 drivers/event/dlb/pf/base/dlb_mbox.h         |  645 ++
 drivers/event/dlb/pf/base/dlb_osdep.h        |  350 +
 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h |  449 ++
 drivers/event/dlb/pf/base/dlb_osdep_list.h   |  131 +
 drivers/event/dlb/pf/base/dlb_osdep_types.h  |   31 +
 drivers/event/dlb/pf/base/dlb_regs.h         | 2646 +++++++
 drivers/event/dlb/pf/base/dlb_resource.c     | 9699 ++++++++++++++++++++++++++
 drivers/event/dlb/pf/base/dlb_resource.h     | 1625 +++++
 drivers/event/dlb/pf/base/dlb_user.h         | 1084 +++
 10 files changed, 17020 insertions(+)
 create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_mbox.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
 create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
 create mode 100644 drivers/event/dlb/pf/base/dlb_user.h

diff --git a/drivers/event/dlb/pf/base/dlb_hw_types.h b/drivers/event/dlb/pf/base/dlb_hw_types.h
new file mode 100644
index 000000000..d56590e64
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_hw_types.h
@@ -0,0 +1,360 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_HW_TYPES_H
+#define __DLB_HW_TYPES_H
+
+#include "dlb_user.h"
+#include "dlb_osdep_types.h"
+#include "dlb_osdep_list.h"
+
+#define DLB_MAX_NUM_VFS 16
+#define DLB_MAX_NUM_DOMAINS 32
+#define DLB_MAX_NUM_LDB_QUEUES 128
+#define DLB_MAX_NUM_LDB_PORTS 64
+#define DLB_MAX_NUM_DIR_PORTS 128
+#define DLB_MAX_NUM_LDB_CREDITS 16384
+#define DLB_MAX_NUM_DIR_CREDITS 4096
+#define DLB_MAX_NUM_LDB_CREDIT_POOLS 64
+#define DLB_MAX_NUM_DIR_CREDIT_POOLS 64
+#define DLB_MAX_NUM_HIST_LIST_ENTRIES 5120
+#define DLB_MAX_NUM_AQOS_ENTRIES 2048
+#define DLB_MAX_NUM_TOTAL_OUTSTANDING_COMPLETIONS 4096
+#define DLB_MAX_NUM_QIDS_PER_LDB_CQ 8
+#define DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS 4
+#define DLB_MAX_NUM_SEQUENCE_NUMBER_MODES 6
+#define DLB_QID_PRIORITIES 8
+#define DLB_NUM_ARB_WEIGHTS 8
+#define DLB_MAX_WEIGHT 255
+#define DLB_MAX_PORT_CREDIT_QUANTUM 1023
+#define DLB_MAX_CQ_COMP_CHECK_LOOPS 409600
+#define DLB_MAX_QID_EMPTY_CHECK_LOOPS (32 * 64 * 1024 * (800 / 30))
+#define DLB_HZ 800000000
+
+/* Used for DLB A-stepping workaround for hardware write buffer lock up issue */
+#define DLB_A_STEP_MAX_PORTS 128
+
+#define DLB_PF_DEV_ID 0x270B
+#define DLB_VF_DEV_ID 0x270C
+
+/* Interrupt related macros */
+#define DLB_PF_NUM_NON_CQ_INTERRUPT_VECTORS 8
+#define DLB_PF_NUM_CQ_INTERRUPT_VECTORS	 64
+#define DLB_PF_TOTAL_NUM_INTERRUPT_VECTORS \
+	(DLB_PF_NUM_NON_CQ_INTERRUPT_VECTORS + \
+	 DLB_PF_NUM_CQ_INTERRUPT_VECTORS)
+#define DLB_PF_NUM_COMPRESSED_MODE_VECTORS \
+	(DLB_PF_NUM_NON_CQ_INTERRUPT_VECTORS + 1)
+#define DLB_PF_NUM_PACKED_MODE_VECTORS	 DLB_PF_TOTAL_NUM_INTERRUPT_VECTORS
+#define DLB_PF_COMPRESSED_MODE_CQ_VECTOR_ID DLB_PF_NUM_NON_CQ_INTERRUPT_VECTORS
+
+#define DLB_VF_NUM_NON_CQ_INTERRUPT_VECTORS 1
+#define DLB_VF_NUM_CQ_INTERRUPT_VECTORS 31
+#define DLB_VF_BASE_CQ_VECTOR_ID 0
+#define DLB_VF_LAST_CQ_VECTOR_ID 30
+#define DLB_VF_MBOX_VECTOR_ID 31
+#define DLB_VF_TOTAL_NUM_INTERRUPT_VECTORS \
+	(DLB_VF_NUM_NON_CQ_INTERRUPT_VECTORS + \
+	 DLB_VF_NUM_CQ_INTERRUPT_VECTORS)
+
+#define DLB_PF_NUM_ALARM_INTERRUPT_VECTORS 4
+/* DLB ALARM interrupts */
+#define DLB_INT_ALARM 0
+/* VF to PF Mailbox Service Request */
+#define DLB_INT_VF_TO_PF_MBOX 1
+/* HCW Ingress Errors */
+#define DLB_INT_INGRESS_ERROR 3
+
+#define DLB_ALARM_HW_SOURCE_SYS 0
+#define DLB_ALARM_HW_SOURCE_DLB 1
+
+#define DLB_ALARM_HW_UNIT_CHP 1
+#define DLB_ALARM_HW_UNIT_LSP 3
+
+#define DLB_ALARM_HW_CHP_AID_OUT_OF_CREDITS 6
+#define DLB_ALARM_HW_CHP_AID_ILLEGAL_ENQ 7
+#define DLB_ALARM_HW_LSP_AID_EXCESS_TOKEN_POPS 15
+#define DLB_ALARM_SYS_AID_ILLEGAL_HCW 0
+#define DLB_ALARM_SYS_AID_ILLEGAL_QID 3
+#define DLB_ALARM_SYS_AID_DISABLED_QID 4
+#define DLB_ALARM_SYS_AID_ILLEGAL_CQID 6
+
+/* Hardware-defined base addresses */
+#define DLB_LDB_PP_BASE 0x2100000
+#define DLB_LDB_PP_STRIDE 0x1000
+#define DLB_LDB_PP_BOUND \
+	(DLB_LDB_PP_BASE + DLB_LDB_PP_STRIDE * DLB_MAX_NUM_LDB_PORTS)
+#define DLB_DIR_PP_BASE 0x2000000
+#define DLB_DIR_PP_STRIDE 0x1000
+#define DLB_DIR_PP_BOUND \
+	(DLB_DIR_PP_BASE + DLB_DIR_PP_STRIDE * DLB_MAX_NUM_DIR_PORTS)
+
+struct dlb_resource_id {
+	u32 phys_id;
+	u32 virt_id;
+	u8 vf_owned;
+	u8 vf_id;
+};
+
+struct dlb_freelist {
+	u32 base;
+	u32 bound;
+	u32 offset;
+};
+
+static inline u32 dlb_freelist_count(struct dlb_freelist *list)
+{
+	return (list->bound - list->base) - list->offset;
+}
+
+struct dlb_hcw {
+	u64 data;
+	/* Word 3 */
+	u16 opaque;
+	u8 qid;
+	u8 sched_type:2;
+	u8 priority:3;
+	u8 msg_type:3;
+	/* Word 4 */
+	u16 lock_id;
+	u8 meas_lat:1;
+	u8 rsvd1:2;
+	u8 no_dec:1;
+	u8 cmp_id:4;
+	u8 cq_token:1;
+	u8 qe_comp:1;
+	u8 qe_frag:1;
+	u8 qe_valid:1;
+	u8 int_arm:1;
+	u8 error:1;
+	u8 rsvd:2;
+};
+
+struct dlb_ldb_queue {
+	struct dlb_list_entry domain_list;
+	struct dlb_list_entry func_list;
+	struct dlb_resource_id id;
+	struct dlb_resource_id domain_id;
+	u32 num_qid_inflights;
+	struct dlb_freelist aqed_freelist;
+	u8 sn_cfg_valid;
+	u32 sn_group;
+	u32 sn_slot;
+	u32 num_mappings;
+	u8 num_pending_additions;
+	u8 owned;
+	u8 configured;
+};
+
+/* Directed ports and queues are paired by nature, so the driver tracks them
+ * with a single data structure.
+ */
+struct dlb_dir_pq_pair {
+	struct dlb_list_entry domain_list;
+	struct dlb_list_entry func_list;
+	struct dlb_resource_id id;
+	struct dlb_resource_id domain_id;
+	u8 ldb_pool_used;
+	u8 dir_pool_used;
+	u8 queue_configured;
+	u8 port_configured;
+	u8 owned;
+	u8 enabled;
+	u32 ref_cnt;
+};
+
+enum dlb_qid_map_state {
+	/* The slot doesn't contain a valid queue mapping */
+	DLB_QUEUE_UNMAPPED,
+	/* The slot contains a valid queue mapping */
+	DLB_QUEUE_MAPPED,
+	/* The driver is mapping a queue into this slot */
+	DLB_QUEUE_MAP_IN_PROGRESS,
+	/* The driver is unmapping a queue from this slot */
+	DLB_QUEUE_UNMAP_IN_PROGRESS,
+	/* The driver is unmapping a queue from this slot, and once complete
+	 * will replace it with another mapping.
+	 */
+	DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP,
+};
+
+struct dlb_ldb_port_qid_map {
+	u16 qid;
+	u8 priority;
+	u16 pending_qid;
+	u8 pending_priority;
+	enum dlb_qid_map_state state;
+};
+
+struct dlb_ldb_port {
+	struct dlb_list_entry domain_list;
+	struct dlb_list_entry func_list;
+	struct dlb_resource_id id;
+	struct dlb_resource_id domain_id;
+	u8 ldb_pool_used;
+	u8 dir_pool_used;
+	u8 init_tkn_cnt;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_limit;
+	/* The qid_map represents the hardware QID mapping state. */
+	struct dlb_ldb_port_qid_map qid_map[DLB_MAX_NUM_QIDS_PER_LDB_CQ];
+	u32 ref_cnt;
+	u8 num_pending_removals;
+	u8 num_mappings;
+	u8 owned;
+	u8 enabled;
+	u8 configured;
+};
+
+struct dlb_credit_pool {
+	struct dlb_list_entry domain_list;
+	struct dlb_list_entry func_list;
+	struct dlb_resource_id id;
+	struct dlb_resource_id domain_id;
+	u32 total_credits;
+	u32 avail_credits;
+	u8 owned;
+	u8 configured;
+};
+
+struct dlb_sn_group {
+	u32 mode;
+	u32 sequence_numbers_per_queue;
+	u32 slot_use_bitmap;
+	u32 id;
+};
+
+static inline bool dlb_sn_group_full(struct dlb_sn_group *group)
+{
+	u32 mask[6] = {
+		0xffffffff,  /* 32 SNs per queue */
+		0x0000ffff,  /* 64 SNs per queue */
+		0x000000ff,  /* 128 SNs per queue */
+		0x0000000f,  /* 256 SNs per queue */
+		0x00000003,  /* 512 SNs per queue */
+		0x00000001}; /* 1024 SNs per queue */
+
+	return group->slot_use_bitmap == mask[group->mode];
+}
+
+static inline int dlb_sn_group_alloc_slot(struct dlb_sn_group *group)
+{
+	int bound[6] = {32, 16, 8, 4, 2, 1};
+	int i;
+
+	for (i = 0; i < bound[group->mode]; i++) {
+		if (!(group->slot_use_bitmap & (1 << i))) {
+			group->slot_use_bitmap |= 1 << i;
+			return i;
+		}
+	}
+
+	return -1;
+}
+
+static inline void dlb_sn_group_free_slot(struct dlb_sn_group *group, int slot)
+{
+	group->slot_use_bitmap &= ~(1 << slot);
+}
+
+static inline int dlb_sn_group_used_slots(struct dlb_sn_group *group)
+{
+	int i, cnt = 0;
+
+	for (i = 0; i < 32; i++)
+		cnt += !!(group->slot_use_bitmap & (1 << i));
+
+	return cnt;
+}
+
+struct dlb_domain {
+	struct dlb_function_resources *parent_func;
+	struct dlb_list_entry func_list;
+	struct dlb_list_head used_ldb_queues;
+	struct dlb_list_head used_ldb_ports;
+	struct dlb_list_head used_dir_pq_pairs;
+	struct dlb_list_head used_ldb_credit_pools;
+	struct dlb_list_head used_dir_credit_pools;
+	struct dlb_list_head avail_ldb_queues;
+	struct dlb_list_head avail_ldb_ports;
+	struct dlb_list_head avail_dir_pq_pairs;
+	struct dlb_list_head avail_ldb_credit_pools;
+	struct dlb_list_head avail_dir_credit_pools;
+	u32 total_hist_list_entries;
+	u32 avail_hist_list_entries;
+	u32 hist_list_entry_base;
+	u32 hist_list_entry_offset;
+	struct dlb_freelist qed_freelist;
+	struct dlb_freelist dqed_freelist;
+	struct dlb_freelist aqed_freelist;
+	struct dlb_resource_id id;
+	int num_pending_removals;
+	int num_pending_additions;
+	u8 configured;
+	u8 started;
+};
+
+struct dlb_bitmap;
+
+struct dlb_function_resources {
+	u32 num_avail_domains;
+	struct dlb_list_head avail_domains;
+	struct dlb_list_head used_domains;
+	u32 num_avail_ldb_queues;
+	struct dlb_list_head avail_ldb_queues;
+	u32 num_avail_ldb_ports;
+	struct dlb_list_head avail_ldb_ports;
+	u32 num_avail_dir_pq_pairs;
+	struct dlb_list_head avail_dir_pq_pairs;
+	struct dlb_bitmap *avail_hist_list_entries;
+	struct dlb_bitmap *avail_qed_freelist_entries;
+	struct dlb_bitmap *avail_dqed_freelist_entries;
+	struct dlb_bitmap *avail_aqed_freelist_entries;
+	u32 num_avail_ldb_credit_pools;
+	struct dlb_list_head avail_ldb_credit_pools;
+	u32 num_avail_dir_credit_pools;
+	struct dlb_list_head avail_dir_credit_pools;
+	u32 num_enabled_ldb_ports; /* (PF only) */
+	u8 locked; /* (VF only) */
+};
+
+/* After initialization, each resource in dlb_hw_resources is located in one of
+ * the following lists:
+ * -- The PF's available resources list. These are unconfigured resources owned
+ *	by the PF and not allocated to a DLB scheduling domain.
+ * -- A VF's available resources list. These are VF-owned unconfigured
+ *	resources not allocated to a DLB scheduling domain.
+ * -- A domain's available resources list. These are domain-owned unconfigured
+ *	resources.
+ * -- A domain's used resources list. These are domain-owned configured
+ *	resources.
+ *
+ * A resource moves to a new list when a VF or domain is created or destroyed,
+ * or when the resource is configured.
+ */
+struct dlb_hw_resources {
+	struct dlb_ldb_queue ldb_queues[DLB_MAX_NUM_LDB_QUEUES];
+	struct dlb_ldb_port ldb_ports[DLB_MAX_NUM_LDB_PORTS];
+	struct dlb_dir_pq_pair dir_pq_pairs[DLB_MAX_NUM_DIR_PORTS];
+	struct dlb_credit_pool ldb_credit_pools[DLB_MAX_NUM_LDB_CREDIT_POOLS];
+	struct dlb_credit_pool dir_credit_pools[DLB_MAX_NUM_DIR_CREDIT_POOLS];
+	struct dlb_sn_group sn_groups[DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS];
+};
+
+struct dlb_hw {
+	/* BAR 0 address */
+	void  *csr_kva;
+	unsigned long csr_phys_addr;
+	/* BAR 2 address */
+	void  *func_kva;
+	unsigned long func_phys_addr;
+
+	/* Resource tracking */
+	struct dlb_hw_resources rsrcs;
+	struct dlb_function_resources pf;
+	struct dlb_function_resources vf[DLB_MAX_NUM_VFS];
+	struct dlb_domain domains[DLB_MAX_NUM_DOMAINS];
+};
+
+#endif /* __DLB_HW_TYPES_H */
diff --git a/drivers/event/dlb/pf/base/dlb_mbox.h b/drivers/event/dlb/pf/base/dlb_mbox.h
new file mode 100644
index 000000000..e195526a2
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_mbox.h
@@ -0,0 +1,645 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_BASE_DLB_MBOX_H
+#define __DLB_BASE_DLB_MBOX_H
+
+#include "dlb_regs.h"
+#include "dlb_osdep_types.h"
+
+#define DLB_MBOX_INTERFACE_VERSION 1
+
+/* The PF uses its PF->VF mailbox to send responses to VF requests, as well as
+ * to send requests of its own (e.g. notifying a VF of an impending FLR).
+ * To avoid communication race conditions (e.g. the PF sends a response and
+ * then sends a request before the VF reads the response), the PF->VF mailbox
+ * is divided into two sections:
+ * - Bytes 0-47: PF responses
+ * - Bytes 48-63: PF requests
+ *
+ * Partitioning the PF->VF mailbox allows responses and requests to occupy the
+ * mailbox simultaneously.
+ */
+#define DLB_PF2VF_RESP_BYTES 48
+#define DLB_PF2VF_RESP_BASE 0
+#define DLB_PF2VF_RESP_BASE_WORD (DLB_PF2VF_RESP_BASE / 4)
+
+#define DLB_PF2VF_REQ_BYTES \
+	(DLB_FUNC_PF_PF2VF_MAILBOX_BYTES - DLB_PF2VF_RESP_BYTES)
+#define DLB_PF2VF_REQ_BASE DLB_PF2VF_RESP_BYTES
+#define DLB_PF2VF_REQ_BASE_WORD (DLB_PF2VF_REQ_BASE / 4)
+
+/* Similarly, the VF->PF mailbox is divided into two sections:
+ * - Bytes 0-239: VF requests
+ * - Bytes 240-255: VF responses
+ */
+#define DLB_VF2PF_REQ_BYTES 240
+#define DLB_VF2PF_REQ_BASE 0
+#define DLB_VF2PF_REQ_BASE_WORD (DLB_VF2PF_REQ_BASE / 4)
+
+#define DLB_VF2PF_RESP_BYTES \
+	(DLB_FUNC_VF_VF2PF_MAILBOX_BYTES - DLB_VF2PF_REQ_BYTES)
+#define DLB_VF2PF_RESP_BASE DLB_VF2PF_REQ_BYTES
+#define DLB_VF2PF_RESP_BASE_WORD (DLB_VF2PF_RESP_BASE / 4)
+
+/* VF-initiated commands */
+enum dlb_mbox_cmd_type {
+	DLB_MBOX_CMD_REGISTER,
+	DLB_MBOX_CMD_UNREGISTER,
+	DLB_MBOX_CMD_GET_NUM_RESOURCES,
+	DLB_MBOX_CMD_CREATE_SCHED_DOMAIN,
+	DLB_MBOX_CMD_RESET_SCHED_DOMAIN,
+	DLB_MBOX_CMD_CREATE_LDB_POOL,
+	DLB_MBOX_CMD_CREATE_DIR_POOL,
+	DLB_MBOX_CMD_CREATE_LDB_QUEUE,
+	DLB_MBOX_CMD_CREATE_DIR_QUEUE,
+	DLB_MBOX_CMD_CREATE_LDB_PORT,
+	DLB_MBOX_CMD_CREATE_DIR_PORT,
+	DLB_MBOX_CMD_ENABLE_LDB_PORT,
+	DLB_MBOX_CMD_DISABLE_LDB_PORT,
+	DLB_MBOX_CMD_ENABLE_DIR_PORT,
+	DLB_MBOX_CMD_DISABLE_DIR_PORT,
+	DLB_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN,
+	DLB_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN,
+	DLB_MBOX_CMD_MAP_QID,
+	DLB_MBOX_CMD_UNMAP_QID,
+	DLB_MBOX_CMD_START_DOMAIN,
+	DLB_MBOX_CMD_ENABLE_LDB_PORT_INTR,
+	DLB_MBOX_CMD_ENABLE_DIR_PORT_INTR,
+	DLB_MBOX_CMD_ARM_CQ_INTR,
+	DLB_MBOX_CMD_GET_NUM_USED_RESOURCES,
+	DLB_MBOX_CMD_INIT_CQ_SCHED_COUNT,
+	DLB_MBOX_CMD_COLLECT_CQ_SCHED_COUNT,
+	DLB_MBOX_CMD_ACK_VF_FLR_DONE,
+	DLB_MBOX_CMD_GET_SN_ALLOCATION,
+	DLB_MBOX_CMD_GET_LDB_QUEUE_DEPTH,
+	DLB_MBOX_CMD_GET_DIR_QUEUE_DEPTH,
+	DLB_MBOX_CMD_PENDING_PORT_UNMAPS,
+	DLB_MBOX_CMD_QUERY_CQ_POLL_MODE,
+	DLB_MBOX_CMD_GET_SN_OCCUPANCY,
+
+	/* NUM_QE_CMD_TYPES must be last */
+	NUM_DLB_MBOX_CMD_TYPES,
+};
+
+static const char dlb_mbox_cmd_type_strings[][128] = {
+	"DLB_MBOX_CMD_REGISTER",
+	"DLB_MBOX_CMD_UNREGISTER",
+	"DLB_MBOX_CMD_GET_NUM_RESOURCES",
+	"DLB_MBOX_CMD_CREATE_SCHED_DOMAIN",
+	"DLB_MBOX_CMD_RESET_SCHED_DOMAIN",
+	"DLB_MBOX_CMD_CREATE_LDB_POOL",
+	"DLB_MBOX_CMD_CREATE_DIR_POOL",
+	"DLB_MBOX_CMD_CREATE_LDB_QUEUE",
+	"DLB_MBOX_CMD_CREATE_DIR_QUEUE",
+	"DLB_MBOX_CMD_CREATE_LDB_PORT",
+	"DLB_MBOX_CMD_CREATE_DIR_PORT",
+	"DLB_MBOX_CMD_ENABLE_LDB_PORT",
+	"DLB_MBOX_CMD_DISABLE_LDB_PORT",
+	"DLB_MBOX_CMD_ENABLE_DIR_PORT",
+	"DLB_MBOX_CMD_DISABLE_DIR_PORT",
+	"DLB_MBOX_CMD_LDB_PORT_OWNED_BY_DOMAIN",
+	"DLB_MBOX_CMD_DIR_PORT_OWNED_BY_DOMAIN",
+	"DLB_MBOX_CMD_MAP_QID",
+	"DLB_MBOX_CMD_UNMAP_QID",
+	"DLB_MBOX_CMD_START_DOMAIN",
+	"DLB_MBOX_CMD_ENABLE_LDB_PORT_INTR",
+	"DLB_MBOX_CMD_ENABLE_DIR_PORT_INTR",
+	"DLB_MBOX_CMD_ARM_CQ_INTR",
+	"DLB_MBOX_CMD_GET_NUM_USED_RESOURCES",
+	"DLB_MBOX_CMD_INIT_CQ_SCHED_COUNT",
+	"DLB_MBOX_CMD_COLLECT_CQ_SCHED_COUNT",
+	"DLB_MBOX_CMD_ACK_VF_FLR_DONE",
+	"DLB_MBOX_CMD_GET_SN_ALLOCATION",
+	"DLB_MBOX_CMD_GET_LDB_QUEUE_DEPTH",
+	"DLB_MBOX_CMD_GET_DIR_QUEUE_DEPTH",
+	"DLB_MBOX_CMD_PENDING_PORT_UNMAPS",
+	"DLB_MBOX_CMD_QUERY_CQ_POLL_MODE",
+	"DLB_MBOX_CMD_GET_SN_OCCUPANCY",
+};
+
+/* PF-initiated commands */
+enum dlb_mbox_vf_cmd_type {
+	DLB_MBOX_VF_CMD_DOMAIN_ALERT,
+	DLB_MBOX_VF_CMD_NOTIFICATION,
+	DLB_MBOX_VF_CMD_IN_USE,
+
+	/* NUM_DLB_MBOX_VF_CMD_TYPES must be last */
+	NUM_DLB_MBOX_VF_CMD_TYPES,
+};
+
+static const char dlb_mbox_vf_cmd_type_strings[][128] = {
+	"DLB_MBOX_VF_CMD_DOMAIN_ALERT",
+	"DLB_MBOX_VF_CMD_NOTIFICATION",
+	"DLB_MBOX_VF_CMD_IN_USE",
+};
+
+#define DLB_MBOX_CMD_TYPE(hdr) \
+	(((struct dlb_mbox_req_hdr *)hdr)->type)
+#define DLB_MBOX_CMD_STRING(hdr) \
+	dlb_mbox_cmd_type_strings[DLB_MBOX_CMD_TYPE(hdr)]
+
+enum dlb_mbox_status_type {
+	DLB_MBOX_ST_SUCCESS,
+	DLB_MBOX_ST_INVALID_CMD_TYPE,
+	DLB_MBOX_ST_VERSION_MISMATCH,
+	DLB_MBOX_ST_EXPECTED_PHASE_ONE,
+	DLB_MBOX_ST_EXPECTED_PHASE_TWO,
+	DLB_MBOX_ST_INVALID_OWNER_VF,
+};
+
+static const char dlb_mbox_status_type_strings[][128] = {
+	"DLB_MBOX_ST_SUCCESS",
+	"DLB_MBOX_ST_INVALID_CMD_TYPE",
+	"DLB_MBOX_ST_VERSION_MISMATCH",
+	"DLB_MBOX_ST_EXPECTED_PHASE_ONE",
+	"DLB_MBOX_ST_EXPECTED_PHASE_TWO",
+	"DLB_MBOX_ST_INVALID_OWNER_VF",
+};
+
+#define DLB_MBOX_ST_TYPE(hdr) \
+	(((struct dlb_mbox_resp_hdr *)hdr)->status)
+#define DLB_MBOX_ST_STRING(hdr) \
+	dlb_mbox_status_type_strings[DLB_MBOX_ST_TYPE(hdr)]
+
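+/* Illustrative sketch (not part of this patch): the string helpers above are
+ * intended for logging, e.g.:
+ *
+ *	DLB_HW_INFO(hw, "mbox request: %s\n", DLB_MBOX_CMD_STRING(&req.hdr));
+ *	if (DLB_MBOX_ST_TYPE(&resp.hdr) != DLB_MBOX_ST_SUCCESS)
+ *		DLB_HW_ERR(hw, "mbox response: %s\n",
+ *			   DLB_MBOX_ST_STRING(&resp.hdr));
+ *
+ * DLB_HW_INFO/DLB_HW_ERR come from dlb_osdep.h; req and resp are hypothetical
+ * instances of the request/response structures defined below.
+ */
+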
+/* This structure is always the first field in a request structure */
+struct dlb_mbox_req_hdr {
+	u32 type;
+};
+
+/* This structure is always the first field in a response structure */
+struct dlb_mbox_resp_hdr {
+	u32 status;
+};
+
+struct dlb_mbox_register_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u16 min_interface_version;
+	u16 max_interface_version;
+};
+
+struct dlb_mbox_register_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 interface_version;
+	u8 pf_id;
+	u8 vf_id;
+	u8 is_auxiliary_vf;
+	u8 primary_vf_id;
+	u32 padding;
+};
+
+struct dlb_mbox_unregister_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_unregister_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_get_num_resources_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_get_num_resources_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u16 num_sched_domains;
+	u16 num_ldb_queues;
+	u16 num_ldb_ports;
+	u16 num_dir_ports;
+	u16 padding0;
+	u8 num_ldb_credit_pools;
+	u8 num_dir_credit_pools;
+	u32 num_atomic_inflights;
+	u32 max_contiguous_atomic_inflights;
+	u32 num_hist_list_entries;
+	u32 max_contiguous_hist_list_entries;
+	u16 num_ldb_credits;
+	u16 max_contiguous_ldb_credits;
+	u16 num_dir_credits;
+	u16 max_contiguous_dir_credits;
+	u32 padding1;
+};
+
+struct dlb_mbox_create_sched_domain_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 num_ldb_queues;
+	u32 num_ldb_ports;
+	u32 num_dir_ports;
+	u32 num_atomic_inflights;
+	u32 num_hist_list_entries;
+	u32 num_ldb_credits;
+	u32 num_dir_credits;
+	u32 num_ldb_credit_pools;
+	u32 num_dir_credit_pools;
+};
+
+struct dlb_mbox_create_sched_domain_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_reset_sched_domain_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 id;
+};
+
+struct dlb_mbox_reset_sched_domain_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+};
+
+struct dlb_mbox_create_credit_pool_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 num_credits;
+	u32 padding;
+};
+
+struct dlb_mbox_create_credit_pool_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_create_ldb_queue_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 num_sequence_numbers;
+	u32 num_qid_inflights;
+	u32 num_atomic_inflights;
+	u32 padding;
+};
+
+struct dlb_mbox_create_ldb_queue_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_create_dir_queue_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding0;
+};
+
+struct dlb_mbox_create_dir_queue_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_create_ldb_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 ldb_credit_pool_id;
+	u32 dir_credit_pool_id;
+	u64 pop_count_address;
+	u16 ldb_credit_high_watermark;
+	u16 ldb_credit_low_watermark;
+	u16 ldb_credit_quantum;
+	u16 dir_credit_high_watermark;
+	u16 dir_credit_low_watermark;
+	u16 dir_credit_quantum;
+	u32 padding0;
+	u16 cq_depth;
+	u16 cq_history_list_size;
+	u32 padding1;
+	u64 cq_base_address;
+	u64 nq_base_address;
+	u32 nq_size;
+	u32 padding2;
+};
+
+struct dlb_mbox_create_ldb_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_create_dir_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 ldb_credit_pool_id;
+	u32 dir_credit_pool_id;
+	u64 pop_count_address;
+	u16 ldb_credit_high_watermark;
+	u16 ldb_credit_low_watermark;
+	u16 ldb_credit_quantum;
+	u16 dir_credit_high_watermark;
+	u16 dir_credit_low_watermark;
+	u16 dir_credit_quantum;
+	u16 cq_depth;
+	u16 padding0;
+	u64 cq_base_address;
+	s32 queue_id;
+	u32 padding1;
+};
+
+struct dlb_mbox_create_dir_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_enable_ldb_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_enable_ldb_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_disable_ldb_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_disable_ldb_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_enable_dir_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_enable_dir_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_disable_dir_port_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_disable_dir_port_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_ldb_port_owned_by_domain_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_ldb_port_owned_by_domain_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	s32 owned;
+};
+
+struct dlb_mbox_dir_port_owned_by_domain_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_dir_port_owned_by_domain_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	s32 owned;
+};
+
+struct dlb_mbox_map_qid_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 qid;
+	u32 priority;
+	u32 padding0;
+};
+
+struct dlb_mbox_map_qid_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 id;
+};
+
+struct dlb_mbox_unmap_qid_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 qid;
+};
+
+struct dlb_mbox_unmap_qid_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_start_domain_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+};
+
+struct dlb_mbox_start_domain_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_enable_ldb_port_intr_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u16 port_id;
+	u16 thresh;
+	u16 vector;
+	u16 owner_vf;
+	u16 reserved[2];
+};
+
+struct dlb_mbox_enable_ldb_port_intr_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding0;
+};
+
+struct dlb_mbox_enable_dir_port_intr_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u16 port_id;
+	u16 thresh;
+	u16 vector;
+	u16 owner_vf;
+	u16 reserved[2];
+};
+
+struct dlb_mbox_enable_dir_port_intr_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding0;
+};
+
+struct dlb_mbox_arm_cq_intr_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 is_ldb;
+};
+
+struct dlb_mbox_arm_cq_intr_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding0;
+};
+
+/* The alert_id and aux_alert_data fields follow the format of the alerts
+ * defined in dlb_types.h. The alert_id contains an enum dlb_domain_alert_id
+ * value, and the aux_alert_data value varies depending on the alert.
+ */
+struct dlb_mbox_vf_alert_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 alert_id;
+	u32 aux_alert_data;
+};
+
+enum dlb_mbox_vf_notification_type {
+	DLB_MBOX_VF_NOTIFICATION_PRE_RESET,
+	DLB_MBOX_VF_NOTIFICATION_POST_RESET,
+
+	/* NUM_DLB_MBOX_VF_NOTIFICATION_TYPES must be last */
+	NUM_DLB_MBOX_VF_NOTIFICATION_TYPES,
+};
+
+struct dlb_mbox_vf_notification_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 notification;
+};
+
+struct dlb_mbox_vf_in_use_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_vf_in_use_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 in_use;
+};
+
+struct dlb_mbox_ack_vf_flr_done_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_ack_vf_flr_done_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 padding;
+};
+
+struct dlb_mbox_get_sn_allocation_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 group_id;
+};
+
+struct dlb_mbox_get_sn_allocation_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 num;
+};
+
+struct dlb_mbox_get_ldb_queue_depth_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 queue_id;
+	u32 padding;
+};
+
+struct dlb_mbox_get_ldb_queue_depth_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 depth;
+};
+
+struct dlb_mbox_get_dir_queue_depth_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 queue_id;
+	u32 padding;
+};
+
+struct dlb_mbox_get_dir_queue_depth_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 depth;
+};
+
+struct dlb_mbox_pending_port_unmaps_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 domain_id;
+	u32 port_id;
+	u32 padding;
+};
+
+struct dlb_mbox_pending_port_unmaps_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 num;
+};
+
+struct dlb_mbox_query_cq_poll_mode_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 padding;
+};
+
+struct dlb_mbox_query_cq_poll_mode_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 error_code;
+	u32 status;
+	u32 mode;
+};
+
+struct dlb_mbox_get_sn_occupancy_cmd_req {
+	struct dlb_mbox_req_hdr hdr;
+	u32 group_id;
+};
+
+struct dlb_mbox_get_sn_occupancy_cmd_resp {
+	struct dlb_mbox_resp_hdr hdr;
+	u32 num;
+};
+
+#endif /* __DLB_BASE_DLB_MBOX_H */
diff --git a/drivers/event/dlb/pf/base/dlb_osdep.h b/drivers/event/dlb/pf/base/dlb_osdep.h
new file mode 100644
index 000000000..8b1d22bbb
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_osdep.h
@@ -0,0 +1,350 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_OSDEP_H__
+#define __DLB_OSDEP_H__
+
+#include <string.h>
+#include <time.h>
+#include <unistd.h>
+#include <pthread.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+#include "../dlb_main.h"
+#include "dlb_resource.h"
+#include "../../dlb_user.h"
+
+#define DLB_PCI_REG_READ(reg)       rte_read32((void *)(reg))
+#define DLB_PCI_REG_WRITE(reg, val) rte_write32(val, (void *)(reg))
+
+#define DLB_CSR_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->csr_kva + (reg)))
+#define DLB_CSR_RD(hw, reg) \
+	DLB_PCI_REG_READ(DLB_CSR_REG_ADDR((hw), (reg)))
+#define DLB_CSR_WR(hw, reg, val) \
+	DLB_PCI_REG_WRITE(DLB_CSR_REG_ADDR((hw), (reg)), (val))
+
+#define DLB_FUNC_REG_ADDR(a, reg) ((void *)((uintptr_t)(a)->func_kva + (reg)))
+#define DLB_FUNC_RD(hw, reg) \
+	DLB_PCI_REG_READ(DLB_FUNC_REG_ADDR((hw), (reg)))
+#define DLB_FUNC_WR(hw, reg, val) \
+	DLB_PCI_REG_WRITE(DLB_FUNC_REG_ADDR((hw), (reg)), (val))
+
+#define READ_ONCE(x) (x)
+#define WRITE_ONCE(x, y) ((x) = (y))
+
+#define OS_READ_ONCE(x) READ_ONCE(x)
+#define OS_WRITE_ONCE(x, y) WRITE_ONCE(x, y)
+
+extern unsigned int dlb_unregister_timeout_s;
+/**
+ * os_queue_unregister_timeout_s() - timeout (in seconds) to wait for queue
+ *                                   unregister acknowledgments.
+ */
+static inline unsigned int os_queue_unregister_timeout_s(void)
+{
+	return dlb_unregister_timeout_s;
+}
+
+static inline size_t os_strlcpy(char *dst, const char *src, size_t sz)
+{
+	return rte_strlcpy(dst, src, sz);
+}
+
+/**
+ * os_udelay() - busy-wait for a number of microseconds
+ * @usecs: delay duration.
+ */
+static inline void os_udelay(int usecs)
+{
+	rte_delay_us(usecs);
+}
+
+/**
+ * os_msleep() - sleep for a number of milliseconds
+ * @msecs: delay duration.
+ */
+static inline void os_msleep(int msecs)
+{
+	rte_delay_ms(msecs);
+}
+
+/**
+ * os_curtime_s() - get the current time (in seconds)
+ */
+static inline unsigned long os_curtime_s(void)
+{
+	struct timespec tv;
+
+	clock_gettime(CLOCK_MONOTONIC, &tv);
+
+	return (unsigned long)tv.tv_sec;
+}
+
+#define DLB_PP_BASE(__is_ldb) ((__is_ldb) ? DLB_LDB_PP_BASE : DLB_DIR_PP_BASE)
+/**
+ * os_map_producer_port() - map a producer port into the caller's address space
+ * @hw: dlb_hw handle for a particular device.
+ * @port_id: port ID
+ * @is_ldb: true for load-balanced port, false for a directed port
+ *
+ * This function maps the requested producer port memory into the caller's
+ * address space.
+ *
+ * Return:
+ * Returns the base address at which the PP memory was mapped, else NULL.
+ */
+static inline void *os_map_producer_port(struct dlb_hw *hw,
+					 u8 port_id,
+					 bool is_ldb)
+{
+	uint64_t addr;
+	uint64_t pp_dma_base;
+
+	pp_dma_base = (uintptr_t)hw->func_kva + DLB_PP_BASE(is_ldb);
+	addr = (pp_dma_base + (PAGE_SIZE * port_id));
+
+	return (void *)(uintptr_t)addr;
+}
+
+/**
+ * os_unmap_producer_port() - unmap a producer port
+ * @hw: dlb_hw handle for a particular device.
+ * @addr: mapped producer port address
+ *
+ * This function undoes os_map_producer_port() by unmapping the producer port
+ * memory from the caller's address space.
+ */
+/* PFPMD - Nothing to do here, since memory was not actually mapped by us */
+static inline void os_unmap_producer_port(struct dlb_hw *hw, void *addr)
+{
+	RTE_SET_USED(hw);
+	RTE_SET_USED(addr);
+}
+
+/**
+ * os_enqueue_four_hcws() - enqueue four HCWs to DLB
+ * @hw: dlb_hw handle for a particular device.
+ * @hcw: pointer to the 64B-aligned contiguous HCW memory
+ * @addr: producer port address
+ */
+static inline void os_enqueue_four_hcws(struct dlb_hw *hw,
+					struct dlb_hcw *hcw,
+					void *addr)
+{
+	struct dlb_dev *dlb_dev;
+
+	dlb_dev = container_of(hw, struct dlb_dev, hw);
+
+	dlb_dev->enqueue_four(hcw, addr);
+}
+
+/**
+ * os_fence_hcw() - fence an HCW to ensure it arrives at the device
+ * @hw: dlb_hw handle for a particular device.
+ * @pp_addr: producer port address
+ */
+static inline void os_fence_hcw(struct dlb_hw *hw, u64 *pp_addr)
+{
+	RTE_SET_USED(hw);
+
+	/* To ensure outstanding HCWs reach the device, read the PP address. IA
+	 * memory ordering prevents reads from passing older writes, and the
+	 * mfence also ensures this.
+	 */
+	rte_mb();
+
+	*(volatile u64 *)pp_addr;
+}
+
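+/* Illustrative sketch (not part of this patch): a typical enqueue path writes
+ * a batch of HCWs and then fences so the device is guaranteed to observe
+ * them:
+ *
+ *	os_enqueue_four_hcws(hw, hcws, pp_addr);
+ *	os_fence_hcw(hw, pp_addr);
+ *
+ * hcws and pp_addr are hypothetical; pp_addr would come from
+ * os_map_producer_port().
+ */
+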
+#define DLB_ERR(dev, fmt, args...) \
+	RTE_LOG(ERR, PMD, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+
+#define DLB_INFO(dev, fmt, args...) \
+	RTE_LOG(INFO, PMD, "%s() line %u: " fmt "\n", \
+			__func__, __LINE__, ## args)
+
+#define DLB_DEBUG(dev, fmt, args...) \
+	RTE_LOG(DEBUG, PMD, "%s() line %u: " fmt "\n", \
+			__func__, __LINE__, ## args)
+
+/**
+ * DLB_HW_ERR() - log an error message
+ * @dlb: dlb_hw handle for a particular device.
+ * @...: variable string args.
+ */
+#define DLB_HW_ERR(dlb, ...) do {	\
+	RTE_SET_USED(dlb);		\
+	DLB_ERR(dlb, __VA_ARGS__);	\
+} while (0)
+
+/**
+ * DLB_HW_INFO() - log an info message
+ * @dlb: dlb_hw handle for a particular device.
+ * @...: variable string args.
+ */
+#define DLB_HW_INFO(dlb, ...) do {	\
+	RTE_SET_USED(dlb);		\
+	DLB_INFO(dlb, __VA_ARGS__);	\
+} while (0)
+
+/*** scheduling functions ***/
+
+/* The callback runs until it completes all outstanding QID->CQ
+ * map and unmap requests. To prevent deadlock, this function gives other
+ * threads a chance to grab the resource mutex and configure hardware.
+ */
+static void *dlb_complete_queue_map_unmap(void *__args)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)__args;
+	int ret;
+
+	while (1) {
+		rte_spinlock_lock(&dlb_dev->resource_mutex);
+
+		ret = dlb_finish_unmap_qid_procedures(&dlb_dev->hw);
+		ret += dlb_finish_map_qid_procedures(&dlb_dev->hw);
+
+		if (ret != 0) {
+			rte_spinlock_unlock(&dlb_dev->resource_mutex);
+			/* Relinquish the CPU so the application can process
+			 * its CQs, so this function doesn't deadlock.
+			 */
+			sched_yield();
+		} else {
+			break;
+		}
+	}
+
+	dlb_dev->worker_launched = false;
+
+	rte_spinlock_unlock(&dlb_dev->resource_mutex);
+
+	return NULL;
+}
+
+/**
+ * os_schedule_work() - launch a thread to process pending map and unmap work
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function launches a thread that will run until all pending
+ * map and unmap procedures are complete.
+ */
+static inline void os_schedule_work(struct dlb_hw *hw)
+{
+	struct dlb_dev *dlb_dev;
+	pthread_t complete_queue_map_unmap_thread;
+	int ret;
+
+	dlb_dev = container_of(hw, struct dlb_dev, hw);
+
+	ret = pthread_create(&complete_queue_map_unmap_thread,
+			     NULL,
+			     dlb_complete_queue_map_unmap,
+			     dlb_dev);
+
+	/* PFPMD_FIXME - this function should be allowed to return an error */
+	if (ret)
+		DLB_ERR(dlb_dev,
+		"Could not create queue complete map /unmap thread, err=%d\n",
+			  ret);
+	else
+		dlb_dev->worker_launched = true;
+}
+
+/**
+ * os_worker_active() - query whether the map/unmap worker thread is active
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function returns a boolean indicating whether a thread (launched by
+ * os_schedule_work()) is active. This function is used to determine
+ * whether or not to launch a worker thread.
+ */
+static inline bool os_worker_active(struct dlb_hw *hw)
+{
+	struct dlb_dev *dlb_dev;
+
+	dlb_dev = container_of(hw, struct dlb_dev, hw);
+
+	return dlb_dev->worker_launched;
+}
+
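+/* Illustrative sketch (not part of this patch): a caller in the resource
+ * layer would typically pair the two functions above, launching a worker only
+ * if none is already running:
+ *
+ *	if (!os_worker_active(hw))
+ *		os_schedule_work(hw);
+ */
+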
+/**
+ * os_notify_user_space() - notify user space
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: ID of domain to notify.
+ * @alert_id: alert ID.
+ * @aux_alert_data: additional alert data.
+ *
+ * This function notifies user space of an alert (such as a remote queue
+ * unregister or hardware alarm).
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ */
+static inline int os_notify_user_space(struct dlb_hw *hw,
+				       u32 domain_id,
+				       u64 alert_id,
+				       u64 aux_alert_data)
+{
+	RTE_SET_USED(hw);
+	RTE_SET_USED(domain_id);
+	RTE_SET_USED(alert_id);
+	RTE_SET_USED(aux_alert_data);
+
+	rte_panic("internal_error: %s should never be called for DLB PF PMD\n",
+		  __func__);
+	return -1;
+}
+
+enum dlb_dev_revision {
+	DLB_A0,
+	DLB_A1,
+	DLB_A2,
+	DLB_A3,
+	DLB_B0,
+};
+
+#include <cpuid.h>
+
+/**
+ * os_get_dev_revision() - query the device_revision
+ * @hw: dlb_hw handle for a particular device.
+ */
+static inline enum dlb_dev_revision os_get_dev_revision(struct dlb_hw *hw)
+{
+	uint32_t a, b, c, d, stepping;
+
+	RTE_SET_USED(hw);
+
+	__cpuid(0x1, a, b, c, d);
+
+	stepping = a & 0xf;
+
+	switch (stepping) {
+	case 0:
+		return DLB_A0;
+	case 1:
+		return DLB_A1;
+	case 2:
+		return DLB_A2;
+	case 3:
+		return DLB_A3;
+	default:
+		/* Treat all revisions >= 4 as B0 */
+		return DLB_B0;
+	}
+}
+
+#endif /*  __DLB_OSDEP_H__ */
diff --git a/drivers/event/dlb/pf/base/dlb_osdep_bitmap.h b/drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
new file mode 100644
index 000000000..2c95796f5
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
@@ -0,0 +1,449 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_OSDEP_BITMAP_H__
+#define __DLB_OSDEP_BITMAP_H__
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <rte_bitmap.h>
+#include <rte_string_fns.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+#include "../dlb_main.h"
+
+/*************************/
+/*** Bitmap operations ***/
+/*************************/
+struct dlb_bitmap {
+	struct rte_bitmap *map;
+	unsigned int len;
+	struct dlb_hw *hw;
+};
+
+/**
+ * dlb_bitmap_alloc() - alloc a bitmap data structure
+ * @bitmap: pointer to dlb_bitmap structure pointer.
+ * @len: number of entries in the bitmap.
+ *
+ * This function allocates a bitmap and initializes it with length @len. All
+ * entries are initially zero.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or len is 0.
+ * ENOMEM - could not allocate memory for the bitmap data structure.
+ */
+static inline int dlb_bitmap_alloc(struct dlb_hw *hw,
+				   struct dlb_bitmap **bitmap,
+				   unsigned int len)
+{
+	struct dlb_bitmap *bm;
+	void *mem;
+	uint32_t alloc_size;
+	uint32_t nbits = (uint32_t) len;
+	RTE_SET_USED(hw);
+
+	if (!bitmap || nbits == 0)
+		return -EINVAL;
+
+	/* Allocate DLB bitmap control struct */
+	bm = rte_malloc("DLB_PF",
+		sizeof(struct dlb_bitmap),
+		RTE_CACHE_LINE_SIZE);
+
+	if (!bm)
+		return -ENOMEM;
+
+	/* Allocate bitmap memory */
+	alloc_size = rte_bitmap_get_memory_footprint(nbits);
+	mem = rte_malloc("DLB_PF_BITMAP", alloc_size, RTE_CACHE_LINE_SIZE);
+	if (!mem) {
+		rte_free(bm);
+		return -ENOMEM;
+	}
+
+	bm->map = rte_bitmap_init(len, mem, alloc_size);
+	if (!bm->map) {
+		rte_free(mem);
+		rte_free(bm);
+		return -ENOMEM;
+	}
+
+	bm->len = len;
+
+	*bitmap = bm;
+
+	return 0;
+}
+
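+/* Illustrative sketch (not part of this patch): typical lifetime of a bitmap,
+ * with abbreviated error handling. The variable names are hypothetical.
+ *
+ *	struct dlb_bitmap *bm;
+ *
+ *	if (dlb_bitmap_alloc(hw, &bm, 64))
+ *		return -ENOMEM;
+ *	dlb_bitmap_fill(bm);
+ *	...
+ *	dlb_bitmap_free(bm);
+ */
+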
+/**
+ * dlb_bitmap_free() - free a previously allocated bitmap data structure
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * This function frees a bitmap that was allocated with dlb_bitmap_alloc().
+ */
+static inline void dlb_bitmap_free(struct dlb_bitmap *bitmap)
+{
+	if (!bitmap)
+		rte_panic("NULL dlb_bitmap in %s\n", __func__);
+
+	rte_free(bitmap->map);
+	rte_free(bitmap);
+}
+
+/**
+ * dlb_bitmap_fill() - fill a bitmap with all 1s
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * This function sets all bitmap values to 1.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_fill(struct dlb_bitmap *bitmap)
+{
+	unsigned int i;
+
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != bitmap->len; i++)
+		rte_bitmap_set(bitmap->map, i);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_zero() - fill a bitmap with all 0s
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * This function sets all bitmap values to 0.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_zero(struct dlb_bitmap *bitmap)
+{
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	rte_bitmap_reset(bitmap->map);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_set() - set a bitmap entry
+ * @bitmap: pointer to dlb_bitmap structure.
+ * @bit: bit index.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
+ *	    bitmap length.
+ */
+static inline int dlb_bitmap_set(struct dlb_bitmap *bitmap,
+				 unsigned int bit)
+{
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	if (bitmap->len <= bit)
+		return -EINVAL;
+
+	rte_bitmap_set(bitmap->map, bit);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_set_range() - set a range of bitmap entries
+ * @bitmap: pointer to dlb_bitmap structure.
+ * @bit: starting bit index.
+ * @len: length of the range.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
+ *	    length.
+ */
+static inline int dlb_bitmap_set_range(struct dlb_bitmap *bitmap,
+				       unsigned int bit,
+				       unsigned int len)
+{
+	unsigned int i;
+
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	if (bitmap->len <= bit)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != len; i++)
+		rte_bitmap_set(bitmap->map, bit + i);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_clear() - clear a bitmap entry
+ * @bitmap: pointer to dlb_bitmap structure.
+ * @bit: bit index.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized, or bit is larger than the
+ *	    bitmap length.
+ */
+static inline int dlb_bitmap_clear(struct dlb_bitmap *bitmap,
+				   unsigned int bit)
+{
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	if (bitmap->len <= bit)
+		return -EINVAL;
+
+	rte_bitmap_clear(bitmap->map, bit);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_clear_range() - clear a range of bitmap entries
+ * @bitmap: pointer to dlb_bitmap structure.
+ * @bit: starting bit index.
+ * @len: length of the range.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized, or the range exceeds the bitmap
+ *	    length.
+ */
+static inline int dlb_bitmap_clear_range(struct dlb_bitmap *bitmap,
+					 unsigned int bit,
+					 unsigned int len)
+{
+	unsigned int i;
+
+	if (!bitmap || !bitmap->map)
+		return -EINVAL;
+
+	if (bitmap->len <= bit)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != len; i++)
+		rte_bitmap_clear(bitmap->map, bit + i);
+
+	return 0;
+}
+
+/**
+ * dlb_bitmap_find_set_bit_range() - find a range of set bits
+ * @bitmap: pointer to dlb_bitmap structure.
+ * @len: length of the range.
+ *
+ * This function looks for a range of set bits of length @len.
+ *
+ * Return:
+ * Returns the base bit index upon success, < 0 otherwise.
+ *
+ * Errors:
+ * ENOENT - unable to find a length *len* range of set bits.
+ * EINVAL - bitmap is NULL or is uninitialized, or len is invalid.
+ */
+static inline int dlb_bitmap_find_set_bit_range(struct dlb_bitmap *bitmap,
+						unsigned int len)
+{
+	unsigned int i, j = 0;
+
+	if (!bitmap || !bitmap->map || len == 0)
+		return -EINVAL;
+
+	if (bitmap->len < len)
+		return -ENOENT;
+
+	/* TODO - optimize */
+	for (i = 0; i != bitmap->len; i++) {
+		if (rte_bitmap_get(bitmap->map, i)) {
+			if (++j == len)
+				return i - j + 1;
+		} else {
+			j = 0;
+		}
+	}
+
+	/* No set-bit range of length len was found */
+	return -ENOENT;
+}
+
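+/* Illustrative sketch (not part of this patch): contiguous hardware resources
+ * can be carved out of a free-bitmap by locating a set-bit range and then
+ * clearing it:
+ *
+ *	int base = dlb_bitmap_find_set_bit_range(bitmap, len);
+ *
+ *	if (base < 0)
+ *		return base;
+ *	dlb_bitmap_clear_range(bitmap, base, len);
+ *
+ * Whether the driver allocates this way is an assumption; the two functions
+ * themselves are defined in this file.
+ */
+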
+/**
+ * dlb_bitmap_find_set_bit() - find the first set bit
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * This function looks for a single set bit.
+ *
+ * Return:
+ * Returns the base bit index upon success, < 0 otherwise.
+ *
+ * Errors:
+ * ENOENT - the bitmap contains no set bits.
+ * EINVAL - bitmap is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_find_set_bit(struct dlb_bitmap *bitmap)
+{
+	unsigned int i;
+
+	if (!bitmap)
+		return -EINVAL;
+
+	if (!bitmap->map)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != bitmap->len; i++) {
+		if (rte_bitmap_get(bitmap->map, i))
+			return i;
+	}
+
+	return -ENOENT;
+}
+
+/**
+ * dlb_bitmap_count() - returns the number of set bits
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * This function counts the bitmap's set bits.
+ *
+ * Return:
+ * Returns the number of set bits upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_count(struct dlb_bitmap *bitmap)
+{
+	int weight = 0;
+	unsigned int i;
+
+	if (!bitmap)
+		return -EINVAL;
+
+	if (!bitmap->map)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != bitmap->len; i++) {
+		if (rte_bitmap_get(bitmap->map, i))
+			weight++;
+	}
+	return weight;
+}
+
+/**
+ * dlb_bitmap_longest_set_range() - returns longest contiguous range of set bits
+ * @bitmap: pointer to dlb_bitmap structure.
+ *
+ * Return:
+ * Returns the bitmap's longest contiguous range of set bits upon success,
+ * <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - bitmap is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_longest_set_range(struct dlb_bitmap *bitmap)
+{
+	int max_len = 0, len = 0;
+	unsigned int i;
+
+	if (!bitmap)
+		return -EINVAL;
+
+	if (!bitmap->map)
+		return -EINVAL;
+
+	/* TODO - optimize */
+	for (i = 0; i != bitmap->len; i++) {
+		if (rte_bitmap_get(bitmap->map, i)) {
+			len++;
+		} else {
+			if (len > max_len)
+				max_len = len;
+			len = 0;
+		}
+	}
+
+	if (len > max_len)
+		max_len = len;
+
+	return max_len;
+}
+
+/**
+ * dlb_bitmap_or() - store the logical 'or' of two bitmaps into a third
+ * @dest: pointer to dlb_bitmap structure, which will contain the results of
+ *	  the 'or' of src1 and src2.
+ * @src1: pointer to dlb_bitmap structure, will be 'or'ed with src2.
+ * @src2: pointer to dlb_bitmap structure, will be 'or'ed with src1.
+ *
+ * This function 'or's two bitmaps together and stores the result in a third
+ * bitmap. The source and destination bitmaps can be the same.
+ *
+ * Return:
+ * Returns the number of set bits upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - One of the bitmaps is NULL or is uninitialized.
+ */
+static inline int dlb_bitmap_or(struct dlb_bitmap *dest,
+				struct dlb_bitmap *src1,
+				struct dlb_bitmap *src2)
+{
+	unsigned int i, min;
+	int numset = 0;
+
+	if (!dest || !dest->map ||
+	    !src1 || !src1->map ||
+	    !src2 || !src2->map)
+		return -EINVAL;
+
+	min = dest->len;
+	min = (min > src1->len) ? src1->len : min;
+	min = (min > src2->len) ? src2->len : min;
+
+	for (i = 0; i != min; i++) {
+		if (rte_bitmap_get(src1->map, i) ||
+				rte_bitmap_get(src2->map, i)) {
+			rte_bitmap_set(dest->map, i);
+			numset++;
+		} else {
+			rte_bitmap_clear(dest->map, i);
+		}
+	}
+
+	return numset;
+}
+
+#endif /*  __DLB_OSDEP_BITMAP_H__ */
diff --git a/drivers/event/dlb/pf/base/dlb_osdep_list.h b/drivers/event/dlb/pf/base/dlb_osdep_list.h
new file mode 100644
index 000000000..a53b3626e
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_osdep_list.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_OSDEP_LIST_H__
+#define __DLB_OSDEP_LIST_H__
+
+#include <rte_tailq.h>
+
+struct dlb_list_entry {
+	TAILQ_ENTRY(dlb_list_entry) node;
+};
+
+/* Dummy - just a struct definition */
+TAILQ_HEAD(dlb_list_head, dlb_list_entry);
+
+/* =================
+ * TAILQ Supplements
+ * =================
+ */
+
+#ifndef TAILQ_FOREACH_ENTRY
+#define TAILQ_FOREACH_ENTRY(ptr, head, name, iter)		\
+	for ((iter) = TAILQ_FIRST(&head);			\
+	    (iter)						\
+		&& (ptr = container_of(iter, typeof(*(ptr)), name)); \
+	    (iter) = TAILQ_NEXT((iter), node))
+#endif
+
+#ifndef TAILQ_FOREACH_ENTRY_SAFE
+#define TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, iter, tvar)	\
+	for ((iter) = TAILQ_FIRST(&head);			\
+	    (iter) &&						\
+		(ptr = container_of(iter, typeof(*(ptr)), name)) &&\
+		((tvar) = TAILQ_NEXT((iter), node), 1);	\
+	    (iter) = (tvar))
+#endif
+
+/* =========
+ * DLB Lists
+ * =========
+ */
+
+/**
+ * dlb_list_init_head() - initialize the head of a list
+ * @head: list head
+ */
+static inline void dlb_list_init_head(struct dlb_list_head *head)
+{
+	TAILQ_INIT(head);
+}
+
+/**
+ * dlb_list_add() - add an entry to a list
+ * @head: new entry will be added after this list header
+ * @entry: new list entry to be added
+ */
+static inline void dlb_list_add(struct dlb_list_head *head,
+				struct dlb_list_entry *entry)
+{
+	TAILQ_INSERT_TAIL(head, entry, node);
+}
+
+/**
+ * dlb_list_del() - delete an entry from a list
+ * @head: list head
+ * @entry: list entry to be deleted
+ */
+static inline void dlb_list_del(struct dlb_list_head *head,
+				struct dlb_list_entry *entry)
+{
+	TAILQ_REMOVE(head, entry, node);
+}
+
+/**
+ * dlb_list_empty() - check if a list is empty
+ * @head: list head
+ *
+ * Return:
+ * Returns true if the list is empty, false otherwise.
+ */
+static inline bool dlb_list_empty(struct dlb_list_head *head)
+{
+	return TAILQ_EMPTY(head);
+}
+
+/**
+ * dlb_list_splice() - splice one list onto another
+ * @src_head: list to be spliced onto head
+ * @head: list onto which src_head is spliced
+ */
+static inline void dlb_list_splice(struct dlb_list_head *src_head,
+				   struct dlb_list_head *head)
+{
+	TAILQ_CONCAT(head, src_head, node);
+}
+
+/**
+ * DLB_LIST_HEAD() - retrieve the head of the list
+ * @head: list head
+ * @type: type of the list variable
+ * @name: name of the dlb_list within the struct
+ */
+#define DLB_LIST_HEAD(head, type, name)				\
+	(TAILQ_FIRST(&head) ?					\
+		container_of(TAILQ_FIRST(&head), type, name) :	\
+		NULL)
+
+/**
+ * DLB_LIST_FOR_EACH() - iterate over a list
+ * @head: list head
+ * @ptr: pointer to struct containing a struct dlb_list_entry
+ * @name: name of the dlb_list_entry field within the containing struct
+ * @tmp_iter: iterator variable
+ */
+#define DLB_LIST_FOR_EACH(head, ptr, name, tmp_iter) \
+	TAILQ_FOREACH_ENTRY(ptr, head, name, tmp_iter)
+
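+/* Illustrative sketch (not part of this patch): iterating a list of
+ * hypothetical elements that embed a struct dlb_list_entry named list_node:
+ *
+ *	struct dlb_list_entry *iter;
+ *	struct my_elem *ptr;
+ *
+ *	DLB_LIST_FOR_EACH(head, ptr, list_node, iter)
+ *		process(ptr);
+ *
+ * my_elem, list_node and process() are hypothetical; head is a
+ * struct dlb_list_head variable.
+ */
+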
+/**
+ * DLB_LIST_FOR_EACH_SAFE() - iterate over a list. This loop works even if
+ * an element is removed from the list while processing it.
+ * @ptr: pointer to struct containing a struct dlb_list_entry
+ * @ptr_tmp: pointer to struct containing a struct dlb_list_entry (temporary)
+ * @head: list head
+ * @name: name of the dlb_list_entry field within the containing struct
+ * @tmp_iter: iterator variable
+ * @saf_iter: iterator variable (temporary)
+ */
+#define DLB_LIST_FOR_EACH_SAFE(head, ptr, ptr_tmp, name, tmp_iter, saf_iter) \
+	TAILQ_FOREACH_ENTRY_SAFE(ptr, head, name, tmp_iter, saf_iter)
+
+#endif /*  __DLB_OSDEP_LIST_H__ */
diff --git a/drivers/event/dlb/pf/base/dlb_osdep_types.h b/drivers/event/dlb/pf/base/dlb_osdep_types.h
new file mode 100644
index 000000000..2e9d7d8d0
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_osdep_types.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_OSDEP_TYPES_H
+#define __DLB_OSDEP_TYPES_H
+
+#include <linux/types.h>
+
+#include <inttypes.h>
+#include <ctype.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+
+/* Types for user mode PF PMD */
+typedef uint8_t         u8;
+typedef int8_t          s8;
+typedef uint16_t        u16;
+typedef int16_t         s16;
+typedef uint32_t        u32;
+typedef int32_t         s32;
+typedef uint64_t        u64;
+
+#define __iomem
+
+/* END types for user mode PF PMD */
+
+#endif /* __DLB_OSDEP_TYPES_H */
diff --git a/drivers/event/dlb/pf/base/dlb_regs.h b/drivers/event/dlb/pf/base/dlb_regs.h
new file mode 100644
index 000000000..3b0be239a
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_regs.h
@@ -0,0 +1,2646 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_REGS_H
+#define __DLB_REGS_H
+
+#include "dlb_osdep_types.h"
+
+#define DLB_FUNC_PF_VF2PF_MAILBOX_BYTES 256
+#define DLB_FUNC_PF_VF2PF_MAILBOX(vf_id, x) \
+	(0x1000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_VF2PF_MAILBOX_RST 0x0
+union dlb_func_pf_vf2pf_mailbox {
+	struct {
+		u32 msg : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_VF2PF_MAILBOX_ISR(vf_id) \
+	(0x1f00 + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_VF2PF_MAILBOX_ISR_RST 0x0
+union dlb_func_pf_vf2pf_mailbox_isr {
+	struct {
+		u32 vf_isr : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_VF2PF_FLR_ISR(vf_id) \
+	(0x1f04 + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_VF2PF_FLR_ISR_RST 0x0
+union dlb_func_pf_vf2pf_flr_isr {
+	struct {
+		u32 vf_isr : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_VF2PF_ISR_PEND(vf_id) \
+	(0x1f10 + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_VF2PF_ISR_PEND_RST 0x0
+union dlb_func_pf_vf2pf_isr_pend {
+	struct {
+		u32 isr_pend : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_PF2VF_MAILBOX_BYTES 64
+#define DLB_FUNC_PF_PF2VF_MAILBOX(vf_id, x) \
+	(0x2000 + 0x4 * (x) + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_PF2VF_MAILBOX_RST 0x0
+union dlb_func_pf_pf2vf_mailbox {
+	struct {
+		u32 msg : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id) \
+	(0x2f00 + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_PF2VF_MAILBOX_ISR_RST 0x0
+union dlb_func_pf_pf2vf_mailbox_isr {
+	struct {
+		u32 isr : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_PF_VF_RESET_IN_PROGRESS(vf_id) \
+	(0x3000 + (vf_id) * 0x10000)
+#define DLB_FUNC_PF_VF_RESET_IN_PROGRESS_RST 0xffff
+union dlb_func_pf_vf_reset_in_progress {
+	struct {
+		u32 reset_in_progress : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_MSIX_MEM_VECTOR_CTRL(x) \
+	(0x100000c + (x) * 0x10)
+#define DLB_MSIX_MEM_VECTOR_CTRL_RST 0x1
+union dlb_msix_mem_vector_ctrl {
+	struct {
+		u32 vec_mask : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_TOTAL_VAS 0x124
+#define DLB_SYS_TOTAL_VAS_RST 0x20
+union dlb_sys_total_vas {
+	struct {
+		u32 total_vas : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_PF_SYND2 0x508
+#define DLB_SYS_ALARM_PF_SYND2_RST 0x0
+union dlb_sys_alarm_pf_synd2 {
+	struct {
+		u32 lock_id : 16;
+		u32 meas : 1;
+		u32 debug : 7;
+		u32 cq_pop : 1;
+		u32 qe_uhl : 1;
+		u32 qe_orsp : 1;
+		u32 qe_valid : 1;
+		u32 cq_int_rearm : 1;
+		u32 dsi_error : 1;
+		u32 rsvd0 : 2;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_PF_SYND1 0x504
+#define DLB_SYS_ALARM_PF_SYND1_RST 0x0
+union dlb_sys_alarm_pf_synd1 {
+	struct {
+		u32 dsi : 16;
+		u32 qid : 8;
+		u32 qtype : 2;
+		u32 qpri : 3;
+		u32 msg_type : 3;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_PF_SYND0 0x500
+#define DLB_SYS_ALARM_PF_SYND0_RST 0x0
+union dlb_sys_alarm_pf_synd0 {
+	struct {
+		u32 syndrome : 8;
+		u32 rtype : 2;
+		u32 rsvd0 : 2;
+		u32 from_dmv : 1;
+		u32 is_ldb : 1;
+		u32 cls : 2;
+		u32 aid : 6;
+		u32 unit : 4;
+		u32 source : 4;
+		u32 more : 1;
+		u32 valid : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_LDB_VPP_V(x) \
+	(0xf00 + (x) * 0x1000)
+#define DLB_SYS_VF_LDB_VPP_V_RST 0x0
+union dlb_sys_vf_ldb_vpp_v {
+	struct {
+		u32 vpp_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_LDB_VPP2PP(x) \
+	(0xf08 + (x) * 0x1000)
+#define DLB_SYS_VF_LDB_VPP2PP_RST 0x0
+union dlb_sys_vf_ldb_vpp2pp {
+	struct {
+		u32 pp : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_DIR_VPP_V(x) \
+	(0xf10 + (x) * 0x1000)
+#define DLB_SYS_VF_DIR_VPP_V_RST 0x0
+union dlb_sys_vf_dir_vpp_v {
+	struct {
+		u32 vpp_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_DIR_VPP2PP(x) \
+	(0xf18 + (x) * 0x1000)
+#define DLB_SYS_VF_DIR_VPP2PP_RST 0x0
+union dlb_sys_vf_dir_vpp2pp {
+	struct {
+		u32 pp : 7;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_LDB_VQID_V(x) \
+	(0xf20 + (x) * 0x1000)
+#define DLB_SYS_VF_LDB_VQID_V_RST 0x0
+union dlb_sys_vf_ldb_vqid_v {
+	struct {
+		u32 vqid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_LDB_VQID2QID(x) \
+	(0xf28 + (x) * 0x1000)
+#define DLB_SYS_VF_LDB_VQID2QID_RST 0x0
+union dlb_sys_vf_ldb_vqid2qid {
+	struct {
+		u32 qid : 7;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_QID2VQID(x) \
+	(0xf2c + (x) * 0x1000)
+#define DLB_SYS_LDB_QID2VQID_RST 0x0
+union dlb_sys_ldb_qid2vqid {
+	struct {
+		u32 vqid : 7;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_DIR_VQID_V(x) \
+	(0xf30 + (x) * 0x1000)
+#define DLB_SYS_VF_DIR_VQID_V_RST 0x0
+union dlb_sys_vf_dir_vqid_v {
+	struct {
+		u32 vqid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_VF_DIR_VQID2QID(x) \
+	(0xf38 + (x) * 0x1000)
+#define DLB_SYS_VF_DIR_VQID2QID_RST 0x0
+union dlb_sys_vf_dir_vqid2qid {
+	struct {
+		u32 qid : 7;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_VASQID_V(x) \
+	(0xf60 + (x) * 0x1000)
+#define DLB_SYS_LDB_VASQID_V_RST 0x0
+union dlb_sys_ldb_vasqid_v {
+	struct {
+		u32 vasqid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_VASQID_V(x) \
+	(0xf68 + (x) * 0x1000)
+#define DLB_SYS_DIR_VASQID_V_RST 0x0
+union dlb_sys_dir_vasqid_v {
+	struct {
+		u32 vasqid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_WBUF_DIR_FLAGS(x) \
+	(0xf70 + (x) * 0x1000)
+#define DLB_SYS_WBUF_DIR_FLAGS_RST 0x0
+union dlb_sys_wbuf_dir_flags {
+	struct {
+		u32 wb_v : 4;
+		u32 cl : 1;
+		u32 busy : 1;
+		u32 opt : 1;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_WBUF_LDB_FLAGS(x) \
+	(0xf78 + (x) * 0x1000)
+#define DLB_SYS_WBUF_LDB_FLAGS_RST 0x0
+union dlb_sys_wbuf_ldb_flags {
+	struct {
+		u32 wb_v : 4;
+		u32 cl : 1;
+		u32 busy : 1;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_VF_SYND2(x) \
+	(0x8000018 + (x) * 0x1000)
+#define DLB_SYS_ALARM_VF_SYND2_RST 0x0
+union dlb_sys_alarm_vf_synd2 {
+	struct {
+		u32 lock_id : 16;
+		u32 meas : 1;
+		u32 debug : 7;
+		u32 cq_pop : 1;
+		u32 qe_uhl : 1;
+		u32 qe_orsp : 1;
+		u32 qe_valid : 1;
+		u32 cq_int_rearm : 1;
+		u32 dsi_error : 1;
+		u32 rsvd0 : 2;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_VF_SYND1(x) \
+	(0x8000014 + (x) * 0x1000)
+#define DLB_SYS_ALARM_VF_SYND1_RST 0x0
+union dlb_sys_alarm_vf_synd1 {
+	struct {
+		u32 dsi : 16;
+		u32 qid : 8;
+		u32 qtype : 2;
+		u32 qpri : 3;
+		u32 msg_type : 3;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_VF_SYND0(x) \
+	(0x8000010 + (x) * 0x1000)
+#define DLB_SYS_ALARM_VF_SYND0_RST 0x0
+union dlb_sys_alarm_vf_synd0 {
+	struct {
+		u32 syndrome : 8;
+		u32 rtype : 2;
+		u32 rsvd0 : 2;
+		u32 from_dmv : 1;
+		u32 is_ldb : 1;
+		u32 cls : 2;
+		u32 aid : 6;
+		u32 unit : 4;
+		u32 source : 4;
+		u32 more : 1;
+		u32 valid : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_QID_V(x) \
+	(0x8000034 + (x) * 0x1000)
+#define DLB_SYS_LDB_QID_V_RST 0x0
+union dlb_sys_ldb_qid_v {
+	struct {
+		u32 qid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_QID_CFG_V(x) \
+	(0x8000030 + (x) * 0x1000)
+#define DLB_SYS_LDB_QID_CFG_V_RST 0x0
+union dlb_sys_ldb_qid_cfg_v {
+	struct {
+		u32 sn_cfg_v : 1;
+		u32 fid_cfg_v : 1;
+		u32 rsvd0 : 30;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_QID_V(x) \
+	(0x8000040 + (x) * 0x1000)
+#define DLB_SYS_DIR_QID_V_RST 0x0
+union dlb_sys_dir_qid_v {
+	struct {
+		u32 qid_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_POOL_ENBLD(x) \
+	(0x8000070 + (x) * 0x1000)
+#define DLB_SYS_LDB_POOL_ENBLD_RST 0x0
+union dlb_sys_ldb_pool_enbld {
+	struct {
+		u32 pool_enabled : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_POOL_ENBLD(x) \
+	(0x8000080 + (x) * 0x1000)
+#define DLB_SYS_DIR_POOL_ENBLD_RST 0x0
+union dlb_sys_dir_pool_enbld {
+	struct {
+		u32 pool_enabled : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP2VPP(x) \
+	(0x8000090 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP2VPP_RST 0x0
+union dlb_sys_ldb_pp2vpp {
+	struct {
+		u32 vpp : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP2VPP(x) \
+	(0x8000094 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP2VPP_RST 0x0
+union dlb_sys_dir_pp2vpp {
+	struct {
+		u32 vpp : 7;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP_V(x) \
+	(0x8000128 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP_V_RST 0x0
+union dlb_sys_ldb_pp_v {
+	struct {
+		u32 pp_v : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ_ISR(x) \
+	(0x8000124 + (x) * 0x1000)
+#define DLB_SYS_LDB_CQ_ISR_RST 0x0
+/* CQ Interrupt Modes */
+#define DLB_CQ_ISR_MODE_DIS  0
+#define DLB_CQ_ISR_MODE_MSI  1
+#define DLB_CQ_ISR_MODE_MSIX 2
+union dlb_sys_ldb_cq_isr {
+	struct {
+		u32 vector : 6;
+		u32 vf : 4;
+		u32 en_code : 2;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ2VF_PF(x) \
+	(0x8000120 + (x) * 0x1000)
+#define DLB_SYS_LDB_CQ2VF_PF_RST 0x0
+union dlb_sys_ldb_cq2vf_pf {
+	struct {
+		u32 vf : 4;
+		u32 is_pf : 1;
+		u32 rsvd0 : 27;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP2VAS(x) \
+	(0x800011c + (x) * 0x1000)
+#define DLB_SYS_LDB_PP2VAS_RST 0x0
+union dlb_sys_ldb_pp2vas {
+	struct {
+		u32 vas : 5;
+		u32 rsvd0 : 27;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP2LDBPOOL(x) \
+	(0x8000118 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP2LDBPOOL_RST 0x0
+union dlb_sys_ldb_pp2ldbpool {
+	struct {
+		u32 ldbpool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP2DIRPOOL(x) \
+	(0x8000114 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP2DIRPOOL_RST 0x0
+union dlb_sys_ldb_pp2dirpool {
+	struct {
+		u32 dirpool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP2VF_PF(x) \
+	(0x8000110 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP2VF_PF_RST 0x0
+union dlb_sys_ldb_pp2vf_pf {
+	struct {
+		u32 vf : 4;
+		u32 is_pf : 1;
+		u32 rsvd0 : 27;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP_ADDR_U(x) \
+	(0x800010c + (x) * 0x1000)
+#define DLB_SYS_LDB_PP_ADDR_U_RST 0x0
+union dlb_sys_ldb_pp_addr_u {
+	struct {
+		u32 addr_u : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_PP_ADDR_L(x) \
+	(0x8000108 + (x) * 0x1000)
+#define DLB_SYS_LDB_PP_ADDR_L_RST 0x0
+union dlb_sys_ldb_pp_addr_l {
+	struct {
+		u32 rsvd0 : 7;
+		u32 addr_l : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ_ADDR_U(x) \
+	(0x8000104 + (x) * 0x1000)
+#define DLB_SYS_LDB_CQ_ADDR_U_RST 0x0
+union dlb_sys_ldb_cq_addr_u {
+	struct {
+		u32 addr_u : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ_ADDR_L(x) \
+	(0x8000100 + (x) * 0x1000)
+#define DLB_SYS_LDB_CQ_ADDR_L_RST 0x0
+union dlb_sys_ldb_cq_addr_l {
+	struct {
+		u32 rsvd0 : 6;
+		u32 addr_l : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP_V(x) \
+	(0x8000228 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP_V_RST 0x0
+union dlb_sys_dir_pp_v {
+	struct {
+		u32 pp_v : 1;
+		u32 mb_dm : 1;
+		u32 rsvd0 : 30;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_ISR(x) \
+	(0x8000224 + (x) * 0x1000)
+#define DLB_SYS_DIR_CQ_ISR_RST 0x0
+union dlb_sys_dir_cq_isr {
+	struct {
+		u32 vector : 6;
+		u32 vf : 4;
+		u32 en_code : 2;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ2VF_PF(x) \
+	(0x8000220 + (x) * 0x1000)
+#define DLB_SYS_DIR_CQ2VF_PF_RST 0x0
+union dlb_sys_dir_cq2vf_pf {
+	struct {
+		u32 vf : 4;
+		u32 is_pf : 1;
+		u32 rsvd0 : 27;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP2VAS(x) \
+	(0x800021c + (x) * 0x1000)
+#define DLB_SYS_DIR_PP2VAS_RST 0x0
+union dlb_sys_dir_pp2vas {
+	struct {
+		u32 vas : 5;
+		u32 rsvd0 : 27;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP2LDBPOOL(x) \
+	(0x8000218 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP2LDBPOOL_RST 0x0
+union dlb_sys_dir_pp2ldbpool {
+	struct {
+		u32 ldbpool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP2DIRPOOL(x) \
+	(0x8000214 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP2DIRPOOL_RST 0x0
+union dlb_sys_dir_pp2dirpool {
+	struct {
+		u32 dirpool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP2VF_PF(x) \
+	(0x8000210 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP2VF_PF_RST 0x0
+union dlb_sys_dir_pp2vf_pf {
+	struct {
+		u32 vf : 4;
+		u32 is_pf : 1;
+		u32 is_hw_dsi : 1;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP_ADDR_U(x) \
+	(0x800020c + (x) * 0x1000)
+#define DLB_SYS_DIR_PP_ADDR_U_RST 0x0
+union dlb_sys_dir_pp_addr_u {
+	struct {
+		u32 addr_u : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_PP_ADDR_L(x) \
+	(0x8000208 + (x) * 0x1000)
+#define DLB_SYS_DIR_PP_ADDR_L_RST 0x0
+union dlb_sys_dir_pp_addr_l {
+	struct {
+		u32 rsvd0 : 7;
+		u32 addr_l : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_ADDR_U(x) \
+	(0x8000204 + (x) * 0x1000)
+#define DLB_SYS_DIR_CQ_ADDR_U_RST 0x0
+union dlb_sys_dir_cq_addr_u {
+	struct {
+		u32 addr_u : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_ADDR_L(x) \
+	(0x8000200 + (x) * 0x1000)
+#define DLB_SYS_DIR_CQ_ADDR_L_RST 0x0
+union dlb_sys_dir_cq_addr_l {
+	struct {
+		u32 rsvd0 : 6;
+		u32 addr_l : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_INGRESS_ALARM_ENBL 0x300
+#define DLB_SYS_INGRESS_ALARM_ENBL_RST 0x0
+union dlb_sys_ingress_alarm_enbl {
+	struct {
+		u32 illegal_hcw : 1;
+		u32 illegal_pp : 1;
+		u32 disabled_pp : 1;
+		u32 illegal_qid : 1;
+		u32 disabled_qid : 1;
+		u32 illegal_ldb_qid_cfg : 1;
+		u32 illegal_cqid : 1;
+		u32 rsvd0 : 25;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_CQ_MODE 0x30c
+#define DLB_SYS_CQ_MODE_RST 0x0
+union dlb_sys_cq_mode {
+	struct {
+		u32 ldb_cq64 : 1;
+		u32 dir_cq64 : 1;
+		u32 rsvd0 : 30;
+	} field;
+	u32 val;
+};
+
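+/* Illustrative sketch (not part of this patch): the register unions in this
+ * file support read-modify-write of individual bit-fields without hand-coded
+ * masks, e.g. using the CSR accessors from dlb_osdep.h:
+ *
+ *	union dlb_sys_cq_mode r;
+ *
+ *	r.val = DLB_CSR_RD(hw, DLB_SYS_CQ_MODE);
+ *	r.field.ldb_cq64 = 1;
+ *	DLB_CSR_WR(hw, DLB_SYS_CQ_MODE, r.val);
+ */
+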
+#define DLB_SYS_FUNC_VF_BAR_DSBL(x) \
+	(0x310 + (x) * 0x4)
+#define DLB_SYS_FUNC_VF_BAR_DSBL_RST 0x0
+union dlb_sys_func_vf_bar_dsbl {
+	struct {
+		u32 func_vf_bar_dis : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_MSIX_ACK 0x400
+#define DLB_SYS_MSIX_ACK_RST 0x0
+union dlb_sys_msix_ack {
+	struct {
+		u32 msix_0_ack : 1;
+		u32 msix_1_ack : 1;
+		u32 msix_2_ack : 1;
+		u32 msix_3_ack : 1;
+		u32 msix_4_ack : 1;
+		u32 msix_5_ack : 1;
+		u32 msix_6_ack : 1;
+		u32 msix_7_ack : 1;
+		u32 msix_8_ack : 1;
+		u32 rsvd0 : 23;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_MSIX_PASSTHRU 0x404
+#define DLB_SYS_MSIX_PASSTHRU_RST 0x0
+union dlb_sys_msix_passthru {
+	struct {
+		u32 msix_0_passthru : 1;
+		u32 msix_1_passthru : 1;
+		u32 msix_2_passthru : 1;
+		u32 msix_3_passthru : 1;
+		u32 msix_4_passthru : 1;
+		u32 msix_5_passthru : 1;
+		u32 msix_6_passthru : 1;
+		u32 msix_7_passthru : 1;
+		u32 msix_8_passthru : 1;
+		u32 rsvd0 : 23;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_MSIX_MODE 0x408
+#define DLB_SYS_MSIX_MODE_RST 0x0
+/* MSI-X Modes */
+#define DLB_MSIX_MODE_PACKED     0
+#define DLB_MSIX_MODE_COMPRESSED 1
+union dlb_sys_msix_mode {
+	struct {
+		u32 mode : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_31_0_OCC_INT_STS 0x440
+#define DLB_SYS_DIR_CQ_31_0_OCC_INT_STS_RST 0x0
+union dlb_sys_dir_cq_31_0_occ_int_sts {
+	struct {
+		u32 cq_0_occ_int : 1;
+		u32 cq_1_occ_int : 1;
+		u32 cq_2_occ_int : 1;
+		u32 cq_3_occ_int : 1;
+		u32 cq_4_occ_int : 1;
+		u32 cq_5_occ_int : 1;
+		u32 cq_6_occ_int : 1;
+		u32 cq_7_occ_int : 1;
+		u32 cq_8_occ_int : 1;
+		u32 cq_9_occ_int : 1;
+		u32 cq_10_occ_int : 1;
+		u32 cq_11_occ_int : 1;
+		u32 cq_12_occ_int : 1;
+		u32 cq_13_occ_int : 1;
+		u32 cq_14_occ_int : 1;
+		u32 cq_15_occ_int : 1;
+		u32 cq_16_occ_int : 1;
+		u32 cq_17_occ_int : 1;
+		u32 cq_18_occ_int : 1;
+		u32 cq_19_occ_int : 1;
+		u32 cq_20_occ_int : 1;
+		u32 cq_21_occ_int : 1;
+		u32 cq_22_occ_int : 1;
+		u32 cq_23_occ_int : 1;
+		u32 cq_24_occ_int : 1;
+		u32 cq_25_occ_int : 1;
+		u32 cq_26_occ_int : 1;
+		u32 cq_27_occ_int : 1;
+		u32 cq_28_occ_int : 1;
+		u32 cq_29_occ_int : 1;
+		u32 cq_30_occ_int : 1;
+		u32 cq_31_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_63_32_OCC_INT_STS 0x444
+#define DLB_SYS_DIR_CQ_63_32_OCC_INT_STS_RST 0x0
+union dlb_sys_dir_cq_63_32_occ_int_sts {
+	struct {
+		u32 cq_32_occ_int : 1;
+		u32 cq_33_occ_int : 1;
+		u32 cq_34_occ_int : 1;
+		u32 cq_35_occ_int : 1;
+		u32 cq_36_occ_int : 1;
+		u32 cq_37_occ_int : 1;
+		u32 cq_38_occ_int : 1;
+		u32 cq_39_occ_int : 1;
+		u32 cq_40_occ_int : 1;
+		u32 cq_41_occ_int : 1;
+		u32 cq_42_occ_int : 1;
+		u32 cq_43_occ_int : 1;
+		u32 cq_44_occ_int : 1;
+		u32 cq_45_occ_int : 1;
+		u32 cq_46_occ_int : 1;
+		u32 cq_47_occ_int : 1;
+		u32 cq_48_occ_int : 1;
+		u32 cq_49_occ_int : 1;
+		u32 cq_50_occ_int : 1;
+		u32 cq_51_occ_int : 1;
+		u32 cq_52_occ_int : 1;
+		u32 cq_53_occ_int : 1;
+		u32 cq_54_occ_int : 1;
+		u32 cq_55_occ_int : 1;
+		u32 cq_56_occ_int : 1;
+		u32 cq_57_occ_int : 1;
+		u32 cq_58_occ_int : 1;
+		u32 cq_59_occ_int : 1;
+		u32 cq_60_occ_int : 1;
+		u32 cq_61_occ_int : 1;
+		u32 cq_62_occ_int : 1;
+		u32 cq_63_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_95_64_OCC_INT_STS 0x448
+#define DLB_SYS_DIR_CQ_95_64_OCC_INT_STS_RST 0x0
+union dlb_sys_dir_cq_95_64_occ_int_sts {
+	struct {
+		u32 cq_64_occ_int : 1;
+		u32 cq_65_occ_int : 1;
+		u32 cq_66_occ_int : 1;
+		u32 cq_67_occ_int : 1;
+		u32 cq_68_occ_int : 1;
+		u32 cq_69_occ_int : 1;
+		u32 cq_70_occ_int : 1;
+		u32 cq_71_occ_int : 1;
+		u32 cq_72_occ_int : 1;
+		u32 cq_73_occ_int : 1;
+		u32 cq_74_occ_int : 1;
+		u32 cq_75_occ_int : 1;
+		u32 cq_76_occ_int : 1;
+		u32 cq_77_occ_int : 1;
+		u32 cq_78_occ_int : 1;
+		u32 cq_79_occ_int : 1;
+		u32 cq_80_occ_int : 1;
+		u32 cq_81_occ_int : 1;
+		u32 cq_82_occ_int : 1;
+		u32 cq_83_occ_int : 1;
+		u32 cq_84_occ_int : 1;
+		u32 cq_85_occ_int : 1;
+		u32 cq_86_occ_int : 1;
+		u32 cq_87_occ_int : 1;
+		u32 cq_88_occ_int : 1;
+		u32 cq_89_occ_int : 1;
+		u32 cq_90_occ_int : 1;
+		u32 cq_91_occ_int : 1;
+		u32 cq_92_occ_int : 1;
+		u32 cq_93_occ_int : 1;
+		u32 cq_94_occ_int : 1;
+		u32 cq_95_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_DIR_CQ_127_96_OCC_INT_STS 0x44c
+#define DLB_SYS_DIR_CQ_127_96_OCC_INT_STS_RST 0x0
+union dlb_sys_dir_cq_127_96_occ_int_sts {
+	struct {
+		u32 cq_96_occ_int : 1;
+		u32 cq_97_occ_int : 1;
+		u32 cq_98_occ_int : 1;
+		u32 cq_99_occ_int : 1;
+		u32 cq_100_occ_int : 1;
+		u32 cq_101_occ_int : 1;
+		u32 cq_102_occ_int : 1;
+		u32 cq_103_occ_int : 1;
+		u32 cq_104_occ_int : 1;
+		u32 cq_105_occ_int : 1;
+		u32 cq_106_occ_int : 1;
+		u32 cq_107_occ_int : 1;
+		u32 cq_108_occ_int : 1;
+		u32 cq_109_occ_int : 1;
+		u32 cq_110_occ_int : 1;
+		u32 cq_111_occ_int : 1;
+		u32 cq_112_occ_int : 1;
+		u32 cq_113_occ_int : 1;
+		u32 cq_114_occ_int : 1;
+		u32 cq_115_occ_int : 1;
+		u32 cq_116_occ_int : 1;
+		u32 cq_117_occ_int : 1;
+		u32 cq_118_occ_int : 1;
+		u32 cq_119_occ_int : 1;
+		u32 cq_120_occ_int : 1;
+		u32 cq_121_occ_int : 1;
+		u32 cq_122_occ_int : 1;
+		u32 cq_123_occ_int : 1;
+		u32 cq_124_occ_int : 1;
+		u32 cq_125_occ_int : 1;
+		u32 cq_126_occ_int : 1;
+		u32 cq_127_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ_31_0_OCC_INT_STS 0x460
+#define DLB_SYS_LDB_CQ_31_0_OCC_INT_STS_RST 0x0
+union dlb_sys_ldb_cq_31_0_occ_int_sts {
+	struct {
+		u32 cq_0_occ_int : 1;
+		u32 cq_1_occ_int : 1;
+		u32 cq_2_occ_int : 1;
+		u32 cq_3_occ_int : 1;
+		u32 cq_4_occ_int : 1;
+		u32 cq_5_occ_int : 1;
+		u32 cq_6_occ_int : 1;
+		u32 cq_7_occ_int : 1;
+		u32 cq_8_occ_int : 1;
+		u32 cq_9_occ_int : 1;
+		u32 cq_10_occ_int : 1;
+		u32 cq_11_occ_int : 1;
+		u32 cq_12_occ_int : 1;
+		u32 cq_13_occ_int : 1;
+		u32 cq_14_occ_int : 1;
+		u32 cq_15_occ_int : 1;
+		u32 cq_16_occ_int : 1;
+		u32 cq_17_occ_int : 1;
+		u32 cq_18_occ_int : 1;
+		u32 cq_19_occ_int : 1;
+		u32 cq_20_occ_int : 1;
+		u32 cq_21_occ_int : 1;
+		u32 cq_22_occ_int : 1;
+		u32 cq_23_occ_int : 1;
+		u32 cq_24_occ_int : 1;
+		u32 cq_25_occ_int : 1;
+		u32 cq_26_occ_int : 1;
+		u32 cq_27_occ_int : 1;
+		u32 cq_28_occ_int : 1;
+		u32 cq_29_occ_int : 1;
+		u32 cq_30_occ_int : 1;
+		u32 cq_31_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_LDB_CQ_63_32_OCC_INT_STS 0x464
+#define DLB_SYS_LDB_CQ_63_32_OCC_INT_STS_RST 0x0
+union dlb_sys_ldb_cq_63_32_occ_int_sts {
+	struct {
+		u32 cq_32_occ_int : 1;
+		u32 cq_33_occ_int : 1;
+		u32 cq_34_occ_int : 1;
+		u32 cq_35_occ_int : 1;
+		u32 cq_36_occ_int : 1;
+		u32 cq_37_occ_int : 1;
+		u32 cq_38_occ_int : 1;
+		u32 cq_39_occ_int : 1;
+		u32 cq_40_occ_int : 1;
+		u32 cq_41_occ_int : 1;
+		u32 cq_42_occ_int : 1;
+		u32 cq_43_occ_int : 1;
+		u32 cq_44_occ_int : 1;
+		u32 cq_45_occ_int : 1;
+		u32 cq_46_occ_int : 1;
+		u32 cq_47_occ_int : 1;
+		u32 cq_48_occ_int : 1;
+		u32 cq_49_occ_int : 1;
+		u32 cq_50_occ_int : 1;
+		u32 cq_51_occ_int : 1;
+		u32 cq_52_occ_int : 1;
+		u32 cq_53_occ_int : 1;
+		u32 cq_54_occ_int : 1;
+		u32 cq_55_occ_int : 1;
+		u32 cq_56_occ_int : 1;
+		u32 cq_57_occ_int : 1;
+		u32 cq_58_occ_int : 1;
+		u32 cq_59_occ_int : 1;
+		u32 cq_60_occ_int : 1;
+		u32 cq_61_occ_int : 1;
+		u32 cq_62_occ_int : 1;
+		u32 cq_63_occ_int : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_SYS_ALARM_HW_SYND 0x50c
+#define DLB_SYS_ALARM_HW_SYND_RST 0x0
+union dlb_sys_alarm_hw_synd {
+	struct {
+		u32 syndrome : 8;
+		u32 rtype : 2;
+		u32 rsvd0 : 2;
+		u32 from_dmv : 1;
+		u32 is_ldb : 1;
+		u32 cls : 2;
+		u32 aid : 6;
+		u32 unit : 4;
+		u32 source : 4;
+		u32 more : 1;
+		u32 valid : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNT_CTRL(x) \
+	(0x20000000 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNT_CTRL_RST 0x0
+union dlb_lsp_cq_ldb_tot_sch_cnt_ctrl {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_DSBL(x) \
+	(0x20000124 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_DSBL_RST 0x1
+union dlb_lsp_cq_ldb_dsbl {
+	struct {
+		u32 disabled : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNTH(x) \
+	(0x20000120 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNTH_RST 0x0
+union dlb_lsp_cq_ldb_tot_sch_cnth {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNTL(x) \
+	(0x2000011c + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_TOT_SCH_CNTL_RST 0x0
+union dlb_lsp_cq_ldb_tot_sch_cntl {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_TKN_DEPTH_SEL(x) \
+	(0x20000118 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_TKN_DEPTH_SEL_RST 0x0
+union dlb_lsp_cq_ldb_tkn_depth_sel {
+	struct {
+		u32 token_depth_select : 4;
+		u32 ignore_depth : 1;
+		u32 enab_shallow_cq : 1;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_TKN_CNT(x) \
+	(0x20000114 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_TKN_CNT_RST 0x0
+union dlb_lsp_cq_ldb_tkn_cnt {
+	struct {
+		u32 token_count : 11;
+		u32 rsvd0 : 21;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_INFL_LIM(x) \
+	(0x20000110 + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_INFL_LIM_RST 0x0
+union dlb_lsp_cq_ldb_infl_lim {
+	struct {
+		u32 limit : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_LDB_INFL_CNT(x) \
+	(0x2000010c + (x) * 0x1000)
+#define DLB_LSP_CQ_LDB_INFL_CNT_RST 0x0
+union dlb_lsp_cq_ldb_infl_cnt {
+	struct {
+		u32 count : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ2QID(x, y) \
+	(0x20000104 + (x) * 0x1000 + (y) * 0x4)
+#define DLB_LSP_CQ2QID_RST 0x0
+union dlb_lsp_cq2qid {
+	struct {
+		u32 qid_p0 : 7;
+		u32 rsvd3 : 1;
+		u32 qid_p1 : 7;
+		u32 rsvd2 : 1;
+		u32 qid_p2 : 7;
+		u32 rsvd1 : 1;
+		u32 qid_p3 : 7;
+		u32 rsvd0 : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ2PRIOV(x) \
+	(0x20000100 + (x) * 0x1000)
+#define DLB_LSP_CQ2PRIOV_RST 0x0
+union dlb_lsp_cq2priov {
+	struct {
+		u32 prio : 24;
+		u32 v : 8;
+	} field;
+	u32 val;
+};
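+
+/*
+ * Usage sketch (illustrative, not part of the generated register map):
+ * each register pairs an offset macro with a union, so the driver can do
+ * type-safe read-modify-write sequences. DLB_CSR_RD() appears later in
+ * this series; DLB_CSR_WR() is assumed to be its write-side counterpart.
+ *
+ *	union dlb_lsp_cq2priov r0;
+ *
+ *	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ2PRIOV(port_id));
+ *	r0.field.v |= 1 << slot;
+ *	DLB_CSR_WR(hw, DLB_LSP_CQ2PRIOV(port_id), r0.val);
+ */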
+
+#define DLB_LSP_CQ_DIR_DSBL(x) \
+	(0x20000310 + (x) * 0x1000)
+#define DLB_LSP_CQ_DIR_DSBL_RST 0x1
+union dlb_lsp_cq_dir_dsbl {
+	struct {
+		u32 disabled : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(x) \
+	(0x2000030c + (x) * 0x1000)
+#define DLB_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST 0x0
+union dlb_lsp_cq_dir_tkn_depth_sel_dsi {
+	struct {
+		u32 token_depth_select : 4;
+		u32 disable_wb_opt : 1;
+		u32 ignore_depth : 1;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_DIR_TOT_SCH_CNTH(x) \
+	(0x20000308 + (x) * 0x1000)
+#define DLB_LSP_CQ_DIR_TOT_SCH_CNTH_RST 0x0
+union dlb_lsp_cq_dir_tot_sch_cnth {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_DIR_TOT_SCH_CNTL(x) \
+	(0x20000304 + (x) * 0x1000)
+#define DLB_LSP_CQ_DIR_TOT_SCH_CNTL_RST 0x0
+union dlb_lsp_cq_dir_tot_sch_cntl {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CQ_DIR_TKN_CNT(x) \
+	(0x20000300 + (x) * 0x1000)
+#define DLB_LSP_CQ_DIR_TKN_CNT_RST 0x0
+union dlb_lsp_cq_dir_tkn_cnt {
+	struct {
+		u32 count : 11;
+		u32 rsvd0 : 21;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_QID2CQIDX(x, y) \
+	(0x20000400 + (x) * 0x1000 + (y) * 0x4)
+#define DLB_LSP_QID_LDB_QID2CQIDX_RST 0x0
+union dlb_lsp_qid_ldb_qid2cqidx {
+	struct {
+		u32 cq_p0 : 8;
+		u32 cq_p1 : 8;
+		u32 cq_p2 : 8;
+		u32 cq_p3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_QID2CQIDX2(x, y) \
+	(0x20000500 + (x) * 0x1000 + (y) * 0x4)
+#define DLB_LSP_QID_LDB_QID2CQIDX2_RST 0x0
+union dlb_lsp_qid_ldb_qid2cqidx2 {
+	struct {
+		u32 cq_p0 : 8;
+		u32 cq_p1 : 8;
+		u32 cq_p2 : 8;
+		u32 cq_p3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_ATQ_ENQUEUE_CNT(x) \
+	(0x2000066c + (x) * 0x1000)
+#define DLB_LSP_QID_ATQ_ENQUEUE_CNT_RST 0x0
+union dlb_lsp_qid_atq_enqueue_cnt {
+	struct {
+		u32 count : 15;
+		u32 rsvd0 : 17;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_INFL_LIM(x) \
+	(0x2000064c + (x) * 0x1000)
+#define DLB_LSP_QID_LDB_INFL_LIM_RST 0x0
+union dlb_lsp_qid_ldb_infl_lim {
+	struct {
+		u32 limit : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_INFL_CNT(x) \
+	(0x2000062c + (x) * 0x1000)
+#define DLB_LSP_QID_LDB_INFL_CNT_RST 0x0
+union dlb_lsp_qid_ldb_infl_cnt {
+	struct {
+		u32 count : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_AQED_ACTIVE_LIM(x) \
+	(0x20000628 + (x) * 0x1000)
+#define DLB_LSP_QID_AQED_ACTIVE_LIM_RST 0x0
+union dlb_lsp_qid_aqed_active_lim {
+	struct {
+		u32 limit : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_AQED_ACTIVE_CNT(x) \
+	(0x20000624 + (x) * 0x1000)
+#define DLB_LSP_QID_AQED_ACTIVE_CNT_RST 0x0
+union dlb_lsp_qid_aqed_active_cnt {
+	struct {
+		u32 count : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_ENQUEUE_CNT(x) \
+	(0x20000604 + (x) * 0x1000)
+#define DLB_LSP_QID_LDB_ENQUEUE_CNT_RST 0x0
+union dlb_lsp_qid_ldb_enqueue_cnt {
+	struct {
+		u32 count : 15;
+		u32 rsvd0 : 17;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_LDB_REPLAY_CNT(x) \
+	(0x20000600 + (x) * 0x1000)
+#define DLB_LSP_QID_LDB_REPLAY_CNT_RST 0x0
+union dlb_lsp_qid_ldb_replay_cnt {
+	struct {
+		u32 count : 15;
+		u32 rsvd0 : 17;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_QID_DIR_ENQUEUE_CNT(x) \
+	(0x20000700 + (x) * 0x1000)
+#define DLB_LSP_QID_DIR_ENQUEUE_CNT_RST 0x0
+union dlb_lsp_qid_dir_enqueue_cnt {
+	struct {
+		u32 count : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CTRL_CONFIG_0 0x2800002c
+#define DLB_LSP_CTRL_CONFIG_0_RST 0x12cc
+union dlb_lsp_ctrl_config_0 {
+	struct {
+		u32 atm_cq_qid_priority_prot : 1;
+		u32 ldb_arb_ignore_empty : 1;
+		u32 ldb_arb_mode : 2;
+		u32 ldb_arb_threshold : 18;
+		u32 cfg_cq_sla_upd_always : 1;
+		u32 cfg_cq_wcn_upd_always : 1;
+		u32 spare : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1 0x28000028
+#define DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1_RST 0x0
+union dlb_lsp_cfg_arb_weight_atm_nalb_qid_1 {
+	struct {
+		u32 slot4_weight : 8;
+		u32 slot5_weight : 8;
+		u32 slot6_weight : 8;
+		u32 slot7_weight : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0 0x28000024
+#define DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0_RST 0x0
+union dlb_lsp_cfg_arb_weight_atm_nalb_qid_0 {
+	struct {
+		u32 slot0_weight : 8;
+		u32 slot1_weight : 8;
+		u32 slot2_weight : 8;
+		u32 slot3_weight : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_1 0x28000020
+#define DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_1_RST 0x0
+union dlb_lsp_cfg_arb_weight_ldb_qid_1 {
+	struct {
+		u32 slot4_weight : 8;
+		u32 slot5_weight : 8;
+		u32 slot6_weight : 8;
+		u32 slot7_weight : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_0 0x2800001c
+#define DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_0_RST 0x0
+union dlb_lsp_cfg_arb_weight_ldb_qid_0 {
+	struct {
+		u32 slot0_weight : 8;
+		u32 slot1_weight : 8;
+		u32 slot2_weight : 8;
+		u32 slot3_weight : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_LDB_SCHED_CTRL 0x28100000
+#define DLB_LSP_LDB_SCHED_CTRL_RST 0x0
+union dlb_lsp_ldb_sched_ctrl {
+	struct {
+		u32 cq : 8;
+		u32 qidix : 3;
+		u32 value : 1;
+		u32 nalb_haswork_v : 1;
+		u32 rlist_haswork_v : 1;
+		u32 slist_haswork_v : 1;
+		u32 inflight_ok_v : 1;
+		u32 aqed_nfull_v : 1;
+		u32 spare0 : 15;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_DIR_SCH_CNT_H 0x2820000c
+#define DLB_LSP_DIR_SCH_CNT_H_RST 0x0
+union dlb_lsp_dir_sch_cnt_h {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_DIR_SCH_CNT_L 0x28200008
+#define DLB_LSP_DIR_SCH_CNT_L_RST 0x0
+union dlb_lsp_dir_sch_cnt_l {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_LDB_SCH_CNT_H 0x28200004
+#define DLB_LSP_LDB_SCH_CNT_H_RST 0x0
+union dlb_lsp_ldb_sch_cnt_h {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_LSP_LDB_SCH_CNT_L 0x28200000
+#define DLB_LSP_LDB_SCH_CNT_L_RST 0x0
+union dlb_lsp_ldb_sch_cnt_l {
+	struct {
+		u32 count : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_DP_DIR_CSR_CTRL 0x38000018
+#define DLB_DP_DIR_CSR_CTRL_RST 0xc0000000
+union dlb_dp_dir_csr_ctrl {
+	struct {
+		u32 cfg_int_dis : 1;
+		u32 cfg_int_dis_sbe : 1;
+		u32 cfg_int_dis_mbe : 1;
+		u32 spare0 : 27;
+		u32 cfg_vasr_dis : 1;
+		u32 cfg_int_dis_synd : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1 0x38000014
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1_RST 0xfffefdfc
+union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_1 {
+	struct {
+		u32 pri4 : 8;
+		u32 pri5 : 8;
+		u32 pri6 : 8;
+		u32 pri7 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0 0x38000010
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfbfaf9f8
+union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 0x3800000c
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0xfffefdfc
+union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_1 {
+	struct {
+		u32 pri4 : 8;
+		u32 pri5 : 8;
+		u32 pri6 : 8;
+		u32 pri7 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 0x38000008
+#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfbfaf9f8
+union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1 0x6800001c
+#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1_RST 0xfffefdfc
+union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_1 {
+	struct {
+		u32 pri4 : 8;
+		u32 pri5 : 8;
+		u32 pri6 : 8;
+		u32 pri7 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0 0x68000018
+#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfbfaf9f8
+union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1 0x68000014
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0xfffefdfc
+union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_1 {
+	struct {
+		u32 pri4 : 8;
+		u32 pri5 : 8;
+		u32 pri6 : 8;
+		u32 pri7 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0 0x68000010
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfbfaf9f8
+union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 0x6800000c
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0xfffefdfc
+union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_1 {
+	struct {
+		u32 pri4 : 8;
+		u32 pri5 : 8;
+		u32 pri6 : 8;
+		u32 pri7 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 0x68000008
+#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfbfaf9f8
+union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX(x, y) \
+	(0x70000000 + (x) * 0x1000 + (y) * 0x4)
+#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX_RST 0x0
+union dlb_atm_pipe_qid_ldb_qid2cqidx {
+	struct {
+		u32 cq_p0 : 8;
+		u32 cq_p1 : 8;
+		u32 cq_p2 : 8;
+		u32 cq_p3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN 0x7800000c
+#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
+union dlb_atm_pipe_cfg_ctrl_arb_weights_sched_bin {
+	struct {
+		u32 bin0 : 8;
+		u32 bin1 : 8;
+		u32 bin2 : 8;
+		u32 bin3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN 0x78000008
+#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
+union dlb_atm_pipe_ctrl_arb_weights_rdy_bin {
+	struct {
+		u32 bin0 : 8;
+		u32 bin1 : 8;
+		u32 bin2 : 8;
+		u32 bin3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_QID_FID_LIM(x) \
+	(0x80000014 + (x) * 0x1000)
+#define DLB_AQED_PIPE_QID_FID_LIM_RST 0x7ff
+union dlb_aqed_pipe_qid_fid_lim {
+	struct {
+		u32 qid_fid_limit : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_FL_POP_PTR(x) \
+	(0x80000010 + (x) * 0x1000)
+#define DLB_AQED_PIPE_FL_POP_PTR_RST 0x0
+union dlb_aqed_pipe_fl_pop_ptr {
+	struct {
+		u32 pop_ptr : 11;
+		u32 generation : 1;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_FL_PUSH_PTR(x) \
+	(0x8000000c + (x) * 0x1000)
+#define DLB_AQED_PIPE_FL_PUSH_PTR_RST 0x0
+union dlb_aqed_pipe_fl_push_ptr {
+	struct {
+		u32 push_ptr : 11;
+		u32 generation : 1;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_FL_BASE(x) \
+	(0x80000008 + (x) * 0x1000)
+#define DLB_AQED_PIPE_FL_BASE_RST 0x0
+union dlb_aqed_pipe_fl_base {
+	struct {
+		u32 base : 11;
+		u32 rsvd0 : 21;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_FL_LIM(x) \
+	(0x80000004 + (x) * 0x1000)
+#define DLB_AQED_PIPE_FL_LIM_RST 0x800
+union dlb_aqed_pipe_fl_lim {
+	struct {
+		u32 limit : 11;
+		u32 freelist_disable : 1;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0 0x88000008
+#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfffe
+union dlb_aqed_pipe_cfg_ctrl_arb_weights_tqpri_atm_0 {
+	struct {
+		u32 pri0 : 8;
+		u32 pri1 : 8;
+		u32 pri2 : 8;
+		u32 pri3 : 8;
+	} field;
+	u32 val;
+};
+
+#define DLB_RO_PIPE_QID2GRPSLT(x) \
+	(0x90000000 + (x) * 0x1000)
+#define DLB_RO_PIPE_QID2GRPSLT_RST 0x0
+union dlb_ro_pipe_qid2grpslt {
+	struct {
+		u32 slot : 5;
+		u32 rsvd1 : 3;
+		u32 group : 2;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_RO_PIPE_GRP_SN_MODE 0x98000008
+#define DLB_RO_PIPE_GRP_SN_MODE_RST 0x0
+union dlb_ro_pipe_grp_sn_mode {
+	struct {
+		u32 sn_mode_0 : 3;
+		u32 reserved0 : 5;
+		u32 sn_mode_1 : 3;
+		u32 reserved1 : 5;
+		u32 sn_mode_2 : 3;
+		u32 reserved2 : 5;
+		u32 sn_mode_3 : 3;
+		u32 reserved3 : 5;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN(x) \
+	(0xa000003c + (x) * 0x1000)
+#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN_RST 0x1
+union dlb_chp_cfg_dir_pp_sw_alarm_en {
+	struct {
+		u32 alarm_enable : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_WD_ENB(x) \
+	(0xa0000038 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_WD_ENB_RST 0x0
+union dlb_chp_dir_cq_wd_enb {
+	struct {
+		u32 wd_enable : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_LDB_PP2POOL(x) \
+	(0xa0000034 + (x) * 0x1000)
+#define DLB_CHP_DIR_LDB_PP2POOL_RST 0x0
+union dlb_chp_dir_ldb_pp2pool {
+	struct {
+		u32 pool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_DIR_PP2POOL(x) \
+	(0xa0000030 + (x) * 0x1000)
+#define DLB_CHP_DIR_DIR_PP2POOL_RST 0x0
+union dlb_chp_dir_dir_pp2pool {
+	struct {
+		u32 pool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_LDB_CRD_CNT(x) \
+	(0xa000002c + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_LDB_CRD_CNT_RST 0x0
+union dlb_chp_dir_pp_ldb_crd_cnt {
+	struct {
+		u32 count : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_DIR_CRD_CNT(x) \
+	(0xa0000028 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_DIR_CRD_CNT_RST 0x0
+union dlb_chp_dir_pp_dir_crd_cnt {
+	struct {
+		u32 count : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_TMR_THRESHOLD(x) \
+	(0xa0000024 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_TMR_THRESHOLD_RST 0x0
+union dlb_chp_dir_cq_tmr_threshold {
+	struct {
+		u32 timer_thrsh : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INT_ENB(x) \
+	(0xa0000020 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_INT_ENB_RST 0x0
+union dlb_chp_dir_cq_int_enb {
+	struct {
+		u32 en_tim : 1;
+		u32 en_depth : 1;
+		u32 rsvd0 : 30;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
+	(0xa000001c + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
+union dlb_chp_dir_cq_int_depth_thrsh {
+	struct {
+		u32 depth_threshold : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
+	(0xa0000018 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
+union dlb_chp_dir_cq_tkn_depth_sel {
+	struct {
+		u32 token_depth_select : 4;
+		u32 rsvd0 : 28;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT(x) \
+	(0xa0000014 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT_RST 0x1
+union dlb_chp_dir_pp_ldb_min_crd_qnt {
+	struct {
+		u32 quanta : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT(x) \
+	(0xa0000010 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT_RST 0x1
+union dlb_chp_dir_pp_dir_min_crd_qnt {
+	struct {
+		u32 quanta : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_LDB_CRD_LWM(x) \
+	(0xa000000c + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_LDB_CRD_LWM_RST 0x0
+union dlb_chp_dir_pp_ldb_crd_lwm {
+	struct {
+		u32 lwm : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_LDB_CRD_HWM(x) \
+	(0xa0000008 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_LDB_CRD_HWM_RST 0x0
+union dlb_chp_dir_pp_ldb_crd_hwm {
+	struct {
+		u32 hwm : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_DIR_CRD_LWM(x) \
+	(0xa0000004 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_DIR_CRD_LWM_RST 0x0
+union dlb_chp_dir_pp_dir_crd_lwm {
+	struct {
+		u32 lwm : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_DIR_CRD_HWM(x) \
+	(0xa0000000 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_DIR_CRD_HWM_RST 0x0
+union dlb_chp_dir_pp_dir_crd_hwm {
+	struct {
+		u32 hwm : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN(x) \
+	(0xa0000148 + (x) * 0x1000)
+#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN_RST 0x1
+union dlb_chp_cfg_ldb_pp_sw_alarm_en {
+	struct {
+		u32 alarm_enable : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_WD_ENB(x) \
+	(0xa0000144 + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_WD_ENB_RST 0x0
+union dlb_chp_ldb_cq_wd_enb {
+	struct {
+		u32 wd_enable : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_SN_CHK_ENBL(x) \
+	(0xa0000140 + (x) * 0x1000)
+#define DLB_CHP_SN_CHK_ENBL_RST 0x0
+union dlb_chp_sn_chk_enbl {
+	struct {
+		u32 en : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_HIST_LIST_BASE(x) \
+	(0xa000013c + (x) * 0x1000)
+#define DLB_CHP_HIST_LIST_BASE_RST 0x0
+union dlb_chp_hist_list_base {
+	struct {
+		u32 base : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_HIST_LIST_LIM(x) \
+	(0xa0000138 + (x) * 0x1000)
+#define DLB_CHP_HIST_LIST_LIM_RST 0x0
+union dlb_chp_hist_list_lim {
+	struct {
+		u32 limit : 13;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_LDB_PP2POOL(x) \
+	(0xa0000134 + (x) * 0x1000)
+#define DLB_CHP_LDB_LDB_PP2POOL_RST 0x0
+union dlb_chp_ldb_ldb_pp2pool {
+	struct {
+		u32 pool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_DIR_PP2POOL(x) \
+	(0xa0000130 + (x) * 0x1000)
+#define DLB_CHP_LDB_DIR_PP2POOL_RST 0x0
+union dlb_chp_ldb_dir_pp2pool {
+	struct {
+		u32 pool : 6;
+		u32 rsvd0 : 26;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_LDB_CRD_CNT(x) \
+	(0xa000012c + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_LDB_CRD_CNT_RST 0x0
+union dlb_chp_ldb_pp_ldb_crd_cnt {
+	struct {
+		u32 count : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_DIR_CRD_CNT(x) \
+	(0xa0000128 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_DIR_CRD_CNT_RST 0x0
+union dlb_chp_ldb_pp_dir_crd_cnt {
+	struct {
+		u32 count : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_TMR_THRESHOLD(x) \
+	(0xa0000124 + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_TMR_THRESHOLD_RST 0x0
+union dlb_chp_ldb_cq_tmr_threshold {
+	struct {
+		u32 thrsh : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_INT_ENB(x) \
+	(0xa0000120 + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_INT_ENB_RST 0x0
+union dlb_chp_ldb_cq_int_enb {
+	struct {
+		u32 en_tim : 1;
+		u32 en_depth : 1;
+		u32 rsvd0 : 30;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
+	(0xa000011c + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
+union dlb_chp_ldb_cq_int_depth_thrsh {
+	struct {
+		u32 depth_threshold : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
+	(0xa0000118 + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
+union dlb_chp_ldb_cq_tkn_depth_sel {
+	struct {
+		u32 token_depth_select : 4;
+		u32 rsvd0 : 28;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT(x) \
+	(0xa0000114 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT_RST 0x1
+union dlb_chp_ldb_pp_ldb_min_crd_qnt {
+	struct {
+		u32 quanta : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT(x) \
+	(0xa0000110 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT_RST 0x1
+union dlb_chp_ldb_pp_dir_min_crd_qnt {
+	struct {
+		u32 quanta : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_LDB_CRD_LWM(x) \
+	(0xa000010c + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_LDB_CRD_LWM_RST 0x0
+union dlb_chp_ldb_pp_ldb_crd_lwm {
+	struct {
+		u32 lwm : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_LDB_CRD_HWM(x) \
+	(0xa0000108 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_LDB_CRD_HWM_RST 0x0
+union dlb_chp_ldb_pp_ldb_crd_hwm {
+	struct {
+		u32 hwm : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_DIR_CRD_LWM(x) \
+	(0xa0000104 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_DIR_CRD_LWM_RST 0x0
+union dlb_chp_ldb_pp_dir_crd_lwm {
+	struct {
+		u32 lwm : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_DIR_CRD_HWM(x) \
+	(0xa0000100 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_DIR_CRD_HWM_RST 0x0
+union dlb_chp_ldb_pp_dir_crd_hwm {
+	struct {
+		u32 hwm : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_DEPTH(x) \
+	(0xa0000218 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_DEPTH_RST 0x0
+union dlb_chp_dir_cq_depth {
+	struct {
+		u32 cq_depth : 11;
+		u32 rsvd0 : 21;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_WPTR(x) \
+	(0xa0000214 + (x) * 0x1000)
+#define DLB_CHP_DIR_CQ_WPTR_RST 0x0
+union dlb_chp_dir_cq_wptr {
+	struct {
+		u32 write_pointer : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_LDB_PUSH_PTR(x) \
+	(0xa0000210 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_LDB_PUSH_PTR_RST 0x0
+union dlb_chp_dir_pp_ldb_push_ptr {
+	struct {
+		u32 push_pointer : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_DIR_PUSH_PTR(x) \
+	(0xa000020c + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_DIR_PUSH_PTR_RST 0x0
+union dlb_chp_dir_pp_dir_push_ptr {
+	struct {
+		u32 push_pointer : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_STATE_RESET(x) \
+	(0xa0000204 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_STATE_RESET_RST 0x0
+union dlb_chp_dir_pp_state_reset {
+	struct {
+		u32 rsvd1 : 7;
+		u32 dir_type : 1;
+		u32 rsvd0 : 23;
+		u32 reset_pp_state : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_PP_CRD_REQ_STATE(x) \
+	(0xa0000200 + (x) * 0x1000)
+#define DLB_CHP_DIR_PP_CRD_REQ_STATE_RST 0x0
+union dlb_chp_dir_pp_crd_req_state {
+	struct {
+		u32 dir_crd_req_active_valid : 1;
+		u32 dir_crd_req_active_check : 1;
+		u32 dir_crd_req_active_busy : 1;
+		u32 rsvd1 : 1;
+		u32 ldb_crd_req_active_valid : 1;
+		u32 ldb_crd_req_active_check : 1;
+		u32 ldb_crd_req_active_busy : 1;
+		u32 rsvd0 : 1;
+		u32 no_pp_credit_update : 1;
+		u32 crd_req_state : 23;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_DEPTH(x) \
+	(0xa0000320 + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_DEPTH_RST 0x0
+union dlb_chp_ldb_cq_depth {
+	struct {
+		u32 depth : 11;
+		u32 reserved : 2;
+		u32 rsvd0 : 19;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_WPTR(x) \
+	(0xa000031c + (x) * 0x1000)
+#define DLB_CHP_LDB_CQ_WPTR_RST 0x0
+union dlb_chp_ldb_cq_wptr {
+	struct {
+		u32 write_pointer : 10;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_LDB_PUSH_PTR(x) \
+	(0xa0000318 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_LDB_PUSH_PTR_RST 0x0
+union dlb_chp_ldb_pp_ldb_push_ptr {
+	struct {
+		u32 push_pointer : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_DIR_PUSH_PTR(x) \
+	(0xa0000314 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_DIR_PUSH_PTR_RST 0x0
+union dlb_chp_ldb_pp_dir_push_ptr {
+	struct {
+		u32 push_pointer : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_HIST_LIST_POP_PTR(x) \
+	(0xa000030c + (x) * 0x1000)
+#define DLB_CHP_HIST_LIST_POP_PTR_RST 0x0
+union dlb_chp_hist_list_pop_ptr {
+	struct {
+		u32 pop_ptr : 13;
+		u32 generation : 1;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_HIST_LIST_PUSH_PTR(x) \
+	(0xa0000308 + (x) * 0x1000)
+#define DLB_CHP_HIST_LIST_PUSH_PTR_RST 0x0
+union dlb_chp_hist_list_push_ptr {
+	struct {
+		u32 push_ptr : 13;
+		u32 generation : 1;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_STATE_RESET(x) \
+	(0xa0000304 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_STATE_RESET_RST 0x0
+union dlb_chp_ldb_pp_state_reset {
+	struct {
+		u32 rsvd1 : 7;
+		u32 dir_type : 1;
+		u32 rsvd0 : 23;
+		u32 reset_pp_state : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_PP_CRD_REQ_STATE(x) \
+	(0xa0000300 + (x) * 0x1000)
+#define DLB_CHP_LDB_PP_CRD_REQ_STATE_RST 0x0
+union dlb_chp_ldb_pp_crd_req_state {
+	struct {
+		u32 dir_crd_req_active_valid : 1;
+		u32 dir_crd_req_active_check : 1;
+		u32 dir_crd_req_active_busy : 1;
+		u32 rsvd1 : 1;
+		u32 ldb_crd_req_active_valid : 1;
+		u32 ldb_crd_req_active_check : 1;
+		u32 ldb_crd_req_active_busy : 1;
+		u32 rsvd0 : 1;
+		u32 no_pp_credit_update : 1;
+		u32 crd_req_state : 23;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_ORD_QID_SN(x) \
+	(0xa0000408 + (x) * 0x1000)
+#define DLB_CHP_ORD_QID_SN_RST 0x0
+union dlb_chp_ord_qid_sn {
+	struct {
+		u32 sn : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_ORD_QID_SN_MAP(x) \
+	(0xa0000404 + (x) * 0x1000)
+#define DLB_CHP_ORD_QID_SN_MAP_RST 0x0
+union dlb_chp_ord_qid_sn_map {
+	struct {
+		u32 mode : 3;
+		u32 slot : 5;
+		u32 grp : 2;
+		u32 rsvd0 : 22;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_POOL_CRD_CNT(x) \
+	(0xa000050c + (x) * 0x1000)
+#define DLB_CHP_LDB_POOL_CRD_CNT_RST 0x0
+union dlb_chp_ldb_pool_crd_cnt {
+	struct {
+		u32 count : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_QED_FL_BASE(x) \
+	(0xa0000508 + (x) * 0x1000)
+#define DLB_CHP_QED_FL_BASE_RST 0x0
+union dlb_chp_qed_fl_base {
+	struct {
+		u32 base : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_QED_FL_LIM(x) \
+	(0xa0000504 + (x) * 0x1000)
+#define DLB_CHP_QED_FL_LIM_RST 0x8000
+union dlb_chp_qed_fl_lim {
+	struct {
+		u32 limit : 14;
+		u32 rsvd1 : 1;
+		u32 freelist_disable : 1;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_POOL_CRD_LIM(x) \
+	(0xa0000500 + (x) * 0x1000)
+#define DLB_CHP_LDB_POOL_CRD_LIM_RST 0x0
+union dlb_chp_ldb_pool_crd_lim {
+	struct {
+		u32 limit : 16;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_QED_FL_POP_PTR(x) \
+	(0xa0000604 + (x) * 0x1000)
+#define DLB_CHP_QED_FL_POP_PTR_RST 0x0
+union dlb_chp_qed_fl_pop_ptr {
+	struct {
+		u32 pop_ptr : 14;
+		u32 reserved0 : 1;
+		u32 generation : 1;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_QED_FL_PUSH_PTR(x) \
+	(0xa0000600 + (x) * 0x1000)
+#define DLB_CHP_QED_FL_PUSH_PTR_RST 0x0
+union dlb_chp_qed_fl_push_ptr {
+	struct {
+		u32 push_ptr : 14;
+		u32 reserved0 : 1;
+		u32 generation : 1;
+		u32 rsvd0 : 16;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_POOL_CRD_CNT(x) \
+	(0xa000070c + (x) * 0x1000)
+#define DLB_CHP_DIR_POOL_CRD_CNT_RST 0x0
+union dlb_chp_dir_pool_crd_cnt {
+	struct {
+		u32 count : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DQED_FL_BASE(x) \
+	(0xa0000708 + (x) * 0x1000)
+#define DLB_CHP_DQED_FL_BASE_RST 0x0
+union dlb_chp_dqed_fl_base {
+	struct {
+		u32 base : 12;
+		u32 rsvd0 : 20;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DQED_FL_LIM(x) \
+	(0xa0000704 + (x) * 0x1000)
+#define DLB_CHP_DQED_FL_LIM_RST 0x2000
+union dlb_chp_dqed_fl_lim {
+	struct {
+		u32 limit : 12;
+		u32 rsvd1 : 1;
+		u32 freelist_disable : 1;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_POOL_CRD_LIM(x) \
+	(0xa0000700 + (x) * 0x1000)
+#define DLB_CHP_DIR_POOL_CRD_LIM_RST 0x0
+union dlb_chp_dir_pool_crd_lim {
+	struct {
+		u32 limit : 14;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DQED_FL_POP_PTR(x) \
+	(0xa0000804 + (x) * 0x1000)
+#define DLB_CHP_DQED_FL_POP_PTR_RST 0x0
+union dlb_chp_dqed_fl_pop_ptr {
+	struct {
+		u32 pop_ptr : 12;
+		u32 reserved0 : 1;
+		u32 generation : 1;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DQED_FL_PUSH_PTR(x) \
+	(0xa0000800 + (x) * 0x1000)
+#define DLB_CHP_DQED_FL_PUSH_PTR_RST 0x0
+union dlb_chp_dqed_fl_push_ptr {
+	struct {
+		u32 push_ptr : 12;
+		u32 reserved0 : 1;
+		u32 generation : 1;
+		u32 rsvd0 : 18;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_CTRL_DIAG_02 0xa8000154
+#define DLB_CHP_CTRL_DIAG_02_RST 0x0
+union dlb_chp_ctrl_diag_02 {
+	struct {
+		u32 control : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_CFG_CHP_CSR_CTRL 0xa8000130
+#define DLB_CHP_CFG_CHP_CSR_CTRL_RST 0xc0003fff
+#define DLB_CHP_CFG_EXCESS_TOKENS_SHIFT 12
+union dlb_chp_cfg_chp_csr_ctrl {
+	struct {
+		u32 int_inf_alarm_enable_0 : 1;
+		u32 int_inf_alarm_enable_1 : 1;
+		u32 int_inf_alarm_enable_2 : 1;
+		u32 int_inf_alarm_enable_3 : 1;
+		u32 int_inf_alarm_enable_4 : 1;
+		u32 int_inf_alarm_enable_5 : 1;
+		u32 int_inf_alarm_enable_6 : 1;
+		u32 int_inf_alarm_enable_7 : 1;
+		u32 int_inf_alarm_enable_8 : 1;
+		u32 int_inf_alarm_enable_9 : 1;
+		u32 int_inf_alarm_enable_10 : 1;
+		u32 int_inf_alarm_enable_11 : 1;
+		u32 int_inf_alarm_enable_12 : 1;
+		u32 int_cor_alarm_enable : 1;
+		u32 csr_control_spare : 14;
+		u32 cfg_vasr_dis : 1;
+		u32 counter_clear : 1;
+		u32 blk_cor_report : 1;
+		u32 blk_cor_synd : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_INTR_ARMED1 0xa8000068
+#define DLB_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
+union dlb_chp_ldb_cq_intr_armed1 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_LDB_CQ_INTR_ARMED0 0xa8000064
+#define DLB_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
+union dlb_chp_ldb_cq_intr_armed0 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INTR_ARMED3 0xa8000024
+#define DLB_CHP_DIR_CQ_INTR_ARMED3_RST 0x0
+union dlb_chp_dir_cq_intr_armed3 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INTR_ARMED2 0xa8000020
+#define DLB_CHP_DIR_CQ_INTR_ARMED2_RST 0x0
+union dlb_chp_dir_cq_intr_armed2 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INTR_ARMED1 0xa800001c
+#define DLB_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
+union dlb_chp_dir_cq_intr_armed1 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CHP_DIR_CQ_INTR_ARMED0 0xa8000018
+#define DLB_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
+union dlb_chp_dir_cq_intr_armed0 {
+	struct {
+		u32 armed : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_CFG_MSTR_DIAG_RESET_STS 0xb8000004
+#define DLB_CFG_MSTR_DIAG_RESET_STS_RST 0x1ff
+union dlb_cfg_mstr_diag_reset_sts {
+	struct {
+		u32 chp_pf_reset_done : 1;
+		u32 rop_pf_reset_done : 1;
+		u32 lsp_pf_reset_done : 1;
+		u32 nalb_pf_reset_done : 1;
+		u32 ap_pf_reset_done : 1;
+		u32 dp_pf_reset_done : 1;
+		u32 qed_pf_reset_done : 1;
+		u32 dqed_pf_reset_done : 1;
+		u32 aqed_pf_reset_done : 1;
+		u32 rsvd1 : 6;
+		u32 pf_reset_active : 1;
+		u32 chp_vf_reset_done : 1;
+		u32 rop_vf_reset_done : 1;
+		u32 lsp_vf_reset_done : 1;
+		u32 nalb_vf_reset_done : 1;
+		u32 ap_vf_reset_done : 1;
+		u32 dp_vf_reset_done : 1;
+		u32 qed_vf_reset_done : 1;
+		u32 dqed_vf_reset_done : 1;
+		u32 aqed_vf_reset_done : 1;
+		u32 rsvd0 : 6;
+		u32 vf_reset_active : 1;
+	} field;
+	u32 val;
+};
+
+#define DLB_CFG_MSTR_BCAST_RESET_VF_START 0xc8100000
+#define DLB_CFG_MSTR_BCAST_RESET_VF_START_RST 0x0
+/* HW Reset Types */
+#define VF_RST_TYPE_CQ_LDB   0
+#define VF_RST_TYPE_QID_LDB  1
+#define VF_RST_TYPE_POOL_LDB 2
+#define VF_RST_TYPE_CQ_DIR   8
+#define VF_RST_TYPE_QID_DIR  9
+#define VF_RST_TYPE_POOL_DIR 10
+union dlb_cfg_mstr_bcast_reset_vf_start {
+	struct {
+		u32 vf_reset_start : 1;
+		u32 reserved : 3;
+		u32 vf_reset_type : 4;
+		u32 vf_reset_id : 24;
+	} field;
+	u32 val;
+};
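+
+/*
+ * Sketch of composing a broadcast VF reset command from the fields above
+ * (illustrative only; hw, cq_id and the DLB_CSR_WR() accessor are assumed
+ * from the rest of this series):
+ *
+ *	union dlb_cfg_mstr_bcast_reset_vf_start r0 = { 0 };
+ *
+ *	r0.field.vf_reset_start = 1;
+ *	r0.field.vf_reset_type = VF_RST_TYPE_CQ_LDB;
+ *	r0.field.vf_reset_id = cq_id;
+ *
+ *	DLB_CSR_WR(hw, DLB_CFG_MSTR_BCAST_RESET_VF_START, r0.val);
+ */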
+
+#define DLB_FUNC_VF_VF2PF_MAILBOX_BYTES 256
+#define DLB_FUNC_VF_VF2PF_MAILBOX(x) \
+	(0x1000 + (x) * 0x4)
+#define DLB_FUNC_VF_VF2PF_MAILBOX_RST 0x0
+union dlb_func_vf_vf2pf_mailbox {
+	struct {
+		u32 msg : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
+#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
+union dlb_func_vf_vf2pf_mailbox_isr {
+	struct {
+		u32 isr : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_PF2VF_MAILBOX_BYTES 64
+#define DLB_FUNC_VF_PF2VF_MAILBOX(x) \
+	(0x2000 + (x) * 0x4)
+#define DLB_FUNC_VF_PF2VF_MAILBOX_RST 0x0
+union dlb_func_vf_pf2vf_mailbox {
+	struct {
+		u32 msg : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
+#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
+union dlb_func_vf_pf2vf_mailbox_isr {
+	struct {
+		u32 pf_isr : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
+#define DLB_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
+union dlb_func_vf_vf_msi_isr_pend {
+	struct {
+		u32 isr_pend : 32;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
+#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
+union dlb_func_vf_vf_reset_in_progress {
+	struct {
+		u32 reset_in_progress : 1;
+		u32 rsvd0 : 31;
+	} field;
+	u32 val;
+};
+
+#define DLB_FUNC_VF_VF_MSI_ISR 0x4000
+#define DLB_FUNC_VF_VF_MSI_ISR_RST 0x0
+union dlb_func_vf_vf_msi_isr {
+	struct {
+		u32 vf_msi_isr : 32;
+	} field;
+	u32 val;
+};
+
+#endif /* __DLB_REGS_H */
diff --git a/drivers/event/dlb/pf/base/dlb_resource.c b/drivers/event/dlb/pf/base/dlb_resource.c
new file mode 100644
index 000000000..cef81b8b4
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_resource.c
@@ -0,0 +1,9699 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+/* Copyright(c) 2016-2020 Intel Corporation */
+
+#include "dlb_hw_types.h"
+#include "dlb_user.h"
+#include "dlb_resource.h"
+#include "dlb_osdep.h"
+#include "dlb_osdep_bitmap.h"
+#include "dlb_osdep_types.h"
+#include "dlb_regs.h"
+#include "dlb_mbox.h"
+
+#define DLB_DOM_LIST_HEAD(head, type) \
+	DLB_LIST_HEAD((head), type, domain_list)
+
+#define DLB_FUNC_LIST_HEAD(head, type) \
+	DLB_LIST_HEAD((head), type, func_list)
+
+#define DLB_DOM_LIST_FOR(head, ptr, iter) \
+	DLB_LIST_FOR_EACH(head, ptr, domain_list, iter)
+
+#define DLB_FUNC_LIST_FOR(head, ptr, iter) \
+	DLB_LIST_FOR_EACH(head, ptr, func_list, iter)
+
+#define DLB_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
+
+#define DLB_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
+	DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
+
+/* The PF driver cannot assume that a register write will affect subsequent HCW
+ * writes. To ensure a write completes, the driver must read back a CSR. This
+ * function need only be called for configuration that can occur after the
+ * domain has started; prior to starting, applications can't send HCWs.
+ */
+static inline void dlb_flush_csr(struct dlb_hw *hw)
+{
+	DLB_CSR_RD(hw, DLB_SYS_TOTAL_VAS);
+}
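+
+/*
+ * Usage sketch (illustrative; the DLB_CSR_WR() write accessor is assumed
+ * to complement DLB_CSR_RD()): a post-start configuration write followed
+ * by a flush, so the CSR update completes before the application's next
+ * HCW is processed.
+ *
+ *	union dlb_lsp_cq_ldb_dsbl r0 = { 0 };
+ *
+ *	r0.field.disabled = 1;
+ *	DLB_CSR_WR(hw, DLB_LSP_CQ_LDB_DSBL(port_id), r0.val);
+ *	dlb_flush_csr(hw);
+ */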
+
+static void dlb_init_fn_rsrc_lists(struct dlb_function_resources *rsrc)
+{
+	dlb_list_init_head(&rsrc->avail_domains);
+	dlb_list_init_head(&rsrc->used_domains);
+	dlb_list_init_head(&rsrc->avail_ldb_queues);
+	dlb_list_init_head(&rsrc->avail_ldb_ports);
+	dlb_list_init_head(&rsrc->avail_dir_pq_pairs);
+	dlb_list_init_head(&rsrc->avail_ldb_credit_pools);
+	dlb_list_init_head(&rsrc->avail_dir_credit_pools);
+}
+
+static void dlb_init_domain_rsrc_lists(struct dlb_domain *domain)
+{
+	dlb_list_init_head(&domain->used_ldb_queues);
+	dlb_list_init_head(&domain->used_ldb_ports);
+	dlb_list_init_head(&domain->used_dir_pq_pairs);
+	dlb_list_init_head(&domain->used_ldb_credit_pools);
+	dlb_list_init_head(&domain->used_dir_credit_pools);
+	dlb_list_init_head(&domain->avail_ldb_queues);
+	dlb_list_init_head(&domain->avail_ldb_ports);
+	dlb_list_init_head(&domain->avail_dir_pq_pairs);
+	dlb_list_init_head(&domain->avail_ldb_credit_pools);
+	dlb_list_init_head(&domain->avail_dir_credit_pools);
+}
+
+int dlb_resource_init(struct dlb_hw *hw)
+{
+	struct dlb_list_entry *list;
+	unsigned int i;
+
+	/* For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. Which ports share QIDs
+	 * is application dependent, but the driver interleaves port IDs as
+	 * much as possible to reduce the likelihood of sequential IDs mapping
+	 * to a common QID. This initial allocation maximizes the average
+	 * distance between an ID and its immediate neighbors (i.e. the
+	 * distance from 1 to 0 and to 2, the distance from 2 to 1 and to 3,
+	 * etc.).
+	 */
+	u32 init_ldb_port_allocation[DLB_MAX_NUM_LDB_PORTS] = {
+		0,  31, 62, 29, 60, 27, 58, 25, 56, 23, 54, 21, 52, 19, 50, 17,
+		48, 15, 46, 13, 44, 11, 42,  9, 40,  7, 38,  5, 36,  3, 34, 1,
+		32, 63, 30, 61, 28, 59, 26, 57, 24, 55, 22, 53, 20, 51, 18, 49,
+		16, 47, 14, 45, 12, 43, 10, 41,  8, 39,  6, 37,  4, 35,  2, 33
+	};
+
+	/* Zero-out resource tracking data structures */
+	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
+	memset(&hw->pf, 0, sizeof(hw->pf));
+
+	dlb_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		memset(&hw->vf[i], 0, sizeof(hw->vf[i]));
+		dlb_init_fn_rsrc_lists(&hw->vf[i]);
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
+		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
+		dlb_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	hw->pf.num_avail_ldb_ports = DLB_MAX_NUM_LDB_PORTS;
+	for (i = 0; i < hw->pf.num_avail_ldb_ports; i++) {
+		struct dlb_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb_list_add(&hw->pf.avail_ldb_ports, &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB_MAX_NUM_DIR_PORTS;
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	hw->pf.num_avail_ldb_credit_pools = DLB_MAX_NUM_LDB_CREDIT_POOLS;
+	for (i = 0; i < hw->pf.num_avail_ldb_credit_pools; i++) {
+		list = &hw->rsrcs.ldb_credit_pools[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_ldb_credit_pools, list);
+	}
+
+	hw->pf.num_avail_dir_credit_pools = DLB_MAX_NUM_DIR_CREDIT_POOLS;
+	for (i = 0; i < hw->pf.num_avail_dir_credit_pools; i++) {
+		list = &hw->rsrcs.dir_credit_pools[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_dir_credit_pools, list);
+	}
+
+	/* There are 5120 history list entries, which allows us to overprovision
+	 * the inflight limit (4096) by 1k.
+	 */
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_hist_list_entries,
+			     DLB_MAX_NUM_HIST_LIST_ENTRIES))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_hist_list_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_qed_freelist_entries,
+			     DLB_MAX_NUM_LDB_CREDITS))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_qed_freelist_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_dqed_freelist_entries,
+			     DLB_MAX_NUM_DIR_CREDITS))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_dqed_freelist_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_aqed_freelist_entries,
+			     DLB_MAX_NUM_AQOS_ENTRIES))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_aqed_freelist_entries))
+		return -1;
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		if (dlb_bitmap_alloc(hw,
+				     &hw->vf[i].avail_hist_list_entries,
+				     DLB_MAX_NUM_HIST_LIST_ENTRIES))
+			return -1;
+		if (dlb_bitmap_alloc(hw,
+				     &hw->vf[i].avail_qed_freelist_entries,
+				     DLB_MAX_NUM_LDB_CREDITS))
+			return -1;
+		if (dlb_bitmap_alloc(hw,
+				     &hw->vf[i].avail_dqed_freelist_entries,
+				     DLB_MAX_NUM_DIR_CREDITS))
+			return -1;
+		if (dlb_bitmap_alloc(hw,
+				     &hw->vf[i].avail_aqed_freelist_entries,
+				     DLB_MAX_NUM_AQOS_ENTRIES))
+			return -1;
+
+		if (dlb_bitmap_zero(hw->vf[i].avail_hist_list_entries))
+			return -1;
+
+		if (dlb_bitmap_zero(hw->vf[i].avail_qed_freelist_entries))
+			return -1;
+
+		if (dlb_bitmap_zero(hw->vf[i].avail_dqed_freelist_entries))
+			return -1;
+
+		if (dlb_bitmap_zero(hw->vf[i].avail_aqed_freelist_entries))
+			return -1;
+	}
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
+		hw->domains[i].id.phys_id = i;
+		hw->domains[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_QUEUES; i++) {
+		hw->rsrcs.ldb_queues[i].id.phys_id = i;
+		hw->rsrcs.ldb_queues[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++) {
+		hw->rsrcs.ldb_ports[i].id.phys_id = i;
+		hw->rsrcs.ldb_ports[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++) {
+		hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
+		hw->rsrcs.dir_pq_pairs[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_CREDIT_POOLS; i++) {
+		hw->rsrcs.ldb_credit_pools[i].id.phys_id = i;
+		hw->rsrcs.ldb_credit_pools[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_CREDIT_POOLS; i++) {
+		hw->rsrcs.dir_credit_pools[i].id.phys_id = i;
+		hw->rsrcs.dir_credit_pools[i].id.vf_owned = false;
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 32 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 32;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	return 0;
+}
+
+void dlb_resource_free(struct dlb_hw *hw)
+{
+	int i;
+
+	dlb_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	dlb_bitmap_free(hw->pf.avail_qed_freelist_entries);
+
+	dlb_bitmap_free(hw->pf.avail_dqed_freelist_entries);
+
+	dlb_bitmap_free(hw->pf.avail_aqed_freelist_entries);
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		dlb_bitmap_free(hw->vf[i].avail_hist_list_entries);
+		dlb_bitmap_free(hw->vf[i].avail_qed_freelist_entries);
+		dlb_bitmap_free(hw->vf[i].avail_dqed_freelist_entries);
+		dlb_bitmap_free(hw->vf[i].avail_aqed_freelist_entries);
+	}
+}
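+
+/*
+ * Typical pairing (illustrative sketch): the probe path initializes the
+ * resource-tracking state once, and teardown releases the bitmaps that
+ * dlb_resource_init() allocated.
+ *
+ *	if (dlb_resource_init(hw))
+ *		return -ENOMEM;
+ *	...
+ *	dlb_resource_free(hw);
+ */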
+
+static struct dlb_domain *dlb_get_domain_from_id(struct dlb_hw *hw,
+						 u32 id,
+						 bool vf_request,
+						 unsigned int vf_id)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+	struct dlb_domain *domain;
+
+	if (id >= DLB_MAX_NUM_DOMAINS)
+		return NULL;
+
+	if (!vf_request)
+		return &hw->domains[id];
+
+	rsrcs = &hw->vf[vf_id];
+
+	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter)
+		if (domain->id.virt_id == id)
+			return domain;
+
+	return NULL;
+}
+
+static struct dlb_credit_pool *
+dlb_get_domain_ldb_pool(u32 id,
+			bool vf_request,
+			struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	if (id >= DLB_MAX_NUM_LDB_CREDIT_POOLS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter)
+		if ((!vf_request && pool->id.phys_id == id) ||
+		    (vf_request && pool->id.virt_id == id))
+			return pool;
+
+	return NULL;
+}
+
+static struct dlb_credit_pool *
+dlb_get_domain_dir_pool(u32 id,
+			bool vf_request,
+			struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	if (id >= DLB_MAX_NUM_DIR_CREDIT_POOLS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter)
+		if ((!vf_request && pool->id.phys_id == id) ||
+		    (vf_request && pool->id.virt_id == id))
+			return pool;
+
+	return NULL;
+}
+
+static struct dlb_ldb_port *dlb_get_ldb_port_from_id(struct dlb_hw *hw,
+						     u32 id,
+						     bool vf_request,
+						     unsigned int vf_id)
+{
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+
+	if (id >= DLB_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
+
+	if (!vf_request)
+		return &hw->rsrcs.ldb_ports[id];
+
+	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter2)
+			if (port->id.virt_id == id)
+				return port;
+	}
+
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter1)
+		if (port->id.virt_id == id)
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_ldb_port *
+dlb_get_domain_used_ldb_port(u32 id,
+			     bool vf_request,
+			     struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	if (id >= DLB_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_ldb_port *dlb_get_domain_ldb_port(u32 id,
+						    bool vf_request,
+						    struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	if (id >= DLB_MAX_NUM_LDB_PORTS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_dir_pq_pair *dlb_get_dir_pq_from_id(struct dlb_hw *hw,
+						      u32 id,
+						      bool vf_request,
+						      unsigned int vf_id)
+{
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+	struct dlb_dir_pq_pair *port;
+	struct dlb_domain *domain;
+
+	if (id >= DLB_MAX_NUM_DIR_PORTS)
+		return NULL;
+
+	rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
+
+	if (!vf_request)
+		return &hw->rsrcs.dir_pq_pairs[id];
+
+	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter2)
+			if (port->id.virt_id == id)
+				return port;
+	}
+
+	DLB_FUNC_LIST_FOR(rsrcs->avail_dir_pq_pairs, port, iter1)
+		if (port->id.virt_id == id)
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_dir_pq_pair *
+dlb_get_domain_used_dir_pq(u32 id,
+			   bool vf_request,
+			   struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+
+	if (id >= DLB_MAX_NUM_DIR_PORTS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_dir_pq_pair *dlb_get_domain_dir_pq(u32 id,
+						     bool vf_request,
+						     struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+
+	if (id >= DLB_MAX_NUM_DIR_PORTS)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	DLB_DOM_LIST_FOR(domain->avail_dir_pq_pairs, port, iter)
+		if ((!vf_request && port->id.phys_id == id) ||
+		    (vf_request && port->id.virt_id == id))
+			return port;
+
+	return NULL;
+}
+
+static struct dlb_ldb_queue *dlb_get_ldb_queue_from_id(struct dlb_hw *hw,
+						       u32 id,
+						       bool vf_request,
+						       unsigned int vf_id)
+{
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+	struct dlb_ldb_queue *queue;
+	struct dlb_domain *domain;
+
+	if (id >= DLB_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
+
+	if (!vf_request)
+		return &hw->rsrcs.ldb_queues[id];
+
+	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
+			if (queue->id.virt_id == id)
+				return queue;
+	}
+
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
+		if (queue->id.virt_id == id)
+			return queue;
+
+	return NULL;
+}
+
+static struct dlb_ldb_queue *dlb_get_domain_ldb_queue(u32 id,
+						      bool vf_request,
+						      struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_queue *queue;
+
+	if (id >= DLB_MAX_NUM_LDB_QUEUES)
+		return NULL;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
+		if ((!vf_request && queue->id.phys_id == id) ||
+		    (vf_request && queue->id.virt_id == id))
+			return queue;
+
+	return NULL;
+}
+
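+/*
+ * Move up to 'num' resources of type 'type_t' from src's available list
+ * to dst's, updating both availability counters. If src holds fewer than
+ * 'num', everything it has is moved; callers check the availability
+ * counters to detect a shortfall.
+ */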
+#define DLB_XFER_LL_RSRC(dst, src, num, type_t, name) ({		    \
+	struct dlb_list_entry *it1 __attribute__((unused));		    \
+	struct dlb_list_entry *it2 __attribute__((unused));		    \
+	struct dlb_function_resources *_src = src;			    \
+	struct dlb_function_resources *_dst = dst;			    \
+	type_t *ptr, *tmp __attribute__((unused));			    \
+	unsigned int i = 0;						    \
+									    \
+	DLB_FUNC_LIST_FOR_SAFE(_src->avail_##name##s, ptr, tmp, it1, it2) { \
+		if (i++ == (num))					    \
+			break;						    \
+									    \
+		dlb_list_del(&_src->avail_##name##s, &ptr->func_list);	    \
+		dlb_list_add(&_dst->avail_##name##s,  &ptr->func_list);     \
+		_src->num_avail_##name##s--;				    \
+		_dst->num_avail_##name##s++;				    \
+	}								    \
+})
+
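+/* Mark every resource on the given list as no longer VF-owned. */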
+#define DLB_VF_ID_CLEAR(head, type_t) ({   \
+	struct dlb_list_entry *iter __attribute__((unused)); \
+	type_t *var;					     \
+							     \
+	DLB_FUNC_LIST_FOR(head, var, iter)		     \
+		var->id.vf_owned = false;		     \
+})
+
+int dlb_update_vf_sched_domains(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_function_resources *src, *dst;
+	struct dlb_domain *domain;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_domains;
+
+	/* Detach the destination VF's current resources before checking if
+	 * enough are available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_domains, struct dlb_domain);
+
+	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_domain, domain);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_domains) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_domain, domain);
+
+	/* Set the domains' VF backpointer */
+	DLB_FUNC_LIST_FOR(dst->avail_domains, domain, iter)
+		domain->parent_func = dst;
+
+	return ret;
+}
+
+int dlb_update_vf_ldb_queues(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_ldb_queues;
+
+	/* Detach the destination VF's current resources before checking if
+	 * enough are available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_ldb_queues, struct dlb_ldb_queue);
+
+	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_queue, ldb_queue);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_ldb_queues) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_queue, ldb_queue);
+
+	return ret;
+}
+
+int dlb_update_vf_ldb_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_ldb_ports;
+
+	/* Detach the destination VF's current resources before checking if
+	 * enough are available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_ldb_ports, struct dlb_ldb_port);
+
+	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_port, ldb_port);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_ldb_ports) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_port, ldb_port);
+
+	return ret;
+}
+
+int dlb_update_vf_dir_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_dir_pq_pairs;
+
+	/* Detach the destination VF's current resources before checking if
+	 * enough are available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_dir_pq_pairs, struct dlb_dir_pq_pair);
+
+	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_dir_pq_pair, dir_pq_pair);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_dir_pq_pairs) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_dir_pq_pair, dir_pq_pair);
+
+	return ret;
+}
+
+int dlb_update_vf_ldb_credit_pools(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_ldb_credit_pools;
+
+	/* Detach the destination VF's current resources before checking if
+	 * enough are available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_ldb_credit_pools, struct dlb_credit_pool);
+
+	DLB_XFER_LL_RSRC(src,
+			 dst,
+			 orig,
+			 struct dlb_credit_pool,
+			 ldb_credit_pool);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_ldb_credit_pools) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst,
+			 src,
+			 num,
+			 struct dlb_credit_pool,
+			 ldb_credit_pool);
+
+	return ret;
+}
+
+int dlb_update_vf_dir_credit_pools(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+	unsigned int orig;
+	int ret;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	orig = dst->num_avail_dir_credit_pools;
+
+	/* Detach the VF's current resources before checking if enough are
+	 * available, and set their IDs accordingly.
+	 */
+	DLB_VF_ID_CLEAR(dst->avail_dir_credit_pools, struct dlb_credit_pool);
+
+	DLB_XFER_LL_RSRC(src,
+			 dst,
+			 orig,
+			 struct dlb_credit_pool,
+			 dir_credit_pool);
+
+	/* Are there enough available resources to satisfy the request? */
+	if (num > src->num_avail_dir_credit_pools) {
+		num = orig;
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
+	DLB_XFER_LL_RSRC(dst,
+			 src,
+			 num,
+			 struct dlb_credit_pool,
+			 dir_credit_pool);
+
+	return ret;
+}
+
+static int dlb_transfer_bitmap_resources(struct dlb_bitmap *src,
+					 struct dlb_bitmap *dst,
+					 u32 num)
+{
+	int orig, ret, base;
+
+	/* Validate bitmaps before use */
+	if (dlb_bitmap_count(dst) < 0 || dlb_bitmap_count(src) < 0)
+		return -EINVAL;
+
+	/* Reassign the dest's bitmap entries to the source's before checking
+	 * if a contiguous chunk of size 'num' is available. The reassignment
+	 * may be necessary to create a sufficiently large contiguous chunk.
+	 */
+	orig = dlb_bitmap_count(dst);
+
+	dlb_bitmap_or(src, src, dst);
+
+	dlb_bitmap_zero(dst);
+
+	/* Are there enough available resources to satisfy the request? */
+	base = dlb_bitmap_find_set_bit_range(src, num);
+
+	if (base == -ENOENT) {
+		num = orig;
+		base = dlb_bitmap_find_set_bit_range(src, num);
+		ret = -EINVAL;
+	} else {
+		ret = 0;
+	}
+
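+	/* Note: the restore path assumes the merged bitmap still contains a
+	 * contiguous run of 'orig' entries, so the destination's original
+	 * allocation can always be restored.
+	 */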
+	dlb_bitmap_set_range(dst, base, num);
+
+	dlb_bitmap_clear_range(src, base, num);
+
+	return ret;
+}
+
+int dlb_update_vf_ldb_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	return dlb_transfer_bitmap_resources(src->avail_qed_freelist_entries,
+					     dst->avail_qed_freelist_entries,
+					     num);
+}
+
+int dlb_update_vf_dir_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	return dlb_transfer_bitmap_resources(src->avail_dqed_freelist_entries,
+					     dst->avail_dqed_freelist_entries,
+					     num);
+}
+
+int dlb_update_vf_hist_list_entries(struct dlb_hw *hw,
+				    u32 vf_id,
+				    u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	return dlb_transfer_bitmap_resources(src->avail_hist_list_entries,
+					     dst->avail_hist_list_entries,
+					     num);
+}
+
+int dlb_update_vf_atomic_inflights(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num)
+{
+	struct dlb_function_resources *src, *dst;
+
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	src = &hw->pf;
+	dst = &hw->vf[vf_id];
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	return dlb_transfer_bitmap_resources(src->avail_aqed_freelist_entries,
+					     dst->avail_aqed_freelist_entries,
+					     num);
+}
+
+static int dlb_attach_ldb_queues(struct dlb_hw *hw,
+				 struct dlb_function_resources *rsrcs,
+				 struct dlb_domain *domain,
+				 u32 num_queues,
+				 struct dlb_cmd_response *resp)
+{
+	unsigned int i, j;
+
+	if (rsrcs->num_avail_ldb_queues < num_queues) {
+		resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
+		return -1;
+	}
+
+	for (i = 0; i < num_queues; i++) {
+		struct dlb_ldb_queue *queue;
+
+		queue = DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
+					   typeof(*queue));
+		if (!queue) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: domain validation failed\n",
+				   __func__);
+			goto cleanup;
+		}
+
+		dlb_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
+
+		queue->domain_id = domain->id;
+		queue->owned = true;
+
+		dlb_list_add(&domain->avail_ldb_queues, &queue->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_queues -= num_queues;
+
+	return 0;
+
+cleanup:
+
+	/* Return the assigned queues */
+	for (j = 0; j < i; j++) {
+		struct dlb_ldb_queue *queue;
+
+		queue = DLB_FUNC_LIST_HEAD(domain->avail_ldb_queues,
+					   typeof(*queue));
+		/* Unrecoverable internal error */
+		if (!queue)
+			break;
+
+		queue->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+		dlb_list_add(&rsrcs->avail_ldb_queues, &queue->func_list);
+	}
+
+	return -EFAULT;
+}
+
+static struct dlb_ldb_port *
+dlb_get_next_ldb_port(struct dlb_hw *hw,
+		      struct dlb_function_resources *rsrcs,
+		      u32 domain_id)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	/* To reduce the odds of consecutive load-balanced ports mapping to the
+	 * same queue(s), the driver attempts to allocate ports whose neighbors
+	 * are owned by a different domain.
+	 */
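+	/* For example: a free port N whose neighbors N-1 and N+1 (with
+	 * wrap-around) are both owned by other domains is preferred over a
+	 * free port that neighbors this domain's own ports.
+	 */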
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[next].owned ||
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
+			continue;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned ||
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
+			continue;
+
+		return port;
+	}
+
+	/* Failing that, the driver looks for a port with one neighbor owned by
+	 * a different domain and the other unallocated.
+	 */
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
+			return port;
+
+		if (!hw->rsrcs.ldb_ports[next].owned &&
+		    hw->rsrcs.ldb_ports[prev].owned &&
+		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
+			return port;
+	}
+
+	/* Failing that, the driver looks for a port with both neighbors
+	 * unallocated.
+	 */
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
+		u32 next, prev;
+		u32 phys_id;
+
+		phys_id = port->id.phys_id;
+		next = phys_id + 1;
+		prev = phys_id - 1;
+
+		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
+			next = 0;
+		if (phys_id == 0)
+			prev = DLB_MAX_NUM_LDB_PORTS - 1;
+
+		if (!hw->rsrcs.ldb_ports[prev].owned &&
+		    !hw->rsrcs.ldb_ports[next].owned)
+			return port;
+	}
+
+	/* If all else fails, the driver returns the next available port. */
+	return DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports, typeof(*port));
+}
+
+static int dlb_attach_ldb_ports(struct dlb_hw *hw,
+				struct dlb_function_resources *rsrcs,
+				struct dlb_domain *domain,
+				u32 num_ports,
+				struct dlb_cmd_response *resp)
+{
+	unsigned int i, j;
+
+	if (rsrcs->num_avail_ldb_ports < num_ports) {
+		resp->status = DLB_ST_LDB_PORTS_UNAVAILABLE;
+		return -1;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb_ldb_port *port;
+
+		port = dlb_get_next_ldb_port(hw, rsrcs, domain->id.phys_id);
+
+		if (!port) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: domain validation failed\n",
+				   __func__);
+			goto cleanup;
+		}
+
+		dlb_list_del(&rsrcs->avail_ldb_ports, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb_list_add(&domain->avail_ldb_ports, &port->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_ports -= num_ports;
+
+	return 0;
+
+cleanup:
+
+	/* Return the assigned ports */
+	for (j = 0; j < i; j++) {
+		struct dlb_ldb_port *port;
+
+		port = DLB_FUNC_LIST_HEAD(domain->avail_ldb_ports,
+					  typeof(*port));
+		/* Unrecoverable internal error */
+		if (!port)
+			break;
+
+		port->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_ports, &port->domain_list);
+
+		dlb_list_add(&rsrcs->avail_ldb_ports, &port->func_list);
+	}
+
+	return -EFAULT;
+}
+
+static int dlb_attach_dir_ports(struct dlb_hw *hw,
+				struct dlb_function_resources *rsrcs,
+				struct dlb_domain *domain,
+				u32 num_ports,
+				struct dlb_cmd_response *resp)
+{
+	unsigned int i, j;
+
+	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
+		resp->status = DLB_ST_DIR_PORTS_UNAVAILABLE;
+		return -1;
+	}
+
+	for (i = 0; i < num_ports; i++) {
+		struct dlb_dir_pq_pair *port;
+
+		port = DLB_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
+					  typeof(*port));
+		if (!port) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: domain validation failed\n",
+				   __func__);
+			goto cleanup;
+		}
+
+		dlb_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+
+		port->domain_id = domain->id;
+		port->owned = true;
+
+		dlb_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
+	}
+
+	rsrcs->num_avail_dir_pq_pairs -= num_ports;
+
+	return 0;
+
+cleanup:
+
+	/* Return the assigned ports */
+	for (j = 0; j < i; j++) {
+		struct dlb_dir_pq_pair *port;
+
+		port = DLB_FUNC_LIST_HEAD(domain->avail_dir_pq_pairs,
+					  typeof(*port));
+		/* Unrecoverable internal error */
+		if (!port)
+			break;
+
+		port->owned = false;
+
+		dlb_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb_list_add(&rsrcs->avail_dir_pq_pairs, &port->func_list);
+	}
+
+	return -EFAULT;
+}
+
+static int dlb_attach_ldb_credits(struct dlb_function_resources *rsrcs,
+				  struct dlb_domain *domain,
+				  u32 num_credits,
+				  struct dlb_cmd_response *resp)
+{
+	struct dlb_bitmap *bitmap = rsrcs->avail_qed_freelist_entries;
+
+	if (dlb_bitmap_count(bitmap) < (int)num_credits) {
+		resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
+		return -1;
+	}
+
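+	/* Credits are carved out of the freelist as a single contiguous
+	 * range; the domain tracks only its base, bound, and offset.
+	 */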
+	if (num_credits) {
+		int base;
+
+		base = dlb_bitmap_find_set_bit_range(bitmap, num_credits);
+		if (base < 0)
+			goto error;
+
+		domain->qed_freelist.base = base;
+		domain->qed_freelist.bound = base + num_credits;
+		domain->qed_freelist.offset = 0;
+
+		dlb_bitmap_clear_range(bitmap, base, num_credits);
+	}
+
+	return 0;
+
+error:
+	resp->status = DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE;
+	return -1;
+}
+
+static int dlb_attach_dir_credits(struct dlb_function_resources *rsrcs,
+				  struct dlb_domain *domain,
+				  u32 num_credits,
+				  struct dlb_cmd_response *resp)
+{
+	struct dlb_bitmap *bitmap = rsrcs->avail_dqed_freelist_entries;
+
+	if (dlb_bitmap_count(bitmap) < (int)num_credits) {
+		resp->status = DLB_ST_DIR_CREDITS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (num_credits) {
+		int base;
+
+		base = dlb_bitmap_find_set_bit_range(bitmap, num_credits);
+		if (base < 0)
+			goto error;
+
+		domain->dqed_freelist.base = base;
+		domain->dqed_freelist.bound = base + num_credits;
+		domain->dqed_freelist.offset = 0;
+
+		dlb_bitmap_clear_range(bitmap, base, num_credits);
+	}
+
+	return 0;
+
+error:
+	resp->status = DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE;
+	return -1;
+}
+
+static int dlb_attach_ldb_credit_pools(struct dlb_hw *hw,
+				       struct dlb_function_resources *rsrcs,
+				       struct dlb_domain *domain,
+				       u32 num_credit_pools,
+				       struct dlb_cmd_response *resp)
+{
+	unsigned int i, j;
+
+	if (rsrcs->num_avail_ldb_credit_pools < num_credit_pools) {
+		resp->status = DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE;
+		return -1;
+	}
+
+	for (i = 0; i < num_credit_pools; i++) {
+		struct dlb_credit_pool *pool;
+
+		pool = DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_credit_pools,
+					  typeof(*pool));
+		if (!pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: domain validation failed\n",
+				   __func__);
+			goto cleanup;
+		}
+
+		dlb_list_del(&rsrcs->avail_ldb_credit_pools,
+			     &pool->func_list);
+
+		pool->domain_id = domain->id;
+		pool->owned = true;
+
+		dlb_list_add(&domain->avail_ldb_credit_pools,
+			     &pool->domain_list);
+	}
+
+	rsrcs->num_avail_ldb_credit_pools -= num_credit_pools;
+
+	return 0;
+
+cleanup:
+
+	/* Return the assigned credit pools */
+	for (j = 0; j < i; j++) {
+		struct dlb_credit_pool *pool;
+
+		pool = DLB_FUNC_LIST_HEAD(domain->avail_ldb_credit_pools,
+					  typeof(*pool));
+		/* Unrecoverable internal error */
+		if (!pool)
+			break;
+
+		pool->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_credit_pools,
+			     &pool->domain_list);
+
+		dlb_list_add(&rsrcs->avail_ldb_credit_pools,
+			     &pool->func_list);
+	}
+
+	return -EFAULT;
+}
+
+static int dlb_attach_dir_credit_pools(struct dlb_hw *hw,
+				       struct dlb_function_resources *rsrcs,
+				       struct dlb_domain *domain,
+				       u32 num_credit_pools,
+				       struct dlb_cmd_response *resp)
+{
+	unsigned int i, j;
+
+	if (rsrcs->num_avail_dir_credit_pools < num_credit_pools) {
+		resp->status = DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE;
+		return -1;
+	}
+
+	for (i = 0; i < num_credit_pools; i++) {
+		struct dlb_credit_pool *pool;
+
+		pool = DLB_FUNC_LIST_HEAD(rsrcs->avail_dir_credit_pools,
+					  typeof(*pool));
+		if (!pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: domain validation failed\n",
+				   __func__);
+			goto cleanup;
+		}
+
+		dlb_list_del(&rsrcs->avail_dir_credit_pools,
+			     &pool->func_list);
+
+		pool->domain_id = domain->id;
+		pool->owned = true;
+
+		dlb_list_add(&domain->avail_dir_credit_pools,
+			     &pool->domain_list);
+	}
+
+	rsrcs->num_avail_dir_credit_pools -= num_credit_pools;
+
+	return 0;
+
+cleanup:
+
+	/* Return the assigned credit pools */
+	for (j = 0; j < i; j++) {
+		struct dlb_credit_pool *pool;
+
+		pool = DLB_FUNC_LIST_HEAD(domain->avail_dir_credit_pools,
+					  typeof(*pool));
+		/* Unrecoverable internal error */
+		if (!pool)
+			break;
+
+		pool->owned = false;
+
+		dlb_list_del(&domain->avail_dir_credit_pools,
+			     &pool->domain_list);
+
+		dlb_list_add(&rsrcs->avail_dir_credit_pools,
+			     &pool->func_list);
+	}
+
+	return -EFAULT;
+}
+
+static int dlb_attach_atomic_inflights(struct dlb_function_resources *rsrcs,
+				       struct dlb_domain *domain,
+				       u32 num_atomic_inflights,
+				       struct dlb_cmd_response *resp)
+{
+	if (num_atomic_inflights) {
+		struct dlb_bitmap *bitmap =
+			rsrcs->avail_aqed_freelist_entries;
+		int base;
+
+		base = dlb_bitmap_find_set_bit_range(bitmap,
+						     num_atomic_inflights);
+		if (base < 0)
+			goto error;
+
+		domain->aqed_freelist.base = base;
+		domain->aqed_freelist.bound = base + num_atomic_inflights;
+		domain->aqed_freelist.offset = 0;
+
+		dlb_bitmap_clear_range(bitmap, base, num_atomic_inflights);
+	}
+
+	return 0;
+
+error:
+	resp->status = DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+	return -1;
+}
+
+static int
+dlb_attach_domain_hist_list_entries(struct dlb_function_resources *rsrcs,
+				    struct dlb_domain *domain,
+				    u32 num_hist_list_entries,
+				    struct dlb_cmd_response *resp)
+{
+	struct dlb_bitmap *bitmap;
+	int base;
+
+	if (num_hist_list_entries) {
+		bitmap = rsrcs->avail_hist_list_entries;
+
+		base = dlb_bitmap_find_set_bit_range(bitmap,
+						     num_hist_list_entries);
+		if (base < 0)
+			goto error;
+
+		domain->total_hist_list_entries = num_hist_list_entries;
+		domain->avail_hist_list_entries = num_hist_list_entries;
+		domain->hist_list_entry_base = base;
+		domain->hist_list_entry_offset = 0;
+
+		dlb_bitmap_clear_range(bitmap, base, num_hist_list_entries);
+	}
+
+	return 0;
+
+error:
+	resp->status = DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	return -1;
+}
+
+static unsigned int
+dlb_get_num_ports_in_use(struct dlb_hw *hw)
+{
+	unsigned int i, n = 0;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++)
+		if (hw->rsrcs.ldb_ports[i].owned)
+			n++;
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++)
+		if (hw->rsrcs.dir_pq_pairs[i].owned)
+			n++;
+
+	return n;
+}
+
+static int
+dlb_verify_create_sched_domain_args(struct dlb_hw *hw,
+				    struct dlb_function_resources *rsrcs,
+				    struct dlb_create_sched_domain_args *args,
+				    struct dlb_cmd_response *resp)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_bitmap *ldb_credit_freelist;
+	struct dlb_bitmap *dir_credit_freelist;
+	unsigned int ldb_credit_freelist_count;
+	unsigned int dir_credit_freelist_count;
+	unsigned int max_contig_aqed_entries;
+	unsigned int max_contig_dqed_entries;
+	unsigned int max_contig_qed_entries;
+	unsigned int max_contig_hl_entries;
+	struct dlb_bitmap *aqed_freelist;
+	enum dlb_dev_revision revision;
+
+	ldb_credit_freelist = rsrcs->avail_qed_freelist_entries;
+	dir_credit_freelist = rsrcs->avail_dqed_freelist_entries;
+	aqed_freelist = rsrcs->avail_aqed_freelist_entries;
+
+	ldb_credit_freelist_count = dlb_bitmap_count(ldb_credit_freelist);
+	dir_credit_freelist_count = dlb_bitmap_count(dir_credit_freelist);
+
+	max_contig_hl_entries =
+		dlb_bitmap_longest_set_range(rsrcs->avail_hist_list_entries);
+	max_contig_aqed_entries =
+		dlb_bitmap_longest_set_range(aqed_freelist);
+	max_contig_qed_entries =
+		dlb_bitmap_longest_set_range(ldb_credit_freelist);
+	max_contig_dqed_entries =
+		dlb_bitmap_longest_set_range(dir_credit_freelist);
+
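+	/* Validate the request against the available resources; the first
+	 * failing check determines the response status.
+	 */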
+	if (rsrcs->num_avail_domains < 1)
+		resp->status = DLB_ST_DOMAIN_UNAVAILABLE;
+	else if (rsrcs->num_avail_ldb_queues < args->num_ldb_queues)
+		resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
+	else if (rsrcs->num_avail_ldb_ports < args->num_ldb_ports)
+		resp->status = DLB_ST_LDB_PORTS_UNAVAILABLE;
+	else if (args->num_ldb_queues > 0 && args->num_ldb_ports == 0)
+		resp->status = DLB_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;
+	else if (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports)
+		resp->status = DLB_ST_DIR_PORTS_UNAVAILABLE;
+	else if (ldb_credit_freelist_count < args->num_ldb_credits)
+		resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
+	else if (dir_credit_freelist_count < args->num_dir_credits)
+		resp->status = DLB_ST_DIR_CREDITS_UNAVAILABLE;
+	else if (rsrcs->num_avail_ldb_credit_pools < args->num_ldb_credit_pools)
+		resp->status = DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE;
+	else if (rsrcs->num_avail_dir_credit_pools < args->num_dir_credit_pools)
+		resp->status = DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE;
+	else if (max_contig_hl_entries < args->num_hist_list_entries)
+		resp->status = DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+	else if (max_contig_aqed_entries < args->num_atomic_inflights)
+		resp->status = DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+	else if (max_contig_qed_entries < args->num_ldb_credits)
+		resp->status = DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE;
+	else if (max_contig_dqed_entries < args->num_dir_credits)
+		resp->status = DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE;
+
+	/* DLB A-stepping workaround for a hardware write-buffer lock-up
+	 * issue: limit the maximum number of configured ports to fewer than
+	 * 128 and disable CQ occupancy interrupts.
+	 */
+	revision = os_get_dev_revision(hw);
+
+	if (revision < DLB_B0) {
+		u32 n = dlb_get_num_ports_in_use(hw);
+
+		n += args->num_ldb_ports + args->num_dir_ports;
+
+		if (n >= DLB_A_STEP_MAX_PORTS)
+			resp->status = args->num_ldb_ports ?
+				DLB_ST_LDB_PORTS_UNAVAILABLE :
+				DLB_ST_DIR_PORTS_UNAVAILABLE;
+	}
+
+	if (resp->status)
+		return -1;
+
+	return 0;
+}
+
+static int
+dlb_verify_create_ldb_pool_args(struct dlb_hw *hw,
+				u32 domain_id,
+				struct dlb_create_ldb_pool_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_freelist *qed_freelist;
+	struct dlb_domain *domain;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	qed_freelist = &domain->qed_freelist;
+
+	if (dlb_freelist_count(qed_freelist) < args->num_ldb_credits) {
+		resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (dlb_list_empty(&domain->avail_ldb_credit_pools)) {
+		resp->status = DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+dlb_configure_ldb_credit_pool(struct dlb_hw *hw,
+			      struct dlb_domain *domain,
+			      struct dlb_create_ldb_pool_args *args,
+			      struct dlb_credit_pool *pool)
+{
+	union dlb_sys_ldb_pool_enbld r0 = { {0} };
+	union dlb_chp_ldb_pool_crd_lim r1 = { {0} };
+	union dlb_chp_ldb_pool_crd_cnt r2 = { {0} };
+	union dlb_chp_qed_fl_base r3 = { {0} };
+	union dlb_chp_qed_fl_lim r4 = { {0} };
+	union dlb_chp_qed_fl_push_ptr r5 = { {0} };
+	union dlb_chp_qed_fl_pop_ptr r6 = { {0} };
+
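+	/* Set the pool's credit limit and count, point its freelist at the
+	 * next unused chunk of the domain's QED entries, then enable it.
+	 */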
+	r1.field.limit = args->num_ldb_credits;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_POOL_CRD_LIM(pool->id.phys_id), r1.val);
+
+	r2.field.count = args->num_ldb_credits;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_POOL_CRD_CNT(pool->id.phys_id), r2.val);
+
+	r3.field.base = domain->qed_freelist.base + domain->qed_freelist.offset;
+
+	DLB_CSR_WR(hw, DLB_CHP_QED_FL_BASE(pool->id.phys_id), r3.val);
+
+	r4.field.freelist_disable = 0;
+	r4.field.limit = r3.field.base + args->num_ldb_credits - 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_QED_FL_LIM(pool->id.phys_id), r4.val);
+
+	r5.field.push_ptr = r3.field.base;
+	r5.field.generation = 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_QED_FL_PUSH_PTR(pool->id.phys_id), r5.val);
+
+	r6.field.pop_ptr = r3.field.base;
+	r6.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_QED_FL_POP_PTR(pool->id.phys_id), r6.val);
+
+	r0.field.pool_enabled = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_POOL_ENBLD(pool->id.phys_id), r0.val);
+
+	pool->avail_credits = args->num_ldb_credits;
+	pool->total_credits = args->num_ldb_credits;
+	domain->qed_freelist.offset += args->num_ldb_credits;
+
+	pool->configured = true;
+}
+
+static int
+dlb_verify_create_dir_pool_args(struct dlb_hw *hw,
+				u32 domain_id,
+				struct dlb_create_dir_pool_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_freelist *dqed_freelist;
+	struct dlb_domain *domain;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	dqed_freelist = &domain->dqed_freelist;
+
+	if (dlb_freelist_count(dqed_freelist) < args->num_dir_credits) {
+		resp->status = DLB_ST_DIR_CREDITS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (dlb_list_empty(&domain->avail_dir_credit_pools)) {
+		resp->status = DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+dlb_configure_dir_credit_pool(struct dlb_hw *hw,
+			      struct dlb_domain *domain,
+			      struct dlb_create_dir_pool_args *args,
+			      struct dlb_credit_pool *pool)
+{
+	union dlb_sys_dir_pool_enbld r0 = { {0} };
+	union dlb_chp_dir_pool_crd_lim r1 = { {0} };
+	union dlb_chp_dir_pool_crd_cnt r2 = { {0} };
+	union dlb_chp_dqed_fl_base r3 = { {0} };
+	union dlb_chp_dqed_fl_lim r4 = { {0} };
+	union dlb_chp_dqed_fl_push_ptr r5 = { {0} };
+	union dlb_chp_dqed_fl_pop_ptr r6 = { {0} };
+
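+	/* Same sequence as the LDB pool setup, using the directed (DQED)
+	 * freelist.
+	 */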
+	r1.field.limit = args->num_dir_credits;
+
+	DLB_CSR_WR(hw, DLB_CHP_DIR_POOL_CRD_LIM(pool->id.phys_id), r1.val);
+
+	r2.field.count = args->num_dir_credits;
+
+	DLB_CSR_WR(hw, DLB_CHP_DIR_POOL_CRD_CNT(pool->id.phys_id), r2.val);
+
+	r3.field.base = domain->dqed_freelist.base +
+			domain->dqed_freelist.offset;
+
+	DLB_CSR_WR(hw, DLB_CHP_DQED_FL_BASE(pool->id.phys_id), r3.val);
+
+	r4.field.freelist_disable = 0;
+	r4.field.limit = r3.field.base + args->num_dir_credits - 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_DQED_FL_LIM(pool->id.phys_id), r4.val);
+
+	r5.field.push_ptr = r3.field.base;
+	r5.field.generation = 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_DQED_FL_PUSH_PTR(pool->id.phys_id), r5.val);
+
+	r6.field.pop_ptr = r3.field.base;
+	r6.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_DQED_FL_POP_PTR(pool->id.phys_id), r6.val);
+
+	r0.field.pool_enabled = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_POOL_ENBLD(pool->id.phys_id), r0.val);
+
+	pool->avail_credits = args->num_dir_credits;
+	pool->total_credits = args->num_dir_credits;
+	domain->dqed_freelist.offset += args->num_dir_credits;
+
+	pool->configured = true;
+}
+
+static int
+dlb_verify_create_ldb_queue_args(struct dlb_hw *hw,
+				 u32 domain_id,
+				 struct dlb_create_ldb_queue_args *args,
+				 struct dlb_cmd_response *resp,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_freelist *aqed_freelist;
+	struct dlb_domain *domain;
+	int i;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	if (dlb_list_empty(&domain->avail_ldb_queues)) {
+		resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
+		return -1;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -1;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -1;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -1;
+	}
+
+	aqed_freelist = &domain->aqed_freelist;
+
+	if (dlb_freelist_count(aqed_freelist) < args->num_atomic_inflights) {
+		resp->status = DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_verify_create_dir_queue_args(struct dlb_hw *hw,
+				 u32 domain_id,
+				 struct dlb_create_dir_queue_args *args,
+				 struct dlb_cmd_response *resp,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	/* If the user claims the port is already configured, validate the
+	 * port ID, that it belongs to this domain, and that it has in fact
+	 * been configured.
+	 */
+	if (args->port_id != -1) {
+		struct dlb_dir_pq_pair *port;
+
+		port = dlb_get_domain_used_dir_pq(args->port_id,
+						  vf_request,
+						  domain);
+
+		if (!port || port->domain_id.phys_id != domain->id.phys_id ||
+		    !port->port_configured) {
+			resp->status = DLB_ST_INVALID_PORT_ID;
+			return -1;
+		}
+	}
+
+	/* If the queue's port is not configured, validate that a free
+	 * port-queue pair is available.
+	 */
+	if (args->port_id == -1 &&
+	    dlb_list_empty(&domain->avail_dir_pq_pairs)) {
+		resp->status = DLB_ST_DIR_QUEUES_UNAVAILABLE;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void dlb_configure_ldb_queue(struct dlb_hw *hw,
+				    struct dlb_domain *domain,
+				    struct dlb_ldb_queue *queue,
+				    struct dlb_create_ldb_queue_args *args,
+				    bool vf_request,
+				    unsigned int vf_id)
+{
+	union dlb_sys_vf_ldb_vqid_v r0 = { {0} };
+	union dlb_sys_vf_ldb_vqid2qid r1 = { {0} };
+	union dlb_sys_ldb_qid2vqid r2 = { {0} };
+	union dlb_sys_ldb_vasqid_v r3 = { {0} };
+	union dlb_lsp_qid_ldb_infl_lim r4 = { {0} };
+	union dlb_lsp_qid_aqed_active_lim r5 = { {0} };
+	union dlb_aqed_pipe_fl_lim r6 = { {0} };
+	union dlb_aqed_pipe_fl_base r7 = { {0} };
+	union dlb_chp_ord_qid_sn_map r11 = { {0} };
+	union dlb_sys_ldb_qid_cfg_v r12 = { {0} };
+	union dlb_sys_ldb_qid_v r13 = { {0} };
+	union dlb_aqed_pipe_fl_push_ptr r14 = { {0} };
+	union dlb_aqed_pipe_fl_pop_ptr r15 = { {0} };
+	union dlb_aqed_pipe_qid_fid_lim r16 = { {0} };
+	union dlb_ro_pipe_qid2grpslt r17 = { {0} };
+	struct dlb_sn_group *sn_group;
+	unsigned int offs;
+
+	/* QID write permissions are turned on when the domain is started */
+	r3.field.vasqid_v = 0;
+
+	offs = domain->id.phys_id * DLB_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_VASQID_V(offs), r3.val);
+
+	/* Unordered QIDs get 4K inflights; ordered QIDs get as many inflights
+	 * as they have sequence numbers.
+	 */
+	r4.field.limit = args->num_qid_inflights;
+
+	DLB_CSR_WR(hw, DLB_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val);
+
+	r5.field.limit = queue->aqed_freelist.bound -
+			 queue->aqed_freelist.base;
+
+	if (r5.field.limit > DLB_MAX_NUM_AQOS_ENTRIES)
+		r5.field.limit = DLB_MAX_NUM_AQOS_ENTRIES;
+
+	/* AQOS */
+	DLB_CSR_WR(hw, DLB_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id), r5.val);
+
+	r6.field.freelist_disable = 0;
+	r6.field.limit = queue->aqed_freelist.bound - 1;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_LIM(queue->id.phys_id), r6.val);
+
+	r7.field.base = queue->aqed_freelist.base;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_BASE(queue->id.phys_id), r7.val);
+
+	r14.field.push_ptr = r7.field.base;
+	r14.field.generation = 1;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_PUSH_PTR(queue->id.phys_id), r14.val);
+
+	r15.field.pop_ptr = r7.field.base;
+	r15.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_POP_PTR(queue->id.phys_id), r15.val);
+
+	/* Configure SNs */
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	r11.field.mode = sn_group->mode;
+	r11.field.slot = queue->sn_slot;
+	r11.field.grp  = sn_group->id;
+
+	DLB_CSR_WR(hw, DLB_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val);
+
+	/* This register limits the number of inflight flows a queue can have
+	 * at one time.  It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue doesn't use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	r16.field.qid_fid_limit = 512;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_QID_FID_LIM(queue->id.phys_id), r16.val);
+
+	r17.field.group = sn_group->id;
+	r17.field.slot = queue->sn_slot;
+
+	DLB_CSR_WR(hw, DLB_RO_PIPE_QID2GRPSLT(queue->id.phys_id), r17.val);
+
+	r12.field.sn_cfg_v = (args->num_sequence_numbers != 0);
+	r12.field.fid_cfg_v = (args->num_atomic_inflights != 0);
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val);
+
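+	/* For VF-owned queues, program the virtual<->physical QID mappings in
+	 * both directions so the VF can address the queue by its virtual ID.
+	 */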
+	if (vf_request) {
+		unsigned int offs;
+
+		r0.field.vqid_v = 1;
+
+		offs = vf_id * DLB_MAX_NUM_LDB_QUEUES + queue->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_LDB_VQID_V(offs), r0.val);
+
+		r1.field.qid = queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_LDB_VQID2QID(offs), r1.val);
+
+		r2.field.vqid = queue->id.virt_id;
+
+		offs = vf_id * DLB_MAX_NUM_LDB_QUEUES + queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_LDB_QID2VQID(offs), r2.val);
+	}
+
+	r13.field.qid_v = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_QID_V(queue->id.phys_id), r13.val);
+}
+
+static void dlb_configure_dir_queue(struct dlb_hw *hw,
+				    struct dlb_domain *domain,
+				    struct dlb_dir_pq_pair *queue,
+				    bool vf_request,
+				    unsigned int vf_id)
+{
+	union dlb_sys_dir_vasqid_v r0 = { {0} };
+	unsigned int offs;
+
+	/* QID write permissions are turned on when the domain is started */
+	r0.field.vasqid_v = 0;
+
+	offs = (domain->id.phys_id * DLB_MAX_NUM_DIR_PORTS) + queue->id.phys_id;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_VASQID_V(offs), r0.val);
+
+	if (vf_request) {
+		union dlb_sys_vf_dir_vqid_v   r1 = { {0} };
+		union dlb_sys_vf_dir_vqid2qid r2 = { {0} };
+
+		r1.field.vqid_v = 1;
+
+		offs = (vf_id * DLB_MAX_NUM_DIR_PORTS) + queue->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_DIR_VQID_V(offs), r1.val);
+
+		r2.field.qid = queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_DIR_VQID2QID(offs), r2.val);
+	} else {
+		union dlb_sys_dir_qid_v r3 = { {0} };
+
+		r3.field.qid_v = 1;
+
+		DLB_CSR_WR(hw, DLB_SYS_DIR_QID_V(queue->id.phys_id), r3.val);
+	}
+
+	queue->queue_configured = true;
+}
+
+static int
+dlb_verify_create_ldb_port_args(struct dlb_hw *hw,
+				u32 domain_id,
+				u64 pop_count_dma_base,
+				u64 cq_dma_base,
+				struct dlb_create_ldb_port_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_credit_pool *pool;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	if (dlb_list_empty(&domain->avail_ldb_ports)) {
+		resp->status = DLB_ST_LDB_PORTS_UNAVAILABLE;
+		return -1;
+	}
+
+	/* If the scheduling domain has no LDB queues, we configure the
+	 * hardware to not supply the port with any LDB credits. In that
+	 * case, ignore the LDB credit arguments.
+	 */
+	if (!dlb_list_empty(&domain->used_ldb_queues) ||
+	    !dlb_list_empty(&domain->avail_ldb_queues)) {
+		pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+					       vf_request,
+					       domain);
+
+		if (!pool || !pool->configured ||
+		    pool->domain_id.phys_id != domain->id.phys_id) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_POOL_ID;
+			return -1;
+		}
+
+		if (args->ldb_credit_high_watermark > pool->avail_credits) {
+			resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
+			return -1;
+		}
+
+		if (args->ldb_credit_low_watermark >=
+		    args->ldb_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK;
+			return -1;
+		}
+
+		if (args->ldb_credit_quantum >=
+		    args->ldb_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_QUANTUM;
+			return -1;
+		}
+
+		if (args->ldb_credit_quantum > DLB_MAX_PORT_CREDIT_QUANTUM) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_QUANTUM;
+			return -1;
+		}
+	}
+
+	/* Likewise, if the scheduling domain has no DIR queues, we configure
+	 * the hardware to not supply the port with any DIR credits. In that
+	 * case, ignore the DIR credit arguments.
+	 */
+	if (!dlb_list_empty(&domain->used_dir_pq_pairs) ||
+	    !dlb_list_empty(&domain->avail_dir_pq_pairs)) {
+		pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+					       vf_request,
+					       domain);
+
+		if (!pool || !pool->configured ||
+		    pool->domain_id.phys_id != domain->id.phys_id) {
+			resp->status = DLB_ST_INVALID_DIR_CREDIT_POOL_ID;
+			return -1;
+		}
+
+		if (args->dir_credit_high_watermark > pool->avail_credits) {
+			resp->status = DLB_ST_DIR_CREDITS_UNAVAILABLE;
+			return -1;
+		}
+
+		if (args->dir_credit_low_watermark >=
+		    args->dir_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK;
+			return -1;
+		}
+
+		if (args->dir_credit_quantum >=
+		    args->dir_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_DIR_CREDIT_QUANTUM;
+			return -1;
+		}
+
+		if (args->dir_credit_quantum > DLB_MAX_PORT_CREDIT_QUANTUM) {
+			resp->status = DLB_ST_INVALID_DIR_CREDIT_QUANTUM;
+			return -1;
+		}
+	}
+
+	/* Check cache-line alignment */
+	if ((pop_count_dma_base & 0x3F) != 0) {
+		resp->status = DLB_ST_INVALID_POP_COUNT_VIRT_ADDR;
+		return -1;
+	}
+
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB_ST_INVALID_CQ_VIRT_ADDR;
+		return -1;
+	}
+
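+	/* The CQ depth must be a power of two between 1 and 1024 */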
+	if (args->cq_depth != 1 &&
+	    args->cq_depth != 2 &&
+	    args->cq_depth != 4 &&
+	    args->cq_depth != 8 &&
+	    args->cq_depth != 16 &&
+	    args->cq_depth != 32 &&
+	    args->cq_depth != 64 &&
+	    args->cq_depth != 128 &&
+	    args->cq_depth != 256 &&
+	    args->cq_depth != 512 &&
+	    args->cq_depth != 1024) {
+		resp->status = DLB_ST_INVALID_CQ_DEPTH;
+		return -1;
+	}
+
+	/* The history list size must be >= 1 */
+	if (!args->cq_history_list_size) {
+		resp->status = DLB_ST_INVALID_HIST_LIST_DEPTH;
+		return -1;
+	}
+
+	if (args->cq_history_list_size > domain->avail_hist_list_entries) {
+		resp->status = DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_verify_create_dir_port_args(struct dlb_hw *hw,
+				u32 domain_id,
+				u64 pop_count_dma_base,
+				u64 cq_dma_base,
+				struct dlb_create_dir_port_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_credit_pool *pool;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	/* If the user claims the queue is already configured, validate the
+	 * queue ID, that it belongs to this domain, and that it has in fact
+	 * been configured.
+	 */
+	if (args->queue_id != -1) {
+		struct dlb_dir_pq_pair *queue;
+
+		queue = dlb_get_domain_used_dir_pq(args->queue_id,
+						   vf_request,
+						   domain);
+
+		if (!queue || queue->domain_id.phys_id != domain->id.phys_id ||
+		    !queue->queue_configured) {
+			resp->status = DLB_ST_INVALID_DIR_QUEUE_ID;
+			return -1;
+		}
+	}
+
+	/* If the port's queue is not configured, validate that a free
+	 * port-queue pair is available.
+	 */
+	if (args->queue_id == -1 &&
+	    dlb_list_empty(&domain->avail_dir_pq_pairs)) {
+		resp->status = DLB_ST_DIR_PORTS_UNAVAILABLE;
+		return -1;
+	}
+
+	/* If the scheduling domain has no LDB queues, we configure the
+	 * hardware to not supply the port with any LDB credits. In that
+	 * case, ignore the LDB credit arguments.
+	 */
+	if (!dlb_list_empty(&domain->used_ldb_queues) ||
+	    !dlb_list_empty(&domain->avail_ldb_queues)) {
+		pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+					       vf_request,
+					       domain);
+
+		if (!pool || !pool->configured ||
+		    pool->domain_id.phys_id != domain->id.phys_id) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_POOL_ID;
+			return -1;
+		}
+
+		if (args->ldb_credit_high_watermark > pool->avail_credits) {
+			resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
+			return -1;
+		}
+
+		if (args->ldb_credit_low_watermark >=
+		    args->ldb_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK;
+			return -1;
+		}
+
+		if (args->ldb_credit_quantum >=
+		    args->ldb_credit_high_watermark) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_QUANTUM;
+			return -1;
+		}
+
+		if (args->ldb_credit_quantum > DLB_MAX_PORT_CREDIT_QUANTUM) {
+			resp->status = DLB_ST_INVALID_LDB_CREDIT_QUANTUM;
+			return -1;
+		}
+	}
+
+	pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+				       vf_request,
+				       domain);
+
+	if (!pool || !pool->configured ||
+	    pool->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB_ST_INVALID_DIR_CREDIT_POOL_ID;
+		return -1;
+	}
+
+	if (args->dir_credit_high_watermark > pool->avail_credits) {
+		resp->status = DLB_ST_DIR_CREDITS_UNAVAILABLE;
+		return -1;
+	}
+
+	if (args->dir_credit_low_watermark >= args->dir_credit_high_watermark) {
+		resp->status = DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK;
+		return -1;
+	}
+
+	if (args->dir_credit_quantum >= args->dir_credit_high_watermark) {
+		resp->status = DLB_ST_INVALID_DIR_CREDIT_QUANTUM;
+		return -1;
+	}
+
+	if (args->dir_credit_quantum > DLB_MAX_PORT_CREDIT_QUANTUM) {
+		resp->status = DLB_ST_INVALID_DIR_CREDIT_QUANTUM;
+		return -1;
+	}
+
+	/* Check cache-line alignment */
+	if ((pop_count_dma_base & 0x3F) != 0) {
+		resp->status = DLB_ST_INVALID_POP_COUNT_VIRT_ADDR;
+		return -1;
+	}
+
+	if ((cq_dma_base & 0x3F) != 0) {
+		resp->status = DLB_ST_INVALID_CQ_VIRT_ADDR;
+		return -1;
+	}
+
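+	/* The CQ depth must be a power of two between 8 and 1024 */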
+	if (args->cq_depth != 8 &&
+	    args->cq_depth != 16 &&
+	    args->cq_depth != 32 &&
+	    args->cq_depth != 64 &&
+	    args->cq_depth != 128 &&
+	    args->cq_depth != 256 &&
+	    args->cq_depth != 512 &&
+	    args->cq_depth != 1024) {
+		resp->status = DLB_ST_INVALID_CQ_DEPTH;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int dlb_verify_start_domain_args(struct dlb_hw *hw,
+					u32 domain_id,
+					struct dlb_cmd_response *resp,
+					bool vf_request,
+					unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int dlb_verify_map_qid_args(struct dlb_hw *hw,
+				   u32 domain_id,
+				   struct dlb_map_qid_args *args,
+				   struct dlb_cmd_response *resp,
+				   bool vf_request,
+				   unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_ldb_port *port;
+	struct dlb_ldb_queue *queue;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	if (args->priority >= DLB_QID_PRIORITIES) {
+		resp->status = DLB_ST_INVALID_PRIORITY;
+		return -1;
+	}
+
+	queue = dlb_get_domain_ldb_queue(args->qid, vf_request, domain);
+
+	if (!queue || !queue->configured) {
+		resp->status = DLB_ST_INVALID_QID;
+		return -1;
+	}
+
+	if (queue->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB_ST_INVALID_QID;
+		return -1;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	return 0;
+}
+
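+/* Find the first slot in the port's QID map whose state matches 'state'.
+ * Returns true if one was found; '*slot' receives the matching index, or
+ * DLB_MAX_NUM_QIDS_PER_LDB_CQ if no slot matched.
+ */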
+static bool dlb_port_find_slot(struct dlb_ldb_port *port,
+			       enum dlb_qid_map_state state,
+			       int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool dlb_port_find_slot_queue(struct dlb_ldb_port *port,
+				     enum dlb_qid_map_state state,
+				     struct dlb_ldb_queue *queue,
+				     int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		if (port->qid_map[i].state == state &&
+		    port->qid_map[i].qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
+static bool
+dlb_port_find_slot_with_pending_map_queue(struct dlb_ldb_port *port,
+					  struct dlb_ldb_queue *queue,
+					  int *slot)
+{
+	int i;
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb_ldb_port_qid_map *map = &port->qid_map[i];
+
+		if (map->state == DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP &&
+		    map->pending_qid == queue->id.phys_id)
+			break;
+	}
+
+	*slot = i;
+
+	return (i < DLB_MAX_NUM_QIDS_PER_LDB_CQ);
+}
+
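+/* Legal QID map slot transitions, enforced below (DLB_QUEUE_ prefixes
+ * omitted for brevity):
+ *
+ *   UNMAPPED          -> MAPPED, MAP_IN_PROGRESS
+ *   MAPPED            -> UNMAPPED, UNMAP_IN_PROGRESS, MAPPED (priority change)
+ *   MAP_IN_PROGRESS   -> UNMAPPED, MAPPED
+ *   UNMAP_IN_PROGRESS -> UNMAPPED, MAPPED, UNMAP_IN_PROGRESS_PENDING_MAP
+ *   UNMAP_IN_PROGRESS_PENDING_MAP -> UNMAP_IN_PROGRESS, UNMAPPED
+ *
+ * Any other transition is an internal error.
+ */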
+static int dlb_port_slot_state_transition(struct dlb_hw *hw,
+					  struct dlb_ldb_port *port,
+					  struct dlb_ldb_queue *queue,
+					  int slot,
+					  enum dlb_qid_map_state new_state)
+{
+	enum dlb_qid_map_state curr_state = port->qid_map[slot].state;
+	struct dlb_domain *domain;
+
+	domain = dlb_get_domain_from_id(hw, port->domain_id.phys_id, false, 0);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: unable to find domain %d\n",
+			   __func__, port->domain_id.phys_id);
+		return -EFAULT;
+	}
+
+	switch (curr_state) {
+	case DLB_QUEUE_UNMAPPED:
+		switch (new_state) {
+		case DLB_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			break;
+		case DLB_QUEUE_MAP_IN_PROGRESS:
+			queue->num_pending_additions++;
+			domain->num_pending_additions++;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB_QUEUE_MAPPED:
+		switch (new_state) {
+		case DLB_QUEUE_UNMAPPED:
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB_QUEUE_UNMAP_IN_PROGRESS:
+			port->num_pending_removals++;
+			domain->num_pending_removals++;
+			break;
+		case DLB_QUEUE_MAPPED:
+			/* Priority change, nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB_QUEUE_MAP_IN_PROGRESS:
+		switch (new_state) {
+		case DLB_QUEUE_UNMAPPED:
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		case DLB_QUEUE_MAPPED:
+			queue->num_mappings++;
+			port->num_mappings++;
+			queue->num_pending_additions--;
+			domain->num_pending_additions--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB_QUEUE_UNMAP_IN_PROGRESS:
+		switch (new_state) {
+		case DLB_QUEUE_UNMAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			queue->num_mappings--;
+			port->num_mappings--;
+			break;
+		case DLB_QUEUE_MAPPED:
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		case DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP:
+			/* Nothing to update */
+			break;
+		default:
+			goto error;
+		}
+		break;
+	case DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP:
+		switch (new_state) {
+		case DLB_QUEUE_UNMAP_IN_PROGRESS:
+			/* Nothing to update */
+			break;
+		case DLB_QUEUE_UNMAPPED:
+			/* An UNMAP_IN_PROGRESS_PENDING_MAP slot briefly
+			 * becomes UNMAPPED before it transitions to
+			 * MAP_IN_PROGRESS.
+			 */
+			queue->num_mappings--;
+			port->num_mappings--;
+			port->num_pending_removals--;
+			domain->num_pending_removals--;
+			break;
+		default:
+			goto error;
+		}
+		break;
+	default:
+		goto error;
+	}
+
+	port->qid_map[slot].state = new_state;
+
+	DLB_HW_INFO(hw,
+		    "[%s()] queue %d -> port %d state transition (%d -> %d)\n",
+		    __func__, queue->id.phys_id, port->id.phys_id, curr_state,
+		    new_state);
+	return 0;
+
+error:
+	DLB_HW_ERR(hw,
+		   "[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\n",
+		   __func__, queue->id.phys_id, port->id.phys_id, curr_state,
+		   new_state);
+	return -EFAULT;
+}
+
+static int dlb_verify_map_qid_slot_available(struct dlb_ldb_port *port,
+					     struct dlb_ldb_queue *queue,
+					     struct dlb_cmd_response *resp)
+{
+	enum dlb_qid_map_state state;
+	int i;
+
+	/* Unused slot available? */
+	if (port->num_mappings < DLB_MAX_NUM_QIDS_PER_LDB_CQ)
+		return 0;
+
+	/* If the queue is already mapped (from the application's perspective),
+	 * this is simply a priority update.
+	 */
+	state = DLB_QUEUE_MAPPED;
+	if (dlb_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	if (dlb_port_find_slot_queue(port, state, queue, &i))
+		return 0;
+
+	if (dlb_port_find_slot_with_pending_map_queue(port, queue, &i))
+		return 0;
+
+	/* If the slot contains an unmap in progress, it's considered
+	 * available.
+	 */
+	state = DLB_QUEUE_UNMAP_IN_PROGRESS;
+	if (dlb_port_find_slot(port, state, &i))
+		return 0;
+
+	state = DLB_QUEUE_UNMAPPED;
+	if (dlb_port_find_slot(port, state, &i))
+		return 0;
+
+	resp->status = DLB_ST_NO_QID_SLOTS_AVAILABLE;
+	return -EINVAL;
+}
+
+static int dlb_verify_unmap_qid_args(struct dlb_hw *hw,
+				     u32 domain_id,
+				     struct dlb_unmap_qid_args *args,
+				     struct dlb_cmd_response *resp,
+				     bool vf_request,
+				     unsigned int vf_id)
+{
+	enum dlb_qid_map_state state;
+	struct dlb_domain *domain;
+	struct dlb_ldb_port *port;
+	struct dlb_ldb_queue *queue;
+	int slot;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	if (port->domain_id.phys_id != domain->id.phys_id) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	queue = dlb_get_domain_ldb_queue(args->qid, vf_request, domain);
+
+	if (!queue || !queue->configured) {
+		DLB_HW_ERR(hw, "[%s()] Can't unmap unconfigured queue %d\n",
+			   __func__, args->qid);
+		resp->status = DLB_ST_INVALID_QID;
+		return -1;
+	}
+
+	/* Verify that the port has the queue mapped. From the application's
+	 * perspective a queue is mapped if it is actually mapped, the map is
+	 * in progress, or the map is blocked pending an unmap.
+	 */
+	state = DLB_QUEUE_MAPPED;
+	if (dlb_port_find_slot_queue(port, state, queue, &slot))
+		return 0;
+
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	if (dlb_port_find_slot_queue(port, state, queue, &slot))
+		return 0;
+
+	if (dlb_port_find_slot_with_pending_map_queue(port, queue, &slot))
+		return 0;
+
+	resp->status = DLB_ST_INVALID_QID;
+	return -1;
+}
+
+static int
+dlb_verify_enable_ldb_port_args(struct dlb_hw *hw,
+				u32 domain_id,
+				struct dlb_enable_ldb_port_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_ldb_port *port;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_verify_enable_dir_port_args(struct dlb_hw *hw,
+				u32 domain_id,
+				struct dlb_enable_dir_port_args *args,
+				struct dlb_cmd_response *resp,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_dir_pq_pair *port;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_dir_pq(id, vf_request, domain);
+
+	if (!port || !port->port_configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_verify_disable_ldb_port_args(struct dlb_hw *hw,
+				 u32 domain_id,
+				 struct dlb_disable_ldb_port_args *args,
+				 struct dlb_cmd_response *resp,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_ldb_port *port;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+
+	if (!port || !port->configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_verify_disable_dir_port_args(struct dlb_hw *hw,
+				 u32 domain_id,
+				 struct dlb_disable_dir_port_args *args,
+				 struct dlb_cmd_response *resp,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_dir_pq_pair *port;
+	int id;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_dir_pq(id, vf_request, domain);
+
+	if (!port || !port->port_configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+dlb_domain_attach_resources(struct dlb_hw *hw,
+			    struct dlb_function_resources *rsrcs,
+			    struct dlb_domain *domain,
+			    struct dlb_create_sched_domain_args *args,
+			    struct dlb_cmd_response *resp)
+{
+	int ret;
+
+	ret = dlb_attach_ldb_queues(hw,
+				    rsrcs,
+				    domain,
+				    args->num_ldb_queues,
+				    resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_ldb_ports(hw,
+				   rsrcs,
+				   domain,
+				   args->num_ldb_ports,
+				   resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_dir_ports(hw,
+				   rsrcs,
+				   domain,
+				   args->num_dir_ports,
+				   resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_ldb_credits(rsrcs,
+				     domain,
+				     args->num_ldb_credits,
+				     resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_dir_credits(rsrcs,
+				     domain,
+				     args->num_dir_credits,
+				     resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_ldb_credit_pools(hw,
+					  rsrcs,
+					  domain,
+					  args->num_ldb_credit_pools,
+					  resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_dir_credit_pools(hw,
+					  rsrcs,
+					  domain,
+					  args->num_dir_credit_pools,
+					  resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_domain_hist_list_entries(rsrcs,
+						  domain,
+						  args->num_hist_list_entries,
+						  resp);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_attach_atomic_inflights(rsrcs,
+					  domain,
+					  args->num_atomic_inflights,
+					  resp);
+	if (ret < 0)
+		return ret;
+
+	domain->configured = true;
+
+	domain->started = false;
+
+	rsrcs->num_avail_domains--;
+
+	return 0;
+}
+
+static int
+dlb_ldb_queue_attach_to_sn_group(struct dlb_hw *hw,
+				 struct dlb_ldb_queue *queue,
+				 struct dlb_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb_sn_group_full(group)) {
+			slot = dlb_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no sequence number slots available\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb_ldb_queue_attach_resources(struct dlb_hw *hw,
+			       struct dlb_domain *domain,
+			       struct dlb_ldb_queue *queue,
+			       struct dlb_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
+	queue->aqed_freelist.base = domain->aqed_freelist.base +
+				    domain->aqed_freelist.offset;
+	queue->aqed_freelist.bound = queue->aqed_freelist.base +
+				     args->num_atomic_inflights;
+	domain->aqed_freelist.offset += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static void dlb_ldb_port_cq_enable(struct dlb_hw *hw,
+				   struct dlb_ldb_port *port)
+{
+	union dlb_lsp_cq_ldb_dsbl reg;
+
+	/* Don't re-enable the port if a removal is pending. The caller should
+	 * mark this port as enabled (if it isn't already), and the port will
+	 * be re-enabled in hardware once the removal completes.
+	 */
+	if (port->num_pending_removals)
+		return;
+
+	reg.field.disabled = 0;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_ldb_port_cq_disable(struct dlb_hw *hw,
+				    struct dlb_ldb_port *port)
+{
+	union dlb_lsp_cq_ldb_dsbl reg;
+
+	reg.field.disabled = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ_LDB_DSBL(port->id.phys_id), reg.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_dir_port_cq_enable(struct dlb_hw *hw,
+				   struct dlb_dir_pq_pair *port)
+{
+	union dlb_lsp_cq_dir_dsbl reg;
+
+	reg.field.disabled = 0;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_dir_port_cq_disable(struct dlb_hw *hw,
+				    struct dlb_dir_pq_pair *port)
+{
+	union dlb_lsp_cq_dir_dsbl reg;
+
+	reg.field.disabled = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ_DIR_DSBL(port->id.phys_id), reg.val);
+
+	dlb_flush_csr(hw);
+}
+
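These four enable/disable helpers are the building blocks of the quiesce pattern used by the dynamic mapping code further below: disable the CQ, act while the hardware cannot schedule to it, then re-enable. A sketch of that pattern (the placeholder comment stands in for the caller's critical section):

/* Sketch: quiesce an LDB CQ around a critical section. Mirrors the
 * usage in dlb_ldb_port_map_qid_dynamic() below.
 */
static void sketch_quiesce_ldb_cq(struct dlb_hw *hw, struct dlb_ldb_port *port)
{
	if (port->enabled)
		dlb_ldb_port_cq_disable(hw, port);

	/* ... act while no further QEs can be scheduled to this CQ ... */

	if (port->enabled)
		dlb_ldb_port_cq_enable(hw, port);
}
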
+static int dlb_ldb_port_configure_pp(struct dlb_hw *hw,
+				     struct dlb_domain *domain,
+				     struct dlb_ldb_port *port,
+				     struct dlb_create_ldb_port_args *args,
+				     bool vf_request,
+				     unsigned int vf_id)
+{
+	union dlb_sys_ldb_pp2ldbpool r0 = { {0} };
+	union dlb_sys_ldb_pp2dirpool r1 = { {0} };
+	union dlb_sys_ldb_pp2vf_pf r2 = { {0} };
+	union dlb_sys_ldb_pp2vas r3 = { {0} };
+	union dlb_sys_ldb_pp_v r4 = { {0} };
+	union dlb_sys_ldb_pp2vpp r5 = { {0} };
+	union dlb_chp_ldb_pp_ldb_crd_hwm r6 = { {0} };
+	union dlb_chp_ldb_pp_dir_crd_hwm r7 = { {0} };
+	union dlb_chp_ldb_pp_ldb_crd_lwm r8 = { {0} };
+	union dlb_chp_ldb_pp_dir_crd_lwm r9 = { {0} };
+	union dlb_chp_ldb_pp_ldb_min_crd_qnt r10 = { {0} };
+	union dlb_chp_ldb_pp_dir_min_crd_qnt r11 = { {0} };
+	union dlb_chp_ldb_pp_ldb_crd_cnt r12 = { {0} };
+	union dlb_chp_ldb_pp_dir_crd_cnt r13 = { {0} };
+	union dlb_chp_ldb_ldb_pp2pool r14 = { {0} };
+	union dlb_chp_ldb_dir_pp2pool r15 = { {0} };
+	union dlb_chp_ldb_pp_crd_req_state r16 = { {0} };
+	union dlb_chp_ldb_pp_ldb_push_ptr r17 = { {0} };
+	union dlb_chp_ldb_pp_dir_push_ptr r18 = { {0} };
+
+	struct dlb_credit_pool *ldb_pool = NULL;
+	struct dlb_credit_pool *dir_pool = NULL;
+	unsigned int offs;
+
+	if (port->ldb_pool_used) {
+		ldb_pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!ldb_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+	}
+
+	if (port->dir_pool_used) {
+		dir_pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!dir_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+	}
+
+	r0.field.ldbpool = (port->ldb_pool_used) ? ldb_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP2LDBPOOL(port->id.phys_id), r0.val);
+
+	r1.field.dirpool = (port->dir_pool_used) ? dir_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP2DIRPOOL(port->id.phys_id), r1.val);
+
+	r2.field.vf = vf_id;
+	r2.field.is_pf = !vf_request;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP2VF_PF(port->id.phys_id), r2.val);
+
+	r3.field.vas = domain->id.phys_id;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP2VAS(port->id.phys_id), r3.val);
+
+	r5.field.vpp = port->id.virt_id;
+
+	offs = (vf_id * DLB_MAX_NUM_LDB_PORTS) + port->id.phys_id;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP2VPP(offs), r5.val);
+
+	r6.field.hwm = args->ldb_credit_high_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_LDB_CRD_HWM(port->id.phys_id), r6.val);
+
+	r7.field.hwm = args->dir_credit_high_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_DIR_CRD_HWM(port->id.phys_id), r7.val);
+
+	r8.field.lwm = args->ldb_credit_low_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_LDB_CRD_LWM(port->id.phys_id), r8.val);
+
+	r9.field.lwm = args->dir_credit_low_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_DIR_CRD_LWM(port->id.phys_id), r9.val);
+
+	r10.field.quanta = args->ldb_credit_quantum;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT(port->id.phys_id),
+		   r10.val);
+
+	r11.field.quanta = args->dir_credit_quantum;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT(port->id.phys_id),
+		   r11.val);
+
+	r12.field.count = args->ldb_credit_high_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_LDB_CRD_CNT(port->id.phys_id), r12.val);
+
+	r13.field.count = args->dir_credit_high_watermark;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_DIR_CRD_CNT(port->id.phys_id), r13.val);
+
+	r14.field.pool = (port->ldb_pool_used) ? ldb_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_LDB_PP2POOL(port->id.phys_id), r14.val);
+
+	r15.field.pool = (port->dir_pool_used) ? dir_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_DIR_PP2POOL(port->id.phys_id), r15.val);
+
+	r16.field.no_pp_credit_update = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_CRD_REQ_STATE(port->id.phys_id), r16.val);
+
+	r17.field.push_pointer = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_LDB_PUSH_PTR(port->id.phys_id), r17.val);
+
+	r18.field.push_pointer = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_PP_DIR_PUSH_PTR(port->id.phys_id), r18.val);
+
+	if (vf_request) {
+		union dlb_sys_vf_ldb_vpp2pp r19 = { {0} };
+		union dlb_sys_vf_ldb_vpp_v r20 = { {0} };
+
+		r19.field.pp = port->id.phys_id;
+
+		offs = vf_id * DLB_MAX_NUM_LDB_PORTS + port->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_LDB_VPP2PP(offs), r19.val);
+
+		r20.field.vpp_v = 1;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_LDB_VPP_V(offs), r20.val);
+	}
+
+	r4.field.pp_v = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP_V(port->id.phys_id),
+		   r4.val);
+
+	return 0;
+}
+
+static int dlb_ldb_port_configure_cq(struct dlb_hw *hw,
+				     struct dlb_ldb_port *port,
+				     u64 pop_count_dma_base,
+				     u64 cq_dma_base,
+				     struct dlb_create_ldb_port_args *args,
+				     bool vf_request,
+				     unsigned int vf_id)
+{
+	int i;
+
+	union dlb_sys_ldb_cq_addr_l r0 = { {0} };
+	union dlb_sys_ldb_cq_addr_u r1 = { {0} };
+	union dlb_sys_ldb_cq2vf_pf r2 = { {0} };
+	union dlb_chp_ldb_cq_tkn_depth_sel r3 = { {0} };
+	union dlb_chp_hist_list_lim r4 = { {0} };
+	union dlb_chp_hist_list_base r5 = { {0} };
+	union dlb_lsp_cq_ldb_infl_lim r6 = { {0} };
+	union dlb_lsp_cq2priov r7 = { {0} };
+	union dlb_chp_hist_list_push_ptr r8 = { {0} };
+	union dlb_chp_hist_list_pop_ptr r9 = { {0} };
+	union dlb_lsp_cq_ldb_tkn_depth_sel r10 = { {0} };
+	union dlb_sys_ldb_pp_addr_l r11 = { {0} };
+	union dlb_sys_ldb_pp_addr_u r12 = { {0} };
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	r0.field.addr_l = cq_dma_base >> 6;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		   r0.val);
+
+	r1.field.addr_u = cq_dma_base >> 32;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		   r1.val);
+
+	r2.field.vf = vf_id;
+	r2.field.is_pf = !vf_request;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ2VF_PF(port->id.phys_id),
+		   r2.val);
+
+	if (args->cq_depth <= 8) {
+		r3.field.token_depth_select = 1;
+	} else if (args->cq_depth == 16) {
+		r3.field.token_depth_select = 2;
+	} else if (args->cq_depth == 32) {
+		r3.field.token_depth_select = 3;
+	} else if (args->cq_depth == 64) {
+		r3.field.token_depth_select = 4;
+	} else if (args->cq_depth == 128) {
+		r3.field.token_depth_select = 5;
+	} else if (args->cq_depth == 256) {
+		r3.field.token_depth_select = 6;
+	} else if (args->cq_depth == 512) {
+		r3.field.token_depth_select = 7;
+	} else if (args->cq_depth == 1024) {
+		r3.field.token_depth_select = 8;
+	} else {
+		DLB_HW_ERR(hw, "[%s():%d] Internal error: invalid CQ depth\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
+		   r3.val);
+
+	r10.field.token_depth_select = r3.field.token_depth_select;
+	r10.field.ignore_depth = 0;
+	/* TDT algorithm: DLB must be able to write CQs with depth < 4 */
+	r10.field.enab_shallow_cq = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
+		   r10.val);
+
+	/* To support CQs with depth less than 8, program the token count
+	 * register with a non-zero initial value. Operations such as domain
+	 * reset must take this initial value into account when quiescing the
+	 * CQ.
+	 */
+	port->init_tkn_cnt = 0;
+
+	if (args->cq_depth < 8) {
+		union dlb_lsp_cq_ldb_tkn_cnt r13 = { {0} };
+
+		port->init_tkn_cnt = 8 - args->cq_depth;
+
+		r13.field.token_count = port->init_tkn_cnt;
+
+		DLB_CSR_WR(hw,
+			   DLB_LSP_CQ_LDB_TKN_CNT(port->id.phys_id),
+			   r13.val);
+	}
+
+	r4.field.limit = port->hist_list_entry_limit - 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_HIST_LIST_LIM(port->id.phys_id), r4.val);
+
+	r5.field.base = port->hist_list_entry_base;
+
+	DLB_CSR_WR(hw, DLB_CHP_HIST_LIST_BASE(port->id.phys_id), r5.val);
+
+	r8.field.push_ptr = r5.field.base;
+	r8.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id), r8.val);
+
+	r9.field.pop_ptr = r5.field.base;
+	r9.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_CHP_HIST_LIST_POP_PTR(port->id.phys_id), r9.val);
+
+	/* The inflight limit sets a cap on the number of QEs for which this CQ
+	 * can owe completions at one time.
+	 */
+	r6.field.limit = args->cq_history_list_size;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ_LDB_INFL_LIM(port->id.phys_id), r6.val);
+
+	/* Disable the port's QID mappings */
+	r7.field.v = 0;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ2PRIOV(port->id.phys_id), r7.val);
+
+	/* Two cache lines (128B) are dedicated for the port's pop counts */
+	r11.field.addr_l = pop_count_dma_base >> 7;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP_ADDR_L(port->id.phys_id), r11.val);
+
+	r12.field.addr_u = pop_count_dma_base >> 32;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_PP_ADDR_U(port->id.phys_id), r12.val);
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+		port->qid_map[i].state = DLB_QUEUE_UNMAPPED;
+
+	return 0;
+}
+
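The if/else ladder above encodes token_depth_select as log2(cq_depth) - 2 for power-of-two depths in [8, 1024], with LDB ports folding depths below 8 into the depth-8 encoding. A compact equivalent, shown only to document the relationship (not proposed as a replacement):

#include <stdint.h>

/* Sketch: token_depth_select = log2(depth) - 2, valid for power-of-two
 * depths 8..1024; smaller LDB depths use the depth-8 encoding (see the
 * enab_shallow_cq handling above).
 */
static int sketch_token_depth_select(uint32_t cq_depth)
{
	uint32_t depth = (cq_depth < 8) ? 8 : cq_depth;
	int sel = 0;

	if (depth > 1024 || (depth & (depth - 1)))
		return -1; /* unsupported depth */

	while (depth > 4) {
		depth >>= 1;
		sel++;
	}

	return sel;
}
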
+static void dlb_update_ldb_arb_threshold(struct dlb_hw *hw)
+{
+	union dlb_lsp_ctrl_config_0 r0 = { {0} };
+
+	/* From the hardware spec:
+	 * "The optimal value for ldb_arb_threshold is in the region of {8 *
+	 * #CQs}. It is expected therefore that the PF will change this value
+	 * dynamically as the number of active ports changes."
+	 */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CTRL_CONFIG_0);
+
+	r0.field.ldb_arb_threshold = hw->pf.num_enabled_ldb_ports * 8;
+	r0.field.ldb_arb_ignore_empty = 1;
+	r0.field.ldb_arb_mode = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_CTRL_CONFIG_0, r0.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_ldb_pool_update_credit_count(struct dlb_hw *hw,
+					     u32 pool_id,
+					     u32 count)
+{
+	hw->rsrcs.ldb_credit_pools[pool_id].avail_credits -= count;
+}
+
+static void dlb_dir_pool_update_credit_count(struct dlb_hw *hw,
+					     u32 pool_id,
+					     u32 count)
+{
+	hw->rsrcs.dir_credit_pools[pool_id].avail_credits -= count;
+}
+
+static void dlb_ldb_pool_write_credit_count_reg(struct dlb_hw *hw,
+						u32 pool_id)
+{
+	union dlb_chp_ldb_pool_crd_cnt r0 = { {0} };
+	struct dlb_credit_pool *pool;
+
+	pool = &hw->rsrcs.ldb_credit_pools[pool_id];
+
+	r0.field.count = pool->avail_credits;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_POOL_CRD_CNT(pool->id.phys_id),
+		   r0.val);
+}
+
+static void dlb_dir_pool_write_credit_count_reg(struct dlb_hw *hw,
+						u32 pool_id)
+{
+	union dlb_chp_dir_pool_crd_cnt r0 = { {0} };
+	struct dlb_credit_pool *pool;
+
+	pool = &hw->rsrcs.dir_credit_pools[pool_id];
+
+	r0.field.count = pool->avail_credits;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_POOL_CRD_CNT(pool->id.phys_id),
+		   r0.val);
+}
+
+static int dlb_configure_ldb_port(struct dlb_hw *hw,
+				  struct dlb_domain *domain,
+				  struct dlb_ldb_port *port,
+				  u64 pop_count_dma_base,
+				  u64 cq_dma_base,
+				  struct dlb_create_ldb_port_args *args,
+				  bool vf_request,
+				  unsigned int vf_id)
+{
+	struct dlb_credit_pool *ldb_pool, *dir_pool;
+	int ret;
+
+	port->hist_list_entry_base = domain->hist_list_entry_base +
+				     domain->hist_list_entry_offset;
+	port->hist_list_entry_limit = port->hist_list_entry_base +
+				      args->cq_history_list_size;
+
+	domain->hist_list_entry_offset += args->cq_history_list_size;
+	domain->avail_hist_list_entries -= args->cq_history_list_size;
+
+	port->ldb_pool_used = !dlb_list_empty(&domain->used_ldb_queues) ||
+			      !dlb_list_empty(&domain->avail_ldb_queues);
+	port->dir_pool_used = !dlb_list_empty(&domain->used_dir_pq_pairs) ||
+			      !dlb_list_empty(&domain->avail_dir_pq_pairs);
+
+	if (port->ldb_pool_used) {
+		u32 cnt = args->ldb_credit_high_watermark;
+
+		ldb_pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!ldb_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+
+		dlb_ldb_pool_update_credit_count(hw, ldb_pool->id.phys_id, cnt);
+	} else {
+		args->ldb_credit_high_watermark = 0;
+		args->ldb_credit_low_watermark = 0;
+		args->ldb_credit_quantum = 0;
+	}
+
+	if (port->dir_pool_used) {
+		u32 cnt = args->dir_credit_high_watermark;
+
+		dir_pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!dir_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+
+		dlb_dir_pool_update_credit_count(hw, dir_pool->id.phys_id, cnt);
+	} else {
+		args->dir_credit_high_watermark = 0;
+		args->dir_credit_low_watermark = 0;
+		args->dir_credit_quantum = 0;
+	}
+
+	ret = dlb_ldb_port_configure_cq(hw,
+					port,
+					pop_count_dma_base,
+					cq_dma_base,
+					args,
+					vf_request,
+					vf_id);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_ldb_port_configure_pp(hw,
+					domain,
+					port,
+					args,
+					vf_request,
+					vf_id);
+	if (ret < 0)
+		return ret;
+
+	dlb_ldb_port_cq_enable(hw, port);
+
+	port->num_mappings = 0;
+
+	port->enabled = true;
+
+	hw->pf.num_enabled_ldb_ports++;
+
+	dlb_update_ldb_arb_threshold(hw);
+
+	port->configured = true;
+
+	return 0;
+}
+
+static int dlb_dir_port_configure_pp(struct dlb_hw *hw,
+				     struct dlb_domain *domain,
+				     struct dlb_dir_pq_pair *port,
+				     struct dlb_create_dir_port_args *args,
+				     bool vf_request,
+				     unsigned int vf_id)
+{
+	union dlb_sys_dir_pp2ldbpool r0 = { {0} };
+	union dlb_sys_dir_pp2dirpool r1 = { {0} };
+	union dlb_sys_dir_pp2vf_pf r2 = { {0} };
+	union dlb_sys_dir_pp2vas r3 = { {0} };
+	union dlb_sys_dir_pp_v r4 = { {0} };
+	union dlb_sys_dir_pp2vpp r5 = { {0} };
+	union dlb_chp_dir_pp_ldb_crd_hwm r6 = { {0} };
+	union dlb_chp_dir_pp_dir_crd_hwm r7 = { {0} };
+	union dlb_chp_dir_pp_ldb_crd_lwm r8 = { {0} };
+	union dlb_chp_dir_pp_dir_crd_lwm r9 = { {0} };
+	union dlb_chp_dir_pp_ldb_min_crd_qnt r10 = { {0} };
+	union dlb_chp_dir_pp_dir_min_crd_qnt r11 = { {0} };
+	union dlb_chp_dir_pp_ldb_crd_cnt r12 = { {0} };
+	union dlb_chp_dir_pp_dir_crd_cnt r13 = { {0} };
+	union dlb_chp_dir_ldb_pp2pool r14 = { {0} };
+	union dlb_chp_dir_dir_pp2pool r15 = { {0} };
+	union dlb_chp_dir_pp_crd_req_state r16 = { {0} };
+	union dlb_chp_dir_pp_ldb_push_ptr r17 = { {0} };
+	union dlb_chp_dir_pp_dir_push_ptr r18 = { {0} };
+
+	struct dlb_credit_pool *ldb_pool = NULL;
+	struct dlb_credit_pool *dir_pool = NULL;
+
+	if (port->ldb_pool_used) {
+		ldb_pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!ldb_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+	}
+
+	if (port->dir_pool_used) {
+		dir_pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!dir_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+	}
+
+	r0.field.ldbpool = (port->ldb_pool_used) ? ldb_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2LDBPOOL(port->id.phys_id),
+		   r0.val);
+
+	r1.field.dirpool = (port->dir_pool_used) ? dir_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2DIRPOOL(port->id.phys_id),
+		   r1.val);
+
+	r2.field.vf = vf_id;
+	r2.field.is_pf = !vf_request;
+	r2.field.is_hw_dsi = 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2VF_PF(port->id.phys_id),
+		   r2.val);
+
+	r3.field.vas = domain->id.phys_id;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2VAS(port->id.phys_id),
+		   r3.val);
+
+	r5.field.vpp = port->id.virt_id;
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2VPP((vf_id * DLB_MAX_NUM_DIR_PORTS) +
+				      port->id.phys_id),
+		   r5.val);
+
+	r6.field.hwm = args->ldb_credit_high_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_CRD_HWM(port->id.phys_id),
+		   r6.val);
+
+	r7.field.hwm = args->dir_credit_high_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_CRD_HWM(port->id.phys_id),
+		   r7.val);
+
+	r8.field.lwm = args->ldb_credit_low_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_CRD_LWM(port->id.phys_id),
+		   r8.val);
+
+	r9.field.lwm = args->dir_credit_low_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_CRD_LWM(port->id.phys_id),
+		   r9.val);
+
+	r10.field.quanta = args->ldb_credit_quantum;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT(port->id.phys_id),
+		   r10.val);
+
+	r11.field.quanta = args->dir_credit_quantum;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT(port->id.phys_id),
+		   r11.val);
+
+	r12.field.count = args->ldb_credit_high_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_CRD_CNT(port->id.phys_id),
+		   r12.val);
+
+	r13.field.count = args->dir_credit_high_watermark;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_CRD_CNT(port->id.phys_id),
+		   r13.val);
+
+	r14.field.pool = (port->ldb_pool_used) ? ldb_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_LDB_PP2POOL(port->id.phys_id),
+		   r14.val);
+
+	r15.field.pool = (port->dir_pool_used) ? dir_pool->id.phys_id : 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_DIR_PP2POOL(port->id.phys_id),
+		   r15.val);
+
+	r16.field.no_pp_credit_update = 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_CRD_REQ_STATE(port->id.phys_id),
+		   r16.val);
+
+	r17.field.push_pointer = 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_PUSH_PTR(port->id.phys_id),
+		   r17.val);
+
+	r18.field.push_pointer = 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_PUSH_PTR(port->id.phys_id),
+		   r18.val);
+
+	if (vf_request) {
+		union dlb_sys_vf_dir_vpp2pp r19 = { {0} };
+		union dlb_sys_vf_dir_vpp_v r20 = { {0} };
+		unsigned int offs;
+
+		r19.field.pp = port->id.phys_id;
+
+		offs = vf_id * DLB_MAX_NUM_DIR_PORTS + port->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_DIR_VPP2PP(offs), r19.val);
+
+		r20.field.vpp_v = 1;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_DIR_VPP_V(offs), r20.val);
+	}
+
+	r4.field.pp_v = 1;
+	r4.field.mb_dm = 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_PP_V(port->id.phys_id), r4.val);
+
+	return 0;
+}
+
+static int dlb_dir_port_configure_cq(struct dlb_hw *hw,
+				     struct dlb_dir_pq_pair *port,
+				     u64 pop_count_dma_base,
+				     u64 cq_dma_base,
+				     struct dlb_create_dir_port_args *args,
+				     bool vf_request,
+				     unsigned int vf_id)
+{
+	union dlb_sys_dir_cq_addr_l r0 = { {0} };
+	union dlb_sys_dir_cq_addr_u r1 = { {0} };
+	union dlb_sys_dir_cq2vf_pf r2 = { {0} };
+	union dlb_chp_dir_cq_tkn_depth_sel r3 = { {0} };
+	union dlb_lsp_cq_dir_tkn_depth_sel_dsi r4 = { {0} };
+	union dlb_sys_dir_pp_addr_l r5 = { {0} };
+	union dlb_sys_dir_pp_addr_u r6 = { {0} };
+
+	/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */
+	r0.field.addr_l = cq_dma_base >> 6;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_CQ_ADDR_L(port->id.phys_id), r0.val);
+
+	r1.field.addr_u = cq_dma_base >> 32;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_CQ_ADDR_U(port->id.phys_id), r1.val);
+
+	r2.field.vf = vf_id;
+	r2.field.is_pf = !vf_request;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_CQ2VF_PF(port->id.phys_id), r2.val);
+
+	if (args->cq_depth == 8) {
+		r3.field.token_depth_select = 1;
+	} else if (args->cq_depth == 16) {
+		r3.field.token_depth_select = 2;
+	} else if (args->cq_depth == 32) {
+		r3.field.token_depth_select = 3;
+	} else if (args->cq_depth == 64) {
+		r3.field.token_depth_select = 4;
+	} else if (args->cq_depth == 128) {
+		r3.field.token_depth_select = 5;
+	} else if (args->cq_depth == 256) {
+		r3.field.token_depth_select = 6;
+	} else if (args->cq_depth == 512) {
+		r3.field.token_depth_select = 7;
+	} else if (args->cq_depth == 1024) {
+		r3.field.token_depth_select = 8;
+	} else {
+		DLB_HW_ERR(hw, "[%s():%d] Internal error: invalid CQ depth\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
+		   r3.val);
+
+	r4.field.token_depth_select = r3.field.token_depth_select;
+	r4.field.disable_wb_opt = 0;
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
+		   r4.val);
+
+	/* Two cache lines (128B) are dedicated for the port's pop counts */
+	r5.field.addr_l = pop_count_dma_base >> 7;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_PP_ADDR_L(port->id.phys_id), r5.val);
+
+	r6.field.addr_u = pop_count_dma_base >> 32;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_PP_ADDR_U(port->id.phys_id), r6.val);
+
+	return 0;
+}
+
+static int dlb_configure_dir_port(struct dlb_hw *hw,
+				  struct dlb_domain *domain,
+				  struct dlb_dir_pq_pair *port,
+				  u64 pop_count_dma_base,
+				  u64 cq_dma_base,
+				  struct dlb_create_dir_port_args *args,
+				  bool vf_request,
+				  unsigned int vf_id)
+{
+	struct dlb_credit_pool *ldb_pool, *dir_pool;
+	int ret;
+
+	port->ldb_pool_used = !dlb_list_empty(&domain->used_ldb_queues) ||
+			      !dlb_list_empty(&domain->avail_ldb_queues);
+
+	/* Each directed port has a directed queue, hence this port requires
+	 * directed credits.
+	 */
+	port->dir_pool_used = true;
+
+	if (port->ldb_pool_used) {
+		u32 cnt = args->ldb_credit_high_watermark;
+
+		ldb_pool = dlb_get_domain_ldb_pool(args->ldb_credit_pool_id,
+						   vf_request,
+						   domain);
+		if (!ldb_pool) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: port validation failed\n",
+				   __func__);
+			return -EFAULT;
+		}
+
+		dlb_ldb_pool_update_credit_count(hw, ldb_pool->id.phys_id, cnt);
+	} else {
+		args->ldb_credit_high_watermark = 0;
+		args->ldb_credit_low_watermark = 0;
+		args->ldb_credit_quantum = 0;
+	}
+
+	dir_pool = dlb_get_domain_dir_pool(args->dir_credit_pool_id,
+					   vf_request,
+					   domain);
+	if (!dir_pool) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: port validation failed\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	dlb_dir_pool_update_credit_count(hw,
+					 dir_pool->id.phys_id,
+					 args->dir_credit_high_watermark);
+
+	ret = dlb_dir_port_configure_cq(hw,
+					port,
+					pop_count_dma_base,
+					cq_dma_base,
+					args,
+					vf_request,
+					vf_id);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_dir_port_configure_pp(hw,
+					domain,
+					port,
+					args,
+					vf_request,
+					vf_id);
+	if (ret < 0)
+		return ret;
+
+	dlb_dir_port_cq_enable(hw, port);
+
+	port->enabled = true;
+
+	port->port_configured = true;
+
+	return 0;
+}
+
+static int dlb_ldb_port_map_qid_static(struct dlb_hw *hw,
+				       struct dlb_ldb_port *p,
+				       struct dlb_ldb_queue *q,
+				       u8 priority)
+{
+	union dlb_lsp_cq2priov r0;
+	union dlb_lsp_cq2qid r1;
+	union dlb_atm_pipe_qid_ldb_qid2cqidx r2;
+	union dlb_lsp_qid_ldb_qid2cqidx r3;
+	union dlb_lsp_qid_ldb_qid2cqidx2 r4;
+	enum dlb_qid_map_state state;
+	int i;
+
+	/* Look for a pending or already mapped slot, else an unused slot */
+	if (!dlb_port_find_slot_queue(p, DLB_QUEUE_MAP_IN_PROGRESS, q, &i) &&
+	    !dlb_port_find_slot_queue(p, DLB_QUEUE_MAPPED, q, &i) &&
+	    !dlb_port_find_slot(p, DLB_QUEUE_UNMAPPED, &i)) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: CQ has no available QID mapping slots\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port slot tracking failed\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Read-modify-write the priority and valid bit register */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ2PRIOV(p->id.phys_id));
+
+	r0.field.v |= 1 << i;
+	r0.field.prio |= (priority & 0x7) << i * 3;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ2PRIOV(p->id.phys_id), r0.val);
+
+	/* Read-modify-write the QID map register */
+	r1.val = DLB_CSR_RD(hw, DLB_LSP_CQ2QID(p->id.phys_id, i / 4));
+
+	if (i == 0 || i == 4)
+		r1.field.qid_p0 = q->id.phys_id;
+	if (i == 1 || i == 5)
+		r1.field.qid_p1 = q->id.phys_id;
+	if (i == 2 || i == 6)
+		r1.field.qid_p2 = q->id.phys_id;
+	if (i == 3 || i == 7)
+		r1.field.qid_p3 = q->id.phys_id;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ2QID(p->id.phys_id, i / 4), r1.val);
+
+	r2.val = DLB_CSR_RD(hw,
+			    DLB_ATM_PIPE_QID_LDB_QID2CQIDX(q->id.phys_id,
+							   p->id.phys_id / 4));
+
+	r3.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_QID2CQIDX(q->id.phys_id,
+						      p->id.phys_id / 4));
+
+	r4.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_QID2CQIDX2(q->id.phys_id,
+						       p->id.phys_id / 4));
+
+	switch (p->id.phys_id % 4) {
+	case 0:
+		r2.field.cq_p0 |= 1 << i;
+		r3.field.cq_p0 |= 1 << i;
+		r4.field.cq_p0 |= 1 << i;
+		break;
+
+	case 1:
+		r2.field.cq_p1 |= 1 << i;
+		r3.field.cq_p1 |= 1 << i;
+		r4.field.cq_p1 |= 1 << i;
+		break;
+
+	case 2:
+		r2.field.cq_p2 |= 1 << i;
+		r3.field.cq_p2 |= 1 << i;
+		r4.field.cq_p2 |= 1 << i;
+		break;
+
+	case 3:
+		r2.field.cq_p3 |= 1 << i;
+		r3.field.cq_p3 |= 1 << i;
+		r4.field.cq_p3 |= 1 << i;
+		break;
+	}
+
+	DLB_CSR_WR(hw,
+		   DLB_ATM_PIPE_QID_LDB_QID2CQIDX(q->id.phys_id,
+						  p->id.phys_id / 4),
+		   r2.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_QID_LDB_QID2CQIDX(q->id.phys_id,
+					     p->id.phys_id / 4),
+		   r3.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_QID_LDB_QID2CQIDX2(q->id.phys_id,
+					      p->id.phys_id / 4),
+		   r4.val);
+
+	dlb_flush_csr(hw);
+
+	p->qid_map[i].qid = q->id.phys_id;
+	p->qid_map[i].priority = priority;
+
+	state = DLB_QUEUE_MAPPED;
+
+	return dlb_port_slot_state_transition(hw, p, q, i, state);
+}
+
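The CQ2PRIOV read-modify-write above packs, per CQ, eight 3-bit priority fields plus eight valid bits. A sketch of the packing (layout inferred from the shifts above, not from the register spec); note the patch ORs the new priority bits in without clearing the previous value, whereas the sketch clears the field first:

#include <stdint.h>

struct sketch_cq2priov {
	uint32_t prio; /* slot i occupies bits [3i+2 : 3i] */
	uint32_t v;    /* bit i is slot i's valid flag */
};

static void sketch_set_slot(struct sketch_cq2priov *r, int i, uint8_t priority)
{
	r->v |= 1u << i;
	r->prio &= ~(0x7u << (i * 3)); /* clear the old priority */
	r->prio |= (uint32_t)(priority & 0x7) << (i * 3);
}
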
+static void dlb_ldb_port_change_qid_priority(struct dlb_hw *hw,
+					     struct dlb_ldb_port *port,
+					     int slot,
+					     struct dlb_map_qid_args *args)
+{
+	union dlb_lsp_cq2priov r0;
+
+	/* Read-modify-write the priority and valid bit register */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ2PRIOV(port->id.phys_id));
+
+	r0.field.v |= 1 << slot;
+	r0.field.prio |= (args->priority & 0x7) << slot * 3;
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ2PRIOV(port->id.phys_id), r0.val);
+
+	dlb_flush_csr(hw);
+
+	port->qid_map[slot].priority = args->priority;
+}
+
+static int dlb_ldb_port_set_has_work_bits(struct dlb_hw *hw,
+					  struct dlb_ldb_port *port,
+					  struct dlb_ldb_queue *queue,
+					  int slot)
+{
+	union dlb_lsp_qid_aqed_active_cnt r0;
+	union dlb_lsp_qid_ldb_enqueue_cnt r1;
+	union dlb_lsp_ldb_sched_ctrl r2 = { {0} };
+
+	/* Set the atomic scheduling haswork bit */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
+
+	r2.field.cq = port->id.phys_id;
+	r2.field.qidix = slot;
+	r2.field.value = 1;
+	r2.field.rlist_haswork_v = r0.field.count > 0;
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r2.val);
+
+	/* Set the non-atomic scheduling haswork bit */
+	r1.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
+
+	memset(&r2, 0, sizeof(r2));
+
+	r2.field.cq = port->id.phys_id;
+	r2.field.qidix = slot;
+	r2.field.value = 1;
+	r2.field.nalb_haswork_v = (r1.field.count > 0);
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r2.val);
+
+	dlb_flush_csr(hw);
+
+	return 0;
+}
+
+static void dlb_ldb_port_clear_has_work_bits(struct dlb_hw *hw,
+					     struct dlb_ldb_port *port,
+					     u8 slot)
+{
+	union dlb_lsp_ldb_sched_ctrl r2 = { {0} };
+
+	r2.field.cq = port->id.phys_id;
+	r2.field.qidix = slot;
+	r2.field.value = 0;
+	r2.field.rlist_haswork_v = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r2.val);
+
+	memset(&r2, 0, sizeof(r2));
+
+	r2.field.cq = port->id.phys_id;
+	r2.field.qidix = slot;
+	r2.field.value = 0;
+	r2.field.nalb_haswork_v = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r2.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_ldb_port_clear_queue_if_status(struct dlb_hw *hw,
+					       struct dlb_ldb_port *port,
+					       int slot)
+{
+	union dlb_lsp_ldb_sched_ctrl r0 = { {0} };
+
+	r0.field.cq = port->id.phys_id;
+	r0.field.qidix = slot;
+	r0.field.value = 0;
+	r0.field.inflight_ok_v = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r0.val);
+
+	dlb_flush_csr(hw);
+}
+
+static void dlb_ldb_port_set_queue_if_status(struct dlb_hw *hw,
+					     struct dlb_ldb_port *port,
+					     int slot)
+{
+	union dlb_lsp_ldb_sched_ctrl r0 = { {0} };
+
+	r0.field.cq = port->id.phys_id;
+	r0.field.qidix = slot;
+	r0.field.value = 1;
+	r0.field.inflight_ok_v = 1;
+
+	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r0.val);
+
+	dlb_flush_csr(hw);
+}
+
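The four helpers above issue the same LSP scheduler-control command and differ only in which flag bit they assert. A consolidated form is sketched below purely to document the shared shape (the enum and helper are editorial, not part of the patch):

enum sketch_sched_flag { RLIST_HASWORK, NALB_HASWORK, INFLIGHT_OK };

static void sketch_sched_ctrl(struct dlb_hw *hw, int cq, int qidix,
			      int value, enum sketch_sched_flag flag)
{
	union dlb_lsp_ldb_sched_ctrl r = { {0} };

	r.field.cq = cq;
	r.field.qidix = qidix;
	r.field.value = value;

	switch (flag) {
	case RLIST_HASWORK:
		r.field.rlist_haswork_v = 1;
		break;
	case NALB_HASWORK:
		r.field.nalb_haswork_v = 1;
		break;
	case INFLIGHT_OK:
		r.field.inflight_ok_v = 1;
		break;
	}

	DLB_CSR_WR(hw, DLB_LSP_LDB_SCHED_CTRL, r.val);

	dlb_flush_csr(hw);
}
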
+static void dlb_ldb_queue_set_inflight_limit(struct dlb_hw *hw,
+					     struct dlb_ldb_queue *queue)
+{
+	union dlb_lsp_qid_ldb_infl_lim r0 = { {0} };
+
+	r0.field.limit = queue->num_qid_inflights;
+
+	DLB_CSR_WR(hw, DLB_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r0.val);
+}
+
+static void dlb_ldb_queue_clear_inflight_limit(struct dlb_hw *hw,
+					       struct dlb_ldb_queue *queue)
+{
+	DLB_CSR_WR(hw,
+		   DLB_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
+		   DLB_LSP_QID_LDB_INFL_LIM_RST);
+}
+
+/* dlb_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as their
+ * function names imply, and should only be called by the dynamic CQ mapping
+ * code.
+ */
+static void dlb_ldb_queue_disable_mapped_cqs(struct dlb_hw *hw,
+					     struct dlb_domain *domain,
+					     struct dlb_ldb_queue *queue)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+	int slot;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		enum dlb_qid_map_state state = DLB_QUEUE_MAPPED;
+
+		if (!dlb_port_find_slot_queue(port, state, queue, &slot))
+			continue;
+
+		if (port->enabled)
+			dlb_ldb_port_cq_disable(hw, port);
+	}
+}
+
+static void dlb_ldb_queue_enable_mapped_cqs(struct dlb_hw *hw,
+					    struct dlb_domain *domain,
+					    struct dlb_ldb_queue *queue)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+	int slot;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		enum dlb_qid_map_state state = DLB_QUEUE_MAPPED;
+
+		if (!dlb_port_find_slot_queue(port, state, queue, &slot))
+			continue;
+
+		if (port->enabled)
+			dlb_ldb_port_cq_enable(hw, port);
+	}
+}
+
+static int dlb_ldb_port_finish_map_qid_dynamic(struct dlb_hw *hw,
+					       struct dlb_domain *domain,
+					       struct dlb_ldb_port *port,
+					       struct dlb_ldb_queue *queue)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_lsp_qid_ldb_infl_cnt r0;
+	enum dlb_qid_map_state state;
+	int slot, ret;
+	u8 prio;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
+
+	if (r0.field.count) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: non-zero QID inflight count\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	/* For each port with a pending mapping to this queue, perform the
+	 * static mapping and set the corresponding has_work bits.
+	 */
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	if (!dlb_port_find_slot_queue(port, state, queue, &slot))
+		return -EINVAL;
+
+	if (slot >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port slot tracking failed\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	prio = port->qid_map[slot].priority;
+
+	/* Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and
+	 * the port's qid_map state.
+	 */
+	ret = dlb_ldb_port_map_qid_static(hw, port, queue, prio);
+	if (ret)
+		return ret;
+
+	ret = dlb_ldb_port_set_has_work_bits(hw, port, queue, slot);
+	if (ret)
+		return ret;
+
+	/* Ensure IF_status(cq,qid) is 0 before enabling the port to
+	 * prevent spurious schedules to cause the queue's inflight
+	 * count to increase.
+	 */
+	dlb_ldb_port_clear_queue_if_status(hw, port, slot);
+
+	/* Reset the queue's inflight status */
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		state = DLB_QUEUE_MAPPED;
+		if (!dlb_port_find_slot_queue(port, state, queue, &slot))
+			continue;
+
+		dlb_ldb_port_set_queue_if_status(hw, port, slot);
+	}
+
+	dlb_ldb_queue_set_inflight_limit(hw, queue);
+
+	/* Re-enable CQs mapped to this queue */
+	dlb_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+	/* If this queue has other mappings pending, clear its inflight limit */
+	if (queue->num_pending_additions > 0)
+		dlb_ldb_queue_clear_inflight_limit(hw, queue);
+
+	return 0;
+}
+
+/**
+ * dlb_ldb_port_map_qid_dynamic() - perform a "dynamic" QID->CQ mapping
+ * @hw: dlb_hw handle for a particular device.
+ * @port: load-balanced port
+ * @queue: load-balanced queue
+ * @priority: queue servicing priority
+ *
+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur
+ * at a later point, and <0 if an error occurred.
+ */
+static int dlb_ldb_port_map_qid_dynamic(struct dlb_hw *hw,
+					struct dlb_ldb_port *port,
+					struct dlb_ldb_queue *queue,
+					u8 priority)
+{
+	union dlb_lsp_qid_ldb_infl_cnt r0 = { {0} };
+	enum dlb_qid_map_state state;
+	struct dlb_domain *domain;
+	int slot, ret;
+
+	domain = dlb_get_domain_from_id(hw, port->domain_id.phys_id, false, 0);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: unable to find domain %d\n",
+			   __func__, port->domain_id.phys_id);
+		return -EFAULT;
+	}
+
+	/* Set the QID inflight limit to 0 to prevent further scheduling of the
+	 * queue.
+	 */
+	DLB_CSR_WR(hw, DLB_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), 0);
+
+	if (!dlb_port_find_slot(port, DLB_QUEUE_UNMAPPED, &slot)) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available unmapped slots\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (slot >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port slot tracking failed\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	port->qid_map[slot].qid = queue->id.phys_id;
+	port->qid_map[slot].priority = priority;
+
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	ret = dlb_port_slot_state_transition(hw, port, queue, slot, state);
+	if (ret)
+		return ret;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
+
+	if (r0.field.count) {
+		/* The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	/* Disable the affected CQ, and the CQs already mapped to the QID,
+	 * before reading the QID's inflight count a second time. There is an
+	 * unlikely race in which the QID may schedule one more QE after we
+	 * read an inflight count of 0, and disabling the CQs guarantees that
+	 * the race will not occur after a re-read of the inflight count
+	 * register.
+	 */
+	if (port->enabled)
+		dlb_ldb_port_cq_disable(hw, port);
+
+	dlb_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
+
+	if (r0.field.count) {
+		if (port->enabled)
+			dlb_ldb_port_cq_enable(hw, port);
+
+		dlb_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+		/* The queue is owed completions so it's not safe to map it
+		 * yet. Schedule a kernel thread to complete the mapping later,
+		 * once software has completed all the queue's inflight events.
+		 */
+		if (!os_worker_active(hw))
+			os_schedule_work(hw);
+
+		return 1;
+	}
+
+	return dlb_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+}
+
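Because the dynamic path can return 1 when the mapping is deferred to the worker, callers must treat 0 and 1 as distinct success modes. An illustrative caller (hypothetical, for documentation only):

/* Sketch: 0 = mapped now, 1 = deferred to the worker, <0 = error. */
static int sketch_map_or_defer(struct dlb_hw *hw,
			       struct dlb_ldb_port *port,
			       struct dlb_ldb_queue *queue,
			       u8 prio)
{
	int ret = dlb_ldb_port_map_qid_dynamic(hw, port, queue, prio);

	if (ret < 0)
		return ret;	/* hard failure */

	if (ret == 1)
		return 0;	/* worker will finish the map later */

	return 0;		/* mapping took effect immediately */
}
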
+static int dlb_ldb_port_map_qid(struct dlb_hw *hw,
+				struct dlb_domain *domain,
+				struct dlb_ldb_port *port,
+				struct dlb_ldb_queue *queue,
+				u8 prio)
+{
+	if (domain->started)
+		return dlb_ldb_port_map_qid_dynamic(hw, port, queue, prio);
+	else
+		return dlb_ldb_port_map_qid_static(hw, port, queue, prio);
+}
+
+static int dlb_ldb_port_unmap_qid(struct dlb_hw *hw,
+				  struct dlb_ldb_port *port,
+				  struct dlb_ldb_queue *queue)
+{
+	enum dlb_qid_map_state mapped, in_progress, pending_map, unmapped;
+	union dlb_lsp_cq2priov r0;
+	union dlb_atm_pipe_qid_ldb_qid2cqidx r1;
+	union dlb_lsp_qid_ldb_qid2cqidx r2;
+	union dlb_lsp_qid_ldb_qid2cqidx2 r3;
+	u32 queue_id;
+	u32 port_id;
+	int i;
+
+	/* Find the queue's slot */
+	mapped = DLB_QUEUE_MAPPED;
+	in_progress = DLB_QUEUE_UNMAP_IN_PROGRESS;
+	pending_map = DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP;
+
+	if (!dlb_port_find_slot_queue(port, mapped, queue, &i) &&
+	    !dlb_port_find_slot_queue(port, in_progress, queue, &i) &&
+	    !dlb_port_find_slot_queue(port, pending_map, queue, &i)) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: QID %d isn't mapped\n",
+			   __func__, __LINE__, queue->id.phys_id);
+		return -EFAULT;
+	}
+
+	if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port slot tracking failed\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	port_id = port->id.phys_id;
+	queue_id = queue->id.phys_id;
+
+	/* Read-modify-write the priority and valid bit register */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ2PRIOV(port_id));
+
+	r0.field.v &= ~(1 << i);
+
+	DLB_CSR_WR(hw, DLB_LSP_CQ2PRIOV(port_id), r0.val);
+
+	r1.val = DLB_CSR_RD(hw,
+			    DLB_ATM_PIPE_QID_LDB_QID2CQIDX(queue_id,
+							   port_id / 4));
+
+	r2.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_QID2CQIDX(queue_id,
+						      port_id / 4));
+
+	r3.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_QID2CQIDX2(queue_id,
+						       port_id / 4));
+
+	switch (port_id % 4) {
+	case 0:
+		r1.field.cq_p0 &= ~(1 << i);
+		r2.field.cq_p0 &= ~(1 << i);
+		r3.field.cq_p0 &= ~(1 << i);
+		break;
+
+	case 1:
+		r1.field.cq_p1 &= ~(1 << i);
+		r2.field.cq_p1 &= ~(1 << i);
+		r3.field.cq_p1 &= ~(1 << i);
+		break;
+
+	case 2:
+		r1.field.cq_p2 &= ~(1 << i);
+		r2.field.cq_p2 &= ~(1 << i);
+		r3.field.cq_p2 &= ~(1 << i);
+		break;
+
+	case 3:
+		r1.field.cq_p3 &= ~(1 << i);
+		r2.field.cq_p3 &= ~(1 << i);
+		r3.field.cq_p3 &= ~(1 << i);
+		break;
+	}
+
+	DLB_CSR_WR(hw,
+		   DLB_ATM_PIPE_QID_LDB_QID2CQIDX(queue_id, port_id / 4),
+		   r1.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_QID_LDB_QID2CQIDX(queue_id, port_id / 4),
+		   r2.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_QID_LDB_QID2CQIDX2(queue_id, port_id / 4),
+		   r3.val);
+
+	dlb_flush_csr(hw);
+
+	unmapped = DLB_QUEUE_UNMAPPED;
+
+	return dlb_port_slot_state_transition(hw, port, queue, i, unmapped);
+}
+
+static void
+dlb_log_create_sched_domain_args(struct dlb_hw *hw,
+				 struct dlb_create_sched_domain_args *args,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create sched domain arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tNumber of LDB queues:        %d\n",
+		    args->num_ldb_queues);
+	DLB_HW_INFO(hw, "\tNumber of LDB ports:         %d\n",
+		    args->num_ldb_ports);
+	DLB_HW_INFO(hw, "\tNumber of DIR ports:         %d\n",
+		    args->num_dir_ports);
+	DLB_HW_INFO(hw, "\tNumber of ATM inflights:     %d\n",
+		    args->num_atomic_inflights);
+	DLB_HW_INFO(hw, "\tNumber of hist list entries: %d\n",
+		    args->num_hist_list_entries);
+	DLB_HW_INFO(hw, "\tNumber of LDB credits:       %d\n",
+		    args->num_ldb_credits);
+	DLB_HW_INFO(hw, "\tNumber of DIR credits:       %d\n",
+		    args->num_dir_credits);
+	DLB_HW_INFO(hw, "\tNumber of LDB credit pools:  %d\n",
+		    args->num_ldb_credit_pools);
+	DLB_HW_INFO(hw, "\tNumber of DIR credit pools:  %d\n",
+		    args->num_dir_credit_pools);
+}
+
+/**
+ * dlb_hw_create_sched_domain() - Allocate and initialize a DLB scheduling
+ *	domain and its resources.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_sched_domain(struct dlb_hw *hw,
+			       struct dlb_create_sched_domain_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_function_resources *rsrcs;
+	int ret;
+
+	rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
+
+	dlb_log_create_sched_domain_args(hw, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_sched_domain_args(hw, rsrcs, args, resp))
+		return -EINVAL;
+
+	domain = DLB_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));
+
+	/* Verification should catch this. */
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available domains\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (domain->configured) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: avail_domains contains configured domains.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	dlb_init_domain_rsrc_lists(domain);
+
+	/* Verification should catch this too. */
+	ret = dlb_domain_attach_resources(hw, rsrcs, domain, args, resp);
+	if (ret < 0) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: failed to verify args.\n",
+			   __func__);
+
+		return -EFAULT;
+	}
+
+	dlb_list_del(&rsrcs->avail_domains, &domain->func_list);
+
+	dlb_list_add(&rsrcs->used_domains, &domain->func_list);
+
+	resp->id = (vf_request) ? domain->id.virt_id : domain->id.phys_id;
+	resp->status = 0;
+
+	return 0;
+}
+
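For context, a minimal PF-side invocation might look like the following; the resource counts are arbitrary examples and the wrapper is hypothetical:

static int sketch_create_domain(struct dlb_hw *hw)
{
	struct dlb_create_sched_domain_args args = {
		.num_ldb_queues = 2,
		.num_ldb_ports = 2,
		.num_dir_ports = 1,
		.num_atomic_inflights = 64,
		.num_hist_list_entries = 128,
		.num_ldb_credits = 1024,
		.num_dir_credits = 512,
		.num_ldb_credit_pools = 1,
		.num_dir_credit_pools = 1,
	};
	struct dlb_cmd_response resp = {0};
	int ret;

	ret = dlb_hw_create_sched_domain(hw, &args, &resp, false, 0);
	if (ret)
		return ret; /* resp.status holds the detailed reason */

	return (int)resp.id; /* ID of the new scheduling domain */
}
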
+static void
+dlb_log_create_ldb_pool_args(struct dlb_hw *hw,
+			     u32 domain_id,
+			     struct dlb_create_ldb_pool_args *args,
+			     bool vf_request,
+			     unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create load-balanced credit pool arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID:             %d\n", domain_id);
+	DLB_HW_INFO(hw, "\tNumber of LDB credits: %d\n",
+		    args->num_ldb_credits);
+}
+
+/**
+ * dlb_hw_create_ldb_pool() - Allocate and initialize a DLB load-balanced
+ *	credit pool.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_ldb_pool(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_ldb_pool_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_credit_pool *pool;
+	struct dlb_domain *domain;
+
+	dlb_log_create_ldb_pool_args(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_ldb_pool_args(hw,
+					    domain_id,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	pool = DLB_DOM_LIST_HEAD(domain->avail_ldb_credit_pools, typeof(*pool));
+
+	/* Verification should catch this. */
+	if (!pool) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available ldb credit pools\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	dlb_configure_ldb_credit_pool(hw, domain, args, pool);
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb_list_del(&domain->avail_ldb_credit_pools, &pool->domain_list);
+
+	dlb_list_add(&domain->used_ldb_credit_pools, &pool->domain_list);
+
+	resp->status = 0;
+	resp->id = (vf_request) ? pool->id.virt_id : pool->id.phys_id;
+
+	return 0;
+}
+
+static void
+dlb_log_create_dir_pool_args(struct dlb_hw *hw,
+			     u32 domain_id,
+			     struct dlb_create_dir_pool_args *args,
+			     bool vf_request,
+			     unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create directed credit pool arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID:             %d\n", domain_id);
+	DLB_HW_INFO(hw, "\tNumber of DIR credits: %d\n",
+		    args->num_dir_credits);
+}
+
+/**
+ * dlb_hw_create_dir_pool() - Allocate and initialize a DLB directed credit
+ *	pool.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_dir_pool(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_dir_pool_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_credit_pool *pool;
+	struct dlb_domain *domain;
+
+	dlb_log_create_dir_pool_args(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	/* At least one available pool */
+	if (dlb_verify_create_dir_pool_args(hw,
+					    domain_id,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	pool = DLB_DOM_LIST_HEAD(domain->avail_dir_credit_pools, typeof(*pool));
+
+	/* Verification should catch this. */
+	if (!pool) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available dir credit pools\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	dlb_configure_dir_credit_pool(hw, domain, args, pool);
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb_list_del(&domain->avail_dir_credit_pools, &pool->domain_list);
+
+	dlb_list_add(&domain->used_dir_credit_pools, &pool->domain_list);
+
+	resp->status = 0;
+	resp->id = (vf_request) ? pool->id.virt_id : pool->id.phys_id;
+
+	return 0;
+}
+
+static void
+dlb_log_create_ldb_queue_args(struct dlb_hw *hw,
+			      u32 domain_id,
+			      struct dlb_create_ldb_queue_args *args,
+			      bool vf_request,
+			      unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create load-balanced queue arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID:                  %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB_HW_INFO(hw, "\tNumber of QID inflights:    %d\n",
+		    args->num_qid_inflights);
+	DLB_HW_INFO(hw, "\tNumber of ATM inflights:    %d\n",
+		    args->num_atomic_inflights);
+}
+
+/**
+ * dlb_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_ldb_queue(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_create_ldb_queue_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id)
+{
+	struct dlb_ldb_queue *queue;
+	struct dlb_domain *domain;
+	int ret;
+
+	dlb_log_create_ldb_queue_args(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	/* At least one available queue */
+	if (dlb_verify_create_ldb_queue_args(hw,
+					     domain_id,
+					     args,
+					     resp,
+					     vf_request,
+					     vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue = DLB_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+
+	/* Verification should catch this. */
+	if (!queue) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available ldb queues\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	ret = dlb_ldb_queue_attach_resources(hw, domain, queue, args);
+	if (ret < 0) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			   __func__, __LINE__);
+		return ret;
+	}
+
+	dlb_configure_ldb_queue(hw, domain, queue, args, vf_request, vf_id);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = (vf_request) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static void
+dlb_log_create_dir_queue_args(struct dlb_hw *hw,
+			      u32 domain_id,
+			      struct dlb_create_dir_queue_args *args,
+			      bool vf_request,
+			      unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create directed queue arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n", domain_id);
+	DLB_HW_INFO(hw, "\tPort ID:   %d\n", args->port_id);
+}
+
+/**
+ * dlb_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_dir_queue(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_create_dir_queue_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *queue;
+	struct dlb_domain *domain;
+
+	dlb_log_create_dir_queue_args(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_dir_queue_args(hw,
+					     domain_id,
+					     args,
+					     resp,
+					     vf_request,
+					     vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (args->port_id != -1)
+		queue = dlb_get_domain_used_dir_pq(args->port_id,
+						   vf_request,
+						   domain);
+	else
+		queue = DLB_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					  typeof(*queue));
+
+	/* Verification should catch this. */
+	if (!queue) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available dir queues\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	dlb_configure_dir_queue(hw, domain, queue, vf_request, vf_id);
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb_list_del(&domain->avail_dir_pq_pairs, &queue->domain_list);
+
+		dlb_list_add(&domain->used_dir_pq_pairs, &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vf_request) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
+static void dlb_log_create_ldb_port_args(struct dlb_hw *hw,
+					 u32 domain_id,
+					 u64 pop_count_dma_base,
+					 u64 cq_dma_base,
+					 struct dlb_create_ldb_port_args *args,
+					 bool vf_request,
+					 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create load-balanced port arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tLDB credit pool ID:        %d\n",
+		    args->ldb_credit_pool_id);
+	DLB_HW_INFO(hw, "\tLDB credit high watermark: %d\n",
+		    args->ldb_credit_high_watermark);
+	DLB_HW_INFO(hw, "\tLDB credit low watermark:  %d\n",
+		    args->ldb_credit_low_watermark);
+	DLB_HW_INFO(hw, "\tLDB credit quantum:        %d\n",
+		    args->ldb_credit_quantum);
+	DLB_HW_INFO(hw, "\tDIR credit pool ID:        %d\n",
+		    args->dir_credit_pool_id);
+	DLB_HW_INFO(hw, "\tDIR credit high watermark: %d\n",
+		    args->dir_credit_high_watermark);
+	DLB_HW_INFO(hw, "\tDIR credit low watermark:  %d\n",
+		    args->dir_credit_low_watermark);
+	DLB_HW_INFO(hw, "\tDIR credit quantum:        %d\n",
+		    args->dir_credit_quantum);
+	DLB_HW_INFO(hw, "\tpop_count_address:         0x%"PRIx64"\n",
+		    pop_count_dma_base);
+	DLB_HW_INFO(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB_HW_INFO(hw, "\tCQ hist list size:         %d\n",
+		    args->cq_history_list_size);
+	DLB_HW_INFO(hw, "\tCQ base address:           0x%"PRIx64"\n",
+		    cq_dma_base);
+}
+
+/**
+ * dlb_hw_create_ldb_port() - Allocate and initialize a load-balanced port and
+ *	its resources.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @pop_count_dma_base: Base address of the pop count memory.
+ * @cq_dma_base: Base address of the CQ memory.
+ * @resp: Response to user.
+ * @vf_request: Request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_ldb_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_ldb_port_args *args,
+			   u64 pop_count_dma_base,
+			   u64 cq_dma_base,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+	int ret;
+
+	dlb_log_create_ldb_port_args(hw,
+				     domain_id,
+				     pop_count_dma_base,
+				     cq_dma_base,
+				     args,
+				     vf_request,
+				     vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_ldb_port_args(hw,
+					    domain_id,
+					    pop_count_dma_base,
+					    cq_dma_base,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	port = DLB_DOM_LIST_HEAD(domain->avail_ldb_ports, typeof(*port));
+
+	/* Verification should catch this. */
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available ldb ports\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (port->configured) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: avail_ldb_ports contains configured ports.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	ret = dlb_configure_ldb_port(hw,
+				     domain,
+				     port,
+				     pop_count_dma_base,
+				     cq_dma_base,
+				     args,
+				     vf_request,
+				     vf_id);
+	if (ret < 0)
+		return ret;
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb_list_del(&domain->avail_ldb_ports, &port->domain_list);
+
+	dlb_list_add(&domain->used_ldb_ports, &port->domain_list);
+
+	resp->status = 0;
+	resp->id = (vf_request) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
+
+static void dlb_log_create_dir_port_args(struct dlb_hw *hw,
+					 u32 domain_id,
+					 u64 pop_count_dma_base,
+					 u64 cq_dma_base,
+					 struct dlb_create_dir_port_args *args,
+					 bool vf_request,
+					 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB create directed port arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID:                 %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tLDB credit pool ID:        %d\n",
+		    args->ldb_credit_pool_id);
+	DLB_HW_INFO(hw, "\tLDB credit high watermark: %d\n",
+		    args->ldb_credit_high_watermark);
+	DLB_HW_INFO(hw, "\tLDB credit low watermark:  %d\n",
+		    args->ldb_credit_low_watermark);
+	DLB_HW_INFO(hw, "\tLDB credit quantum:        %d\n",
+		    args->ldb_credit_quantum);
+	DLB_HW_INFO(hw, "\tDIR credit pool ID:        %d\n",
+		    args->dir_credit_pool_id);
+	DLB_HW_INFO(hw, "\tDIR credit high watermark: %d\n",
+		    args->dir_credit_high_watermark);
+	DLB_HW_INFO(hw, "\tDIR credit low watermark:  %d\n",
+		    args->dir_credit_low_watermark);
+	DLB_HW_INFO(hw, "\tDIR credit quantum:        %d\n",
+		    args->dir_credit_quantum);
+	DLB_HW_INFO(hw, "\tpop_count_address:         0x%"PRIx64"\n",
+		    pop_count_dma_base);
+	DLB_HW_INFO(hw, "\tCQ depth:                  %d\n",
+		    args->cq_depth);
+	DLB_HW_INFO(hw, "\tCQ base address:           0x%"PRIx64"\n",
+		    cq_dma_base);
+}
+
+/**
+ * dlb_hw_create_dir_port() - Allocate and initialize a DLB directed port and
+ *	queue. The port/queue pair share the same ID.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID of the domain receiving the port.
+ * @args: User-provided arguments.
+ * @pop_count_dma_base: Base DMA address of the pop count memory.
+ * @cq_dma_base: Base DMA address of the CQ memory.
+ * @resp: Response to user.
+ * @vf_request: Indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, the VF's ID.
+ *
+ * Return: < 0 on error, 0 otherwise. If the driver is unable to satisfy a
+ * request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_dir_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_dir_port_args *args,
+			   u64 pop_count_dma_base,
+			   u64 cq_dma_base,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *port;
+	struct dlb_domain *domain;
+	int ret;
+
+	dlb_log_create_dir_port_args(hw,
+				     domain_id,
+				     pop_count_dma_base,
+				     cq_dma_base,
+				     args,
+				     vf_request,
+				     vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_dir_port_args(hw,
+					    domain_id,
+					    pop_count_dma_base,
+					    cq_dma_base,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	if (args->queue_id != -1)
+		port = dlb_get_domain_used_dir_pq(args->queue_id,
+						  vf_request,
+						  domain);
+	else
+		port = DLB_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					 typeof(*port));
+
+	/* Verification should catch this. */
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available dir ports\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	ret = dlb_configure_dir_port(hw,
+				     domain,
+				     port,
+				     pop_count_dma_base,
+				     cq_dma_base,
+				     args,
+				     vf_request,
+				     vf_id);
+	if (ret < 0)
+		return ret;
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->queue_id == -1) {
+		dlb_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
+
+		dlb_list_add(&domain->used_dir_pq_pairs, &port->domain_list);
+	}
+
+	resp->status = 0;
+	resp->id = (vf_request) ? port->id.virt_id : port->id.phys_id;
+
+	return 0;
+}
+
+static void dlb_log_start_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB start domain arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb_hw_start_domain() - Lock the domain configuration
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID of the domain to start.
+ * @arg:  User-provided arguments (unused).
+ * @resp: Response to user.
+ * @vf_request: Indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, the VF's ID.
+ *
+ * Return: < 0 on error, 0 otherwise. If the driver is unable to satisfy a
+ * request, resp->status will be set accordingly.
+ */
+int dlb_hw_start_domain(struct dlb_hw *hw,
+			u32 domain_id,
+			__attribute__((unused)) struct dlb_start_domain_args *arg,
+			struct dlb_cmd_response *resp,
+			bool vf_request,
+			unsigned int vf_id)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *dir_queue;
+	struct dlb_ldb_queue *ldb_queue;
+	struct dlb_credit_pool *pool;
+	struct dlb_domain *domain;
+
+	dlb_log_start_domain(hw, domain_id, vf_request, vf_id);
+
+	if (dlb_verify_start_domain_args(hw,
+					 domain_id,
+					 resp,
+					 vf_request,
+					 vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Write the domain's pool credit counts, which have been updated
+	 * during port configuration. The sum of the pool credit count plus
+	 * each producer port's credit count must equal the pool's credit
+	 * allocation *before* traffic is sent.
+	 */
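+	/* Editorial example: a pool allocated 1024 credits whose two
+	 * producer ports each hold 64 credits must have 1024 - 2 * 64 = 896
+	 * written as its pool credit count here.
+	 */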
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter)
+		dlb_ldb_pool_write_credit_count_reg(hw, pool->id.phys_id);
+
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter)
+		dlb_dir_pool_write_credit_count_reg(hw, pool->id.phys_id);
+
+	/* Enable load-balanced and directed queue write permissions for the
+	 * queues this domain owns. Without this, the DLB will drop all
+	 * incoming traffic to those queues.
+	 */
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		union dlb_sys_ldb_vasqid_v r0 = { {0} };
+		unsigned int offs;
+
+		r0.field.vasqid_v = 1;
+
+		offs = domain->id.phys_id * DLB_MAX_NUM_LDB_QUEUES +
+			ldb_queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_LDB_VASQID_V(offs), r0.val);
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {
+		union dlb_sys_dir_vasqid_v r0 = { {0} };
+		unsigned int offs;
+
+		r0.field.vasqid_v = 1;
+
+		offs = domain->id.phys_id * DLB_MAX_NUM_DIR_PORTS +
+			dir_queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_DIR_VASQID_V(offs), r0.val);
+	}
+
+	dlb_flush_csr(hw);
+
+	domain->started = true;
+
+	resp->status = 0;
+
+	return 0;
+}
+
+static void dlb_domain_finish_unmap_port_slot(struct dlb_hw *hw,
+					      struct dlb_domain *domain,
+					      struct dlb_ldb_port *port,
+					      int slot)
+{
+	enum dlb_qid_map_state state;
+	struct dlb_ldb_queue *queue;
+
+	queue = &hw->rsrcs.ldb_queues[port->qid_map[slot].qid];
+
+	state = port->qid_map[slot].state;
+
+	/* Update the QID2CQIDX and CQ2QID vectors */
+	dlb_ldb_port_unmap_qid(hw, port, queue);
+
+	/* Ensure the QID will not be serviced by this {CQ, slot} by clearing
+	 * the has_work bits.
+	 */
+	dlb_ldb_port_clear_has_work_bits(hw, port, slot);
+
+	/* Reset the {CQ, slot} to its default state */
+	dlb_ldb_port_set_queue_if_status(hw, port, slot);
+
+	/* Re-enable the CQ if it wasn't manually disabled by the user */
+	if (port->enabled)
+		dlb_ldb_port_cq_enable(hw, port);
+
+	/* If there is a mapping that is pending this slot's removal, perform
+	 * the mapping now.
+	 */
+	if (state == DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP) {
+		struct dlb_ldb_port_qid_map *map;
+		struct dlb_ldb_queue *map_queue;
+		u8 prio;
+
+		map = &port->qid_map[slot];
+
+		map->qid = map->pending_qid;
+		map->priority = map->pending_priority;
+
+		map_queue = &hw->rsrcs.ldb_queues[map->qid];
+		prio = map->priority;
+
+		dlb_ldb_port_map_qid(hw, domain, port, map_queue, prio);
+	}
+}
+
+static bool dlb_domain_finish_unmap_port(struct dlb_hw *hw,
+					 struct dlb_domain *domain,
+					 struct dlb_ldb_port *port)
+{
+	union dlb_lsp_cq_ldb_infl_cnt r0;
+	int i;
+
+	if (port->num_pending_removals == 0)
+		return false;
+
+	/* The unmap requires all the CQ's outstanding inflights to be
+	 * completed.
+	 */
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
+	if (r0.field.count > 0)
+		return false;
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		struct dlb_ldb_port_qid_map *map;
+
+		map = &port->qid_map[i];
+
+		if (map->state != DLB_QUEUE_UNMAP_IN_PROGRESS &&
+		    map->state != DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP)
+			continue;
+
+		dlb_domain_finish_unmap_port_slot(hw, domain, port, i);
+	}
+
+	return true;
+}
+
+static unsigned int
+dlb_domain_finish_unmap_qid_procedures(struct dlb_hw *hw,
+				       struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	if (!domain->configured || domain->num_pending_removals == 0)
+		return 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		dlb_domain_finish_unmap_port(hw, domain, port);
+
+	return domain->num_pending_removals;
+}
+
+unsigned int dlb_finish_unmap_qid_procedures(struct dlb_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue unmap jobs for any domain that needs it */
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
+		struct dlb_domain *domain = &hw->domains[i];
+
+		num += dlb_domain_finish_unmap_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+static void dlb_domain_finish_map_port(struct dlb_hw *hw,
+				       struct dlb_domain *domain,
+				       struct dlb_ldb_port *port)
+{
+	int i;
+
+	for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++) {
+		union dlb_lsp_qid_ldb_infl_cnt r0;
+		struct dlb_ldb_queue *queue;
+		int qid;
+
+		if (port->qid_map[i].state != DLB_QUEUE_MAP_IN_PROGRESS)
+			continue;
+
+		qid = port->qid_map[i].qid;
+
+		queue = dlb_get_ldb_queue_from_id(hw, qid, false, 0);
+
+		if (!queue) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: unable to find queue %d\n",
+				   __func__, qid);
+			continue;
+		}
+
+		r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_INFL_CNT(qid));
+
+		if (r0.field.count)
+			continue;
+
+		/* Disable the affected CQ, and the CQs already mapped to the
+		 * QID, before reading the QID's inflight count a second time.
+		 * There is an unlikely race in which the QID may schedule one
+		 * more QE after we read an inflight count of 0, and disabling
+		 * the CQs guarantees that the race will not occur after a
+		 * re-read of the inflight count register.
+		 */
+		if (port->enabled)
+			dlb_ldb_port_cq_disable(hw, port);
+
+		dlb_ldb_queue_disable_mapped_cqs(hw, domain, queue);
+
+		r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_LDB_INFL_CNT(qid));
+
+		if (r0.field.count) {
+			if (port->enabled)
+				dlb_ldb_port_cq_enable(hw, port);
+
+			dlb_ldb_queue_enable_mapped_cqs(hw, domain, queue);
+
+			continue;
+		}
+
+		dlb_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);
+	}
+}
+
+static unsigned int
+dlb_domain_finish_map_qid_procedures(struct dlb_hw *hw,
+				     struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	if (!domain->configured || domain->num_pending_additions == 0)
+		return 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		dlb_domain_finish_map_port(hw, domain, port);
+
+	return domain->num_pending_additions;
+}
+
+unsigned int dlb_finish_map_qid_procedures(struct dlb_hw *hw)
+{
+	int i, num = 0;
+
+	/* Finish queue map jobs for any domain that needs it */
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
+		struct dlb_domain *domain = &hw->domains[i];
+
+		num += dlb_domain_finish_map_qid_procedures(hw, domain);
+	}
+
+	return num;
+}
+
+static void dlb_log_map_qid(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_map_qid_args *args,
+			    bool vf_request,
+			    unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB map QID arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB_HW_INFO(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	DLB_HW_INFO(hw, "\tPriority:  %d\n",
+		    args->priority);
+}
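+
+/*
+ * Editorial note: dlb_hw_map_qid() and dlb_hw_unmap_qid() below drive a
+ * per-slot state machine. A sketch of the main transitions, derived from
+ * the state checks in this file (not an authoritative hardware spec):
+ *
+ *	UNMAPPED --map--> MAP_IN_PROGRESS --completion--> MAPPED
+ *	MAPPED --unmap--> UNMAP_IN_PROGRESS --completion--> UNMAPPED
+ *	UNMAP_IN_PROGRESS --map--> UNMAP_IN_PROGRESS_PENDING_MAP
+ *		--completion--> MAP_IN_PROGRESS
+ *
+ * Requests that only change a queue's priority bypass the state machine
+ * and update the slot's priority (or pending_priority) in place.
+ */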
+
+int dlb_hw_map_qid(struct dlb_hw *hw,
+		   u32 domain_id,
+		   struct dlb_map_qid_args *args,
+		   struct dlb_cmd_response *resp,
+		   bool vf_request,
+		   unsigned int vf_id)
+{
+	enum dlb_qid_map_state state;
+	struct dlb_ldb_queue *queue;
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+	int ret, i, id;
+	u8 prio;
+
+	dlb_log_map_qid(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_map_qid_args(hw,
+				    domain_id,
+				    args,
+				    resp,
+				    vf_request,
+				    vf_id))
+		return -EINVAL;
+
+	prio = args->priority;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue = dlb_get_domain_ldb_queue(args->qid, vf_request, domain);
+	if (!queue) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: queue not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* If there are any outstanding detach operations for this port,
+	 * attempt to complete them. This may be necessary to free up a QID
+	 * slot for this requested mapping.
+	 */
+	if (port->num_pending_removals)
+		dlb_domain_finish_unmap_port(hw, domain, port);
+
+	ret = dlb_verify_map_qid_slot_available(port, queue, resp);
+	if (ret)
+		return ret;
+
+	/* Hardware requires disabling the CQ before mapping QIDs. */
+	if (port->enabled)
+		dlb_ldb_port_cq_disable(hw, port);
+
+	/* If this is only a priority change, don't perform the full QID->CQ
+	 * mapping procedure.
+	 */
+	state = DLB_QUEUE_MAPPED;
+	if (dlb_port_find_slot_queue(port, state, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		if (prio != port->qid_map[i].priority) {
+			dlb_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB_HW_INFO(hw, "DLB map: priority change only\n");
+		}
+
+		state = DLB_QUEUE_MAPPED;
+		ret = dlb_port_slot_state_transition(hw, port, queue, i, state);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
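+	/* If this queue is currently being unmapped from this port, the map
+	 * request supersedes the unmap: restore the MAPPED state (updating
+	 * the priority if it changed).
+	 */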
+	state = DLB_QUEUE_UNMAP_IN_PROGRESS;
+	if (dlb_port_find_slot_queue(port, state, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		if (prio != port->qid_map[i].priority) {
+			dlb_ldb_port_change_qid_priority(hw, port, i, args);
+			DLB_HW_INFO(hw, "DLB map: priority change only\n");
+		}
+
+		state = DLB_QUEUE_MAPPED;
+		ret = dlb_port_slot_state_transition(hw, port, queue, i, state);
+		if (ret)
+			return ret;
+
+		goto map_qid_done;
+	}
+
+	/* If this is a priority change on an in-progress mapping, don't
+	 * perform the full QID->CQ mapping procedure.
+	 */
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	if (dlb_port_find_slot_queue(port, state, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		port->qid_map[i].priority = prio;
+
+		DLB_HW_INFO(hw, "DLB map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/* If this is a priority change on a pending mapping, update the
+	 * pending priority.
+	 */
+	if (dlb_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		port->qid_map[i].pending_priority = prio;
+
+		DLB_HW_INFO(hw, "DLB map: priority change only\n");
+
+		goto map_qid_done;
+	}
+
+	/* If all the CQ's slots are in use, then there's an unmap in progress
+	 * (guaranteed by dlb_verify_map_qid_slot_available()), so add this
+	 * mapping to pending_map and return. When the removal is completed for
+	 * the slot's current occupant, this mapping will be performed.
+	 */
+	if (!dlb_port_find_slot(port, DLB_QUEUE_UNMAPPED, &i)) {
+		if (dlb_port_find_slot(port, DLB_QUEUE_UNMAP_IN_PROGRESS, &i)) {
+			enum dlb_qid_map_state state;
+
+			if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+				DLB_HW_ERR(hw,
+					   "[%s():%d] Internal error: port slot tracking failed\n",
+					   __func__, __LINE__);
+				return -EFAULT;
+			}
+
+			port->qid_map[i].pending_qid = queue->id.phys_id;
+			port->qid_map[i].pending_priority = prio;
+
+			state = DLB_QUEUE_UNMAP_IN_PROGRESS_PENDING_MAP;
+
+			ret = dlb_port_slot_state_transition(hw, port, queue,
+							     i, state);
+			if (ret)
+				return ret;
+
+			DLB_HW_INFO(hw, "DLB map: map pending removal\n");
+
+			goto map_qid_done;
+		}
+	}
+
+	/* If the domain has started, a special "dynamic" CQ->queue mapping
+	 * procedure is required in order to safely update the CQ<->QID tables.
+	 * The "static" procedure cannot be used when traffic is flowing,
+	 * because the CQ<->QID tables cannot be updated atomically and the
+	 * scheduler won't see the new mapping unless the queue's if_status
+	 * changes, which isn't guaranteed.
+	 */
+	ret = dlb_ldb_port_map_qid(hw, domain, port, queue, prio);
+
+	/* If ret is less than zero, it's due to an internal error */
+	if (ret < 0)
+		return ret;
+
+map_qid_done:
+	if (port->enabled)
+		dlb_ldb_port_cq_enable(hw, port);
+
+	resp->status = 0;
+
+	return 0;
+}
+
+static void dlb_log_unmap_qid(struct dlb_hw *hw,
+			      u32 domain_id,
+			      struct dlb_unmap_qid_args *args,
+			      bool vf_request,
+			      unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB unmap QID arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tPort ID:   %d\n",
+		    args->port_id);
+	DLB_HW_INFO(hw, "\tQueue ID:  %d\n",
+		    args->qid);
+	if (args->qid < DLB_MAX_NUM_LDB_QUEUES)
+		DLB_HW_INFO(hw, "\tQueue's num mappings:  %d\n",
+			    hw->rsrcs.ldb_queues[args->qid].num_mappings);
+}
+
+int dlb_hw_unmap_qid(struct dlb_hw *hw,
+		     u32 domain_id,
+		     struct dlb_unmap_qid_args *args,
+		     struct dlb_cmd_response *resp,
+		     bool vf_request,
+		     unsigned int vf_id)
+{
+	enum dlb_qid_map_state state;
+	struct dlb_ldb_queue *queue;
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+	bool unmap_complete;
+	int i, ret, id;
+
+	dlb_log_unmap_qid(hw, domain_id, args, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_unmap_qid_args(hw,
+				      domain_id,
+				      args,
+				      resp,
+				      vf_request,
+				      vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue = dlb_get_domain_ldb_queue(args->qid, vf_request, domain);
+	if (!queue) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: queue not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* If the queue hasn't been mapped yet, we need to update the slot's
+	 * state and re-enable the queue's inflights.
+	 */
+	state = DLB_QUEUE_MAP_IN_PROGRESS;
+	if (dlb_port_find_slot_queue(port, state, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		/* Since the in-progress map was aborted, re-enable the QID's
+		 * inflights.
+		 */
+		if (queue->num_pending_additions == 0)
+			dlb_ldb_queue_set_inflight_limit(hw, queue);
+
+		state = DLB_QUEUE_UNMAPPED;
+		ret = dlb_port_slot_state_transition(hw, port, queue, i, state);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	/* If the queue mapping is on hold pending an unmap, we simply need to
+	 * update the slot's state.
+	 */
+	if (dlb_port_find_slot_with_pending_map_queue(port, queue, &i)) {
+		if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+			DLB_HW_ERR(hw,
+				   "[%s():%d] Internal error: port slot tracking failed\n",
+				   __func__, __LINE__);
+			return -EFAULT;
+		}
+
+		state = DLB_QUEUE_UNMAP_IN_PROGRESS;
+		ret = dlb_port_slot_state_transition(hw, port, queue, i, state);
+		if (ret)
+			return ret;
+
+		goto unmap_qid_done;
+	}
+
+	state = DLB_QUEUE_MAPPED;
+	if (!dlb_port_find_slot_queue(port, state, queue, &i)) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: no available CQ slots\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	if (i >= DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port slot tracking failed\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* QID->CQ mapping removal is an asynchronous procedure. It requires
+	 * stopping the DLB from scheduling this CQ, draining all inflights
+	 * from the CQ, then unmapping the queue from the CQ. This function
+	 * simply marks the port as needing the queue unmapped, and (if
+	 * necessary) starts the unmapping worker thread.
+	 */
+	dlb_ldb_port_cq_disable(hw, port);
+
+	state = DLB_QUEUE_UNMAP_IN_PROGRESS;
+	ret = dlb_port_slot_state_transition(hw, port, queue, i, state);
+	if (ret)
+		return ret;
+
+	/* Attempt to finish the unmapping now, in case the port has no
+	 * outstanding inflights. If that's not the case, this will fail and
+	 * the unmapping will be completed at a later time.
+	 */
+	unmap_complete = dlb_domain_finish_unmap_port(hw, domain, port);
+
+	/* If the unmapping couldn't complete immediately, launch the worker
+	 * thread (if it isn't already launched) to finish it later.
+	 */
+	if (!unmap_complete && !os_worker_active(hw))
+		os_schedule_work(hw);
+
+unmap_qid_done:
+	resp->status = 0;
+
+	return 0;
+}
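+
+/*
+ * Editorial sketch of the deferred-work side (the callback name is
+ * hypothetical; the OS layer is assumed to invoke it after
+ * os_schedule_work()):
+ *
+ *	static void dlb_complete_map_unmap_work(struct dlb_hw *hw)
+ *	{
+ *		unsigned int pending;
+ *
+ *		pending = dlb_finish_unmap_qid_procedures(hw);
+ *		pending += dlb_finish_map_qid_procedures(hw);
+ *
+ *		if (pending)
+ *			os_schedule_work(hw);
+ *	}
+ *
+ * re-scheduling itself until both pending counts drain to zero.
+ */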
+
+static void dlb_log_enable_port(struct dlb_hw *hw,
+				u32 domain_id,
+				u32 port_id,
+				bool vf_request,
+				unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB enable port arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tPort ID:   %d\n",
+		    port_id);
+}
+
+int dlb_hw_enable_ldb_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_enable_ldb_port_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+	int id;
+
+	dlb_log_enable_port(hw, domain_id, args->port_id, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_enable_ldb_port_args(hw,
+					    domain_id,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Enable the port's CQ if it isn't already enabled. */
+	if (!port->enabled) {
+		dlb_ldb_port_cq_enable(hw, port);
+		port->enabled = true;
+
+		hw->pf.num_enabled_ldb_ports++;
+		dlb_update_ldb_arb_threshold(hw);
+	}
+
+	resp->status = 0;
+
+	return 0;
+}
+
+static void dlb_log_disable_port(struct dlb_hw *hw,
+				 u32 domain_id,
+				 u32 port_id,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB disable port arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tPort ID:   %d\n",
+		    port_id);
+}
+
+int dlb_hw_disable_ldb_port(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_disable_ldb_port_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id)
+{
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+	int id;
+
+	dlb_log_disable_port(hw, domain_id, args->port_id, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_disable_ldb_port_args(hw,
+					     domain_id,
+					     args,
+					     resp,
+					     vf_request,
+					     vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_ldb_port(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Disable the port's CQ if it isn't already disabled. */
+	if (port->enabled) {
+		dlb_ldb_port_cq_disable(hw, port);
+		port->enabled = false;
+
+		hw->pf.num_enabled_ldb_ports--;
+		dlb_update_ldb_arb_threshold(hw);
+	}
+
+	resp->status = 0;
+
+	return 0;
+}
+
+int dlb_hw_enable_dir_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_enable_dir_port_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *port;
+	struct dlb_domain *domain;
+	int id;
+
+	dlb_log_enable_port(hw, domain_id, args->port_id, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_enable_dir_port_args(hw,
+					    domain_id,
+					    args,
+					    resp,
+					    vf_request,
+					    vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_dir_pq(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Enable the port's CQ if it isn't already enabled. */
+	if (!port->enabled) {
+		dlb_dir_port_cq_enable(hw, port);
+		port->enabled = true;
+	}
+
+	resp->status = 0;
+
+	return 0;
+}
+
+int dlb_hw_disable_dir_port(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_disable_dir_port_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *port;
+	struct dlb_domain *domain;
+	int id;
+
+	dlb_log_disable_port(hw, domain_id, args->port_id, vf_request, vf_id);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_disable_dir_port_args(hw,
+					     domain_id,
+					     args,
+					     resp,
+					     vf_request,
+					     vf_id))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	id = args->port_id;
+
+	port = dlb_get_domain_used_dir_pq(id, vf_request, domain);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: port not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	/* Disable the port's CQ if it isn't already disabled. */
+	if (port->enabled) {
+		dlb_dir_port_cq_disable(hw, port);
+		port->enabled = false;
+	}
+
+	resp->status = 0;
+
+	return 0;
+}
+
+int dlb_notify_vf(struct dlb_hw *hw,
+		  unsigned int vf_id,
+		  enum dlb_mbox_vf_notification_type notification)
+{
+	struct dlb_mbox_vf_notification_cmd_req req;
+	int retry_cnt;
+
+	req.hdr.type = DLB_MBOX_VF_CMD_NOTIFICATION;
+	req.notification = notification;
+
+	if (dlb_pf_write_vf_mbox_req(hw, vf_id, &req, sizeof(req)))
+		return -1;
+
+	dlb_send_async_pf_to_vf_msg(hw, vf_id);
+
+	/* Timeout after 1 second of inactivity */
+	retry_cnt = 0;
+	while (!dlb_pf_to_vf_complete(hw, vf_id)) {
+		os_msleep(1);
+		if (++retry_cnt >= 1000) {
+			DLB_HW_ERR(hw,
+				   "PF driver timed out waiting for mbox response\n");
+			return -1;
+		}
+	}
+
+	/* No response data expected for notifications. */
+
+	return 0;
+}
+
+int dlb_vf_in_use(struct dlb_hw *hw, unsigned int vf_id)
+{
+	struct dlb_mbox_vf_in_use_cmd_resp resp;
+	struct dlb_mbox_vf_in_use_cmd_req req;
+	int retry_cnt;
+
+	req.hdr.type = DLB_MBOX_VF_CMD_IN_USE;
+
+	if (dlb_pf_write_vf_mbox_req(hw, vf_id, &req, sizeof(req)))
+		return -1;
+
+	dlb_send_async_pf_to_vf_msg(hw, vf_id);
+
+	/* Timeout after 1 second of inactivity */
+	retry_cnt = 0;
+	while (!dlb_pf_to_vf_complete(hw, vf_id)) {
+		os_msleep(1);
+		if (++retry_cnt >= 1000) {
+			DLB_HW_ERR(hw,
+				   "PF driver timed out waiting for mbox response\n");
+			return -1;
+		}
+	}
+
+	if (dlb_pf_read_vf_mbox_resp(hw, vf_id, &resp, sizeof(resp)))
+		return -1;
+
+	if (resp.hdr.status != DLB_MBOX_ST_SUCCESS) {
+		DLB_HW_ERR(hw,
+			   "[%s()]: failed with mailbox error: %s\n",
+			   __func__,
+			   DLB_MBOX_ST_STRING(&resp));
+
+		return -1;
+	}
+
+	return resp.in_use;
+}
+
+static int dlb_vf_domain_alert(struct dlb_hw *hw,
+			       unsigned int vf_id,
+			       u32 domain_id,
+			       u32 alert_id,
+			       u32 aux_alert_data)
+{
+	struct dlb_mbox_vf_alert_cmd_req req;
+	int retry_cnt;
+
+	req.hdr.type = DLB_MBOX_VF_CMD_DOMAIN_ALERT;
+	req.domain_id = domain_id;
+	req.alert_id = alert_id;
+	req.aux_alert_data = aux_alert_data;
+
+	if (dlb_pf_write_vf_mbox_req(hw, vf_id, &req, sizeof(req)))
+		return -1;
+
+	dlb_send_async_pf_to_vf_msg(hw, vf_id);
+
+	/* Timeout after 1 second of inactivity */
+	retry_cnt = 0;
+	while (!dlb_pf_to_vf_complete(hw, vf_id)) {
+		os_msleep(1);
+		if (++retry_cnt >= 1000) {
+			DLB_HW_ERR(hw,
+				   "PF driver timed out waiting for mbox response\n");
+			return -1;
+		}
+	}
+
+	/* No response data expected for alarm notifications. */
+
+	return 0;
+}
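+
+/*
+ * Editorial note: the PF->VF mailbox commands above share one pattern:
+ * write the request, send the async message, poll for completion with a
+ * ~1 second bound, then (if applicable) read and validate the response.
+ * A hypothetical helper capturing the polling step (not part of this
+ * patch):
+ *
+ *	static int dlb_pf_wait_vf_mbox(struct dlb_hw *hw, unsigned int vf_id)
+ *	{
+ *		int retry_cnt = 0;
+ *
+ *		while (!dlb_pf_to_vf_complete(hw, vf_id)) {
+ *			os_msleep(1);
+ *			if (++retry_cnt >= 1000)
+ *				return -1;
+ *		}
+ *		return 0;
+ *	}
+ */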
+
+void dlb_set_msix_mode(struct dlb_hw *hw, int mode)
+{
+	union dlb_sys_msix_mode r0 = { {0} };
+
+	r0.field.mode = mode;
+
+	DLB_CSR_WR(hw, DLB_SYS_MSIX_MODE, r0.val);
+}
+
+int dlb_configure_ldb_cq_interrupt(struct dlb_hw *hw,
+				   int port_id,
+				   int vector,
+				   int mode,
+				   unsigned int vf,
+				   unsigned int owner_vf,
+				   u16 threshold)
+{
+	union dlb_chp_ldb_cq_int_depth_thrsh r0 = { {0} };
+	union dlb_chp_ldb_cq_int_enb r1 = { {0} };
+	union dlb_sys_ldb_cq_isr r2 = { {0} };
+	struct dlb_ldb_port *port;
+	bool vf_request;
+
+	vf_request = (mode == DLB_CQ_ISR_MODE_MSI);
+
+	port = dlb_get_ldb_port_from_id(hw, port_id, vf_request, vf);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s()]: Internal error: failed to enable LDB CQ int\n\tport_id: %u, vf_req: %u, vf: %u\n",
+			   __func__, port_id, vf_request, vf);
+		return -EINVAL;
+	}
+
+	/* Trigger the interrupt when threshold or more QEs arrive in the CQ */
+	r0.field.depth_threshold = threshold - 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
+		   r0.val);
+
+	r1.field.en_depth = 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_LDB_CQ_INT_ENB(port->id.phys_id), r1.val);
+
+	r2.field.vector = vector;
+	r2.field.vf = owner_vf;
+	r2.field.en_code = mode;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_CQ_ISR(port->id.phys_id), r2.val);
+
+	return 0;
+}
+
+int dlb_configure_dir_cq_interrupt(struct dlb_hw *hw,
+				   int port_id,
+				   int vector,
+				   int mode,
+				   unsigned int vf,
+				   unsigned int owner_vf,
+				   u16 threshold)
+{
+	union dlb_chp_dir_cq_int_depth_thrsh r0 = { {0} };
+	union dlb_chp_dir_cq_int_enb r1 = { {0} };
+	union dlb_sys_dir_cq_isr r2 = { {0} };
+	struct dlb_dir_pq_pair *port;
+	bool vf_request;
+
+	vf_request = (mode == DLB_CQ_ISR_MODE_MSI);
+
+	port = dlb_get_dir_pq_from_id(hw, port_id, vf_request, vf);
+	if (!port) {
+		DLB_HW_ERR(hw,
+			   "[%s()]: Internal error: failed to enable DIR CQ int\n\tport_id: %u, vf_req: %u, vf: %u\n",
+			   __func__, port_id, vf_request, vf);
+		return -EINVAL;
+	}
+
+	/* Trigger the interrupt when threshold or more QEs arrive in the CQ */
+	r0.field.depth_threshold = threshold - 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
+		   r0.val);
+
+	r1.field.en_depth = 1;
+
+	DLB_CSR_WR(hw, DLB_CHP_DIR_CQ_INT_ENB(port->id.phys_id), r1.val);
+
+	r2.field.vector = vector;
+	r2.field.vf = owner_vf;
+	r2.field.en_code = mode;
+
+	DLB_CSR_WR(hw, DLB_SYS_DIR_CQ_ISR(port->id.phys_id), r2.val);
+
+	return 0;
+}
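+
+/*
+ * Editorial example: since the hardware is programmed with
+ * depth_threshold = threshold - 1 and fires when threshold or more QEs
+ * are in the CQ, passing threshold = 1 arms the interrupt to fire as soon
+ * as a single QE arrives.
+ */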
+
+int dlb_arm_cq_interrupt(struct dlb_hw *hw,
+			 int port_id,
+			 bool is_ldb,
+			 bool vf_request,
+			 unsigned int vf_id)
+{
+	u32 val;
+	u32 reg;
+
+	if (vf_request && is_ldb) {
+		struct dlb_ldb_port *ldb_port;
+
+		ldb_port = dlb_get_ldb_port_from_id(hw, port_id, true, vf_id);
+
+		if (!ldb_port || !ldb_port->configured)
+			return -EINVAL;
+
+		port_id = ldb_port->id.phys_id;
+	} else if (vf_request && !is_ldb) {
+		struct dlb_dir_pq_pair *dir_port;
+
+		dir_port = dlb_get_dir_pq_from_id(hw, port_id, true, vf_id);
+
+		if (!dir_port || !dir_port->port_configured)
+			return -EINVAL;
+
+		port_id = dir_port->id.phys_id;
+	}
+
+	val = 1 << (port_id % 32);
+
+	if (is_ldb && port_id < 32)
+		reg = DLB_CHP_LDB_CQ_INTR_ARMED0;
+	else if (is_ldb && port_id < 64)
+		reg = DLB_CHP_LDB_CQ_INTR_ARMED1;
+	else if (!is_ldb && port_id < 32)
+		reg = DLB_CHP_DIR_CQ_INTR_ARMED0;
+	else if (!is_ldb && port_id < 64)
+		reg = DLB_CHP_DIR_CQ_INTR_ARMED1;
+	else if (!is_ldb && port_id < 96)
+		reg = DLB_CHP_DIR_CQ_INTR_ARMED2;
+	else
+		reg = DLB_CHP_DIR_CQ_INTR_ARMED3;
+
+	DLB_CSR_WR(hw, reg, val);
+
+	dlb_flush_csr(hw);
+
+	return 0;
+}
+
+void dlb_read_compressed_cq_intr_status(struct dlb_hw *hw,
+					u32 *ldb_interrupts,
+					u32 *dir_interrupts)
+{
+	/* Read every CQ's interrupt status */
+
+	ldb_interrupts[0] = DLB_CSR_RD(hw, DLB_SYS_LDB_CQ_31_0_OCC_INT_STS);
+	ldb_interrupts[1] = DLB_CSR_RD(hw, DLB_SYS_LDB_CQ_63_32_OCC_INT_STS);
+
+	dir_interrupts[0] = DLB_CSR_RD(hw, DLB_SYS_DIR_CQ_31_0_OCC_INT_STS);
+	dir_interrupts[1] = DLB_CSR_RD(hw, DLB_SYS_DIR_CQ_63_32_OCC_INT_STS);
+	dir_interrupts[2] = DLB_CSR_RD(hw, DLB_SYS_DIR_CQ_95_64_OCC_INT_STS);
+	dir_interrupts[3] = DLB_CSR_RD(hw, DLB_SYS_DIR_CQ_127_96_OCC_INT_STS);
+}
+
+static void dlb_ack_msix_interrupt(struct dlb_hw *hw, int vector)
+{
+	union dlb_sys_msix_ack r0 = { {0} };
+
+	switch (vector) {
+	case 0:
+		r0.field.msix_0_ack = 1;
+		break;
+	case 1:
+		r0.field.msix_1_ack = 1;
+		break;
+	case 2:
+		r0.field.msix_2_ack = 1;
+		break;
+	case 3:
+		r0.field.msix_3_ack = 1;
+		break;
+	case 4:
+		r0.field.msix_4_ack = 1;
+		break;
+	case 5:
+		r0.field.msix_5_ack = 1;
+		break;
+	case 6:
+		r0.field.msix_6_ack = 1;
+		break;
+	case 7:
+		r0.field.msix_7_ack = 1;
+		break;
+	case 8:
+		r0.field.msix_8_ack = 1;
+		/*
+		 * CSSY-1650
+		 * Workaround for a hardware bug that can lose MSI-X
+		 * interrupts.
+		 *
+		 * The recommended workaround for acknowledging
+		 * vector 8 interrupts is:
+		 *   1: set   MSI-X mask
+		 *   2: set   MSIX_PASSTHROUGH
+		 *   3: clear MSIX_ACK
+		 *   4: clear MSIX_PASSTHROUGH
+		 *   5: clear MSI-X mask
+		 *
+		 * MSIX_ACK (step 3) is cleared for all vectors below;
+		 * steps 1 & 2 for vector 8 are handled here.
+		 *
+		 * The bitfields for MSIX_ACK and MSIX_PASSTHRU are
+		 * defined identically, so the MSIX_ACK value is reused
+		 * when writing to PASSTHRU.
+		 */
+
+		/* set MSI-X mask and passthrough for vector 8 */
+		DLB_FUNC_WR(hw, DLB_MSIX_MEM_VECTOR_CTRL(8), 1);
+		DLB_CSR_WR(hw, DLB_SYS_MSIX_PASSTHRU, r0.val);
+		break;
+	}
+
+	/* clear MSIX_ACK (write one to clear) */
+	DLB_CSR_WR(hw, DLB_SYS_MSIX_ACK, r0.val);
+
+	if (vector == 8) {
+		/*
+		 * Finish steps 4 & 5 of the workaround:
+		 * clear passthrough and mask.
+		 */
+		DLB_CSR_WR(hw, DLB_SYS_MSIX_PASSTHRU, 0);
+		DLB_FUNC_WR(hw, DLB_MSIX_MEM_VECTOR_CTRL(8), 0);
+	}
+
+	dlb_flush_csr(hw);
+}
+
+void dlb_ack_compressed_cq_intr(struct dlb_hw *hw,
+				u32 *ldb_interrupts,
+				u32 *dir_interrupts)
+{
+	/* Write back the status regs to ack the interrupts */
+	if (ldb_interrupts[0])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_LDB_CQ_31_0_OCC_INT_STS,
+			   ldb_interrupts[0]);
+	if (ldb_interrupts[1])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_LDB_CQ_63_32_OCC_INT_STS,
+			   ldb_interrupts[1]);
+
+	if (dir_interrupts[0])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_CQ_31_0_OCC_INT_STS,
+			   dir_interrupts[0]);
+	if (dir_interrupts[1])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_CQ_63_32_OCC_INT_STS,
+			   dir_interrupts[1]);
+	if (dir_interrupts[2])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_CQ_95_64_OCC_INT_STS,
+			   dir_interrupts[2]);
+	if (dir_interrupts[3])
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_CQ_127_96_OCC_INT_STS,
+			   dir_interrupts[3]);
+
+	dlb_ack_msix_interrupt(hw, DLB_PF_COMPRESSED_MODE_CQ_VECTOR_ID);
+}
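+
+/*
+ * Editorial sketch of a compressed-mode CQ interrupt handler built from
+ * the helpers above (the handler name and the wake step are hypothetical):
+ *
+ *	static void dlb_compressed_cq_isr(struct dlb_hw *hw)
+ *	{
+ *		u32 ldb[2], dir[4];
+ *
+ *		dlb_read_compressed_cq_intr_status(hw, ldb, dir);
+ *
+ *		wake_cq_waiters(hw, ldb, dir);
+ *
+ *		dlb_ack_compressed_cq_intr(hw, ldb, dir);
+ *	}
+ *
+ * Ports typically re-arm their CQ via dlb_arm_cq_interrupt() before
+ * waiting again.
+ */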
+
+u32 dlb_read_vf_intr_status(struct dlb_hw *hw)
+{
+	return DLB_FUNC_RD(hw, DLB_FUNC_VF_VF_MSI_ISR);
+}
+
+void dlb_ack_vf_intr_status(struct dlb_hw *hw, u32 interrupts)
+{
+	DLB_FUNC_WR(hw, DLB_FUNC_VF_VF_MSI_ISR, interrupts);
+}
+
+void dlb_ack_vf_msi_intr(struct dlb_hw *hw, u32 interrupts)
+{
+	DLB_FUNC_WR(hw, DLB_FUNC_VF_VF_MSI_ISR_PEND, interrupts);
+}
+
+void dlb_ack_pf_mbox_int(struct dlb_hw *hw)
+{
+	union dlb_func_vf_pf2vf_mailbox_isr r0;
+
+	r0.field.pf_isr = 1;
+
+	DLB_FUNC_WR(hw, DLB_FUNC_VF_PF2VF_MAILBOX_ISR, r0.val);
+}
+
+u32 dlb_read_vf_to_pf_int_bitvec(struct dlb_hw *hw)
+{
+	/* The PF has one VF->PF MBOX ISR register per VF space, but they all
+	 * alias to the same physical register.
+	 */
+	return DLB_FUNC_RD(hw, DLB_FUNC_PF_VF2PF_MAILBOX_ISR(0));
+}
+
+void dlb_ack_vf_mbox_int(struct dlb_hw *hw, u32 bitvec)
+{
+	/* The PF has one VF->PF MBOX ISR register per VF space, but they all
+	 * alias to the same physical register.
+	 */
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_VF2PF_MAILBOX_ISR(0), bitvec);
+}
+
+u32 dlb_read_vf_flr_int_bitvec(struct dlb_hw *hw)
+{
+	/* The PF has one VF->PF FLR ISR register per VF space, but they all
+	 * alias to the same physical register.
+	 */
+	return DLB_FUNC_RD(hw, DLB_FUNC_PF_VF2PF_FLR_ISR(0));
+}
+
+void dlb_set_vf_reset_in_progress(struct dlb_hw *hw, int vf)
+{
+	u32 bitvec = DLB_FUNC_RD(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0));
+
+	bitvec |= (1 << vf);
+
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0), bitvec);
+}
+
+void dlb_clr_vf_reset_in_progress(struct dlb_hw *hw, int vf)
+{
+	u32 bitvec = DLB_FUNC_RD(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0));
+
+	bitvec &= ~(1 << vf);
+
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0), bitvec);
+}
+
+void dlb_ack_vf_flr_int(struct dlb_hw *hw, u32 bitvec, bool a_stepping)
+{
+	union dlb_sys_func_vf_bar_dsbl r0 = { {0} };
+	u32 clear;
+	int i;
+
+	if (!bitvec)
+		return;
+
+	/* Re-enable access to the VF BAR */
+	r0.field.func_vf_bar_dis = 0;
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		if (!(bitvec & (1 << i)))
+			continue;
+
+		DLB_CSR_WR(hw, DLB_SYS_FUNC_VF_BAR_DSBL(i), r0.val);
+	}
+
+	/* Notify the VF driver that the reset has completed. This register is
+	 * RW in A-stepping devices, WOCLR otherwise.
+	 */
+	if (a_stepping) {
+		clear = DLB_FUNC_RD(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0));
+		clear &= ~bitvec;
+	} else {
+		clear = bitvec;
+	}
+
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_VF_RESET_IN_PROGRESS(0), clear);
+
+	/* Mark the FLR ISR as complete */
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_VF2PF_FLR_ISR(0), bitvec);
+}
+
+void dlb_ack_vf_to_pf_int(struct dlb_hw *hw,
+			  u32 mbox_bitvec,
+			  u32 flr_bitvec)
+{
+	int i;
+
+	dlb_ack_msix_interrupt(hw, DLB_INT_VF_TO_PF_MBOX);
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		union dlb_func_pf_vf2pf_isr_pend r0 = { {0} };
+
+		if (!((mbox_bitvec & (1 << i)) || (flr_bitvec & (1 << i))))
+			continue;
+
+		/* Unset the VF's ISR pending bit */
+		r0.field.isr_pend = 1;
+		DLB_FUNC_WR(hw, DLB_FUNC_PF_VF2PF_ISR_PEND(i), r0.val);
+	}
+}
+
+void dlb_enable_alarm_interrupts(struct dlb_hw *hw)
+{
+	union dlb_sys_ingress_alarm_enbl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_INGRESS_ALARM_ENBL);
+
+	r0.field.illegal_hcw = 1;
+	r0.field.illegal_pp = 1;
+	r0.field.disabled_pp = 1;
+	r0.field.illegal_qid = 1;
+	r0.field.disabled_qid = 1;
+	r0.field.illegal_ldb_qid_cfg = 1;
+	r0.field.illegal_cqid = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_INGRESS_ALARM_ENBL, r0.val);
+}
+
+void dlb_disable_alarm_interrupts(struct dlb_hw *hw)
+{
+	union dlb_sys_ingress_alarm_enbl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_INGRESS_ALARM_ENBL);
+
+	r0.field.illegal_hcw = 0;
+	r0.field.illegal_pp = 0;
+	r0.field.disabled_pp = 0;
+	r0.field.illegal_qid = 0;
+	r0.field.disabled_qid = 0;
+	r0.field.illegal_ldb_qid_cfg = 0;
+	r0.field.illegal_cqid = 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_INGRESS_ALARM_ENBL, r0.val);
+}
+
+static void dlb_log_alarm_syndrome(struct dlb_hw *hw,
+				   const char *str,
+				   union dlb_sys_alarm_hw_synd r0)
+{
+	DLB_HW_ERR(hw, "%s:\n", str);
+	DLB_HW_ERR(hw, "\tsyndrome: 0x%x\n", r0.field.syndrome);
+	DLB_HW_ERR(hw, "\trtype:    0x%x\n", r0.field.rtype);
+	DLB_HW_ERR(hw, "\tfrom_dmv: 0x%x\n", r0.field.from_dmv);
+	DLB_HW_ERR(hw, "\tis_ldb:   0x%x\n", r0.field.is_ldb);
+	DLB_HW_ERR(hw, "\tcls:      0x%x\n", r0.field.cls);
+	DLB_HW_ERR(hw, "\taid:      0x%x\n", r0.field.aid);
+	DLB_HW_ERR(hw, "\tunit:     0x%x\n", r0.field.unit);
+	DLB_HW_ERR(hw, "\tsource:   0x%x\n", r0.field.source);
+	DLB_HW_ERR(hw, "\tmore:     0x%x\n", r0.field.more);
+	DLB_HW_ERR(hw, "\tvalid:    0x%x\n", r0.field.valid);
+}
+
+/* Note: this array's contents must match dlb_alert_id() */
+static const char dlb_alert_strings[NUM_DLB_DOMAIN_ALERTS][128] = {
+	[DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS] = "Insufficient credits",
+	[DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ] = "Illegal enqueue",
+	[DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS] = "Excess token pops",
+	[DLB_DOMAIN_ALERT_ILLEGAL_HCW] = "Illegal HCW",
+	[DLB_DOMAIN_ALERT_ILLEGAL_QID] = "Illegal QID",
+	[DLB_DOMAIN_ALERT_DISABLED_QID] = "Disabled QID",
+};
+
+static void dlb_log_pf_vf_syndrome(struct dlb_hw *hw,
+				   const char *str,
+				   union dlb_sys_alarm_pf_synd0 r0,
+				   union dlb_sys_alarm_pf_synd1 r1,
+				   union dlb_sys_alarm_pf_synd2 r2,
+				   u32 alert_id)
+{
+	DLB_HW_ERR(hw, "%s:\n", str);
+	if (alert_id < NUM_DLB_DOMAIN_ALERTS)
+		DLB_HW_ERR(hw, "Alert: %s\n", dlb_alert_strings[alert_id]);
+	DLB_HW_ERR(hw, "\tsyndrome:     0x%x\n", r0.field.syndrome);
+	DLB_HW_ERR(hw, "\trtype:        0x%x\n", r0.field.rtype);
+	DLB_HW_ERR(hw, "\tfrom_dmv:     0x%x\n", r0.field.from_dmv);
+	DLB_HW_ERR(hw, "\tis_ldb:       0x%x\n", r0.field.is_ldb);
+	DLB_HW_ERR(hw, "\tcls:          0x%x\n", r0.field.cls);
+	DLB_HW_ERR(hw, "\taid:          0x%x\n", r0.field.aid);
+	DLB_HW_ERR(hw, "\tunit:         0x%x\n", r0.field.unit);
+	DLB_HW_ERR(hw, "\tsource:       0x%x\n", r0.field.source);
+	DLB_HW_ERR(hw, "\tmore:         0x%x\n", r0.field.more);
+	DLB_HW_ERR(hw, "\tvalid:        0x%x\n", r0.field.valid);
+	DLB_HW_ERR(hw, "\tdsi:          0x%x\n", r1.field.dsi);
+	DLB_HW_ERR(hw, "\tqid:          0x%x\n", r1.field.qid);
+	DLB_HW_ERR(hw, "\tqtype:        0x%x\n", r1.field.qtype);
+	DLB_HW_ERR(hw, "\tqpri:         0x%x\n", r1.field.qpri);
+	DLB_HW_ERR(hw, "\tmsg_type:     0x%x\n", r1.field.msg_type);
+	DLB_HW_ERR(hw, "\tlock_id:      0x%x\n", r2.field.lock_id);
+	DLB_HW_ERR(hw, "\tmeas:         0x%x\n", r2.field.meas);
+	DLB_HW_ERR(hw, "\tdebug:        0x%x\n", r2.field.debug);
+	DLB_HW_ERR(hw, "\tcq_pop:       0x%x\n", r2.field.cq_pop);
+	DLB_HW_ERR(hw, "\tqe_uhl:       0x%x\n", r2.field.qe_uhl);
+	DLB_HW_ERR(hw, "\tqe_orsp:      0x%x\n", r2.field.qe_orsp);
+	DLB_HW_ERR(hw, "\tqe_valid:     0x%x\n", r2.field.qe_valid);
+	DLB_HW_ERR(hw, "\tcq_int_rearm: 0x%x\n", r2.field.cq_int_rearm);
+	DLB_HW_ERR(hw, "\tdsi_error:    0x%x\n", r2.field.dsi_error);
+}
+
+static void dlb_clear_syndrome_register(struct dlb_hw *hw, u32 offset)
+{
+	union dlb_sys_alarm_hw_synd r0 = { {0} };
+
+	r0.field.valid = 1;
+	r0.field.more = 1;
+
+	DLB_CSR_WR(hw, offset, r0.val);
+}
+
+void dlb_process_alarm_interrupt(struct dlb_hw *hw)
+{
+	union dlb_sys_alarm_hw_synd r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_HW_SYND);
+
+	dlb_log_alarm_syndrome(hw, "HW alarm syndrome", r0);
+
+	dlb_clear_syndrome_register(hw, DLB_SYS_ALARM_HW_SYND);
+
+	dlb_ack_msix_interrupt(hw, DLB_INT_ALARM);
+}
+
+static void dlb_process_ingress_error(struct dlb_hw *hw,
+				      union dlb_sys_alarm_pf_synd0 r0,
+				      u32 alert_id,
+				      bool vf_error,
+				      unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	bool is_ldb;
+	u8 port_id;
+	int ret;
+
+	port_id = r0.field.syndrome & 0x7F;
+	if (r0.field.source == DLB_ALARM_HW_SOURCE_SYS)
+		is_ldb = r0.field.is_ldb;
+	else
+		is_ldb = (r0.field.syndrome & 0x80) != 0;
+
+	/* Get the domain ID and, if it's a VF domain, the virtual port ID */
+	if (is_ldb) {
+		struct dlb_ldb_port *port;
+
+		port = dlb_get_ldb_port_from_id(hw, port_id, vf_error, vf_id);
+
+		if (!port) {
+			DLB_HW_ERR(hw,
+				   "[%s()]: Internal error: unable to find LDB port\n\tport: %u, vf_error: %u, vf_id: %u\n",
+				   __func__, port_id, vf_error, vf_id);
+			return;
+		}
+
+		domain = &hw->domains[port->domain_id.phys_id];
+	} else {
+		struct dlb_dir_pq_pair *port;
+
+		port = dlb_get_dir_pq_from_id(hw, port_id, vf_error, vf_id);
+
+		if (!port) {
+			DLB_HW_ERR(hw,
+				   "[%s()]: Internal error: unable to find DIR port\n\tport: %u, vf_error: %u, vf_id: %u\n",
+				   __func__, port_id, vf_error, vf_id);
+			return;
+		}
+
+		domain = &hw->domains[port->domain_id.phys_id];
+	}
+
+	if (vf_error)
+		ret = dlb_vf_domain_alert(hw,
+					  vf_id,
+					  domain->id.virt_id,
+					  alert_id,
+					  (is_ldb << 8) | port_id);
+	else
+		ret = os_notify_user_space(hw,
+					   domain->id.phys_id,
+					   alert_id,
+					   (is_ldb << 8) | port_id);
+
+	if (ret)
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: failed to notify\n",
+			   __func__);
+}
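+
+/*
+ * Editorial note: the alert payload sent above packs the port ID into the
+ * low byte and the LDB flag into bit 8, so a receiver can decode it as:
+ *
+ *	port_id = aux_alert_data & 0xFF;
+ *	is_ldb  = !!(aux_alert_data & 0x100);
+ */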
+
+static u32 dlb_alert_id(union dlb_sys_alarm_pf_synd0 r0)
+{
+	if (r0.field.unit == DLB_ALARM_HW_UNIT_CHP &&
+	    r0.field.aid == DLB_ALARM_HW_CHP_AID_OUT_OF_CREDITS)
+		return DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS;
+	else if (r0.field.unit == DLB_ALARM_HW_UNIT_CHP &&
+		 r0.field.aid == DLB_ALARM_HW_CHP_AID_ILLEGAL_ENQ)
+		return DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ;
+	else if (r0.field.unit == DLB_ALARM_HW_UNIT_LSP &&
+		 r0.field.aid == DLB_ALARM_HW_LSP_AID_EXCESS_TOKEN_POPS)
+		return DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS;
+	else if (r0.field.source == DLB_ALARM_HW_SOURCE_SYS &&
+		 r0.field.aid == DLB_ALARM_SYS_AID_ILLEGAL_HCW)
+		return DLB_DOMAIN_ALERT_ILLEGAL_HCW;
+	else if (r0.field.source == DLB_ALARM_HW_SOURCE_SYS &&
+		 r0.field.aid == DLB_ALARM_SYS_AID_ILLEGAL_QID)
+		return DLB_DOMAIN_ALERT_ILLEGAL_QID;
+	else if (r0.field.source == DLB_ALARM_HW_SOURCE_SYS &&
+		 r0.field.aid == DLB_ALARM_SYS_AID_DISABLED_QID)
+		return DLB_DOMAIN_ALERT_DISABLED_QID;
+	else
+		return NUM_DLB_DOMAIN_ALERTS;
+}
+
+void dlb_process_ingress_error_interrupt(struct dlb_hw *hw)
+{
+	union dlb_sys_alarm_pf_synd0 r0;
+	union dlb_sys_alarm_pf_synd1 r1;
+	union dlb_sys_alarm_pf_synd2 r2;
+	u32 alert_id;
+	int i;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_PF_SYND0);
+
+	if (r0.field.valid) {
+		r1.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_PF_SYND1);
+		r2.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_PF_SYND2);
+
+		alert_id = dlb_alert_id(r0);
+
+		dlb_log_pf_vf_syndrome(hw,
+				       "PF Ingress error alarm",
+				       r0, r1, r2, alert_id);
+
+		dlb_clear_syndrome_register(hw, DLB_SYS_ALARM_PF_SYND0);
+
+		dlb_process_ingress_error(hw, r0, alert_id, false, 0);
+	}
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		r0.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_VF_SYND0(i));
+
+		if (!r0.field.valid)
+			continue;
+
+		r1.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_VF_SYND1(i));
+		r2.val = DLB_CSR_RD(hw, DLB_SYS_ALARM_VF_SYND2(i));
+
+		alert_id = dlb_alert_id(r0);
+
+		dlb_log_pf_vf_syndrome(hw,
+				       "VF Ingress error alarm",
+				       r0, r1, r2, alert_id);
+
+		dlb_clear_syndrome_register(hw,
+					    DLB_SYS_ALARM_VF_SYND0(i));
+
+		dlb_process_ingress_error(hw, r0, alert_id, true, i);
+	}
+
+	dlb_ack_msix_interrupt(hw, DLB_INT_INGRESS_ERROR);
+}
+
+int dlb_get_group_sequence_numbers(struct dlb_hw *hw, unsigned int group_id)
+{
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+int dlb_get_group_sequence_number_occupancy(struct dlb_hw *hw,
+					    unsigned int group_id)
+{
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb_log_set_group_sequence_numbers(struct dlb_hw *hw,
+					       unsigned int group_id,
+					       unsigned long val)
+{
+	DLB_HW_INFO(hw, "DLB set group sequence numbers:\n");
+	DLB_HW_INFO(hw, "\tGroup ID: %u\n", group_id);
+	DLB_HW_INFO(hw, "\tValue:    %lu\n", val);
+}
+
+int dlb_set_group_sequence_numbers(struct dlb_hw *hw,
+				   unsigned int group_id,
+				   unsigned long val)
+{
+	u32 valid_allocations[6] = {32, 64, 128, 256, 512, 1024};
+	union dlb_ro_pipe_grp_sn_mode r0 = { {0} };
+	struct dlb_sn_group *group;
+	int mode;
+
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/* Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
+	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
+	r0.field.sn_mode_2 = hw->rsrcs.sn_groups[2].mode;
+	r0.field.sn_mode_3 = hw->rsrcs.sn_groups[3].mode;
+
+	DLB_CSR_WR(hw, DLB_RO_PIPE_GRP_SN_MODE, r0.val);
+
+	dlb_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
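+
+/*
+ * Editorial example: dlb_set_group_sequence_numbers(hw, 0, 256) selects
+ * mode 3 (valid_allocations[3] == 256) for group 0, giving each queue
+ * configured against that group 256 sequence numbers. Once any queue in
+ * the group is configured, the call fails with -EPERM.
+ */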
+
+void dlb_disable_dp_vasr_feature(struct dlb_hw *hw)
+{
+	union dlb_dp_dir_csr_ctrl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_DP_DIR_CSR_CTRL);
+
+	r0.field.cfg_vasr_dis = 1;
+
+	DLB_CSR_WR(hw, DLB_DP_DIR_CSR_CTRL, r0.val);
+}
+
+void dlb_enable_excess_tokens_alarm(struct dlb_hw *hw)
+{
+	union dlb_chp_cfg_chp_csr_ctrl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_CHP_CFG_CHP_CSR_CTRL);
+
+	r0.val |= 1 << DLB_CHP_CFG_EXCESS_TOKENS_SHIFT;
+
+	DLB_CSR_WR(hw, DLB_CHP_CFG_CHP_CSR_CTRL, r0.val);
+}
+
+void dlb_disable_excess_tokens_alarm(struct dlb_hw *hw)
+{
+	union dlb_chp_cfg_chp_csr_ctrl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_CHP_CFG_CHP_CSR_CTRL);
+
+	r0.val &= ~(1 << DLB_CHP_CFG_EXCESS_TOKENS_SHIFT);
+
+	DLB_CSR_WR(hw, DLB_CHP_CFG_CHP_CSR_CTRL, r0.val);
+}
+
+static int dlb_reset_hw_resource(struct dlb_hw *hw, int type, int id)
+{
+	union dlb_cfg_mstr_diag_reset_sts r0 = { {0} };
+	union dlb_cfg_mstr_bcast_reset_vf_start r1 = { {0} };
+	int i;
+
+	r1.field.vf_reset_start = 1;
+
+	r1.field.vf_reset_type = type;
+	r1.field.vf_reset_id = id;
+
+	DLB_CSR_WR(hw, DLB_CFG_MSTR_BCAST_RESET_VF_START, r1.val);
+
+	/* Wait for hardware to complete. This is a finite-time operation,
+	 * but set a loop bound just in case.
+	 */
+	for (i = 0; i < 1024 * 1024; i++) {
+		r0.val = DLB_CSR_RD(hw, DLB_CFG_MSTR_DIAG_RESET_STS);
+
+		if (r0.field.chp_vf_reset_done &&
+		    r0.field.rop_vf_reset_done &&
+		    r0.field.lsp_vf_reset_done &&
+		    r0.field.nalb_vf_reset_done &&
+		    r0.field.ap_vf_reset_done &&
+		    r0.field.dp_vf_reset_done &&
+		    r0.field.qed_vf_reset_done &&
+		    r0.field.dqed_vf_reset_done &&
+		    r0.field.aqed_vf_reset_done)
+			return 0;
+
+		os_udelay(1);
+	}
+
+	return -ETIMEDOUT;
+}
+
+static int dlb_domain_reset_hw_resources(struct dlb_hw *hw,
+					 struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *dir_port;
+	struct dlb_ldb_queue *ldb_queue;
+	struct dlb_ldb_port *ldb_port;
+	struct dlb_credit_pool *pool;
+	int ret;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_POOL_LDB,
+					    pool->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_POOL_DIR,
+					    pool->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_QID_LDB,
+					    ldb_queue->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_QID_DIR,
+					    dir_port->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, ldb_port, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_CQ_LDB,
+					    ldb_port->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		ret = dlb_reset_hw_resource(hw,
+					    VF_RST_TYPE_CQ_DIR,
+					    dir_port->id.phys_id);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static u32 dlb_ldb_cq_inflight_count(struct dlb_hw *hw,
+				     struct dlb_ldb_port *port)
+{
+	union dlb_lsp_cq_ldb_infl_cnt r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ_LDB_INFL_CNT(port->id.phys_id));
+
+	return r0.field.count;
+}
+
+static u32 dlb_ldb_cq_token_count(struct dlb_hw *hw,
+				  struct dlb_ldb_port *port)
+{
+	union dlb_lsp_cq_ldb_tkn_cnt r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ_LDB_TKN_CNT(port->id.phys_id));
+
+	return r0.field.token_count;
+}
+
+static int dlb_drain_ldb_cq(struct dlb_hw *hw, struct dlb_ldb_port *port)
+{
+	u32 infl_cnt, tkn_cnt;
+	unsigned int i;
+
+	infl_cnt = dlb_ldb_cq_inflight_count(hw, port);
+
+	/* Account for the initial token count, which is used to provide
+	 * CQs with depth less than 8.
+	 */
+	tkn_cnt = dlb_ldb_cq_token_count(hw, port) - port->init_tkn_cnt;
+
+	if (infl_cnt || tkn_cnt) {
+		struct dlb_hcw hcw_mem[8], *hcw;
+		void  *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port->id.phys_id, true);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/* Program the first HCW for a completion and token return and
+		 * the other HCWs as NOOPs.
+		 */
+
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->qe_comp = (infl_cnt > 0);
+		hcw->cq_token = (tkn_cnt > 0);
+		hcw->lock_id = tkn_cnt - 1;
+
+		/* Return tokens in the first HCW */
+		os_enqueue_four_hcws(hw, hcw, pp_addr);
+
+		hcw->cq_token = 0;
+
+		/* Issue remaining completions (if any) */
+		for (i = 1; i < infl_cnt; i++)
+			os_enqueue_four_hcws(hw, hcw, pp_addr);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+
+	return 0;
+}
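+
+/*
+ * Editorial note on dlb_drain_ldb_cq(): the first 4-HCW batch returns the
+ * CQ's outstanding tokens and retires one inflight; each later batch
+ * retires one more. With infl_cnt inflights outstanding, infl_cnt batches
+ * are enqueued in total (this assumes lock_id encodes the token-return
+ * count minus one, as the tkn_cnt - 1 write above suggests).
+ */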
+
+static int dlb_domain_wait_for_ldb_cqs_to_empty(struct dlb_hw *hw,
+						struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		int i;
+
+		for (i = 0; i < DLB_MAX_CQ_COMP_CHECK_LOOPS; i++) {
+			if (dlb_ldb_cq_inflight_count(hw, port) == 0)
+				break;
+		}
+
+		if (i == DLB_MAX_CQ_COMP_CHECK_LOOPS) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to flush load-balanced port %d's completions.\n",
+				   __func__, port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static int dlb_domain_reset_software_state(struct dlb_hw *hw,
+					   struct dlb_domain *domain)
+{
+	struct dlb_ldb_queue *tmp_ldb_queue __attribute__((unused));
+	struct dlb_dir_pq_pair *tmp_dir_port __attribute__((unused));
+	struct dlb_ldb_port *tmp_ldb_port __attribute__((unused));
+	struct dlb_credit_pool *tmp_pool __attribute__((unused));
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	struct dlb_ldb_queue *ldb_queue;
+	struct dlb_dir_pq_pair *dir_port;
+	struct dlb_ldb_port *ldb_port;
+	struct dlb_credit_pool *pool;
+
+	struct dlb_function_resources *rsrcs;
+	struct dlb_list_head *list;
+	int ret;
+
+	rsrcs = domain->parent_func;
+
+	/* Move the domain's ldb queues to the function's avail list */
+	list = &domain->used_ldb_queues;
+	DLB_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		if (ldb_queue->sn_cfg_valid) {
+			struct dlb_sn_group *grp;
+
+			grp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];
+
+			dlb_sn_group_free_slot(grp, ldb_queue->sn_slot);
+			ldb_queue->sn_cfg_valid = false;
+		}
+
+		ldb_queue->owned = false;
+		ldb_queue->num_mappings = 0;
+		ldb_queue->num_pending_additions = 0;
+
+		dlb_list_del(&domain->used_ldb_queues, &ldb_queue->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_queues, &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	list = &domain->avail_ldb_queues;
+	DLB_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {
+		ldb_queue->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_queues,
+			     &ldb_queue->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_queues,
+			     &ldb_queue->func_list);
+		rsrcs->num_avail_ldb_queues++;
+	}
+
+	/* Move the domain's ldb ports to the function's avail list */
+	list = &domain->used_ldb_ports;
+	DLB_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port, iter1, iter2) {
+		int i;
+
+		ldb_port->owned = false;
+		ldb_port->configured = false;
+		ldb_port->num_pending_removals = 0;
+		ldb_port->num_mappings = 0;
+		for (i = 0; i < DLB_MAX_NUM_QIDS_PER_LDB_CQ; i++)
+			ldb_port->qid_map[i].state = DLB_QUEUE_UNMAPPED;
+
+		dlb_list_del(&domain->used_ldb_ports, &ldb_port->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_ports, &ldb_port->func_list);
+		rsrcs->num_avail_ldb_ports++;
+	}
+
+	list = &domain->avail_ldb_ports;
+	DLB_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port, iter1, iter2) {
+		ldb_port->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_ports, &ldb_port->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_ports, &ldb_port->func_list);
+		rsrcs->num_avail_ldb_ports++;
+	}
+
+	/* Move the domain's dir ports to the function's avail list */
+	list = &domain->used_dir_pq_pairs;
+	DLB_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+		dir_port->port_configured = false;
+
+		dlb_list_del(&domain->used_dir_pq_pairs,
+			     &dir_port->domain_list);
+
+		dlb_list_add(&rsrcs->avail_dir_pq_pairs,
+			     &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	list = &domain->avail_dir_pq_pairs;
+	DLB_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {
+		dir_port->owned = false;
+
+		dlb_list_del(&domain->avail_dir_pq_pairs,
+			     &dir_port->domain_list);
+
+		dlb_list_add(&rsrcs->avail_dir_pq_pairs,
+			     &dir_port->func_list);
+		rsrcs->num_avail_dir_pq_pairs++;
+	}
+
+	/* Return hist list entries to the function */
+	ret = dlb_bitmap_set_range(rsrcs->avail_hist_list_entries,
+				   domain->hist_list_entry_base,
+				   domain->total_hist_list_entries);
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: domain hist list base doesn't match the function's bitmap.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	domain->total_hist_list_entries = 0;
+	domain->avail_hist_list_entries = 0;
+	domain->hist_list_entry_base = 0;
+	domain->hist_list_entry_offset = 0;
+
+	/* Return QED entries to the function */
+	ret = dlb_bitmap_set_range(rsrcs->avail_qed_freelist_entries,
+				   domain->qed_freelist.base,
+				   (domain->qed_freelist.bound -
+					domain->qed_freelist.base));
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: domain QED base doesn't match the function's bitmap.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	domain->qed_freelist.base = 0;
+	domain->qed_freelist.bound = 0;
+	domain->qed_freelist.offset = 0;
+
+	/* Return DQED entries back to the function */
+	ret = dlb_bitmap_set_range(rsrcs->avail_dqed_freelist_entries,
+				   domain->dqed_freelist.base,
+				   (domain->dqed_freelist.bound -
+					domain->dqed_freelist.base));
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: domain DQED base doesn't match the function's bitmap.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	domain->dqed_freelist.base = 0;
+	domain->dqed_freelist.bound = 0;
+	domain->dqed_freelist.offset = 0;
+
+	/* Return AQED entries back to the function */
+	ret = dlb_bitmap_set_range(rsrcs->avail_aqed_freelist_entries,
+				   domain->aqed_freelist.base,
+				   (domain->aqed_freelist.bound -
+					domain->aqed_freelist.base));
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: domain AQED base doesn't match the function's bitmap.\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	domain->aqed_freelist.base = 0;
+	domain->aqed_freelist.bound = 0;
+	domain->aqed_freelist.offset = 0;
+
+	/* Return ldb credit pools back to the function's avail list */
+	list = &domain->used_ldb_credit_pools;
+	DLB_DOM_LIST_FOR_SAFE(*list, pool, tmp_pool, iter1, iter2) {
+		pool->owned = false;
+		pool->configured = false;
+
+		dlb_list_del(&domain->used_ldb_credit_pools,
+			     &pool->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_credit_pools,
+			     &pool->func_list);
+		rsrcs->num_avail_ldb_credit_pools++;
+	}
+
+	list = &domain->avail_ldb_credit_pools;
+	DLB_DOM_LIST_FOR_SAFE(*list, pool, tmp_pool, iter1, iter2) {
+		pool->owned = false;
+
+		dlb_list_del(&domain->avail_ldb_credit_pools,
+			     &pool->domain_list);
+		dlb_list_add(&rsrcs->avail_ldb_credit_pools,
+			     &pool->func_list);
+		rsrcs->num_avail_ldb_credit_pools++;
+	}
+
+	/* Move dir credit pools back to the function */
+	list = &domain->used_dir_credit_pools;
+	DLB_DOM_LIST_FOR_SAFE(*list, pool, tmp_pool, iter1, iter2) {
+		pool->owned = false;
+		pool->configured = false;
+
+		dlb_list_del(&domain->used_dir_credit_pools,
+			     &pool->domain_list);
+		dlb_list_add(&rsrcs->avail_dir_credit_pools,
+			     &pool->func_list);
+		rsrcs->num_avail_dir_credit_pools++;
+	}
+
+	list = &domain->avail_dir_credit_pools;
+	DLB_DOM_LIST_FOR_SAFE(*list, pool, tmp_pool, iter1, iter2) {
+		pool->owned = false;
+
+		dlb_list_del(&domain->avail_dir_credit_pools,
+			     &pool->domain_list);
+		dlb_list_add(&rsrcs->avail_dir_credit_pools,
+			     &pool->func_list);
+		rsrcs->num_avail_dir_credit_pools++;
+	}
+
+	domain->num_pending_removals = 0;
+	domain->num_pending_additions = 0;
+	domain->configured = false;
+	domain->started = false;
+
+	/* Move the domain out of the used_domains list and back to the
+	 * function's avail_domains list.
+	 */
+	dlb_list_del(&rsrcs->used_domains, &domain->func_list);
+	dlb_list_add(&rsrcs->avail_domains, &domain->func_list);
+	rsrcs->num_avail_domains++;
+
+	return 0;
+}
+
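+/* Reset the software state of every in-use domain, across all VFs and the
+ * PF.
+ */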
+void dlb_resource_reset(struct dlb_hw *hw)
+{
+	struct dlb_domain *domain, *next __attribute__((unused));
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	int i;
+
+	for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
+		DLB_FUNC_LIST_FOR_SAFE(hw->vf[i].used_domains, domain,
+				       next, iter1, iter2)
+			dlb_domain_reset_software_state(hw, domain);
+	}
+
+	DLB_FUNC_LIST_FOR_SAFE(hw->pf.used_domains, domain, next, iter1, iter2)
+		dlb_domain_reset_software_state(hw, domain);
+}
+
+static u32 dlb_dir_queue_depth(struct dlb_hw *hw,
+			       struct dlb_dir_pq_pair *queue)
+{
+	union dlb_lsp_qid_dir_enqueue_cnt r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
+
+	return r0.field.count;
+}
+
+static bool dlb_dir_queue_is_empty(struct dlb_hw *hw,
+				   struct dlb_dir_pq_pair *queue)
+{
+	return dlb_dir_queue_depth(hw, queue) == 0;
+}
+
+static void dlb_log_get_dir_queue_depth(struct dlb_hw *hw,
+					u32 domain_id,
+					u32 queue_id,
+					bool vf_request,
+					unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB get directed queue depth:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n", domain_id);
+	DLB_HW_INFO(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+int dlb_hw_get_dir_queue_depth(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_get_dir_queue_depth_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *queue;
+	struct dlb_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				    vf_request, vf_id);
+
+	domain = dlb_get_domain_from_id(hw, id, vf_request, vf_id);
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb_get_domain_used_dir_pq(id, vf_request, domain);
+	if (!queue) {
+		resp->status = DLB_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void
+dlb_log_pending_port_unmaps_args(struct dlb_hw *hw,
+				 struct dlb_pending_port_unmaps_args *args,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB pending port unmaps arguments:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+int dlb_hw_pending_port_unmaps(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_pending_port_unmaps_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	struct dlb_ldb_port *port;
+
+	dlb_log_pending_port_unmaps_args(hw, args, vf_request, vf_id);
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	port = dlb_get_domain_used_ldb_port(args->port_id, vf_request, domain);
+	if (!port || !port->configured) {
+		resp->status = DLB_ST_INVALID_PORT_ID;
+		return -EINVAL;
+	}
+
+	resp->id = port->num_pending_removals;
+
+	return 0;
+}
+
+/* Returns whether the queue is empty, taking its inflight and replay counts
+ * into account.
+ */
+static bool dlb_ldb_queue_is_empty(struct dlb_hw *hw,
+				   struct dlb_ldb_queue *queue)
+{
+	union dlb_lsp_qid_ldb_replay_cnt r0;
+	union dlb_lsp_qid_aqed_active_cnt r1;
+	union dlb_lsp_qid_atq_enqueue_cnt r2;
+	union dlb_lsp_qid_ldb_enqueue_cnt r3;
+	union dlb_lsp_qid_ldb_infl_cnt r4;
+
+	r0.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_REPLAY_CNT(queue->id.phys_id));
+	if (r0.val)
+		return false;
+
+	r1.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
+	if (r1.val)
+		return false;
+
+	r2.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_ATQ_ENQUEUE_CNT(queue->id.phys_id));
+	if (r2.val)
+		return false;
+
+	r3.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
+	if (r3.val)
+		return false;
+
+	r4.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_INFL_CNT(queue->id.phys_id));
+	if (r4.val)
+		return false;
+
+	return true;
+}
+
+static void dlb_log_get_ldb_queue_depth(struct dlb_hw *hw,
+					u32 domain_id,
+					u32 queue_id,
+					bool vf_request,
+					unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB get load-balanced queue depth:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n", domain_id);
+	DLB_HW_INFO(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+int dlb_hw_get_ldb_queue_depth(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_get_ldb_queue_depth_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_req,
+			       unsigned int vf_id)
+{
+	union dlb_lsp_qid_aqed_active_cnt r0;
+	union dlb_lsp_qid_atq_enqueue_cnt r1;
+	union dlb_lsp_qid_ldb_enqueue_cnt r2;
+	struct dlb_ldb_queue *queue;
+	struct dlb_domain *domain;
+
+	dlb_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				    vf_req, vf_id);
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_req, vf_id);
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb_get_domain_ldb_queue(args->queue_id, vf_req, domain);
+	if (!queue) {
+		resp->status = DLB_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	r0.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
+
+	r1.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_ATQ_ENQUEUE_CNT(queue->id.phys_id));
+
+	r2.val = DLB_CSR_RD(hw,
+			    DLB_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
+
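+	/* The queue's depth is the sum of its atomic (AQED active), atomic-
+	 * enqueue, and load-balanced-enqueue counts.
+	 */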
+	resp->id = r0.val + r1.val + r2.val;
+
+	return 0;
+}
+
+static u32 dlb_dir_cq_token_count(struct dlb_hw *hw,
+				  struct dlb_dir_pq_pair *port)
+{
+	union dlb_lsp_cq_dir_tkn_cnt r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_LSP_CQ_DIR_TKN_CNT(port->id.phys_id));
+
+	return r0.field.count;
+}
+
+static int dlb_domain_verify_reset_success(struct dlb_hw *hw,
+					   struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *dir_port;
+	struct dlb_ldb_port *ldb_port;
+	struct dlb_credit_pool *pool;
+	struct dlb_ldb_queue *queue;
+
+	/* Confirm that all credits are returned to the domain's credit pools */
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter) {
+		union dlb_chp_dqed_fl_pop_ptr r0;
+		union dlb_chp_dqed_fl_push_ptr r1;
+
+		r0.val = DLB_CSR_RD(hw,
+				    DLB_CHP_DQED_FL_POP_PTR(pool->id.phys_id));
+
+		r1.val = DLB_CSR_RD(hw,
+				    DLB_CHP_DQED_FL_PUSH_PTR(pool->id.phys_id));
+
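+		/* The freelist is full when the push and pop pointers match
+		 * and their generation bits differ.
+		 */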
+		if (r0.field.pop_ptr != r1.field.push_ptr ||
+		    r0.field.generation == r1.field.generation) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to refill directed pool %d's credits.\n",
+				   __func__, pool->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's queues' inflight counts and AQED
+	 * active counts are 0.
+	 */
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (!dlb_ldb_queue_is_empty(hw, queue)) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to empty ldb queue %d\n",
+				   __func__, queue->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	/* Confirm that all the domain's CQs' inflight and token counts are 0. */
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, ldb_port, iter) {
+		if (dlb_ldb_cq_inflight_count(hw, ldb_port) ||
+		    dlb_ldb_cq_token_count(hw, ldb_port)) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to empty ldb port %d\n",
+				   __func__, ldb_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {
+		if (!dlb_dir_queue_is_empty(hw, dir_port)) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to empty dir queue %d\n",
+				   __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+
+		if (dlb_dir_cq_token_count(hw, dir_port)) {
+			DLB_HW_ERR(hw,
+				   "[%s()] Internal error: failed to empty dir port %d\n",
+				   __func__, dir_port->id.phys_id);
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void __dlb_domain_reset_ldb_port_registers(struct dlb_hw *hw,
+						  struct dlb_ldb_port *port)
+{
+	union dlb_chp_ldb_pp_state_reset r0 = { {0} };
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_CRD_REQ_STATE(port->id.phys_id),
+		   DLB_CHP_LDB_PP_CRD_REQ_STATE_RST);
+
+	/* Reset the port's load-balanced and directed credit state */
+	r0.field.dir_type = 0;
+	r0.field.reset_pp_state = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_STATE_RESET(port->id.phys_id),
+		   r0.val);
+
+	r0.field.dir_type = 1;
+	r0.field.reset_pp_state = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_STATE_RESET(port->id.phys_id),
+		   r0.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_DIR_PUSH_PTR(port->id.phys_id),
+		   DLB_CHP_LDB_PP_DIR_PUSH_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_LDB_PUSH_PTR(port->id.phys_id),
+		   DLB_CHP_LDB_PP_LDB_PUSH_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT(port->id.phys_id),
+		   DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_LDB_CRD_LWM(port->id.phys_id),
+		   DLB_CHP_LDB_PP_LDB_CRD_LWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_LDB_CRD_HWM(port->id.phys_id),
+		   DLB_CHP_LDB_PP_LDB_CRD_HWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_LDB_PP2POOL(port->id.phys_id),
+		   DLB_CHP_LDB_LDB_PP2POOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT(port->id.phys_id),
+		   DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_DIR_CRD_LWM(port->id.phys_id),
+		   DLB_CHP_LDB_PP_DIR_CRD_LWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_PP_DIR_CRD_HWM(port->id.phys_id),
+		   DLB_CHP_LDB_PP_DIR_CRD_HWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_DIR_PP2POOL(port->id.phys_id),
+		   DLB_CHP_LDB_DIR_PP2POOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP2LDBPOOL(port->id.phys_id),
+		   DLB_SYS_LDB_PP2LDBPOOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP2DIRPOOL(port->id.phys_id),
+		   DLB_SYS_LDB_PP2DIRPOOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_HIST_LIST_LIM(port->id.phys_id),
+		   DLB_CHP_HIST_LIST_LIM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_HIST_LIST_BASE(port->id.phys_id),
+		   DLB_CHP_HIST_LIST_BASE_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_HIST_LIST_POP_PTR(port->id.phys_id),
+		   DLB_CHP_HIST_LIST_POP_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_HIST_LIST_PUSH_PTR(port->id.phys_id),
+		   DLB_CHP_HIST_LIST_PUSH_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_WPTR(port->id.phys_id),
+		   DLB_CHP_LDB_CQ_WPTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_INT_DEPTH_THRSH(port->id.phys_id),
+		   DLB_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_TMR_THRESHOLD(port->id.phys_id),
+		   DLB_CHP_LDB_CQ_TMR_THRESHOLD_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
+		   DLB_CHP_LDB_CQ_INT_ENB_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_LDB_INFL_LIM(port->id.phys_id),
+		   DLB_LSP_CQ_LDB_INFL_LIM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ2PRIOV(port->id.phys_id),
+		   DLB_LSP_CQ2PRIOV_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_LDB_TOT_SCH_CNT_CTRL(port->id.phys_id),
+		   DLB_LSP_CQ_LDB_TOT_SCH_CNT_CTRL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_LDB_TKN_DEPTH_SEL(port->id.phys_id),
+		   DLB_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_LDB_CQ_TKN_DEPTH_SEL(port->id.phys_id),
+		   DLB_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_LDB_DSBL(port->id.phys_id),
+		   DLB_LSP_CQ_LDB_DSBL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ2VF_PF(port->id.phys_id),
+		   DLB_SYS_LDB_CQ2VF_PF_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP2VF_PF(port->id.phys_id),
+		   DLB_SYS_LDB_PP2VF_PF_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ_ADDR_L(port->id.phys_id),
+		   DLB_SYS_LDB_CQ_ADDR_L_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ_ADDR_U(port->id.phys_id),
+		   DLB_SYS_LDB_CQ_ADDR_U_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP_ADDR_L(port->id.phys_id),
+		   DLB_SYS_LDB_PP_ADDR_L_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP_ADDR_U(port->id.phys_id),
+		   DLB_SYS_LDB_PP_ADDR_U_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP_V(port->id.phys_id),
+		   DLB_SYS_LDB_PP_V_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_PP2VAS(port->id.phys_id),
+		   DLB_SYS_LDB_PP2VAS_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_LDB_CQ_ISR(port->id.phys_id),
+		   DLB_SYS_LDB_CQ_ISR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_WBUF_LDB_FLAGS(port->id.phys_id),
+		   DLB_SYS_WBUF_LDB_FLAGS_RST);
+}
+
+static void dlb_domain_reset_ldb_port_registers(struct dlb_hw *hw,
+						struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		__dlb_domain_reset_ldb_port_registers(hw, port);
+}
+
+static void __dlb_domain_reset_dir_port_registers(struct dlb_hw *hw,
+						  struct dlb_dir_pq_pair *port)
+{
+	union dlb_chp_dir_pp_state_reset r0 = { {0} };
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_CRD_REQ_STATE(port->id.phys_id),
+		   DLB_CHP_DIR_PP_CRD_REQ_STATE_RST);
+
+	/* Reset the port's load-balanced and directed credit state */
+	r0.field.dir_type = 0;
+	r0.field.reset_pp_state = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_STATE_RESET(port->id.phys_id),
+		   r0.val);
+
+	r0.field.dir_type = 1;
+	r0.field.reset_pp_state = 1;
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_STATE_RESET(port->id.phys_id),
+		   r0.val);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_PUSH_PTR(port->id.phys_id),
+		   DLB_CHP_DIR_PP_DIR_PUSH_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_PUSH_PTR(port->id.phys_id),
+		   DLB_CHP_DIR_PP_LDB_PUSH_PTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT(port->id.phys_id),
+		   DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_CRD_LWM(port->id.phys_id),
+		   DLB_CHP_DIR_PP_LDB_CRD_LWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_LDB_CRD_HWM(port->id.phys_id),
+		   DLB_CHP_DIR_PP_LDB_CRD_HWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_LDB_PP2POOL(port->id.phys_id),
+		   DLB_CHP_DIR_LDB_PP2POOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT(port->id.phys_id),
+		   DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_CRD_LWM(port->id.phys_id),
+		   DLB_CHP_DIR_PP_DIR_CRD_LWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_PP_DIR_CRD_HWM(port->id.phys_id),
+		   DLB_CHP_DIR_PP_DIR_CRD_HWM_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_DIR_PP2POOL(port->id.phys_id),
+		   DLB_CHP_DIR_DIR_PP2POOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2LDBPOOL(port->id.phys_id),
+		   DLB_SYS_DIR_PP2LDBPOOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2DIRPOOL(port->id.phys_id),
+		   DLB_SYS_DIR_PP2DIRPOOL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_WPTR(port->id.phys_id),
+		   DLB_CHP_DIR_CQ_WPTR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(port->id.phys_id),
+		   DLB_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_TKN_DEPTH_SEL(port->id.phys_id),
+		   DLB_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_LSP_CQ_DIR_DSBL(port->id.phys_id),
+		   DLB_LSP_CQ_DIR_DSBL_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_INT_DEPTH_THRSH(port->id.phys_id),
+		   DLB_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_TMR_THRESHOLD(port->id.phys_id),
+		   DLB_CHP_DIR_CQ_TMR_THRESHOLD_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
+		   DLB_CHP_DIR_CQ_INT_ENB_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_CQ2VF_PF(port->id.phys_id),
+		   DLB_SYS_DIR_CQ2VF_PF_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2VF_PF(port->id.phys_id),
+		   DLB_SYS_DIR_PP2VF_PF_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_CQ_ADDR_L(port->id.phys_id),
+		   DLB_SYS_DIR_CQ_ADDR_L_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_CQ_ADDR_U(port->id.phys_id),
+		   DLB_SYS_DIR_CQ_ADDR_U_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP_ADDR_L(port->id.phys_id),
+		   DLB_SYS_DIR_PP_ADDR_L_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP_ADDR_U(port->id.phys_id),
+		   DLB_SYS_DIR_PP_ADDR_U_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP_V(port->id.phys_id),
+		   DLB_SYS_DIR_PP_V_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_PP2VAS(port->id.phys_id),
+		   DLB_SYS_DIR_PP2VAS_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_DIR_CQ_ISR(port->id.phys_id),
+		   DLB_SYS_DIR_CQ_ISR_RST);
+
+	DLB_CSR_WR(hw,
+		   DLB_SYS_WBUF_DIR_FLAGS(port->id.phys_id),
+		   DLB_SYS_WBUF_DIR_FLAGS_RST);
+}
+
+static void dlb_domain_reset_dir_port_registers(struct dlb_hw *hw,
+						struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		__dlb_domain_reset_dir_port_registers(hw, port);
+}
+
+static void dlb_domain_reset_ldb_queue_registers(struct dlb_hw *hw,
+						 struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_queue *queue;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_AQED_PIPE_FL_LIM(queue->id.phys_id),
+			   DLB_AQED_PIPE_FL_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_AQED_PIPE_FL_BASE(queue->id.phys_id),
+			   DLB_AQED_PIPE_FL_BASE_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_AQED_PIPE_FL_POP_PTR(queue->id.phys_id),
+			   DLB_AQED_PIPE_FL_POP_PTR_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_AQED_PIPE_FL_PUSH_PTR(queue->id.phys_id),
+			   DLB_AQED_PIPE_FL_PUSH_PTR_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_AQED_PIPE_QID_FID_LIM(queue->id.phys_id),
+			   DLB_AQED_PIPE_QID_FID_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id),
+			   DLB_LSP_QID_AQED_ACTIVE_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_LSP_QID_LDB_INFL_LIM(queue->id.phys_id),
+			   DLB_LSP_QID_LDB_INFL_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_SYS_LDB_QID_V(queue->id.phys_id),
+			   DLB_SYS_LDB_QID_V_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_ORD_QID_SN(queue->id.phys_id),
+			   DLB_CHP_ORD_QID_SN_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_ORD_QID_SN_MAP(queue->id.phys_id),
+			   DLB_CHP_ORD_QID_SN_MAP_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_RO_PIPE_QID2GRPSLT(queue->id.phys_id),
+			   DLB_RO_PIPE_QID2GRPSLT_RST);
+	}
+}
+
+static void dlb_domain_reset_dir_queue_registers(struct dlb_hw *hw,
+						 struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *queue;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_QID_V(queue->id.phys_id),
+			   DLB_SYS_DIR_QID_V_RST);
+	}
+}
+
+static void dlb_domain_reset_ldb_pool_registers(struct dlb_hw *hw,
+						struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_CHP_LDB_POOL_CRD_LIM(pool->id.phys_id),
+			   DLB_CHP_LDB_POOL_CRD_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_LDB_POOL_CRD_CNT(pool->id.phys_id),
+			   DLB_CHP_LDB_POOL_CRD_CNT_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_QED_FL_BASE(pool->id.phys_id),
+			   DLB_CHP_QED_FL_BASE_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_QED_FL_LIM(pool->id.phys_id),
+			   DLB_CHP_QED_FL_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_QED_FL_PUSH_PTR(pool->id.phys_id),
+			   DLB_CHP_QED_FL_PUSH_PTR_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_QED_FL_POP_PTR(pool->id.phys_id),
+			   DLB_CHP_QED_FL_POP_PTR_RST);
+	}
+}
+
+static void dlb_domain_reset_dir_pool_registers(struct dlb_hw *hw,
+						struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DIR_POOL_CRD_LIM(pool->id.phys_id),
+			   DLB_CHP_DIR_POOL_CRD_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DIR_POOL_CRD_CNT(pool->id.phys_id),
+			   DLB_CHP_DIR_POOL_CRD_CNT_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DQED_FL_BASE(pool->id.phys_id),
+			   DLB_CHP_DQED_FL_BASE_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DQED_FL_LIM(pool->id.phys_id),
+			   DLB_CHP_DQED_FL_LIM_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DQED_FL_PUSH_PTR(pool->id.phys_id),
+			   DLB_CHP_DQED_FL_PUSH_PTR_RST);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DQED_FL_POP_PTR(pool->id.phys_id),
+			   DLB_CHP_DQED_FL_POP_PTR_RST);
+	}
+}
+
+static void dlb_domain_reset_registers(struct dlb_hw *hw,
+				       struct dlb_domain *domain)
+{
+	dlb_domain_reset_ldb_port_registers(hw, domain);
+
+	dlb_domain_reset_dir_port_registers(hw, domain);
+
+	dlb_domain_reset_ldb_queue_registers(hw, domain);
+
+	dlb_domain_reset_dir_queue_registers(hw, domain);
+
+	dlb_domain_reset_ldb_pool_registers(hw, domain);
+
+	dlb_domain_reset_dir_pool_registers(hw, domain);
+}
+
+static int dlb_domain_drain_ldb_cqs(struct dlb_hw *hw,
+				    struct dlb_domain *domain,
+				    bool toggle_port)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+	int ret;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		if (toggle_port)
+			dlb_ldb_port_cq_disable(hw, port);
+
+		ret = dlb_drain_ldb_cq(hw, port);
+		if (ret < 0)
+			return ret;
+
+		if (toggle_port)
+			dlb_ldb_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static bool dlb_domain_mapped_queues_empty(struct dlb_hw *hw,
+					   struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_queue *queue;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings == 0)
+			continue;
+
+		if (!dlb_ldb_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb_domain_drain_mapped_queues(struct dlb_hw *hw,
+					  struct dlb_domain *domain)
+{
+	int i, ret;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	if (domain->num_pending_removals > 0) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: failed to unmap domain queues\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	for (i = 0; i < DLB_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		ret = dlb_domain_drain_ldb_cqs(hw, domain, true);
+		if (ret < 0)
+			return ret;
+
+		if (dlb_domain_mapped_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: failed to empty queues\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	/* Drain the CQs one more time: for the queues to have gone empty, the
+	 * device must have scheduled their remaining QEs to the CQs.
+	 */
+	ret = dlb_domain_drain_ldb_cqs(hw, domain, true);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
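+/* A queue with no port mappings can't be drained through a CQ, so map it to a
+ * load-balanced port (unmapping one of that port's queues first if all its
+ * slots are in use), then drain the domain's mapped queues.
+ */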
+static int dlb_domain_drain_unmapped_queue(struct dlb_hw *hw,
+					   struct dlb_domain *domain,
+					   struct dlb_ldb_queue *queue)
+{
+	struct dlb_ldb_port *port;
+	int ret;
+
+	/* If a domain has LDB queues, it must have LDB ports */
+	if (dlb_list_empty(&domain->used_ldb_ports)) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: No configured LDB ports\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	port = DLB_DOM_LIST_HEAD(domain->used_ldb_ports, typeof(*port));
+
+	/* If necessary, free up a QID slot in this CQ */
+	if (port->num_mappings == DLB_MAX_NUM_QIDS_PER_LDB_CQ) {
+		struct dlb_ldb_queue *mapped_queue;
+
+		mapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];
+
+		ret = dlb_ldb_port_unmap_qid(hw, port, mapped_queue);
+		if (ret)
+			return ret;
+	}
+
+	ret = dlb_ldb_port_map_qid_dynamic(hw, port, queue, 0);
+	if (ret)
+		return ret;
+
+	return dlb_domain_drain_mapped_queues(hw, domain);
+}
+
+static int dlb_domain_drain_unmapped_queues(struct dlb_hw *hw,
+					    struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_queue *queue;
+	int ret;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		if (queue->num_mappings != 0 ||
+		    dlb_ldb_queue_is_empty(hw, queue))
+			continue;
+
+		ret = dlb_domain_drain_unmapped_queue(hw, domain, queue);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void dlb_drain_dir_cq(struct dlb_hw *hw, struct dlb_dir_pq_pair *port)
+{
+	unsigned int port_id = port->id.phys_id;
+	u32 cnt;
+
+	/* Return any outstanding tokens */
+	cnt = dlb_dir_cq_token_count(hw, port);
+
+	if (cnt != 0) {
+		struct dlb_hcw hcw_mem[8], *hcw;
+		void *pp_addr;
+
+		pp_addr = os_map_producer_port(hw, port_id, false);
+
+		/* Point hcw to a 64B-aligned location */
+		hcw = (struct dlb_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);
+
+		/* Program the first HCW for a batch token return and the rest
+		 * as NOOPs.
+		 */
+		memset(hcw, 0, 4 * sizeof(*hcw));
+		hcw->cq_token = 1;
+		hcw->lock_id = cnt - 1;
+
+		os_enqueue_four_hcws(hw, hcw, pp_addr);
+
+		os_fence_hcw(hw, pp_addr);
+
+		os_unmap_producer_port(hw, pp_addr);
+	}
+}
+
+static int dlb_domain_drain_dir_cqs(struct dlb_hw *hw,
+				    struct dlb_domain *domain,
+				    bool toggle_port)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		/* Can't drain a port if it's not configured, and there's
+		 * nothing to drain if its queue is unconfigured.
+		 */
+		if (!port->port_configured || !port->queue_configured)
+			continue;
+
+		if (toggle_port)
+			dlb_dir_port_cq_disable(hw, port);
+
+		dlb_drain_dir_cq(hw, port);
+
+		if (toggle_port)
+			dlb_dir_port_cq_enable(hw, port);
+	}
+
+	return 0;
+}
+
+static bool dlb_domain_dir_queues_empty(struct dlb_hw *hw,
+					struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *queue;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {
+		if (!dlb_dir_queue_is_empty(hw, queue))
+			return false;
+	}
+
+	return true;
+}
+
+static int dlb_domain_drain_dir_queues(struct dlb_hw *hw,
+				       struct dlb_domain *domain)
+{
+	int i;
+
+	/* If the domain hasn't been started, there's no traffic to drain */
+	if (!domain->started)
+		return 0;
+
+	for (i = 0; i < DLB_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+		dlb_domain_drain_dir_cqs(hw, domain, true);
+
+		if (dlb_domain_dir_queues_empty(hw, domain))
+			break;
+	}
+
+	if (i == DLB_MAX_QID_EMPTY_CHECK_LOOPS) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: failed to empty queues\n",
+			   __func__);
+		return -EFAULT;
+	}
+
+	/* Drain the CQs one more time: for the queues to have gone empty, the
+	 * device must have scheduled their remaining QEs to the CQs.
+	 */
+	dlb_domain_drain_dir_cqs(hw, domain, true);
+
+	return 0;
+}
+
+static void dlb_domain_disable_dir_producer_ports(struct dlb_hw *hw,
+						  struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+	union dlb_sys_dir_pp_v r1 = { {0} };
+
+	r1.field.pp_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_PP_V(port->id.phys_id),
+			   r1.val);
+}
+
+static void dlb_domain_disable_ldb_producer_ports(struct dlb_hw *hw,
+						  struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_ldb_pp_v r1 = { {0} };
+	struct dlb_ldb_port *port;
+
+	r1.field.pp_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_SYS_LDB_PP_V(port->id.phys_id),
+			   r1.val);
+
+		hw->pf.num_enabled_ldb_ports--;
+	}
+}
+
+static void dlb_domain_disable_dir_vpps(struct dlb_hw *hw,
+					struct dlb_domain *domain,
+					unsigned int vf_id)
+{
+	union dlb_sys_vf_dir_vpp_v r1 = { {0} };
+	union dlb_sys_vf_dir_vpp_v r1;
+	struct dlb_dir_pq_pair *port;
+
+	r1.field.vpp_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		unsigned int offs;
+
+		offs = vf_id * DLB_MAX_NUM_DIR_PORTS + port->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_DIR_VPP_V(offs), r1.val);
+	}
+}
+
+static void dlb_domain_disable_ldb_vpps(struct dlb_hw *hw,
+					struct dlb_domain *domain,
+					unsigned int vf_id)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_vf_ldb_vpp_v r1 = { {0} };
+	struct dlb_ldb_port *port;
+
+	r1.field.vpp_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		unsigned int offs;
+
+		offs = vf_id * DLB_MAX_NUM_LDB_PORTS + port->id.virt_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_VF_LDB_VPP_V(offs), r1.val);
+	}
+}
+
+static void dlb_domain_disable_dir_pools(struct dlb_hw *hw,
+					 struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_dir_pool_enbld r0 = { {0} };
+	struct dlb_credit_pool *pool;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter)
+		DLB_CSR_WR(hw,
+			   DLB_SYS_DIR_POOL_ENBLD(pool->id.phys_id),
+			   r0.val);
+}
+
+static void dlb_domain_disable_ldb_pools(struct dlb_hw *hw,
+					 struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_ldb_pool_enbld r0 = { {0} };
+	struct dlb_credit_pool *pool;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter)
+		DLB_CSR_WR(hw,
+			   DLB_SYS_LDB_POOL_ENBLD(pool->id.phys_id),
+			   r0.val);
+}
+
+static void dlb_domain_disable_ldb_seq_checks(struct dlb_hw *hw,
+					      struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_chp_sn_chk_enbl r1 = { {0} };
+	struct dlb_ldb_port *port;
+
+	r1.field.en = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		DLB_CSR_WR(hw,
+			   DLB_CHP_SN_CHK_ENBL(port->id.phys_id),
+			   r1.val);
+}
+
+static void dlb_domain_disable_ldb_port_crd_updates(struct dlb_hw *hw,
+						    struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_chp_ldb_pp_crd_req_state r0 = { {0} };
+	struct dlb_ldb_port *port;
+
+	r0.field.no_pp_credit_update = 1;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
+		DLB_CSR_WR(hw,
+			   DLB_CHP_LDB_PP_CRD_REQ_STATE(port->id.phys_id),
+			   r0.val);
+}
+
+static void dlb_domain_disable_ldb_port_interrupts(struct dlb_hw *hw,
+						   struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_chp_ldb_cq_int_enb r0 = { {0} };
+	union dlb_chp_ldb_cq_wd_enb r1 = { {0} };
+	struct dlb_ldb_port *port;
+
+	r0.field.en_tim = 0;
+	r0.field.en_depth = 0;
+
+	r1.field.wd_enable = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_CHP_LDB_CQ_INT_ENB(port->id.phys_id),
+			   r0.val);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_LDB_CQ_WD_ENB(port->id.phys_id),
+			   r1.val);
+	}
+}
+
+static void dlb_domain_disable_dir_port_interrupts(struct dlb_hw *hw,
+						   struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_chp_dir_cq_int_enb r0 = { {0} };
+	union dlb_chp_dir_cq_wd_enb r1 = { {0} };
+	struct dlb_dir_pq_pair *port;
+
+	r0.field.en_tim = 0;
+	r0.field.en_depth = 0;
+
+	r1.field.wd_enable = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DIR_CQ_INT_ENB(port->id.phys_id),
+			   r0.val);
+
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DIR_CQ_WD_ENB(port->id.phys_id),
+			   r1.val);
+	}
+}
+
+static void dlb_domain_disable_dir_port_crd_updates(struct dlb_hw *hw,
+						    struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_chp_dir_pp_crd_req_state r0 = { {0} };
+	struct dlb_dir_pq_pair *port;
+
+	r0.field.no_pp_credit_update = 1;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
+		DLB_CSR_WR(hw,
+			   DLB_CHP_DIR_PP_CRD_REQ_STATE(port->id.phys_id),
+			   r0.val);
+}
+
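+/* Write permission is tracked per (virtual address space, queue ID) pair;
+ * clearing a pair's valid bit causes traffic the domain sends to that queue
+ * to be dropped.
+ */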
+static void dlb_domain_disable_ldb_queue_write_perms(struct dlb_hw *hw,
+						     struct dlb_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB_MAX_NUM_LDB_QUEUES;
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_ldb_vasqid_v r0 = { {0} };
+	struct dlb_ldb_queue *queue;
+
+	r0.field.vasqid_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {
+		int idx = domain_offset + queue->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_LDB_VASQID_V(idx), r0.val);
+	}
+}
+
+static void dlb_domain_disable_dir_queue_write_perms(struct dlb_hw *hw,
+						     struct dlb_domain *domain)
+{
+	int domain_offset = domain->id.phys_id * DLB_MAX_NUM_DIR_PORTS;
+	struct dlb_list_entry *iter __attribute__((unused));
+	union dlb_sys_dir_vasqid_v r0 = { {0} };
+	struct dlb_dir_pq_pair *port;
+
+	r0.field.vasqid_v = 0;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		int idx = domain_offset + port->id.phys_id;
+
+		DLB_CSR_WR(hw, DLB_SYS_DIR_VASQID_V(idx), r0.val);
+	}
+}
+
+static void dlb_domain_disable_dir_cqs(struct dlb_hw *hw,
+				       struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *port;
+
+	DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {
+		port->enabled = false;
+
+		dlb_dir_port_cq_disable(hw, port);
+	}
+}
+
+static void dlb_domain_disable_ldb_cqs(struct dlb_hw *hw,
+				       struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		port->enabled = false;
+
+		dlb_ldb_port_cq_disable(hw, port);
+	}
+}
+
+static void dlb_domain_enable_ldb_cqs(struct dlb_hw *hw,
+				      struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_ldb_port *port;
+
+	DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) {
+		port->enabled = true;
+
+		dlb_ldb_port_cq_enable(hw, port);
+	}
+}
+
+static int dlb_domain_wait_for_ldb_pool_refill(struct dlb_hw *hw,
+					       struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	/* Confirm that all credits are returned to the domain's credit pools */
+	DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter) {
+		union dlb_chp_qed_fl_push_ptr r0;
+		union dlb_chp_qed_fl_pop_ptr r1;
+		unsigned long pop_offs, push_offs;
+		int i;
+
+		push_offs = DLB_CHP_QED_FL_PUSH_PTR(pool->id.phys_id);
+		pop_offs = DLB_CHP_QED_FL_POP_PTR(pool->id.phys_id);
+
+		for (i = 0; i < DLB_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+			r0.val = DLB_CSR_RD(hw, push_offs);
+
+			r1.val = DLB_CSR_RD(hw, pop_offs);
+
+			/* Break early if the freelist is replenished */
+			if (r1.field.pop_ptr == r0.field.push_ptr &&
+			    r1.field.generation != r0.field.generation) {
+				break;
+			}
+		}
+
+		/* Error if the freelist is not full */
+		if (r1.field.pop_ptr != r0.field.push_ptr ||
+		    r1.field.generation == r0.field.generation) {
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static int dlb_domain_wait_for_dir_pool_refill(struct dlb_hw *hw,
+					       struct dlb_domain *domain)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_credit_pool *pool;
+
+	/* Confirm that all credits are returned to the domain's credit pools */
+	DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter) {
+		union dlb_chp_dqed_fl_push_ptr r0;
+		union dlb_chp_dqed_fl_pop_ptr r1;
+		unsigned long pop_offs, push_offs;
+		int i;
+
+		push_offs = DLB_CHP_DQED_FL_PUSH_PTR(pool->id.phys_id);
+		pop_offs = DLB_CHP_DQED_FL_POP_PTR(pool->id.phys_id);
+
+		for (i = 0; i < DLB_MAX_QID_EMPTY_CHECK_LOOPS; i++) {
+			r0.val = DLB_CSR_RD(hw, push_offs);
+
+			r1.val = DLB_CSR_RD(hw, pop_offs);
+
+			/* Break early if the freelist is replenished */
+			if (r1.field.pop_ptr == r0.field.push_ptr &&
+			    r1.field.generation != r0.field.generation) {
+				break;
+			}
+		}
+
+		/* Error if the freelist is not full */
+		if (r1.field.pop_ptr != r0.field.push_ptr ||
+		    r1.field.generation == r0.field.generation) {
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
+static void dlb_log_reset_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	DLB_HW_INFO(hw, "DLB reset domain:\n");
+	if (vf_request)
+		DLB_HW_INFO(hw, "(Request from VF %d)\n", vf_id);
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n", domain_id);
+}
+
+/**
+ * dlb_reset_domain() - Reset a DLB scheduling domain and its associated
+ *	hardware resources.
+ * @hw:	  Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @vf_request: Indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, the ID of the requesting VF.
+ *
+ * Note: User software *must* stop sending to this domain's producer ports
+ * before invoking this function, otherwise undefined behavior will result.
+ *
+ * Return: 0 on success, < 0 on error.
+ */
+int dlb_reset_domain(struct dlb_hw *hw,
+		     u32 domain_id,
+		     bool vf_request,
+		     unsigned int vf_id)
+{
+	struct dlb_domain *domain;
+	int ret;
+
+	dlb_log_reset_domain(hw, domain_id, vf_request, vf_id);
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain || !domain->configured)
+		return -EINVAL;
+
+	if (vf_request) {
+		dlb_domain_disable_dir_vpps(hw, domain, vf_id);
+
+		dlb_domain_disable_ldb_vpps(hw, domain, vf_id);
+	}
+
+	/* For each queue owned by this domain, disable its write permissions to
+	 * cause any traffic sent to it to be dropped. Well-behaved software
+	 * should not be sending QEs at this point.
+	 */
+	dlb_domain_disable_dir_queue_write_perms(hw, domain);
+
+	dlb_domain_disable_ldb_queue_write_perms(hw, domain);
+
+	/* Disable credit updates and turn off completion tracking on all the
+	 * domain's PPs.
+	 */
+	dlb_domain_disable_dir_port_crd_updates(hw, domain);
+
+	dlb_domain_disable_ldb_port_crd_updates(hw, domain);
+
+	dlb_domain_disable_dir_port_interrupts(hw, domain);
+
+	dlb_domain_disable_ldb_port_interrupts(hw, domain);
+
+	dlb_domain_disable_ldb_seq_checks(hw, domain);
+
+	/* Disable the LDB CQs and drain them in order to complete the map and
+	 * unmap procedures, which require zero CQ inflights and zero QID
+	 * inflights respectively.
+	 */
+	dlb_domain_disable_ldb_cqs(hw, domain);
+
+	ret = dlb_domain_drain_ldb_cqs(hw, domain, false);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_domain_wait_for_ldb_cqs_to_empty(hw, domain);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_domain_finish_unmap_qid_procedures(hw, domain);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_domain_finish_map_qid_procedures(hw, domain);
+	if (ret < 0)
+		return ret;
+
+	/* Re-enable the CQs in order to drain the mapped queues. */
+	dlb_domain_enable_ldb_cqs(hw, domain);
+
+	ret = dlb_domain_drain_mapped_queues(hw, domain);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_domain_drain_unmapped_queues(hw, domain);
+	if (ret < 0)
+		return ret;
+
+	ret = dlb_domain_wait_for_ldb_pool_refill(hw, domain);
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: LDB credits failed to refill\n",
+			   __func__);
+		return ret;
+	}
+
+	/* Done draining LDB QEs, so disable the CQs. */
+	dlb_domain_disable_ldb_cqs(hw, domain);
+
+	/* Directed queues are reset in dlb_domain_reset_hw_resources(), but
+	 * that process doesn't decrement the directed queue size counters used
+	 * by SMON for its average DQED depth measurement. So, we manually drain
+	 * the directed queues here.
+	 */
+	dlb_domain_drain_dir_queues(hw, domain);
+
+	ret = dlb_domain_wait_for_dir_pool_refill(hw, domain);
+	if (ret) {
+		DLB_HW_ERR(hw,
+			   "[%s()] Internal error: DIR credits failed to refill\n",
+			   __func__);
+		return ret;
+	}
+
+	/* Done draining DIR QEs, so disable the CQs. */
+	dlb_domain_disable_dir_cqs(hw, domain);
+
+	dlb_domain_disable_dir_producer_ports(hw, domain);
+
+	dlb_domain_disable_ldb_producer_ports(hw, domain);
+
+	dlb_domain_disable_dir_pools(hw, domain);
+
+	dlb_domain_disable_ldb_pools(hw, domain);
+
+	/* Reset the QID, credit pool, and CQ hardware.
+	 *
+	 * Note: DLB 1.0 A0 h/w does not disarm CQ interrupts during VAS reset.
+	 * A spurious interrupt can occur on subsequent use of a reset CQ.
+	 */
+	ret = dlb_domain_reset_hw_resources(hw, domain);
+	if (ret)
+		return ret;
+
+	ret = dlb_domain_verify_reset_success(hw, domain);
+	if (ret)
+		return ret;
+
+	dlb_domain_reset_registers(hw, domain);
+
+	/* Hardware reset complete. Reset the domain's software state */
+	ret = dlb_domain_reset_software_state(hw, domain);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+int dlb_reset_vf(struct dlb_hw *hw, unsigned int vf_id)
+{
+	struct dlb_domain *domain, *next __attribute__((unused));
+	struct dlb_list_entry *it1 __attribute__((unused));
+	struct dlb_list_entry *it2 __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+
+	if (vf_id >= DLB_MAX_NUM_VFS) {
+		DLB_HW_ERR(hw, "[%s()] Internal error: invalid VF ID %d\n",
+			   __func__, vf_id);
+		return -EFAULT;
+	}
+
+	rsrcs = &hw->vf[vf_id];
+
+	DLB_FUNC_LIST_FOR_SAFE(rsrcs->used_domains, domain, next, it1, it2) {
+		int ret = dlb_reset_domain(hw,
+					   domain->id.virt_id,
+					   true,
+					   vf_id);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int dlb_ldb_port_owned_by_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 u32 port_id,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_ldb_port *port;
+	struct dlb_domain *domain;
+
+	if (vf_request && vf_id >= DLB_MAX_NUM_VFS)
+		return -1;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain || !domain->configured)
+		return -EINVAL;
+
+	port = dlb_get_domain_ldb_port(port_id, vf_request, domain);
+
+	if (!port)
+		return -EINVAL;
+
+	return port->domain_id.phys_id == domain->id.phys_id;
+}
+
+int dlb_dir_port_owned_by_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 u32 port_id,
+				 bool vf_request,
+				 unsigned int vf_id)
+{
+	struct dlb_dir_pq_pair *port;
+	struct dlb_domain *domain;
+
+	if (vf_request && vf_id >= DLB_MAX_NUM_VFS)
+		return -1;
+
+	domain = dlb_get_domain_from_id(hw, domain_id, vf_request, vf_id);
+
+	if (!domain || !domain->configured)
+		return -EINVAL;
+
+	port = dlb_get_domain_dir_pq(port_id, vf_request, domain);
+
+	if (!port)
+		return -EINVAL;
+
+	return port->domain_id.phys_id == domain->id.phys_id;
+}
+
+int dlb_hw_get_num_resources(struct dlb_hw *hw,
+			     struct dlb_get_num_resources_args *arg,
+			     bool vf_request,
+			     unsigned int vf_id)
+{
+	struct dlb_function_resources *rsrcs;
+	struct dlb_bitmap *map;
+
+	if (vf_request && vf_id >= DLB_MAX_NUM_VFS)
+		return -1;
+
+	if (vf_request)
+		rsrcs = &hw->vf[vf_id];
+	else
+		rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = rsrcs->num_avail_ldb_ports;
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	map = rsrcs->avail_aqed_freelist_entries;
+
+	arg->num_atomic_inflights = dlb_bitmap_count(map);
+
+	arg->max_contiguous_atomic_inflights =
+		dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_qed_freelist_entries;
+
+	arg->num_ldb_credits = dlb_bitmap_count(map);
+
+	arg->max_contiguous_ldb_credits = dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_dqed_freelist_entries;
+
+	arg->num_dir_credits = dlb_bitmap_count(map);
+
+	arg->max_contiguous_dir_credits = dlb_bitmap_longest_set_range(map);
+
+	arg->num_ldb_credit_pools = rsrcs->num_avail_ldb_credit_pools;
+
+	arg->num_dir_credit_pools = rsrcs->num_avail_dir_credit_pools;
+
+	return 0;
+}
+
+int dlb_hw_get_num_used_resources(struct dlb_hw *hw,
+				  struct dlb_get_num_resources_args *arg,
+				  bool vf_request,
+				  unsigned int vf_id)
+{
+	struct dlb_list_entry *iter1 __attribute__((unused));
+	struct dlb_list_entry *iter2 __attribute__((unused));
+	struct dlb_function_resources *rsrcs;
+	struct dlb_domain *domain;
+
+	if (vf_request && vf_id >= DLB_MAX_NUM_VFS)
+		return -1;
+
+	rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
+
+	memset(arg, 0, sizeof(*arg));
+
+	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
+		struct dlb_dir_pq_pair *dir_port;
+		struct dlb_ldb_port *ldb_port;
+		struct dlb_credit_pool *pool;
+		struct dlb_ldb_queue *queue;
+
+		arg->num_sched_domains++;
+
+		arg->num_atomic_inflights +=
+			domain->aqed_freelist.bound -
+			domain->aqed_freelist.base;
+
+		DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
+			arg->num_ldb_queues++;
+		DLB_DOM_LIST_FOR(domain->avail_ldb_queues, queue, iter2)
+			arg->num_ldb_queues++;
+
+		DLB_DOM_LIST_FOR(domain->used_ldb_ports, ldb_port, iter2)
+			arg->num_ldb_ports++;
+		DLB_DOM_LIST_FOR(domain->avail_ldb_ports, ldb_port, iter2)
+			arg->num_ldb_ports++;
+
+		DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter2)
+			arg->num_dir_ports++;
+		DLB_DOM_LIST_FOR(domain->avail_dir_pq_pairs, dir_port, iter2)
+			arg->num_dir_ports++;
+
+		arg->num_ldb_credits +=
+			domain->qed_freelist.bound -
+			domain->qed_freelist.base;
+
+		DLB_DOM_LIST_FOR(domain->avail_ldb_credit_pools, pool, iter2)
+			arg->num_ldb_credit_pools++;
+		DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter2) {
+			arg->num_ldb_credit_pools++;
+			arg->num_ldb_credits += pool->total_credits;
+		}
+
+		arg->num_dir_credits +=
+			domain->dqed_freelist.bound -
+			domain->dqed_freelist.base;
+
+		DLB_DOM_LIST_FOR(domain->avail_dir_credit_pools, pool, iter2)
+			arg->num_dir_credit_pools++;
+		DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter2) {
+			arg->num_dir_credit_pools++;
+			arg->num_dir_credits += pool->total_credits;
+		}
+
+		arg->num_hist_list_entries += domain->total_hist_list_entries;
+	}
+
+	return 0;
+}
+
+static inline bool dlb_ldb_port_owned_by_vf(struct dlb_hw *hw,
+					    u32 vf_id,
+					    u32 port_id)
+{
+	return (hw->rsrcs.ldb_ports[port_id].id.vf_owned &&
+		hw->rsrcs.ldb_ports[port_id].id.vf_id == vf_id);
+}
+
+static inline bool dlb_dir_port_owned_by_vf(struct dlb_hw *hw,
+					    u32 vf_id,
+					    u32 port_id)
+{
+	return (hw->rsrcs.dir_pq_pairs[port_id].id.vf_owned &&
+		hw->rsrcs.dir_pq_pairs[port_id].id.vf_id == vf_id);
+}
+
+void dlb_send_async_pf_to_vf_msg(struct dlb_hw *hw, unsigned int vf_id)
+{
+	union dlb_func_pf_pf2vf_mailbox_isr r0 = { {0} };
+
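+	/* The PF->VF mailbox ISR register holds one bit per VF; setting the
+	 * target VF's bit raises its mailbox interrupt.
+	 */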
+	r0.field.isr = 1 << vf_id;
+
+	DLB_FUNC_WR(hw, DLB_FUNC_PF_PF2VF_MAILBOX_ISR(0), r0.val);
+}
+
+bool dlb_pf_to_vf_complete(struct dlb_hw *hw, unsigned int vf_id)
+{
+	union dlb_func_pf_pf2vf_mailbox_isr r0;
+
+	r0.val = DLB_FUNC_RD(hw, DLB_FUNC_PF_PF2VF_MAILBOX_ISR(vf_id));
+
+	return (r0.val & (1 << vf_id)) == 0;
+}
+
+void dlb_send_async_vf_to_pf_msg(struct dlb_hw *hw)
+{
+	union dlb_func_vf_vf2pf_mailbox_isr r0 = { {0} };
+
+	r0.field.isr = 1;
+	DLB_FUNC_WR(hw, DLB_FUNC_VF_VF2PF_MAILBOX_ISR, r0.val);
+}
+
+bool dlb_vf_to_pf_complete(struct dlb_hw *hw)
+{
+	union dlb_func_vf_vf2pf_mailbox_isr r0;
+
+	r0.val = DLB_FUNC_RD(hw, DLB_FUNC_VF_VF2PF_MAILBOX_ISR);
+
+	return (r0.field.isr == 0);
+}
+
+bool dlb_vf_flr_complete(struct dlb_hw *hw)
+{
+	union dlb_func_vf_vf_reset_in_progress r0;
+
+	r0.val = DLB_FUNC_RD(hw, DLB_FUNC_VF_VF_RESET_IN_PROGRESS);
+
+	return (r0.field.reset_in_progress == 0);
+}
+
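+/* The PF/VF mailboxes are banks of 32b registers, with requests and responses
+ * occupying fixed word ranges, so the helpers below copy data one word at a
+ * time.
+ */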
+int dlb_pf_read_vf_mbox_req(struct dlb_hw *hw,
+			    unsigned int vf_id,
+			    void *data,
+			    int len)
+{
+	u32 buf[DLB_VF2PF_REQ_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_VF2PF_REQ_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > VF->PF mailbox req size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	if (len == 0) {
+		DLB_HW_ERR(hw, "[%s()] invalid len (0)\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_VF2PF_REQ_BASE_WORD;
+
+		buf[i] = DLB_FUNC_RD(hw, DLB_FUNC_PF_VF2PF_MAILBOX(vf_id, idx));
+	}
+
+	memcpy(data, buf, len);
+
+	return 0;
+}
+
+int dlb_pf_read_vf_mbox_resp(struct dlb_hw *hw,
+			     unsigned int vf_id,
+			     void *data,
+			     int len)
+{
+	u32 buf[DLB_VF2PF_RESP_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_VF2PF_RESP_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > VF->PF mailbox resp size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_VF2PF_RESP_BASE_WORD;
+
+		buf[i] = DLB_FUNC_RD(hw, DLB_FUNC_PF_VF2PF_MAILBOX(vf_id, idx));
+	}
+
+	memcpy(data, buf, len);
+
+	return 0;
+}
+
+int dlb_pf_write_vf_mbox_resp(struct dlb_hw *hw,
+			      unsigned int vf_id,
+			      void *data,
+			      int len)
+{
+	u32 buf[DLB_PF2VF_RESP_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_PF2VF_RESP_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > PF->VF mailbox resp size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	memcpy(buf, data, len);
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_PF2VF_RESP_BASE_WORD;
+
+		DLB_FUNC_WR(hw, DLB_FUNC_PF_PF2VF_MAILBOX(vf_id, idx), buf[i]);
+	}
+
+	return 0;
+}
+
+int dlb_pf_write_vf_mbox_req(struct dlb_hw *hw,
+			     unsigned int vf_id,
+			     void *data,
+			     int len)
+{
+	u32 buf[DLB_PF2VF_REQ_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_PF2VF_REQ_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > PF->VF mailbox req size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	memcpy(buf, data, len);
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_PF2VF_REQ_BASE_WORD;
+
+		DLB_FUNC_WR(hw, DLB_FUNC_PF_PF2VF_MAILBOX(vf_id, idx), buf[i]);
+	}
+
+	return 0;
+}
+
+int dlb_vf_read_pf_mbox_resp(struct dlb_hw *hw, void *data, int len)
+{
+	u32 buf[DLB_PF2VF_RESP_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_PF2VF_RESP_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > PF->VF mailbox resp size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	if (len == 0) {
+		DLB_HW_ERR(hw, "[%s()] invalid len (0)\n", __func__);
+		return -EINVAL;
+	}
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_PF2VF_RESP_BASE_WORD;
+
+		buf[i] = DLB_FUNC_RD(hw, DLB_FUNC_VF_PF2VF_MAILBOX(idx));
+	}
+
+	memcpy(data, buf, len);
+
+	return 0;
+}
+
+int dlb_vf_read_pf_mbox_req(struct dlb_hw *hw, void *data, int len)
+{
+	u32 buf[DLB_PF2VF_REQ_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_PF2VF_REQ_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > PF->VF mailbox req size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_PF2VF_REQ_BASE_WORD;
+
+		buf[i] = DLB_FUNC_RD(hw, DLB_FUNC_VF_PF2VF_MAILBOX(idx));
+	}
+
+	memcpy(data, buf, len);
+
+	return 0;
+}
+
+int dlb_vf_write_pf_mbox_req(struct dlb_hw *hw, void *data, int len)
+{
+	u32 buf[DLB_VF2PF_REQ_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_VF2PF_REQ_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > VF->PF mailbox req size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	memcpy(buf, data, len);
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_VF2PF_REQ_BASE_WORD;
+
+		DLB_FUNC_WR(hw, DLB_FUNC_VF_VF2PF_MAILBOX(idx), buf[i]);
+	}
+
+	return 0;
+}
+
+int dlb_vf_write_pf_mbox_resp(struct dlb_hw *hw, void *data, int len)
+{
+	u32 buf[DLB_VF2PF_RESP_BYTES / 4];
+	int num_words;
+	int i;
+
+	if (len > DLB_VF2PF_RESP_BYTES) {
+		DLB_HW_ERR(hw, "[%s()] len (%d) > VF->PF mailbox resp size\n",
+			   __func__, len);
+		return -EINVAL;
+	}
+
+	memcpy(buf, data, len);
+
+	/* Round up len to the nearest 4B boundary, since the mailbox registers
+	 * are 32b wide.
+	 */
+	num_words = len / 4;
+	if (len % 4 != 0)
+		num_words++;
+
+	for (i = 0; i < num_words; i++) {
+		u32 idx = i + DLB_VF2PF_RESP_BASE_WORD;
+
+		DLB_FUNC_WR(hw, DLB_FUNC_VF_VF2PF_MAILBOX(idx), buf[i]);
+	}
+
+	return 0;
+}
+
+bool dlb_vf_is_locked(struct dlb_hw *hw, unsigned int vf_id)
+{
+	return hw->vf[vf_id].locked;
+}
+
+static void dlb_vf_set_rsrc_virt_ids(struct dlb_function_resources *rsrcs,
+				     unsigned int vf_id)
+{
+	struct dlb_list_entry *iter __attribute__((unused));
+	struct dlb_dir_pq_pair *dir_port;
+	struct dlb_ldb_queue *ldb_queue;
+	struct dlb_ldb_port *ldb_port;
+	struct dlb_credit_pool *pool;
+	struct dlb_domain *domain;
+	int i;
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_domains, domain, iter) {
+		domain->id.virt_id = i;
+		domain->id.vf_owned = true;
+		domain->id.vf_id = vf_id;
+		i++;
+	}
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, ldb_queue, iter) {
+		ldb_queue->id.virt_id = i;
+		ldb_queue->id.vf_owned = true;
+		ldb_queue->id.vf_id = vf_id;
+		i++;
+	}
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, ldb_port, iter) {
+		ldb_port->id.virt_id = i;
+		ldb_port->id.vf_owned = true;
+		ldb_port->id.vf_id = vf_id;
+		i++;
+	}
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_dir_pq_pairs, dir_port, iter) {
+		dir_port->id.virt_id = i;
+		dir_port->id.vf_owned = true;
+		dir_port->id.vf_id = vf_id;
+		i++;
+	}
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_credit_pools, pool, iter) {
+		pool->id.virt_id = i;
+		pool->id.vf_owned = true;
+		pool->id.vf_id = vf_id;
+		i++;
+	}
+
+	i = 0;
+	DLB_FUNC_LIST_FOR(rsrcs->avail_dir_credit_pools, pool, iter) {
+		pool->id.virt_id = i;
+		pool->id.vf_owned = true;
+		pool->id.vf_id = vf_id;
+		i++;
+	}
+}
+
+void dlb_lock_vf(struct dlb_hw *hw, unsigned int vf_id)
+{
+	struct dlb_function_resources *rsrcs = &hw->vf[vf_id];
+
+	rsrcs->locked = true;
+
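+	/* Freeze the assignment and give each assigned resource a 0-based
+	 * virtual ID, which the VF uses in place of the physical ID.
+	 */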
+	dlb_vf_set_rsrc_virt_ids(rsrcs, vf_id);
+}
+
+void dlb_unlock_vf(struct dlb_hw *hw, unsigned int vf_id)
+{
+	hw->vf[vf_id].locked = false;
+}
+
+int dlb_reset_vf_resources(struct dlb_hw *hw, unsigned int vf_id)
+{
+	if (vf_id >= DLB_MAX_NUM_VFS)
+		return -EINVAL;
+
+	/* If the VF is locked, its resource assignment can't be changed */
+	if (dlb_vf_is_locked(hw, vf_id))
+		return -EPERM;
+
+	dlb_update_vf_sched_domains(hw, vf_id, 0);
+	dlb_update_vf_ldb_queues(hw, vf_id, 0);
+	dlb_update_vf_ldb_ports(hw, vf_id, 0);
+	dlb_update_vf_dir_ports(hw, vf_id, 0);
+	dlb_update_vf_ldb_credit_pools(hw, vf_id, 0);
+	dlb_update_vf_dir_credit_pools(hw, vf_id, 0);
+	dlb_update_vf_ldb_credits(hw, vf_id, 0);
+	dlb_update_vf_dir_credits(hw, vf_id, 0);
+	dlb_update_vf_hist_list_entries(hw, vf_id, 0);
+	dlb_update_vf_atomic_inflights(hw, vf_id, 0);
+
+	return 0;
+}
+
+void dlb_hw_enable_sparse_ldb_cq_mode(struct dlb_hw *hw)
+{
+	union dlb_sys_cq_mode r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_CQ_MODE);
+
+	r0.field.ldb_cq64 = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_CQ_MODE, r0.val);
+}
+
+void dlb_hw_enable_sparse_dir_cq_mode(struct dlb_hw *hw)
+{
+	union dlb_sys_cq_mode r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_CQ_MODE);
+
+	r0.field.dir_cq64 = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_CQ_MODE, r0.val);
+}
+
+void dlb_hw_set_qe_arbiter_weights(struct dlb_hw *hw, u8 weight[8])
+{
+	union dlb_atm_pipe_ctrl_arb_weights_rdy_bin r0 = { {0} };
+	union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_0 r1 = { {0} };
+	union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_1 r2 = { {0} };
+	union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_0 r3 = { {0} };
+	union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_1 r4 = { {0} };
+	union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_0 r5 = { {0} };
+	union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_1 r6 = { {0} };
+	union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_0 r7 =  { {0} };
+	union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_1 r8 =  { {0} };
+	union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_0 r9 = { {0} };
+	union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_1 r10 = { {0} };
+	union dlb_atm_pipe_cfg_ctrl_arb_weights_sched_bin r11 = { {0} };
+	union dlb_aqed_pipe_cfg_ctrl_arb_weights_tqpri_atm_0 r12 = { {0} };
+
+	r0.field.bin0 = weight[1];
+	r0.field.bin1 = weight[3];
+	r0.field.bin2 = weight[5];
+	r0.field.bin3 = weight[7];
+
+	r1.field.pri0 = weight[0];
+	r1.field.pri1 = weight[1];
+	r1.field.pri2 = weight[2];
+	r1.field.pri3 = weight[3];
+	r2.field.pri4 = weight[4];
+	r2.field.pri5 = weight[5];
+	r2.field.pri6 = weight[6];
+	r2.field.pri7 = weight[7];
+
+	r3.field.pri0 = weight[0];
+	r3.field.pri1 = weight[1];
+	r3.field.pri2 = weight[2];
+	r3.field.pri3 = weight[3];
+	r4.field.pri4 = weight[4];
+	r4.field.pri5 = weight[5];
+	r4.field.pri6 = weight[6];
+	r4.field.pri7 = weight[7];
+
+	r5.field.pri0 = weight[0];
+	r5.field.pri1 = weight[1];
+	r5.field.pri2 = weight[2];
+	r5.field.pri3 = weight[3];
+	r6.field.pri4 = weight[4];
+	r6.field.pri5 = weight[5];
+	r6.field.pri6 = weight[6];
+	r6.field.pri7 = weight[7];
+
+	r7.field.pri0 = weight[0];
+	r7.field.pri1 = weight[1];
+	r7.field.pri2 = weight[2];
+	r7.field.pri3 = weight[3];
+	r8.field.pri4 = weight[4];
+	r8.field.pri5 = weight[5];
+	r8.field.pri6 = weight[6];
+	r8.field.pri7 = weight[7];
+
+	r9.field.pri0 = weight[0];
+	r9.field.pri1 = weight[1];
+	r9.field.pri2 = weight[2];
+	r9.field.pri3 = weight[3];
+	r10.field.pri4 = weight[4];
+	r10.field.pri5 = weight[5];
+	r10.field.pri6 = weight[6];
+	r10.field.pri7 = weight[7];
+
+	r11.field.bin0 = weight[1];
+	r11.field.bin1 = weight[3];
+	r11.field.bin2 = weight[5];
+	r11.field.bin3 = weight[7];
+
+	r12.field.pri0 = weight[1];
+	r12.field.pri1 = weight[3];
+	r12.field.pri2 = weight[5];
+	r12.field.pri3 = weight[7];
+
+	DLB_CSR_WR(hw, DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN, r0.val);
+	DLB_CSR_WR(hw, DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0, r1.val);
+	DLB_CSR_WR(hw, DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1, r2.val);
+	DLB_CSR_WR(hw,
+		   DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0,
+		   r3.val);
+	DLB_CSR_WR(hw,
+		   DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1,
+		   r4.val);
+	DLB_CSR_WR(hw, DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0, r5.val);
+	DLB_CSR_WR(hw, DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1, r6.val);
+	DLB_CSR_WR(hw, DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0, r7.val);
+	DLB_CSR_WR(hw, DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1, r8.val);
+	DLB_CSR_WR(hw, DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0, r9.val);
+	DLB_CSR_WR(hw, DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1, r10.val);
+	DLB_CSR_WR(hw, DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN, r11.val);
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0, r12.val);
+}
+
+void dlb_hw_set_qid_arbiter_weights(struct dlb_hw *hw, u8 weight[8])
+{
+	union dlb_lsp_cfg_arb_weight_ldb_qid_0 r0 = { {0} };
+	union dlb_lsp_cfg_arb_weight_ldb_qid_1 r1 = { {0} };
+	union dlb_lsp_cfg_arb_weight_atm_nalb_qid_0 r2 = { {0} };
+	union dlb_lsp_cfg_arb_weight_atm_nalb_qid_1 r3 = { {0} };
+
+	r0.field.slot0_weight = weight[0];
+	r0.field.slot1_weight = weight[1];
+	r0.field.slot2_weight = weight[2];
+	r0.field.slot3_weight = weight[3];
+	r1.field.slot4_weight = weight[4];
+	r1.field.slot5_weight = weight[5];
+	r1.field.slot6_weight = weight[6];
+	r1.field.slot7_weight = weight[7];
+
+	r2.field.slot0_weight = weight[0];
+	r2.field.slot1_weight = weight[1];
+	r2.field.slot2_weight = weight[2];
+	r2.field.slot3_weight = weight[3];
+	r3.field.slot4_weight = weight[4];
+	r3.field.slot5_weight = weight[5];
+	r3.field.slot6_weight = weight[6];
+	r3.field.slot7_weight = weight[7];
+
+	DLB_CSR_WR(hw, DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_0, r0.val);
+	DLB_CSR_WR(hw, DLB_LSP_CFG_ARB_WEIGHT_LDB_QID_1, r1.val);
+	DLB_CSR_WR(hw, DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_0, r2.val);
+	DLB_CSR_WR(hw, DLB_LSP_CFG_ARB_WEIGHT_ATM_NALB_QID_1, r3.val);
+}
+
+void dlb_hw_enable_pp_sw_alarms(struct dlb_hw *hw)
+{
+	union dlb_chp_cfg_ldb_pp_sw_alarm_en r0 = { {0} };
+	union dlb_chp_cfg_dir_pp_sw_alarm_en r1 = { {0} };
+	int i;
+
+	r0.field.alarm_enable = 1;
+	r1.field.alarm_enable = 1;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++)
+		DLB_CSR_WR(hw, DLB_CHP_CFG_LDB_PP_SW_ALARM_EN(i), r0.val);
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++)
+		DLB_CSR_WR(hw, DLB_CHP_CFG_DIR_PP_SW_ALARM_EN(i), r1.val);
+}
+
+void dlb_hw_disable_pp_sw_alarms(struct dlb_hw *hw)
+{
+	union dlb_chp_cfg_ldb_pp_sw_alarm_en r0 = { {0} };
+	union dlb_chp_cfg_dir_pp_sw_alarm_en r1 = { {0} };
+	int i;
+
+	r0.field.alarm_enable = 0;
+	r1.field.alarm_enable = 0;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++)
+		DLB_CSR_WR(hw, DLB_CHP_CFG_LDB_PP_SW_ALARM_EN(i), r0.val);
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++)
+		DLB_CSR_WR(hw, DLB_CHP_CFG_DIR_PP_SW_ALARM_EN(i), r1.val);
+}
diff --git a/drivers/event/dlb/pf/base/dlb_resource.h b/drivers/event/dlb/pf/base/dlb_resource.h
new file mode 100644
index 000000000..5500f9b26
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_resource.h
@@ -0,0 +1,1625 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_RESOURCE_H
+#define __DLB_RESOURCE_H
+
+#include "dlb_hw_types.h"
+#include "dlb_osdep_types.h"
+#include "dlb_user.h"
+
+/**
+ * dlb_resource_init() - initialize the device
+ * @hw: pointer to struct dlb_hw.
+ *
+ * This function initializes the device's software state (pointed to by the hw
+ * argument) and programs global scheduling QoS registers. This function should
+ * be called during driver initialization.
+ *
+ * The dlb_hw struct must be unique per DLB device and persist until the device
+ * is reset.
+ *
+ * Return:
+ * Returns 0 upon success, -1 otherwise.
+ */
+int dlb_resource_init(struct dlb_hw *hw);
+
+/**
+ * dlb_resource_free() - free device state memory
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function frees software state pointed to by dlb_hw. This function
+ * should be called when resetting the device or unloading the driver.
+ */
+void dlb_resource_free(struct dlb_hw *hw);
+
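+/*
+ * Typical lifecycle (illustrative sketch only):
+ *
+ *	if (dlb_resource_init(hw))	(at driver probe)
+ *		return -EIO;
+ *	...
+ *	dlb_resource_free(hw);	(at driver remove or device reset)
+ */
+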
+/**
+ * dlb_resource_reset() - reset in-use resources to their initial state
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function resets in-use resources, and makes them available for use.
+ * All resources go back to their owning function, whether a PF or a VF.
+ */
+void dlb_resource_reset(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_create_sched_domain() - create a scheduling domain
+ * @hw: dlb_hw handle for a particular device.
+ * @args: scheduling domain creation arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a scheduling domain containing the resources specified
+ * in args. The individual resources (queues, ports, credit pools) can be
+ * configured after creating a scheduling domain.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the domain ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, or the requested domain name
+ *	    is already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_sched_domain(struct dlb_hw *hw,
+			       struct dlb_create_sched_domain_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id);
+
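+/*
+ * Example PF-side call (illustrative sketch; args is a caller-populated
+ * struct dlb_create_sched_domain_args):
+ *
+ *	struct dlb_cmd_response resp = {0};
+ *
+ *	if (dlb_hw_create_sched_domain(hw, &args, &resp, false, 0))
+ *		return resp.status;	(detailed enum dlb_error code)
+ *	domain_id = resp.id;
+ */
+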
+/**
+ * dlb_hw_create_ldb_pool() - create a load-balanced credit pool
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: credit pool creation arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a load-balanced credit pool containing the number of
+ * requested credits.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the pool ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_ldb_pool(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_ldb_pool_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
+/**
+ * dlb_hw_create_dir_pool() - create a directed credit pool
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: credit pool creation arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a directed credit pool containing the number of
+ * requested credits.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the pool ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_dir_pool(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_dir_pool_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
+/**
+ * dlb_hw_create_ldb_queue() - create a load-balanced queue
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a load-balanced queue.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    the domain has already been started, or the requested queue name is
+ *	    already in use.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_ldb_queue(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_create_ldb_queue_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id);
+
+/**
+ * dlb_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_dir_queue(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_create_dir_queue_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id);
+
+/**
+ * dlb_hw_create_dir_port() - create a directed port
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @pop_count_dma_base: base address of the pop count memory. This can be
+ *			a PA or an IOVA.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a directed port.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pool ID is invalid, a pointer address is not properly aligned, the
+ *	    domain is not configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_dir_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_dir_port_args *args,
+			   u64 pop_count_dma_base,
+			   u64 cq_dma_base,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
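+/*
+ * Example (illustrative sketch; pop_count_dma and cq_dma stand in for bus
+ * addresses the caller mapped for the pop count and CQ memory):
+ *
+ *	struct dlb_cmd_response resp = {0};
+ *	int ret;
+ *
+ *	ret = dlb_hw_create_dir_port(hw, domain_id, &args, pop_count_dma,
+ *				     cq_dma, &resp, false, 0);
+ */
+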
+/**
+ * dlb_hw_create_ldb_port() - create a load-balanced port
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port creation arguments.
+ * @pop_count_dma_base: base address of the pop count memory. This can be
+ *			 a PA or an IOVA.
+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function creates a load-balanced port.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the port ID.
+ *
+ * Note: resp->id contains a virtual ID if vf_request is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a
+ *	    pool ID is invalid, a pointer address is not properly aligned, the
+ *	    domain is not configured, or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_create_ldb_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_create_ldb_port_args *args,
+			   u64 pop_count_dma_base,
+			   u64 cq_dma_base,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
+/**
+ * dlb_hw_start_domain() - start a scheduling domain
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: start domain arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function starts a scheduling domain, which allows applications to send
+ * traffic through it. Once a domain is started, its resources can no longer be
+ * configured (besides QID remapping and port enable/disable).
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - the domain is not configured, or the domain is already started.
+ */
+int dlb_hw_start_domain(struct dlb_hw *hw,
+			u32 domain_id,
+			struct dlb_start_domain_args *args,
+			struct dlb_cmd_response *resp,
+			bool vf_request,
+			unsigned int vf_id);
+
+/**
+ * dlb_hw_map_qid() - map a load-balanced queue to a load-balanced port
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: map QID arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to schedule QEs from the specified queue to
+ * the specified port. Each load-balanced port can be mapped to up to 8 queues;
+ * each load-balanced queue can potentially map to all the load-balanced ports.
+ *
+ * A successful return does not necessarily mean the mapping was configured. If
+ * this function is unable to immediately map the queue to the port, it will
+ * add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. In a sense, this is
+ * an asynchronous function.
+ *
+ * This asynchronicity creates two views of the state of hardware: the actual
+ * hardware state and the requested state (as if every request completed
+ * immediately). If there are any pending map/unmap operations, the requested
+ * state will differ from the actual state. All validation is performed with
+ * respect to the pending state; for instance, if there are 8 pending map
+ * operations for port X, a request for a 9th will fail because a load-balanced
+ * port can only map up to 8 queues.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_map_qid(struct dlb_hw *hw,
+		   u32 domain_id,
+		   struct dlb_map_qid_args *args,
+		   struct dlb_cmd_response *resp,
+		   bool vf_request,
+		   unsigned int vf_id);
+
+/**
+ * dlb_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: unmap QID arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs from the specified
+ * queue to the specified port.
+ *
+ * A successful return does not necessarily mean the mapping was removed. If
+ * this function is unable to immediately unmap the queue from the port, it
+ * will add the requested operation to a per-port list of pending map/unmap
+ * operations, and (if it's not already running) launch a kernel thread that
+ * periodically attempts to process all pending operations. See
+ * dlb_hw_map_qid() for more details.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or
+ *	    the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_unmap_qid(struct dlb_hw *hw,
+		     u32 domain_id,
+		     struct dlb_unmap_qid_args *args,
+		     struct dlb_cmd_response *resp,
+		     bool vf_request,
+		     unsigned int vf_id);
+
+/**
+ * dlb_finish_unmap_qid_procedures() - finish any pending unmap procedures
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding unmap procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb_finish_unmap_qid_procedures(struct dlb_hw *hw);
+
+/**
+ * dlb_finish_map_qid_procedures() - finish any pending map procedures
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function attempts to finish any outstanding map procedures.
+ * This function should be called by the kernel thread responsible for
+ * finishing map/unmap procedures.
+ *
+ * Return:
+ * Returns the number of procedures that weren't completed.
+ */
+unsigned int dlb_finish_map_qid_procedures(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_enable_ldb_port() - enable a load-balanced port for scheduling
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port enable arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to schedule QEs to a load-balanced port.
+ * Ports are enabled by default.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid or the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_enable_ldb_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_enable_ldb_port_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
+/**
+ * dlb_hw_disable_ldb_port() - disable a load-balanced port for scheduling
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port disable arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs to a load-balanced
+ * port. Ports are enabled by default.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid or the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_disable_ldb_port(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_disable_ldb_port_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id);
+
+/**
+ * dlb_hw_enable_dir_port() - enable a directed port for scheduling
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port enable arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to schedule QEs to a directed port.
+ * Ports are enabled by default.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid or the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_enable_dir_port(struct dlb_hw *hw,
+			   u32 domain_id,
+			   struct dlb_enable_dir_port_args *args,
+			   struct dlb_cmd_response *resp,
+			   bool vf_request,
+			   unsigned int vf_id);
+
+/**
+ * dlb_hw_disable_dir_port() - disable a directed port for scheduling
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: port disable arguments.
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function configures the DLB to stop scheduling QEs to a directed port.
+ * Ports are enabled by default.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid or the domain is not configured.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb_hw_disable_dir_port(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_disable_dir_port_args *args,
+			    struct dlb_cmd_response *resp,
+			    bool vf_request,
+			    unsigned int vf_id);
+
+/**
+ * dlb_configure_ldb_cq_interrupt() - configure load-balanced CQ for interrupts
+ * @hw: dlb_hw handle for a particular device.
+ * @port_id: load-balanced port ID.
+ * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
+ *	    else a value up to 64.
+ * @mode: interrupt type (DLB_CQ_ISR_MODE_MSI or DLB_CQ_ISR_MODE_MSIX)
+ * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
+ *	virtual port ID to a physical port ID. Ignored if mode is not MSI.
+ * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
+ * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
+ *	greater than 0.
+ *
+ * This function configures the DLB registers for a load-balanced CQ's
+ * interrupts. This doesn't enable the CQ's interrupt; that can be done with
+ * dlb_arm_cq_interrupt() or through an interrupt arm QE.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid.
+ */
+int dlb_configure_ldb_cq_interrupt(struct dlb_hw *hw,
+				   int port_id,
+				   int vector,
+				   int mode,
+				   unsigned int vf,
+				   unsigned int owner_vf,
+				   u16 threshold);
+
+/**
+ * dlb_configure_dir_cq_interrupt() - configure directed CQ for interrupts
+ * @hw: dlb_hw handle for a particular device.
+ * @port_id: directed port ID.
+ * @vector: interrupt vector ID. Should be 0 for MSI or compressed MSI-X mode,
+ *	    else a value up to 64.
+ * @mode: interrupt type (DLB_CQ_ISR_MODE_MSI or DLB_CQ_ISR_MODE_MSIX)
+ * @vf: If the port is VF-owned, the VF's ID. This is used for translating the
+ *	virtual port ID to a physical port ID. Ignored if mode is not MSI.
+ * @owner_vf: the VF to route the interrupt to. Ignored if mode is not MSI.
+ * @threshold: the minimum CQ depth at which the interrupt can fire. Must be
+ *	greater than 0.
+ *
+ * This function configures the DLB registers for a directed CQ's interrupts.
+ * This doesn't enable the CQ's interrupt; that can be done with
+ * dlb_arm_cq_interrupt() or through an interrupt arm QE.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * Errors:
+ * EINVAL - The port ID is invalid.
+ */
+int dlb_configure_dir_cq_interrupt(struct dlb_hw *hw,
+				   int port_id,
+				   int vector,
+				   int mode,
+				   unsigned int vf,
+				   unsigned int owner_vf,
+				   u16 threshold);
+
+/**
+ * dlb_enable_alarm_interrupts() - enable certain hardware alarm interrupts
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function configures the ingress error alarm. (Other alarms are enabled
+ * by default.)
+ */
+void dlb_enable_alarm_interrupts(struct dlb_hw *hw);
+
+/**
+ * dlb_disable_alarm_interrupts() - disable certain hardware alarm interrupts
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function configures the ingress error alarm. (Other alarms are disabled
+ * by default.)
+ */
+void dlb_disable_alarm_interrupts(struct dlb_hw *hw);
+
+/**
+ * dlb_set_msix_mode() - set the device's MSI-X mode
+ * @hw: dlb_hw handle for a particular device.
+ * @mode: MSI-X mode (DLB_MSIX_MODE_PACKED or DLB_MSIX_MODE_COMPRESSED)
+ *
+ * This function configures the hardware to use either packed or compressed
+ * mode. This function should not be called if using MSI interrupts.
+ */
+void dlb_set_msix_mode(struct dlb_hw *hw, int mode);
+
+/**
+ * dlb_arm_cq_interrupt() - arm a CQ's interrupt
+ * @hw: dlb_hw handle for a particular device.
+ * @port_id: port ID
+ * @is_ldb: true for load-balanced port, false for a directed port
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function arms the CQ's interrupt. The CQ must be configured prior to
+ * calling this function.
+ *
+ * The function does no parameter validation; that is the caller's
+ * responsibility.
+ *
+ * Return: returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - Invalid port ID.
+ */
+int dlb_arm_cq_interrupt(struct dlb_hw *hw,
+			 int port_id,
+			 bool is_ldb,
+			 bool vf_request,
+			 unsigned int vf_id);
+
+/**
+ * dlb_read_compressed_cq_intr_status() - read compressed CQ interrupt status
+ * @hw: dlb_hw handle for a particular device.
+ * @ldb_interrupts: 2-entry array of u32 bitmaps
+ * @dir_interrupts: 4-entry array of u32 bitmaps
+ *
+ * This function can be called from a compressed CQ interrupt handler to
+ * determine which CQ interrupts have fired. The caller should take
+ * appropriate action (such as waking threads blocked on a CQ's interrupt),
+ * then ack the interrupts
+ * with dlb_ack_compressed_cq_intr().
+ */
+void dlb_read_compressed_cq_intr_status(struct dlb_hw *hw,
+					u32 *ldb_interrupts,
+					u32 *dir_interrupts);
+
+/**
+ * dlb_ack_compressed_cq_intr() - ack compressed CQ interrupts
+ * @hw: dlb_hw handle for a particular device.
+ * @ldb_interrupts: 2-entry array of u32 bitmaps
+ * @dir_interrupts: 4-entry array of u32 bitmaps
+ *
+ * This function ACKs compressed CQ interrupts. Its arguments should be the
+ * same ones passed to dlb_read_compressed_cq_intr_status().
+ */
+void dlb_ack_compressed_cq_intr(struct dlb_hw *hw,
+				u32 *ldb_interrupts,
+				u32 *dir_interrupts);
+
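+/*
+ * Example compressed-mode ISR flow (illustrative sketch):
+ *
+ *	u32 ldb[2], dir[4];
+ *
+ *	dlb_read_compressed_cq_intr_status(hw, ldb, dir);
+ *	(wake any threads blocked on the CQs whose bits are set)
+ *	dlb_ack_compressed_cq_intr(hw, ldb, dir);
+ */
+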
+/**
+ * dlb_read_vf_intr_status() - read the VF interrupt status register
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function can be called from a VF's interrupt handler to determine
+ * which interrupts have fired. The first 31 bits correspond to CQ interrupt
+ * vectors, and the final bit is for the PF->VF mailbox interrupt vector.
+ *
+ * Return:
+ * Returns a bit vector indicating which interrupt vectors are active.
+ */
+u32 dlb_read_vf_intr_status(struct dlb_hw *hw);
+
+/**
+ * dlb_ack_vf_intr_status() - ack VF interrupts
+ * @hw: dlb_hw handle for a particular device.
+ * @interrupts: 32-bit bitmap
+ *
+ * This function ACKs a VF's interrupts. Its interrupts argument should be the
+ * value returned by dlb_read_vf_intr_status().
+ */
+void dlb_ack_vf_intr_status(struct dlb_hw *hw, u32 interrupts);
+
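+/*
+ * Example VF ISR flow (illustrative sketch):
+ *
+ *	u32 status = dlb_read_vf_intr_status(hw);
+ *
+ *	(service the CQ vectors in bits 0-30 and, if bit 31 is set, the
+ *	 PF->VF mailbox)
+ *	dlb_ack_vf_intr_status(hw, status);
+ */
+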
+/**
+ * dlb_ack_vf_msi_intr() - ack VF MSI interrupt
+ * @hw: dlb_hw handle for a particular device.
+ * @interrupts: 32-bit bitmap
+ *
+ * This function clears the VF's MSI interrupt pending register. Its interrupts
+ * argument should contain the MSI vectors to ACK. For example, if the MSI MME
+ * is in mode 0, then only bit 0 should ever be set.
+ */
+void dlb_ack_vf_msi_intr(struct dlb_hw *hw, u32 interrupts);
+
+/**
+ * dlb_ack_pf_mbox_int() - ack PF->VF mailbox interrupt
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * When done processing the PF mailbox request, this function unsets
+ * the PF's mailbox ISR register.
+ */
+void dlb_ack_pf_mbox_int(struct dlb_hw *hw);
+
+/**
+ * dlb_read_vf_to_pf_int_bitvec() - return a bit vector of all requesting VFs
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * When the VF->PF ISR fires, this function can be called to determine which
+ * VF(s) are requesting service. This bitvector must be passed to
+ * dlb_ack_vf_to_pf_int() when processing is complete for all requesting VFs.
+ *
+ * Return:
+ * Returns a bit vector indicating which VFs (0-15) have requested service.
+ */
+u32 dlb_read_vf_to_pf_int_bitvec(struct dlb_hw *hw);
+
+/**
+ * dlb_ack_vf_mbox_int() - ack processed VF->PF mailbox interrupt
+ * @hw: dlb_hw handle for a particular device.
+ * @bitvec: bit vector returned by dlb_read_vf_to_pf_int_bitvec()
+ *
+ * When done processing all VF mailbox requests, this function unsets the VF's
+ * mailbox ISR register.
+ */
+void dlb_ack_vf_mbox_int(struct dlb_hw *hw, u32 bitvec);
+
+/**
+ * dlb_read_vf_flr_int_bitvec() - return a bit vector of all VFs requesting FLR
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * When the VF FLR ISR fires, this function can be called to determine which
+ * VF(s) are requesting FLRs. This bitvector must be passed to
+ * dlb_ack_vf_flr_int() when processing is complete for all requesting VFs.
+ *
+ * Return:
+ * Returns a bit vector indicating which VFs (0-15) have requested FLRs.
+ */
+u32 dlb_read_vf_flr_int_bitvec(struct dlb_hw *hw);
+
+/**
+ * dlb_ack_vf_flr_int() - ack processed VF<->PF interrupt(s)
+ * @hw: dlb_hw handle for a particular device.
+ * @bitvec: bit vector returned by dlb_read_vf_flr_int_bitvec()
+ * @a_stepping: device is A-stepping
+ *
+ * When done processing all VF FLR requests, this function unsets the VF's FLR
+ * ISR register.
+ *
+ * Note: The caller must ensure dlb_set_vf_reset_in_progress(),
+ * dlb_clr_vf_reset_in_progress(), and dlb_ack_vf_flr_int() are not executed in
+ * parallel, because the reset-in-progress register does not support atomic
+ * updates on A-stepping devices.
+ */
+void dlb_ack_vf_flr_int(struct dlb_hw *hw, u32 bitvec, bool a_stepping);
+
+/**
+ * dlb_ack_vf_to_pf_int() - ack processed VF mbox and FLR interrupt(s)
+ * @hw: dlb_hw handle for a particular device.
+ * @mbox_bitvec: bit vector returned by dlb_read_vf_to_pf_int_bitvec()
+ * @flr_bitvec: bit vector returned by dlb_read_vf_flr_int_bitvec()
+ *
+ * When done processing all VF requests, this function communicates to the
+ * hardware that processing is complete. When this function completes, hardware
+ * can immediately generate another VF mbox or FLR interrupt.
+ */
+void dlb_ack_vf_to_pf_int(struct dlb_hw *hw,
+			  u32 mbox_bitvec,
+			  u32 flr_bitvec);
+
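+/*
+ * Example PF-side service flow (illustrative sketch; a_stepping reflects
+ * the device stepping):
+ *
+ *	u32 mbox_bitvec = dlb_read_vf_to_pf_int_bitvec(hw);
+ *	u32 flr_bitvec = dlb_read_vf_flr_int_bitvec(hw);
+ *
+ *	(process the mailbox requests and FLRs the bitvecs indicate)
+ *	dlb_ack_vf_mbox_int(hw, mbox_bitvec);
+ *	dlb_ack_vf_flr_int(hw, flr_bitvec, a_stepping);
+ *	dlb_ack_vf_to_pf_int(hw, mbox_bitvec, flr_bitvec);
+ */
+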
+/**
+ * dlb_process_alarm_interrupt() - process an alarm interrupt
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function reads the alarm syndrome, logs it, and acks the interrupt.
+ * This function should be called from the alarm interrupt handler when
+ * interrupt vector DLB_INT_ALARM fires.
+ */
+void dlb_process_alarm_interrupt(struct dlb_hw *hw);
+
+/**
+ * dlb_process_ingress_error_interrupt() - process ingress error interrupts
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function reads the alarm syndrome, logs it, notifies user-space, and
+ * acks the interrupt. This function should be called from the alarm interrupt
+ * handler when interrupt vector DLB_INT_INGRESS_ERROR fires.
+ */
+void dlb_process_ingress_error_interrupt(struct dlb_hw *hw);
+
+/**
+ * dlb_get_group_sequence_numbers() - return a group's number of SNs per queue
+ * @hw: dlb_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the configured number of sequence numbers per queue
+ * for the specified group.
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.
+ */
+int dlb_get_group_sequence_numbers(struct dlb_hw *hw, unsigned int group_id);
+
+/**
+ * dlb_get_group_sequence_number_occupancy() - return a group's in-use slots
+ * @hw: dlb_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ *
+ * This function returns the group's number of in-use slots (i.e. load-balanced
+ * queues using the specified group).
+ *
+ * Return:
+ * Returns -EINVAL if group_id is invalid, else the group's occupancy.
+ */
+int dlb_get_group_sequence_number_occupancy(struct dlb_hw *hw,
+					    unsigned int group_id);
+
+/**
+ * dlb_set_group_sequence_numbers() - assign a group's number of SNs per queue
+ * @hw: dlb_hw handle for a particular device.
+ * @group_id: sequence number group ID.
+ * @val: requested amount of sequence numbers per queue.
+ *
+ * This function configures the group's number of sequence numbers per queue.
+ * val can be a power-of-two between 32 and 1024, inclusive. This setting can
+ * be configured until the first ordered load-balanced queue is configured, at
+ * which point the configuration is locked.
+ *
+ * Return:
+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an
+ * ordered queue is configured.
+ */
+int dlb_set_group_sequence_numbers(struct dlb_hw *hw,
+				   unsigned int group_id,
+				   unsigned long val);
+
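+/*
+ * Example (illustrative): give group 0 sixty-four sequence numbers per
+ * queue. This must run before the first ordered load-balanced queue is
+ * configured.
+ *
+ *	int ret = dlb_set_group_sequence_numbers(hw, 0, 64);
+ */
+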
+/**
+ * dlb_reset_domain() - reset a scheduling domain
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function resets and frees a DLB scheduling domain and its associated
+ * resources.
+ *
+ * Pre-condition: the driver must ensure software has stopped sending QEs
+ * through this domain's producer ports before invoking this function, or
+ * undefined behavior will result.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise.
+ *
+ * EINVAL - Invalid domain ID, or the domain is not configured.
+ * EFAULT - Internal error. (Possibly caused if the pre-condition above is
+ *	    not met.)
+ * ETIMEDOUT - Hardware component didn't reset in the expected time.
+ */
+int dlb_reset_domain(struct dlb_hw *hw,
+		     u32 domain_id,
+		     bool vf_request,
+		     unsigned int vf_id);
+
+/**
+ * dlb_ldb_port_owned_by_domain() - query whether a port is owned by a domain
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @port_id: load-balanced port ID.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns whether a load-balanced port is owned by a specified
+ * domain.
+ *
+ * Return:
+ * Returns 0 if false, 1 if true, <0 otherwise.
+ *
+ * EINVAL - Invalid domain or port ID, or the domain is not configured.
+ */
+int dlb_ldb_port_owned_by_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 u32 port_id,
+				 bool vf_request,
+				 unsigned int vf_id);
+
+/**
+ * dlb_dir_port_owned_by_domain() - query whether a port is owned by a domain
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @port_id: directed port ID.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns whether a directed port is owned by a specified
+ * domain.
+ *
+ * Return:
+ * Returns 0 if false, 1 if true, <0 otherwise.
+ *
+ * EINVAL - Invalid domain or port ID, or the domain is not configured.
+ */
+int dlb_dir_port_owned_by_domain(struct dlb_hw *hw,
+				 u32 domain_id,
+				 u32 port_id,
+				 bool vf_request,
+				 unsigned int vf_id);
+
+/**
+ * dlb_hw_get_num_resources() - query the PCI function's available resources
+ * @hw: dlb_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns the number of available resources for the PF or for a
+ * VF.
+ *
+ * Return:
+ * Returns 0 upon success, -1 if vf_request is true and vf_id is invalid.
+ */
+int dlb_hw_get_num_resources(struct dlb_hw *hw,
+			     struct dlb_get_num_resources_args *arg,
+			     bool vf_request,
+			     unsigned int vf_id);
+
+/**
+ * dlb_hw_get_num_used_resources() - query the PCI function's used resources
+ * @hw: dlb_hw handle for a particular device.
+ * @arg: pointer to resource counts.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns the number of resources in use by the PF or a VF. It
+ * fills in the fields that arg points to, except the following:
+ * - max_contiguous_atomic_inflights
+ * - max_contiguous_hist_list_entries
+ * - max_contiguous_ldb_credits
+ * - max_contiguous_dir_credits
+ *
+ * Return:
+ * Returns 0 upon success, -1 if vf_request is true and vf_id is invalid.
+ */
+int dlb_hw_get_num_used_resources(struct dlb_hw *hw,
+				  struct dlb_get_num_resources_args *arg,
+				  bool vf_request,
+				  unsigned int vf_id);
+
+/**
+ * dlb_send_async_vf_to_pf_msg() - (VF only) send a mailbox message to the PF
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function sends a VF->PF mailbox message. It is asynchronous, so it
+ * returns once the message is sent but potentially before the PF has processed
+ * the message. The caller must call dlb_vf_to_pf_complete() to determine when
+ * the PF has finished processing the request.
+ */
+void dlb_send_async_vf_to_pf_msg(struct dlb_hw *hw);
+
+/**
+ * dlb_vf_to_pf_complete() - check the status of an asynchronous mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function returns a boolean indicating whether the PF has finished
+ * processing a VF->PF mailbox request. It should only be called after sending
+ * an asynchronous request with dlb_send_async_vf_to_pf_msg().
+ */
+bool dlb_vf_to_pf_complete(struct dlb_hw *hw);
+
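+/*
+ * Example VF->PF mailbox round trip (illustrative sketch; req and resp
+ * stand in for caller-defined message structures):
+ *
+ *	dlb_vf_write_pf_mbox_req(hw, &req, sizeof(req));
+ *	dlb_send_async_vf_to_pf_msg(hw);
+ *	while (!dlb_vf_to_pf_complete(hw))
+ *		(wait or poll);
+ *	dlb_vf_read_pf_mbox_resp(hw, &resp, sizeof(resp));
+ */
+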
+/**
+ * dlb_vf_flr_complete() - check the status of a VF FLR
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function returns a boolean indicating whether the PF has finished
+ * executing the VF FLR. It should only be called after setting the VF's FLR
+ * bit.
+ */
+bool dlb_vf_flr_complete(struct dlb_hw *hw);
+
+/**
+ * dlb_set_vf_reset_in_progress() - set a VF's reset in progress bit
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ *
+ * Note: This function is only supported on A-stepping devices.
+ *
+ * Note: The caller must ensure dlb_set_vf_reset_in_progress(),
+ * dlb_clr_vf_reset_in_progress(), and dlb_ack_vf_flr_int() are not executed in
+ * parallel, because the reset-in-progress register does not support atomic
+ * updates on A-stepping devices.
+ */
+void dlb_set_vf_reset_in_progress(struct dlb_hw *hw, int vf_id);
+
+/**
+ * dlb_clr_vf_reset_in_progress() - clear a VF's reset in progress bit
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ *
+ * Note: This function is only supported on A-stepping devices.
+ *
+ * Note: The caller must ensure dlb_set_vf_reset_in_progress(),
+ * dlb_clr_vf_reset_in_progress(), and dlb_ack_vf_flr_int() are not executed in
+ * parallel, because the reset-in-progress register does not support atomic
+ * updates on A-stepping devices.
+ */
+void dlb_clr_vf_reset_in_progress(struct dlb_hw *hw, int vf_id);
+
+/**
+ * dlb_send_async_pf_to_vf_msg() - (PF only) send a mailbox message to the VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ *
+ * This function sends a PF->VF mailbox message. It is asynchronous, so it
+ * returns once the message is sent but potentially before the VF has processed
+ * the message. The caller must call dlb_pf_to_vf_complete() to determine when
+ * the VF has finished processing the request.
+ */
+void dlb_send_async_pf_to_vf_msg(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_pf_to_vf_complete() - check the status of an asynchronous mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ *
+ * This function returns a boolean indicating whether the VF has finished
+ * processing a PF->VF mailbox request. It should only be called after sending
+ * an asynchronous request with dlb_send_async_pf_to_vf_msg().
+ */
+bool dlb_pf_to_vf_complete(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_pf_read_vf_mbox_req() - (PF only) read a VF->PF mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies one of the PF's VF->PF mailboxes into the array pointed
+ * to by data.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len is 0 or len > DLB_VF2PF_REQ_BYTES.
+ */
+int dlb_pf_read_vf_mbox_req(struct dlb_hw *hw,
+			    unsigned int vf_id,
+			    void *data,
+			    int len);
+
+/**
+ * dlb_pf_read_vf_mbox_resp() - (PF only) read a VF->PF mailbox response
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies one of the PF's VF->PF mailboxes into the array pointed
+ * to by data.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_VF2PF_RESP_BYTES.
+ */
+int dlb_pf_read_vf_mbox_resp(struct dlb_hw *hw,
+			     unsigned int vf_id,
+			     void *data,
+			     int len);
+
+/**
+ * dlb_pf_write_vf_mbox_resp() - (PF only) write a PF->VF mailbox response
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the user-provided message data into one of the PF's
+ * PF->VF mailboxes.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_PF2VF_RESP_BYTES.
+ */
+int dlb_pf_write_vf_mbox_resp(struct dlb_hw *hw,
+			      unsigned int vf_id,
+			      void *data,
+			      int len);
+
+/**
+ * dlb_pf_write_vf_mbox_req() - (PF only) write a PF->VF mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the user-provided message data into one of the PF's
+ * PF->VF mailboxes.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_PF2VF_REQ_BYTES.
+ */
+int dlb_pf_write_vf_mbox_req(struct dlb_hw *hw,
+			     unsigned int vf_id,
+			     void *data,
+			     int len);
+
+/**
+ * dlb_vf_read_pf_mbox_resp() - (VF only) read a PF->VF mailbox response
+ * @hw: dlb_hw handle for a particular device.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the VF's PF->VF mailbox into the array pointed to by
+ * data.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len is 0 or len > DLB_PF2VF_RESP_BYTES.
+ */
+int dlb_vf_read_pf_mbox_resp(struct dlb_hw *hw, void *data, int len);
+
+/**
+ * dlb_vf_read_pf_mbox_req() - (VF only) read a PF->VF mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the VF's PF->VF mailbox into the array pointed to by
+ * data.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_PF2VF_REQ_BYTES.
+ */
+int dlb_vf_read_pf_mbox_req(struct dlb_hw *hw, void *data, int len);
+
+/**
+ * dlb_vf_write_pf_mbox_req() - (VF only) write a VF->PF mailbox request
+ * @hw: dlb_hw handle for a particular device.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the user-provided message data into one of the VF's
+ * VF->PF mailboxes.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_VF2PF_REQ_BYTES.
+ */
+int dlb_vf_write_pf_mbox_req(struct dlb_hw *hw, void *data, int len);
+
+/**
+ * dlb_vf_write_pf_mbox_resp() - (VF only) write a VF->PF mailbox response
+ * @hw: dlb_hw handle for a particular device.
+ * @data: pointer to message data.
+ * @len: size, in bytes, of the data array.
+ *
+ * This function copies the user-provided message data into one of the VF's
+ * VF->PF mailboxes.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * EINVAL - len > DLB_VF2PF_RESP_BYTES.
+ */
+int dlb_vf_write_pf_mbox_resp(struct dlb_hw *hw, void *data, int len);
+
+/**
+ * dlb_reset_vf() - reset the hardware owned by a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function resets the hardware owned by a VF (if any), by resetting the
+ * VF's domains one by one.
+ */
+int dlb_reset_vf(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_vf_is_locked() - check whether the VF's resources are locked
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function returns whether or not the VF's resource assignments are
+ * locked. If locked, no resources can be added to or subtracted from the
+ * group.
+ */
+bool dlb_vf_is_locked(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_lock_vf() - lock the VF's resources
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function sets a flag indicating that the VF is using its resources.
+ * When a VF is locked, its resource assignment cannot be changed.
+ */
+void dlb_lock_vf(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_unlock_vf() - unlock the VF's resources
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function unlocks the VF's resource assignment, allowing it to be
+ * modified.
+ */
+void dlb_unlock_vf(struct dlb_hw *hw, unsigned int vf_id);
+
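+/*
+ * Example provisioning flow (illustrative sketch; the resource counts are
+ * arbitrary):
+ *
+ *	if (!dlb_vf_is_locked(hw, vf_id)) {
+ *		dlb_update_vf_sched_domains(hw, vf_id, 1);
+ *		dlb_update_vf_ldb_queues(hw, vf_id, 8);
+ *	}
+ *	dlb_lock_vf(hw, vf_id);	(freeze the assignment before use)
+ */
+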
+/**
+ * dlb_update_vf_sched_domains() - update the domains assigned to a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of scheduling domains to assign to this VF
+ *
+ * This function assigns num scheduling domains to the specified VF. If the VF
+ * already has domains assigned, this existing assignment is adjusted
+ * accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources are
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_sched_domains(struct dlb_hw *hw,
+				u32 vf_id,
+				u32 num);
+
+/**
+ * dlb_update_vf_ldb_queues() - update the LDB queues assigned to a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of LDB queues to assign to this VF
+ *
+ * This function assigns num LDB queues to the specified VF. If the VF already
+ * has LDB queues assigned, this existing assignment is adjusted
+ * accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources are
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_ldb_queues(struct dlb_hw *hw, u32 vf_id, u32 num);
+
+/**
+ * dlb_update_vf_ldb_ports() - update the LDB ports assigned to a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of LDB ports to assign to this VF
+ *
+ * This function assigns num LDB ports to the specified VF. If the VF already
+ * has LDB ports assigned, this existing assignment is adjusted accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources are
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_ldb_ports(struct dlb_hw *hw, u32 vf_id, u32 num);
+
+/**
+ * dlb_update_vf_dir_ports() - update the DIR ports assigned to a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of DIR ports to assign to this VF
+ *
+ * This function assigns num DIR ports to the specified VF. If the VF already
+ * has DIR ports assigned, this existing assignment is adjusted accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources are
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_dir_ports(struct dlb_hw *hw, u32 vf_id, u32 num);
+
+/**
+ * dlb_update_vf_ldb_credit_pools() - update the VF's assigned LDB pools
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of LDB credit pools to assign to this VF
+ *
+ * This function assigns num LDB credit pools to the specified VF. If the VF
+ * already has LDB credit pools assigned, this existing assignment is adjusted
+ * accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_ldb_credit_pools(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num);
+
+/**
+ * dlb_update_vf_dir_credit_pools() - update the VF's assigned DIR pools
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of DIR credit pools to assign to this VF
+ *
+ * This function assigns num DIR credit pools to the specified VF. If the VF
+ * already has DIR credit pools assigned, this existing assignment is adjusted
+ * accordingly.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_dir_credit_pools(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num);
+
+/**
+ * dlb_update_vf_ldb_credits() - update the VF's assigned LDB credits
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of LDB credits to assign to this VF
+ *
+ * This function assigns num LDB credits to the specified VF. If the VF already
+ * has LDB credits assigned, this existing assignment is adjusted accordingly.
+ * VFs are assigned a contiguous chunk of credits, so this function may fail
+ * if a sufficiently large contiguous chunk is not available.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_ldb_credits(struct dlb_hw *hw, u32 vf_id, u32 num);
+
+/**
+ * dlb_update_vf_dir_credits() - update the VF's assigned DIR credits
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of DIR credits to assign to this VF
+ *
+ * This function assigns num DIR credits to the specified VF. If the VF already
+ * has DIR credits assigned, this existing assignment is adjusted accordingly.
+ * VFs are assigned a contiguous chunk of credits, so this function may fail
+ * if a sufficiently large contiguous chunk is not available.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_dir_credits(struct dlb_hw *hw, u32 vf_id, u32 num);
+
+/**
+ * dlb_update_vf_hist_list_entries() - update the VF's assigned HL entries
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of history list entries to assign to this VF
+ *
+ * This function assigns num history list entries to the specified VF. If the
+ * VF already has history list entries assigned, this existing assignment is
+ * adjusted accordingly. VFs are assigned a contiguous chunk of entries, so
+ * this function may fail if a sufficiently large contiguous chunk is not
+ * available.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_hist_list_entries(struct dlb_hw *hw,
+				    u32 vf_id,
+				    u32 num);
+
+/**
+ * dlb_update_vf_atomic_inflights() - update the VF's atomic inflights
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @num: number of atomic inflights to assign to this VF
+ *
+ * This function assigns num atomic inflights to the specified VF. If the VF
+ * already has atomic inflights assigned, this existing assignment is adjusted
+ * accordingly. VFs are assigned a contiguous chunk of entries, so this
+ * function may fail if a sufficiently large contiguous chunk is not available.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid, or the requested number of resources is
+ *	    unavailable.
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_update_vf_atomic_inflights(struct dlb_hw *hw,
+				   u32 vf_id,
+				   u32 num);
+
+/**
+ * dlb_reset_vf_resources() - reassign the VF's resources to the PF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function takes any resources currently assigned to the VF and reassigns
+ * them to the PF.
+ *
+ * Return:
+ * Returns 0 upon success, <0 otherwise.
+ *
+ * Errors:
+ * EINVAL - vf_id is invalid
+ * EPERM  - The VF's resource assignment is locked and cannot be changed.
+ */
+int dlb_reset_vf_resources(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_notify_vf() - send a notification to a VF
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ * @notification: notification
+ *
+ * This function sends a notification (as defined in dlb_mbox.h) to a VF.
+ *
+ * Return:
+ * Returns 0 upon success, -1 if the VF doesn't ACK the PF->VF interrupt.
+ */
+int dlb_notify_vf(struct dlb_hw *hw,
+		  unsigned int vf_id,
+		  u32 notification);
+
+/**
+ * dlb_vf_in_use() - query whether a VF is in use
+ * @hw: dlb_hw handle for a particular device.
+ * @vf_id: VF ID
+ *
+ * This function sends a mailbox request to the VF to query whether the VF is in
+ * use.
+ *
+ * Return:
+ * Returns 0 for false, 1 for true, and -1 if the mailbox request times out or
+ * an internal error occurs.
+ */
+int dlb_vf_in_use(struct dlb_hw *hw, unsigned int vf_id);
+
+/**
+ * dlb_disable_dp_vasr_feature() - disable directed pipe VAS reset hardware
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function disables certain hardware in the directed pipe, which is
+ * necessary to work around a DLB VAS reset issue.
+ */
+void dlb_disable_dp_vasr_feature(struct dlb_hw *hw);
+
+/**
+ * dlb_enable_excess_tokens_alarm() - enable interrupts for the excess token
+ * pop alarm
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function enables the PF ingress error alarm interrupt to fire when an
+ * excess token pop occurs.
+ */
+void dlb_enable_excess_tokens_alarm(struct dlb_hw *hw);
+
+/**
+ * dlb_disable_excess_tokens_alarm() - disable interrupts for the excess token
+ * pop alarm
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function disables the PF ingress error alarm interrupt from firing
+ * when an excess token pop occurs.
+ */
+void dlb_disable_excess_tokens_alarm(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb_hw_get_ldb_queue_depth(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_get_ldb_queue_depth_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id);
+
+/**
+ * dlb_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb_hw_get_dir_queue_depth(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_get_dir_queue_depth_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id);
+
+/**
+ * dlb_hw_pending_port_unmaps() - returns the number of unmap operations in
+ *	progress for a load-balanced port.
+ * @hw: dlb_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: number of unmaps in progress args
+ * @resp: response structure.
+ * @vf_request: indicates whether this request came from a VF.
+ * @vf_id: If vf_request is true, this contains the VF's ID.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb_error. If successful, resp->id
+ * contains the number of unmaps in progress.
+ *
+ * Errors:
+ * EINVAL - Invalid port ID.
+ */
+int dlb_hw_pending_port_unmaps(struct dlb_hw *hw,
+			       u32 domain_id,
+			       struct dlb_pending_port_unmaps_args *args,
+			       struct dlb_cmd_response *resp,
+			       bool vf_request,
+			       unsigned int vf_id);
+
+/**
+ * dlb_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ *	ports.
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb_hw_enable_sparse_ldb_cq_mode(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports
+ * @hw: dlb_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb_hw_enable_sparse_dir_cq_mode(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_set_qe_arbiter_weights() - program QE arbiter weights
+ * @hw: dlb_hw handle for a particular device.
+ * @weight: 8-entry array of arbiter weights.
+ *
+ * weight[N] programs priority N's weight. In cases where the 8 priorities are
+ * reduced to 4 bins, the mapping is:
+ * - weight[1] programs bin 0
+ * - weight[3] programs bin 1
+ * - weight[5] programs bin 2
+ * - weight[7] programs bin 3
+ */
+void dlb_hw_set_qe_arbiter_weights(struct dlb_hw *hw, u8 weight[8]);
+
+/**
+ * dlb_hw_set_qid_arbiter_weights() - program QID arbiter weights
+ * @hw: dlb_hw handle for a particular device.
+ * @weight: 8-entry array of arbiter weights.
+ *
+ * weight[N] programs priority N's weight. In cases where the 8 priorities are
+ * reduced to 4 bins, the mapping is:
+ * - weight[1] programs bin 0
+ * - weight[3] programs bin 1
+ * - weight[5] programs bin 2
+ * - weight[7] programs bin 3
+ */
+void dlb_hw_set_qid_arbiter_weights(struct dlb_hw *hw, u8 weight[8]);
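+
+/*
+ * Illustrative example (not in the original patch): per the mapping above,
+ * only the odd-indexed entries take effect when the 8 priorities are reduced
+ * to 4 bins. The array below weights bin 0 twice as heavily as bins 1-3
+ * ("hw" is assumed to be a configured device handle):
+ *
+ *	u8 weights[8] = {0, 2, 0, 1, 0, 1, 0, 1};
+ *
+ *	dlb_hw_set_qe_arbiter_weights(hw, weights);
+ *	dlb_hw_set_qid_arbiter_weights(hw, weights);
+ */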
+
+/**
+ * dlb_hw_enable_pp_sw_alarms() - enable out-of-credit alarm for all producer
+ * ports
+ * @hw: dlb_hw handle for a particular device.
+ */
+void dlb_hw_enable_pp_sw_alarms(struct dlb_hw *hw);
+
+/**
+ * dlb_hw_disable_pp_sw_alarms() - disable out-of-credit alarm for all producer
+ * ports
+ * @hw: dlb_hw handle for a particular device.
+ */
+void dlb_hw_disable_pp_sw_alarms(struct dlb_hw *hw);
+
+#endif /* __DLB_RESOURCE_H */
diff --git a/drivers/event/dlb/pf/base/dlb_user.h b/drivers/event/dlb/pf/base/dlb_user.h
new file mode 100644
index 000000000..6e7ee2ec3
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_user.h
@@ -0,0 +1,1084 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#ifndef __DLB_USER_H
+#define __DLB_USER_H
+
+#define DLB_MAX_NAME_LEN 64
+
+#include "dlb_osdep_types.h"
+
+enum dlb_error {
+	DLB_ST_SUCCESS = 0,
+	DLB_ST_NAME_EXISTS,
+	DLB_ST_DOMAIN_UNAVAILABLE,
+	DLB_ST_LDB_PORTS_UNAVAILABLE,
+	DLB_ST_DIR_PORTS_UNAVAILABLE,
+	DLB_ST_LDB_QUEUES_UNAVAILABLE,
+	DLB_ST_LDB_CREDITS_UNAVAILABLE,
+	DLB_ST_DIR_CREDITS_UNAVAILABLE,
+	DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE,
+	DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE,
+	DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE,
+	DLB_ST_INVALID_DOMAIN_ID,
+	DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION,
+	DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE,
+	DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_INVALID_LDB_CREDIT_POOL_ID,
+	DLB_ST_INVALID_DIR_CREDIT_POOL_ID,
+	DLB_ST_INVALID_POP_COUNT_VIRT_ADDR,
+	DLB_ST_INVALID_LDB_QUEUE_ID,
+	DLB_ST_INVALID_CQ_DEPTH,
+	DLB_ST_INVALID_CQ_VIRT_ADDR,
+	DLB_ST_INVALID_PORT_ID,
+	DLB_ST_INVALID_QID,
+	DLB_ST_INVALID_PRIORITY,
+	DLB_ST_NO_QID_SLOTS_AVAILABLE,
+	DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE,
+	DLB_ST_INVALID_DIR_QUEUE_ID,
+	DLB_ST_DIR_QUEUES_UNAVAILABLE,
+	DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK,
+	DLB_ST_INVALID_LDB_CREDIT_QUANTUM,
+	DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK,
+	DLB_ST_INVALID_DIR_CREDIT_QUANTUM,
+	DLB_ST_DOMAIN_NOT_CONFIGURED,
+	DLB_ST_PID_ALREADY_ATTACHED,
+	DLB_ST_PID_NOT_ATTACHED,
+	DLB_ST_INTERNAL_ERROR,
+	DLB_ST_DOMAIN_IN_USE,
+	DLB_ST_IOMMU_MAPPING_ERROR,
+	DLB_ST_FAIL_TO_PIN_MEMORY_PAGE,
+	DLB_ST_UNABLE_TO_PIN_POPCOUNT_PAGES,
+	DLB_ST_UNABLE_TO_PIN_CQ_PAGES,
+	DLB_ST_DISCONTIGUOUS_CQ_MEMORY,
+	DLB_ST_DISCONTIGUOUS_POP_COUNT_MEMORY,
+	DLB_ST_DOMAIN_STARTED,
+	DLB_ST_LARGE_POOL_NOT_SPECIFIED,
+	DLB_ST_SMALL_POOL_NOT_SPECIFIED,
+	DLB_ST_NEITHER_POOL_SPECIFIED,
+	DLB_ST_DOMAIN_NOT_STARTED,
+	DLB_ST_INVALID_MEASUREMENT_DURATION,
+	DLB_ST_INVALID_PERF_METRIC_GROUP_ID,
+	DLB_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES,
+	DLB_ST_DOMAIN_RESET_FAILED,
+	DLB_ST_MBOX_ERROR,
+	DLB_ST_INVALID_HIST_LIST_DEPTH,
+	DLB_ST_NO_MEMORY,
+};
+
+static const char dlb_error_strings[][128] = {
+	"DLB_ST_SUCCESS",
+	"DLB_ST_NAME_EXISTS",
+	"DLB_ST_DOMAIN_UNAVAILABLE",
+	"DLB_ST_LDB_PORTS_UNAVAILABLE",
+	"DLB_ST_DIR_PORTS_UNAVAILABLE",
+	"DLB_ST_LDB_QUEUES_UNAVAILABLE",
+	"DLB_ST_LDB_CREDITS_UNAVAILABLE",
+	"DLB_ST_DIR_CREDITS_UNAVAILABLE",
+	"DLB_ST_LDB_CREDIT_POOLS_UNAVAILABLE",
+	"DLB_ST_DIR_CREDIT_POOLS_UNAVAILABLE",
+	"DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE",
+	"DLB_ST_INVALID_DOMAIN_ID",
+	"DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION",
+	"DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE",
+	"DLB_ST_HIST_LIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_INVALID_LDB_CREDIT_POOL_ID",
+	"DLB_ST_INVALID_DIR_CREDIT_POOL_ID",
+	"DLB_ST_INVALID_POP_COUNT_VIRT_ADDR",
+	"DLB_ST_INVALID_LDB_QUEUE_ID",
+	"DLB_ST_INVALID_CQ_DEPTH",
+	"DLB_ST_INVALID_CQ_VIRT_ADDR",
+	"DLB_ST_INVALID_PORT_ID",
+	"DLB_ST_INVALID_QID",
+	"DLB_ST_INVALID_PRIORITY",
+	"DLB_ST_NO_QID_SLOTS_AVAILABLE",
+	"DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_DQED_FREELIST_ENTRIES_UNAVAILABLE",
+	"DLB_ST_INVALID_DIR_QUEUE_ID",
+	"DLB_ST_DIR_QUEUES_UNAVAILABLE",
+	"DLB_ST_INVALID_LDB_CREDIT_LOW_WATERMARK",
+	"DLB_ST_INVALID_LDB_CREDIT_QUANTUM",
+	"DLB_ST_INVALID_DIR_CREDIT_LOW_WATERMARK",
+	"DLB_ST_INVALID_DIR_CREDIT_QUANTUM",
+	"DLB_ST_DOMAIN_NOT_CONFIGURED",
+	"DLB_ST_PID_ALREADY_ATTACHED",
+	"DLB_ST_PID_NOT_ATTACHED",
+	"DLB_ST_INTERNAL_ERROR",
+	"DLB_ST_DOMAIN_IN_USE",
+	"DLB_ST_IOMMU_MAPPING_ERROR",
+	"DLB_ST_FAIL_TO_PIN_MEMORY_PAGE",
+	"DLB_ST_UNABLE_TO_PIN_POPCOUNT_PAGES",
+	"DLB_ST_UNABLE_TO_PIN_CQ_PAGES",
+	"DLB_ST_DISCONTIGUOUS_CQ_MEMORY",
+	"DLB_ST_DISCONTIGUOUS_POP_COUNT_MEMORY",
+	"DLB_ST_DOMAIN_STARTED",
+	"DLB_ST_LARGE_POOL_NOT_SPECIFIED",
+	"DLB_ST_SMALL_POOL_NOT_SPECIFIED",
+	"DLB_ST_NEITHER_POOL_SPECIFIED",
+	"DLB_ST_DOMAIN_NOT_STARTED",
+	"DLB_ST_INVALID_MEASUREMENT_DURATION",
+	"DLB_ST_INVALID_PERF_METRIC_GROUP_ID",
+	"DLB_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES",
+	"DLB_ST_DOMAIN_RESET_FAILED",
+	"DLB_ST_MBOX_ERROR",
+	"DLB_ST_INVALID_HIST_LIST_DEPTH",
+	"DLB_ST_NO_MEMORY",
+};
+
+struct dlb_cmd_response {
+	__u32 status; /* Interpret using enum dlb_error */
+	__u32 id;
+};
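+
+/*
+ * Illustrative helper (not part of the original header): translate a
+ * dlb_cmd_response status into its string name, guarding against
+ * out-of-range values.
+ */
+static inline const char *dlb_error_str(__u32 status)
+{
+	const __u32 num_errors = sizeof(dlb_error_strings) /
+				 sizeof(dlb_error_strings[0]);
+
+	return (status < num_errors) ? dlb_error_strings[status] : "UNKNOWN";
+}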
+
+/******************************/
+/* 'dlb' device file commands */
+/******************************/
+
+#define DLB_DEVICE_VERSION(x) (((x) >> 8) & 0xFF)
+#define DLB_DEVICE_REVISION(x) ((x) & 0xFF)
+
+enum dlb_revisions {
+	DLB_REV_A0 = 0,
+	DLB_REV_A1 = 1,
+	DLB_REV_A2 = 2,
+	DLB_REV_A3 = 3,
+	DLB_REV_B0 = 4,
+};
+
+/*
+ * DLB_CMD_GET_DEVICE_VERSION: Query the DLB device version.
+ *
+ *	This ioctl interface is the same in all driver versions and is always
+ *	the first ioctl.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id[7:0]: Device revision.
+ *	response.id[15:8]: Device version.
+ */
+
+struct dlb_get_device_version_args {
+	/* Output parameters */
+	__u64 response;
+};
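+
+/*
+ * Illustrative example (not in the original patch): decoding the response
+ * with the macros above, where "resp" is the dlb_cmd_response filled in by
+ * the ioctl:
+ *
+ *	__u32 version = DLB_DEVICE_VERSION(resp.id);
+ *	__u32 revision = DLB_DEVICE_REVISION(resp.id);
+ *
+ *	if (revision < DLB_REV_B0)
+ *		printf("pre-B0 device, version %u\n", version);
+ */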
+
+#define DLB_VERSION_MAJOR_NUMBER 10
+#define DLB_VERSION_MINOR_NUMBER 7
+#define DLB_VERSION_REVISION_NUMBER 9
+#define DLB_VERSION (DLB_VERSION_MAJOR_NUMBER << 24 | \
+		     DLB_VERSION_MINOR_NUMBER << 16 | \
+		     DLB_VERSION_REVISION_NUMBER)
+
+#define DLB_VERSION_GET_MAJOR_NUMBER(x) (((x) >> 24) & 0xFF)
+#define DLB_VERSION_GET_MINOR_NUMBER(x) (((x) >> 16) & 0xFF)
+#define DLB_VERSION_GET_REVISION_NUMBER(x) ((x) & 0xFFFF)
+
+static inline __u8 dlb_version_incompatible(__u32 version)
+{
+	__u8 inc;
+
+	inc = DLB_VERSION_GET_MAJOR_NUMBER(version) != DLB_VERSION_MAJOR_NUMBER;
+	inc |= (int)DLB_VERSION_GET_MINOR_NUMBER(version) <
+		DLB_VERSION_MINOR_NUMBER;
+
+	return inc;
+}
+
+/*
+ * DLB_CMD_GET_DRIVER_VERSION: Query the DLB driver version. The major number
+ *	is changed when there is an ABI-breaking change, the minor number is
+ *	changed if the API is changed in a backwards-compatible way, and the
+ *	revision number is changed for fixes that don't affect the API.
+ *
+ *	If the kernel driver's API version major number differs from the
+ *	header's DLB_VERSION_MAJOR_NUMBER, the two are incompatible. They are
+ *	also incompatible if the major numbers match but the kernel driver's
+ *	minor number is less than the header file's. The
+ *	dlb_version_incompatible() helper should be used to check for
+ *	compatibility.
+ *
+ *	This ioctl interface is the same in all driver versions. Applications
+ *	should check the driver version before performing any other ioctl
+ *	operations.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Driver API version. Use the DLB_VERSION_GET_MAJOR_NUMBER,
+ *		DLB_VERSION_GET_MINOR_NUMBER, and
+ *		DLB_VERSION_GET_REVISION_NUMBER macros to interpret the field.
+ */
+
+struct dlb_get_driver_version_args {
+	/* Output parameters */
+	__u64 response;
+};
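+
+/*
+ * Illustrative example (not in the original patch): a user-space
+ * compatibility check. DLB_IOC_GET_DRIVER_VERSION is a placeholder for the
+ * real ioctl request code, which is defined elsewhere in the driver:
+ *
+ *	struct dlb_get_driver_version_args args = {0};
+ *	struct dlb_cmd_response resp = {0};
+ *
+ *	args.response = (__u64)(uintptr_t)&resp;
+ *	if (ioctl(fd, DLB_IOC_GET_DRIVER_VERSION, &args) == 0 &&
+ *	    dlb_version_incompatible(resp.id))
+ *		fprintf(stderr, "incompatible driver version %u.%u\n",
+ *			DLB_VERSION_GET_MAJOR_NUMBER(resp.id),
+ *			DLB_VERSION_GET_MINOR_NUMBER(resp.id));
+ */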
+
+/*
+ * DLB_CMD_CREATE_SCHED_DOMAIN: Create a DLB scheduling domain and reserve the
+ *	resources (queues, ports, etc.) that it contains.
+ *
+ * Input parameters:
+ * - num_ldb_queues: Number of load-balanced queues.
+ * - num_ldb_ports: Number of load-balanced ports.
+ * - num_dir_ports: Number of directed ports. A directed port has one directed
+ *	queue, so no num_dir_queues argument is necessary.
+ * - num_atomic_inflights: This specifies the amount of temporary atomic QE
+ *	storage for the domain. This storage is divided among the domain's
+ *	load-balanced queues that are configured for atomic scheduling.
+ * - num_hist_list_entries: Amount of history list storage. This is divided
+ *	among the domain's CQs.
+ * - num_ldb_credits: Amount of load-balanced QE storage (QED). QEs occupy this
+ *	space until they are scheduled to a load-balanced CQ. One credit
+ *	represents the storage for one QE.
+ * - num_dir_credits: Amount of directed QE storage (DQED). QEs occupy this
+ *	space until they are scheduled to a directed CQ. One credit represents
+ *	the storage for one QE.
+ * - num_ldb_credit_pools: Number of pools into which the load-balanced credits
+ *	are placed.
+ * - num_dir_credit_pools: Number of pools into which the directed credits are
+ *	placed.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: domain ID.
+ */
+struct dlb_create_sched_domain_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_ldb_queues;
+	__u32 num_ldb_ports;
+	__u32 num_dir_ports;
+	__u32 num_atomic_inflights;
+	__u32 num_hist_list_entries;
+	__u32 num_ldb_credits;
+	__u32 num_dir_credits;
+	__u32 num_ldb_credit_pools;
+	__u32 num_dir_credit_pools;
+};
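+
+/*
+ * Illustrative example (not in the original patch): a minimal request for a
+ * single load-balanced queue/port domain. The resource counts are arbitrary
+ * and only meant to show which fields must be populated:
+ *
+ *	struct dlb_create_sched_domain_args args = {0};
+ *	struct dlb_cmd_response resp = {0};
+ *
+ *	args.response = (__u64)(uintptr_t)&resp;
+ *	args.num_ldb_queues = 1;
+ *	args.num_ldb_ports = 1;
+ *	args.num_ldb_credit_pools = 1;
+ *	args.num_ldb_credits = 128;
+ *	args.num_atomic_inflights = 64;
+ *	args.num_hist_list_entries = 64;
+ */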
+
+/*
+ * DLB_CMD_GET_NUM_RESOURCES: Return the number of available resources
+ *	(queues, ports, etc.) that this device owns.
+ *
+ * Output parameters:
+ * - num_sched_domains: Number of available scheduling domains.
+ * - num_ldb_queues: Number of available load-balanced queues.
+ * - num_ldb_ports: Number of available load-balanced ports.
+ * - num_dir_ports: Number of available directed ports. There is one directed
+ *	queue for every directed port.
+ * - num_atomic_inflights: Amount of available temporary atomic QE storage.
+ * - max_contiguous_atomic_inflights: When a domain is created, the temporary
+ *	atomic QE storage is allocated in a contiguous chunk. This return value
+ *	is the longest available contiguous range of atomic QE storage.
+ * - num_hist_list_entries: Amount of history list storage.
+ * - max_contiguous_hist_list_entries: History list storage is allocated in
+ *	a contiguous chunk, and this return value is the longest available
+ *	contiguous range of history list entries.
+ * - num_ldb_credits: Amount of available load-balanced QE storage.
+ * - max_contiguous_ldb_credits: QED storage is allocated in a contiguous
+ *	chunk, and this return value is the longest available contiguous range
+ *	of load-balanced credit storage.
+ * - num_dir_credits: Amount of available directed QE storage.
+ * - max_contiguous_dir_credits: DQED storage is allocated in a contiguous
+ *	chunk, and this return value is the longest available contiguous range
+ *	of directed credit storage.
+ * - num_ldb_credit_pools: Number of available load-balanced credit pools.
+ * - num_dir_credit_pools: Number of available directed credit pools.
+ * - padding0: Reserved for future use.
+ */
+struct dlb_get_num_resources_args {
+	/* Output parameters */
+	__u32 num_sched_domains;
+	__u32 num_ldb_queues;
+	__u32 num_ldb_ports;
+	__u32 num_dir_ports;
+	__u32 num_atomic_inflights;
+	__u32 max_contiguous_atomic_inflights;
+	__u32 num_hist_list_entries;
+	__u32 max_contiguous_hist_list_entries;
+	__u32 num_ldb_credits;
+	__u32 max_contiguous_ldb_credits;
+	__u32 num_dir_credits;
+	__u32 max_contiguous_dir_credits;
+	__u32 num_ldb_credit_pools;
+	__u32 num_dir_credit_pools;
+	__u32 padding0;
+};
+
+/*
+ * DLB_CMD_SET_SN_ALLOCATION: Configure a sequence number group
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - num: Number of sequence numbers per queue.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_set_sn_allocation_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 num;
+};
+
+/*
+ * DLB_CMD_GET_SN_ALLOCATION: Get a sequence number group's configuration
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Specified group's number of sequence numbers per queue.
+ */
+struct dlb_get_sn_allocation_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 padding0;
+};
+
+/*
+ * DLB_CMD_QUERY_CQ_POLL_MODE: Query the CQ poll mode the kernel driver is using
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: CQ poll mode (see enum dlb_cq_poll_modes).
+ */
+struct dlb_query_cq_poll_mode_args {
+	/* Output parameters */
+	__u64 response;
+};
+
+enum dlb_cq_poll_modes {
+	DLB_CQ_POLL_MODE_STD,
+	DLB_CQ_POLL_MODE_SPARSE,
+
+	/* NUM_DLB_CQ_POLL_MODE must be last */
+	NUM_DLB_CQ_POLL_MODE,
+};
+
+/*
+ * DLB_CMD_GET_SN_OCCUPANCY: Get a sequence number group's occupancy
+ *
+ * Each sequence number group has one or more slots, depending on its
+ * configuration:
+ * - If configured for 1024 sequence numbers per queue, the group has 1 slot
+ * - If configured for 512 sequence numbers per queue, the group has 2 slots
+ *   ...
+ * - If configured for 32 sequence numbers per queue, the group has 32 slots
+ *
+ * This ioctl returns the group's number of in-use slots. If its occupancy is
+ * 0, the group's sequence number allocation can be reconfigured.
+ *
+ * Input parameters:
+ * - group: Sequence number group ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Specified group's number of used slots.
+ */
+struct dlb_get_sn_occupancy_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 group;
+	__u32 padding0;
+};
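+
+/*
+ * Illustrative note (not in the original patch): following the pattern
+ * above, a group's slot count is 1024 divided by its per-queue sequence
+ * number allocation, e.g. 1024 / 64 = 16 slots when configured for 64
+ * sequence numbers per queue.
+ */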
+
+enum dlb_user_interface_commands {
+	DLB_CMD_GET_DEVICE_VERSION,
+	DLB_CMD_CREATE_SCHED_DOMAIN,
+	DLB_CMD_GET_NUM_RESOURCES,
+	DLB_CMD_GET_DRIVER_VERSION,
+	DLB_CMD_SAMPLE_PERF_COUNTERS,
+	DLB_CMD_SET_SN_ALLOCATION,
+	DLB_CMD_GET_SN_ALLOCATION,
+	DLB_CMD_MEASURE_SCHED_COUNTS,
+	DLB_CMD_QUERY_CQ_POLL_MODE,
+	DLB_CMD_GET_SN_OCCUPANCY,
+
+	/* NUM_DLB_CMD must be last */
+	NUM_DLB_CMD,
+};
+
+/*******************************/
+/* 'domain' device file alerts */
+/*******************************/
+
+/* Scheduling domain device files can be read to receive domain-specific
+ * notifications, for alerts such as hardware errors.
+ *
+ * Each alert is encoded in a 16B message. The first 8B contains the alert ID,
+ * and the second 8B is optional and contains additional information.
+ * Applications should cast read data to a struct dlb_domain_alert, and
+ * interpret the struct's alert_id according to dlb_domain_alert_id. The read
+ * length must be 16B, or the read will fail with -EINVAL.
+ *
+ * Reads are destructive, and in the case of multiple file descriptors for the
+ * same domain device file, an alert will be read by only one of the file
+ * descriptors.
+ *
+ * The driver stores alerts in a fixed-size alert ring until they are read. If
+ * the alert ring fills completely, subsequent alerts will be dropped. It is
+ * recommended that DLB applications dedicate a thread to perform blocking
+ * reads on the device file.
+ */
+enum dlb_domain_alert_id {
+	/* A destination domain queue that this domain connected to has
+	 * unregistered, and can no longer be sent to. The aux alert data
+	 * contains the queue ID.
+	 */
+	DLB_DOMAIN_ALERT_REMOTE_QUEUE_UNREGISTER,
+	/* A producer port in this domain attempted to send a QE without a
+	 * credit. aux_alert_data[7:0] contains the port ID, and
+	 * aux_alert_data[15:8] contains a flag indicating whether the port is
+	 * load-balanced (1) or directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS,
+	/* Software issued an illegal enqueue for a port in this domain. An
+	 * illegal enqueue could be:
+	 * - Illegal (excess) completion
+	 * - Illegal fragment
+	 * - Illegal enqueue command
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ,
+	/* Software issued excess CQ token pops for a port in this domain.
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS,
+	/* An enqueue contained either an invalid command encoding or a REL,
+	 * REL_T, RLS, FWD, FWD_T, FRAG, or FRAG_T from a directed port.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_ILLEGAL_HCW,
+	/* An enqueue used an illegal QID; the QID must be valid and less
+	 * than 128.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_ILLEGAL_QID,
+	/* An enqueue went to a disabled QID.
+	 *
+	 * aux_alert_data[7:0] contains the port ID, and aux_alert_data[15:8]
+	 * contains a flag indicating whether the port is load-balanced (1) or
+	 * directed (0).
+	 */
+	DLB_DOMAIN_ALERT_DISABLED_QID,
+	/* The device containing this domain was reset. All applications using
+	 * the device need to exit for the driver to complete the reset
+	 * procedure.
+	 *
+	 * aux_alert_data doesn't contain any information for this alert.
+	 */
+	DLB_DOMAIN_ALERT_DEVICE_RESET,
+	/* User-space has enqueued an alert.
+	 *
+	 * aux_alert_data contains user-provided data.
+	 */
+	DLB_DOMAIN_ALERT_USER,
+
+	/* Number of DLB domain alerts */
+	NUM_DLB_DOMAIN_ALERTS
+};
+
+static const char dlb_domain_alert_strings[][128] = {
+	"DLB_DOMAIN_ALERT_REMOTE_QUEUE_UNREGISTER",
+	"DLB_DOMAIN_ALERT_PP_OUT_OF_CREDITS",
+	"DLB_DOMAIN_ALERT_PP_ILLEGAL_ENQ",
+	"DLB_DOMAIN_ALERT_PP_EXCESS_TOKEN_POPS",
+	"DLB_DOMAIN_ALERT_ILLEGAL_HCW",
+	"DLB_DOMAIN_ALERT_ILLEGAL_QID",
+	"DLB_DOMAIN_ALERT_DISABLED_QID",
+	"DLB_DOMAIN_ALERT_DEVICE_RESET",
+	"DLB_DOMAIN_ALERT_USER",
+};
+
+struct dlb_domain_alert {
+	__u64 alert_id;
+	__u64 aux_alert_data;
+};
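+
+/*
+ * Illustrative example (not in the original patch): a dedicated reader
+ * thread draining alerts, where "fd" is assumed to be an open scheduling
+ * domain device file descriptor:
+ *
+ *	struct dlb_domain_alert alert;
+ *
+ *	while (read(fd, &alert, sizeof(alert)) == sizeof(alert)) {
+ *		if (alert.alert_id < NUM_DLB_DOMAIN_ALERTS)
+ *			printf("%s: aux data 0x%llx\n",
+ *			       dlb_domain_alert_strings[alert.alert_id],
+ *			       (unsigned long long)alert.aux_alert_data);
+ *	}
+ */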
+
+/*********************************/
+/* 'domain' device file commands */
+/*********************************/
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_POOL: Configure a load-balanced credit pool.
+ * Input parameters:
+ * - num_ldb_credits: Number of load-balanced credits (QED space) for this
+ *	pool.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: pool ID.
+ */
+struct dlb_create_ldb_pool_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_ldb_credits;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_POOL: Configure a directed credit pool.
+ * Input parameters:
+ * - num_dir_credits: Number of directed credits (DQED space) for this pool.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Pool ID.
+ */
+struct dlb_create_dir_pool_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_dir_credits;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_QUEUE: Configure a load-balanced queue.
+ * Input parameters:
+ * - num_atomic_inflights: This specifies the amount of temporary atomic QE
+ *	storage for this queue. If zero, the queue will not support atomic
+ *	scheduling.
+ * - num_sequence_numbers: This specifies the number of sequence numbers used
+ *	by this queue. If zero, the queue will not support ordered scheduling.
+ *	If non-zero, the queue will not support unordered scheduling.
+ * - num_qid_inflights: The maximum number of QEs that can be inflight
+ *	(scheduled to a CQ but not completed) at any time. If
+ *	num_sequence_numbers is non-zero, num_qid_inflights must be set equal
+ *	to num_sequence_numbers.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Queue ID.
+ */
+struct dlb_create_ldb_queue_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 num_sequence_numbers;
+	__u32 num_qid_inflights;
+	__u32 num_atomic_inflights;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_QUEUE: Configure a directed queue.
+ * Input parameters:
+ * - port_id: Port ID. If the corresponding directed port is already created,
+ *	specify its ID here. Otherwise, this argument must be 0xFFFFFFFF to
+ *	indicate that the queue is being created before the port.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Queue ID.
+ */
+struct dlb_create_dir_queue_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__s32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_LDB_PORT: Configure a load-balanced port.
+ * Input parameters:
+ * - ldb_credit_pool_id: Load-balanced credit pool this port will belong to.
+ * - dir_credit_pool_id: Directed credit pool this port will belong to.
+ * - ldb_credit_high_watermark: Number of load-balanced credits from the pool
+ *	that this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_high_watermark: Number of directed credits from the pool that
+ *	this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - ldb_credit_low_watermark: Load-balanced credit low watermark. When the
+ *	port's credits reach this watermark, they become eligible to be
+ *	refilled by the DLB as credits until the high watermark
+ *	(num_ldb_credits) is reached.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_low_watermark: Directed credit low watermark. When the port's
+ *	credits reach this watermark, they become eligible to be refilled by
+ *	the DLB as credits until the high watermark (num_dir_credits) is
+ *	reached.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - ldb_credit_quantum: Number of load-balanced credits for the DLB to refill
+ *	per refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_quantum: Number of directed credits for the DLB to refill per
+ *	refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any directed queues,
+ *	this argument is ignored and the port is given no directed credits.
+ * - padding0: Reserved for future use.
+ * - cq_depth: Depth of the port's CQ. Must be a power of two between 8 and
+ *	1024, inclusive.
+ * - cq_depth_threshold: CQ depth interrupt threshold. A value of N means that
+ *	the CQ interrupt won't fire until there are N or more outstanding CQ
+ *	tokens.
+ * - cq_history_list_size: Number of history list entries. This must be greater
+ *	than or equal to cq_depth.
+ * - padding1: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: port ID.
+ */
+struct dlb_create_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 ldb_credit_pool_id;
+	__u32 dir_credit_pool_id;
+	__u16 ldb_credit_high_watermark;
+	__u16 ldb_credit_low_watermark;
+	__u16 ldb_credit_quantum;
+	__u16 dir_credit_high_watermark;
+	__u16 dir_credit_low_watermark;
+	__u16 dir_credit_quantum;
+	__u16 padding0;
+	__u16 cq_depth;
+	__u16 cq_depth_threshold;
+	__u16 cq_history_list_size;
+	__u32 padding1;
+};
+
+/*
+ * DLB_DOMAIN_CMD_CREATE_DIR_PORT: Configure a directed port.
+ * Input parameters:
+ * - ldb_credit_pool_id: Load-balanced credit pool this port will belong to.
+ * - dir_credit_pool_id: Directed credit pool this port will belong to.
+ * - ldb_credit_high_watermark: Number of load-balanced credits from the pool
+ *	that this port will own.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_high_watermark: Number of directed credits from the pool that
+ *	this port will own.
+ * - ldb_credit_low_watermark: Load-balanced credit low watermark. When the
+ *	port's credits reach this watermark, they become eligible to be
+ *	refilled by the DLB as credits until the high watermark
+ *	(num_ldb_credits) is reached.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_low_watermark: Directed credit low watermark. When the port's
+ *	credits reach this watermark, they become eligible to be refilled by
+ *	the DLB as credits until the high watermark (num_dir_credits) is
+ *	reached.
+ * - ldb_credit_quantum: Number of load-balanced credits for the DLB to refill
+ *	per refill operation.
+ *
+ *	If this port's scheduling domain doesn't have any load-balanced queues,
+ *	this argument is ignored and the port is given no load-balanced
+ *	credits.
+ * - dir_credit_quantum: Number of directed credits for the DLB to refill per
+ *	refill operation.
+ * - cq_depth: Depth of the port's CQ. Must be a power of two between 8 and
+ *	1024, inclusive.
+ * - cq_depth_threshold: CQ depth interrupt threshold. A value of N means that
+ *	the CQ interrupt won't fire until there are N or more outstanding CQ
+ *	tokens.
+ * - queue_id: Queue ID. If the corresponding directed queue is already
+ *	created, specify its ID here. Otherwise, this argument must be
+ *	0xFFFFFFFF to indicate that the port is being created before the queue.
+ * - padding1: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: Port ID.
+ */
+struct dlb_create_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 ldb_credit_pool_id;
+	__u32 dir_credit_pool_id;
+	__u16 ldb_credit_high_watermark;
+	__u16 ldb_credit_low_watermark;
+	__u16 ldb_credit_quantum;
+	__u16 dir_credit_high_watermark;
+	__u16 dir_credit_low_watermark;
+	__u16 dir_credit_quantum;
+	__u16 cq_depth;
+	__u16 cq_depth_threshold;
+	__s32 queue_id;
+	__u32 padding1;
+};
+
+/*
+ * DLB_DOMAIN_CMD_START_DOMAIN: Mark the end of the domain configuration. This
+ *	must be called before passing QEs into the device, and no configuration
+ *	ioctls can be issued once the domain has started. Sending QEs into the
+ *	device before calling this ioctl will result in undefined behavior.
+ * Input parameters:
+ * - (None)
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_start_domain_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+};
+
+/*
+ * DLB_DOMAIN_CMD_MAP_QID: Map a load-balanced queue to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - qid: Load-balanced queue ID.
+ * - priority: Queue->port service priority.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_map_qid_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 qid;
+	__u32 priority;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_UNMAP_QID: Unmap a load-balanced queue from a load-balanced
+ *	port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - qid: Load-balanced queue ID.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_unmap_qid_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 qid;
+};
+
+/*
+ * DLB_DOMAIN_CMD_ENABLE_LDB_PORT: Enable scheduling to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enable_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_ENABLE_DIR_PORT: Enable scheduling to a directed port.
+ * Input parameters:
+ * - port_id: Directed port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enable_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+};
+
+/*
+ * DLB_DOMAIN_CMD_DISABLE_LDB_PORT: Disable scheduling to a load-balanced port.
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_disable_ldb_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_DISABLE_DIR_PORT: Disable scheduling to a directed port.
+ * Input parameters:
+ * - port_id: Directed port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_disable_dir_port_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_BLOCK_ON_CQ_INTERRUPT: Block on a CQ interrupt until a QE
+ *	arrives for the specified port. If a QE is already present, the ioctl
+ *	will immediately return.
+ *
+ *	Note: Only one thread can block on a CQ's interrupt at a time. Doing
+ *	otherwise can result in hung threads.
+ *
+ * Input parameters:
+ * - port_id: Port ID.
+ * - is_ldb: True if the port is load-balanced, false otherwise.
+ * - arm: Tell the driver to arm the interrupt.
+ * - cq_gen: Current CQ generation bit.
+ * - padding0: Reserved for future use.
+ * - cq_va: VA of the CQ entry where the next QE will be placed.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_block_on_cq_interrupt_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u8 is_ldb;
+	__u8 arm;
+	__u8 cq_gen;
+	__u8 padding0;
+	__u64 cq_va;
+};
+
+/*
+ * DLB_DOMAIN_CMD_ENQUEUE_DOMAIN_ALERT: Enqueue a domain alert that will be
+ *	read by one reader thread.
+ *
+ * Input parameters:
+ * - aux_alert_data: user-defined auxiliary data.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ */
+struct dlb_enqueue_domain_alert_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u64 aux_alert_data;
+};
+
+/*
+ * DLB_DOMAIN_CMD_GET_LDB_QUEUE_DEPTH: Get a load-balanced queue's depth.
+ * Input parameters:
+ * - queue_id: The load-balanced queue ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: queue depth.
+ */
+struct dlb_get_ldb_queue_depth_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 queue_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_GET_DIR_QUEUE_DEPTH: Get a directed queue's depth.
+ * Input parameters:
+ * - queue_id: The directed queue ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: queue depth.
+ */
+struct dlb_get_dir_queue_depth_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 queue_id;
+	__u32 padding0;
+};
+
+/*
+ * DLB_DOMAIN_CMD_PENDING_PORT_UNMAPS: Get number of queue unmap operations in
+ *	progress for a load-balanced port.
+ *
+ *	Note: This is a snapshot; the number of unmap operations in progress
+ *	is subject to change at any time.
+ *
+ * Input parameters:
+ * - port_id: Load-balanced port ID.
+ * - padding0: Reserved for future use.
+ *
+ * Output parameters:
+ * - response: pointer to a struct dlb_cmd_response.
+ *	response.status: Detailed error code. In certain cases, such as if the
+ *		response pointer is invalid, the driver won't set status.
+ *	response.id: number of unmaps in progress.
+ */
+struct dlb_pending_port_unmaps_args {
+	/* Output parameters */
+	__u64 response;
+	/* Input parameters */
+	__u32 port_id;
+	__u32 padding0;
+};
+
+enum dlb_domain_user_interface_commands {
+	DLB_DOMAIN_CMD_CREATE_LDB_POOL,
+	DLB_DOMAIN_CMD_CREATE_DIR_POOL,
+	DLB_DOMAIN_CMD_CREATE_LDB_QUEUE,
+	DLB_DOMAIN_CMD_CREATE_DIR_QUEUE,
+	DLB_DOMAIN_CMD_CREATE_LDB_PORT,
+	DLB_DOMAIN_CMD_CREATE_DIR_PORT,
+	DLB_DOMAIN_CMD_START_DOMAIN,
+	DLB_DOMAIN_CMD_MAP_QID,
+	DLB_DOMAIN_CMD_UNMAP_QID,
+	DLB_DOMAIN_CMD_ENABLE_LDB_PORT,
+	DLB_DOMAIN_CMD_ENABLE_DIR_PORT,
+	DLB_DOMAIN_CMD_DISABLE_LDB_PORT,
+	DLB_DOMAIN_CMD_DISABLE_DIR_PORT,
+	DLB_DOMAIN_CMD_BLOCK_ON_CQ_INTERRUPT,
+	DLB_DOMAIN_CMD_ENQUEUE_DOMAIN_ALERT,
+	DLB_DOMAIN_CMD_GET_LDB_QUEUE_DEPTH,
+	DLB_DOMAIN_CMD_GET_DIR_QUEUE_DEPTH,
+	DLB_DOMAIN_CMD_PENDING_PORT_UNMAPS,
+
+	/* NUM_DLB_DOMAIN_CMD must be last */
+	NUM_DLB_DOMAIN_CMD,
+};
+
+/*
+ * Base addresses for memory mapping the consumer queue (CQ) and popcount (PC)
+ * memory space, and producer port (PP) MMIO space. The CQ, PC, and PP
+ * addresses are per-port. Every address is page-separated (e.g. LDB PP 0 is at
+ * 0x2100000 and LDB PP 1 is at 0x2101000).
+ */
+#define DLB_LDB_CQ_BASE 0x3000000
+#define DLB_LDB_CQ_MAX_SIZE 65536
+#define DLB_LDB_CQ_OFFS(id) (DLB_LDB_CQ_BASE + (id) * DLB_LDB_CQ_MAX_SIZE)
+
+#define DLB_DIR_CQ_BASE 0x3800000
+#define DLB_DIR_CQ_MAX_SIZE 65536
+#define DLB_DIR_CQ_OFFS(id) (DLB_DIR_CQ_BASE + (id) * DLB_DIR_CQ_MAX_SIZE)
+
+#define DLB_LDB_PC_BASE 0x2300000
+#define DLB_LDB_PC_MAX_SIZE 4096
+#define DLB_LDB_PC_OFFS(id) (DLB_LDB_PC_BASE + (id) * DLB_LDB_PC_MAX_SIZE)
+
+#define DLB_DIR_PC_BASE 0x2200000
+#define DLB_DIR_PC_MAX_SIZE 4096
+#define DLB_DIR_PC_OFFS(id) (DLB_DIR_PC_BASE + (id) * DLB_DIR_PC_MAX_SIZE)
+
+#define DLB_LDB_PP_BASE 0x2100000
+#define DLB_LDB_PP_MAX_SIZE 4096
+#define DLB_LDB_PP_OFFS(id) (DLB_LDB_PP_BASE + (id) * DLB_LDB_PP_MAX_SIZE)
+
+#define DLB_DIR_PP_BASE 0x2000000
+#define DLB_DIR_PP_MAX_SIZE 4096
+#define DLB_DIR_PP_OFFS(id) (DLB_DIR_PP_BASE + (id) * DLB_DIR_PP_MAX_SIZE)
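+
+/*
+ * Illustrative example (not in the original patch): mapping load-balanced
+ * producer port 3, assuming the driver exposes these offsets directly
+ * through mmap() on an open device file descriptor "fd":
+ *
+ *	void *pp_addr = mmap(NULL, DLB_LDB_PP_MAX_SIZE, PROT_WRITE,
+ *			     MAP_SHARED, fd, DLB_LDB_PP_OFFS(3));
+ */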
+
+#endif /* __DLB_USER_H */
-- 
2.13.6


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v2 09/10] doc: add note about blacklist/whitelist changes
  @ 2020-06-12  0:20  4% ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-06-12  0:20 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The blacklist/whitelist changes to the API will not be a breaking
change for applications in this release, but it is worth adding a note
to encourage migration.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/release_20_08.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5887..9e68544e7920 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -91,6 +91,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* eal: The definitions related to including and excluding devices
+  have been changed from blacklist/whitelist to blocklist/allowlist.
+  There are compatibility macros and a command line mapping to accept
+  the old values, but applications and scripts are strongly encouraged
+  to migrate to the new names.
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [EXTERNAL] Re: [PATCH v2 1/4] eal: disable function versioning on Windows
  @ 2020-06-11 10:09  4%       ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2020-06-11 10:09 UTC (permalink / raw)
  To: Thomas Monjalon, Omar Cardona, Neil Horman
  Cc: Fady Bader, dev, tbashar, talshn, yohadt, dmitry.kozliuk,
	Harini Ramakrishnan, pallavi.kadam, ranjit.menon, olivier.matz,
	arybchenko

So apologies for resurrecting an old thread - I did want to chime on this.

From a past life as a Windows Programmer, I would say that the shared-library model on Windows is not as strong as on Linux/Unix.
Libraries on Windows are typically packaged and distributed along with the applications, not usually at a system level as in Linux/Unix.

That said - I strongly agree with Omar - that does not mean that stable ABIs should not be a goal on Windows.
This does not diminish the value of enabling Windows applications to seamlessly upgrade their DPDK, at an application level.

So I don't have a problem with disabling function-level versioning as an interim measure, until we figure out the best mechanism.
What I would suggest is that we aim to get this sorted for the v22 ABI in the 21.11 release.
And that we clearly indicate in v21 in 20.11 that Windows is not yet covered in the ABI policy.

Make sense?

Ray K

On 02/06/2020 11:40, Thomas Monjalon wrote:
> 02/06/2020 12:27, Neil Horman:
>> On Mon, Jun 01, 2020 at 09:46:18PM +0000, Omar Cardona wrote:
>>>>> Do we know if we have future plans of supporting dlls on windows in the future?
>>> 	- Hi Neil, yes this is of interest to us (Windows).  
>>> 	- Specifically to aid in non-disruptive granular servicing/updating.
>>> 	- Our primary scenario Userspace VMSwitch is biased towards shared libraries for production servicing
>>>
>> Ok, do you have recommendations on how to provide backwards compatibility
>> between dpdk versions?  From what I read the most direct solution would be
>> per-application dll bundling (which seems to me to defeat the purpose of
>> creating a dll, but if its the only solution, perhaps thats all we have to work
>> with).  Is there a better solution?
>>
>> If not, then I would suggest that, instead of disabling shared libraries on
>> Windows, as we do below, we allow it, and redefine VERSION_SYMBOL[_EXPERIMENTAL]
>> to do nothing, and implement BIND_DEFAULT_SYMBOL to act like MAP_STATIC_SYMBOL
>> by aliasing the supplied symbol name to the provided export name.  I think msvc
>> supports aliasing, correct?
> We don't use msvc, but clang and MinGW.
>
>


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
  2020-06-10 13:33  0% ` Harman Kalra
@ 2020-06-10 15:16  0%   ` Slava Ovsiienko
  2020-06-17 15:57  0%     ` [dpdk-dev] [EXT] " Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2020-06-10 15:16 UTC (permalink / raw)
  To: Harman Kalra
  Cc: dev, Thomas Monjalon, Matan Azrad, Raslan Darawsheh, Ori Kam,
	olivier.matz, Shahaf Shuler

Hi, Harman

> -----Original Message-----
> From: Harman Kalra <hkalra@marvell.com>
> Sent: Wednesday, June 10, 2020 16:34
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Matan
> Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Ori Kam <orika@mellanox.com>;
> olivier.matz@6wind.com; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
> 
> On Wed, Jun 10, 2020 at 06:38:05AM +0000, Viacheslav Ovsiienko wrote:
> 
> Hi Viacheslav,
> 
>    I have some queries below:
> 
> > There is the requirement on some networks for precise traffic timing
> > management. The ability to send (and, generally speaking, receive) the
> > packets at the very precisely specified moment of time provides the
> > opportunity to support the connections with Time Division Multiplexing
> > using the contemporary general purpose NIC without involving an
> > auxiliary hardware. For example, the supporting of O-RAN Fronthaul
> > interface is one of the promising features for potentially usage of
> > the precise time management for the egress packets.
> >
> > The main objective of this RFC is to specify the way how applications
> > can provide the moment of time at what the packet transmission must be
> > started and to describe in preliminary the supporting this feature
> > from
> > mlx5 PMD side.
> >
> > The new dynamic timestamp field is proposed, it provides some timing
> > information, the units and time references (initial phase) are not
> > explicitly defined but are maintained always the same for a given port.
> > Some devices allow to query rte_eth_read_clock() that will return the
> > current device timestamp. The dynamic timestamp flag tells whether the
> > field contains actual timestamp value. For the packets being sent this
> > value can be used by PMD to schedule packet sending.
> >
> > After PKT_RX_TIMESTAMP flag and fixed timestamp field deprecation and
> > obsoleting, these dynamic flag and field will be used to manage the
> > timestamps on receiving datapath as well.
> >
> > When PMD sees the "rte_dynfield_timestamp" set on the packet being
> > sent it tries to synchronize the time of packet appearing on the wire
> > with the specified packet timestamp. If the specified one is in the
> > past it should be ignored, if one is in the distant future it should
> > be capped with some reasonable value (in range of seconds). These
> > specific cases ("too late" and "distant future") can be optionally
> > reported via device xstats to assist applications to detect the
> > time-related problems.
> >
> > No packet reordering according to the timestamps is supposed,
> > neither within a packet burst nor between packets; it is entirely the
> > application's responsibility to generate packets and their timestamps
> > in the desired order. The timestamps can be put only in the first
> > packet in the burst, providing scheduling for the entire burst.
> 
> Since it is the application's responsibility to take care of packet
> reordering and many other parameters, why can't the application itself take
> the responsibility of packet scheduling, i.e. the application can wait for
> the required time before calling tx_burst? Why are we even offloading this
> job to the PMD?
> 
- The scheduling is required to be very precise - within hundred(s) of nanoseconds.
- It saves CPU cycles. The application just has to prepare the packets, put in the
  desired timestamps and call tx_burst(). A "fire-and-forget" approach.

A SW approach is potentially possible: the application can hold the time and schedule
packets itself. But... Can we guarantee a stable delay between the tx_burst() call and
data on the wire? Should we waste CPU cycles waiting for the desired moment of time?
Can we guarantee stable interrupt latency if we choose an interrupt-based scheduling
approach?

This RFC splits the responsibility: the application prepares the data and specifies
when it desires to send; the rest is on the PMD.
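
For illustration, a minimal sketch of the application-side flow (a sketch
under this RFC's assumptions, not a final API: the dynfield/dynflag
lookups, RTE_MBUF_DYNFIELD() and rte_eth_read_clock() are existing DPDK
calls, while the field/flag/offload names come from this proposal; error
handling is mostly omitted):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int ts_off;       /* byte offset of the dynamic timestamp field */
static uint64_t ts_flag; /* ol_flags bit: "timestamp is valid" */

static int
tx_tstamp_init(uint16_t port)
{
	struct rte_eth_dev_info di;
	int bit;

	if (rte_eth_dev_info_get(port, &di) != 0)
		return -1;
	if (!(di.tx_offload_capa & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP))
		return -1; /* Tx scheduling not supported by the PMD */
	/* The lookups succeed once the field/flag are registered. */
	ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TIMESTAMP_NAME, NULL);
	if (ts_off < 0 || bit < 0)
		return -1;
	ts_flag = 1ULL << bit;
	return 0;
}

/* Request wire time "now + delay" (in device clock units) for a packet. */
static void
tx_tstamp_set(uint16_t port, struct rte_mbuf *m, uint64_t delay)
{
	uint64_t now;

	rte_eth_read_clock(port, &now);
	*RTE_MBUF_DYNFIELD(m, ts_off, uint64_t *) = now + delay;
	m->ol_flags |= ts_flag;
}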
 
> >
> > The PMD reports the ability to synchronize packet sending on a
> > timestamp with a new offload flag:
> >
> > This is a palliative and is going to be replaced with a new eth_dev
> > API for reporting/managing the supported dynamic flags and their
> > related features. That API would break ABI compatibility and can't be
> > introduced at the moment, so it is postponed to 20.11.
> >
> > For testing purposes it is proposed to update the testpmd "txonly"
> > forwarding mode routine. With this update, the testpmd application
> > generates the packets and sets the dynamic timestamps according to the
> > specified time pattern if it sees that "rte_dynfield_timestamp" is
> > registered.
> 
> So what I am understanding here is that "rte_dynfield_timestamp" will
> provide information about three parameters:
> - timestamp at which TX should start
> - intra packet gap
> - intra burst gap.
> 
> If it's about the "intra packet gap" then the PMD can take care of it,
> but if it is about the intra-burst gap, the application can take care
> of it.

Not sure - the intra-burst gap might be pretty small.
The intra-burst gap is supposed to be handled in the same way - by specifying
the timestamps. Waiting is supposed to be implemented on tx_burst() retry:
prepare the packets with timestamps and call tx_burst(); if not all packets
are sent, it means the queue is waiting for the schedule, so retry with the
remaining packets.
As an option, we could implement the intra-burst wait based on rte_eth_read_clock().
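
In pseudo-C, that retry pattern could look like this (a sketch, assuming
the PMD accepts only the packets whose scheduled time has come and leaves
the rest in the application's hands):

static void
tx_burst_scheduled(uint16_t port, uint16_t queue,
		   struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t sent = 0;

	/* The queue holds back packets scheduled for later, so keep
	 * retrying with the not-yet-accepted tail of the burst. */
	while (sent < n)
		sent += rte_eth_tx_burst(port, queue,
					 &pkts[sent], n - sent);
}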

> > The new testpmd command is proposed to configure sending pattern:
> >
> > set tx_times <intra_gap>,<burst_gap>
> >
> > <intra_gap> - the delay between the packets within the burst
> >               specified in the device clock units. The number
> >               of packets in the burst is defined by txburst parameter
> >
> > <burst_gap> - the delay between the bursts in the device clock units
> >
> > As a result, the bursts of packets will be transmitted with specific
> > delays between the packets within the burst and a specific delay between
> > the bursts. The rte_eth_get_clock is supposed to be engaged to get the
> 
> I think here you mean "rte_eth_read_clock".
Yes, exactly. Thank you for the correction.

With best regards, Slava

> 
> 
> Thanks
> Harman
> 
> > current device clock value and provide the reference for the timestamps.
> >
> > Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> > ---
> >  lib/librte_ethdev/rte_ethdev.h |  4 ++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 16 ++++++++++++++++
> >  2 files changed, 20 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_ethdev.h
> > b/lib/librte_ethdev/rte_ethdev.h index a49242b..6f6454c 100644
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -1178,6 +1178,10 @@ struct rte_eth_conf {
> >  /** Device supports outer UDP checksum */  #define
> > DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
> >
> > +/** Device supports send on timestamp */ #define
> > +DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
> > +
> > +
> >  #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
> /**<
> > Device supports Rx queue setup after device started*/  #define
> > RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002 diff --git
> > a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> > index 96c3631..fb5477c 100644
> > --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > @@ -250,4 +250,20 @@ int rte_mbuf_dynflag_lookup(const char *name,
> > #define RTE_MBUF_DYNFIELD_METADATA_NAME
> "rte_flow_dynfield_metadata"
> >  #define RTE_MBUF_DYNFLAG_METADATA_NAME
> "rte_flow_dynflag_metadata"
> >
> > +/*
> > + * The timestamp dynamic field provides some timing information, the
> > + * units and time references (initial phase) are not explicitly
> > +defined
> > + * but are maintained always the same for a given port. Some devices
> > +allow
> > + * to query rte_eth_read_clock() that will return the current device
> > + * timestamp. The dynamic timestamp flag tells whether the field
> > +contains
> > + * actual timestamp value. For the packets being sent this value can
> > +be
> > + * used by PMD to schedule packet sending.
> > + *
> > + * After PKT_RX_TIMESTAMP flag and fixed timestamp field deprecation
> > + * and obsoleting, these dynamic flag and field will be used to
> > +manage
> > + * the timestamps on receiving datapath as well.
> > + */
> > +#define RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> "rte_dynfield_timestamp"
> > +#define RTE_MBUF_DYNFLAG_TIMESTAMP_NAME
> "rte_dynflag_timestamp"
> > +
> >  #endif
> > --
> > 1.8.3.1
> >

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 2/7] eal: fix multiple definition of per lcore thread id
  @ 2020-06-10 14:45  3% ` David Marchand
  2020-06-15  6:46  0%   ` Kinsella, Ray
      2 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-06-10 14:45 UTC (permalink / raw)
  To: dev
  Cc: Ray Kinsella, Neil Horman, Cunming Liang, Konstantin Ananyev,
	Olivier Matz

Because of the inline accessor + static declaration in rte_gettid(),
we end up with multiple symbols for RTE_PER_LCORE(_thread_id).
Each compilation unit will pay a cost when accessing this information
for the first time.

$ nm build/app/dpdk-testpmd | grep per_lcore__thread_id
0000000000000054 d per_lcore__thread_id.5037
0000000000000040 d per_lcore__thread_id.5103
0000000000000048 d per_lcore__thread_id.5259
000000000000004c d per_lcore__thread_id.5259
0000000000000044 d per_lcore__thread_id.5933
0000000000000058 d per_lcore__thread_id.6261
0000000000000050 d per_lcore__thread_id.7378
000000000000005c d per_lcore__thread_id.7496
000000000000000c d per_lcore__thread_id.8016
0000000000000010 d per_lcore__thread_id.8431
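
This is the classic pitfall of a static variable inside a static inline
function defined in a header: every compilation unit that uses the
function instantiates its own copy. Reduced illustration (not part of
the patch; compute_id() is a hypothetical stand-in for rte_sys_gettid()):

/* header.h */
static inline int
get_cached_id(void)
{
	/* "static" here is per compilation unit, not per program */
	static __thread int id = -1;

	if (id == -1)
		id = compute_id();
	return id;
}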

Make it global as part of the DPDK_21 stable ABI.

Fixes: ef76436c6834 ("eal: get unique thread id")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/librte_eal/common/eal_common_thread.c | 1 +
 lib/librte_eal/include/rte_eal.h          | 3 ++-
 lib/librte_eal/rte_eal_version.map        | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
index 25200e5a99..f04d880880 100644
--- a/lib/librte_eal/common/eal_common_thread.c
+++ b/lib/librte_eal/common/eal_common_thread.c
@@ -24,6 +24,7 @@
 #include "eal_thread.h"
 
 RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
+RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 static RTE_DEFINE_PER_LCORE(unsigned int, _socket_id) =
 	(unsigned int)SOCKET_ID_ANY;
 static RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
diff --git a/lib/librte_eal/include/rte_eal.h b/lib/librte_eal/include/rte_eal.h
index 2f9ed298de..2edf8c6556 100644
--- a/lib/librte_eal/include/rte_eal.h
+++ b/lib/librte_eal/include/rte_eal.h
@@ -447,6 +447,8 @@ enum rte_intr_mode rte_eal_vfio_intr_mode(void);
  */
 int rte_sys_gettid(void);
 
+RTE_DECLARE_PER_LCORE(int, _thread_id);
+
 /**
  * Get system unique thread id.
  *
@@ -456,7 +458,6 @@ int rte_sys_gettid(void);
  */
 static inline int rte_gettid(void)
 {
-	static RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
 	if (RTE_PER_LCORE(_thread_id) == -1)
 		RTE_PER_LCORE(_thread_id) = rte_sys_gettid();
 	return RTE_PER_LCORE(_thread_id);
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index d8038749a4..fdfc3f1a88 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -221,6 +221,13 @@ DPDK_20.0 {
 	local: *;
 };
 
+DPDK_21 {
+	global:
+
+	per_lcore__thread_id;
+
+} DPDK_20.0;
+
 EXPERIMENTAL {
 	global:
 
-- 
2.23.0


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8 01/11] eal: replace rte_page_sizes with a set of constants
  @ 2020-06-10 14:27  9%         ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2020-06-10 14:27 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Dmitry Kozlyuk, Jerin Jacob, John McNamara,
	Marko Kovacevic, Anatoly Burakov

Clang on Windows follows MS ABI where enum values are limited to 2^31-1.
Enum rte_page_sizes has members valued above this limit, which get
wrapped to zero, resulting in compilation error (duplicate values in
enum). Using MS ABI is mandatory for Windows EAL to call Win32 APIs.
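
For illustration (not from the patch), the failure mode reduces to:

enum sizes {
	SZ_1G  = 1ULL << 30, /* fits in a 32-bit int: OK */
	SZ_4G  = 1ULL << 32, /* exceeds 2^31-1: wraps to 0 */
	SZ_16G = 1ULL << 34, /* wraps to 0 again -> duplicate values */
};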

Remove rte_page_sizes and replace its values with #define's.
This enumeration is not used in public API, so there's no ABI breakage.
Announce API changes for 20.08 in documentation.

Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 doc/guides/rel_notes/release_20_08.rst |  2 ++
 lib/librte_eal/include/rte_memory.h    | 23 ++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe..2041a29b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -85,6 +85,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* ``rte_page_sizes`` enumeration is replaced with ``RTE_PGSIZE_xxx`` defines.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/include/rte_memory.h b/lib/librte_eal/include/rte_memory.h
index 3d8d0bd69..65374d53a 100644
--- a/lib/librte_eal/include/rte_memory.h
+++ b/lib/librte_eal/include/rte_memory.h
@@ -24,19 +24,16 @@ extern "C" {
 #include <rte_config.h>
 #include <rte_fbarray.h>
 
-__extension__
-enum rte_page_sizes {
-	RTE_PGSIZE_4K    = 1ULL << 12,
-	RTE_PGSIZE_64K   = 1ULL << 16,
-	RTE_PGSIZE_256K  = 1ULL << 18,
-	RTE_PGSIZE_2M    = 1ULL << 21,
-	RTE_PGSIZE_16M   = 1ULL << 24,
-	RTE_PGSIZE_256M  = 1ULL << 28,
-	RTE_PGSIZE_512M  = 1ULL << 29,
-	RTE_PGSIZE_1G    = 1ULL << 30,
-	RTE_PGSIZE_4G    = 1ULL << 32,
-	RTE_PGSIZE_16G   = 1ULL << 34,
-};
+#define RTE_PGSIZE_4K   (1ULL << 12)
+#define RTE_PGSIZE_64K  (1ULL << 16)
+#define RTE_PGSIZE_256K (1ULL << 18)
+#define RTE_PGSIZE_2M   (1ULL << 21)
+#define RTE_PGSIZE_16M  (1ULL << 24)
+#define RTE_PGSIZE_256M (1ULL << 28)
+#define RTE_PGSIZE_512M (1ULL << 29)
+#define RTE_PGSIZE_1G   (1ULL << 30)
+#define RTE_PGSIZE_4G   (1ULL << 32)
+#define RTE_PGSIZE_16G  (1ULL << 34)
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
 
-- 
2.25.4


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
  2020-06-10  6:38  2% [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling Viacheslav Ovsiienko
@ 2020-06-10 13:33  0% ` Harman Kalra
  2020-06-10 15:16  0%   ` Slava Ovsiienko
  0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2020-06-10 13:33 UTC (permalink / raw)
  To: Viacheslav Ovsiienko
  Cc: dev, thomas, matan, rasland, orika, olivier.matz, shahafs

On Wed, Jun 10, 2020 at 06:38:05AM +0000, Viacheslav Ovsiienko wrote:

Hi Viacheslav,

   I have some queries below:
   	
> Some networks require precise traffic timing management. The ability
> to send (and, generally speaking, receive) packets at a very precisely
> specified moment of time makes it possible to support connections with
> Time Division Multiplexing using a contemporary general-purpose NIC
> without involving auxiliary hardware. For example, support for the
> O-RAN Fronthaul interface is one of the promising applications of
> precise time management for egress packets.
> 
> The main objective of this RFC is to specify the way applications can
> provide the moment of time at which packet transmission must be
> started, and to give a preliminary description of the support for this
> feature on the mlx5 PMD side.
> 
> A new dynamic timestamp field is proposed. It provides some timing
> information; the units and time reference (initial phase) are not
> explicitly defined but are always kept the same for a given port.
> Some devices allow querying rte_eth_read_clock(), which returns the
> current device timestamp. The dynamic timestamp flag tells whether the
> field contains an actual timestamp value. For the packets being sent,
> this value can be used by the PMD to schedule packet sending.
> 
> Once the PKT_RX_TIMESTAMP flag and the fixed timestamp field are
> deprecated and obsoleted, this dynamic flag and field will be used to
> manage the timestamps on the receiving datapath as well.
> 
> When the PMD sees the "rte_dynfield_timestamp" set on a packet being
> sent, it tries to synchronize the time of the packet appearing on the
> wire with the specified packet timestamp. If the specified one is in
> the past it should be ignored; if it is in the distant future it
> should be capped with some reasonable value (in the range of seconds).
> These specific cases ("too late" and "distant future") can be
> optionally reported via device xstats to assist applications in
> detecting the time-related problems.
> 
> No packet reordering according to the timestamps is supposed, neither
> within a packet burst nor between packets; it is entirely the
> application's responsibility to generate packets and their timestamps
> in the desired order. The timestamp can be put only in the first
> packet of the burst, providing scheduling for the entire burst.

Since it is the application's responsibility to take care of packet
reordering and many other parameters, why can't the application itself
take the responsibility of packet scheduling, i.e. the application can
hold off for the required time before calling tx_burst? Why are we even
offloading this job to the PMD?

> 
> The PMD reports the ability to synchronize packet sending on a
> timestamp with a new offload flag:
> 
> This is a palliative and is going to be replaced with a new eth_dev
> API for reporting/managing the supported dynamic flags and their
> related features. That API would break ABI compatibility and can't be
> introduced at the moment, so it is postponed to 20.11.
> 
> For testing purposes it is proposed to update the testpmd "txonly"
> forwarding mode routine. With this update, the testpmd application
> generates the packets and sets the dynamic timestamps according to the
> specified time pattern if it sees that "rte_dynfield_timestamp" is
> registered.

So what I am understanding here is that "rte_dynfield_timestamp" will
provide information about three parameters:
- timestamp at which TX should start
- intra packet gap
- intra burst gap.

If it's about the "intra packet gap" then the PMD can take care of it,
but if it is about the intra-burst gap, the application can take care
of it.

> 
> The new testpmd command is proposed to configure sending pattern:
> 
> set tx_times <intra_gap>,<burst_gap>
> 
> <intra_gap> - the delay between the packets within the burst
>               specified in the device clock units. The number
>               of packets in the burst is defined by txburst parameter
> 
> <burst_gap> - the delay between the bursts in the device clock units
> 
> As a result, the bursts of packets will be transmitted with specific
> delays between the packets within the burst and a specific delay between
> the bursts. The rte_eth_get_clock is supposed to be engaged to get the

I think here you mean "rte_eth_read_clock".


Thanks
Harman

> current device clock value and provide the reference for the timestamps.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  lib/librte_ethdev/rte_ethdev.h |  4 ++++
>  lib/librte_mbuf/rte_mbuf_dyn.h | 16 ++++++++++++++++
>  2 files changed, 20 insertions(+)
> 
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index a49242b..6f6454c 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -1178,6 +1178,10 @@ struct rte_eth_conf {
>  /** Device supports outer UDP checksum */
>  #define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
>  
> +/** Device supports send on timestamp */
> +#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
> +
> +
>  #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
>  /**< Device supports Rx queue setup after device started*/
>  #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> index 96c3631..fb5477c 100644
> --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> @@ -250,4 +250,20 @@ int rte_mbuf_dynflag_lookup(const char *name,
>  #define RTE_MBUF_DYNFIELD_METADATA_NAME "rte_flow_dynfield_metadata"
>  #define RTE_MBUF_DYNFLAG_METADATA_NAME "rte_flow_dynflag_metadata"
>  
> +/*
> + * The timestamp dynamic field provides some timing information, the
> + * units and time references (initial phase) are not explicitly defined
> + * but are maintained always the same for a given port. Some devices allow
> + * to query rte_eth_read_clock() that will return the current device
> + * timestamp. The dynamic timestamp flag tells whether the field contains
> + * actual timestamp value. For the packets being sent this value can be
> + * used by PMD to schedule packet sending.
> + *
> + * After PKT_RX_TIMESTAMP flag and fixed timestamp field deprecation
> + * and obsoleting, these dynamic flag and field will be used to manage
> + * the timestamps on receiving datapath as well.
> + */
> +#define RTE_MBUF_DYNFIELD_TIMESTAMP_NAME "rte_dynfield_timestamp"
> +#define RTE_MBUF_DYNFLAG_TIMESTAMP_NAME "rte_dynflag_timestamp"
> +
>  #endif
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] 19.11.3 patches review and test
  2020-06-10  7:19  0% ` Yu, PingX
@ 2020-06-10  8:35  0%   ` Luca Boccassi
  0 siblings, 0 replies; 200+ results
From: Luca Boccassi @ 2020-06-10  8:35 UTC (permalink / raw)
  To: Yu, PingX, stable
  Cc: dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani, Walker,
	Benjamin, David Christensen, Hemant Agrawal, Stokes, Ian,
	Jerin Jacob, Mcnamara, John, Ju-Hyoung Lee, Kevin Traynor,
	Pei Zhang, Xu, Qian Q, Raslan Darawsheh, Thomas Monjalon, Peng,
	Yuan, Chen, Zhaoyan

On Wed, 2020-06-10 at 07:19 +0000, Yu, PingX wrote:
> Hi Luca,
> Update on the LTS 19.11.3 test results for the Intel part. All passed and no new issues were found.
> 
> * Intel(R) Testing
> 
> # Basic Intel(R) NIC testing
>  * PF(i40e):Passed
>  * PF(ixgbe):Passed 
>  * PF(ice):Passed
>  * VF(i40e):Passed
>  * VF(ixgbe):Passed
>  * VF(ice):Passed
>  * Build or compile: Passed
>  * Intel NIC single core/NIC performance: Passed
>  
> #Basic cryptodev and virtio testing
>  * vhost/virtio basic loopback, PVP and performance test: Passed. 
>     known issue: https://bugzilla.kernel.org/show_bug.cgi?id=207075
>  * cryptodev Function: Passed. 
>  * cryptodev Performance: Passed. 
>     known unstable issue with test case 1c1t 3CPM; it does not affect the LTS release.
> 
> Thanks.
> Regards,
> Yu Ping

Great, thank you!

> > -----Original Message-----
> > From: luca.boccassi@gmail.com <luca.boccassi@gmail.com>
> > Sent: Thursday, June 4, 2020 3:44 AM
> > To: stable@dpdk.org
> > Cc: dev@dpdk.org; Abhishek Marathe
> > <Abhishek.Marathe@microsoft.com>; Akhil Goyal <akhil.goyal@nxp.com>;
> > Ali Alnubani <alialnu@mellanox.com>; Walker, Benjamin
> > <benjamin.walker@intel.com>; David Christensen
> > <drc@linux.vnet.ibm.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> > Stokes, Ian <ian.stokes@intel.com>; Jerin Jacob <jerinj@marvell.com>;
> > Mcnamara, John <john.mcnamara@intel.com>; Ju-Hyoung Lee
> > <juhlee@microsoft.com>; Kevin Traynor <ktraynor@redhat.com>; Pei Zhang
> > <pezhang@redhat.com>; Yu, PingX <pingx.yu@intel.com>; Xu, Qian Q
> > <qian.q.xu@intel.com>; Raslan Darawsheh <rasland@mellanox.com>;
> > Thomas Monjalon <thomas@monjalon.net>; Peng, Yuan
> > <yuan.peng@intel.com>; Chen, Zhaoyan <zhaoyan.chen@intel.com>
> > Subject: 19.11.3 patches review and test
> > 
> > Hi all,
> > 
> > Here is a list of patches targeted for stable release 19.11.3.
> > 
> > The planned date for the final release is the 17th of June.
> > 
> > Please help with testing and validation of your use cases and report any
> > issues/results with reply-all to this mail. For the final release the fixes and
> > reported validations will be added to the release notes.
> > 
> > A release candidate tarball can be found at:
> > 
> >     https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1
> > 
> > These patches are located at branch 19.11 of dpdk-stable repo:
> >     https://dpdk.org/browse/dpdk-stable/
> > 
> > Thanks.
> > 
> > Luca Boccassi
> > 
> > ---
> > [patch list snipped; it appears verbatim in the original message quoted below]

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] 19.11.3 patches review and test
  2020-06-03 19:43  3% [dpdk-dev] 19.11.3 patches review and test luca.boccassi
@ 2020-06-10  7:19  0% ` Yu, PingX
  2020-06-10  8:35  0%   ` Luca Boccassi
  2020-06-15  2:05  3% ` Pei Zhang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 200+ results
From: Yu, PingX @ 2020-06-10  7:19 UTC (permalink / raw)
  To: luca.boccassi, stable
  Cc: dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani, Walker,
	Benjamin, David Christensen, Hemant Agrawal, Stokes, Ian,
	Jerin Jacob, Mcnamara, John, Ju-Hyoung Lee, Kevin Traynor,
	Pei Zhang, Xu, Qian Q, Raslan Darawsheh, Thomas Monjalon, Peng,
	Yuan, Chen, Zhaoyan

Hi Luca,
Update on the LTS 19.11.3 test results for the Intel part. All passed and no new issues were found.

* Intel(R) Testing

# Basic Intel(R) NIC testing
 * PF(i40e):Passed
 * PF(ixgbe):Passed 
 * PF(ice):Passed
 * VF(i40e):Passed
 * VF(ixgbe):Passed
 * VF(ice):Passed
 * Build or compile: Passed
 * Intel NIC single core/NIC performance: Passed
 
#Basic cryptodev and virtio testing
 * vhost/virtio basic loopback, PVP and performance test: Passed. 
    known issue: https://bugzilla.kernel.org/show_bug.cgi?id=207075
 * cryptodev Function: Passed. 
 * cryptodev Performance: Passed. 
    known unstable issue with test case 1c1t 3CPM; it does not affect the LTS release.

Thanks.
Regards,
Yu Ping

> -----Original Message-----
> From: luca.boccassi@gmail.com <luca.boccassi@gmail.com>
> Sent: Thursday, June 4, 2020 3:44 AM
> To: stable@dpdk.org
> Cc: dev@dpdk.org; Abhishek Marathe
> <Abhishek.Marathe@microsoft.com>; Akhil Goyal <akhil.goyal@nxp.com>;
> Ali Alnubani <alialnu@mellanox.com>; Walker, Benjamin
> <benjamin.walker@intel.com>; David Christensen
> <drc@linux.vnet.ibm.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Stokes, Ian <ian.stokes@intel.com>; Jerin Jacob <jerinj@marvell.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Ju-Hyoung Lee
> <juhlee@microsoft.com>; Kevin Traynor <ktraynor@redhat.com>; Pei Zhang
> <pezhang@redhat.com>; Yu, PingX <pingx.yu@intel.com>; Xu, Qian Q
> <qian.q.xu@intel.com>; Raslan Darawsheh <rasland@mellanox.com>;
> Thomas Monjalon <thomas@monjalon.net>; Peng, Yuan
> <yuan.peng@intel.com>; Chen, Zhaoyan <zhaoyan.chen@intel.com>
> Subject: 19.11.3 patches review and test
> 
> Hi all,
> 
> Here is a list of patches targeted for stable release 19.11.3.
> 
> The planned date for the final release is the 17th of June.
> 
> Please help with testing and validation of your use cases and report any
> issues/results with reply-all to this mail. For the final release the fixes and
> reported validations will be added to the release notes.
> 
> A release candidate tarball can be found at:
> 
>     https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1
> 
> These patches are located at branch 19.11 of dpdk-stable repo:
>     https://dpdk.org/browse/dpdk-stable/
> 
> Thanks.
> 
> Luca Boccassi
> 
> ---
> Adam Dybkowski (5):
>       cryptodev: fix missing device id range checking
>       common/qat: fix GEN3 marketing name
>       app/crypto-perf: fix display of sample test vector
>       crypto/qat: support plain SHA1..SHA512 hashes
>       cryptodev: fix SHA-1 digest enum comment
> 
> Ajit Khaparde (3):
>       net/bnxt: fix FW version query
>       net/bnxt: fix error log for command timeout
>       net/bnxt: fix using RSS config struct
> 
> Akhil Goyal (1):
>       ipsec: fix build dependency on hash lib
> 
> Alex Kiselev (1):
>       lpm6: fix size of tbl8 group
> 
> Alex Marginean (1):
>       net/enetc: fix Rx lock-up
> 
> Alexander Kozyrev (8):
>       net/mlx5: reduce Tx completion index memory loads
>       net/mlx5: add device parameter for MPRQ stride size
>       net/mlx5: enable MPRQ multi-stride operations
>       net/mlx5: add multi-segment packets in MPRQ mode
>       net/mlx5: set dynamic flow metadata in Rx queues
>       net/mlx5: improve logging of MPRQ selection
>       net/mlx5: fix assert in dynamic metadata handling
>       net/mlx5: fix Tx queue release debug log timing
> 
> Alvin Zhang (2):
>       net/iavf: fix link speed
>       net/e1000: fix port hotplug for multi-process
> 
> Amit Gupta (1):
>       net/octeontx: fix meson build for disabled drivers
> 
> Anatoly Burakov (1):
>       mem: preallocate VA space in no-huge mode
> 
> Andrew Rybchenko (4):
>       net/sfc: fix reported promiscuous/multicast mode
>       net/sfc/base: use simpler EF10 family conditional check
>       net/sfc/base: use simpler EF10 family run-time checks
>       net/sfc/base: fix build when EVB is enabled
> 
> Andy Pei (1):
>       net/ipn3ke: use control thread to check link status
> 
> Ankur Dwivedi (1):
>       net/octeontx2: fix buffer size assignment
> 
> Apeksha Gupta (2):
>       bus/fslmc: fix dereferencing null pointer
>       test/crypto: fix statistics case
> 
> Archana Muniganti (1):
>       examples/fips_validation: fix parsing of algorithms
> 
> Arek Kusztal (1):
>       crypto/qat: fix cipher descriptor for ZUC and SNOW
> 
> Asaf Penso (2):
>       net/mlx5: fix call to modify action without init item
>       net/mlx5: fix assert in doorbell lookup
> 
> Ashish Gupta (1):
>       net/octeontx2: fix link information for loopback port
> 
> Asim Jamshed (1):
>       fib: fix headers for C++ support
> 
> Bernard Iremonger (1):
>       net/i40e: fix flow director initialisation
> 
> Bing Zhao (6):
>       net/mlx5: fix header modify action validation
>       net/mlx5: fix actions validation on root table
>       net/mlx5: fix assert in modify converting
>       mk: fix static linkage of mlx dependency
>       mem: fix overflow on allocation
>       net/mlx5: fix doorbell bitmap management offsets
> 
> Bruce Richardson (3):
>       pci: remove unneeded includes in public header file
>       pci: fix build on FreeBSD
>       drivers: fix log type variables for -fno-common
> 
> Cheng Peng (1):
>       net/iavf: fix stats query error code
> 
> Chengchang Tang (3):
>       net/hns3: fix promiscuous mode for PF
>       net/hns3: fix default VLAN filter configuration for PF
>       net/hns3: fix VLAN filter when setting promisucous mode
> 
> Chengwen Feng (7):
>       net/hns3: fix packets offload features flags in Rx
>       net/hns3: fix default error code of command interface
>       net/hns3: fix crash when flushing RSS flow rules with FLR
>       net/hns3: fix return value of setting VLAN offload
>       net/hns3: clear residual flow rules on init
>       net/hns3: fix Rx interrupt after reset
>       net/hns3: replace memory barrier with data dependency order
> 
> Ciara Power (1):
>       telemetry: fix port stats retrieval
> 
> Darek Stojaczyk (1):
>       pci: accept 32-bit domain numbers
> 
> David Christensen (2):
>       pci: fix build on ppc
>       eal/ppc: fix build with gcc 9.3
> 
> David Marchand (5):
>       mem: mark pages as not accessed when reserving VA
>       test: load drivers when required
>       eal: fix typo in endian conversion macros
>       remove references to private PCI probe function
>       doc: prefer https when pointing to dpdk.org
> 
> Dekel Peled (7):
>       net/mlx5: fix mask used for IPv6 item validation
>       net/mlx5: fix CVLAN tag set in IP item translation
>       net/mlx5: update VLAN and encap actions validation
>       net/mlx5: fix match on empty VLAN item in DV mode
>       common/mlx5: fix umem buffer alignment
>       net/mlx5: fix VLAN flow action with wildcard VLAN item
>       net/mlx5: fix RSS key copy to TIR context
> 
> Dmitry Kozlyuk (2):
>       build: fix linker warnings with clang on Windows
>       build: support MinGW-w64 with Meson
> 
> Eduard Serra (1):
>       net/vmxnet3: fix RSS setting on v4
> 
> Eugeny Parshutin (1):
>       ethdev: fix build when vtune profiling is on
> 
> Fady Bader (1):
>       mempool: remove inline functions from export list
> 
> Fan Zhang (1):
>       vhost/crypto: add missing user protocol flag
> 
> Ferruh Yigit (7):
>       net/nfp: fix log format specifiers
>       net/null: fix secondary burst function selection
>       net/null: remove redundant check
>       mempool/octeontx2: fix build for gcc O1 optimization
>       net/ena: fix build for O1 optimization
>       event/octeontx2: fix build for O1 optimization
>       examples/kni: fix crash during MTU set
> 
> Gaetan Rivet (5):
>       doc: fix number of failsafe sub-devices
>       net/ring: fix device pointer on allocation
>       pci: reject negative values in PCI id
>       doc: fix typos in ABI policy
>       kvargs: fix strcmp helper documentation
> 
> Gavin Hu (2):
>       net/i40e: relax barrier in Tx
>       net/i40e: relax barrier in Tx for NEON
> 
> Guinan Sun (2):
>       net/ixgbe: fix statistics in flow control mode
>       net/ixgbe: check driver type in MACsec API
> 
> Haifeng Lin (1):
>       eal/arm64: fix precise TSC
> 
> Haiyue Wang (1):
>       net/ice/base: check memory pointer before copying
> 
> Hao Chen (1):
>       net/hns3: support Rx interrupt
> 
> Harry van Haaren (3):
>       service: fix crash on exit
>       examples/eventdev: fix crash on exit
>       test/flow_classify: enable multi-sockets system
> 
> Hemant Agrawal (3):
>       drivers: add crypto as dependency for event drivers
>       bus/fslmc: fix size of qman fq descriptor
>       mempool/dpaa2: install missing header with meson
> 
> Honnappa Nagarahalli (3):
>       timer: protect initialization with lock
>       service: fix race condition for MT unsafe service
>       service: fix identification of service running on other lcore
> 
> Hyong Youb Kim (1):
>       net/enic: fix flow action reordering
> 
> Igor Chauskin (2):
>       net/ena/base: make allocation macros thread-safe
>       net/ena/base: prevent allocation of zero sized memory
> 
> Igor Romanov (9):
>       net/sfc: fix initialization error path
>       net/sfc: fix Rx queue start failure path
>       net/sfc: fix promiscuous and allmulticast toggles errors
>       net/sfc: set priority of created filters to manual
>       net/sfc/base: reduce filter priorities to implemented only
>       net/sfc/base: reject automatic filter creation by users
>       net/sfc/base: refactor filter lookup loop in EF10
>       net/sfc/base: handle manual and auto filter clashes in EF10
>       net/sfc/base: fix manual filter delete in EF10
> 
> Itsuro Oda (2):
>       net/vhost: fix potential memory leak on close
>       vhost: make IOTLB cache name unique among processes
> 
> Ivan Dyukov (3):
>       net/virtio-user: fix devargs parsing
>       app: remove extra new line after link duplex
>       examples: remove extra new line after link duplex
> 
> Jasvinder Singh (3):
>       net/softnic: fix memory leak for thread
>       net/softnic: fix resource leak for pipeline
>       examples/ip_pipeline: remove check of null response
> 
> Jeff Guo (3):
>       net/i40e: fix setting L2TAG
>       net/iavf: fix setting L2TAG
>       net/ice: fix setting L2TAG
> 
> Jiawei Wang (1):
>       net/mlx5: fix imissed counter overflow
> 
> Jim Harris (1):
>       contigmem: cleanup properly when load fails
> 
> Jun Yang (1):
>       net/dpaa2: fix congestion ID for multiple traffic classes
> 
> Junyu Jiang (4):
>       examples/vmdq: fix output of pools/queues
>       examples/vmdq: fix RSS configuration
>       net/ice: fix RSS advanced rule
>       net/ice: fix crash in switch filter
> 
> Juraj Linkeš (1):
>       ci: fix telemetry dependency in Travis
> 
> Július Milan (1):
>       net/memif: fix init when already connected
> 
> Kalesh AP (9):
>       net/bnxt: fix HWRM command during FW reset
>       net/bnxt: use true/false for bool types
>       net/bnxt: fix port start failure handling
>       net/bnxt: fix VLAN add when port is stopped
>       net/bnxt: fix VNIC Rx queue count on VNIC free
>       net/bnxt: fix number of TQM ring
>       net/bnxt: fix TQM ring context memory size
>       app/testpmd: fix memory failure handling for i40e DDP
>       net/bnxt: fix storing MAC address twice
> 
> Kevin Traynor (9):
>       net/hinic: fix snprintf length of cable info
>       net/hinic: fix repeating cable log and length check
>       net/avp: fix gcc 10 maybe-uninitialized warning
>       examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
>       eal/x86: ignore gcc 10 stringop-overflow warnings
>       net/mlx5: fix gcc 10 enum-conversion warning
>       crypto/kasumi: fix extern declaration
>       drivers/crypto: disable gcc 10 no-common errors
>       build: disable gcc 10 zero-length-bounds warning
> 
> Konstantin Ananyev (1):
>       security: fix crash at accessing non-implemented ops
> 
> Lijun Ou (4):
>       net/hns3: fix configuring RSS hash when rules are flushed
>       net/hns3: add RSS hash offload to capabilities
>       net/hns3: fix RSS key length
>       net/hns3: fix RSS indirection table configuration
> 
> Linsi Yuan (1):
>       net/bnxt: fix possible stack smashing
> 
> Louise Kilheeney (1):
>       examples/l2fwd-keepalive: fix mbuf pool size
> 
> Luca Boccassi (4):
>       fix various typos found by Lintian
>       usertools: check for pci.ids in /usr/share/misc
>       Revert "net/bnxt: fix TQM ring context memory size"
>       Revert "net/bnxt: fix number of TQM ring"
> 
> Lukasz Bartosik (1):
>       event/octeontx2: fix queue removal from Rx adapter
> 
> Lukasz Wojciechowski (5):
>       drivers/crypto: fix log type variables for -fno-common
>       security: fix verification of parameters
>       security: fix return types in documentation
>       security: fix session counter
>       test: remove redundant macro
> 
> Marvin Liu (5):
>       vhost: fix packed ring zero-copy
>       vhost: fix shadow update
>       vhost: fix shadowed descriptors not flushed
>       net/virtio: fix crash when device reconnecting
>       net/virtio: fix unexpected event after reconnect
> 
> Matteo Croce (1):
>       doc: fix LTO config option
> 
> Mattias Rönnblom (3):
>       event/dsw: remove redundant control ring poll
>       event/dsw: remove unnecessary read barrier
>       event/dsw: avoid reusing previously recorded events
> 
> Michael Baum (2):
>       net/mlx5: fix meter color register consideration
>       net/mlx4: fix drop queue error handling
> 
> Michael Haeuptle (1):
>       vfio: fix race condition with sysfs
> 
> Michal Krawczyk (5):
>       net/ena/base: fix documentation of functions
>       net/ena/base: fix indentation in CQ polling
>       net/ena/base: fix indentation of multiple defines
>       net/ena: set IO ring size to valid value
>       net/ena/base: fix testing for supported hash function
> 
> Min Hu (Connor) (3):
>       net/hns3: fix configuring illegal VLAN PVID
>       net/hns3: fix mailbox opcode data type
>       net/hns3: fix VLAN PVID when configuring device
> 
> Mit Matelske (1):
>       eal/freebsd: fix queuing duplicate alarm callbacks
> 
> Mohsin Shaikh (1):
>       net/mlx5: use open/read/close for ib stats query
> 
> Muhammad Bilal (2):
>       fix same typo in multiple places
>       doc: fix typo in contributors guide
> 
> Nagadheeraj Rottela (2):
>       crypto/nitrox: fix CSR register address generation
>       crypto/nitrox: fix oversized device name
> 
> Nicolas Chautru (2):
>       baseband/turbo_sw: fix exposed LLR decimals assumption
>       bbdev: fix doxygen comments
> 
> Nithin Dabilpuram (2):
>       devtools: fix symbol map change check
>       net/octeontx2: disable unnecessary error interrupts
> 
> Olivier Matz (3):
>       test/kvargs: fix to consider empty elements as valid
>       test/kvargs: fix invalid cases check
>       kvargs: fix invalid token parsing on FreeBSD
> 
> Ophir Munk (1):
>       net/mlx5: fix VLAN PCP item calculation
> 
> Ori Kam (1):
>       eal/ppc: fix bool type after altivec include
> 
> Pablo de Lara (4):
>       cryptodev: add asymmetric session-less feature name
>       test/crypto: fix flag check
>       crypto/openssl: fix out-of-place encryption
>       doc: add NASM installation steps
> 
> Pavan Nikhilesh (4):
>       net/octeontx2: fix device configuration sequence
>       eventdev: fix probe and remove for secondary process
>       common/octeontx: fix gcc 9.1 ABI break
>       app/eventdev: check Tx adapter service ID
> 
> Phil Yang (2):
>       service: remove rte prefix from static functions
>       net/ixgbe: fix link state timing on fiber ports
> 
> Qi Zhang (10):
>       net/ice: remove unnecessary variable
>       net/ice: remove bulk alloc option
>       net/ice/base: fix uninitialized stack variables
>       net/ice/base: read PSM clock frequency from register
>       net/ice/base: minor fixes
>       net/ice/base: fix MAC write command
>       net/ice/base: fix binary order for GTPU filter
>       net/ice/base: remove unused code in switch rule
>       net/ice: fix variable initialization
>       net/ice: fix RSS for GTPU
> 
> Qiming Yang (3):
>       net/i40e: fix X722 performance
>       doc: fix multicast filter feature announcement
>       net/i40e: fix queue related exception handling
> 
> Rahul Gupta (2):
>       net/bnxt: fix memory leak during queue restart
>       net/bnxt: fix Rx ring producer index
> 
> Rasesh Mody (3):
>       net/qede: fix link state configuration
>       net/qede: fix port reconfiguration
>       examples/kni: fix MTU change to setup Tx queue
> 
> Raslan Darawsheh (4):
>       net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
>       app/testpmd: add parsing for QinQ VLAN headers
>       net/mlx5: fix matching for UDP tunnels with Verbs
>       doc: fix build issue in ABI guide
> 
> Ray Kinsella (1):
>       doc: fix default symbol binding in ABI guide
> 
> Rohit Raj (1):
>       net/dpaa2: fix 10G port negotiation
> 
> Roland Qi (1):
>       vhost: fix peer close check
> 
> Ruifeng Wang (2):
>       test: skip some subtests in no-huge mode
>       test/ipsec: fix crash in session destroy
> 
> Sarosh Arif (1):
>       doc: fix typo in contributors guide
> 
> Shougang Wang (2):
>       net/ixgbe: fix link status after port reset
>       net/i40e: fix queue region in RSS flow
> 
> Simei Su (1):
>       net/ice: support mark only action for flow director
> 
> Sivaprasad Tummala (1):
>       vhost: handle mbuf allocation failure
> 
> Somnath Kotur (2):
>       bus/pci: fix devargs on probing again
>       net/bnxt: fix max ring count
> 
> Stephen Hemminger (24):
>       ethdev: fix spelling
>       net/mvneta: do not use PMD log type
>       net/virtio: do not use PMD log type
>       net/tap: do not use PMD log type
>       net/pfe: do not use PMD log type
>       net/bnxt: do not use PMD log type
>       net/dpaa: use dynamic log type
>       net/thunderx: use dynamic log type
>       net/netvsc: propagate descriptor limits from VF
>       net/netvsc: handle Rx packets during multi-channel setup
>       net/netvsc: split send buffers from Tx descriptors
>       net/netvsc: fix memory free on device close
>       net/netvsc: remove process event optimization
>       net/netvsc: handle Tx completions based on burst size
>       net/netvsc: avoid possible live lock
>       lpm6: fix comments spelling
>       eal: fix comments spelling
>       net/netvsc: fix comment spelling
>       bus/vmbus: fix comment spelling
>       net/netvsc: do RSS across Rx queue only
>       net/netvsc: do not configure RSS if disabled
>       net/tap: fix crash in flow destroy
>       eal: fix C++17 compilation
>       net/vmxnet3: handle bad host framing
> 
> Suanming Mou (3):
>       net/mlx5: fix counter container usage
>       net/mlx5: fix meter suffix table leak
>       net/mlx5: fix jump table leak
> 
> Sunil Kumar Kori (1):
>       eal: fix log message print for regex
> 
> Tao Zhu (3):
>       net/ice: fix hash flow crash
>       net/ixgbe: fix link status inconsistencies
>       net/ixgbe: fix resource leak after thread exits normally
> 
> Thomas Monjalon (13):
>       drivers/crypto: fix build with make 4.3
>       doc: fix sphinx compatibility
>       log: fix level picked with globbing on type register
>       doc: fix matrix CSS for recent sphinx
>       common/mlx5: fix build with -fno-common
>       net/mlx4: fix build with -fno-common
>       common/mlx5: fix build with rdma-core 21
>       app: fix usage help of options separated by dashes
>       net/mvpp2: fix build with gcc 10
>       examples/vm_power: fix build with -fno-common
>       examples/vm_power: drop Unix path limit redefinition
>       doc: fix build with doxygen 1.8.18
>       doc: fix API index
> 
> Timothy Redaelli (6):
>       crypto/octeontx2: fix build with gcc 10
>       test: fix build with gcc 10
>       app/pipeline: fix build with gcc 10
>       examples/vhost_blk: fix build with gcc 10
>       examples/eventdev: fix build with gcc 10
>       examples/qos_sched: fix build with gcc 10
> 
> Ting Xu (1):
>       app/testpmd: fix DCB set
> 
> Tonghao Zhang (2):
>       eal: fix PRNG init with HPET enabled
>       net/mlx5: fix crash when releasing meter table
> 
> Vadim Podovinnikov (1):
>       net/memif: fix resource leak
> 
> Vamsi Attunuru (1):
>       net/octeontx2: enable error and RAS interrupt in configure
> 
> Viacheslav Ovsiienko (2):
>       net/mlx5: fix metadata for compressed Rx CQEs
>       common/mlx5: fix netlink buffer allocation from stack
> 
> Vijaya Mohan Guvva (1):
>       bus/pci: fix UIO resource access from secondary process
> 
> Vladimir Medvedkin (1):
>       ipsec: check SAD lookup error
> 
> Wei Hu (Xavier) (10):
>       vfio: fix use after free with multiprocess
>       net/hns3: fix status after repeated resets
>       net/hns3: fix return value when clearing statistics
>       app/testpmd: fix statistics after reset
>       net/hns3: support different numbers of Rx and Tx queues
>       net/hns3: fix Tx interrupt when enabling Rx interrupt
>       net/hns3: fix MSI-X interrupt during initialization
>       net/hns3: remove unnecessary assignments in Tx
>       net/hns3: remove one IO barrier in Rx
>       net/hns3: add free threshold in Rx
> 
> Wei Zhao (8):
>       net/ice: change default tunnel type
>       net/ice: add action number check for switch
>       net/ice: fix input set of VLAN item
>       net/i40e: fix flow director for ARP packets
>       doc: add i40e limitation for flow director
>       net/i40e: fix flush of flow director filter
>       net/i40e: fix wild pointer
>       net/i40e: fix flow director enabling
> 
> Wisam Jaddo (3):
>       net/mlx5: fix zero metadata action
>       net/mlx5: fix zero value validation for metadata
>       net/mlx5: fix VLAN ID check
> 
> Xiao Zhang (1):
>       app/testpmd: fix PPPoE flow command
> 
> Xiaolong Ye (3):
>       net/virtio: fix outdated comment
>       vhost: remove unused variable
>       doc: fix log level example in Linux guide
> 
> Xiaoyu Min (3):
>       net/mlx5: fix push VLAN action to use item info
>       net/mlx5: fix validation of push VLAN without full mask
>       net/mlx5: fix RSS enablement
> 
> Xiaoyun Li (4):
>       net/ixgbe/base: update copyright
>       net/i40e/base: update copyright
>       common/iavf: update copyright
>       net/ice/base: update copyright
> 
> Xiaoyun Wang (7):
>       net/hinic: allocate IO memory with socket id
>       net/hinic: fix LRO
>       net/hinic/base: fix port start during FW hot update
>       net/hinic/base: fix PF firmware hot-active problem
>       net/hinic: fix queues resource free
>       net/hinic: fix Tx mbuf length while copying
>       net/hinic: fix TSO
> 
> Xuan Ding (2):
>       vhost: prevent zero-copy with incompatible client mode
>       vhost: fix zero-copy server mode
> 
> Yisen Zhuang (1):
>       net/hns3: reduce judgements of free Tx ring space
> 
> Yunjian Wang (16):
>       kvargs: fix buffer overflow when parsing list
>       net/tap: remove unused assert
>       net/nfp: fix dangling pointer on probe failure
>       net/pfe: fix double free of MAC address
>       net/tap: fix mbuf double free when writev fails
>       net/tap: fix mbuf and mem leak during queue release
>       net/tap: fix check for mbuf number of segment
>       net/tap: fix file close on remove
>       net/tap: fix fd leak on creation failure
>       net/tap: fix unexpected link handler
>       net/tap: fix queues fd check before close
>       net/octeontx: fix dangling pointer on init failure
>       crypto/ccp: fix fd leak on probe failure
>       net/failsafe: fix fd leak
>       crypto/caam_jr: fix check of file descriptors
>       crypto/caam_jr: fix IRQ functions return type
> 
> Yuri Chipchev (1):
>       event/dsw: fix enqueue burst return value
> 
> Zhihong Peng (1):
>       net/ixgbe: fix link status synchronization on BSD

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling
@ 2020-06-10  6:38  2% Viacheslav Ovsiienko
  2020-06-10 13:33  0% ` Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Viacheslav Ovsiienko @ 2020-06-10  6:38 UTC (permalink / raw)
  To: dev; +Cc: thomas, matan, rasland, orika, olivier.matz, shahafs

Some networks require precise traffic timing management. The ability
to send (and, generally speaking, receive) packets at a precisely
specified moment in time makes it possible to support connections
with Time Division Multiplexing using a contemporary general-purpose
NIC without auxiliary hardware. For example, support for the O-RAN
Fronthaul interface is one promising application of precise time
management for egress packets.

The main objective of this RFC is to specify how applications can
provide the moment in time at which packet transmission must start,
and to give a preliminary description of how this feature will be
supported on the mlx5 PMD side.

A new dynamic timestamp field is proposed. It provides some timing
information; the units and time references (initial phase) are not
explicitly defined but always remain the same for a given port.
Some devices allow querying rte_eth_read_clock(), which returns
the current device timestamp. The dynamic timestamp flag tells whether
the field contains an actual timestamp value. For packets being sent,
this value can be used by the PMD to schedule packet sending.

After the PKT_RX_TIMESTAMP flag and the fixed timestamp field are
deprecated and obsoleted, this dynamic flag and field will be used
to manage timestamps on the receive datapath as well.

When the PMD sees "rte_dynfield_timestamp" set on a packet being sent,
it tries to synchronize the moment the packet appears on the wire with
the specified packet timestamp. If the specified timestamp is in the
past, it should be ignored; if it is in the distant future, it should
be capped to some reasonable value (in the range of seconds). These
specific cases ("too late" and "distant future") can optionally be
reported via device xstats to help applications detect time-related
problems.

No packet reordering according to timestamps is assumed, neither
within a packet burst nor between packets; it is entirely the
application's responsibility to generate packets and their timestamps
in the desired order. A timestamp can be put only in the first packet
of a burst, thereby scheduling the entire burst.

The PMD reports the ability to synchronize packet sending on a
timestamp with a new offload flag, DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP
(see the patch below).

This is a palliative and is going to be replaced with a new eth_dev API
for reporting/managing the supported dynamic flags and their related
features. That API would break ABI compatibility and can't be introduced
at the moment, so it is postponed to 20.11.

For testing purposes it is proposed to update the testpmd "txonly"
forwarding mode routine. With this update, the testpmd application
generates packets and sets the dynamic timestamps according to the
specified time pattern if it sees that "rte_dynfield_timestamp" is
registered.

A new testpmd command is proposed to configure the sending pattern:

set tx_times <intra_gap>,<burst_gap>

<intra_gap> - the delay between the packets within the burst,
              specified in device clock units. The number of
              packets in the burst is defined by the txburst
              parameter

<burst_gap> - the delay between bursts, in device clock units
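
For example (hypothetical values), ``set tx_times 100000,2000000`` would
space packets within a burst 100000 clock units apart and start a new
burst every 2000000 clock units.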

As a result, bursts of packets will be transmitted with specific
delays between the packets within a burst and a specific delay between
the bursts. rte_eth_read_clock() is supposed to be used to get the
current device clock value and provide the reference for the timestamps.
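
To make the proposed usage concrete, below is a minimal application-side
sketch (an illustration only: the helper names are made up, error handling
is trimmed, and it assumes the PMD has registered the dynamic field and
flag, and that the port supports rte_eth_read_clock()):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int ts_offset;
static uint64_t ts_flag;

/* Once, after the port is started. */
static int
tx_tstamp_setup(void)
{
	int bitnum;

	ts_offset = rte_mbuf_dynfield_lookup(
			RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	bitnum = rte_mbuf_dynflag_lookup(
			RTE_MBUF_DYNFLAG_TIMESTAMP_NAME, NULL);
	if (ts_offset < 0 || bitnum < 0)
		return -1; /* field/flag not registered by the PMD */
	ts_flag = 1ULL << bitnum;
	return 0;
}

/* Per packet: ask the PMD to put it on the wire "delay" units from now. */
static void
tx_tstamp_set(uint16_t port_id, struct rte_mbuf *m, uint64_t delay)
{
	uint64_t now;

	rte_eth_read_clock(port_id, &now);
	*RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = now + delay;
	m->ol_flags |= ts_flag;
}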

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 lib/librte_ethdev/rte_ethdev.h |  4 ++++
 lib/librte_mbuf/rte_mbuf_dyn.h | 16 ++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index a49242b..6f6454c 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1178,6 +1178,10 @@ struct rte_eth_conf {
 /** Device supports outer UDP checksum */
 #define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
 
+/** Device supports send on timestamp */
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+
+
 #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
 /**< Device supports Rx queue setup after device started*/
 #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
index 96c3631..fb5477c 100644
--- a/lib/librte_mbuf/rte_mbuf_dyn.h
+++ b/lib/librte_mbuf/rte_mbuf_dyn.h
@@ -250,4 +250,20 @@ int rte_mbuf_dynflag_lookup(const char *name,
 #define RTE_MBUF_DYNFIELD_METADATA_NAME "rte_flow_dynfield_metadata"
 #define RTE_MBUF_DYNFLAG_METADATA_NAME "rte_flow_dynflag_metadata"
 
+/*
+ * The timestamp dynamic field provides some timing information; the
+ * units and time references (initial phase) are not explicitly defined
+ * but always remain the same for a given port. Some devices allow
+ * querying rte_eth_read_clock(), which returns the current device
+ * timestamp. The dynamic timestamp flag tells whether the field contains
+ * an actual timestamp value. For packets being sent, this value can be
+ * used by the PMD to schedule packet sending.
+ *
+ * After the PKT_RX_TIMESTAMP flag and the fixed timestamp field are
+ * deprecated and obsoleted, this dynamic flag and field will be used
+ * to manage timestamps on the receive datapath as well.
+ */
+#define RTE_MBUF_DYNFIELD_TIMESTAMP_NAME "rte_dynfield_timestamp"
+#define RTE_MBUF_DYNFLAG_TIMESTAMP_NAME "rte_dynflag_timestamp"
+
 #endif
-- 
1.8.3.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH] mbuf: remove unused next member
  2020-06-09 15:29  3%     ` Stephen Hemminger
@ 2020-06-10  0:54  3%       ` Ye Xiaolong
  0 siblings, 0 replies; 200+ results
From: Ye Xiaolong @ 2020-06-10  0:54 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Olivier Matz, Konstantin Ananyev, Thomas Monjalon, dev,
	haiyue.wang, stable

On 06/09, Stephen Hemminger wrote:
>On Tue, 9 Jun 2020 15:15:33 +0800
>Ye Xiaolong <xiaolong.ye@intel.com> wrote:
>
>> On 06/09, Olivier Matz wrote:
>> >Hi Xiaolong,
>> >
>> >On Tue, Jun 09, 2020 at 01:29:55PM +0800, Xiaolong Ye wrote:  
>> >> TAILQ_ENTRY next is not needed in struct mbuf_dynfield_elt and
>> >> mbuf_dynflag_elt, since they are actually chained by rte_tailq_entry's
>> >> next field when calling TAILQ_INSERT_TAIL(mbuf_dynfield/dynflag_list, te,
>> >> next).
>> >> 
>> >> Fixes: 4958ca3a443a ("mbuf: support dynamic fields and flags")
>> >> Cc: stable@dpdk.org
>> >> 
>> >> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>  
>> >
>> >Good catch, I forgot to remove this field which was used in former
>> >implementations. Thanks!
>> >
>> >I suggest to update the title to highlight it's about dynamic mbuf:
>> >  mbuf: remove unused next member in dyn flag/field
>> >
>> >Apart from this:
>> >Acked-by: Olivier Matz <olivier.matz@6wind.com>  
>> 
>> Thanks for the ack, I'll submit V2 with suggested subject.
>> 
>> Thanks,
>> Xiaolong
>
>Is the field visible in ABI?

I don't think so. The structs touched in this patch, mbuf_dynfield_elt and
mbuf_dynflag_elt, are internal structures used in rte_mbuf_dyn.c, while the
structures exposed to users are struct rte_mbuf_dynfield and struct
rte_mbuf_dynflag in rte_mbuf_dyn.h; those remain the same as before, so
there should be no ABI break in this patch.
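
For reference, the internal structure in question looks roughly like this
(reconstructed from rte_mbuf_dyn.c from memory, so treat the exact layout
as an assumption):

struct mbuf_dynfield_elt {
	/* The unused member removed by this patch; elements are
	 * actually chained through struct rte_tailq_entry.
	 */
	TAILQ_ENTRY(mbuf_dynfield_elt) next;
	struct rte_mbuf_dynfield params;
	size_t offset;
};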

Thanks,
Xiaolong

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] mbuf: remove unused next member
  @ 2020-06-09 15:29  3%     ` Stephen Hemminger
  2020-06-10  0:54  3%       ` Ye Xiaolong
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2020-06-09 15:29 UTC (permalink / raw)
  To: Ye Xiaolong
  Cc: Olivier Matz, Konstantin Ananyev, Thomas Monjalon, dev,
	haiyue.wang, stable

On Tue, 9 Jun 2020 15:15:33 +0800
Ye Xiaolong <xiaolong.ye@intel.com> wrote:

> On 06/09, Olivier Matz wrote:
> >Hi Xiaolong,
> >
> >On Tue, Jun 09, 2020 at 01:29:55PM +0800, Xiaolong Ye wrote:  
> >> TAILQ_ENTRY next is not needed in struct mbuf_dynfield_elt and
> >> mbuf_dynflag_elt, since they are actually chained by rte_tailq_entry's
> >> next field when calling TAILQ_INSERT_TAIL(mbuf_dynfield/dynflag_list, te,
> >> next).
> >> 
> >> Fixes: 4958ca3a443a ("mbuf: support dynamic fields and flags")
> >> Cc: stable@dpdk.org
> >> 
> >> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>  
> >
> >Good catch, I forgot to remove this field which was used in former
> >implementations. Thanks!
> >
> >I suggest to update the title to highlight it's about dynamic mbuf:
> >  mbuf: remove unused next member in dyn flag/field
> >
> >Apart from this:
> >Acked-by: Olivier Matz <olivier.matz@6wind.com>  
> 
> Thanks for the ack, I'll submit V2 with suggested subject.
> 
> Thanks,
> Xiaolong

Is the field visible in ABI?

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2 09/10] doc: add note about blacklist/whitelist changes
  @ 2020-06-08 19:25  4%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-06-08 19:25 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The blacklist/whitelist changes to the API will cause warnings
for applications that still use the old names.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/release_20_08.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe968..502d67f26ff8 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -85,7 +85,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* eal: The definitions related to including and excluding devices
+  have been changed from blacklist/whitelist to block/allow.
+  There are compatibility macros and a command line mapping to accept
+  the old values, but applications and scripts are strongly encouraged
+  to migrate to the new names.
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 1/3] lib/lpm: integrate RCU QSBR
  @ 2020-06-08 18:46  3%     ` Honnappa Nagarahalli
  2020-06-18 17:36  0%       ` Medvedkin, Vladimir
  0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2020-06-08 18:46 UTC (permalink / raw)
  To: Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin,
	John McNamara, Marko Kovacevic, Ray Kinsella, Neil Horman
  Cc: dev, konstantin.ananyev, nd, Ruifeng Wang, Honnappa Nagarahalli, nd

<snip>

> Subject: [PATCH v4 1/3] lib/lpm: integrate RCU QSBR
> 
> Currently, the tbl8 group is freed even though the readers might be using the
> tbl8 group entries. The freed tbl8 group can be reallocated quickly. This
> results in incorrect lookup results.
> 
> The RCU QSBR process is integrated for safe tbl8 group reclaim.
> Refer to the RCU documentation to understand various aspects of
> integrating the RCU library into other libraries.
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
>  doc/guides/prog_guide/lpm_lib.rst  |  32 ++++++++
>  lib/librte_lpm/Makefile            |   2 +-
>  lib/librte_lpm/meson.build         |   1 +
>  lib/librte_lpm/rte_lpm.c           | 123 ++++++++++++++++++++++++++---
>  lib/librte_lpm/rte_lpm.h           |  59 ++++++++++++++
>  lib/librte_lpm/rte_lpm_version.map |   6 ++
>  6 files changed, 211 insertions(+), 12 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/lpm_lib.rst
> b/doc/guides/prog_guide/lpm_lib.rst
> index 1609a57d0..7cc99044a 100644
> --- a/doc/guides/prog_guide/lpm_lib.rst
> +++ b/doc/guides/prog_guide/lpm_lib.rst
> @@ -145,6 +145,38 @@ depending on whether we need to move to the next
> table or not.
>  Prefix expansion is one of the keys of this algorithm,  since it improves the
> speed dramatically by adding redundancy.
> 
> +Deletion
> +~~~~~~~~
> +
> +When deleting a rule, a replacement rule is searched for. Replacement
> +rule is an existing rule that has the longest prefix match with the rule to be
> deleted, but has smaller depth.
> +
> +If a replacement rule is found, target tbl24 and tbl8 entries are
> +updated to have the same depth and next hop value with the replacement
> rule.
> +
> +If no replacement rule can be found, target tbl24 and tbl8 entries will be
> cleared.
> +
> +Prefix expansion is performed if the rule's depth is not exactly 24 bits or 32
> bits.
> +
> +After deleting a rule, a group of tbl8s that belongs to the same tbl24 entry
> are freed in following cases:
> +
> +*   All tbl8s in the group are empty.
> +
> +*   All tbl8s in the group have the same values and with depth no greater
> than 24.
> +
> +Freeing of tbl8s has different behaviors:
> +
> +*   If RCU is not used, tbl8s are cleared and reclaimed immediately.
> +
> +*   If RCU is used, tbl8s are reclaimed when readers are in quiescent state.
> +
> +When the LPM is not using RCU, tbl8 group can be freed immediately even
> +though the readers might be using the tbl8 group entries. This might result
> in incorrect lookup results.
> +
> +RCU QSBR process is integrated for safe tbl8 group reclamation.
> +The application has certain responsibilities while using this feature.
> +Please refer to the resource reclamation framework of :ref:`RCU library
> <RCU_Library>` for more details.
> +
>  Lookup
>  ~~~~~~
> 
> diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile index
> d682785b6..6f06c5c03 100644
> --- a/lib/librte_lpm/Makefile
> +++ b/lib/librte_lpm/Makefile
> @@ -8,7 +8,7 @@ LIB = librte_lpm.a
> 
>  CFLAGS += -O3
>  CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> -LDLIBS += -lrte_eal -lrte_hash
> +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
> 
>  EXPORT_MAP := rte_lpm_version.map
> 
> diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build index
> 021ac6d8d..6cfc083c5 100644
> --- a/lib/librte_lpm/meson.build
> +++ b/lib/librte_lpm/meson.build
> @@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')  # without
> worrying about which architecture we actually need  headers +=
> files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')  deps += ['hash']
> +deps += ['rcu']
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> 38ab512a4..30f541179 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
> 
>  #include <string.h>
> @@ -246,12 +247,85 @@ rte_lpm_free(struct rte_lpm *lpm)
> 
>  	rte_mcfg_tailq_write_unlock();
> 
> +	if (lpm->dq)
> +		rte_rcu_qsbr_dq_delete(lpm->dq);
>  	rte_free(lpm->tbl8);
>  	rte_free(lpm->rules_tbl);
>  	rte_free(lpm);
>  	rte_free(te);
>  }
> 
> +static void
> +__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n) {
> +	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> +	uint32_t tbl8_group_index = *(uint32_t *)data;
> +	struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
> +
> +	RTE_SET_USED(n);
> +	/* Set tbl8 group invalid */
> +	__atomic_store(&tbl8[tbl8_group_index], &zero_tbl8_entry,
> +		__ATOMIC_RELAXED);
> +}
> +
> +/* Associate QSBR variable with an LPM object.
> + */
> +int
> +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,
> +	struct rte_rcu_qsbr_dq **dq)
I prefer not to return the defer queue to the user here. I see 3 different ways in which RCU can be integrated into the libraries:

1) The sync mode in which the defer queue is not created. The rte_rcu_qsbr_synchronize API is called after delete. The resource is freed after rte_rcu_qsbr_synchronize returns and the control is given back to the user.

2) The mode where the defer queue is created. There is a lot of flexibility provided now as the defer queue size, reclaim threshold and how many resources to reclaim are all configurable. IMO, this solves most of the use cases and helps the application integrate lock-less algorithms with minimal effort.

3) This is where the application has its own method of reclamation that does not fall under 1) or 2). To address this use case, I think we should make changes to the LPM library. Today, in LPM, the delete and free are combined into a single API. We can split this single API into 2 separate APIs - delete and free (a similar thing was done for the rte_hash library) without affecting the ABI. This should provide all the flexibility required for the application to implement any kind of reclamation algorithm it wants. Returning the defer queue to the user in the above API does not solve this use case.
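
Just to make 2) concrete, usage with this patch would look roughly like the
sketch below (error handling trimmed; the helper name is made up, and
rte_lpm_rcu_qsbr_add is the API added by this patch):

#include <rte_lpm.h>
#include <rte_malloc.h>
#include <rte_rcu_qsbr.h>

static int
lpm_rcu_setup(struct rte_lpm *lpm, uint32_t num_readers)
{
	struct rte_rcu_qsbr *v;
	struct rte_lpm_rcu_config cfg = {0};

	v = rte_zmalloc(NULL, rte_rcu_qsbr_get_memsize(num_readers),
			RTE_CACHE_LINE_SIZE);
	if (v == NULL || rte_rcu_qsbr_init(v, num_readers) != 0)
		return -1;
	cfg.v = v;
	cfg.mode = RTE_LPM_QSBR_MODE_DQ; /* create a defer queue */
	/* dq_size/reclaim_thd/reclaim_max left as 0 -> library defaults */
	return rte_lpm_rcu_qsbr_add(lpm, &cfg, NULL);
}

With this, rte_lpm_delete() pushes freed tbl8 groups to the defer queue and
they are reclaimed only after the registered readers report quiescence.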

> +{
> +	char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> +	struct rte_rcu_qsbr_dq_parameters params = {0};
> +
> +	if ((lpm == NULL) || (cfg == NULL)) {
> +		rte_errno = EINVAL;
> +		return 1;
> +	}
> +
> +	if (lpm->v) {
> +		rte_errno = EEXIST;
> +		return 1;
> +	}
> +
> +	if (cfg->mode == RTE_LPM_QSBR_MODE_SYNC) {
> +		/* No other things to do. */
> +	} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {
> +		/* Init QSBR defer queue. */
> +		snprintf(rcu_dq_name, sizeof(rcu_dq_name),
> +				"LPM_RCU_%s", lpm->name);
> +		params.name = rcu_dq_name;
> +		params.size = cfg->dq_size;
> +		if (params.size == 0)
> +			params.size = lpm->number_tbl8s;
> +		params.trigger_reclaim_limit = cfg->reclaim_thd;
> +		if (params.trigger_reclaim_limit == 0)
> +			params.trigger_reclaim_limit =
> +					RTE_LPM_RCU_DQ_RECLAIM_THD;
> +		params.max_reclaim_size = cfg->reclaim_max;
> +		if (params.max_reclaim_size == 0)
> +			params.max_reclaim_size =
> RTE_LPM_RCU_DQ_RECLAIM_MAX;
> +		params.esize = sizeof(uint32_t);	/* tbl8 group index */
> +		params.free_fn = __lpm_rcu_qsbr_free_resource;
> +		params.p = lpm->tbl8;
> +		params.v = cfg->v;
> +		lpm->dq = rte_rcu_qsbr_dq_create(&params);
> +		if (lpm->dq == NULL) {
> +			RTE_LOG(ERR, LPM,
> +					"LPM QS defer queue creation
> failed\n");
> +			return 1;
> +		}
> +		if (dq)
> +			*dq = lpm->dq;
> +	} else {
> +		rte_errno = EINVAL;
> +		return 1;
> +	}
> +	lpm->rcu_mode = cfg->mode;
> +	lpm->v = cfg->v;
> +
> +	return 0;
> +}
> +
>  /*
>   * Adds a rule to the rule table.
>   *
> @@ -394,14 +468,15 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked,
> uint8_t depth)
>   * Find, clean and allocate a tbl8.
>   */
>  static int32_t
> -tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> +_tbl8_alloc(struct rte_lpm *lpm)
>  {
>  	uint32_t group_idx; /* tbl8 group index. */
>  	struct rte_lpm_tbl_entry *tbl8_entry;
> 
>  	/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> -	for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> -		tbl8_entry = &tbl8[group_idx *
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> +	for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> +		tbl8_entry = &lpm->tbl8[group_idx *
> +
> 	RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
>  		/* If a free tbl8 group is found clean it and set as VALID. */
>  		if (!tbl8_entry->valid_group) {
>  			struct rte_lpm_tbl_entry new_tbl8_entry = { @@ -
> 427,14 +502,40 @@ tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t
> number_tbl8s)
>  	return -ENOSPC;
>  }
> 
> +static int32_t
> +tbl8_alloc(struct rte_lpm *lpm)
> +{
> +	int32_t group_idx; /* tbl8 group index. */
> +
> +	group_idx = _tbl8_alloc(lpm);
> +	if ((group_idx < 0) && (lpm->dq != NULL)) {
> +		/* If there are no tbl8 groups try to reclaim one. */
> +		if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL)
> == 0)
> +			group_idx = _tbl8_alloc(lpm);
> +	}
> +
> +	return group_idx;
> +}
> +
>  static void
> -tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> +tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>  {
> -	/* Set tbl8 group invalid*/
>  	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> 
> -	__atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> -			__ATOMIC_RELAXED);
> +	if (!lpm->v) {
> +		/* Set tbl8 group invalid*/
> +		__atomic_store(&lpm->tbl8[tbl8_group_start],
> &zero_tbl8_entry,
> +				__ATOMIC_RELAXED);
> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {
> +		/* Wait for quiescent state change. */
> +		rte_rcu_qsbr_synchronize(lpm->v,
> RTE_QSBR_THRID_INVALID);
> +		/* Set tbl8 group invalid*/
> +		__atomic_store(&lpm->tbl8[tbl8_group_start],
> &zero_tbl8_entry,
> +				__ATOMIC_RELAXED);
> +	} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
> +		/* Push into QSBR defer queue. */
> +		rte_rcu_qsbr_dq_enqueue(lpm->dq, (void
> *)&tbl8_group_start);
> +	}
>  }
> 
>  static __rte_noinline int32_t
> @@ -523,7 +624,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked, uint8_t depth,
> 
>  	if (!lpm->tbl24[tbl24_index].valid) {
>  		/* Search for a free tbl8 group. */
> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +		tbl8_group_index = tbl8_alloc(lpm);
> 
>  		/* Check tbl8 allocation was successful. */
>  		if (tbl8_group_index < 0) {
> @@ -569,7 +670,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked, uint8_t depth,
>  	} /* If valid entry but not extended calculate the index into Table8. */
>  	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
>  		/* Search for free tbl8 group. */
> -		tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
> +		tbl8_group_index = tbl8_alloc(lpm);
> 
>  		if (tbl8_group_index < 0) {
>  			return tbl8_group_index;
> @@ -977,7 +1078,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked,
>  		 */
>  		lpm->tbl24[tbl24_index].valid = 0;
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm->tbl8, tbl8_group_start);
> +		tbl8_free(lpm, tbl8_group_start);
>  	} else if (tbl8_recycle_index > -1) {
>  		/* Update tbl24 entry. */
>  		struct rte_lpm_tbl_entry new_tbl24_entry = { @@ -993,7
> +1094,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  		__atomic_store(&lpm->tbl24[tbl24_index],
> &new_tbl24_entry,
>  				__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm->tbl8, tbl8_group_start);
> +		tbl8_free(lpm, tbl8_group_start);
>  	}
>  #undef group_idx
>  	return 0;
> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> b9d49ac87..8c054509a 100644
> --- a/lib/librte_lpm/rte_lpm.h
> +++ b/lib/librte_lpm/rte_lpm.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> + * Copyright(c) 2020 Arm Limited
>   */
> 
>  #ifndef _RTE_LPM_H_
> @@ -20,6 +21,7 @@
>  #include <rte_memory.h>
>  #include <rte_common.h>
>  #include <rte_vect.h>
> +#include <rte_rcu_qsbr.h>
> 
>  #ifdef __cplusplus
>  extern "C" {
> @@ -62,6 +64,17 @@ extern "C" {
>  /** Bitmask used to indicate successful lookup */
>  #define RTE_LPM_LOOKUP_SUCCESS          0x01000000
> 
> +/** @internal Default threshold to trigger RCU defer queue reclamation. */
> +#define RTE_LPM_RCU_DQ_RECLAIM_THD	32
> +
> +/** @internal Default RCU defer queue entries to reclaim in one go. */
> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX	16
> +
> +/* Create defer queue for reclaim. */
> +#define RTE_LPM_QSBR_MODE_DQ		0
> +/* Use blocking mode reclaim. No defer queue created. */
> +#define RTE_LPM_QSBR_MODE_SYNC		0x01
> +
>  #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
>  /** @internal Tbl24 entry structure. */  __extension__ @@ -130,6 +143,28
> @@ struct rte_lpm {
>  			__rte_cache_aligned; /**< LPM tbl24 table. */
>  	struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
>  	struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> +
> +	/* RCU config. */
> +	struct rte_rcu_qsbr *v;		/* RCU QSBR variable. */
> +	uint32_t rcu_mode;		/* Blocking, defer queue. */
> +	struct rte_rcu_qsbr_dq *dq;	/* RCU QSBR defer queue. */
> +};
> +
> +/** LPM RCU QSBR configuration structure. */ struct rte_lpm_rcu_config
> +{
> +	struct rte_rcu_qsbr *v;	/* RCU QSBR variable. */
> +	/* Mode of RCU QSBR. RTE_LPM_QSBR_MODE_xxx
> +	 * '0' for default: create defer queue for reclaim.
> +	 */
> +	uint32_t mode;
> +	/* RCU defer queue size. default: lpm->number_tbl8s. */
> +	uint32_t dq_size;
> +	uint32_t reclaim_thd;	/* Threshold to trigger auto reclaim.
> +				 * default:
> RTE_LPM_RCU_DQ_RECLAIM_TRHD.
> +				 */
> +	uint32_t reclaim_max;	/* Max entries to reclaim in one go.
> +				 * default:
> RTE_LPM_RCU_DQ_RECLAIM_MAX.
> +				 */
>  };
> 
>  /**
> @@ -179,6 +214,30 @@ rte_lpm_find_existing(const char *name);  void
> rte_lpm_free(struct rte_lpm *lpm);
> 
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Associate RCU QSBR variable with an LPM object.
> + *
> + * @param lpm
> + *   the lpm object to add RCU QSBR
> + * @param cfg
> + *   RCU QSBR configuration
> + * @param dq
> + *   handler of created RCU QSBR defer queue
> + * @return
> + *   On success - 0
> + *   On error - 1 with error code set in rte_errno.
> + *   Possible rte_errno codes are:
> + *   - EINVAL - invalid pointer
> + *   - EEXIST - already added QSBR
> + *   - ENOMEM - memory allocation failure
> + */
> +__rte_experimental
> +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config
> *cfg,
> +	struct rte_rcu_qsbr_dq **dq);
> +
>  /**
>   * Add a rule to the LPM table.
>   *
> diff --git a/lib/librte_lpm/rte_lpm_version.map
> b/lib/librte_lpm/rte_lpm_version.map
> index 500f58b80..bfccd7eac 100644
> --- a/lib/librte_lpm/rte_lpm_version.map
> +++ b/lib/librte_lpm/rte_lpm_version.map
> @@ -21,3 +21,9 @@ DPDK_20.0 {
> 
>  	local: *;
>  };
> +
> +EXPERIMENTAL {
> +	global:
> +
> +	rte_lpm_rcu_qsbr_add;
> +};
> --
> 2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] Handling missing export functions in MSVC linkage
  @ 2020-06-08  8:33  3%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-06-08  8:33 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: Tal Shnaiderman, Thomas Monjalon, ranjit.menon, pallavi.kadam,
	Harini Ramakrishnan, navasile, bruce.richardson, William Tu,
	Dmitry Malloy (MESHCHANINOV),
	Fady Bader, Tasnim Bashar, dev

On Mon, Jun 8, 2020 at 2:09 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
>
> On Sun, 7 Jun 2020 12:26:56 +0000
> Tal Shnaiderman <talshn@mellanox.com> wrote:
>
> > In clang build the .map file is converted into Module-Definition (.Def) File.
>
> If you create a .def manually, it will override the generation from .map. Of
> course, this adds manual work and ideally all .def files should be generated.

On this topic, I just noticed that a patch of mine that removed
rte_eal_get_configuration() from the stable ABI missed the
declaration in rte_eal_exports.def.
It is probably worth adding a check in devtools/ to avoid further misalignment.


---
David Marchand


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 01/11] eal: replace rte_page_sizes with a set of constants
  @ 2020-06-08  7:41  9%       ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2020-06-08  7:41 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Dmitry Kozlyuk, Jerin Jacob, John McNamara,
	Marko Kovacevic, Anatoly Burakov

Clang on Windows follows the MS ABI, where enum values are limited to
2^31-1. Enum rte_page_sizes has members valued above this limit, which get
wrapped to zero, resulting in a compilation error (duplicate values in
the enum). Using the MS ABI is mandatory for Windows EAL to call Win32 APIs.

Remove rte_page_sizes and replace its values with #define's.
This enumeration is not used in the public API, so there's no ABI breakage.
Announce the API changes for 20.08 in the documentation.
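
For illustration, the failing pattern reduces to something like this (a
made-up example, not code from the tree):

/* Under the MS ABI an enumerator is an int (max 2^31-1), so both
 * values below wrap to 0 and clang reports duplicate enum values.
 */
enum big_page_sizes {
	BIG_PGSIZE_4G  = 1ULL << 32, /* wraps to 0 */
	BIG_PGSIZE_16G = 1ULL << 34  /* wraps to 0 -> duplicate */
};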

Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 doc/guides/rel_notes/release_20_08.rst |  2 ++
 lib/librte_eal/include/rte_memory.h    | 23 ++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe..2041a29b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -85,6 +85,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* ``rte_page_sizes`` enumeration is replaced with ``RTE_PGSIZE_xxx`` defines.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/include/rte_memory.h b/lib/librte_eal/include/rte_memory.h
index 3d8d0bd69..65374d53a 100644
--- a/lib/librte_eal/include/rte_memory.h
+++ b/lib/librte_eal/include/rte_memory.h
@@ -24,19 +24,16 @@ extern "C" {
 #include <rte_config.h>
 #include <rte_fbarray.h>
 
-__extension__
-enum rte_page_sizes {
-	RTE_PGSIZE_4K    = 1ULL << 12,
-	RTE_PGSIZE_64K   = 1ULL << 16,
-	RTE_PGSIZE_256K  = 1ULL << 18,
-	RTE_PGSIZE_2M    = 1ULL << 21,
-	RTE_PGSIZE_16M   = 1ULL << 24,
-	RTE_PGSIZE_256M  = 1ULL << 28,
-	RTE_PGSIZE_512M  = 1ULL << 29,
-	RTE_PGSIZE_1G    = 1ULL << 30,
-	RTE_PGSIZE_4G    = 1ULL << 32,
-	RTE_PGSIZE_16G   = 1ULL << 34,
-};
+#define RTE_PGSIZE_4K   (1ULL << 12)
+#define RTE_PGSIZE_64K  (1ULL << 16)
+#define RTE_PGSIZE_256K (1ULL << 18)
+#define RTE_PGSIZE_2M   (1ULL << 21)
+#define RTE_PGSIZE_16M  (1ULL << 24)
+#define RTE_PGSIZE_256M (1ULL << 28)
+#define RTE_PGSIZE_512M (1ULL << 29)
+#define RTE_PGSIZE_1G   (1ULL << 30)
+#define RTE_PGSIZE_4G   (1ULL << 32)
+#define RTE_PGSIZE_16G  (1ULL << 34)
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
 
-- 
2.25.4


^ permalink raw reply	[relevance 9%]

* [dpdk-dev] [PATCH 9/9] doc: add note about blacklist/whitelist changes
  @ 2020-06-07 17:01  4% ` Stephen Hemminger
    1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-06-07 17:01 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The blacklist/whitelist changes to the API will not be a breaking
change for applications in this release, but it is worth adding a
note to encourage migration.
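
For illustration only, the compatibility layer could look like the sketch
below; the macro names here are assumptions for the example, not taken
from the patch:

/* Hypothetical aliases keeping the old names compiling. */
#define RTE_DEVTYPE_WHITELISTED_PCI RTE_DEVTYPE_ALLOWLISTED_PCI
#define RTE_DEVTYPE_BLACKLISTED_PCI RTE_DEVTYPE_BLOCKLISTED_PCI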

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/rel_notes/release_20_08.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe968..502d67f26ff8 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -85,6 +85,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* eal: The definitions related to including and excluding devices
+  have been changed from blacklist/whitelist to blocklist/allowlist.
+  There are compatibility macros and a command line mapping to accept
+  the old values, but applications and scripts are strongly encouraged
+  to migrate to the new names.
 
 ABI Changes
 -----------
-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC] doc: change to diverse and inclusive language
  2020-06-04 21:02  6% [dpdk-dev] [RFC] doc: change to diverse and inclusive language Stephen Hemminger
  2020-06-05  7:54  0% ` Luca Boccassi
@ 2020-06-05 21:40  4% ` Aaron Conole
  1 sibling, 0 replies; 200+ results
From: Aaron Conole @ 2020-06-05 21:40 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

Stephen Hemminger <stephen@networkplumber.org> writes:

> For diversity reasons, the DPDK should make every effort
> to eliminate master and slave terminology. The actual code change
> is just syntax, but it has bigger impacts.
>
> Let's announce this now and do it in the next API-changing
> release.
> ---

Okay.

Usually, I am resistant to API/ABI changes - but actually in this case,
I think we can do this even *now* without breaking the ABI (IIUC, we can
use aliases to keep the old functions around - even structures shouldn't
need any change).  For the API, we can carry over the existing names and
flag them in the documentation as deprecated.
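
To make the aliasing idea concrete, here is a sketch of what I have in mind
(assuming the new name from the RFC and today's config field name; not a
tested implementation):

#include <rte_lcore.h>

/* New name carries the implementation. */
unsigned int
rte_get_primary_lcore(void)
{
	return rte_eal_get_configuration()->master_lcore;
}

/* Old name kept as a deprecated alias so the ABI stays intact. */
__rte_deprecated unsigned int
rte_get_master_lcore(void)
	__attribute__((alias("rte_get_primary_lcore")));

Applications calling the old name keep working binary-wise and get a
deprecation warning when rebuilt.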

Acked-by: Aaron Conole <aconole@redhat.com>

>  doc/guides/rel_notes/deprecation.rst | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 0bee924255af..6b5cbf8d0b0c 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -138,3 +138,30 @@ Deprecation Notices
>    driver probe scheme. The legacy virtio support will be available through
>    the existing VFIO/UIO based kernel driver scheme.
>    More details at https://patches.dpdk.org/patch/69351/
> +
> +* eal: To be more inclusive in choice of naming, the DPDK project
> +  will follow established diversity guidelines.
> +  The code base will be changed to replace references to sexist
> +  and offensive terms used in function, documentation and variable
> +  names. This change will be progressive across several releases.
> +
> +  The immediate impact to the API/ABI is that references to
> +  master and slave related to DPDK lcore will be changed to
> +  primary and secondary.
> +
> +  For example: ``rte_get_master_lcore()`` will be renamed
> +  to ``rte_get_primary_lcore()``.  For the 20.11, release
> +  both names will be present and the old function will be
> +  marked with the deprecated tag.
> +
> +  The macros related to primary and secondary lcore will also
> +  be changed:  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced
> +  with ``RTE_LCORE_FOREACH_SECONDARY``.
> +
> +  Drivers and source not governed by API/ABI policy will change
> +  as soon as practical.
> +
> +  This change aligns DPDK with the MIT diversity guidelines:
> +  https://www.cs.cmu.edu/~mjw/Language/NonSexist/vuw.non-sexist-language-guidelines.txt
> +  and follows precedent of other open source projects: Django, Gnome,
> +  ISC, Python, Rust


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6 02/11] eal: introduce internal wrappers for file operations
  @ 2020-06-05 11:19  3%               ` Neil Horman
  0 siblings, 0 replies; 200+ results
From: Neil Horman @ 2020-06-05 11:19 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Thomas Monjalon, Anatoly Burakov,
	Bruce Richardson

On Fri, Jun 05, 2020 at 03:16:03AM +0300, Dmitry Kozlyuk wrote:
> On Thu, 4 Jun 2020 17:07:07 -0400
> Neil Horman <nhorman@tuxdriver.com> wrote:
> 
> > On Wed, Jun 03, 2020 at 03:34:03PM +0300, Dmitry Kozlyuk wrote:
> > > On Wed, 3 Jun 2020 08:07:59 -0400
> > > Neil Horman <nhorman@tuxdriver.com> wrote:
> > > 
> > > [snip]  
> > > > > +int
> > > > > +eal_file_create(const char *path)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	ret = open(path, O_CREAT | O_RDWR, 0600);
> > > > > +	if (ret < 0)
> > > > > +		rte_errno = errno;
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +    
> > > > You don't need this call if you support the oflags option in the open call
> > > > below.  
> > > 
> > > See below.
> > >   
> > > > > +int
> > > > > +eal_file_open(const char *path, bool writable)
> > > > > +{
> > > > > +	int ret, flags;
> > > > > +
> > > > > +	flags = writable ? O_RDWR : O_RDONLY;
> > > > > +	ret = open(path, flags);
> > > > > +	if (ret < 0)
> > > > > +		rte_errno = errno;
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +    
> > > > why are you changing this api from the posix file format (with oflags
> > > > specified).  As far as I can see both unix and windows platforms support that  
> > > 
> > > There are a number of caveats, which IMO make this approach better:
> > > 
> > > 1. Filesystem permissions on Windows are complicated. Supporting anything
> > > other than 0600 would add a lot of code, while EAL doesn't really need it.
> > > Microsoft's open() takes not permission bits, but a set of flags.
> > > 
> > > 2. Restricted interface prevents EAL developers from accidentally using
> > > features not supported on all platforms via a seemingly rich API.
> > > 
> > > 3. Microsoft CRT (the one Clang is using) deprecates open() in favor of
> > > _sopen_s() and issues a warning, and we're targeting -Werror. Disabling all
> > > such warnings (_CRT_SECURE_NO_DEPRECATE) doesn't seem right when CRT vendor
> > > encourages using alternatives. This is the primary reason for open()
> > > wrappers in v6.
> > >   
> > 
> > that seems a bit shortsighted to me.  Creating wrappers that restrict
> > functionality to the least common denominator of supported platforms restricts
> > what all platforms are capable of.  For example, there's no reason that the eal
> > library shouldn't be able to open a file O_TRUNC or O_SYNC just because it's
> > complex to do it on a single platform.
> 
> The purpose of these wrappers is to maximize reuse of common code. It doesn't
> require POSIX per se, it's just implemented in terms of API that had been
> available on all supported OSes until Windows target was introduced. Wrapper
> interface is derived from common code requirements.
> 
Sure, and I'm fine with that.  What I'm concerned about is implementing wrappers
that define their APIs in terms of what's currently implemented.  There's no
reason that the feature set in use today won't be built upon in the future,
and the API should be able to handle that.

> > The API should be written to support the full range of functionality on all
> > platforms, and the individual implementations should write the code to make that
> > happen, or return an error that its unsupported on this particular platform.
> 
> IMO, common code, by definition, should avoid partial support of anything.
> 
I disagree.  Any time you abstract an implementation into a more generic API, you
have the possibility that a given implementation won't offer full support for
all of the API's features, and that's ok, as long as there are no users of those
features, and a given implementation properly returns an error when their usage
is attempted.  The expectation then is that the user of the feature will add
the feature to all implementations, so that the code can remain portable.  What
you shouldn't do is define the API such that those features can't be implemented
without having to change the API, as that runs the potential risk of having to
modify the ABI.  That's probably not the case here, but the notion stands.  If
you write the API to encompass the superset of supported platforms' features, the
rest is implementation details.

> > I'm not saying that you have to implement everything now, but you shouldn't
> > restrict the API from being able to do so in the future.  Otherwise, in the
> > future, if someone wants to implement O_TRUNC support (just to cite an example),
> > they're going to have to make a change to the API above, and alter the
> > implementation for all the platforms anyway.  You may as well make the API
> > robust enough to support that now.
> 
> I agree that these particular wrappers can have a lot more options, so
> probably flags would be better. However, I wouldn't add parameters that
> have partial support, namely, permissions.
But Windows does offer file and folder permissions:
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/bb727008(v=technet.10)?redirectedfrom=MSDN

As you said, they are complicated, and the security model is different, but
those are details that can be worked out/emulated when the need arises.

> What do you think of the following
> (names shortened)?
> 
> enum mode {
> 	RO = 0,	/* write-only is not portable */
> 	RW = 1,
> 	CREATE = 2	/* always 0600 equivalent */
> };
> 
> eal_file_open(const char *path, int mode);
> 
Yeah, that makes sense to me. I'd be good with that.
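
For example (a sketch using the shortened names above; in real code they
would presumably carry an EAL_ prefix):

int fd;

fd = eal_file_open(path, RW | CREATE);
if (fd < 0) {
	/* rte_errno was set by the wrapper */
	return -1;
}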

Neil

> -- 
> Dmitry Kozlyuk
> 

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC] doc: change to diverse and inclusive language
  2020-06-05  7:54  0% ` Luca Boccassi
@ 2020-06-05  8:35  0%   ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2020-06-05  8:35 UTC (permalink / raw)
  To: Luca Boccassi; +Cc: Stephen Hemminger, dev

On Fri, Jun 05, 2020 at 08:54:47AM +0100, Luca Boccassi wrote:
> On Thu, 2020-06-04 at 14:02 -0700, Stephen Hemminger wrote:
> > For diversity reasons, the DPDK should make every effort
> > to eliminate master and slave terminology. The actual code change
> > is just syntax, but it has bigger impacts.
> > 
> > Let's announce this now and do it in the next API-changing
> > release.
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 27 +++++++++++++++++++++++++++
> >  1 file changed, 27 insertions(+)
> > 
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 0bee924255af..6b5cbf8d0b0c 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -138,3 +138,30 @@ Deprecation Notices
> >    driver probe scheme. The legacy virtio support will be available through
> >    the existing VFIO/UIO based kernel driver scheme.
> >    More details at https://patches.dpdk.org/patch/69351/
> > +
> > +* eal: To be more inclusive in choice of naming, the DPDK project
> > +  will follow established diversity guidelines.
> > +  The code base will be changed to replace references to sexist
> > +  and offensive terms used in function, documentation and variable
> > +  names. This change will be progressive across several releases.
> > +
> > +  The immediate impact to the API/ABI is that references to
> > +  master and slave related to DPDK lcore will be changed to
> > +  primary and secondary.
> > +
> > +  For example: ``rte_get_master_lcore()`` will be renamed
> > +  to ``rte_get_primary_lcore()``.  For the 20.11, release
> > +  both names will be present and the old function will be
> > +  marked with the deprecated tag.
> > +
> > +  The macros related to primary and secondary lcore will also
> > +  be changed:  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced
> > +  with ``RTE_LCORE_FOREACH_SECONDARY``.
> > +
> > +  Drivers and source not governed by API/ABI policy will change
> > +  as soon as practical.
> > +
> > +  This change aligns DPDK with the MIT diversity guidelines:
> > +  https://www.cs.cmu.edu/~mjw/Language/NonSexist/vuw.non-sexist-language-guidelines.txt
> > +  and follows precedent of other open source projects: Django, Gnome,
> > +  ISC, Python, Rust
> 
> Acked-by: Luca Boccassi <bluca@debian.org>
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] doc: change to diverse and inclusive language
  2020-06-04 21:02  6% [dpdk-dev] [RFC] doc: change to diverse and inclusive language Stephen Hemminger
@ 2020-06-05  7:54  0% ` Luca Boccassi
  2020-06-05  8:35  0%   ` Bruce Richardson
  2020-06-05 21:40  4% ` Aaron Conole
  1 sibling, 1 reply; 200+ results
From: Luca Boccassi @ 2020-06-05  7:54 UTC (permalink / raw)
  To: Stephen Hemminger, dev

On Thu, 2020-06-04 at 14:02 -0700, Stephen Hemminger wrote:
> For diversity reasons, the DPDK should make every effort
> to eliminate master and slave terminology. The actual code change
> is just syntax, but it has bigger impacts.
> 
> Let's announce this now and do it in the next API-changing
> release.
> ---
>  doc/guides/rel_notes/deprecation.rst | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 0bee924255af..6b5cbf8d0b0c 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -138,3 +138,30 @@ Deprecation Notices
>    driver probe scheme. The legacy virtio support will be available through
>    the existing VFIO/UIO based kernel driver scheme.
>    More details at https://patches.dpdk.org/patch/69351/
> +
> +* eal: To be more inclusive in choice of naming, the DPDK project
> +  will follow established diversity guidelines.
> +  The code base will be changed to replace references to sexist
> +  and offensive terms used in function, documentation and variable
> +  names. This change will be progressive across several releases.
> +
> +  The immediate impact to the API/ABI is that references to
> +  master and slave related to DPDK lcore will be changed to
> +  primary and secondary.
> +
> +  For example: ``rte_get_master_lcore()`` will be renamed
> +  to ``rte_get_primary_lcore()``.  For the 20.11, release
> +  both names will be present and the old function will be
> +  marked with the deprecated tag.
> +
> +  The macros related to primary and secondary lcore will also
> +  be changed:  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced
> +  with ``RTE_LCORE_FOREACH_SECONDARY``.
> +
> +  Drivers and source not governed by API/ABI policy will change
> +  as soon as practical.
> +
> +  This change aligns DPDK with the MIT diversity guidelines:
> +  https://www.cs.cmu.edu/~mjw/Language/NonSexist/vuw.non-sexist-language-guidelines.txt
> +  and follows precedent of other open source projects: Django, Gnome,
> +  ISC, Python, Rust

Acked-by: Luca Boccassi <bluca@debian.org>

Thanks for doing this!

-- 
Kind regards,
Luca Boccassi

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [RFC] doc: change to diverse and inclusive language
@ 2020-06-04 21:02  6% Stephen Hemminger
  2020-06-05  7:54  0% ` Luca Boccassi
  2020-06-05 21:40  4% ` Aaron Conole
  0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2020-06-04 21:02 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

For diversity reasons, the DPDK project should make every effort
to eliminate master and slave terminology. The actual code change
is just syntax, but it has bigger impacts.

Let's announce this now and do it in the next API-changing
release.
---
 doc/guides/rel_notes/deprecation.rst | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0bee924255af..6b5cbf8d0b0c 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -138,3 +138,30 @@ Deprecation Notices
   driver probe scheme. The legacy virtio support will be available through
   the existing VFIO/UIO based kernel driver scheme.
   More details at https://patches.dpdk.org/patch/69351/
+
+* eal: To be more inclusive in choice of naming, the DPDK project
+  will follow established diversity guidelines.
+  The code base will be changed to replace references to sexist
+  and offensive terms used in function, documentation and variable
+  names. This change will be progressive across several releases.
+
+  The immediate impact to the API/ABI is that references to
+  master and slave related to DPDK lcore will be changed to
+  primary and secondary.
+
+  For example: ``rte_get_master_lcore()`` will be renamed
+  to ``rte_get_primary_lcore()``.  For the 20.11 release,
+  both names will be present and the old function will be
+  marked with the deprecated tag.
+
+  The macros related to primary and secondary lcore will also
+  be changed:  ``RTE_LCORE_FOREACH_SLAVE`` will be replaced
+  with ``RTE_LCORE_FOREACH_SECONDARY``.
+
+  Drivers and source not governed by API/ABI policy will change
+  as soon as practical.
+
+  This change aligns DPDK with the MIT diversity guidelines:
+  https://www.cs.cmu.edu/~mjw/Language/NonSexist/vuw.non-sexist-language-guidelines.txt
+  and follows precedent of other open source projects: Django, Gnome,
+  ISC, Python, Rust
-- 
2.26.2

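For readers following the API impact, here is a minimal migration sketch in C. It is hedged: it assumes the macro lands under the exact name proposed above (``RTE_LCORE_FOREACH_SECONDARY``) and that both spellings coexist during the 20.11 transition; lcore_main is a hypothetical worker function, not a DPDK symbol.

    /* Hedged sketch of the announced rename; RTE_LCORE_FOREACH_SECONDARY
     * is the proposed name and does not exist yet. */
    static int lcore_main(void *arg); /* placeholder worker */
    unsigned int lcore_id;

    /* 20.08 and earlier (to be deprecated): */
    RTE_LCORE_FOREACH_SLAVE(lcore_id)
        rte_eal_remote_launch(lcore_main, NULL, lcore_id);

    /* After the rename: */
    RTE_LCORE_FOREACH_SECONDARY(lcore_id)
        rte_eal_remote_launch(lcore_main, NULL, lcore_id);
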

^ permalink raw reply	[relevance 6%]

* [dpdk-dev] 19.11.3 patches review and test
@ 2020-06-03 19:43  3% luca.boccassi
  2020-06-10  7:19  0% ` Yu, PingX
                   ` (3 more replies)
  0 siblings, 4 replies; 200+ results
From: luca.boccassi @ 2020-06-03 19:43 UTC (permalink / raw)
  To: stable
  Cc: dev, Abhishek Marathe, Akhil Goyal, Ali Alnubani,
	benjamin.walker, David Christensen, Hemant Agrawal, Ian Stokes,
	Jerin Jacob, John McNamara, Ju-Hyoung Lee, Kevin Traynor,
	Pei Zhang, pingx.yu, qian.q.xu, Raslan Darawsheh,
	Thomas Monjalon, yuan.peng, zhaoyan.chen

Hi all,

Here is a list of patches targeted for stable release 19.11.3.

The planned date for the final release is the 17th of June.

Please help with testing and validation of your use cases and report
any issues/results with reply-all to this mail. For the final release
the fixes and reported validations will be added to the release notes.

A release candidate tarball can be found at:

    https://dpdk.org/browse/dpdk-stable/tag/?id=v19.11.3-rc1

These patches are located at branch 19.11 of dpdk-stable repo:
    https://dpdk.org/browse/dpdk-stable/

Thanks.

Luca Boccassi

---
Adam Dybkowski (5):
      cryptodev: fix missing device id range checking
      common/qat: fix GEN3 marketing name
      app/crypto-perf: fix display of sample test vector
      crypto/qat: support plain SHA1..SHA512 hashes
      cryptodev: fix SHA-1 digest enum comment

Ajit Khaparde (3):
      net/bnxt: fix FW version query
      net/bnxt: fix error log for command timeout
      net/bnxt: fix using RSS config struct

Akhil Goyal (1):
      ipsec: fix build dependency on hash lib

Alex Kiselev (1):
      lpm6: fix size of tbl8 group

Alex Marginean (1):
      net/enetc: fix Rx lock-up

Alexander Kozyrev (8):
      net/mlx5: reduce Tx completion index memory loads
      net/mlx5: add device parameter for MPRQ stride size
      net/mlx5: enable MPRQ multi-stride operations
      net/mlx5: add multi-segment packets in MPRQ mode
      net/mlx5: set dynamic flow metadata in Rx queues
      net/mlx5: improve logging of MPRQ selection
      net/mlx5: fix assert in dynamic metadata handling
      net/mlx5: fix Tx queue release debug log timing

Alvin Zhang (2):
      net/iavf: fix link speed
      net/e1000: fix port hotplug for multi-process

Amit Gupta (1):
      net/octeontx: fix meson build for disabled drivers

Anatoly Burakov (1):
      mem: preallocate VA space in no-huge mode

Andrew Rybchenko (4):
      net/sfc: fix reported promiscuous/multicast mode
      net/sfc/base: use simpler EF10 family conditional check
      net/sfc/base: use simpler EF10 family run-time checks
      net/sfc/base: fix build when EVB is enabled

Andy Pei (1):
      net/ipn3ke: use control thread to check link status

Ankur Dwivedi (1):
      net/octeontx2: fix buffer size assignment

Apeksha Gupta (2):
      bus/fslmc: fix dereferencing null pointer
      test/crypto: fix statistics case

Archana Muniganti (1):
      examples/fips_validation: fix parsing of algorithms

Arek Kusztal (1):
      crypto/qat: fix cipher descriptor for ZUC and SNOW

Asaf Penso (2):
      net/mlx5: fix call to modify action without init item
      net/mlx5: fix assert in doorbell lookup

Ashish Gupta (1):
      net/octeontx2: fix link information for loopback port

Asim Jamshed (1):
      fib: fix headers for C++ support

Bernard Iremonger (1):
      net/i40e: fix flow director initialisation

Bing Zhao (6):
      net/mlx5: fix header modify action validation
      net/mlx5: fix actions validation on root table
      net/mlx5: fix assert in modify converting
      mk: fix static linkage of mlx dependency
      mem: fix overflow on allocation
      net/mlx5: fix doorbell bitmap management offsets

Bruce Richardson (3):
      pci: remove unneeded includes in public header file
      pci: fix build on FreeBSD
      drivers: fix log type variables for -fno-common

Cheng Peng (1):
      net/iavf: fix stats query error code

Chengchang Tang (3):
      net/hns3: fix promiscuous mode for PF
      net/hns3: fix default VLAN filter configuration for PF
      net/hns3: fix VLAN filter when setting promisucous mode

Chengwen Feng (7):
      net/hns3: fix packets offload features flags in Rx
      net/hns3: fix default error code of command interface
      net/hns3: fix crash when flushing RSS flow rules with FLR
      net/hns3: fix return value of setting VLAN offload
      net/hns3: clear residual flow rules on init
      net/hns3: fix Rx interrupt after reset
      net/hns3: replace memory barrier with data dependency order

Ciara Power (1):
      telemetry: fix port stats retrieval

Darek Stojaczyk (1):
      pci: accept 32-bit domain numbers

David Christensen (2):
      pci: fix build on ppc
      eal/ppc: fix build with gcc 9.3

David Marchand (5):
      mem: mark pages as not accessed when reserving VA
      test: load drivers when required
      eal: fix typo in endian conversion macros
      remove references to private PCI probe function
      doc: prefer https when pointing to dpdk.org

Dekel Peled (7):
      net/mlx5: fix mask used for IPv6 item validation
      net/mlx5: fix CVLAN tag set in IP item translation
      net/mlx5: update VLAN and encap actions validation
      net/mlx5: fix match on empty VLAN item in DV mode
      common/mlx5: fix umem buffer alignment
      net/mlx5: fix VLAN flow action with wildcard VLAN item
      net/mlx5: fix RSS key copy to TIR context

Dmitry Kozlyuk (2):
      build: fix linker warnings with clang on Windows
      build: support MinGW-w64 with Meson

Eduard Serra (1):
      net/vmxnet3: fix RSS setting on v4

Eugeny Parshutin (1):
      ethdev: fix build when vtune profiling is on

Fady Bader (1):
      mempool: remove inline functions from export list

Fan Zhang (1):
      vhost/crypto: add missing user protocol flag

Ferruh Yigit (7):
      net/nfp: fix log format specifiers
      net/null: fix secondary burst function selection
      net/null: remove redundant check
      mempool/octeontx2: fix build for gcc O1 optimization
      net/ena: fix build for O1 optimization
      event/octeontx2: fix build for O1 optimization
      examples/kni: fix crash during MTU set

Gaetan Rivet (5):
      doc: fix number of failsafe sub-devices
      net/ring: fix device pointer on allocation
      pci: reject negative values in PCI id
      doc: fix typos in ABI policy
      kvargs: fix strcmp helper documentation

Gavin Hu (2):
      net/i40e: relax barrier in Tx
      net/i40e: relax barrier in Tx for NEON

Guinan Sun (2):
      net/ixgbe: fix statistics in flow control mode
      net/ixgbe: check driver type in MACsec API

Haifeng Lin (1):
      eal/arm64: fix precise TSC

Haiyue Wang (1):
      net/ice/base: check memory pointer before copying

Hao Chen (1):
      net/hns3: support Rx interrupt

Harry van Haaren (3):
      service: fix crash on exit
      examples/eventdev: fix crash on exit
      test/flow_classify: enable multi-sockets system

Hemant Agrawal (3):
      drivers: add crypto as dependency for event drivers
      bus/fslmc: fix size of qman fq descriptor
      mempool/dpaa2: install missing header with meson

Honnappa Nagarahalli (3):
      timer: protect initialization with lock
      service: fix race condition for MT unsafe service
      service: fix identification of service running on other lcore

Hyong Youb Kim (1):
      net/enic: fix flow action reordering

Igor Chauskin (2):
      net/ena/base: make allocation macros thread-safe
      net/ena/base: prevent allocation of zero sized memory

Igor Romanov (9):
      net/sfc: fix initialization error path
      net/sfc: fix Rx queue start failure path
      net/sfc: fix promiscuous and allmulticast toggles errors
      net/sfc: set priority of created filters to manual
      net/sfc/base: reduce filter priorities to implemented only
      net/sfc/base: reject automatic filter creation by users
      net/sfc/base: refactor filter lookup loop in EF10
      net/sfc/base: handle manual and auto filter clashes in EF10
      net/sfc/base: fix manual filter delete in EF10

Itsuro Oda (2):
      net/vhost: fix potential memory leak on close
      vhost: make IOTLB cache name unique among processes

Ivan Dyukov (3):
      net/virtio-user: fix devargs parsing
      app: remove extra new line after link duplex
      examples: remove extra new line after link duplex

Jasvinder Singh (3):
      net/softnic: fix memory leak for thread
      net/softnic: fix resource leak for pipeline
      examples/ip_pipeline: remove check of null response

Jeff Guo (3):
      net/i40e: fix setting L2TAG
      net/iavf: fix setting L2TAG
      net/ice: fix setting L2TAG

Jiawei Wang (1):
      net/mlx5: fix imissed counter overflow

Jim Harris (1):
      contigmem: cleanup properly when load fails

Jun Yang (1):
      net/dpaa2: fix congestion ID for multiple traffic classes

Junyu Jiang (4):
      examples/vmdq: fix output of pools/queues
      examples/vmdq: fix RSS configuration
      net/ice: fix RSS advanced rule
      net/ice: fix crash in switch filter

Juraj Linkeš (1):
      ci: fix telemetry dependency in Travis

Július Milan (1):
      net/memif: fix init when already connected

Kalesh AP (9):
      net/bnxt: fix HWRM command during FW reset
      net/bnxt: use true/false for bool types
      net/bnxt: fix port start failure handling
      net/bnxt: fix VLAN add when port is stopped
      net/bnxt: fix VNIC Rx queue count on VNIC free
      net/bnxt: fix number of TQM ring
      net/bnxt: fix TQM ring context memory size
      app/testpmd: fix memory failure handling for i40e DDP
      net/bnxt: fix storing MAC address twice

Kevin Traynor (9):
      net/hinic: fix snprintf length of cable info
      net/hinic: fix repeating cable log and length check
      net/avp: fix gcc 10 maybe-uninitialized warning
      examples/ipsec-gw: fix gcc 10 maybe-uninitialized warning
      eal/x86: ignore gcc 10 stringop-overflow warnings
      net/mlx5: fix gcc 10 enum-conversion warning
      crypto/kasumi: fix extern declaration
      drivers/crypto: disable gcc 10 no-common errors
      build: disable gcc 10 zero-length-bounds warning

Konstantin Ananyev (1):
      security: fix crash at accessing non-implemented ops

Lijun Ou (4):
      net/hns3: fix configuring RSS hash when rules are flushed
      net/hns3: add RSS hash offload to capabilities
      net/hns3: fix RSS key length
      net/hns3: fix RSS indirection table configuration

Linsi Yuan (1):
      net/bnxt: fix possible stack smashing

Louise Kilheeney (1):
      examples/l2fwd-keepalive: fix mbuf pool size

Luca Boccassi (4):
      fix various typos found by Lintian
      usertools: check for pci.ids in /usr/share/misc
      Revert "net/bnxt: fix TQM ring context memory size"
      Revert "net/bnxt: fix number of TQM ring"

Lukasz Bartosik (1):
      event/octeontx2: fix queue removal from Rx adapter

Lukasz Wojciechowski (5):
      drivers/crypto: fix log type variables for -fno-common
      security: fix verification of parameters
      security: fix return types in documentation
      security: fix session counter
      test: remove redundant macro

Marvin Liu (5):
      vhost: fix packed ring zero-copy
      vhost: fix shadow update
      vhost: fix shadowed descriptors not flushed
      net/virtio: fix crash when device reconnecting
      net/virtio: fix unexpected event after reconnect

Matteo Croce (1):
      doc: fix LTO config option

Mattias Rönnblom (3):
      event/dsw: remove redundant control ring poll
      event/dsw: remove unnecessary read barrier
      event/dsw: avoid reusing previously recorded events

Michael Baum (2):
      net/mlx5: fix meter color register consideration
      net/mlx4: fix drop queue error handling

Michael Haeuptle (1):
      vfio: fix race condition with sysfs

Michal Krawczyk (5):
      net/ena/base: fix documentation of functions
      net/ena/base: fix indentation in CQ polling
      net/ena/base: fix indentation of multiple defines
      net/ena: set IO ring size to valid value
      net/ena/base: fix testing for supported hash function

Min Hu (Connor) (3):
      net/hns3: fix configuring illegal VLAN PVID
      net/hns3: fix mailbox opcode data type
      net/hns3: fix VLAN PVID when configuring device

Mit Matelske (1):
      eal/freebsd: fix queuing duplicate alarm callbacks

Mohsin Shaikh (1):
      net/mlx5: use open/read/close for ib stats query

Muhammad Bilal (2):
      fix same typo in multiple places
      doc: fix typo in contributors guide

Nagadheeraj Rottela (2):
      crypto/nitrox: fix CSR register address generation
      crypto/nitrox: fix oversized device name

Nicolas Chautru (2):
      baseband/turbo_sw: fix exposed LLR decimals assumption
      bbdev: fix doxygen comments

Nithin Dabilpuram (2):
      devtools: fix symbol map change check
      net/octeontx2: disable unnecessary error interrupts

Olivier Matz (3):
      test/kvargs: fix to consider empty elements as valid
      test/kvargs: fix invalid cases check
      kvargs: fix invalid token parsing on FreeBSD

Ophir Munk (1):
      net/mlx5: fix VLAN PCP item calculation

Ori Kam (1):
      eal/ppc: fix bool type after altivec include

Pablo de Lara (4):
      cryptodev: add asymmetric session-less feature name
      test/crypto: fix flag check
      crypto/openssl: fix out-of-place encryption
      doc: add NASM installation steps

Pavan Nikhilesh (4):
      net/octeontx2: fix device configuration sequence
      eventdev: fix probe and remove for secondary process
      common/octeontx: fix gcc 9.1 ABI break
      app/eventdev: check Tx adapter service ID

Phil Yang (2):
      service: remove rte prefix from static functions
      net/ixgbe: fix link state timing on fiber ports

Qi Zhang (10):
      net/ice: remove unnecessary variable
      net/ice: remove bulk alloc option
      net/ice/base: fix uninitialized stack variables
      net/ice/base: read PSM clock frequency from register
      net/ice/base: minor fixes
      net/ice/base: fix MAC write command
      net/ice/base: fix binary order for GTPU filter
      net/ice/base: remove unused code in switch rule
      net/ice: fix variable initialization
      net/ice: fix RSS for GTPU

Qiming Yang (3):
      net/i40e: fix X722 performance
      doc: fix multicast filter feature announcement
      net/i40e: fix queue related exception handling

Rahul Gupta (2):
      net/bnxt: fix memory leak during queue restart
      net/bnxt: fix Rx ring producer index

Rasesh Mody (3):
      net/qede: fix link state configuration
      net/qede: fix port reconfiguration
      examples/kni: fix MTU change to setup Tx queue

Raslan Darawsheh (4):
      net/mlx5: fix validation of VXLAN/VXLAN-GPE specs
      app/testpmd: add parsing for QinQ VLAN headers
      net/mlx5: fix matching for UDP tunnels with Verbs
      doc: fix build issue in ABI guide

Ray Kinsella (1):
      doc: fix default symbol binding in ABI guide

Rohit Raj (1):
      net/dpaa2: fix 10G port negotiation

Roland Qi (1):
      vhost: fix peer close check

Ruifeng Wang (2):
      test: skip some subtests in no-huge mode
      test/ipsec: fix crash in session destroy

Sarosh Arif (1):
      doc: fix typo in contributors guide

Shougang Wang (2):
      net/ixgbe: fix link status after port reset
      net/i40e: fix queue region in RSS flow

Simei Su (1):
      net/ice: support mark only action for flow director

Sivaprasad Tummala (1):
      vhost: handle mbuf allocation failure

Somnath Kotur (2):
      bus/pci: fix devargs on probing again
      net/bnxt: fix max ring count

Stephen Hemminger (24):
      ethdev: fix spelling
      net/mvneta: do not use PMD log type
      net/virtio: do not use PMD log type
      net/tap: do not use PMD log type
      net/pfe: do not use PMD log type
      net/bnxt: do not use PMD log type
      net/dpaa: use dynamic log type
      net/thunderx: use dynamic log type
      net/netvsc: propagate descriptor limits from VF
      net/netvsc: handle Rx packets during multi-channel setup
      net/netvsc: split send buffers from Tx descriptors
      net/netvsc: fix memory free on device close
      net/netvsc: remove process event optimization
      net/netvsc: handle Tx completions based on burst size
      net/netvsc: avoid possible live lock
      lpm6: fix comments spelling
      eal: fix comments spelling
      net/netvsc: fix comment spelling
      bus/vmbus: fix comment spelling
      net/netvsc: do RSS across Rx queue only
      net/netvsc: do not configure RSS if disabled
      net/tap: fix crash in flow destroy
      eal: fix C++17 compilation
      net/vmxnet3: handle bad host framing

Suanming Mou (3):
      net/mlx5: fix counter container usage
      net/mlx5: fix meter suffix table leak
      net/mlx5: fix jump table leak

Sunil Kumar Kori (1):
      eal: fix log message print for regex

Tao Zhu (3):
      net/ice: fix hash flow crash
      net/ixgbe: fix link status inconsistencies
      net/ixgbe: fix resource leak after thread exits normally

Thomas Monjalon (13):
      drivers/crypto: fix build with make 4.3
      doc: fix sphinx compatibility
      log: fix level picked with globbing on type register
      doc: fix matrix CSS for recent sphinx
      common/mlx5: fix build with -fno-common
      net/mlx4: fix build with -fno-common
      common/mlx5: fix build with rdma-core 21
      app: fix usage help of options separated by dashes
      net/mvpp2: fix build with gcc 10
      examples/vm_power: fix build with -fno-common
      examples/vm_power: drop Unix path limit redefinition
      doc: fix build with doxygen 1.8.18
      doc: fix API index

Timothy Redaelli (6):
      crypto/octeontx2: fix build with gcc 10
      test: fix build with gcc 10
      app/pipeline: fix build with gcc 10
      examples/vhost_blk: fix build with gcc 10
      examples/eventdev: fix build with gcc 10
      examples/qos_sched: fix build with gcc 10

Ting Xu (1):
      app/testpmd: fix DCB set

Tonghao Zhang (2):
      eal: fix PRNG init with HPET enabled
      net/mlx5: fix crash when releasing meter table

Vadim Podovinnikov (1):
      net/memif: fix resource leak

Vamsi Attunuru (1):
      net/octeontx2: enable error and RAS interrupt in configure

Viacheslav Ovsiienko (2):
      net/mlx5: fix metadata for compressed Rx CQEs
      common/mlx5: fix netlink buffer allocation from stack

Vijaya Mohan Guvva (1):
      bus/pci: fix UIO resource access from secondary process

Vladimir Medvedkin (1):
      ipsec: check SAD lookup error

Wei Hu (Xavier) (10):
      vfio: fix use after free with multiprocess
      net/hns3: fix status after repeated resets
      net/hns3: fix return value when clearing statistics
      app/testpmd: fix statistics after reset
      net/hns3: support different numbers of Rx and Tx queues
      net/hns3: fix Tx interrupt when enabling Rx interrupt
      net/hns3: fix MSI-X interrupt during initialization
      net/hns3: remove unnecessary assignments in Tx
      net/hns3: remove one IO barrier in Rx
      net/hns3: add free threshold in Rx

Wei Zhao (8):
      net/ice: change default tunnel type
      net/ice: add action number check for switch
      net/ice: fix input set of VLAN item
      net/i40e: fix flow director for ARP packets
      doc: add i40e limitation for flow director
      net/i40e: fix flush of flow director filter
      net/i40e: fix wild pointer
      net/i40e: fix flow director enabling

Wisam Jaddo (3):
      net/mlx5: fix zero metadata action
      net/mlx5: fix zero value validation for metadata
      net/mlx5: fix VLAN ID check

Xiao Zhang (1):
      app/testpmd: fix PPPoE flow command

Xiaolong Ye (3):
      net/virtio: fix outdated comment
      vhost: remove unused variable
      doc: fix log level example in Linux guide

Xiaoyu Min (3):
      net/mlx5: fix push VLAN action to use item info
      net/mlx5: fix validation of push VLAN without full mask
      net/mlx5: fix RSS enablement

Xiaoyun Li (4):
      net/ixgbe/base: update copyright
      net/i40e/base: update copyright
      common/iavf: update copyright
      net/ice/base: update copyright

Xiaoyun Wang (7):
      net/hinic: allocate IO memory with socket id
      net/hinic: fix LRO
      net/hinic/base: fix port start during FW hot update
      net/hinic/base: fix PF firmware hot-active problem
      net/hinic: fix queues resource free
      net/hinic: fix Tx mbuf length while copying
      net/hinic: fix TSO

Xuan Ding (2):
      vhost: prevent zero-copy with incompatible client mode
      vhost: fix zero-copy server mode

Yisen Zhuang (1):
      net/hns3: reduce judgements of free Tx ring space

Yunjian Wang (16):
      kvargs: fix buffer overflow when parsing list
      net/tap: remove unused assert
      net/nfp: fix dangling pointer on probe failure
      net/pfe: fix double free of MAC address
      net/tap: fix mbuf double free when writev fails
      net/tap: fix mbuf and mem leak during queue release
      net/tap: fix check for mbuf number of segment
      net/tap: fix file close on remove
      net/tap: fix fd leak on creation failure
      net/tap: fix unexpected link handler
      net/tap: fix queues fd check before close
      net/octeontx: fix dangling pointer on init failure
      crypto/ccp: fix fd leak on probe failure
      net/failsafe: fix fd leak
      crypto/caam_jr: fix check of file descriptors
      crypto/caam_jr: fix IRQ functions return type

Yuri Chipchev (1):
      event/dsw: fix enqueue burst return value

Zhihong Peng (1):
      net/ixgbe: fix link status synchronization on BSD

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-03  8:16  0%       ` Ori Kam
@ 2020-06-03 12:10  0%         ` Dekel Peled
  2020-06-18  6:58  0%           ` Dekel Peled
  0 siblings, 1 reply; 200+ results
From: Dekel Peled @ 2020-06-03 12:10 UTC (permalink / raw)
  To: Ori Kam, Adrien Mazarguil
  Cc: Andrew Rybchenko, ferruh.yigit, john.mcnamara, marko.kovacevic,
	Asaf Penso, Matan Azrad, Eli Britstein, dev, Ivan Malov

Hi, PSB.

> -----Original Message-----
> From: Ori Kam <orika@mellanox.com>
> Sent: Wednesday, June 3, 2020 11:16 AM
> To: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Dekel Peled
> <dekelp@mellanox.com>; ferruh.yigit@intel.com;
> john.mcnamara@intel.com; marko.kovacevic@intel.com; Asaf Penso
> <asafp@mellanox.com>; Matan Azrad <matan@mellanox.com>; Eli Britstein
> <elibr@mellanox.com>; dev@dpdk.org; Ivan Malov
> <Ivan.Malov@oktetlabs.ru>
> Subject: RE: [RFC] ethdev: add fragment attribute to IPv6 item
> 
> Hi Adrien,
> 
> Great to hear from you again.
> 
> > -----Original Message-----
> > From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> > Sent: Tuesday, June 2, 2020 10:04 PM
> > To: Ori Kam <orika@mellanox.com>
> > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Dekel Peled
> > <dekelp@mellanox.com>; ferruh.yigit@intel.com;
> > john.mcnamara@intel.com; marko.kovacevic@intel.com; Asaf Penso
> > <asafp@mellanox.com>; Matan Azrad <matan@mellanox.com>; Eli
> Britstein
> > <elibr@mellanox.com>; dev@dpdk.org; Ivan Malov
> > <Ivan.Malov@oktetlabs.ru>
> > Subject: Re: [RFC] ethdev: add fragment attribute to IPv6 item
> >
> > Hi Ori, Andrew, Delek,

It's Dekel, not Delek ;-)

> >
> > (been a while eh?)
> >
> > On Tue, Jun 02, 2020 at 06:28:41PM +0000, Ori Kam wrote:
> > > Hi Andrew,
> > >
> > > PSB,
> > [...]
> > > > > diff --git a/lib/librte_ethdev/rte_flow.h
> > > > > b/lib/librte_ethdev/rte_flow.h index b0e4199..3bc8ce1 100644
> > > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> > > > >   */
> > > > >  struct rte_flow_item_ipv6 {
> > > > >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > > > > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-
> fragmented. */
> > > > > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> > > >
> > > > The solution is simple, but hardly generic and adds an example for
> > > > the future extensions. I doubt that it is the right way to go.
> > > >
> > > I agree with you that this is not the most generic way possible, but
> > > the IPV6 extensions are very unique. So the solution is also unique.
> > > In general, I'm always in favor of finding the most generic way, but
> > > sometimes
> > > it is better to keep things simple, and see how it goes.
> >
> > Same feeling here, it doesn't look right.
> >
> > > > Maybe we should add a 256-bit string with one bit for each IP
> > > > protocol number and apply it to extension headers only?
> > > > If bit A is set in the mask:
> > > >  - if bit A is set in spec as well, extension header with
> > > >    IP protocol (1 << A) number must present
> > > >  - if bit A is clear in spec, extension header with
> > > >    IP protocol (1 << A) number must absent. If bit is clear in the
> > > > mask, corresponding extension header may present and may absent
> > > > (i.e. don't care).
> > > >
> > > There are only 12 possible extension headers and currently none of
> > > them are supported in rte_flow. So adding logic to parse the 256 bits
> > > just to get a max of 12
> > > possible values is overkill. Also, if we disregard the case of
> > > the extension, the application must select only one next proto. For
> > > example, the application can't select udp + tcp. There is the option
> > > to add a flag for each of the possible extensions, does it make more
> > > sense to you?
> >
> > Each of these extension headers has its own structure, we first need
> > the ability to match them properly by adding the necessary pattern items.
> >
> > > > The RFC indirectly touches IPv6 proto (next header) matching
> > > > logic.
> > > >
> > > > If logic used in ETH+VLAN is applied on IPv6 as well, it would
> > > > make pattern specification and handling complicated. E.g.:
> > > >   eth / ipv6 / udp / end
> > > > should match UDP over IPv6 without any extension headers only.
> > > >
> > > The issue with VLAN I agree is different since by definition VLAN is
> > > layer 2.5. We can add the same logic also to the VLAN case, maybe it
> > > will be easier.
> > > In any case, in your example above and according to the RFC we will
> > > get all ipv6 udp traffic with and without extensions.
> > >
> > > > And how to specify UDP over IPv6 regardless of extension headers?
> > >
> > > Please see above the rule will be eth / ipv6 /udp.
> > >
> > > >   eth / ipv6 / ipv6_ext / udp / end with a convention that
> > > > ipv6_ext is optional if spec and mask are NULL (or mask is empty).
> > > >
> > > I would guess that this flow should match all ipv6 packets that have
> > > one ext and the next proto is udp.
> >
> > In my opinion RTE_FLOW_ITEM_TYPE_IPV6_EXT is a bit useless on its own.
> > It's only for matching packets that contain some kind of extension
> > header, not a specific one, more about that below.
> >
> > > > I'm wondering if any driver treats it this way?
> > > >
> > > I'm not sure, we can support only the frag ext by default, but if
> > > required we can support other ext.
> > >
> > > > I agree that the problem really comes when we'd like to match
> > > > IPv6 frags or even worse not fragments.
> > > >
> > > > Two patterns for fragments:
> > > >   eth / ipv6 (proto=FRAGMENT) / end
> > > >   eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end
> > > >
> > > > Any sensible solution for not-fragments with any other extension
> > > > headers?
> > > >
> > > The one proposed in this mail 😊
> > >
> > > > INVERT exists, but hardly useful, since it simply says that
> > > > packets which do not match the pattern without INVERT match the
> > > > pattern with INVERT and
> > > >   invert / eth / ipv6 (proto=FRAGMENT) / end will match ARP, IPv4,
> > > > IPv6 with an extension header before fragment header and so on.
> > > >
> > > I agree with you, INVERT in this doesn’t help.
> > > We were considering adding some kind of not mask / item per item,
> > > something along these lines:
> > > the user requests unfragmented ipv6 udp packets. The flow would look
> > > something like this:
> > > Eth / ipv6 / Not (Ipv6.proto = frag_proto) / udp. But it makes the
> > > rules much harder to use, and I don't think that there is any HW
> > > that supports not, and adding such a feature to all items is overkill.
> > >
> > >
> > > > Bit string suggested above will allow to match:
> > > >  - UDP over IPv6 with any extension headers:
> > > >     eth / ipv6 (ext_hdrs mask empty) / udp / end
> > > >  - UDP over IPv6 without any extension headers:
> > > >     eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
> > > >  - UDP over IPv6 without fragment header:
> > > >     eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp /
> > > > end
> > > >  - UDP over IPv6 with fragment header
> > > >     eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp /
> > > > end
> > > >
> > > > where FRAGMENT is 1 << IPPROTO_FRAGMENT.
> > > >
> > > Please see my response regarding this above.
> > >
> > > > Above I intentionally keep 'proto' unspecified in ipv6 since
> > > > otherwise it would specify the next header after IPv6 header.
> > > >
> > > > Extension headers mask should be empty by default.
> >
> > This is a deliberate design choice/issue with rte_flow: an empty
> > pattern matches everything; adding items only narrows the selection.
> > As Andrew said there is currently no way to provide a specific item to
> > reject, it can only be done globally on a pattern through INVERT that no
> PMD implements so far.
> >
> > So we have two requirements here: the ability to specifically match
> > IPv6 fragment headers and the ability to reject them.
> >
> > To match IPv6 fragment headers, we need a dedicated pattern item. The
> > generic RTE_FLOW_ITEM_TYPE_IPV6_EXT is useless for that on its own, it
> > must be completed with RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG and
> associated
> > object
> 
> Yes, we must add EXT_FRAG to be able to match on the FRAG bits.
> 

Please see the previous RFC I sent.
[RFC] ethdev: add IPv6 fragment extension header item
http://mails.dpdk.org/archives/dev/2020-March/160255.html
It is complemented by this RFC.

> > to match individual fields if needed (like all the others
> > protocols/headers).
> >
> > Then to reject a pattern item... My preference goes to a new "NOT"
> > meta item affecting the meaning of the item coming immediately after
> > in the pattern list. That would be ultra generic, wouldn't break any
> > ABI/API and like INVERT, wouldn't even require a new object associated
> with it.
> >
> > To match UDPv6 traffic when there is no fragment header, one could
> > then do something like:
> >
> >  eth / ipv6 / not / ipv6_ext_frag / udp
> >
> > PMD support would be trivial to implement (I'm sure!)
> >
> I agree with you as I said above. The issue is not the PMD; the issues are:
> 1. Think about the rule you stated above: from a logic point of view there
> is some contradiction, since you are saying the ipv6 next proto is udp but
> you also say not frag; this logic applies only to the IPv6 ext case.
> 2. A HW issue: I don't know of any HW that supports not on an item,
> so adding something for all items for only one case is overkill.
> 
> 
> 
> > We may later implement other kinds of "operator" items as Andrew
> > suggested, for bit-wise stuff and so on. Let's keep adding features on
> > a needed basis though.
> >
> > --
> > Adrien Mazarguil
> > 6WIND
> 
> Best,
> Ori

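To make the referenced proposal concrete, here is a hedged sketch of the pattern an application could build once such a field exists; is_frag is only proposed in this RFC and is not part of the current struct rte_flow_item_ipv6.

    /* Hedged sketch: match non-fragmented IPv6/UDP with the proposed bit. */
    struct rte_flow_item_ipv6 ipv6_spec = { .is_frag = 0 }; /* non-fragmented */
    struct rte_flow_item_ipv6 ipv6_mask = { .is_frag = 1 }; /* match that bit */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6,
          .spec = &ipv6_spec, .mask = &ipv6_mask },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

Setting is_frag to 1 in both spec and mask would select fragmented packets instead.
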
^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-02 19:04  3%     ` Adrien Mazarguil
@ 2020-06-03  8:16  0%       ` Ori Kam
  2020-06-03 12:10  0%         ` Dekel Peled
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2020-06-03  8:16 UTC (permalink / raw)
  To: Adrien Mazarguil
  Cc: Andrew Rybchenko, Dekel Peled, ferruh.yigit, john.mcnamara,
	marko.kovacevic, Asaf Penso, Matan Azrad, Eli Britstein, dev,
	Ivan Malov

Hi Adrien,

Great to hear from you again.

> -----Original Message-----
> From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Sent: Tuesday, June 2, 2020 10:04 PM
> To: Ori Kam <orika@mellanox.com>
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Dekel Peled
> <dekelp@mellanox.com>; ferruh.yigit@intel.com; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; Asaf Penso <asafp@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; Eli Britstein <elibr@mellanox.com>; dev@dpdk.org;
> Ivan Malov <Ivan.Malov@oktetlabs.ru>
> Subject: Re: [RFC] ethdev: add fragment attribute to IPv6 item
> 
> Hi Ori, Andrew, Delek,
> 
> (been a while eh?)
> 
> On Tue, Jun 02, 2020 at 06:28:41PM +0000, Ori Kam wrote:
> > Hi Andrew,
> >
> > PSB,
> [...]
> > > > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > > > index b0e4199..3bc8ce1 100644
> > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> > > >   */
> > > >  struct rte_flow_item_ipv6 {
> > > >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > > > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
> > > > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> > >
> > > The solution is simple, but hardly generic and adds an
> > > example for the future extensions. I doubt that it is the
> > > right way to go.
> > >
> > I agree with you that this is not the most generic way possible,
> > but the IPV6 extensions are very unique. So the solution is also unique.
> > In general, I'm always in favor of finding the most generic way, but
> > sometimes
> > it is better to keep things simple, and see how it goes.
> 
> Same feeling here, it doesn't look right.
> 
> > > Maybe we should add a 256-bit string with one bit for each
> > > IP protocol number and apply it to extension headers only?
> > > If bit A is set in the mask:
> > >  - if bit A is set in spec as well, extension header with
> > >    IP protocol (1 << A) number must present
> > >  - if bit A is clear in spec, extension header with
> > >    IP protocol (1 << A) number must absent
> > > If bit is clear in the mask, corresponding extension header
> > > may present and may absent (i.e. don't care).
> > >
> > There are only 12 possible extension headers and currently none of them
> > are supported in rte_flow. So adding logic to parse the 256 bits just to get a max
> > of 12
> > possible values is overkill. Also, if we disregard the case of the extension,
> > the application must select only one next proto. For example, the application
> > can't select udp + tcp. There is the option to add a flag for each of the
> > possible extensions, does it make more sense to you?
> 
> Each of these extension headers has its own structure, we first need the
> ability to match them properly by adding the necessary pattern items.
> 
> > > The RFC indirectly touches IPv6 proto (next header) matching
> > > logic.
> > >
> > > If logic used in ETH+VLAN is applied on IPv6 as well, it would
> > > make pattern specification and handling complicated. E.g.:
> > >   eth / ipv6 / udp / end
> > > should match UDP over IPv6 without any extension headers only.
> > >
> > The issue with VLAN I agree is different since by definition VLAN is
> > layer 2.5. We can add the same logic also to the VLAN case, maybe it will
> > be easier.
> > In any case, in your example above and according to the RFC we will
> > get all ipv6 udp traffic with and without extensions.
> >
> > > And how to specify UDP over IPv6 regardless of extension headers?
> >
> > Please see above the rule will be eth / ipv6 /udp.
> >
> > >   eth / ipv6 / ipv6_ext / udp / end
> > > with a convention that ipv6_ext is optional if spec and mask
> > > are NULL (or mask is empty).
> > >
> > I would guess that this flow should match all ipv6 packets that have one ext
> > and the next proto is udp.
> 
> In my opinion RTE_FLOW_ITEM_TYPE_IPV6_EXT is a bit useless on its own. It's
> only for matching packets that contain some kind of extension header, not a
> specific one, more about that below.
> 
> > > I'm wondering if any driver treats it this way?
> > >
> > I'm not sure, we can support only the frag ext by default, but if required we
> > can support other ext.
> >
> > I agree that the problem really comes when we'd like to match
> > > IPv6 frags or even worse not fragments.
> > >
> > > Two patterns for fragments:
> > >   eth / ipv6 (proto=FRAGMENT) / end
> > >   eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end
> > >
> > > Any sensible solution for not-fragments with any other
> > > extension headers?
> > >
> > The one proposed in this mail 😊
> >
> > > INVERT exists, but hardly useful, since it simply says
> > > that packets which do not match the pattern without INVERT
> > > match the pattern with INVERT and
> > >   invert / eth / ipv6 (proto=FRAGMENT) / end
> > > will match ARP, IPv4, IPv6 with an extension header before
> > > fragment header and so on.
> > >
> > I agree with you, INVERT in this doesn’t help.
> > We were considering adding some kind of not mask / item per item,
> > something along these lines:
> > the user requests unfragmented ipv6 udp packets. The flow would look something
> > like this:
> > Eth / ipv6 / Not (Ipv6.proto = frag_proto) / udp
> > But it makes the rules much harder to use, and I don't think that there
> > is any HW that supports not, and adding such a feature to all items is overkill.
> >
> >
> > > Bit string suggested above will allow to match:
> > >  - UDP over IPv6 with any extension headers:
> > >     eth / ipv6 (ext_hdrs mask empty) / udp / end
> > >  - UDP over IPv6 without any extension headers:
> > >     eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
> > >  - UDP over IPv6 without fragment header:
> > >     eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp / end
> > >  - UDP over IPv6 with fragment header
> > >     eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp / end
> > >
> > > where FRAGMENT is 1 << IPPROTO_FRAGMENT.
> > >
> > Please see my response regarding this above.
> >
> > > Above I intentionally keep 'proto' unspecified in ipv6
> > > since otherwise it would specify the next header after IPv6
> > > header.
> > >
> > > Extension headers mask should be empty by default.
> 
> This is a deliberate design choice/issue with rte_flow: an empty pattern
> matches everything; adding items only narrows the selection. As Andrew said
> there is currently no way to provide a specific item to reject, it can only
> be done globally on a pattern through INVERT that no PMD implements so far.
> 
> So we have two requirements here: the ability to specifically match IPv6
> fragment headers and the ability to reject them.
> 
> To match IPv6 fragment headers, we need a dedicated pattern item. The
> generic RTE_FLOW_ITEM_TYPE_IPV6_EXT is useless for that on its own, it must
> be completed with RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG and associated
> object

Yes, we must add EXT_FRAG to be able to match on the FRAG bits.

> to match individual fields if needed (like all the others
> protocols/headers).
> 
> Then to reject a pattern item... My preference goes to a new "NOT" meta item
> affecting the meaning of the item coming immediately after in the pattern
> list. That would be ultra generic, wouldn't break any ABI/API and like
> INVERT, wouldn't even require a new object associated with it.
> 
> To match UDPv6 traffic when there is no fragment header, one could then do
> something like:
> 
>  eth / ipv6 / not / ipv6_ext_frag / udp
> 
> PMD support would be trivial to implement (I'm sure!)
> 
I agree with you as I said above. The issue is not the PMD; the issues are:
1. Think about the rule you stated above: from a logic point of view there is some
contradiction, since you are saying the ipv6 next proto is udp but you also say not frag;
this logic applies only to the IPv6 ext case.
2. A HW issue: I don't know of any HW that supports not on an item,
so adding something for all items for only one case is overkill.
 


> We may later implement other kinds of "operator" items as Andrew suggested,
> for bit-wise stuff and so on. Let's keep adding features on a needed basis
> though.
> 
> --
> Adrien Mazarguil
> 6WIND

Best,
Ori

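For reference, a minimal sketch of the spec/mask semantics of the 256-bit extension-header string discussed above; the struct and helper names are hypothetical, nothing like this exists in rte_flow today.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical 256-bit string: bit N corresponds to IP protocol number N. */
    struct ipv6_ext_bits {
        uint64_t w[4];
    };

    /* Masked bits must match the spec exactly; unmasked bits are don't-care. */
    static bool
    ext_hdrs_match(const struct ipv6_ext_bits *present,
                   const struct ipv6_ext_bits *spec,
                   const struct ipv6_ext_bits *mask)
    {
        for (int i = 0; i < 4; i++)
            if ((present->w[i] ^ spec->w[i]) & mask->w[i])
                return false;
        return true;
    }
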
^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 01/11] eal: replace rte_page_sizes with a set of constants
  2020-06-02 23:03  9%     ` [dpdk-dev] [PATCH v6 01/11] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
@ 2020-06-03  1:59  0%       ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2020-06-03  1:59 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Jerin Jacob, John McNamara, Marko Kovacevic,
	Anatoly Burakov

On Wed,  3 Jun 2020 02:03:19 +0300
Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:

> Clang on Windows follows MS ABI where enum values are limited to 2^31-1.
> Enum rte_page_sizes has members valued above this limit, which get
> wrapped to zero, resulting in compilation error (duplicate values in
> enum). Using MS ABI is mandatory for Windows EAL to call Win32 APIs.
> 
> Remove rte_page_sizes and replace its values with #define's.
> This enumeration is not used in public API, so there's no ABI breakage.
> Announce API changes for 20.08 in documentation.
> 
> Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
> Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>

In this case #define makes more sense.

Acked-by: Stephen Hemminger <stephen@networkplumber.org>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6 01/11] eal: replace rte_page_sizes with a set of constants
  @ 2020-06-02 23:03  9%     ` Dmitry Kozlyuk
  2020-06-03  1:59  0%       ` Stephen Hemminger
      2 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2020-06-02 23:03 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Dmitry Kozlyuk, Jerin Jacob, John McNamara,
	Marko Kovacevic, Anatoly Burakov

Clang on Windows follows MS ABI where enum values are limited to 2^31-1.
Enum rte_page_sizes has members valued above this limit, which get
wrapped to zero, resulting in compilation error (duplicate values in
enum). Using MS ABI is mandatory for Windows EAL to call Win32 APIs.

Remove rte_page_sizes and replace its values with #define's.
This enumeration is not used in public API, so there's no ABI breakage.
Announce API changes for 20.08 in documentation.

Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 doc/guides/rel_notes/release_20_08.rst |  2 ++
 lib/librte_eal/include/rte_memory.h    | 23 ++++++++++-------------
 2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe..2041a29b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -85,6 +85,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* ``rte_page_sizes`` enumeration is replaced with ``RTE_PGSIZE_xxx`` defines.
+
 
 ABI Changes
 -----------
diff --git a/lib/librte_eal/include/rte_memory.h b/lib/librte_eal/include/rte_memory.h
index 3d8d0bd69..65374d53a 100644
--- a/lib/librte_eal/include/rte_memory.h
+++ b/lib/librte_eal/include/rte_memory.h
@@ -24,19 +24,16 @@ extern "C" {
 #include <rte_config.h>
 #include <rte_fbarray.h>
 
-__extension__
-enum rte_page_sizes {
-	RTE_PGSIZE_4K    = 1ULL << 12,
-	RTE_PGSIZE_64K   = 1ULL << 16,
-	RTE_PGSIZE_256K  = 1ULL << 18,
-	RTE_PGSIZE_2M    = 1ULL << 21,
-	RTE_PGSIZE_16M   = 1ULL << 24,
-	RTE_PGSIZE_256M  = 1ULL << 28,
-	RTE_PGSIZE_512M  = 1ULL << 29,
-	RTE_PGSIZE_1G    = 1ULL << 30,
-	RTE_PGSIZE_4G    = 1ULL << 32,
-	RTE_PGSIZE_16G   = 1ULL << 34,
-};
+#define RTE_PGSIZE_4K   (1ULL << 12)
+#define RTE_PGSIZE_64K  (1ULL << 16)
+#define RTE_PGSIZE_256K (1ULL << 18)
+#define RTE_PGSIZE_2M   (1ULL << 21)
+#define RTE_PGSIZE_16M  (1ULL << 24)
+#define RTE_PGSIZE_256M (1ULL << 28)
+#define RTE_PGSIZE_512M (1ULL << 29)
+#define RTE_PGSIZE_1G   (1ULL << 30)
+#define RTE_PGSIZE_4G   (1ULL << 32)
+#define RTE_PGSIZE_16G  (1ULL << 34)
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
 
-- 
2.25.4

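As a hedged illustration of why the change is transparent to callers: the values are unchanged, only the C construct differs, so typical page-alignment arithmetic keeps working. The helper below is an assumption for the example, not a DPDK API; constants above INT_MAX such as RTE_PGSIZE_4G (1ULL << 32) are exactly the ones that could not be represented as enum members under the MS ABI.

    #include <stdint.h>
    #include <rte_memory.h>

    /* pgsz must be a power of two, e.g. RTE_PGSIZE_2M or RTE_PGSIZE_1G. */
    static inline uint64_t
    round_up_to_page(uint64_t len, uint64_t pgsz)
    {
        return (len + pgsz - 1) & ~(pgsz - 1);
    }
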

^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-02 18:28  0%   ` Ori Kam
@ 2020-06-02 19:04  3%     ` Adrien Mazarguil
  2020-06-03  8:16  0%       ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Adrien Mazarguil @ 2020-06-02 19:04 UTC (permalink / raw)
  To: Ori Kam
  Cc: Andrew Rybchenko, Dekel Peled, ferruh.yigit, john.mcnamara,
	marko.kovacevic, Asaf Penso, Matan Azrad, Eli Britstein, dev,
	Ivan Malov

Hi Ori, Andrew, Delek,

(been a while eh?)

On Tue, Jun 02, 2020 at 06:28:41PM +0000, Ori Kam wrote:
> Hi Andrew,
> 
> PSB,
[...]
> > > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > > index b0e4199..3bc8ce1 100644
> > > --- a/lib/librte_ethdev/rte_flow.h
> > > +++ b/lib/librte_ethdev/rte_flow.h
> > > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> > >   */
> > >  struct rte_flow_item_ipv6 {
> > >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
> > > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> > 
> > The solution is simple, but hardly generic and adds an
> > example for the future extensions. I doubt that it is the
> > right way to go.
> > 
> I agree with you that this is not the most generic way possible,
> but the IPV6 extensions are very unique. So the solution is also unique.
> In general, I'm always in favor of finding the most generic way, but sometimes
> it is better to keep things simple, and see how it goes.

Same feeling here, it doesn't look right.

> > Maybe we should add a 256-bit string with one bit for each
> > IP protocol number and apply it to extension headers only?
> > If bit A is set in the mask:
> >  - if bit A is set in spec as well, extension header with
> >    IP protocol (1 << A) number must present
> >  - if bit A is clear in spec, extension header with
> >    IP protocol (1 << A) number must absent
> > If bit is clear in the mask, corresponding extension header
> > may present and may absent (i.e. don't care).
> > 
> There are only 12 possible extension headers and currently none of them
> are supported in rte_flow. So adding logic to parse the 256 bits just to get a max of 12
> possible values is overkill. Also, if we disregard the case of the extension,
> the application must select only one next proto. For example, the application
> can't select udp + tcp. There is the option to add a flag for each of the
> possible extensions, does it make more sense to you?

Each of these extension headers has its own structure, we first need the
ability to match them properly by adding the necessary pattern items.

> > The RFC indirectly touches IPv6 proto (next header) matching
> > logic.
> > 
> > If logic used in ETH+VLAN is applied on IPv6 as well, it would
> > make pattern specification and handling complicated. E.g.:
> >   eth / ipv6 / udp / end
> > should match UDP over IPv6 without any extension headers only.
> > 
> The issue with VLAN I agree is different since by definition VLAN is 
> layer 2.5. We can add the same logic also to the VLAN case, maybe it will
> be easier. 
> In any case, in your example above and according to the RFC we will
> get all ipv6 udp traffic with and without extensions.
> 
> > And how to specify UDP over IPv6 regardless of extension headers?
> 
> Please see above the rule will be eth / ipv6 /udp.
> 
> >   eth / ipv6 / ipv6_ext / udp / end
> > with a convention that ipv6_ext is optional if spec and mask
> > are NULL (or mask is empty).
> > 
> I would guess that this flow should match all ipv6 packets that have one ext and the next
> proto is udp.

In my opinion RTE_FLOW_ITEM_TYPE_IPV6_EXT is a bit useless on its own. It's
only for matching packets that contain some kind of extension header, not a
specific one, more about that below.

> > I'm wondering if any driver treats it this way?
> >
> I'm not sure, we can support only the frag ext by default, but if required we can support other 
> ext.
>  
> > I agree that the problem really comes when we'd like to match
> > IPv6 frags or even worse not fragments.
> > 
> > Two patterns for fragments:
> >   eth / ipv6 (proto=FRAGMENT) / end
> >   eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end
> > 
> > Any sensible solution for not-fragments with any other
> > extension headers?
> > 
> The one proposed in this mail 😊
> 
> > INVERT exists, but hardly useful, since it simply says
> > that packets which do not match the pattern without INVERT
> > match the pattern with INVERT and
> >   invert / eth / ipv6 (proto=FRAGMENT) / end
> > will match ARP, IPv4, IPv6 with an extension header before
> > fragment header and so on.
> >
> I agree with you, INVERT in this doesn’t help.
> We were considering adding some kind of not mask / item per item,
> something along these lines:
> the user requests unfragmented ipv6 udp packets. The flow would look something
> like this:
> Eth / ipv6 / Not (Ipv6.proto = frag_proto) / udp
> But it makes the rules much harder to use, and I don't think that there
> is any HW that supports not, and adding such a feature to all items is overkill.
> 
>  
> > Bit string suggested above will allow to match:
> >  - UDP over IPv6 with any extension headers:
> >     eth / ipv6 (ext_hdrs mask empty) / udp / end
> >  - UDP over IPv6 without any extension headers:
> >     eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
> >  - UDP over IPv6 without fragment header:
> >     eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp / end
> >  - UDP over IPv6 with fragment header
> >     eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp / end
> > 
> > where FRAGMENT is 1 << IPPROTO_FRAGMENT.
> > 
> Please see my response regarding this above.
> 
> > Above I intentionally keep 'proto' unspecified in ipv6
> > since otherwise it would specify the next header after IPv6
> > header.
> > 
> > Extension headers mask should be empty by default.

This is a deliberate design choice/issue with rte_flow: an empty pattern
matches everything; adding items only narrows the selection. As Andrew said
there is currently no way to provide a specific item to reject, it can only
be done globally on a pattern through INVERT that no PMD implements so far.

So we have two requirements here: the ability to specifically match IPv6
fragment headers and the ability to reject them.

To match IPv6 fragment headers, we need a dedicated pattern item. The
generic RTE_FLOW_ITEM_TYPE_IPV6_EXT is useless for that on its own, it must
be completed with RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG and associated object
to match individual fields if needed (like all the others
protocols/headers).

Then to reject a pattern item... My preference goes to a new "NOT" meta item
affecting the meaning of the item coming immediately after in the pattern
list. That would be ultra generic, wouldn't break any ABI/API and like
INVERT, wouldn't even require a new object associated with it.

To match UDPv6 traffic when there is no fragment header, one could then do
something like:

 eth / ipv6 / not / ipv6_ext_frag / udp

PMD support would be trivial to implement (I'm sure!)

We may later implement other kinds of "operator" items as Andrew suggested,
for bit-wise stuff and so on. Let's keep adding features on a needed basis
though.

-- 
Adrien Mazarguil
6WIND

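To visualize the proposal, a hedged sketch of how an application would encode the pattern above; both RTE_FLOW_ITEM_TYPE_NOT and RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG are hypothetical names from this discussion, not existing rte_flow items.

    /* Match UDP over IPv6 only when no fragment extension header is present. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_NOT },           /* negate the next item */
        { .type = RTE_FLOW_ITEM_TYPE_IPV6_EXT_FRAG }, /* the rejected header */
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
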
^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-02 14:32  0% ` Andrew Rybchenko
@ 2020-06-02 18:28  0%   ` Ori Kam
  2020-06-02 19:04  3%     ` Adrien Mazarguil
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2020-06-02 18:28 UTC (permalink / raw)
  To: Andrew Rybchenko, Dekel Peled, ferruh.yigit, john.mcnamara,
	marko.kovacevic
  Cc: Asaf Penso, Matan Azrad, Eli Britstein, dev, Adrien Mazarguil,
	Ivan Malov

Hi Andrew,

PSB,

Best,
Ori

> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Tuesday, June 2, 2020 5:33 PM
> Subject: Re: [RFC] ethdev: add fragment attribute to IPv6 item
> 
> On 5/31/20 5:43 PM, Dekel Peled wrote:
> > Using the current implementation of DPDK, an application cannot
> > match on fragmented/non-fragmented IPv6 packets in a simple way.
> >
> > In current implementation:
> > IPv6 header doesn't contain information regarding the packet
> > fragmentation.
> > Fragmented IPv6 packets contain a dedicated extension header, as
> > detailed in RFC [1], which is not yet supported in rte_flow.
> > Non-fragmented packets don't contain the fragment extension header.
> > For an application to match on non-fragmented IPv6 packets, the
> > current implementation doesn't provide a suitable solution.
> > Matching on the Next Header field is not sufficient, since additional
> > extension headers might be present in the same packet.
> > To match on fragmented IPv6 packets, the same difficulty exists.
> >
> > Proposed update:
> > An additional value will be added to IPv6 header struct.
> > This value will contain the fragmentation attribute of the packet,
> > providing simple means for identification of fragmented and
> > non-fragmented packets.
> >
> > This update changes ABI, and is proposed for the 20.11 LTS version.
> >
> > [1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > ---
> >  lib/librte_ethdev/rte_flow.h | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index b0e4199..3bc8ce1 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> >   */
> >  struct rte_flow_item_ipv6 {
> >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
> > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> 
> The solution is simple, but hardly generic and adds an
> example for the future extensions. I doubt that it is a
> right way to go.
> 
I agree with you that this is not the most generic way possible,
but the IPV6 extensions are very unique. So the solution is also unique.
In general, I'm always in favor of finding the most generic way, but sometimes
it is better to keep things simple, and see how it goes.

> Maybe we should add a 256-bit string with one bit for each
> IP protocol number and apply it to extension headers only?
> If bit A is set in the mask:
>  - if bit A is set in spec as well, an extension header with
>    IP protocol (1 << A) number must be present
>  - if bit A is clear in spec, an extension header with
>    IP protocol (1 << A) number must be absent
> If a bit is clear in the mask, the corresponding extension header
> may be present or absent (i.e. don't care).
> 
There are only 12 possible extension headers and currently none of them
are supported in rte_flow, so adding logic to parse the 256 bits just to get
a max of 12 possible values is overkill. Also, if we disregard the case of the
extensions, the application must select only one next proto. For example, the
application can't select udp + tcp. There is the option to add a flag for each
of the possible extensions; does that make more sense to you?
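
Something along these lines (sketch only, field names made up):

 struct rte_flow_item_ipv6 {
 	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
 	uint32_t frag_ext:1;     /**< Fragment extension header present. */
 	uint32_t hop_ext:1;      /**< Hop-by-hop options header present. */
 	uint32_t rout_ext:1;     /**< Routing extension header present. */
 	uint32_t reserved:29;    /**< Reserved, must be zero. */
 };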

> The RFC indirectly touches IPv6 proto (next header) matching
> logic.
> 
> If logic used in ETH+VLAN is applied on IPv6 as well, it would
> make pattern specification and handling complicated. E.g.:
>   eth / ipv6 / udp / end
> should match UDP over IPv6 without any extension headers only.
> 
I agree the VLAN issue is different, since by definition VLAN is
layer 2.5. We could add the same logic to the VLAN case as well; maybe it
would be easier.
In any case, in your example above and according to the RFC, we will
get all IPv6 UDP traffic, with and without extensions.

> And how to specify UDP over IPv6 regardless of extension headers?

Please see above; the rule will be eth / ipv6 / udp.

>   eth / ipv6 / ipv6_ext / udp / end
> with a convention that ipv6_ext is optional if spec and mask
> are NULL (or mask is empty).
> 
I would guess that this flow should match all IPv6 packets that have one
extension header and whose next proto is UDP.

> I'm wondering if any driver treats it this way?
>
I'm not sure. We can support only the frag extension by default, but if
required we can support other extensions.
 
> I agree that the problem really comes when we'd like to match
> IPv6 frags or, even worse, not-fragments.
> 
> Two patterns for fragments:
>   eth / ipv6 (proto=FRAGMENT) / end
>   eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end
> 
> Any sensible solution for not-fragments with any other
> extension headers?
> 
The one proposed in this mail 😊

> INVERT exists, but is hardly useful, since it simply says
> that packets which do not match the pattern without INVERT
> match the pattern with INVERT, and
>   invert / eth / ipv6 (proto=FRAGMENT) / end
> will match ARP, IPv4, IPv6 with an extension header before
> the fragment header and so on.
>
I agree with you, INVERT doesn't help in this case.
We were considering adding some kind of NOT mask / item per item,
something along this line:
the user requests IPv6 unfragmented UDP packets. The flow would look something
like this:
Eth / ipv6 / Not (Ipv6.proto = frag_proto) / udp
But it makes the rules much harder to use, and I don't think there
is any HW that supports NOT, and adding such a feature to all items is overkill.

 
> The bit string suggested above will allow matching:
>  - UDP over IPv6 with any extension headers:
>     eth / ipv6 (ext_hdrs mask empty) / udp / end
>  - UDP over IPv6 without any extension headers:
>     eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
>  - UDP over IPv6 without fragment header:
>     eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp / end
>  - UDP over IPv6 with fragment header
>     eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp / end
> 
> where FRAGMENT is 1 << IPPROTO_FRAGMENT.
> 
Please see my response regarding this above.

> Above I intentionally keep 'proto' unspecified in ipv6
> since otherwise it would specify the next header after IPv6
> header.
> 
> Extension headers mask should be empty by default.
> 
> Andrew.
Ori

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-05-31 14:43  3% [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item Dekel Peled
  2020-06-01  5:38  3% ` Stephen Hemminger
@ 2020-06-02 14:32  0% ` Andrew Rybchenko
  2020-06-02 18:28  0%   ` Ori Kam
  1 sibling, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-06-02 14:32 UTC (permalink / raw)
  To: Dekel Peled, ferruh.yigit, orika, john.mcnamara, marko.kovacevic
  Cc: asafp, matan, elibr, dev, Adrien Mazarguil, Ivan Malov

On 5/31/20 5:43 PM, Dekel Peled wrote:
> Using the current implementation of DPDK, an application cannot
> match on fragmented/non-fragmented IPv6 packets in a simple way.
> 
> In current implementation:
> IPv6 header doesn't contain information regarding the packet
> fragmentation.
> Fragmented IPv6 packets contain a dedicated extension header, as
> detailed in RFC [1], which is not yet supported in rte_flow.
> Non-fragmented packets don't contain the fragment extension header.
> For an application to match on non-fragmented IPv6 packets, the
> current implementation doesn't provide a suitable solution.
> Matching on the Next Header field is not sufficient, since additional
> extension headers might be present in the same packet.
> To match on fragmented IPv6 packets, the same difficulty exists.
> 
> Proposed update:
> An additional value will be added to IPv6 header struct.
> This value will contain the fragmentation attribute of the packet,
> providing simple means for identification of fragmented and
> non-fragmented packets.
> 
> This update changes ABI, and is proposed for the 20.11 LTS version.
> 
> [1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> ---
>  lib/librte_ethdev/rte_flow.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index b0e4199..3bc8ce1 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
>   */
>  struct rte_flow_item_ipv6 {
>  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
> +	uint32_t reserved:31; /**< Reserved, must be zero. */

The solution is simple, but hardly generic and adds an
example for the future extensions. I doubt that it is a
right way to go.

Maybe we should add a 256-bit string with one bit for each
IP protocol number and apply it to extension headers only?
If bit A is set in the mask:
 - if bit A is set in spec as well, an extension header with
   IP protocol (1 << A) number must be present
 - if bit A is clear in spec, an extension header with
   IP protocol (1 << A) number must be absent
If a bit is clear in the mask, the corresponding extension header
may be present or absent (i.e. don't care).

The RFC indirectly touches IPv6 proto (next header) matching
logic.

If logic used in ETH+VLAN is applied on IPv6 as well, it would
make pattern specification and handling complicated. E.g.:
  eth / ipv6 / udp / end
should match UDP over IPv6 without any extension headers only.

And how to specify UDP over IPv6 regardless of extension headers?
  eth / ipv6 / ipv6_ext / udp / end
with a convention that ipv6_ext is optional if spec and mask
are NULL (or mask is empty).

I'm wondering if any driver treats it this way?

I agree that the problem really comes when we'd like to match
IPv6 frags or, even worse, not-fragments.

Two patterns for fragments:
  eth / ipv6 (proto=FRAGMENT) / end
  eth / ipv6 / ipv6_ext (next_hdr=FRAGMENT) / end

Any sensible solution for not-fragments with any other
extension headers?

INVERT exists, but is hardly useful, since it simply says
that packets which do not match the pattern without INVERT
match the pattern with INVERT, and
  invert / eth / ipv6 (proto=FRAGMENT) / end
will match ARP, IPv4, IPv6 with an extension header before
the fragment header and so on.

The bit string suggested above will allow matching:
 - UDP over IPv6 with any extension headers:
    eth / ipv6 (ext_hdrs mask empty) / udp / end
 - UDP over IPv6 without any extension headers:
    eth / ipv6 (ext_hdrs mask full, spec empty) / udp / end
 - UDP over IPv6 without fragment header:
    eth / ipv6 (ext.spec & ~FRAGMENT, ext.mask | FRAGMENT) / udp / end
 - UDP over IPv6 with fragment header
    eth / ipv6 (ext.spec | FRAGMENT, ext.mask | FRAGMENT) / udp / end

where FRAGMENT is 1 << IPPROTO_FRAGMENT.

Above I intentionally keep 'proto' unspecified in ipv6
since otherwise it would specify the next header after IPv6
header.

Extension headers mask should be empty by default.
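
As a rough illustration of the "UDP over IPv6 without fragment header"
case (sketch only, assuming a hypothetical ext_hdrs byte array were added
to struct rte_flow_item_ipv6):

 #include <netinet/in.h> /* IPPROTO_FRAGMENT */

 struct rte_flow_item_ipv6 spec, mask;

 memset(&spec, 0, sizeof(spec));
 memset(&mask, 0, sizeof(mask));
 /* Care about the fragment extension header bit only... */
 mask.ext_hdrs[IPPROTO_FRAGMENT / 8] |= 1 << (IPPROTO_FRAGMENT % 8);
 /* ...and leave the spec bit clear: the header must be absent. */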

Andrew.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-06-01  5:38  3% ` Stephen Hemminger
@ 2020-06-01  6:11  0%   ` Dekel Peled
  0 siblings, 0 replies; 200+ results
From: Dekel Peled @ 2020-06-01  6:11 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: ferruh.yigit, arybchenko, Ori Kam, john.mcnamara,
	marko.kovacevic, Asaf Penso, Matan Azrad, Eli Britstein, dev

Thanks, PSB.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday, June 1, 2020 8:39 AM
> To: Dekel Peled <dekelp@mellanox.com>
> Cc: ferruh.yigit@intel.com; arybchenko@solarflare.com; Ori Kam
> <orika@mellanox.com>; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; Asaf Penso <asafp@mellanox.com>; Matan
> Azrad <matan@mellanox.com>; Eli Britstein <elibr@mellanox.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
> 
> On Sun, 31 May 2020 17:43:29 +0300
> Dekel Peled <dekelp@mellanox.com> wrote:
> 
> > Using the current implementation of DPDK, an application cannot match
> > on fragmented/non-fragmented IPv6 packets in a simple way.
> >
> > In current implementation:
> > IPv6 header doesn't contain information regarding the packet
> > fragmentation.
> > Fragmented IPv6 packets contain a dedicated extension header, as
> > detailed in RFC [1], which is not yet supported in rte_flow.
> > Non-fragmented packets don't contain the fragment extension header.
> > For an application to match on non-fragmented IPv6 packets, the
> > current implementation doesn't provide a suitable solution.
> > Matching on the Next Header field is not sufficient, since additional
> > extension headers might be present in the same packet.
> > To match on fragmented IPv6 packets, the same difficulty exists.
> >
> > Proposed update:
> > An additional value will be added to IPv6 header struct.
> > This value will contain the fragmentation attribute of the packet,
> > providing simple means for identification of fragmented and
> > non-fragmented packets.
> >
> > This update changes ABI, and is proposed for the 20.11 LTS version.
> >
> > [1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > ---
> >  lib/librte_ethdev/rte_flow.h | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h
> > b/lib/librte_ethdev/rte_flow.h index b0e4199..3bc8ce1 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
> >   */
> >  struct rte_flow_item_ipv6 {
> >  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> > +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented.
> */
> > +	uint32_t reserved:31; /**< Reserved, must be zero. */
> >  };
> 
> You can't do this in the 20.08 release; it would be an ABI breakage.
Please see above; I noted in the commit log that this is proposed for 20.11.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
  2020-05-31 14:43  3% [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item Dekel Peled
@ 2020-06-01  5:38  3% ` Stephen Hemminger
  2020-06-01  6:11  0%   ` Dekel Peled
  2020-06-02 14:32  0% ` Andrew Rybchenko
  1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2020-06-01  5:38 UTC (permalink / raw)
  To: Dekel Peled
  Cc: ferruh.yigit, arybchenko, orika, john.mcnamara, marko.kovacevic,
	asafp, matan, elibr, dev

On Sun, 31 May 2020 17:43:29 +0300
Dekel Peled <dekelp@mellanox.com> wrote:

> Using the current implementation of DPDK, an application cannot
> match on fragmented/non-fragmented IPv6 packets in a simple way.
> 
> In current implementation:
> IPv6 header doesn't contain information regarding the packet
> fragmentation.
> Fragmented IPv6 packets contain a dedicated extension header, as
> detailed in RFC [1], which is not yet supported in rte_flow.
> Non-fragmented packets don't contain the fragment extension header.
> For an application to match on non-fragmented IPv6 packets, the
> current implementation doesn't provide a suitable solution.
> Matching on the Next Header field is not sufficient, since additional
> extension headers might be present in the same packet.
> To match on fragmented IPv6 packets, the same difficulty exists.
> 
> Proposed update:
> An additional value will be added to IPv6 header struct.
> This value will contain the fragmentation attribute of the packet,
> providing simple means for identification of fragmented and
> non-fragmented packets.
> 
> This update changes ABI, and is proposed for the 20.11 LTS version.
> 
> [1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
> 
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> ---
>  lib/librte_ethdev/rte_flow.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index b0e4199..3bc8ce1 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
>   */
>  struct rte_flow_item_ipv6 {
>  	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
> +	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
> +	uint32_t reserved:31; /**< Reserved, must be zero. */
>  };

You can't do this in the 20.08 release; it would be an ABI breakage.


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item
@ 2020-05-31 14:43  3% Dekel Peled
  2020-06-01  5:38  3% ` Stephen Hemminger
  2020-06-02 14:32  0% ` Andrew Rybchenko
  0 siblings, 2 replies; 200+ results
From: Dekel Peled @ 2020-05-31 14:43 UTC (permalink / raw)
  To: ferruh.yigit, arybchenko, orika, john.mcnamara, marko.kovacevic
  Cc: asafp, matan, elibr, dev

Using the current implementation of DPDK, an application cannot
match on fragmented/non-fragmented IPv6 packets in a simple way.

In current implementation:
IPv6 header doesn't contain information regarding the packet
fragmentation.
Fragmented IPv6 packets contain a dedicated extension header, as
detailed in RFC [1], which is not yet supported in rte_flow.
Non-fragmented packets don't contain the fragment extension header.
For an application to match on non-fragmented IPv6 packets, the
current implementation doesn't provide a suitable solution.
Matching on the Next Header field is not sufficient, since additional
extension headers might be present in the same packet.
To match on fragmented IPv6 packets, the same difficulty exists.

Proposed update:
An additional value will be added to IPv6 header struct.
This value will contain the fragmentation attribute of the packet,
providing simple means for identification of fragmented and
non-fragmented packets.

This update changes ABI, and is proposed for the 20.11 LTS version.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
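
For example, with this field an application could match non-fragmented
IPv6 UDP packets as follows (illustrative sketch based on this proposal,
not an existing API):

 struct rte_flow_item_ipv6 ipv6_spec = { .is_frag = 0 };
 struct rte_flow_item_ipv6 ipv6_mask = { .is_frag = 1 };

 struct rte_flow_item pattern[] = {
 	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
 	{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
 	  .spec = &ipv6_spec, .mask = &ipv6_mask },
 	{ .type = RTE_FLOW_ITEM_TYPE_UDP },
 	{ .type = RTE_FLOW_ITEM_TYPE_END },
 };

Setting is_frag = 1 in the spec with the same mask would match fragmented
packets instead.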

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
 lib/librte_ethdev/rte_flow.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index b0e4199..3bc8ce1 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -787,6 +787,8 @@ struct rte_flow_item_ipv4 {
  */
 struct rte_flow_item_ipv6 {
 	struct rte_ipv6_hdr hdr; /**< IPv6 header definition. */
+	uint32_t is_frag:1; /**< Is IPv6 packet fragmented/non-fragmented. */
+	uint32_t reserved:31; /**< Reserved, must be zero. */
 };
 
 /** Default mask for RTE_FLOW_ITEM_TYPE_IPV6. */
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v15 0/2] support for VFIO-PCI VF token interface
    2020-05-28  1:22  4% ` [dpdk-dev] [PATCH v14 0/2] support for VFIO-PCI VF token interface Haiyue Wang
@ 2020-05-29  1:37  4% ` Haiyue Wang
  2020-06-17  6:33  4% ` [dpdk-dev] [PATCH v16 " Haiyue Wang
  2 siblings, 0 replies; 200+ results
From: Haiyue Wang @ 2020-05-29  1:37 UTC (permalink / raw)
  To: dev, anatoly.burakov, thomas, jerinj, david.marchand, arybchenko,
	xiaolong.ye
  Cc: Haiyue Wang

v15: Add the missing EXPERIMENTAL warning for the API doxygen.

v14: Rebase the patch for 20.08 release note.

v13: Rename the EAL get VF token function, and leave the freebsd type as empty.

v12: support to vfio devices with VF token and no token.

v11: Use the EAL parameter to pass the VF token, so that not every PCI
     device needs to be specified with this token. Also no ABI issue
     now.

v10: Use the __rte_internal to mark the internal API changing.

v9: Rewrite the document.

v8: Update the document.

v7: Add the Fixes tag in uuid, the release note and help
    document.

v6: Drop the Fixes tag in uuid, since the file has been
    moved to another place, not suitable to apply on stable.
    And this is not a bug, just some kind of enhancement.

v5: 1. Add the VF token parse error handling.
    2. Split into two patches for different logic module.
    3. Add more comments into the code for explaining the design.
    4. Drop the ABI change workaround, this patch set focuses on code review.

v4: 1. Ignore rte_vfio_setup_device ABI check since it is
       for Linux driver use.

v3: Fix the Travis build failed:
           (1). rte_uuid.h:97:55: error: unknown type name ‘size_t’
           (2). rte_uuid.h:58:2: error: implicit declaration of function ‘memcpy’

v2: Fix the FreeBSD build error.

v1: Update the commit message.

RFC v2:
         Based on Vamsi's RFC v1, and Alex's patch for Qemu
        [https://lore.kernel.org/lkml/20200204161737.34696b91@w520.home/]: 
       Use the devarg to pass-down the VF token.

RFC v1: https://patchwork.dpdk.org/patch/66281/ by Vamsi.


Haiyue Wang (2):
  eal: add uuid dependent header files explicitly
  eal: support for VFIO-PCI VF token

 doc/guides/linux_gsg/linux_drivers.rst        | 35 ++++++++++++++++++-
 doc/guides/linux_gsg/linux_eal_parameters.rst |  4 +++
 doc/guides/rel_notes/release_20_08.rst        |  5 +++
 lib/librte_eal/common/eal_common_options.c    |  2 ++
 lib/librte_eal/common/eal_internal_cfg.h      |  2 ++
 lib/librte_eal/common/eal_options.h           |  2 ++
 lib/librte_eal/freebsd/eal.c                  |  4 +++
 lib/librte_eal/include/rte_eal.h              | 15 ++++++++
 lib/librte_eal/include/rte_uuid.h             |  2 ++
 lib/librte_eal/linux/eal.c                    | 29 +++++++++++++++
 lib/librte_eal/linux/eal_vfio.c               | 19 ++++++++++
 lib/librte_eal/rte_eal_version.map            |  1 +
 12 files changed, 119 insertions(+), 1 deletion(-)

-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 3/3] lib: remind experimental status in library headers
  2020-05-28  6:53  5%     ` David Marchand
@ 2020-05-28 18:40  3%       ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-05-28 18:40 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, thomas, techboard, stable, Nicolas Chautru,
	Konstantin Ananyev, Fiona Trahe, Ashish Gupta,
	Honnappa Nagarahalli, Vladimir Medvedkin, Bernard Iremonger,
	Gage Eads, Olivier Matz, Kevin Laatz, nd, Ray Kinsella,
	Neil Horman, nd

<snip>

> 
> Hello Honnappa,
> 
> On Fri, May 22, 2020 at 4:16 PM Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com> wrote:
> > > @@ -11,7 +11,8 @@
> > >   * Wireless base band device abstraction APIs.
> > >   *
> > >   * @warning
> > > - * @b EXPERIMENTAL: this API may change without prior notice
> > > + * @b EXPERIMENTAL:
> > > + * All functions in this file may change or disappear without prior notice.
> > nit, is 'removed' a better choice instead of 'disappear'? Maybe something
> > like:
> > All functions in this file may be changed or removed without prior notice.
> 
> I used the same form as in the abi policy (which I wanted but forgot to
> update in patch 1 afterwards... will be fixed in v2).
> 
> #. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may
>    change without constraint, as they are not considered part of an ABI version.
>    Experimental libraries have the major ABI version ``0``.
> 
> No strong opinion, but I prefer keeping a single phrasing.
> If we go with your suggestion, I will update the abi policy.
Agree on using the same wording.
IMO, we should capture the 'removed/disappear' wording in the ABI policy.

> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [20.11] [PATCH v2] stack: remove experimental tag from API
  2020-05-28  1:04  3% [dpdk-dev] [PATCH] stack: remove experimental tag from API Gage Eads
  2020-05-28  5:46  3% ` Ray Kinsella
@ 2020-05-28 15:04  8% ` Gage Eads
  1 sibling, 0 replies; 200+ results
From: Gage Eads @ 2020-05-28 15:04 UTC (permalink / raw)
  To: dev; +Cc: thomas, david.marchand, mdr, nhorman, phil.yang, honnappa.nagarahalli

The stack library was first released in 19.05, and its interfaces have been
stable since their initial introduction. This commit promotes the full
interface to stable, starting with the 20.11 major version.

Signed-off-by: Gage Eads <gage.eads@intel.com>
---
 doc/guides/rel_notes/release_20_11.rst |  3 +++
 lib/librte_stack/rte_stack.h           | 29 -----------------------------
 lib/librte_stack/rte_stack_lf.h        |  2 --
 lib/librte_stack/rte_stack_std.h       |  3 ---
 lib/librte_stack/rte_stack_version.map |  2 +-
 5 files changed, 4 insertions(+), 35 deletions(-)

v2:
- Added 20.11 tag, will set patch status to 'Deferred' so it is skipped
  for the 20.08 development period.
- Added release notes announcement. release_20_11.rst doesn't exist yet,
  so I made the change to release_20_08.rst then edited the filename
  directly in the patch. This will not apply cleanly.
- Changed rte_stack_version.map version to DPDK_21.

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 39064afbe..4eaf4f17d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -101,6 +101,9 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* stack: the experimental tag has been dropped from the stack library, and its
+  interfaces are considered stable as of DPDK 20.11.
+
 * No ABI change that would break compatibility with 19.11.
 
 
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 27ddb199e..343dd019a 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -4,7 +4,6 @@
 
 /**
  * @file rte_stack.h
- * @b EXPERIMENTAL: this API may change without prior notice
  *
  * RTE Stack
  *
@@ -98,9 +97,6 @@ struct rte_stack {
 #include "rte_stack_lf.h"
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Push several objects on the stack (MT-safe).
  *
  * @param s
@@ -112,7 +108,6 @@ struct rte_stack {
  * @return
  *   Actual number of objects pushed (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
 {
@@ -126,9 +121,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Pop several objects from the stack (MT-safe).
  *
  * @param s
@@ -140,7 +132,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
  * @return
  *   Actual number of objects popped (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
@@ -154,9 +145,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Return the number of used entries in a stack.
  *
  * @param s
@@ -164,7 +152,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
  * @return
  *   The number of used entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_count(struct rte_stack *s)
 {
@@ -177,9 +164,6 @@ rte_stack_count(struct rte_stack *s)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Return the number of free entries in a stack.
  *
  * @param s
@@ -187,7 +171,6 @@ rte_stack_count(struct rte_stack *s)
  * @return
  *   The number of free entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_free_count(struct rte_stack *s)
 {
@@ -197,9 +180,6 @@ rte_stack_free_count(struct rte_stack *s)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Create a new stack named *name* in memory.
  *
  * This function uses ``memzone_reserve()`` to allocate memory for a stack of
@@ -226,28 +206,20 @@ rte_stack_free_count(struct rte_stack *s)
  *    - ENOMEM - insufficient memory to create the stack
  *    - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
  */
-__rte_experimental
 struct rte_stack *
 rte_stack_create(const char *name, unsigned int count, int socket_id,
 		 uint32_t flags);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Free all memory used by the stack.
  *
  * @param s
  *   Stack to free
  */
-__rte_experimental
 void
 rte_stack_free(struct rte_stack *s);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Lookup a stack by its name.
  *
  * @param name
@@ -258,7 +230,6 @@ rte_stack_free(struct rte_stack *s);
  *    - ENOENT - Stack with name *name* not found.
  *    - EINVAL - *name* pointer is NULL.
  */
-__rte_experimental
 struct rte_stack *
 rte_stack_lookup(const char *name);
 
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index e67630c27..eb106e64e 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -27,7 +27,6 @@
  * @return
  *   Actual number of objects enqueued.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_lf_push(struct rte_stack *s,
 		    void * const *obj_table,
@@ -66,7 +65,6 @@ __rte_stack_lf_push(struct rte_stack *s,
  * @return
  *   - Actual number of objects popped.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
index 7142cbf8e..ae28add5c 100644
--- a/lib/librte_stack/rte_stack_std.h
+++ b/lib/librte_stack/rte_stack_std.h
@@ -19,7 +19,6 @@
  * @return
  *   Actual number of objects pushed (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
 		     unsigned int n)
@@ -59,7 +58,6 @@ __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
  * @return
  *   Actual number of objects popped (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
@@ -94,7 +92,6 @@ __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
  * @return
  *   The number of used entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_count(struct rte_stack *s)
 {
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c3..8c4ca0245 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,4 +1,4 @@
-EXPERIMENTAL {
+DPDK_21 {
 	global:
 
 	rte_stack_create;
-- 
2.13.6


^ permalink raw reply	[relevance 8%]

* Re: [dpdk-dev] [PATCH v2] devtools: remove useless files from ABI reference
  2020-05-24 17:43 13% ` [dpdk-dev] [PATCH v2] " Thomas Monjalon
@ 2020-05-28 13:16  4%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-28 13:16 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, Bruce Richardson

On Sun, May 24, 2020 at 7:43 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> When building an ABI reference with meson, some static libraries
> are built and linked in apps. They are useless and take a lot of space.
> Those binaries, and other useless files (examples and doc files)
> in the share/ directory, are removed after being installed.
>
> In order to save time when building the ABI reference,
> the examples (which are not installed anyway) are not compiled.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> v2: find static libraries anywhere it tries hiding from being swept
> ---
>  devtools/test-meson-builds.sh | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index 18b874fac5..de569a486f 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -140,10 +140,15 @@ build () # <directory> <target compiler> <meson options>
>                         fi
>
>                         rm -rf $abirefdir/build
> -                       config $abirefdir/src $abirefdir/build $*
> +                       config $abirefdir/src $abirefdir/build -Dexamples= $*
>                         compile $abirefdir/build
>                         install_target $abirefdir/build $abirefdir/$targetdir
>                         $srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
> +
> +                       # save disk space by removing static libs and apps
> +                       find $abirefdir/$targetdir/usr/local -name '*.a' -delete

I would prefer -exec rm -f {} as Bruce proposed, because -delete is not POSIX.
But otherwise, it works and looks good to me.


> +                       rm -rf $abirefdir/$targetdir/usr/local/bin
> +                       rm -rf $abirefdir/$targetdir/usr/local/share
>                 fi
>
>                 install_target $builds_dir/$targetdir \
> --
> 2.26.2
>

With either -delete or -exec rm:
Acked-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 3/3] lib: remind experimental status in library headers
  @ 2020-05-28  6:53  5%     ` David Marchand
  2020-05-28 18:40  3%       ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-05-28  6:53 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: dev, thomas, techboard, stable, Nicolas Chautru,
	Konstantin Ananyev, Fiona Trahe, Ashish Gupta,
	Vladimir Medvedkin, Bernard Iremonger, Gage Eads, Olivier Matz,
	Kevin Laatz, nd, Ray Kinsella, Neil Horman

Hello Honnappa,

On Fri, May 22, 2020 at 4:16 PM Honnappa Nagarahalli
<Honnappa.Nagarahalli@arm.com> wrote:
> > @@ -11,7 +11,8 @@
> >   * Wireless base band device abstraction APIs.
> >   *
> >   * @warning
> > - * @b EXPERIMENTAL: this API may change without prior notice
> > + * @b EXPERIMENTAL:
> > + * All functions in this file may change or disappear without prior notice.
> nit, is 'removed' a better choice instead of 'disappear'? Maybe something like:
> All functions in this file may be changed or removed without prior notice.

I used the same form as in the abi policy (which I wanted but forgot
to update in patch 1 afterwards... will be fixed in v2).

#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may
   change without constraint, as they are not considered part of an ABI version.
   Experimental libraries have the major ABI version ``0``.

No strong opinion, but I prefer keeping a single phrasing.
If we go with your suggestion, I will update the abi policy.


-- 
David Marchand


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH] stack: remove experimental tag from API
  2020-05-28  1:04  3% [dpdk-dev] [PATCH] stack: remove experimental tag from API Gage Eads
@ 2020-05-28  5:46  3% ` Ray Kinsella
  2020-05-28 15:04  8% ` [dpdk-dev] [20.11] [PATCH v2] " Gage Eads
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-28  5:46 UTC (permalink / raw)
  To: Gage Eads, dev
  Cc: thomas, david.marchand, nhorman, phil.yang, honnappa.nagarahalli,
	olivier.matz

Hi Gage,

Do you have any idea
if the change from experimental to stable symbols is likely to break anyone's application?
(Are folks actively using the API with shared libraries?)

If so, you _may_ consider offering a temporary alias to experimental until v21 is declared at 20.11.
This is entirely at your discretion; see
https://doc.dpdk.org/guides/contributing/abi_versioning.html (4.3.1.4)
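
For example (roughly the shape this takes, per that section), the symbol
would stay listed in both version nodes of the map file:

 DPDK_21 {
 	global:

 	rte_stack_create;
 };

 EXPERIMENTAL {
 	global:

 	rte_stack_create;
 };

with the experimental alias itself emitted from the C file via the macros
in rte_function_versioning.h, as described there.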

Please see my additional comments on the map file below.

On 28/05/2020 02:04, Gage Eads wrote:
> The stack library was first released in 19.05, and its interfaces have been
> stable since their initial introduction. This commit promotes the full
> interface to stable, starting with the 20.08 ABI.
> 
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> ---
>  lib/librte_stack/rte_stack.h           | 29 -----------------------------
>  lib/librte_stack/rte_stack_lf.h        |  2 --
>  lib/librte_stack/rte_stack_std.h       |  3 ---
>  lib/librte_stack/rte_stack_version.map |  2 +-
>  4 files changed, 1 insertion(+), 35 deletions(-)
> 
> diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
> index 27ddb199e..343dd019a 100644
> --- a/lib/librte_stack/rte_stack.h
> +++ b/lib/librte_stack/rte_stack.h
> @@ -4,7 +4,6 @@
>  
>  /**
>   * @file rte_stack.h
> - * @b EXPERIMENTAL: this API may change without prior notice
>   *
>   * RTE Stack
>   *
> @@ -98,9 +97,6 @@ struct rte_stack {
>  #include "rte_stack_lf.h"
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Push several objects on the stack (MT-safe).
>   *
>   * @param s
> @@ -112,7 +108,6 @@ struct rte_stack {
>   * @return
>   *   Actual number of objects pushed (either 0 or *n*).
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
>  {
> @@ -126,9 +121,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
>  }
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Pop several objects from the stack (MT-safe).
>   *
>   * @param s
> @@ -140,7 +132,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
>   * @return
>   *   Actual number of objects popped (either 0 or *n*).
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>  {
> @@ -154,9 +145,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>  }
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Return the number of used entries in a stack.
>   *
>   * @param s
> @@ -164,7 +152,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>   * @return
>   *   The number of used entries in the stack.
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  rte_stack_count(struct rte_stack *s)
>  {
> @@ -177,9 +164,6 @@ rte_stack_count(struct rte_stack *s)
>  }
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Return the number of free entries in a stack.
>   *
>   * @param s
> @@ -187,7 +171,6 @@ rte_stack_count(struct rte_stack *s)
>   * @return
>   *   The number of free entries in the stack.
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  rte_stack_free_count(struct rte_stack *s)
>  {
> @@ -197,9 +180,6 @@ rte_stack_free_count(struct rte_stack *s)
>  }
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Create a new stack named *name* in memory.
>   *
>   * This function uses ``memzone_reserve()`` to allocate memory for a stack of
> @@ -226,28 +206,20 @@ rte_stack_free_count(struct rte_stack *s)
>   *    - ENOMEM - insufficient memory to create the stack
>   *    - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
>   */
> -__rte_experimental
>  struct rte_stack *
>  rte_stack_create(const char *name, unsigned int count, int socket_id,
>  		 uint32_t flags);
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Free all memory used by the stack.
>   *
>   * @param s
>   *   Stack to free
>   */
> -__rte_experimental
>  void
>  rte_stack_free(struct rte_stack *s);
>  
>  /**
> - * @warning
> - * @b EXPERIMENTAL: this API may change without prior notice
> - *
>   * Lookup a stack by its name.
>   *
>   * @param name
> @@ -258,7 +230,6 @@ rte_stack_free(struct rte_stack *s);
>   *    - ENOENT - Stack with name *name* not found.
>   *    - EINVAL - *name* pointer is NULL.
>   */
> -__rte_experimental
>  struct rte_stack *
>  rte_stack_lookup(const char *name);
>  
> diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
> index e67630c27..eb106e64e 100644
> --- a/lib/librte_stack/rte_stack_lf.h
> +++ b/lib/librte_stack/rte_stack_lf.h
> @@ -27,7 +27,6 @@
>   * @return
>   *   Actual number of objects enqueued.
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  __rte_stack_lf_push(struct rte_stack *s,
>  		    void * const *obj_table,
> @@ -66,7 +65,6 @@ __rte_stack_lf_push(struct rte_stack *s,
>   * @return
>   *   - Actual number of objects popped.
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  __rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>  {
> diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
> index 7142cbf8e..ae28add5c 100644
> --- a/lib/librte_stack/rte_stack_std.h
> +++ b/lib/librte_stack/rte_stack_std.h
> @@ -19,7 +19,6 @@
>   * @return
>   *   Actual number of objects pushed (either 0 or *n*).
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
>  		     unsigned int n)
> @@ -59,7 +58,6 @@ __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
>   * @return
>   *   Actual number of objects popped (either 0 or *n*).
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>  {
> @@ -94,7 +92,6 @@ __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
>   * @return
>   *   The number of used entries in the stack.
>   */
> -__rte_experimental
>  static __rte_always_inline unsigned int
>  __rte_stack_std_count(struct rte_stack *s)
>  {
> diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
> index 6662679c3..22c703d3b 100644
> --- a/lib/librte_stack/rte_stack_version.map
> +++ b/lib/librte_stack/rte_stack_version.map
> @@ -1,4 +1,4 @@
> -EXPERIMENTAL {
> +DPDK_20.0.3 {
>  	global:
>  
>  	rte_stack_create;
> 

This should be DPDK_21; the next major ABI version is v21.



^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v14 0/2] support for VFIO-PCI VF token interface
  @ 2020-05-28  1:22  4% ` Haiyue Wang
  2020-05-29  1:37  4% ` [dpdk-dev] [PATCH v15 " Haiyue Wang
  2020-06-17  6:33  4% ` [dpdk-dev] [PATCH v16 " Haiyue Wang
  2 siblings, 0 replies; 200+ results
From: Haiyue Wang @ 2020-05-28  1:22 UTC (permalink / raw)
  To: dev, anatoly.burakov, thomas, jerinj, david.marchand, arybchenko
  Cc: Haiyue Wang

v14: Rebase the patch for 20.08 release note.

v13: Rename the EAL get VF token function, and leave the freebsd type as empty.

v12: support to vfio devices with VF token and no token.

v11: Use the EAL parameter to pass the VF token, so that not every PCI
     device needs to be specified with this token. Also no ABI issue
     now.

v10: Use the __rte_internal to mark the internal API changing.

v9: Rewrite the document.

v8: Update the document.

v7: Add the Fixes tag in uuid, the release note and help
    document.

v6: Drop the Fixes tag in uuid, since the file has been
    moved to another place, not suitable to apply on stable.
    And this is not a bug, just some kind of enhancement.

v5: 1. Add the VF token parse error handling.
    2. Split into two patches for different logic module.
    3. Add more comments into the code for explaining the design.
    4. Drop the ABI change workaround, this patch set focuses on code review.

v4: 1. Ignore rte_vfio_setup_device ABI check since it is
       for Linux driver use.

v3: Fix the Travis build failed:
           (1). rte_uuid.h:97:55: error: unknown type name ‘size_t’
           (2). rte_uuid.h:58:2: error: implicit declaration of function ‘memcpy’

v2: Fix the FreeBSD build error.

v1: Update the commit message.

RFC v2:
         Based on Vamsi's RFC v1, and Alex's patch for Qemu
        [https://lore.kernel.org/lkml/20200204161737.34696b91@w520.home/]: 
       Use the devarg to pass-down the VF token.

RFC v1: https://patchwork.dpdk.org/patch/66281/ by Vamsi.

Haiyue Wang (2):
  eal: add uuid dependent header files explicitly
  eal: support for VFIO-PCI VF token

 doc/guides/linux_gsg/linux_drivers.rst        | 35 ++++++++++++++++++-
 doc/guides/linux_gsg/linux_eal_parameters.rst |  4 +++
 doc/guides/rel_notes/release_20_08.rst        |  5 +++
 lib/librte_eal/common/eal_common_options.c    |  2 ++
 lib/librte_eal/common/eal_internal_cfg.h      |  2 ++
 lib/librte_eal/common/eal_options.h           |  2 ++
 lib/librte_eal/freebsd/eal.c                  |  4 +++
 lib/librte_eal/include/rte_eal.h              | 12 +++++++
 lib/librte_eal/include/rte_uuid.h             |  2 ++
 lib/librte_eal/linux/eal.c                    | 29 +++++++++++++++
 lib/librte_eal/linux/eal_vfio.c               | 19 ++++++++++
 lib/librte_eal/rte_eal_version.map            |  1 +
 12 files changed, 116 insertions(+), 1 deletion(-)

-- 
2.26.2


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH] stack: remove experimental tag from API
@ 2020-05-28  1:04  3% Gage Eads
  2020-05-28  5:46  3% ` Ray Kinsella
  2020-05-28 15:04  8% ` [dpdk-dev] [20.11] [PATCH v2] " Gage Eads
  0 siblings, 2 replies; 200+ results
From: Gage Eads @ 2020-05-28  1:04 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, mdr, nhorman, phil.yang,
	honnappa.nagarahalli, olivier.matz

The stack library was first released in 19.05, and its interfaces have been
stable since their initial introduction. This commit promotes the full
interface to stable, starting with the 20.08 ABI.

Signed-off-by: Gage Eads <gage.eads@intel.com>
---
 lib/librte_stack/rte_stack.h           | 29 -----------------------------
 lib/librte_stack/rte_stack_lf.h        |  2 --
 lib/librte_stack/rte_stack_std.h       |  3 ---
 lib/librte_stack/rte_stack_version.map |  2 +-
 4 files changed, 1 insertion(+), 35 deletions(-)

diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 27ddb199e..343dd019a 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -4,7 +4,6 @@
 
 /**
  * @file rte_stack.h
- * @b EXPERIMENTAL: this API may change without prior notice
  *
  * RTE Stack
  *
@@ -98,9 +97,6 @@ struct rte_stack {
 #include "rte_stack_lf.h"
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Push several objects on the stack (MT-safe).
  *
  * @param s
@@ -112,7 +108,6 @@ struct rte_stack {
  * @return
  *   Actual number of objects pushed (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
 {
@@ -126,9 +121,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Pop several objects from the stack (MT-safe).
  *
  * @param s
@@ -140,7 +132,6 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
  * @return
  *   Actual number of objects popped (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
@@ -154,9 +145,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Return the number of used entries in a stack.
  *
  * @param s
@@ -164,7 +152,6 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
  * @return
  *   The number of used entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_count(struct rte_stack *s)
 {
@@ -177,9 +164,6 @@ rte_stack_count(struct rte_stack *s)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Return the number of free entries in a stack.
  *
  * @param s
@@ -187,7 +171,6 @@ rte_stack_count(struct rte_stack *s)
  * @return
  *   The number of free entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 rte_stack_free_count(struct rte_stack *s)
 {
@@ -197,9 +180,6 @@ rte_stack_free_count(struct rte_stack *s)
 }
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Create a new stack named *name* in memory.
  *
  * This function uses ``memzone_reserve()`` to allocate memory for a stack of
@@ -226,28 +206,20 @@ rte_stack_free_count(struct rte_stack *s)
  *    - ENOMEM - insufficient memory to create the stack
  *    - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
  */
-__rte_experimental
 struct rte_stack *
 rte_stack_create(const char *name, unsigned int count, int socket_id,
 		 uint32_t flags);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Free all memory used by the stack.
  *
  * @param s
  *   Stack to free
  */
-__rte_experimental
 void
 rte_stack_free(struct rte_stack *s);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Lookup a stack by its name.
  *
  * @param name
@@ -258,7 +230,6 @@ rte_stack_free(struct rte_stack *s);
  *    - ENOENT - Stack with name *name* not found.
  *    - EINVAL - *name* pointer is NULL.
  */
-__rte_experimental
 struct rte_stack *
 rte_stack_lookup(const char *name);
 
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index e67630c27..eb106e64e 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -27,7 +27,6 @@
  * @return
  *   Actual number of objects enqueued.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_lf_push(struct rte_stack *s,
 		    void * const *obj_table,
@@ -66,7 +65,6 @@ __rte_stack_lf_push(struct rte_stack *s,
  * @return
  *   - Actual number of objects popped.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
index 7142cbf8e..ae28add5c 100644
--- a/lib/librte_stack/rte_stack_std.h
+++ b/lib/librte_stack/rte_stack_std.h
@@ -19,7 +19,6 @@
  * @return
  *   Actual number of objects pushed (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
 		     unsigned int n)
@@ -59,7 +58,6 @@ __rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
  * @return
  *   Actual number of objects popped (either 0 or *n*).
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
 {
@@ -94,7 +92,6 @@ __rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
  * @return
  *   The number of used entries in the stack.
  */
-__rte_experimental
 static __rte_always_inline unsigned int
 __rte_stack_std_count(struct rte_stack *s)
 {
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c3..22c703d3b 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,4 +1,4 @@
-EXPERIMENTAL {
+DPDK_20.0.3 {
 	global:
 
 	rte_stack_create;
-- 
2.13.6


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 21:43  4%       ` Thomas Monjalon
@ 2020-05-28  0:28  4%         ` Neil Horman
  0 siblings, 0 replies; 200+ results
From: Neil Horman @ 2020-05-28  0:28 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Harini Ramakrishnan, Fady Bader, dev, Omar Cardona,
	Pallavi Kadam, Ranjit Menon, dmitry.kozliuk, mdr

On Wed, May 27, 2020 at 11:43:49PM +0200, Thomas Monjalon wrote:
> 27/05/2020 23:27, Thomas Monjalon:
> > 27/05/2020 22:35, Neil Horman:
> > > On Wed, May 27, 2020 at 02:50:07PM +0200, Thomas Monjalon wrote:
> > > > +Cc more people
> > > > 
> > > > 27/05/2020 12:41, Fady Bader:
> > > > > What should we do with the ABI versioning in Windows ?
> > > > 
> > > > I think there are 2 questions here:
> > > > 
> > > > 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> > > > The decision must be clearly documented.
> > > > 
> > My first notion, without any greater thought, is "why wouldn't we?".  ABI
> > stability is OS agnostic.  If a symbol is considered stable, there's no reason
> > that I can think of that it wouldn't be stable for each OS.
> > 
> > Technical reason + no need so far.
> > 
> > 
> > > > 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> > > > Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> > > > 
> > > Can you elaborate on what exactly the issue is here?  I presume by your comment
> > > above that visual studio either doesn't support symbol level versioning or
> > > doesn't support versioning at all?
> > 
> > I don't know how to implement the macros in rte_function_versioning.h for Windows.
> > 
> > 
> > If that's the case, and there is a commitment to make dpdk buildable on windows,
> > I suppose the only choice is to make an ifdef WINDOWS section of the
> > rte_function_versioning.h file, and effectively turn all the macros into no-ops.
> > 
> > Yes that's the idea.
> > But we still need to implement either BIND_DEFAULT_SYMBOL or MAP_STATIC_SYMBOL
> > to alias the latest function version to the actual function symbol.
> 
> I've just found a tip in https://sourceware.org/binutils/docs/ld/WIN32.html
> It suggests to create a weak symbol:
> void foo() __attribute__((weak, alias ("foo_latestversion")));
> 
Ahh, you're using mingw, which appears to support versioning.  If the windows
equivalent of ld.so honors those versions, I would think the versioning bits
should almost just work (assuming that mingw supports all the used
__attributes__).

Neil

> 
> > > The BIND_DEFAULT_SYMBOL macro looks like it could still work, as MSVC has an
> > > alias linker command thats implementable via __pragma, but thats probably all we
> > > can do, unless there is some more robust versioning support that I can't find.
> > 
> > What is this pragma?
> > 
> > 
> > > Note we will also likely need to augment the makefiles/meson files so that the
> > > link stage doesn't pass the version script to the linker
> > 
> > Why not using the version script for exported symbols?
> > We are already doing it (.def file generated from .map).
> 
> 
> 
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 21:27  4%     ` Thomas Monjalon
  2020-05-27 21:43  4%       ` Thomas Monjalon
@ 2020-05-28  0:21  4%       ` Neil Horman
  1 sibling, 0 replies; 200+ results
From: Neil Horman @ 2020-05-28  0:21 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Harini Ramakrishnan, Fady Bader, dev, Omar Cardona,
	Pallavi Kadam, Ranjit Menon, dmitry.kozliuk, mdr

On Wed, May 27, 2020 at 11:27:12PM +0200, Thomas Monjalon wrote:
> 27/05/2020 22:35, Neil Horman:
> > On Wed, May 27, 2020 at 02:50:07PM +0200, Thomas Monjalon wrote:
> > > +Cc more people
> > > 
> > > 27/05/2020 12:41, Fady Bader:
> > > > What should we do with the ABI versioning in Windows ?
> > > 
> > > I think there are 2 questions here:
> > > 
> > > 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> > > The decision must be clearly documented.
> > > 
> > My first notion, without any greater thought, is "why wouldn't we?".  ABI
> > stability is OS agnostic.  If a symbol is considered stable, there's no reason
> > that I can think of that it wouldn't be stable for each OS.
> 
> Technical reason + no need so far.
> 
I'm not sure what you mean by technical reason.  As for need, I'd be careful of
that.  We already have the infrastructure, so if symbol versioning can be
implemented, we should.  We should only rip it out for windows if the
compiler/linker/loader doesn't support symbol versioning.  And I honestly don't
have a definitive answer on that.

> 
> > > 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> > > Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> > > 
> > Can you elaborate on what exactly the issue is here?  I presume by your comment
> > above that visual studio either doesn't support symbol level versioning or
> > doesn't support versioning at all?
> 
> I don't know how to implement the macros in rte_function_versioning.h for Windows.
> 
Thats a question beyond my skill, especially given that I don't have a windows
compiler available

> 
> > If that's the case, and there is a commitment to make dpdk buildable on windows,
> > I suppose the only choice is to make an ifdef WINDOWS section of the
> > rte_function_versioning.h file, and effectively turn all the macros into no-ops.
> 
> Yes that's the idea.
> But we still need to implement either BIND_DEFAULT_SYMBOL or MAP_STATIC_SYMBOL
> to alias the latest function version to the actual function symbol.
> 
You can use alternate names, which is equivalent to clang/gcc's
__attribute__((alias)):
https://stackoverflow.com/questions/53381461/does-visual-c-provide-a-language-construct-with-the-same-functionality-as-a
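
i.e. something like this (untested sketch, made-up names):

 /* MSVC: resolve the unversioned symbol to the latest implementation. */
 #pragma comment(linker, "/alternatename:foo=foo_latestversion")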

> > The BIND_DEFAULT_SYMBOL macro looks like it could still work, as MSVC has an
> > alias linker command thats implementable via __pragma, but thats probably all we
> > can do, unless there is some more robust versioning support that I can't find.
> 
> What is this pragma?
> 
See the link above

> 
> > Note we will also likely need to augment the makefiles/meson files so that the
> > link stage doesn't pass the version script to the linker
> 
> Why not using the version script for exported symbols?
> We are already doing it (.def file generated from .map).
> 
Well, if msvc doesn't support symbol versioning, I expect their linker won't
accept a linker version script, or are you using another compiler?
Neil

> 
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 21:27  4%     ` Thomas Monjalon
@ 2020-05-27 21:43  4%       ` Thomas Monjalon
  2020-05-28  0:28  4%         ` Neil Horman
  2020-05-28  0:21  4%       ` Neil Horman
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-27 21:43 UTC (permalink / raw)
  To: Neil Horman, Harini Ramakrishnan, Fady Bader
  Cc: dev, Omar Cardona, Pallavi Kadam, Ranjit Menon, dmitry.kozliuk, mdr

27/05/2020 23:27, Thomas Monjalon:
> 27/05/2020 22:35, Neil Horman:
> > On Wed, May 27, 2020 at 02:50:07PM +0200, Thomas Monjalon wrote:
> > > +Cc more people
> > > 
> > > 27/05/2020 12:41, Fady Bader:
> > > > What should we do with the ABI versioning in Windows ?
> > > 
> > > I think there are 2 questions here:
> > > 
> > > 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> > > The decision must be clearly documented.
> > > 
> > My first notion, without any greater thought, is "why wouldn't we?".  ABI
> > stability is OS-agnostic.  If a symbol is considered stable, there's no reason
> > that I can think of that it wouldn't be stable for each OS.
> 
> Technical reason + no need so far.
> 
> 
> > > 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> > > Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> > > 
> > Can you elaborate on what exactly the issue is here?  I presume by your comment
> > above that Visual Studio either doesn't support symbol-level versioning or
> > doesn't support versioning at all?
> 
> I don't know how to implement the macros in rte_function_versioning.h for Windows.
> 
> 
> > If that's the case, and there is a commitment to make DPDK buildable on Windows,
> > I suppose the only choice is to make an ifdef WINDOWS section of the
> > rte_function_versioning.h file, and effectively turn all the macros into no-ops.
> 
> Yes, that's the idea.
> But we still need to implement either BIND_DEFAULT_SYMBOL or MAP_STATIC_SYMBOL
> to alias the latest function version to the actual function symbol.

I've just found a tip in https://sourceware.org/binutils/docs/ld/WIN32.html
It suggests creating a weak symbol:
void foo() __attribute__((weak, alias ("foo_latestversion")));
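
Assuming MinGW accepts that form, BIND_DEFAULT_SYMBOL could presumably be
reduced to emitting such an alias instead of a .symver directive. An untested
sketch with hypothetical names (rte_foo/rte_foo_v21 are not real DPDK symbols):

	/* The latest implementation keeps its versioned symbol name... */
	int rte_foo_v21(int x)
	{
		return x + 1;
	}

	/* ...and the bare public name is emitted as a weak alias to it. */
	int rte_foo(int x) __attribute__((weak, alias("rte_foo_v21")));

The older per-version symbols would simply not be emitted on Windows, since
there is no mechanism to version them there.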


> > The BIND_DEFAULT_SYMBOL macro looks like it could still work, as MSVC has an
> > alias linker command that's implementable via __pragma, but that's probably all
> > we can do, unless there is some more robust versioning support that I can't find.
> 
> What is this pragma?
> 
> 
> > Note we will also likely need to augment the makefiles/meson files so that the
> > link stage doesn't pass the version script to the linker.
> 
> Why not use the version script for exported symbols?
> We are already doing it (.def file generated from .map).




^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 20:35  7%   ` Neil Horman
@ 2020-05-27 21:27  4%     ` Thomas Monjalon
  2020-05-27 21:43  4%       ` Thomas Monjalon
  2020-05-28  0:21  4%       ` Neil Horman
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2020-05-27 21:27 UTC (permalink / raw)
  To: Neil Horman, Harini Ramakrishnan
  Cc: Fady Bader, dev, Omar Cardona, Pallavi Kadam, Ranjit Menon,
	dmitry.kozliuk, mdr

27/05/2020 22:35, Neil Horman:
> On Wed, May 27, 2020 at 02:50:07PM +0200, Thomas Monjalon wrote:
> > +Cc more people
> > 
> > 27/05/2020 12:41, Fady Bader:
> > > What should we do with the ABI versioning in Windows ?
> > 
> > I think there are 2 questions here:
> > 
> > 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> > The decision must be clearly documented.
> > 
> My first notion, without any greater thought, is "why wouldn't we?".  ABI
> stability is OS-agnostic.  If a symbol is considered stable, there's no reason
> that I can think of that it wouldn't be stable for each OS.

Technical reason + no need so far.


> > 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> > Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> > 
> Can you elaborate on what exactly the issue is here?  I presume by your comment
> above that Visual Studio either doesn't support symbol-level versioning or
> doesn't support versioning at all?

I don't know how to implement the macros in rte_function_versioning.h for Windows.


> If that's the case, and there is a commitment to make DPDK buildable on Windows,
> I suppose the only choice is to make an ifdef WINDOWS section of the
> rte_function_versioning.h file, and effectively turn all the macros into no-ops.

Yes, that's the idea.
But we still need to implement either BIND_DEFAULT_SYMBOL or MAP_STATIC_SYMBOL
to alias the latest function version to the actual function symbol.

> The BIND_DEFAULT_SYMBOL macro looks like it could still work, as MSVC has an
> alias linker command that's implementable via __pragma, but that's probably all
> we can do, unless there is some more robust versioning support that I can't find.

What is this pragma?


> Note we will also likely need to augment the makefiles/meson files so that the
> link stage doesn't pass the version script to the linker.

Why not use the version script for exported symbols?
We are already doing it (.def file generated from .map).
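
For context, that generation keeps only the export list. A version script such
as the following - the symbol names here are purely illustrative:

	DPDK_20.0 {
		global:
			rte_foo;
			rte_bar;
		local: *;
	};

is flattened into a .def file for the Windows linker, with the version
information dropped:

	EXPORTS
		rte_foo
		rte_bar

So the export list survives on Windows, but nothing equivalent to the
per-version symbol bindings remains.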



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 12:50  7% ` Thomas Monjalon
  2020-05-27 14:32  4%   ` Ray Kinsella
@ 2020-05-27 20:35  7%   ` Neil Horman
  2020-05-27 21:27  4%     ` Thomas Monjalon
  1 sibling, 1 reply; 200+ results
From: Neil Horman @ 2020-05-27 20:35 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Fady Bader, dev, Harini Ramakrishnan, Omar Cardona,
	Pallavi Kadam, Ranjit Menon, dmitry.kozliuk, mdr

On Wed, May 27, 2020 at 02:50:07PM +0200, Thomas Monjalon wrote:
> +Cc more people
> 
> 27/05/2020 12:41, Fady Bader:
> > What should we do with the ABI versioning in Windows ?
> 
> I think there are 2 questions here:
> 
> 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> The decision must be clearly documented.
> 
My first notion, without any greater thought, is "why wouldn't we?".  ABI
stability is OS-agnostic.  If a symbol is considered stable, there's no reason
that I can think of that it wouldn't be stable for each OS.

> 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> 
Can you elaborate on what exactly the issue is here?  I presume by your comment
above that Visual Studio either doesn't support symbol-level versioning or
doesn't support versioning at all?

If that's the case, and there is a commitment to make DPDK buildable on Windows,
I suppose the only choice is to make an ifdef WINDOWS section of the
rte_function_versioning.h file, and effectively turn all the macros into no-ops.
The BIND_DEFAULT_SYMBOL macro looks like it could still work, as MSVC has an
alias linker command that's implementable via __pragma, but that's probably all
we can do, unless there is some more robust versioning support that I can't find.
Note we will also likely need to augment the makefiles/meson files so that the
link stage doesn't pass the version script to the linker.

Neil

> 
> 

^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 12:50  7% ` Thomas Monjalon
@ 2020-05-27 14:32  4%   ` Ray Kinsella
  2020-05-27 20:35  7%   ` Neil Horman
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-27 14:32 UTC (permalink / raw)
  To: Thomas Monjalon, Fady Bader
  Cc: dev, Harini Ramakrishnan, Omar Cardona, Pallavi Kadam,
	Ranjit Menon, dmitry.kozliuk, nhorman


Is my impression that the Windows build is nascent a fair one?
Would we consider the entire build experimental for the moment?

Thanks,

Ray K

On 27/05/2020 13:50, Thomas Monjalon wrote:
> +Cc more people
> 
> 27/05/2020 12:41, Fady Bader:
>> What should we do with the ABI versioning in Windows ?
> 
> I think there are 2 questions here:
> 
> 1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
> The decision must be clearly documented.
> 
> 2/ How do we implement the macros in rte_function_versioning.h for Windows?
> Something needs to be done, otherwise we cannot compile libraries having some function versioning.
> 
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] ABI versioning in Windows
  2020-05-27 10:41  7% [dpdk-dev] ABI versioning in Windows Fady Bader
@ 2020-05-27 12:50  7% ` Thomas Monjalon
  2020-05-27 14:32  4%   ` Ray Kinsella
  2020-05-27 20:35  7%   ` Neil Horman
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2020-05-27 12:50 UTC (permalink / raw)
  To: Fady Bader
  Cc: dev, Harini Ramakrishnan, Omar Cardona, Pallavi Kadam,
	Ranjit Menon, dmitry.kozliuk, mdr, nhorman

+Cc more people

27/05/2020 12:41, Fady Bader:
> What should we do with the ABI versioning in Windows ?

I think there are 2 questions here:

1/ Do we want to maintain ABI compatibility on Windows like we do for Linux and FreeBSD?
The decision must be clearly documented.

2/ How do we implement the macros in rte_function_versioning.h for Windows?
Something needs to be done, otherwise we cannot compile libraries having some function versioning.



^ permalink raw reply	[relevance 7%]

* [dpdk-dev] ABI versioning in Windows
@ 2020-05-27 10:41  7% Fady Bader
  2020-05-27 12:50  7% ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Fady Bader @ 2020-05-27 10:41 UTC (permalink / raw)
  To: dev; +Cc: ocardona, Anand Rawat, Stephen Hemminger

What should we do with the ABI versioning in Windows ?

^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH] version: 20.08-rc0
@ 2020-05-27  8:41  7% David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-27  8:41 UTC (permalink / raw)
  To: dev; +Cc: thomas

Start a new release cycle with empty release notes.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .travis.yml                            |   2 +-
 ABI_VERSION                            |   2 +-
 VERSION                                |   2 +-
 doc/guides/rel_notes/index.rst         |   1 +
 doc/guides/rel_notes/release_20_08.rst | 139 +++++++++++++++++++++++++
 5 files changed, 143 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_20_08.rst

diff --git a/.travis.yml b/.travis.yml
index 2d2292ff64..14f8124233 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -36,7 +36,7 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
 
 env:
   global:
-    - REF_GIT_TAG=v20.02
+    - REF_GIT_TAG=v20.05
 
 jobs:
   include:
diff --git a/ABI_VERSION b/ABI_VERSION
index 204da679a1..a9ac8dacb0 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-20.0.2
+20.0.3
diff --git a/VERSION b/VERSION
index 4e3f998d00..30bbcd61a4 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-20.05.0
+20.08.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 31278d2a8a..05c9d837a4 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_20_08
     release_20_05
     release_20_02
     release_19_11
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
new file mode 100644
index 0000000000..39064afbe9
--- /dev/null
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -0,0 +1,139 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 20.08
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      make doc-guides-html
+
+      xdg-open build/doc/html/guides/rel_notes/release_20_08.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =========================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+* No ABI change that would break compatibility with 19.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =========================================================
-- 
2.23.0


^ permalink raw reply	[relevance 7%]

* Re: [dpdk-dev] [PATCH 20.08 8/9] devtools: support python3 only
  2020-05-22 13:23  4% ` [dpdk-dev] [PATCH 20.08 8/9] devtools: support python3 only Louise Kilheeney
@ 2020-05-27  6:15  0%   ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-27  6:15 UTC (permalink / raw)
  To: Louise Kilheeney, dev; +Cc: Neil Horman


Yes - I noticed the RPM packaging has become very pedantic about this of late.

Ray K

On 22/05/2020 14:23, Louise Kilheeney wrote:
> Changed script to explicitly use python3 only.
> 
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: Ray Kinsella <mdr@ashroe.eu>
> 
> Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
> ---
>  devtools/update_version_map_abi.py | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/devtools/update_version_map_abi.py b/devtools/update_version_map_abi.py
> index 616412a1c..58aa368f9 100755
> --- a/devtools/update_version_map_abi.py
> +++ b/devtools/update_version_map_abi.py
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python
> +#!/usr/bin/env python3
>  # SPDX-License-Identifier: BSD-3-Clause
>  # Copyright(c) 2019 Intel Corporation
>  
> @@ -9,7 +9,6 @@
>  from the devtools/update-abi.sh utility.
>  """
>  
> -from __future__ import print_function
>  import argparse
>  import sys
>  import re
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [dpdk-announce] DPDK 20.05 released
@ 2020-05-26 19:53  3% Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-26 19:53 UTC (permalink / raw)
  To: announce

A new release is available:
	https://fast.dpdk.org/rel/dpdk-20.05.tar.xz

It was quite a big release cycle:
	1304 commits from 189 authors
	1983 files changed, 145825 insertions(+), 29147 deletions(-)

It is not planned (yet) to start a maintenance branch for 20.05.
This version, like the previous one (20.02), is ABI-compatible with 19.11.

Below are some new features, grouped by category.
* General
	- packet processing graph
	- ring synchronisation modes for VMs and containers
	- RCU API for deferred resource reclamation
	- telemetry rework
	- low overhead tracing framework
	- GCC 10 support
* Networking
	- flow aging API
	- driver for Intel Foxville I225
* Cryptography
	- ChaCha20-Poly1305 crypto algorithm
	- event mode in the example application ipsec-secgw
* Baseband
	- 5G driver for Intel FPGA-based N3000

More details in the release notes:
	http://doc.dpdk.org/guides/rel_notes/release_20_05.html


There are 69 new contributors (including authors, reviewers and testers).
Welcome to Andrea Arcangeli, Asim Jamshed, Cheng Peng, Christos Ricudis,
Dave Burley, Dong Zhou, Dongsheng Rong, Eugeny Parshutin, Evan Swanson,
Fady Bader, Farah Smith, Guy Tzalik, Hailin Xu, Igor Russkikh, JP Lee,
Jakub Neruda, James Fox, Jianwei Mei, Jiawei Wang, Jun W Zhou,
Juraj Linkeš, Karra Satwik, Kishore Padmanabha, Lihong Ma, Lijian Zhang,
Linsi Yuan, Louise Kilheeney, Lukasz Wojciechowski, Mairtin o Loingsigh,
Martin Spinler, Matteo Croce, Michael Haeuptle, Mike Baucom,
Mit Matelske, Mohsin Shaikh, Muhammad Ahmad, Muhammad Bilal, Nannan Lu,
Narcisa Vasile, Niall Power, Peter Spreadborough, Przemyslaw Patynowski,
Qi Fu, Real Valiquette, Rohit Raj, Roland Qi, Sarosh Arif, Satheesh Paul,
Shahaji Bhosle, Sharon Haroni, Sivaprasad Tummala, Souvik Dey,
Steven Webster, Tal Shnaiderman, Tasnim Bashar, Vadim Podovinnikov,
Venky Venkatesh, Vijaya Mohan Guvva, Vu Pham, Wentao Cui, Xi Zhang, 
Xiaoxiao Zeng, Xinfeng Zhao, Yash Sharma, Yu Jiang, Zalfresso-Jundzillo,
Zhihong Peng, Zhimin Huang, and Zhiwei He.

Below is the number of patches per company (with authors count):
	438     Intel (57)
	194     Mellanox (24)
	171     Marvell (21)
	 89     Broadcom (12)
	 75     Huawei (10)
	 49     Red Hat (7)
	 49     NXP (7)
	 35     Microsoft (3)
	 34     ARM (6)
	 28     Semihalf (1)
	 24     Samsung (2)
	 22     OKTET Labs (2)
	 11     Ericsson (1)
	 11     AMD (3)
	  9     Chelsio (1)
	  8     BIFIT (1)
	  6     Cisco (3)
	  5     Solarflare (1)
	  5     IBM (2)
	  5     Emumba (3)
	  5     6WIND (1)


The new features for 20.08 may be submitted during the next 17 days.
DPDK 20.08 should be released in early August, on a tight schedule:
	http://core.dpdk.org/roadmap#dates

Thanks everyone, and happy birthday to our RTE_MAGIC!



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3] doc: plan splitting the ethdev ops struct
  2020-05-25  9:11  0%     ` Andrew Rybchenko
@ 2020-05-26 13:55  0%       ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-26 13:55 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: dev, Neil Horman, John McNamara, Marko Kovacevic, dev,
	Jerin Jacob Kollanukkaran, David Marchand, mdr, olivier.matz,
	Andrew Rybchenko

25/05/2020 11:11, Andrew Rybchenko:
> On 5/25/20 2:18 AM, Thomas Monjalon wrote:
> > 04/03/2020 10:57, Ferruh Yigit:
> >> For ABI compatibility, it is better to hide internal data structures
> >> from the application as much as possible. But because of some inline
> >> functions, 'struct eth_dev_ops' can't be hidden completely.
> >>
> >> The plan is to split 'struct eth_dev_ops' in two: the ops used by
> >> inline functions and the ops not used by them, and to hide the second
> >> part from the application completely.
> >>
> >> Because of the ABI break, the work will be done in 20.11.
> >>
> >> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >> ---
> >> +* ethdev: Split the ``struct eth_dev_ops`` struct to hide it as much as possible
> >> +  will be done in 20.11.
> >> +  Currently the ``struct eth_dev_ops`` struct is accessible by the application
> >> +  because some inline functions, like ``rte_eth_tx_descriptor_status()``,
> >> +  access the struct directly.
> >> +  The struct will be separate in two, the ops used by inline functions will be moved
> >> +  next to Rx/Tx burst functions, rest of the ``struct eth_dev_ops`` struct will be
> >> +  moved to header file for drivers to hide it from applications.
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> 
> Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>

Acked-by: David Marchand <david.marchand@redhat.com>

Applied, thanks



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: plan splitting the ethdev ops struct
  @ 2020-05-26 13:01  0%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-26 13:01 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Ferruh Yigit, dev, Neil Horman, John McNamara, Marko Kovacevic,
	dpdk-dev, David Marchand, Andrew Rybchenko

18/02/2020 06:07, Jerin Jacob:
> On Mon, Feb 17, 2020 at 9:08 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > For ABI compatibility, it is better to hide internal data structures
> > from the application as much as possible. But because of some inline
> > functions, 'struct eth_dev_ops' can't be hidden completely.
> >
> > The plan is to split 'struct eth_dev_ops' in two: the ops used by
> > inline functions and the ops not used by them, and to hide the second
> > part from the application completely.
> 
> It is a good improvement.  IMO, if anything is used in the fast path,
> it should be in ``struct rte_eth_dev``, and the rest can be moved
> completely to internal. In this case, if `rte_eth_tx_descriptor_status`
> is not used on the fast path, maybe we don't need to maintain the
> inline status and can move it completely to a .c file.
> 
> Those may be specifics of the work. In general, this change looks good to me.
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>

This ack is missing from v3.
Jerin, please could you confirm on v3?
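
To illustrate the direction being discussed, here is a hypothetical sketch
only; the type and member names below are illustrative (reusing the existing
callback typedefs from rte_ethdev_core.h) and not the actual 20.11 layout:

	/* Ops reached by inline helpers stay visible to applications,
	 * next to the Rx/Tx burst function pointers: */
	struct eth_fp_ops_sketch {
		eth_rx_burst_t rx_pkt_burst;
		eth_tx_burst_t tx_pkt_burst;
		eth_rx_descriptor_status_t rx_descriptor_status;
		eth_tx_descriptor_status_t tx_descriptor_status;
	};

	/* The remaining control-path callbacks move to a driver-only
	 * header, so their layout stops being part of the
	 * application-facing ABI: */
	struct eth_dev_ops_sketch {
		eth_dev_configure_t dev_configure;
		eth_dev_start_t     dev_start;
		/* ... all other slow-path ops ... */
	};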




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] DPDK-20.05 RC4 quick report
@ 2020-05-26  8:33  3% Peng, Yuan
  0 siblings, 0 replies; 200+ results
From: Peng, Yuan @ 2020-05-26  8:33 UTC (permalink / raw)
  To: dev

DPDK-20.05 RC4 quick report

  *   Created ~400+ new test cases in total for the DPDK 20.05 new features.
  *   Ran 9976 cases in total; the execution percentage is 100% and the pass rate is about 99.5%. 2 new issues were found, none critical.
  *   Checked build and compile, no new issue found.
  *   Checked basic NIC PMD (i40e, ixgbe, ice) PF & VF regression, found 1 new PF issue.
  *   Checked virtio regression test, no new bug found.
  *   Checked cryptodev and compressdev regression, no new issues found so far.
  *   Checked NIC performance, no new issue found so far.
  *   Checked ABI test, no new issue found so far.
  *   Checked 20.05 new features: 1 new issue found so far.

Thank you.
Yuan.


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2] doc: update release notes for 20.05
@ 2020-05-25 19:11  4% John McNamara
  0 siblings, 0 replies; 200+ results
From: John McNamara @ 2020-05-25 19:11 UTC (permalink / raw)
  To: dev; +Cc: thomas, ktraynor, ciara.power, fiona.trahe, John McNamara

Fix grammar, spelling and formatting of DPDK 20.05 release notes.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
---

v2: * Addressed comments from mailing list.
    * Tried to add a more coherent grouping to the crypto changes.


 doc/guides/rel_notes/release_20_05.rst | 274 ++++++++++++++++-----------------
 1 file changed, 129 insertions(+), 145 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index a61631e..3c6106b 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -56,38 +56,38 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
-* **Added Trace Library and Tracepoints**
+* **Added Trace Library and Tracepoints.**
 
-  A native implementation of ``common trace format(CTF)`` based trace library
-  has been added to provide the ability to add tracepoints in
-  application/library to get runtime trace/debug information for control and
-  fast APIs with minimum impact on fast path performance.
-  Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-  Added tracepoints in ``EAL``, ``ethdev``, ``cryptodev``, ``eventdev`` and
-  ``mempool`` libraries for important functions.
+  Added a native implementation of the "common trace format" (CTF) based trace
+  library. This allows the user add tracepoints in an application/library to
+  get runtime trace/debug information for control, and fast APIs with minimum
+  impact on fast path performance. Typical trace overhead is ~20 cycles and
+  instrumentation overhead is 1 cycle.  Added tracepoints in ``EAL``,
+  ``ethdev``, ``cryptodev``, ``eventdev`` and ``mempool`` libraries for
+  important functions.
 
-* **Added APIs for RCU defer queue.**
+* **Added APIs for RCU defer queues.**
 
-  Added APIs to create and delete defer queue. Additional APIs are provided
+  Added APIs to create and delete defer queues. Additional APIs are provided
   to enqueue a deleted resource and reclaim the resource in the future.
-  These APIs help the application use lock-free data structures with
+  These APIs help an application use lock-free data structures with
   less effort.
 
 * **Added new API for rte_ring.**
 
-  * New synchronization modes for rte_ring.
+  * Introduced new synchronization modes for ``rte_ring``.
 
-  Introduced new optional MT synchronization modes for rte_ring:
-  Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
-  With these mode selected, rte_ring shows significant improvements for
-  average enqueue/dequeue times on overcommitted systems.
+    Introduced new optional MT synchronization modes for ``rte_ring``:
+    Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
+    With these modes selected, ``rte_ring`` shows significant improvements for
+    average enqueue/dequeue times on overcommitted systems.
 
-  * Added peek style API for rte_ring.
+  * Added peek style API for ``rte_ring``.
 
-  For rings with producer/consumer in RTE_RING_SYNC_ST, RTE_RING_SYNC_MT_HTS
-  mode, provide an ability to split enqueue/dequeue operation into two phases
-  (enqueue/dequeue start; enqueue/dequeue finish). That allows user to inspect
-  objects in the ring without removing them from it (aka MT safe peek).
+    For rings with producer/consumer in ``RTE_RING_SYNC_ST``, ``RTE_RING_SYNC_MT_HTS``
+    mode, provide the ability to split enqueue/dequeue operation into two phases
+    (enqueue/dequeue start and enqueue/dequeue finish). This allows the user to inspect
+    objects in the ring without removing them (aka MT safe peek).
 
 * **Added flow aging support.**
 
@@ -100,14 +100,16 @@ New Features
   * Added new query: ``rte_flow_get_aged_flows`` to get the aged-out flows
     contexts from the port.
 
-* **ethdev: Added a new value to link speed for 200Gbps**
+* **ethdev: Added a new value to link speed for 200Gbps.**
 
-* **Updated Amazon ena driver.**
+  Added a new ethdev value to for link speeds of 200Gbps.
 
-  Updated ena PMD with new features and improvements, including:
+* **Updated the Amazon ena driver.**
+
+  Updated the ena PMD with new features and improvements, including:
 
   * Added support for large LLQ (Low-latency queue) headers.
-  * Added Tx drops as new extended driver statistic.
+  * Added Tx drops as a new extended driver statistic.
   * Added support for accelerated LLQ mode.
   * Handling of the 0 length descriptors on the Rx path.
 
@@ -115,14 +117,14 @@ New Features
 
   Updated Hisilicon hns3 driver with new features and improvements, including:
 
-  * Added support for TSO
-  * Added support for configuring promiscuous and allmulticast mode for VF
+  * Added support for TSO.
+  * Added support for configuring promiscuous and allmulticast mode for VF.
 
 * **Updated Intel i40e driver.**
 
   Updated i40e PMD with new features and improvements, including:
 
-  * Enable MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
+  * Enabled MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
   * Added support for RSS using L3/L4 source/destination only.
   * Added support for setting hash function in rte flow.
 
@@ -139,14 +141,14 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF (Device Config Function) feature.
-  * Added switch filter support for intel DCF.
+  * Added switch filter support for Intel DCF.
 
 * **Updated Marvell OCTEON TX2 ethdev driver.**
 
-  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support with
-  below features.
+  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+  including:
 
-  * Hierarchial Scheduling with DWRR and SP.
+  * Hierarchical Scheduling with DWRR and SP.
   * Single rate - Two color, Two rate - Three color shaping.
 
 * **Updated Mellanox mlx5 driver.**
@@ -158,14 +160,27 @@ New Features
   * Added support for configuring Hairpin queue data buffer size.
   * Added support for jumbo frame size (9K MTU) in Multi-Packet RQ mode.
   * Removed flow rules caching for memory saving and compliance with ethdev API.
-  * Optimized the memory consumption of flow.
-  * Added support for flow aging based on hardware counter.
-  * Added support for flow pattern with wildcard VLAN item (without VID value).
-  * Updated support for matching on GTP header, added match on GTP flags.
+  * Optimized the memory consumption of flows.
+  * Added support for flow aging based on hardware counters.
+  * Added support for flow patterns with wildcard VLAN items (without VID value).
+  * Updated support for matching on GTP headers, added match on GTP flags.
+
+* **Added a new driver for Intel Foxville I225 devices.**
+
+  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
+  :doc:`../nics/igc` NIC guide for more details on this new driver.
+
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for host based flow table management.
+  * Added flow counters to extended stats.
+  * Added PCI function stats to extended stats.
 
 * **Added Chacha20-Poly1305 algorithm to Cryptodev API.**
 
-  Chacha20-Poly1305 AEAD algorithm can now be supported in Cryptodev.
+  Added support for Chacha20-Poly1305 AEAD algorithm in Cryptodev.
 
 * **Updated the AESNI MB crypto PMD.**
 
@@ -175,7 +190,7 @@ New Features
 
 * **Updated the AESNI GCM crypto PMD.**
 
-  * Added support for intel-ipsec-mb version 0.54.
+  Added support for intel-ipsec-mb version 0.54.
 
 * **Updated the ZUC crypto PMD.**
 
@@ -186,60 +201,51 @@ New Features
 
 * **Updated the SNOW3G crypto PMD.**
 
-  * Added support for intel-ipsec-mb version 0.54.
+  Added support for intel-ipsec-mb version 0.54.
 
 * **Updated the KASUMI crypto PMD.**
 
-  * Added support for intel-ipsec-mb version 0.54.
-
-* **Added a new driver for Intel Foxville I225 devices.**
-
-  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
-  :doc:`../nics/igc` NIC guide for more details on this new driver.
-
-* **Updated Broadcom bnxt driver.**
-
-  Updated Broadcom bnxt driver with new features and improvements, including:
+  Added support for intel-ipsec-mb version 0.54.
 
-  * Added support for host based flow table management
-  * Added flow counters to extended stats
-  * Added PCI function stats to extended stats
+* **Updated the QuickAssist Technology (QAT) Crypto PMD.**
 
-* **Added handling of mixed crypto algorithms in QAT PMD for GEN2.**
+  * Added handling of mixed crypto algorithms in QAT PMD for GEN2.
 
-  Enabled handling of mixed algorithms in encrypted digest hash-cipher
-  (generation) and cipher-hash (verification) requests in QAT PMD
-  when running on GEN2 QAT hardware with particular firmware versions
-  (GEN3 support was added in DPDK 20.02).
+    Enabled handling of mixed algorithms in encrypted digest hash-cipher
+    (generation) and cipher-hash (verification) requests in QAT PMD when
+    running on GEN2 QAT hardware with particular firmware versions (GEN3
+    support was added in DPDK 20.02).
 
-* **Added plain SHA-1,224,256,384,512 support to QAT PMD.**
+  * Added plain SHA-1, 224, 256, 384, 512 support to QAT PMD.
 
-  Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 hashes
-  to QAT PMD.
+    Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512
+    hashes to QAT PMD.
 
-* **Added AES-GCM/GMAC J0 support to QAT PMD.**
+  * Added AES-GCM/GMAC J0 support to QAT PMD.
 
-  Added support for AES-GCM/GMAC J0 to Intel QuickAssist Technology PMD. User can
-  use this feature by passing zero length IV in appropriate xform. For more
-  info please refer to rte_crypto_sym.h J0 comments.
+    Added support for AES-GCM/GMAC J0 to Intel QuickAssist Technology PMD. The
+    user can use this feature by passing a zero length IV in the appropriate
+    xform. For more information refer to the doxygen comments in
+    ``rte_crypto_sym.h`` for ``J0``.
 
-* **Updated the QAT PMD for AES-256 DOCSIS.**
+  * Updated the QAT PMD for AES-256 DOCSIS.
 
-  Added AES-256 DOCSIS algorithm support to QAT PMD.
+    Added AES-256 DOCSIS algorithm support to the QAT PMD.
 
-* **Added QAT intermediate buffer too small handling in QAT compression PMD.**
+* **Updated the QuickAssist Technology (QAT) Compression PMD.**
 
-  Added a special way of buffer handling when internal QAT intermediate buffer
-  is too small for Huffman dynamic compression operation. Instead of falling
+  Added special buffer handling when the internal QAT intermediate buffer is
+  too small for the Huffman dynamic compression operation. Instead of falling
   back to fixed compression, the operation is now split into multiple smaller
-  dynamic compression requests (possible to execute on QAT) and their results
-  are then combined and copied into the output buffer. This is not possible if
-  any checksum calculation was requested - in such case the code falls back to
-  fixed compression as before.
+  dynamic compression requests (which are possible to execute on QAT) and
+  their results are then combined and copied into the output buffer. This is
+  not possible if any checksum calculation was requested - in such cases the
+  code falls back to fixed compression as before.
 
 * **Updated the turbo_sw bbdev PMD.**
 
-  Supported large size code blocks which does not fit in one mbuf segment.
+  Added support for large size code blocks which do not fit in one mbuf
+  segment.
 
 * **Added Intel FPGA_5GNR_FEC bbdev PMD.**
 
@@ -255,31 +261,32 @@ New Features
     accurate load balancing.
   * Improved behavior on high-core count systems.
   * Reduced latency in low-load situations.
-  * Extended DSW xstats with migration- and load-related statistics.
+  * Extended DSW xstats with migration and load-related statistics.
 
-* **Updated ipsec-secgw sample application with following features.**
+* **Updated ipsec-secgw sample application.**
 
-  * Updated ipsec-secgw application to add event based packet processing.
-    The worker thread(s) would receive events and submit them back to the
-    event device after the processing. This way, multicore scaling and HW
-    assisted scheduling is achieved by making use of the event device
-    capabilities. The event mode currently supports only inline IPsec
-    protocol offload.
+  Updated the ``ipsec-secgw`` sample application with the following features:
 
-  * Updated ipsec-secgw application to support key sizes for AES-192-CBC,
-    AES-192-GCM, AES-256-GCM algorithms.
+  * Updated the application to add event based packet processing. The worker
+    thread(s) would receive events and submit them back to the event device
+    after the processing. This way, multicore scaling and HW assisted
+    scheduling is achieved by making use of the event device capabilities. The
+    event mode currently only supports inline IPsec protocol offload.
 
-  * Added IPsec inbound load-distribution support for ipsec-secgw application
-    using NIC load distribution feature(Flow Director).
+  * Updated the application to support key sizes for AES-192-CBC, AES-192-GCM,
+    AES-256-GCM algorithms.
+
+  * Added IPsec inbound load-distribution support for the application using
+    NIC load distribution feature (Flow Director).
 
 * **Updated Telemetry Library.**
 
-  The updated Telemetry library has many improvements on the original version
-  to make it more accessible and scalable:
+  The updated Telemetry library has been significantly improved in relation to
+  the original version to make it more accessible and scalable:
 
-  * It enables DPDK libraries and applications provide their own specific
-    telemetry information, rather than being limited to what could be reported
-    through the metrics library.
+  * It now enables DPDK libraries and applications to provide their own
+    specific telemetry information, rather than being limited to what could be
+    reported through the metrics library.
 
   * It is no longer dependent on the external Jansson library, which allows
     Telemetry be enabled by default.
@@ -287,61 +294,53 @@ New Features
   * The socket handling has been simplified making it easier for clients to
     connect and retrieve information.
 
-* **Added rte_graph library.**
+* **Added the rte_graph library.**
+
+  The Graph architecture abstracts the data processing functions as ``nodes``
+  and ``links`` them together to create a complex ``graph`` to enable
+  reusable/modular data processing functions. The graph library provides APIs
+  to enable graph framework operations such as create, lookup, dump and
+  destroy on graph and node operations such as clone, edge update, and edge
+  shrink, etc. The API also allows the creation of a stats cluster to monitor
+  per graph and per node statistics.
 
-  Graph architecture abstracts the data processing functions as a ``node`` and
-  ``links`` them together to create a complex ``graph`` to enable reusable/modular
-  data processing functions. The graph library provides API to enable graph
-  framework operations such as create, lookup, dump and destroy on graph and node
-  operations such as clone, edge update, and edge shrink, etc.
-  The API also allows to create the stats cluster to monitor per graph and per node stats.
+* **Added the rte_node library.**
 
-* **Added rte_node library which consists of a set of packet processing nodes.**
+  Added the ``rte_node`` library that consists of nodes used by the
+  ``rte_graph`` library. Each node performs a specific packet processing
+  function based on the application configuration.
 
-  The rte_node library that consists of nodes used by rte_graph library. Each
-  node performs a specific packet processing function based on application
-  configuration. The following nodes are added:
+  The following nodes are added:
 
-  * Null node: Skeleton node that defines the general structure of a node.
-  * Ethernet device node: Consists of ethernet Rx/Tx nodes as well as ethernet
+  * Null node: A skeleton node that defines the general structure of a node.
+  * Ethernet device node: Consists of Ethernet Rx/Tx nodes as well as Ethernet
     control APIs.
-  * IPv4 lookup node: Consists of ipv4 extract and lpm lookup node. Routes can
-    be configured by the application through ``rte_node_ip4_route_add`` function.
-  * IPv4 rewrite node: Consists of ipv4 and ethernet header rewrite functionality
-    that can be configured through ``rte_node_ip4_rewrite_add`` function.
+  * IPv4 lookup node: Consists of IPv4 extract and LPM lookup node. Routes can
+    be configured by the application through the ``rte_node_ip4_route_add``
+    function.
+  * IPv4 rewrite node: Consists of IPv4 and Ethernet header rewrite
+    functionality that can be configured through the
+    ``rte_node_ip4_rewrite_add`` function.
   * Packet drop node: Frees the packets received to their respective mempool.
 
 * **Added new l3fwd-graph sample application.**
 
-  Added an example application ``l3fwd-graph``. It demonstrates the usage of graph
-  library and node library for packet processing. In addition to the library usage
-  demonstration, this application can use for performance comparison with existing
-  ``l3fwd`` (The static code without any nodes) with the modular ``l3fwd-graph``
-  approach.
+  Added an example application ``l3fwd-graph``. This demonstrates the usage of
+  the graph library and node library for packet processing. In addition to the
+  library usage demonstration, this application can be used for performance
+  comparison of the existing ``l3fwd`` (static code without any nodes) with
+  the modular ``l3fwd-graph`` approach.
 
-* **Updated testpmd application.**
+* **Updated the testpmd application.**
 
-  * Added a new cmdline option ``--rx-mq-mode`` which can be used to test PMD's
-    behaviour on handling Rx mq mode.
+  Added a new cmdline option ``--rx-mq-mode`` which can be used to test PMD's
+  behaviour on handling Rx mq mode.
 
 * **Added support for GCC 10.**
 
   Added support for building with GCC 10.1.
 
 
-Removed Items
--------------
-
-.. This section should contain removed items in this release. Sample format:
-
-   * Add a short 1-2 sentence description of the removed item
-     in the past tense.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =========================================================
-
-
 API Changes
 -----------
 
@@ -358,7 +357,7 @@ API Changes
    =========================================================
 
 * mempool: The API of ``rte_mempool_populate_iova()`` and
-  ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
+  ``rte_mempool_populate_virt()`` changed to return 0 instead of ``-EINVAL``
   when there is not enough room to store one object.
 
 
@@ -380,21 +379,6 @@ ABI Changes
 * No ABI change that would break compatibility with DPDK 20.02 and 19.11.
 
 
-Known Issues
-------------
-
-.. This section should contain new known issues in this release. Sample format:
-
-   * **Add title in present tense with full stop.**
-
-     Add a short 1-2 sentence description of the known issue
-     in the present tense. Add information on any known workarounds.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =========================================================
-
-
 Tested Platforms
 ----------------
 
-- 
2.7.5


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3] doc: plan splitting the ethdev ops struct
    2020-05-24 23:18  0%   ` Thomas Monjalon
@ 2020-05-25 10:24  0%   ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2020-05-25 10:24 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Neil Horman, John McNamara, Marko Kovacevic, dev,
	Jerin Jacob Kollanukkaran, Thomas Monjalon, Andrew Rybchenko

On Wed, Mar 4, 2020 at 10:57 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> For ABI compatibility, it is better to hide internal data structures
> from the application as much as possible. But because of some inline
> functions, 'struct eth_dev_ops' can't be hidden completely.
>
> The plan is to split 'struct eth_dev_ops' in two: the ops used by
> inline functions and the ops not used by them, and to hide the second
> part from the application completely.
>
> Because of the ABI break, the work will be done in 20.11.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>

Acked-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] doc: plan splitting the ethdev ops struct
  2020-05-24 23:18  0%   ` Thomas Monjalon
@ 2020-05-25  9:11  0%     ` Andrew Rybchenko
  2020-05-26 13:55  0%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2020-05-25  9:11 UTC (permalink / raw)
  To: Thomas Monjalon, Ferruh Yigit
  Cc: Neil Horman, John McNamara, Marko Kovacevic, dev,
	Jerin Jacob Kollanukkaran, David Marchand, mdr, olivier.matz

On 5/25/20 2:18 AM, Thomas Monjalon wrote:
> 04/03/2020 10:57, Ferruh Yigit:
>> For ABI compatibility, it is better to hide internal data structures
>> from the application as much as possible. But because of some inline
>> functions, 'struct eth_dev_ops' can't be hidden completely.
>>
>> The plan is to split 'struct eth_dev_ops' in two: the ops used by
>> inline functions and the ops not used by them, and to hide the second
>> part from the application completely.
>>
>> Because of the ABI break, the work will be done in 20.11.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> +* ethdev: Split the ``struct eth_dev_ops`` struct to hide it as much as possible
>> +  will be done in 20.11.
>> +  Currently the ``struct eth_dev_ops`` struct is accessible by the application
>> +  because some inline functions, like ``rte_eth_tx_descriptor_status()``,
>> +  access the struct directly.
>> +  The struct will be separate in two, the ops used by inline functions will be moved
>> +  next to Rx/Tx burst functions, rest of the ``struct eth_dev_ops`` struct will be
>> +  moved to header file for drivers to hide it from applications.
> Acked-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] DPDK-20.05 RC3 quick report
@ 2020-05-25  4:16  3% Peng, Yuan
  0 siblings, 0 replies; 200+ results
From: Peng, Yuan @ 2020-05-25  4:16 UTC (permalink / raw)
  To: dev

DPDK-20.05 RC3 quick report

  *   Created ~400+ new test cases in total for the DPDK 20.05 new features.
  *   Ran 10203 cases in total; the execution percentage is 100% and the pass rate is about 99%. 8 new issues were found, including a high-level issue (which has been fixed and verified).
  *   Checked build and compile, found 1 new issue, now fixed and verified.
  *   Checked basic NIC PMD (i40e, ixgbe, ice) PF & VF regression, found 5 new PF issues.
  *   Checked virtio regression test, no new bug found.
  *   Checked cryptodev and compressdev regression, no new issues found so far.
  *   Checked NIC performance, no new issue found so far.
  *   Checked ABI test, no new issue found so far.
  *   Checked 20.05 new features: 2 new issues found so far.

Thank you.
Yuan.


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 01/11] eal: replace rte_page_sizes with a set of constants
  @ 2020-05-25  0:37  4%   ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2020-05-25  0:37 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Malloy, Narcisa Ana Maria Vasile, Fady Bader,
	Tal Shnaiderman, Dmitry Kozlyuk, Jerin Jacob, Anatoly Burakov

Clang on Windows follows the MS ABI, where enum values are limited to 2^31-1.
Enum rte_page_sizes has members valued above this limit, which get
wrapped to zero, resulting in a compilation error (duplicate values in
the enum). Using the MS ABI is mandatory for Windows EAL to call Win32 APIs.

Remove rte_page_sizes and replace its values with #define's.
This enumeration is not used in the public API, so there's no ABI breakage.

Suggested-by: Jerin Jacob <jerinjacobk@gmail.com>
Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---

Release notes for 20.08 don't exist yet, so not adding anything.

 lib/librte_eal/include/rte_memory.h | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/lib/librte_eal/include/rte_memory.h b/lib/librte_eal/include/rte_memory.h
index 3d8d0bd69..65374d53a 100644
--- a/lib/librte_eal/include/rte_memory.h
+++ b/lib/librte_eal/include/rte_memory.h
@@ -24,19 +24,16 @@ extern "C" {
 #include <rte_config.h>
 #include <rte_fbarray.h>
 
-__extension__
-enum rte_page_sizes {
-	RTE_PGSIZE_4K    = 1ULL << 12,
-	RTE_PGSIZE_64K   = 1ULL << 16,
-	RTE_PGSIZE_256K  = 1ULL << 18,
-	RTE_PGSIZE_2M    = 1ULL << 21,
-	RTE_PGSIZE_16M   = 1ULL << 24,
-	RTE_PGSIZE_256M  = 1ULL << 28,
-	RTE_PGSIZE_512M  = 1ULL << 29,
-	RTE_PGSIZE_1G    = 1ULL << 30,
-	RTE_PGSIZE_4G    = 1ULL << 32,
-	RTE_PGSIZE_16G   = 1ULL << 34,
-};
+#define RTE_PGSIZE_4K   (1ULL << 12)
+#define RTE_PGSIZE_64K  (1ULL << 16)
+#define RTE_PGSIZE_256K (1ULL << 18)
+#define RTE_PGSIZE_2M   (1ULL << 21)
+#define RTE_PGSIZE_16M  (1ULL << 24)
+#define RTE_PGSIZE_256M (1ULL << 28)
+#define RTE_PGSIZE_512M (1ULL << 29)
+#define RTE_PGSIZE_1G   (1ULL << 30)
+#define RTE_PGSIZE_4G   (1ULL << 32)
+#define RTE_PGSIZE_16G  (1ULL << 34)
 
 #define SOCKET_ID_ANY -1                    /**< Any NUMA socket. */
 
-- 
2.25.4
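
To make the failure mode concrete, here is a constructed example (not part of
the patch): with clang targeting the MS ABI, enumerator values are limited to
the range of int, so the 64-bit page sizes truncate and collide:

	/* Constructed illustration only: under the MS ABI both enumerators
	 * are truncated to int and wrap to 0, so the compiler rejects the
	 * duplicate values in the enum. */
	enum big_sizes {
		SIZE_4G  = 1ULL << 32, /* truncated to 0 */
		SIZE_16G = 1ULL << 34  /* also 0 -> duplicate */
	};

Plain #define's carry the full 64-bit constants and avoid the enum's
underlying-type limit entirely.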


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2] doc: deprecation notice to mark tm spec as experimental
  2020-05-21 10:49  0%     ` Jerin Jacob
  2020-05-24 20:58  0%       ` Nithin Kumar D
@ 2020-05-24 23:33  0%       ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-24 23:33 UTC (permalink / raw)
  To: Nithin Dabilpuram
  Cc: Dumitrescu, Cristian, dev, Nithin Dabilpuram, Yigit, Ferruh,
	Richardson, Bruce, bluca, Singh, Jasvinder, arybchenko, Kinsella,
	Ray, nhorman, ktraynor, david.marchand, Mcnamara, John,
	Kovacevic, Marko, dev, jerinj, kkanas, Jerin Jacob

> > > From: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > >
> > > Based on the discussion in the mail thread, it was concluded that
> > > all traffic manager APIs (rte_tm.h) need to be marked experimental
> > > for a few more releases to support further improvements to the spec.
> > >
> > > https://mails.dpdk.org/archives/dev/2020-April/164970.html
> > >
> > > Adding a deprecation notice for the same in advance.
> > >
> > > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > > ---
> > > +* traffic manager: All traffic manager API's in ``rte_tm.h`` were mistakenly made
> > > +  abi stable in the v19.11 release. The TM maintainer and other contributor's have
> > > +  agreed to keep the TM API's as experimental in expectation of additional spec
> > > +  improvements. Therefore, all API's in ``rte_tm.h`` will be marked back as
> > > +  experimental in v20.11 DPDK release. For more details, please see `the thread
> > > +  <https://mails.dpdk.org/archives/dev/2020-April/164970.html>`_.
> >
> > Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>

Applied, thanks



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] doc: plan splitting the ethdev ops struct
  @ 2020-05-24 23:18  0%   ` Thomas Monjalon
  2020-05-25  9:11  0%     ` Andrew Rybchenko
  2020-05-25 10:24  0%   ` David Marchand
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-24 23:18 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Neil Horman, John McNamara, Marko Kovacevic, dev,
	Jerin Jacob Kollanukkaran, David Marchand, Andrew Rybchenko, mdr,
	olivier.matz

04/03/2020 10:57, Ferruh Yigit:
> For ABI compatibility, it is better to hide internal data structures
> from the application as much as possible. But because of some inline
> functions, 'struct eth_dev_ops' can't be hidden completely.
>
> The plan is to split 'struct eth_dev_ops' in two: the ops used by
> inline functions and the ops not used by them, and to hide the second
> part from the application completely.
>
> Because of the ABI break, the work will be done in 20.11.
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> +* ethdev: Split the ``struct eth_dev_ops`` struct to hide it as much as possible
> +  will be done in 20.11.
> +  Currently the ``struct eth_dev_ops`` struct is accessible by the application
> +  because some inline functions, like ``rte_eth_tx_descriptor_status()``,
> +  access the struct directly.
> +  The struct will be separate in two, the ops used by inline functions will be moved
> +  next to Rx/Tx burst functions, rest of the ``struct eth_dev_ops`` struct will be
> +  moved to header file for drivers to hide it from applications.

Acked-by: Thomas Monjalon <thomas@monjalon.net>



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: deprecation notice to mark tm spec as experimental
  2020-05-21 10:49  0%     ` Jerin Jacob
@ 2020-05-24 20:58  0%       ` Nithin Kumar D
  2020-05-24 23:33  0%       ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Nithin Kumar D @ 2020-05-24 20:58 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Dumitrescu, Cristian, Yigit, Ferruh, Richardson, Bruce, thomas,
	bluca, Singh, Jasvinder, arybchenko, Kinsella, Ray, nhorman,
	ktraynor, david.marchand, Mcnamara, John, Kovacevic, Marko, dev,
	jerinj, kkanas, Nithin Dabilpuram

Hi Thomas,

Can this be merged? It was discussed and agreed upon a while back.

--
Nithin

On Thu, May 21, 2020, 16:19 Jerin Jacob <jerinjacobk@gmail.com> wrote:

> On Tue, May 5, 2020 at 2:25 PM Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Nithin Dabilpuram <nithind1988@gmail.com>
> > > Sent: Tuesday, May 5, 2020 9:08 AM
> > > To: Yigit, Ferruh <ferruh.yigit@intel.com>; Richardson, Bruce
> > > <bruce.richardson@intel.com>; Dumitrescu, Cristian
> > > <cristian.dumitrescu@intel.com>; thomas@monjalon.net;
> > > bluca@debian.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> > > arybchenko@solarflare.com; Kinsella, Ray <ray.kinsella@intel.com>;
> > > nhorman@tuxdriver.com; ktraynor@redhat.com;
> > > david.marchand@redhat.com; Mcnamara, John
> > > <john.mcnamara@intel.com>; Kovacevic, Marko
> > > <marko.kovacevic@intel.com>
> > > Cc: dev@dpdk.org; jerinj@marvell.com; kkanas@marvell.com; Nithin
> > > Dabilpuram <ndabilpuram@marvell.com>
> > > Subject: [PATCH v2] doc: deprecation notice to mark tm spec as experimental
> > >
> > > From: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > >
> > > Based on the discussion in the mail thread, it was concluded that
> > > all traffic manager APIs (rte_tm.h) need to be marked experimental
> > > for a few more releases to support further improvements to the spec.
> > >
> > > https://mails.dpdk.org/archives/dev/2020-April/164970.html
> > >
> > > Adding a deprecation notice for the same in advance.
> > >
> > > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > > ---
> > >  doc/guides/rel_notes/deprecation.rst | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > index 1339f54..2c76f36 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -118,3 +118,10 @@ Deprecation Notices
> > >    Python 2 support will be completely removed in 20.11.
> > >    In 20.08, explicit deprecation warnings will be displayed when running
> > >    scripts with Python 2.
> > > +
> > > +* traffic manager: All traffic manager API's in ``rte_tm.h`` were mistakenly made
> > > +  abi stable in the v19.11 release. The TM maintainer and other contributor's have
> > > +  agreed to keep the TM API's as experimental in expectation of additional spec
> > > +  improvements. Therefore, all API's in ``rte_tm.h`` will be marked back as
> > > +  experimental in v20.11 DPDK release. For more details, please see `the thread
> > > +  <https://mails.dpdk.org/archives/dev/2020-April/164970.html>`_.
> > > --
> > > 2.8.4
> >
> > Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
>
>
> >
>
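
For readers wanting to check which section a given ``rte_tm`` symbol is
exported from today, the version map of the ethdev library (which hosts the
TM API) can be inspected directly; a hedged sketch, assuming the ``-S``
section filter of the ``map-list-symbol.sh`` helper behaves as its name
suggests:

    # list the symbols currently tagged EXPERIMENTAL for ethdev/TM
    ./buildtools/map-list-symbol.sh -S EXPERIMENTAL \
        lib/librte_ethdev/rte_ethdev_version.map

    # or simply print the EXPERIMENTAL block of the map file
    awk '/EXPERIMENTAL \{/,/^\};/' lib/librte_ethdev/rte_ethdev_version.map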

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4] devtools: remove old ABI validation script
  @ 2020-05-24 20:34 39% ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-24 20:34 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, Neil Horman, Ray Kinsella, John McNamara,
	Marko Kovacevic

From: Neil Horman <nhorman@tuxdriver.com>

Since we've moved away from our initial validate-abi.sh script
in favor of check-abi.sh, which uses libabigail,
remove the old script from the tree and update the docs accordingly.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
No progress was made for a month after the discussions about
the usage example.

This v4 fixes punctuation and replaces the usage example based on make
with a link to the recommendation in doc/guides/contributing/patches.rst.

Applied quickly for 20.05.
---
 MAINTAINERS                                |   1 -
 devtools/validate-abi.sh                   | 251 ---------------------
 doc/guides/contributing/abi_versioning.rst |  43 +---
 doc/guides/contributing/patches.rst        |   2 +
 4 files changed, 11 insertions(+), 286 deletions(-)
 delete mode 100755 devtools/validate-abi.sh

diff --git a/MAINTAINERS b/MAINTAINERS
index 1616951d7f..d2b286701d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -150,7 +150,6 @@ F: devtools/gen-abi.sh
 F: devtools/libabigail.abignore
 F: devtools/update-abi.sh
 F: devtools/update_version_map_abi.py
-F: devtools/validate-abi.sh
 F: buildtools/check-symbols.sh
 F: buildtools/map-list-symbol.sh
 F: drivers/*/*/*.map
diff --git a/devtools/validate-abi.sh b/devtools/validate-abi.sh
deleted file mode 100755
index f64e19d38f..0000000000
--- a/devtools/validate-abi.sh
+++ /dev/null
@@ -1,251 +0,0 @@
-#!/usr/bin/env bash
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015 Neil Horman. All rights reserved.
-# Copyright(c) 2017 6WIND S.A.
-# All rights reserved
-
-set -e
-
-abicheck=abi-compliance-checker
-abidump=abi-dumper
-default_dst=abi-check
-default_target=x86_64-native-linuxapp-gcc
-
-# trap on error
-err_report() {
-    echo "$0: error at line $1"
-}
-trap 'err_report $LINENO' ERR
-
-print_usage () {
-	cat <<- END_OF_HELP
-	$(basename $0) [options] <rev1> <rev2>
-
-	This script compares the ABI of 2 git revisions of the current
-	workspace. The output is a html report and a compilation log.
-
-	The objective is to make sure that applications built against
-	DSOs from the first revision can still run when executed using
-	the DSOs built from the second revision.
-
-	<rev1> and <rev2> are git commit id or tags.
-
-	Options:
-	  -h		show this help
-	  -j <num>	enable parallel compilation with <num> threads
-	  -v		show compilation logs on the console
-	  -d <dir>	change working directory (default is ${default_dst})
-	  -t <target>	the dpdk target to use (default is ${default_target})
-	  -f		overwrite existing files in destination directory
-
-	The script returns 0 on success, or the value of last failing
-	call of ${abicheck} (incompatible abi or the tool has run with errors).
-	The errors returned by ${abidump} are ignored.
-
-	END_OF_HELP
-}
-
-# log in the file, and on stdout if verbose
-# $1: level string
-# $2: string to be logged
-log() {
-	echo "$1: $2"
-	if [ "${verbose}" != "true" ]; then
-		echo "$1: $2" >&3
-	fi
-}
-
-# launch a command and log it, taking care of surrounding spaces with quotes
-cmd() {
-	local i s whitespace ret
-	s=""
-	whitespace="[[:space:]]"
-	for i in "$@"; do
-		if [[ $i =~ $whitespace ]]; then
-			i=\"$i\"
-		fi
-		if [ -z "$s" ]; then
-			s="$i"
-		else
-			s="$s $i"
-		fi
-	done
-
-	ret=0
-	log "CMD" "$s"
-	"$@" || ret=$?
-	if [ "$ret" != "0" ]; then
-		log "CMD" "previous command returned $ret"
-	fi
-
-	return $ret
-}
-
-# redirect or copy stderr/stdout to a file
-# the syntax is unfamiliar, but it makes the rest of the
-# code easier to read, avoiding the use of pipes
-set_log_file() {
-	# save original stdout and stderr in fd 3 and 4
-	exec 3>&1
-	exec 4>&2
-	# create a new fd 5 that send to a file
-	exec 5> >(cat > $1)
-	# send stdout and stderr to fd 5
-	if [ "${verbose}" = "true" ]; then
-		exec 1> >(tee /dev/fd/5 >&3)
-		exec 2> >(tee /dev/fd/5 >&4)
-	else
-		exec 1>&5
-		exec 2>&5
-	fi
-}
-
-# Make sure we configure SHARED libraries
-# Also turn off IGB and KNI as those require kernel headers to build
-fixup_config() {
-	local conf=config/defconfig_$target
-	cmd sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" $conf
-	cmd sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" $conf
-	cmd sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" $conf
-	cmd sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" $conf
-	cmd sed -i -e"$ a\CONFIG_RTE_KNI_KMOD=n" $conf
-}
-
-# build dpdk for the given tag and dump abi
-# $1: hash of the revision
-gen_abi() {
-	local i
-
-	cmd git clone ${dpdkroot} ${dst}/${1}
-	cmd cd ${dst}/${1}
-
-	log "INFO" "Checking out version ${1} of the dpdk"
-	# Move to the old version of the tree
-	cmd git checkout ${1}
-
-	fixup_config
-
-	# Now configure the build
-	log "INFO" "Configuring DPDK ${1}"
-	cmd make config T=$target O=$target
-
-	# Checking abi compliance relies on using the dwarf information in
-	# the shared objects. Build with -g to include them.
-	log "INFO" "Building DPDK ${1}. This might take a moment"
-	cmd make -j$parallel O=$target V=1 EXTRA_CFLAGS="-g -Og -Wno-error" \
-	    EXTRA_LDFLAGS="-g" || log "INFO" "The build failed"
-
-	# Move to the lib directory
-	cmd cd ${PWD}/$target/lib
-	log "INFO" "Collecting ABI information for ${1}"
-	for i in *.so; do
-		[ -e "$i" ] || break
-		cmd $abidump ${i} -o $dst/${1}/${i}.dump -lver ${1} || true
-		# hack to ignore empty SymbolsInfo section (no public ABI)
-		if grep -q "'SymbolInfo' => {}," $dst/${1}/${i}.dump \
-				2> /dev/null; then
-			log "INFO" "${i} has no public ABI, remove dump file"
-			cmd rm -f $dst/${1}/${i}.dump
-		fi
-	done
-}
-
-verbose=false
-parallel=1
-dst=${default_dst}
-target=${default_target}
-force=0
-while getopts j:vd:t:fh ARG ; do
-	case $ARG in
-		j ) parallel=$OPTARG ;;
-		v ) verbose=true ;;
-		d ) dst=$OPTARG ;;
-		t ) target=$OPTARG ;;
-		f ) force=1 ;;
-		h ) print_usage ; exit 0 ;;
-		? ) print_usage ; exit 1 ;;
-	esac
-done
-shift $(($OPTIND - 1))
-
-if [ $# != 2 ]; then
-	print_usage
-	exit 1
-fi
-
-tag1=$1
-tag2=$2
-
-# convert path to absolute
-case "${dst}" in
-	/*) ;;
-	*) dst=${PWD}/${dst} ;;
-esac
-dpdkroot=$(readlink -f $(dirname $0)/..)
-
-if [ -e "${dst}" -a "$force" = 0 ]; then
-	echo "The ${dst} directory is not empty. Remove it, use another"
-	echo "one (-d <dir>), or force overriding (-f)"
-	exit 1
-fi
-
-rm -rf ${dst}
-mkdir -p ${dst}
-set_log_file ${dst}/abi-check.log
-log "INFO" "Logs available in ${dst}/abi-check.log"
-
-command -v ${abicheck} || log "INFO" "Can't find ${abicheck} utility"
-command -v ${abidump} || log "INFO" "Can't find ${abidump} utility"
-
-hash1=$(git show -s --format=%h "$tag1" -- 2> /dev/null | tail -1)
-hash2=$(git show -s --format=%h "$tag2" -- 2> /dev/null | tail -1)
-
-# Make hashes available in output for non-local reference
-tag1="$tag1 ($hash1)"
-tag2="$tag2 ($hash2)"
-
-if [ "$hash1" = "$hash2" ]; then
-	log "ERROR" "$tag1 and $tag2 are the same revisions"
-	exit 1
-fi
-
-cmd mkdir -p ${dst}
-
-# dump abi for each revision
-gen_abi ${hash1}
-gen_abi ${hash2}
-
-# compare the abi dumps
-cmd cd ${dst}
-ret=0
-list=""
-for i in ${hash2}/*.dump; do
-	name=`basename $i`
-	libname=${name%.dump}
-
-	if [ ! -f ${hash1}/$name ]; then
-		log "INFO" "$NAME does not exist in $tag1. skipping..."
-		continue
-	fi
-
-	local_ret=0
-	cmd $abicheck -l $libname \
-	    -old ${hash1}/$name -new ${hash2}/$name || local_ret=$?
-	if [ $local_ret != 0 ]; then
-		log "NOTICE" "$abicheck returned $local_ret"
-		ret=$local_ret
-		list="$list $libname"
-	fi
-done
-
-if [ $ret != 0 ]; then
-	log "NOTICE" "ABI may be incompatible, check reports/logs for details."
-	log "NOTICE" "Incompatible list: $list"
-else
-	log "NOTICE" "No error detected, ABI is compatible."
-fi
-
-log "INFO" "Logs are in ${dst}/abi-check.log"
-log "INFO" "HTML reports are in ${dst}/compat_reports directory"
-
-exit $ret
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index f4a9273afc..e96fde340f 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -682,41 +682,16 @@ Running the ABI Validator
 -------------------------
 
 The ``devtools`` directory in the DPDK source tree contains a utility program,
-``validate-abi.sh``, for validating the DPDK ABI based on the Linux `ABI
-Compliance Checker
-<http://ispras.linuxbase.org/index.php/ABI_compliance_checker>`_.
+``check-abi.sh``, for validating the DPDK ABI based on the libabigail
+`abidiff utility <https://sourceware.org/libabigail/manual/abidiff.html>`_.
 
-This has a dependency on the ``abi-compliance-checker`` and ``and abi-dumper``
-utilities which can be installed via a package manager. For example::
+The syntax of the ``check-abi.sh`` utility is::
 
-   sudo yum install abi-compliance-checker
-   sudo yum install abi-dumper
+   devtools/check-abi.sh <refdir> <newdir>
 
-The syntax of the ``validate-abi.sh`` utility is::
+Where <refdir> specifies the directory housing the reference build of DPDK,
+and <newdir> specifies the DPDK build directory to check the ABI of.
 
-   ./devtools/validate-abi.sh <REV1> <REV2>
-
-Where ``REV1`` and ``REV2`` are valid gitrevisions(7)
-https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html
-on the local repo.
-
-For example::
-
-   # Check between the previous and latest commit:
-   ./devtools/validate-abi.sh HEAD~1 HEAD
-
-   # Check on a specific compilation target:
-   ./devtools/validate-abi.sh -t x86_64-native-linux-gcc HEAD~1 HEAD
-
-   # Check between two tags:
-   ./devtools/validate-abi.sh v2.0.0 v2.1.0
-
-   # Check between git master and local topic-branch "vhost-hacking":
-   ./devtools/validate-abi.sh master vhost-hacking
-
-After the validation script completes (it can take a while since it need to
-compile both tags) it will create compatibility reports in the
-``./abi-check/compat_report`` directory. Listed incompatibilities can be found
-as follows::
-
-  grep -lr Incompatible abi-check/compat_reports/
+The ABI compatibility is automatically verified when using a build script
+from ``devtools``, if the variable ``DPDK_ABI_REF_VERSION`` is set with a tag,
+as described in :ref:`ABI check recommendations<integrated_abi_check>`.
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 59442824a1..e6a934846e 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -513,6 +513,8 @@ in a single subfolder called "__builds" created in the current directory.
 Setting ``DPDK_BUILD_TEST_DIR`` to an absolute directory path e.g. ``/tmp`` is also supported.
 
 
+.. _integrated_abi_check:
+
 Checking ABI compatibility
 --------------------------
 
-- 
2.26.2
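
For reference, ``check-abi.sh`` compares libabigail dumps that ``gen-abi.sh``
generates from two installed build trees. A minimal sketch of the underlying
flow; the paths are placeholders and the exact options the script passes to
``abidiff`` may differ:

    # generate ABI dumps (via abidw) for a reference build and a new build
    ./devtools/gen-abi.sh /path/to/ref-install
    ./devtools/gen-abi.sh /path/to/new-install

    # compare one library's dumps, honoring the project suppression rules
    abidiff --suppr devtools/libabigail.abignore \
        /path/to/ref-install/dump/librte_eal.dump \
        /path/to/new-install/dump/librte_eal.dump

    # or let the build script drive the whole check against a tagged release,
    # as recommended in the contributing guide
    DPDK_ABI_REF_VERSION=v20.02 DPDK_ABI_REF_DIR=/tmp/dpdk-abiref \
        ./devtools/test-meson-builds.sh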


^ permalink raw reply	[relevance 39%]

* [dpdk-dev] [PATCH v2] devtools: remove useless files from ABI reference
  @ 2020-05-24 17:43 13% ` Thomas Monjalon
  2020-05-28 13:16  4%   ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-24 17:43 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, bruce.richardson

When building an ABI reference with meson, some static libraries
are built and linked into apps. They are useless and take a lot of space.
Those binaries, and other useless files (examples and doc files)
in the share/ directory, are removed after being installed.

In order to save time when building the ABI reference,
the examples (which are not installed anyway) are not compiled.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v2: find static libraries anywhere they try to hide from being swept
---
 devtools/test-meson-builds.sh | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 18b874fac5..de569a486f 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -140,10 +140,15 @@ build () # <directory> <target compiler> <meson options>
 			fi
 
 			rm -rf $abirefdir/build
-			config $abirefdir/src $abirefdir/build $*
+			config $abirefdir/src $abirefdir/build -Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
 			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
+
+			# save disk space by removing static libs and apps
+			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
+			rm -rf $abirefdir/$targetdir/usr/local/bin
+			rm -rf $abirefdir/$targetdir/usr/local/share
 		fi
 
 		install_target $builds_dir/$targetdir \
-- 
2.26.2
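
The effect of this cleanup can be verified on an existing ABI reference
directory; a hedged sketch, with the reference path and target name assumed:

    # size of one reference target, before and after pruning
    du -sh /tmp/dpdk-abiref/v20.02/build-gcc-shared

    # no static library should survive the sweep
    find /tmp/dpdk-abiref -name '*.a' | wc -l    # expected: 0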


^ permalink raw reply	[relevance 13%]

* Re: [dpdk-dev] [PATCH v1] doc: update release notes for 20.05
  2020-05-22 14:06  4% [dpdk-dev] [PATCH v1] doc: update release notes for 20.05 John McNamara
@ 2020-05-22 15:17  0% ` Kevin Traynor
  0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2020-05-22 15:17 UTC (permalink / raw)
  To: John McNamara, dev; +Cc: thomas

On 22/05/2020 15:06, John McNamara wrote:
> Fix grammar, spelling and formatting of DPDK 20.05 release notes.
> 
> Signed-off-by: John McNamara <john.mcnamara@intel.com>
> ---
>  doc/guides/rel_notes/release_20_05.rst | 264 +++++++++++++++------------------
>  1 file changed, 116 insertions(+), 148 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
> index 8470690..d10a1f4 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -56,38 +56,38 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =========================================================
>  
> -* **Added Trace Library and Tracepoints**
> +* **Added Trace Library and Tracepoints.**
>  
> -  A native implementation of ``common trace format(CTF)`` based trace library
> -  has been added to provide the ability to add tracepoints in
> -  application/library to get runtime trace/debug information for control and
> +  A native implementation of "common trace format" (CTF) based trace library

Not sure if the "" are intentional?

> +  has been added to provide the ability to add tracepoints in an
> +  application/library to get runtime trace/debug information for control, and
>    fast APIs with minimum impact on fast path performance.
>    Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
>    Added tracepoints in ``EAL``, ``ethdev``, ``cryptodev``, ``eventdev`` and
>    ``mempool`` libraries for important functions.
>  
> -* **Added APIs for RCU defer queue.**
> +* **Added APIs for RCU defer queues.**
>  
> -  Added APIs to create and delete defer queue. Additional APIs are provided
> +  Added APIs to create and delete defer queues. Additional APIs are provided
>    to enqueue a deleted resource and reclaim the resource in the future.
> -  These APIs help the application use lock-free data structures with
> +  These APIs help an application use lock-free data structures with
>    less effort.
>  
>  * **Added new API for rte_ring.**
>  
> -  * New synchronization modes for rte_ring.
> +  * Introduced new synchronization modes for rte_ring.
>  
> -  Introduced new optional MT synchronization modes for rte_ring:
> -  Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
> -  With these mode selected, rte_ring shows significant improvements for
> -  average enqueue/dequeue times on overcommitted systems.
> +    Introduced new optional MT synchronization modes for ``rte_ring``:
> +    Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
> +    With these modes selected, ``rte_ring`` shows significant improvements for
> +    average enqueue/dequeue times on overcommitted systems.
>  
> -  * Added peek style API for rte_ring.
> +  * Added peek style API for ``rte_ring``.
>  
> -  For rings with producer/consumer in RTE_RING_SYNC_ST, RTE_RING_SYNC_MT_HTS
> -  mode, provide an ability to split enqueue/dequeue operation into two phases
> -  (enqueue/dequeue start; enqueue/dequeue finish). That allows user to inspect
> -  objects in the ring without removing them from it (aka MT safe peek).
> +    For rings with producer/consumer in ``RTE_RING_SYNC_ST``, ``RTE_RING_SYNC_MT_HTS``
> +    mode, provide the ability to split enqueue/dequeue operation into two phases
> +    (enqueue/dequeue start and enqueue/dequeue finish). This allows the user to inspect
> +    objects in the ring without removing them (aka MT safe peek).
>  
>  * **Added flow aging support.**
>  
> @@ -100,14 +100,16 @@ New Features
>    * Added new query: ``rte_flow_get_aged_flows`` to get the aged-out flows
>      contexts from the port.
>  
> -* **ethdev: Added a new value to link speed for 200Gbps**
> +* **ethdev: Added a new value to link speed for 200Gbps.**
>  
> -* **Updated Amazon ena driver.**
> +  Added a new ethdev value for link speeds of 200Gbps.
>  
> -  Updated ena PMD with new features and improvements, including:
> +* **Updated the Amazon ena driver.**
> +
> +  Updated the ena PMD with new features and improvements, including:
>  
>    * Added support for large LLQ (Low-latency queue) headers.
> -  * Added Tx drops as new extended driver statistic.
> +  * Added Tx drops as a new extended driver statistic.
>    * Added support for accelerated LLQ mode.
>    * Handling of the 0 length descriptors on the Rx path.
>  
> @@ -115,14 +117,14 @@ New Features
>  
>    Updated Hisilicon hns3 driver with new features and improvements, including:
>  
> -  * Added support for TSO
> -  * Added support for configuring promiscuous and allmulticast mode for VF
> +  * Added support for TSO.
> +  * Added support for configuring promiscuous and allmulticast mode for VF.
>  
>  * **Updated Intel i40e driver.**
>  
>    Updated i40e PMD with new features and improvements, including:
>  
> -  * Enable MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
> +  * Enabled MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
>    * Added support for RSS using L3/L4 source/destination only.
>    * Added support for setting hash function in rte flow.
>  
> @@ -139,14 +141,14 @@ New Features
>    Updated the Intel ice driver with new features and improvements, including:
>  
>    * Added support for DCF (Device Config Function) feature.
> -  * Added switch filter support for intel DCF.
> +  * Added switch filter support for Intel DCF.
>  
>  * **Updated Marvell OCTEON TX2 ethdev driver.**
>  
> -  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support with
> -  below features.
> +  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
> +  including:
>  
> -  * Hierarchial Scheduling with DWRR and SP.
> +  * Hierarchical Scheduling with DWRR and SP.
>    * Single rate - Two color, Two rate - Three color shaping.
>  
>  * **Updated Mellanox mlx5 driver.**
> @@ -158,52 +160,28 @@ New Features
>    * Added support for configuring Hairpin queue data buffer size.
>    * Added support for jumbo frame size (9K MTU) in Multi-Packet RQ mode.
>    * Removed flow rules caching for memory saving and compliance with ethdev API.
> -  * Optimized the memory consumption of flow.
> -  * Added support for flow aging based on hardware counter.
> -  * Added support for flow pattern with wildcard VLAN item (without VID value).
> -  * Updated support for matching on GTP header, added match on GTP flags.
> -
> -* **Added Chacha20-Poly1305 algorithm to Cryptodev API.**
> -
> -  Chacha20-Poly1305 AEAD algorithm can now be supported in Cryptodev.
> -
> -* **Updated the AESNI MB crypto PMD.**
> -
> -  * Added support for intel-ipsec-mb version 0.54.
> -  * Updated the AESNI MB PMD with AES-256 DOCSIS algorithm.
> -  * Added support for synchronous Crypto burst API.
> -
> -* **Updated the AESNI GCM crypto PMD.**
> -
> -  * Added support for intel-ipsec-mb version 0.54.
> -
> -* **Updated the ZUC crypto PMD.**
> -
> -  * Added support for intel-ipsec-mb version 0.54.
> -  * Updated the PMD to support Multi-buffer ZUC-EIA3,
> -    improving performance significantly, when using
> -    intel-ipsec-mb version 0.54
> -
> -* **Updated the SNOW3G crypto PMD.**
> -
> -  * Added support for intel-ipsec-mb version 0.54.
> +  * Optimized the memory consumption of flows.
> +  * Added support for flow aging based on hardware counters.
> +  * Added support for flow pattern with wildcard VLAN items (without VID value).
> +  * Updated support for matching on GTP headers, added match on GTP flags.
>  
> -* **Updated the KASUMI crypto PMD.**
> +* **Added additional algorithms to the Cryptodev API.**
>  
> -  * Added support for intel-ipsec-mb version 0.54.
> +  Added additional algorithms and updated support to the Cryptodev PMD and
> +  APIs, including:
>  
> -* **Added a new driver for Intel Foxville I225 devices.**
> +  * Added support for intel-ipsec-mb version 0.54 to the following PMDs: AESNI
> +    MB, AESNI GCM, ZUC, KASUMI, SNOW 3G.
>  
> -  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
> -  :doc:`../nics/igc` NIC guide for more details on this new driver.
> +  * Added support for Chacha20-Poly1305 AEAD algorithm.
>  
> -* **Updated Broadcom bnxt driver.**
> +  * Updated the ZUC crypto PMD to support Multi-buffer ZUC-EIA3, improving
> +    performance significantly when using intel-ipsec-mb version 0.54.
>  
> -  Updated Broadcom bnxt driver with new features and improvements, including:
> +  * AESNI MB crypto PMD:
>  
> -  * Added support for host based flow table management
> -  * Added flow counters to extended stats
> -  * Added PCI function stats to extended stats
> +    * Updated the AESNI MB PMD with AES-256 DOCSIS algorithm.
> +    * Added support for synchronous Crypto burst API.
>  
>  * **Added handling of mixed crypto algorithms in QAT PMD for GEN2.**
>  
> @@ -212,7 +190,7 @@ New Features
>    when running on GEN2 QAT hardware with particular firmware versions
>    (GEN3 support was added in DPDK 20.02).
>  
> -* **Added plain SHA-1,224,256,384,512 support to QAT PMD.**
> +* **Added plain SHA-1, 224, 256, 384, 512 support to QAT PMD.**
>  
>    Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 hashes
>    to QAT PMD.
> @@ -220,26 +198,40 @@ New Features
>  * **Added AES-GCM/GMAC J0 support to QAT PMD.**
>  
>    Added support for AES-GCM/GMAC J0 to Intel QuickAssist Technology PMD. User can
> -  use this feature by passing zero length IV in appropriate xform. For more
> -  info please refer to rte_crypto_sym.h J0 comments.
> +  use this feature by passing a zero length IV in the appropriate xform. For more
> +  info refer to the doxygen comments in ``rte_crypto_sym.h`` for ``J0``.
>  
>  * **Updated the QAT PMD for AES-256 DOCSIS.**
>  
> -  Added AES-256 DOCSIS algorithm support to QAT PMD.
> +  Added AES-256 DOCSIS algorithm support to the QAT PMD.
>  
> -* **Added QAT intermediate buffer too small handling in QAT compression PMD.**
> +* **Added undersized intermediate buffer handling in QAT compression PMD.**
>  
> -  Added a special way of buffer handling when internal QAT intermediate buffer
> -  is too small for Huffman dynamic compression operation. Instead of falling
> +  Added special buffer handling when the internal QAT intermediate buffer is
> +  too small for the Huffman dynamic compression operation. Instead of falling
>    back to fixed compression, the operation is now split into multiple smaller
> -  dynamic compression requests (possible to execute on QAT) and their results
> -  are then combined and copied into the output buffer. This is not possible if
> -  any checksum calculation was requested - in such case the code falls back to
> -  fixed compression as before.
> +  dynamic compression requests (which are possible to execute on QAT) and
> +  their results are then combined and copied into the output buffer. This is
> +  not possible if any checksum calculation was requested - in such cases the
> +  code falls back to fixed compression as before.
> +
> +* **Added a new driver for Intel Foxville I225 devices.**
> +
> +  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
> +  :doc:`../nics/igc` NIC guide for more details on this new driver.
> +
> +* **Updated Broadcom bnxt driver.**
> +
> +  Updated the Broadcom bnxt driver with new features and improvements, including:
> +
> +  * Added support for host based flow table management.
> +  * Added flow counters to extended stats.
> +  * Added PCI function stats to extended stats.
>  
>  * **Updated the turbo_sw bbdev PMD.**
>  
> -  Supported large size code blocks which does not fit in one mbuf segment.
> +  Added support for large size code blocks which do not fit in one mbuf
> +  segment.
>  
>  * **Added Intel FPGA_5GNR_FEC bbdev PMD.**
>  
> @@ -255,31 +247,32 @@ New Features
>      accurate load balancing.
>    * Improved behavior on high-core count systems.
>    * Reduced latency in low-load situations.
> -  * Extended DSW xstats with migration- and load-related statistics.
> +  * Extended DSW xstats with migration and load-related statistics.
> +
> +* **Updated ipsec-secgw sample application.**
>  
> -* **Updated ipsec-secgw sample application with following features.**
> +  Updated the ``ipsec-secgw`` sample application with the following features:
>  
> -  * Updated ipsec-secgw application to add event based packet processing.
> -    The worker thread(s) would receive events and submit them back to the
> -    event device after the processing. This way, multicore scaling and HW
> -    assisted scheduling is achieved by making use of the event device
> -    capabilities. The event mode currently supports only inline IPsec
> -    protocol offload.
> +  * Updated the application to add event based packet processing. The worker
> +    thread(s) would receive events and submit them back to the event device
> +    after the processing. This way, multicore scaling and HW assisted
> +    scheduling is achieved by making use of the event device capabilities. The
> +    event mode currently only supports inline IPsec protocol offload.
>  
> -  * Updated ipsec-secgw application to support key sizes for AES-192-CBC,
> -    AES-192-GCM, AES-256-GCM algorithms.
> +  * Updated the application to support key sizes for AES-192-CBC, AES-192-GCM,
> +    AES-256-GCM algorithms.
>  
> -  * Added IPsec inbound load-distribution support for ipsec-secgw application
> -    using NIC load distribution feature(Flow Director).
> +  * Added IPsec inbound load-distribution support for the application using
> +    NIC load distribution feature (Flow Director).
>  
>  * **Updated Telemetry Library.**
>  
> -  The updated Telemetry library has many improvements on the original version
> -  to make it more accessible and scalable:
> +  The updated Telemetry library has been significantly improved in relation to the
> +  original version to make it more accessible and scalable:
>  
> -  * It enables DPDK libraries and applications provide their own specific
> -    telemetry information, rather than being limited to what could be reported
> -    through the metrics library.
> +  * It now enables DPDK libraries and applications to provide their own
> +    specific telemetry information, rather than being limited to what could be
> +    reported through the metrics library.
>  
>    * It is no longer dependent on the external Jansson library, which allows
>      Telemetry to be enabled by default.
> @@ -287,57 +280,47 @@ New Features
>    * The socket handling has been simplified making it easier for clients to
>      connect and retrieve information.
>  
> -* **Added rte_graph library.**
> +* **Added the rte_graph library.**
>  
> -  Graph architecture abstracts the data processing functions as a ``node`` and
> -  ``links`` them together to create a complex ``graph`` to enable reusable/modular
> -  data processing functions. The graph library provides API to enable graph
> -  framework operations such as create, lookup, dump and destroy on graph and node
> -  operations such as clone, edge update, and edge shrink, etc.
> -  The API also allows to create the stats cluster to monitor per graph and per node stats.
> +  The Graph architecture abstracts the data processing functions as a ``node``
> +  and ``links`` them together to create a complex ``graph`` to enable
> +  reusable/modular data processing functions. The graph library provides APIs
> +  to enable graph framework operations such as create, lookup, dump and
> +  destroy on graph and node operations such as clone, edge update, and edge
> +  shrink, etc.  The API also allows the creation of a stats cluster to monitor
> +  per graph and per node statistics.
>  
> -* **Added rte_node library which consists of a set of packet processing nodes.**
> +* **Added the rte_node library.**
>  
> -  The rte_node library that consists of nodes used by rte_graph library. Each
> -  node performs a specific packet processing function based on application
> -  configuration. The following nodes are added:
> +  Added the ``rte_node`` library that consists of nodes used by ``rte_graph``
> +  library. Each node performs a specific packet processing function based on
> +  the application configuration. The following nodes are added:
>  
> -  * Null node: Skeleton node that defines the general structure of a node.
> -  * Ethernet device node: Consists of ethernet Rx/Tx nodes as well as ethernet
> +  * Null node: A skeleton node that defines the general structure of a node.
> +  * Ethernet device node: Consists of Ethernet Rx/Tx nodes as well as Ethernet
>      control APIs.
> -  * IPv4 lookup node: Consists of ipv4 extract and lpm lookup node. Routes can
> -    be configured by the application through ``rte_node_ip4_route_add`` function.
> -  * IPv4 rewrite node: Consists of ipv4 and ethernet header rewrite functionality
> -    that can be configured through ``rte_node_ip4_rewrite_add`` function.
> +  * IPv4 lookup node: Consists of IPv4 extract and LPM lookup node. Routes can
> +    be configured by the application through the ``rte_node_ip4_route_add``
> +    function.
> +  * IPv4 rewrite node: Consists of IPv4 and Ethernet header rewrite
> +    functionality that can be configured through the
> +    ``rte_node_ip4_rewrite_add`` function.
>    * Packet drop node: Frees the packets received to their respective mempool.
>  
>  * **Added new l3fwd-graph sample application.**
>  
> -  Added an example application ``l3fwd-graph``. It demonstrates the usage of graph
> -  library and node library for packet processing. In addition to the library usage
> -  demonstration, this application can use for performance comparison with existing
> -  ``l3fwd`` (The static code without any nodes) with the modular ``l3fwd-graph``
> -  approach.
> +  Added an example application ``l3fwd-graph``. This demonstrates the usage
> +  of the graph library and node library for packet processing. In addition to
> +  the library usage demonstration, this application can be used for a
> +  performance comparison of the existing ``l3fwd`` (static code without any
> +  nodes) with the modular ``l3fwd-graph`` approach.
>  
> -* **Updated testpmd application.**
> +* **Updated the testpmd application.**
>  
>    * Added a new cmdline option ``--rx-mq-mode`` which can be used to test PMD's
>      behaviour on handling Rx mq mode.
>  

No need for the sub-bullet here

>  
> -Removed Items
> --------------
> -
> -.. This section should contain removed items in this release. Sample format:
> -
> -   * Add a short 1-2 sentence description of the removed item
> -     in the past tense.
> -
> -   This section is a comment. Do not overwrite or remove it.
> -   Also, make sure to start the actual text at the margin.
> -   =========================================================
> -
> -
>  API Changes
>  -----------
>  
> @@ -354,7 +337,7 @@ API Changes
>     =========================================================
>  
>  * mempool: The API of ``rte_mempool_populate_iova()`` and
> -  ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
> +  ``rte_mempool_populate_virt()`` changed to return 0 instead of ``-EINVAL``
>    when there is not enough room to store one object.
>  
>  
> @@ -376,21 +359,6 @@ ABI Changes
>  * No ABI change that would break compatibility with DPDK 20.02 and 19.11.
>  
>  
> -Known Issues
> -------------
> -
> -.. This section should contain new known issues in this release. Sample format:
> -
> -   * **Add title in present tense with full stop.**
> -
> -     Add a short 1-2 sentence description of the known issue
> -     in the present tense. Add information on any known workarounds.
> -
> -   This section is a comment. Do not overwrite or remove it.
> -   Also, make sure to start the actual text at the margin.
> -   =========================================================
> -
> -
>  Tested Platforms
>  ----------------
>  
> 

I just sent a patch to note gcc 10 support.

Aside from minor comments above, LGTM.
Acked-by: Kevin Traynor <ktraynor@redhat.com>
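
Release notes edits like the ones reviewed above can be sanity-checked by
rendering the guides; a minimal sketch, assuming a sphinx-enabled meson
setup (target name and output path assumed):

    # configure a build with documentation enabled and render the guides
    meson setup build-docs -Denable_docs=true
    ninja -C build-docs doc

    # the rendered release notes land under the build tree
    ls build-docs/doc/guides/html/rel_notes/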


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v1] doc: update release notes for 20.05
@ 2020-05-22 14:06  4% John McNamara
  2020-05-22 15:17  0% ` Kevin Traynor
  0 siblings, 1 reply; 200+ results
From: John McNamara @ 2020-05-22 14:06 UTC (permalink / raw)
  To: dev; +Cc: thomas, John McNamara

Fix grammar, spelling and formatting of DPDK 20.05 release notes.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/rel_notes/release_20_05.rst | 264 +++++++++++++++------------------
 1 file changed, 116 insertions(+), 148 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 8470690..d10a1f4 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -56,38 +56,38 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
-* **Added Trace Library and Tracepoints**
+* **Added Trace Library and Tracepoints.**
 
-  A native implementation of ``common trace format(CTF)`` based trace library
-  has been added to provide the ability to add tracepoints in
-  application/library to get runtime trace/debug information for control and
+  A native implementation of "common trace format" (CTF) based trace library
+  has been added to provide the ability to add tracepoints in an
+  application/library to get runtime trace/debug information for control, and
   fast APIs with minimum impact on fast path performance.
   Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
   Added tracepoints in ``EAL``, ``ethdev``, ``cryptodev``, ``eventdev`` and
   ``mempool`` libraries for important functions.
 
-* **Added APIs for RCU defer queue.**
+* **Added APIs for RCU defer queues.**
 
-  Added APIs to create and delete defer queue. Additional APIs are provided
+  Added APIs to create and delete defer queues. Additional APIs are provided
   to enqueue a deleted resource and reclaim the resource in the future.
-  These APIs help the application use lock-free data structures with
+  These APIs help an application use lock-free data structures with
   less effort.
 
 * **Added new API for rte_ring.**
 
-  * New synchronization modes for rte_ring.
+  * Introduced new synchronization modes for rte_ring.
 
-  Introduced new optional MT synchronization modes for rte_ring:
-  Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
-  With these mode selected, rte_ring shows significant improvements for
-  average enqueue/dequeue times on overcommitted systems.
+    Introduced new optional MT synchronization modes for ``rte_ring``:
+    Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
+    With these modes selected, ``rte_ring`` shows significant improvements for
+    average enqueue/dequeue times on overcommitted systems.
 
-  * Added peek style API for rte_ring.
+  * Added peek style API for ``rte_ring``.
 
-  For rings with producer/consumer in RTE_RING_SYNC_ST, RTE_RING_SYNC_MT_HTS
-  mode, provide an ability to split enqueue/dequeue operation into two phases
-  (enqueue/dequeue start; enqueue/dequeue finish). That allows user to inspect
-  objects in the ring without removing them from it (aka MT safe peek).
+    For rings with producer/consumer in ``RTE_RING_SYNC_ST``, ``RTE_RING_SYNC_MT_HTS``
+    mode, provide the ability to split enqueue/dequeue operation into two phases
+    (enqueue/dequeue start and enqueue/dequeue finish). This allows the user to inspect
+    objects in the ring without removing them (aka MT safe peek).
 
 * **Added flow aging support.**
 
@@ -100,14 +100,16 @@ New Features
   * Added new query: ``rte_flow_get_aged_flows`` to get the aged-out flows
     contexts from the port.
 
-* **ethdev: Added a new value to link speed for 200Gbps**
+* **ethdev: Added a new value to link speed for 200Gbps.**
 
-* **Updated Amazon ena driver.**
+  Added a new ethdev value for link speeds of 200Gbps.
 
-  Updated ena PMD with new features and improvements, including:
+* **Updated the Amazon ena driver.**
+
+  Updated the ena PMD with new features and improvements, including:
 
   * Added support for large LLQ (Low-latency queue) headers.
-  * Added Tx drops as new extended driver statistic.
+  * Added Tx drops as a new extended driver statistic.
   * Added support for accelerated LLQ mode.
   * Handling of the 0 length descriptors on the Rx path.
 
@@ -115,14 +117,14 @@ New Features
 
   Updated Hisilicon hns3 driver with new features and improvements, including:
 
-  * Added support for TSO
-  * Added support for configuring promiscuous and allmulticast mode for VF
+  * Added support for TSO.
+  * Added support for configuring promiscuous and allmulticast mode for VF.
 
 * **Updated Intel i40e driver.**
 
   Updated i40e PMD with new features and improvements, including:
 
-  * Enable MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
+  * Enabled MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
   * Added support for RSS using L3/L4 source/destination only.
   * Added support for setting hash function in rte flow.
 
@@ -139,14 +141,14 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF (Device Config Function) feature.
-  * Added switch filter support for intel DCF.
+  * Added switch filter support for Intel DCF.
 
 * **Updated Marvell OCTEON TX2 ethdev driver.**
 
-  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support with
-  below features.
+  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+  including:
 
-  * Hierarchial Scheduling with DWRR and SP.
+  * Hierarchical Scheduling with DWRR and SP.
   * Single rate - Two color, Two rate - Three color shaping.
 
 * **Updated Mellanox mlx5 driver.**
@@ -158,52 +160,28 @@ New Features
   * Added support for configuring Hairpin queue data buffer size.
   * Added support for jumbo frame size (9K MTU) in Multi-Packet RQ mode.
   * Removed flow rules caching for memory saving and compliance with ethdev API.
-  * Optimized the memory consumption of flow.
-  * Added support for flow aging based on hardware counter.
-  * Added support for flow pattern with wildcard VLAN item (without VID value).
-  * Updated support for matching on GTP header, added match on GTP flags.
-
-* **Added Chacha20-Poly1305 algorithm to Cryptodev API.**
-
-  Chacha20-Poly1305 AEAD algorithm can now be supported in Cryptodev.
-
-* **Updated the AESNI MB crypto PMD.**
-
-  * Added support for intel-ipsec-mb version 0.54.
-  * Updated the AESNI MB PMD with AES-256 DOCSIS algorithm.
-  * Added support for synchronous Crypto burst API.
-
-* **Updated the AESNI GCM crypto PMD.**
-
-  * Added support for intel-ipsec-mb version 0.54.
-
-* **Updated the ZUC crypto PMD.**
-
-  * Added support for intel-ipsec-mb version 0.54.
-  * Updated the PMD to support Multi-buffer ZUC-EIA3,
-    improving performance significantly, when using
-    intel-ipsec-mb version 0.54
-
-* **Updated the SNOW3G crypto PMD.**
-
-  * Added support for intel-ipsec-mb version 0.54.
+  * Optimized the memory consumption of flows.
+  * Added support for flow aging based on hardware counters.
+  * Added support for flow pattern with wildcard VLAN items (without VID value).
+  * Updated support for matching on GTP headers, added match on GTP flags.
 
-* **Updated the KASUMI crypto PMD.**
+* **Added additional algorithms to the Cryptodev API.**
 
-  * Added support for intel-ipsec-mb version 0.54.
+  Added additional algorithms and updated support to the Cryptodev PMD and
+  APIs, including:
 
-* **Added a new driver for Intel Foxville I225 devices.**
+  * Added support for intel-ipsec-mb version 0.54 to the following PMDs: AESNI
+    MB, AESNI GCM, ZUC, KASUMI, SNOW 3G.
 
-  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
-  :doc:`../nics/igc` NIC guide for more details on this new driver.
+  * Added support for Chacha20-Poly1305 AEAD algorithm.
 
-* **Updated Broadcom bnxt driver.**
+  * Updated the ZUC crypto PMD to support Multi-buffer ZUC-EIA3, improving
+    performance significantly when using intel-ipsec-mb version 0.54.
 
-  Updated Broadcom bnxt driver with new features and improvements, including:
+  * AESNI MB crypto PMD:
 
-  * Added support for host based flow table management
-  * Added flow counters to extended stats
-  * Added PCI function stats to extended stats
+    * Updated the AESNI MB PMD with AES-256 DOCSIS algorithm.
+    * Added support for synchronous Crypto burst API.
 
 * **Added handling of mixed crypto algorithms in QAT PMD for GEN2.**
 
@@ -212,7 +190,7 @@ New Features
   when running on GEN2 QAT hardware with particular firmware versions
   (GEN3 support was added in DPDK 20.02).
 
-* **Added plain SHA-1,224,256,384,512 support to QAT PMD.**
+* **Added plain SHA-1, 224, 256, 384, 512 support to QAT PMD.**
 
   Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 hashes
   to QAT PMD.
@@ -220,26 +198,40 @@ New Features
 * **Added AES-GCM/GMAC J0 support to QAT PMD.**
 
   Added support for AES-GCM/GMAC J0 to Intel QuickAssist Technology PMD. User can
-  use this feature by passing zero length IV in appropriate xform. For more
-  info please refer to rte_crypto_sym.h J0 comments.
+  use this feature by passing a zero length IV in the appropriate xform. For more
+  info refer to the doxygen comments in ``rte_crypto_sym.h`` for ``J0``.
 
 * **Updated the QAT PMD for AES-256 DOCSIS.**
 
-  Added AES-256 DOCSIS algorithm support to QAT PMD.
+  Added AES-256 DOCSIS algorithm support to the QAT PMD.
 
-* **Added QAT intermediate buffer too small handling in QAT compression PMD.**
+* **Added undersized intermediate buffer handling in QAT compression PMD.**
 
-  Added a special way of buffer handling when internal QAT intermediate buffer
-  is too small for Huffman dynamic compression operation. Instead of falling
+  Added special buffer handling when the internal QAT intermediate buffer is
+  too small for the Huffman dynamic compression operation. Instead of falling
   back to fixed compression, the operation is now split into multiple smaller
-  dynamic compression requests (possible to execute on QAT) and their results
-  are then combined and copied into the output buffer. This is not possible if
-  any checksum calculation was requested - in such case the code falls back to
-  fixed compression as before.
+  dynamic compression requests (which are possible to execute on QAT) and
+  their results are then combined and copied into the output buffer. This is
+  not possible if any checksum calculation was requested - in such cases the
+  code falls back to fixed compression as before.
+
+* **Added a new driver for Intel Foxville I225 devices.**
+
+  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
+  :doc:`../nics/igc` NIC guide for more details on this new driver.
+
+* **Updated Broadcom bnxt driver.**
+
+  Updated the Broadcom bnxt driver with new features and improvements, including:
+
+  * Added support for host based flow table management.
+  * Added flow counters to extended stats.
+  * Added PCI function stats to extended stats.
 
 * **Updated the turbo_sw bbdev PMD.**
 
-  Supported large size code blocks which does not fit in one mbuf segment.
+  Added support for large size code blocks which do not fit in one mbuf
+  segment.
 
 * **Added Intel FPGA_5GNR_FEC bbdev PMD.**
 
@@ -255,31 +247,32 @@ New Features
     accurate load balancing.
   * Improved behavior on high-core count systems.
   * Reduced latency in low-load situations.
-  * Extended DSW xstats with migration- and load-related statistics.
+  * Extended DSW xstats with migration and load-related statistics.
+
+* **Updated ipsec-secgw sample application.**
 
-* **Updated ipsec-secgw sample application with following features.**
+  Updated the ``ipsec-secgw`` sample application with the following features:
 
-  * Updated ipsec-secgw application to add event based packet processing.
-    The worker thread(s) would receive events and submit them back to the
-    event device after the processing. This way, multicore scaling and HW
-    assisted scheduling is achieved by making use of the event device
-    capabilities. The event mode currently supports only inline IPsec
-    protocol offload.
+  * Updated the application to add event based packet processing. The worker
+    thread(s) would receive events and submit them back to the event device
+    after the processing. This way, multicore scaling and HW assisted
+    scheduling is achieved by making use of the event device capabilities. The
+    event mode currently only supports inline IPsec protocol offload.
 
-  * Updated ipsec-secgw application to support key sizes for AES-192-CBC,
-    AES-192-GCM, AES-256-GCM algorithms.
+  * Updated the application to support key sizes for AES-192-CBC, AES-192-GCM,
+    AES-256-GCM algorithms.
 
-  * Added IPsec inbound load-distribution support for ipsec-secgw application
-    using NIC load distribution feature(Flow Director).
+  * Added IPsec inbound load-distribution support for the application using
+    NIC load distribution feature (Flow Director).
 
 * **Updated Telemetry Library.**
 
-  The updated Telemetry library has many improvements on the original version
-  to make it more accessible and scalable:
+  The updated Telemetry library has been significantly improved in relation to the
+  original version to make it more accessible and scalable:
 
-  * It enables DPDK libraries and applications provide their own specific
-    telemetry information, rather than being limited to what could be reported
-    through the metrics library.
+  * It now enables DPDK libraries and applications to provide their own
+    specific telemetry information, rather than being limited to what could be
+    reported through the metrics library.
 
   * It is no longer dependent on the external Jansson library, which allows
    Telemetry to be enabled by default.
@@ -287,57 +280,47 @@ New Features
   * The socket handling has been simplified making it easier for clients to
     connect and retrieve information.
 
-* **Added rte_graph library.**
+* **Added the rte_graph library.**
 
-  Graph architecture abstracts the data processing functions as a ``node`` and
-  ``links`` them together to create a complex ``graph`` to enable reusable/modular
-  data processing functions. The graph library provides API to enable graph
-  framework operations such as create, lookup, dump and destroy on graph and node
-  operations such as clone, edge update, and edge shrink, etc.
-  The API also allows to create the stats cluster to monitor per graph and per node stats.
+  The Graph architecture abstracts the data processing functions as a ``node``
+  and ``links`` them together to create a complex ``graph`` to enable
+  reusable/modular data processing functions. The graph library provides APIs
+  to enable graph framework operations such as create, lookup, dump and
+  destroy on graph and node operations such as clone, edge update, and edge
+  shrink, etc.  The API also allows the creation of a stats cluster to monitor
+  per graph and per node statistics.
 
-* **Added rte_node library which consists of a set of packet processing nodes.**
+* **Added the rte_node library.**
 
-  The rte_node library that consists of nodes used by rte_graph library. Each
-  node performs a specific packet processing function based on application
-  configuration. The following nodes are added:
+  Added the ``rte_node`` library that consists of nodes used by ``rte_graph``
+  library. Each node performs a specific packet processing function based on
+  the application configuration. The following nodes are added:
 
-  * Null node: Skeleton node that defines the general structure of a node.
-  * Ethernet device node: Consists of ethernet Rx/Tx nodes as well as ethernet
+  * Null node: A skeleton node that defines the general structure of a node.
+  * Ethernet device node: Consists of Ethernet Rx/Tx nodes as well as Ethernet
     control APIs.
-  * IPv4 lookup node: Consists of ipv4 extract and lpm lookup node. Routes can
-    be configured by the application through ``rte_node_ip4_route_add`` function.
-  * IPv4 rewrite node: Consists of ipv4 and ethernet header rewrite functionality
-    that can be configured through ``rte_node_ip4_rewrite_add`` function.
+  * IPv4 lookup node: Consists of IPv4 extract and LPM lookup node. Routes can
+    be configured by the application through the ``rte_node_ip4_route_add``
+    function.
+  * IPv4 rewrite node: Consists of IPv4 and Ethernet header rewrite
+    functionality that can be configured through the
+    ``rte_node_ip4_rewrite_add`` function.
   * Packet drop node: Frees the packets received to their respective mempool.
 
 * **Added new l3fwd-graph sample application.**
 
-  Added an example application ``l3fwd-graph``. It demonstrates the usage of graph
-  library and node library for packet processing. In addition to the library usage
-  demonstration, this application can use for performance comparison with existing
-  ``l3fwd`` (The static code without any nodes) with the modular ``l3fwd-graph``
-  approach.
+  Added an example application ``l3fwd-graph``. This demonstrates the usage
+  of the graph library and node library for packet processing. In addition to
+  the library usage demonstration, this application can be used for a
+  performance comparison of the existing ``l3fwd`` (static code without any
+  nodes) with the modular ``l3fwd-graph`` approach.
 
-* **Updated testpmd application.**
+* **Updated the testpmd application.**
 
   * Added a new cmdline option ``--rx-mq-mode`` which can be used to test PMD's
     behaviour on handling Rx mq mode.
 
 
-Removed Items
--------------
-
-.. This section should contain removed items in this release. Sample format:
-
-   * Add a short 1-2 sentence description of the removed item
-     in the past tense.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =========================================================
-
-
 API Changes
 -----------
 
@@ -354,7 +337,7 @@ API Changes
    =========================================================
 
 * mempool: The API of ``rte_mempool_populate_iova()`` and
-  ``rte_mempool_populate_virt()`` changed to return 0 instead of -EINVAL
+  ``rte_mempool_populate_virt()`` changed to return 0 instead of ``-EINVAL``
   when there is not enough room to store one object.
 
 
@@ -376,21 +359,6 @@ ABI Changes
 * No ABI change that would break compatibility with DPDK 20.02 and 19.11.
 
 
-Known Issues
-------------
-
-.. This section should contain new known issues in this release. Sample format:
-
-   * **Add title in present tense with full stop.**
-
-     Add a short 1-2 sentence description of the known issue
-     in the present tense. Add information on any known workarounds.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =========================================================
-
-
 Tested Platforms
 ----------------
 
-- 
2.7.5
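
Among the features described above, the simplified telemetry socket handling
can be exercised without any client library; a hedged sketch, assuming a
running DPDK application and the default v2 socket path of the 20.05 library:

    # send one command to the new telemetry socket and read the JSON reply
    echo /info | nc -U /var/run/dpdk/rte/dpdk_telemetry.v2

    # or use the interactive client shipped in usertools (script name assumed)
    ./usertools/dpdk-telemetry.py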


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 20.08 8/9] devtools: support python3 only
  @ 2020-05-22 13:23  4% ` Louise Kilheeney
  2020-05-27  6:15  0%   ` Ray Kinsella
    1 sibling, 1 reply; 200+ results
From: Louise Kilheeney @ 2020-05-22 13:23 UTC (permalink / raw)
  To: dev; +Cc: Louise Kilheeney, Neil Horman, Ray Kinsella

Changed script to explicitly use python3 only.

Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Ray Kinsella <mdr@ashroe.eu>

Signed-off-by: Louise Kilheeney <louise.kilheeney@intel.com>
---
 devtools/update_version_map_abi.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/devtools/update_version_map_abi.py b/devtools/update_version_map_abi.py
index 616412a1c..58aa368f9 100755
--- a/devtools/update_version_map_abi.py
+++ b/devtools/update_version_map_abi.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2019 Intel Corporation
 
@@ -9,7 +9,6 @@
 from the devtools/update-abi.sh utility.
 """
 
-from __future__ import print_function
 import argparse
 import sys
 import re
-- 
2.17.1
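
A quick way to spot scripts that still declare a bare python interpreter, as
this series converts them one by one; a minimal sketch (directories assumed):

    # list python scripts whose shebang does not request python3
    grep -rl --include='*.py' '^#!/usr/bin/env python$' devtools usertools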


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 2/3] drivers: drop workaround for internal libraries
  2020-05-22  6:58  4% [dpdk-dev] [PATCH 0/3] Experimental/internal libraries cleanup David Marchand
  2020-05-22  6:58 17% ` [dpdk-dev] [PATCH 1/3] build: remove special versioning for non stable libraries David Marchand
@ 2020-05-22  6:58  3% ` David Marchand
    2 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-22  6:58 UTC (permalink / raw)
  To: dev
  Cc: thomas, techboard, Ray Kinsella, Neil Horman, Hemant Agrawal,
	Sachin Saxena, Jerin Jacob, Nithin Dabilpuram, Akhil Goyal

Now that all libraries have a single version, we can drop the empty
stable blocks that had been added when moving symbols from stable to
internal ABI.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 drivers/bus/dpaa/rte_bus_dpaa_version.map                   | 6 ++----
 drivers/bus/fslmc/rte_bus_fslmc_version.map                 | 6 ++----
 drivers/common/dpaax/rte_common_dpaax_version.map           | 6 ++----
 drivers/common/octeontx2/rte_common_octeontx2_version.map   | 6 ++----
 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map      | 6 ++----
 drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map        | 6 ++----
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map           | 6 ++----
 drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map | 6 ++----
 drivers/net/dpaa2/rte_pmd_dpaa2_version.map                 | 6 ++----
 9 files changed, 18 insertions(+), 36 deletions(-)

diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 46d42f7d64..491c507119 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,7 +1,3 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
@@ -90,4 +86,6 @@ INTERNAL {
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
+
+	local: *;
 };
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 69e7dc6ad9..0a9947a454 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,7 +1,3 @@
-DPDK_20.0 {
-	local: *;
-};
-
 EXPERIMENTAL {
 	global:
 
@@ -111,4 +107,6 @@ INTERNAL {
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
 	rte_global_active_dqs_list;
+
+	local: *;
 };
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index 49c775c072..ee1ca6801c 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,7 +1,3 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
@@ -23,4 +19,6 @@ INTERNAL {
 	of_n_addr_cells;
 	of_translate_address;
 	rta_sec_era;
+
+	local: *;
 };
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index d26bd71172..9a9969613b 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -1,7 +1,3 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
@@ -42,4 +38,6 @@ INTERNAL {
 	otx2_sso_pf_func_get;
 	otx2_sso_pf_func_set;
 	otx2_unregister_irq;
+
+	local: *;
 };
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 3d863aff4d..1352f576e5 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,10 +1,8 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
 	dpaa2_sec_eventq_attach;
 	dpaa2_sec_eventq_detach;
+
+	local: *;
 };
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index 023e120516..731ea593ad 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,10 +1,8 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
 	dpaa_sec_eventq_attach;
 	dpaa_sec_eventq_detach;
+
+	local: *;
 };
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 89d7cf4957..142547ee38 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,10 +1,8 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
 	rte_dpaa_bpid_info;
 	rte_dpaa_memsegs;
+
+	local: *;
 };
diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
index 8691efdfd8..e6887ceb8f 100644
--- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
+++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
@@ -1,10 +1,8 @@
-DPDK_20.0 {
-	local: *;
-};
-
 INTERNAL {
 	global:
 
 	otx2_npa_lf_fini;
 	otx2_npa_lf_init;
+
+	local: *;
 };
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index b633fdc2a8..c3a457d2b9 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,7 +1,3 @@
-DPDK_20.0 {
-	local: *;
-};
-
 EXPERIMENTAL {
 	global:
 
@@ -15,4 +11,6 @@ INTERNAL {
 
 	dpaa2_eth_eventq_attach;
 	dpaa2_eth_eventq_detach;
+
+	local: *;
 };
-- 
2.23.0


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH 1/3] build: remove special versioning for non stable libraries
  2020-05-22  6:58  4% [dpdk-dev] [PATCH 0/3] Experimental/internal libraries cleanup David Marchand
@ 2020-05-22  6:58 17% ` David Marchand
  2020-05-22  6:58  3% ` [dpdk-dev] [PATCH 2/3] drivers: drop workaround for internal libraries David Marchand
    2 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-22  6:58 UTC (permalink / raw)
  To: dev; +Cc: thomas, techboard, Ray Kinsella, Neil Horman

Having a special versioning for experimental/internal libraries puts an
additional maintenance cost on us, while this status is already announced
in MAINTAINERS and in the library headers/documentation.
Following the discussions and vote at the 05/20 TB meeting [1], use a
single versioning for all libraries in DPDK.

Note: for the ABI check, an exception [2] had been added when tweaking
this special versioning [3].
Prefer explicit libabigail rules (which will be dropped in 20.11).

1: https://mails.dpdk.org/archives/dev/2020-May/168450.html
2: https://git.dpdk.org/dpdk/commit/?id=23d7ad5db41c
3: https://git.dpdk.org/dpdk/commit/?id=ec2b8cd7ed69
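
To make the resulting single scheme concrete, here is a small sketch
(mine, not from the patch) of the soname/filename derivation that the
config/meson.build hunk below implements, for a hypothetical librte_foo:

def so_names(abi_version):
    parts = abi_version.split('.')
    so_version = parts[0] if len(parts) == 2 else '.'.join(parts[:2])
    return ('librte_foo.so.' + abi_version,  # real file name
            'librte_foo.so.' + so_version)   # soname

assert so_names('20.1') == ('librte_foo.so.20.1', 'librte_foo.so.20')
assert so_names('20.0.1') == ('librte_foo.so.20.0.1', 'librte_foo.so.20.0')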

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 buildtools/meson.build       |  3 ---
 config/meson.build           | 16 ++++++----------
 devtools/check-abi.sh        |  5 -----
 devtools/libabigail.abignore | 26 ++++++++++++++++++++++++--
 drivers/meson.build          | 13 +------------
 lib/meson.build              | 13 +------------
 mk/rte.lib.mk                |  5 -----
 7 files changed, 32 insertions(+), 49 deletions(-)

diff --git a/buildtools/meson.build b/buildtools/meson.build
index d5f8291beb..79703b6f93 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -18,6 +18,3 @@ else
 endif
 map_to_def_cmd = py3 + files('map_to_def.py')
 sphinx_wrapper = py3 + files('call-sphinx-build.py')
-
-# stable ABI always starts with "DPDK_"
-is_stable_cmd = [find_program('grep', 'findstr'), '^DPDK_']
diff --git a/config/meson.build b/config/meson.build
index 43ab113106..35975f1030 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -25,18 +25,14 @@ major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
 abi_version = run_command(find_program('cat', 'more'),
 	abi_version_file).stdout().strip()
 
-# Regular libraries have the abi_version as the filename extension
+# Libraries have the abi_version as the filename extension
 # and have the soname be all but the final part of the abi_version.
-# Experimental libraries have soname with '0.major'
-# and the filename suffix as 0.majorminor versions,
-# e.g. v20.1 => librte_stable.so.20.1, librte_experimental.so.0.201
-#    sonames => librte_stable.so.20, librte_experimental.so.0.20
-# e.g. v20.0.1 => librte_stable.so.20.0.1, librte_experimental.so.0.2001
-#      sonames => librte_stable.so.20.0, librte_experimental.so.0.200
+# e.g. v20.1 => librte_foo.so.20.1
+#    sonames => librte_foo.so.20
+# e.g. v20.0.1 => librte_foo.so.20.0.1
+#      sonames => librte_foo.so.20.0
 abi_va = abi_version.split('.')
-stable_so_version = abi_va.length() == 2 ? abi_va[0] : abi_va[0] + '.' + abi_va[1]
-experimental_abi_version = '0.' + abi_va[0] + abi_va[1] + '.' + abi_va[2]
-experimental_so_version = experimental_abi_version
+so_version = abi_va.length() == 2 ? abi_va[0] : abi_va[0] + '.' + abi_va[1]
 
 # extract all version information into the build configuration
 dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index dd9120e69e..e17fedbd9f 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -44,11 +44,6 @@ for dump in $(find $refdir -name "*.dump"); do
 		echo "Skipped glue library $name."
 		continue
 	fi
-	# skip experimental libraries, with a sover starting with 0.
-	if grep -qE "\<soname='[^']*\.so\.0\.[^']*'" $dump; then
-		echo "Skipped experimental library $name."
-		continue
-	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
 		echo "Error: can't find $name in $newdir"
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index becbf842a5..02b290b08f 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -50,9 +50,10 @@
         name = rte_crypto_aead_algorithm_strings
 
 ;;;;;;;;;;;;;;;;;;;;;;
-; Temporary exceptions for new __rte_internal marking till DPDK 20.11
+; Temporary exceptions for new __rte_internal marking and experimental
+; libraries soname changes till DPDK 20.11
 ;;;;;;;;;;;;;;;;;;;;;;
-; Ignore moving OCTEONTX2 stable functions to INTERNAL tag
+; Ignore moving OCTEONTX2 stable functions to INTERNAL
 [suppress_file]
 	file_name_regexp = ^librte_common_octeontx2\.
 [suppress_file]
@@ -77,3 +78,24 @@
         name = rte_dpaa2_mbuf_alloc_bulk
 [suppress_function]
         name_regexp = ^dpaa2?_.*tach$
+; Ignore soname changes for experimental libraries
+[suppress_file]
+	file_name_regexp = ^librte_bbdev\.
+[suppress_file]
+	file_name_regexp = ^librte_bpf\.
+[suppress_file]
+	file_name_regexp = ^librte_compressdev\.
+[suppress_file]
+	file_name_regexp = ^librte_fib\.
+[suppress_file]
+	file_name_regexp = ^librte_flow_classify\.
+[suppress_file]
+	file_name_regexp = ^librte_ipsec\.
+[suppress_file]
+	file_name_regexp = ^librte_rcu\.
+[suppress_file]
+	file_name_regexp = ^librte_rib\.
+[suppress_file]
+	file_name_regexp = ^librte_telemetry\.
+[suppress_file]
+	file_name_regexp = ^librte_stack\.
diff --git a/drivers/meson.build b/drivers/meson.build
index cfb6a833c9..4e5713bb27 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -128,17 +128,6 @@ foreach class:dpdk_driver_classes
 					meson.current_source_dir(),
 					drv_path, lib_name)
 
-			is_stable = run_command(is_stable_cmd,
-				files(version_map)).returncode() == 0
-
-			if is_stable
-				lib_version = abi_version
-				so_version = stable_so_version
-			else
-				lib_version = experimental_abi_version
-				so_version = experimental_so_version
-			endif
-
 			# now build the static driver
 			static_lib = static_library(lib_name,
 				sources,
@@ -183,7 +172,7 @@ foreach class:dpdk_driver_classes
 				c_args: cflags,
 				link_args: lk_args,
 				link_depends: lk_deps,
-				version: lib_version,
+				version: abi_version,
 				soversion: so_version,
 				install: true,
 				install_dir: driver_install_path)
diff --git a/lib/meson.build b/lib/meson.build
index d190d84eff..13b330396c 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -110,17 +110,6 @@ foreach l:libraries
 			version_map = '@0@/@1@/rte_@2@_version.map'.format(
 					meson.current_source_dir(), dir_name, name)
 
-			is_stable = run_command(is_stable_cmd,
-					files(version_map)).returncode() == 0
-
-			if is_stable
-				lib_version = abi_version
-				so_version = stable_so_version
-			else
-				lib_version = experimental_abi_version
-				so_version = experimental_so_version
-			endif
-
 			# first build static lib
 			static_lib = static_library(libname,
 					sources,
@@ -179,7 +168,7 @@ foreach l:libraries
 					include_directories: includes,
 					link_args: lk_args,
 					link_depends: lk_deps,
-					version: lib_version,
+					version: abi_version,
 					soversion: so_version,
 					install: true)
 			shared_dep = declare_dependency(link_with: shared_lib,
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 682b590dba..229ae16814 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -13,11 +13,6 @@ VPATH += $(SRCDIR)
 
 LIBABIVER ?= $(shell cat $(RTE_SRCDIR)/ABI_VERSION)
 SOVER := $(basename $(LIBABIVER))
-ifeq ($(shell grep -s "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
-# EXPERIMENTAL ABI is versioned as 0.major+minor, e.g. 0.201 for 20.1 ABI
-LIBABIVER := 0.$(shell echo $(LIBABIVER) | awk 'BEGIN { FS="." }; { print $$1$$2"."$$3 }')
-SOVER := $(LIBABIVER)
-endif
 
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
 SONAME := $(patsubst %.a,%.so.$(SOVER),$(LIB))
-- 
2.23.0


^ permalink raw reply	[relevance 17%]

* [dpdk-dev] [PATCH 0/3] Experimental/internal libraries cleanup
@ 2020-05-22  6:58  4% David Marchand
  2020-05-22  6:58 17% ` [dpdk-dev] [PATCH 1/3] build: remove special versioning for non stable libraries David Marchand
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: David Marchand @ 2020-05-22  6:58 UTC (permalink / raw)
  To: dev; +Cc: thomas, techboard

Following discussions on the mailing list and the last TB meeting, here
is a series that drops the special versioning for non stable libraries.

Two notes:

- The RIB/FIB library is not referenced in the API doxygen index; is this
  intentional?
- I inspected MAINTAINERS: librte_gro, librte_member and librte_rawdev are
  announced as experimental while their functions are part of the v20
  stable ABI (in .map files + no __rte_experimental marking); a quick
  cross-check sketch follows below.
  I won't touch this for 20.05 but their fate will have to be discussed.
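
A rough rendering of that cross-check (my own illustration; the
'announced' set is hand-collected from MAINTAINERS, and the tree layout
lib/librte_*/rte_*_version.map is assumed):

from pathlib import Path

announced = {'librte_gro', 'librte_member', 'librte_rawdev'}
for mapfile in Path('lib').glob('librte_*/rte_*_version.map'):
    lib = mapfile.parent.name
    if lib in announced and 'EXPERIMENTAL' not in mapfile.read_text():
        print(lib, 'is announced experimental but exports only stable symbols')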

-- 
David Marchand

David Marchand (3):
  build: remove special versioning for non stable libraries
  drivers: drop workaround for internal libraries
  lib: remind experimental status in library headers

 buildtools/meson.build                        |  3 ---
 config/meson.build                            | 16 +++++-------
 devtools/check-abi.sh                         |  5 ----
 devtools/libabigail.abignore                  | 26 +++++++++++++++++--
 drivers/bus/dpaa/rte_bus_dpaa_version.map     |  6 ++---
 drivers/bus/fslmc/rte_bus_fslmc_version.map   |  6 ++---
 .../common/dpaax/rte_common_dpaax_version.map |  6 ++---
 .../rte_common_octeontx2_version.map          |  6 ++---
 .../dpaa2_sec/rte_pmd_dpaa2_sec_version.map   |  6 ++---
 .../dpaa_sec/rte_pmd_dpaa_sec_version.map     |  6 ++---
 .../mempool/dpaa/rte_mempool_dpaa_version.map |  6 ++---
 .../rte_mempool_octeontx2_version.map         |  6 ++---
 drivers/meson.build                           | 13 +---------
 drivers/net/dpaa2/rte_pmd_dpaa2_version.map   |  6 ++---
 lib/librte_bbdev/rte_bbdev.h                  |  3 ++-
 lib/librte_bpf/rte_bpf.h                      |  6 ++++-
 lib/librte_compressdev/rte_compressdev.h      |  6 ++++-
 lib/librte_fib/rte_fib.h                      |  7 +++++
 lib/librte_fib/rte_fib6.h                     |  7 +++++
 lib/librte_flow_classify/rte_flow_classify.h  |  6 +++--
 lib/librte_ipsec/rte_ipsec.h                  |  6 ++++-
 lib/librte_rcu/rte_rcu_qsbr.h                 |  7 ++++-
 lib/librte_rib/rte_rib.h                      |  7 +++++
 lib/librte_rib/rte_rib6.h                     |  7 +++++
 lib/librte_stack/rte_stack.h                  |  7 +++--
 lib/librte_telemetry/rte_telemetry.h          | 10 ++++---
 lib/meson.build                               | 13 +---------
 mk/rte.lib.mk                                 |  5 ----
 28 files changed, 116 insertions(+), 98 deletions(-)

-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 0/4] fix build with GCC 10
  2020-05-20 16:45  0% ` Kevin Traynor
@ 2020-05-21 13:39  0%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-21 13:39 UTC (permalink / raw)
  To: Kevin Traynor; +Cc: dev, david.marchand

20/05/2020 18:45, Kevin Traynor:
> On 20/05/2020 14:58, Thomas Monjalon wrote:
> > These are supposed to be the last patches to support GCC 10.
> > 
> > Thomas Monjalon (4):
> >   net/mvpp2: fix build with gcc 10
> >   examples/vm_power: fix build with -fno-common
> >   examples/vm_power: drop Unix path limit redefinition
> >   devtools: allow warnings in ABI reference build
> > 
> >  devtools/test-build.sh                      | 6 ++----
> >  devtools/test-meson-builds.sh               | 3 +--
> >  drivers/net/mvpp2/mrvl_flow.c               | 4 ++--
> >  examples/vm_power_manager/channel_manager.c | 3 ++-
> >  examples/vm_power_manager/channel_manager.h | 9 ++-------
> >  examples/vm_power_manager/power_manager.c   | 1 -
> >  6 files changed, 9 insertions(+), 17 deletions(-)
> > 
> For series:
> Acked-by: Kevin Traynor <ktraynor@redhat.com>

Applied



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] DPDK-20.05 RC3 day2 quick report
@ 2020-05-21 11:24  3% Peng, Yuan
  0 siblings, 0 replies; 200+ results
From: Peng, Yuan @ 2020-05-21 11:24 UTC (permalink / raw)
  To: dev

DPDK-20.05 RC3 day2 quick report

  *   Created ~400+ new test cases in total for the DPDK 20.05 new features.
  *   10203 cases in total; execution percentage is about 99%, pass rate is about 97%; 5 new issues have been found so far, including one high-severity issue.
  *   Checked build and compile; found 1 new issue, now fixed and verified.
  *   Checked basic NIC PMD (i40e, ixgbe, ice) PF & VF regression; found 3 new PF issues.
  *   Checked virtio regression tests; no new bug found.
  *   Checked cryptodev and compressdev regression; no new issues found so far.
  *   Checked NIC performance, no new issue found so far.
  *   Checked ABI test, no new issue found so far.
  *   Checked 20.05 new features: 1 new issue found so far.

Thank you.
Yuan.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] DPDK Release Status Meeting 21/05/2020
  2020-05-21 11:20  3% [dpdk-dev] DPDK Release Status Meeting 21/05/2020 Ferruh Yigit
@ 2020-05-21 11:24  0% ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-05-21 11:24 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Thomas Monjalon, Ajit Khaparde

On 5/21/2020 12:20 PM, Ferruh Yigit wrote:
> Minutes 21 May 2020
> -------------------
> 
> Agenda:
> * Release Dates
> * -rc3 status
> * Subtrees
> * Opens
> 
> Participants:
> * Arm
> * Debian/Microsoft
> * Intel
> * Marvell
> * Mellanox
> * NXP
> * Red Hat
> 
> 
> Release Dates
> -------------
> 
> * v20.05 dates:
>   * -rc3 is released on Tuesday, 19 May
>     * https://mails.dpdk.org/archives/dev/2020-May/168313.html
>   * -rc4 pushed to		*Sunday 24 May 2020*
>   * Release pushed to		*Tuesday 26 May 2020*
> 
> * v20.08 proposal dates *updated*, please comment:
>   * Proposal/V1:		Friday, 12 June 2020
>   * -rc1:			Wednesday, 8 July 2020
>   * -rc2:			Monday, 20 July 2020
>   * Release:			Tuesday, 4 August 2020
> 
>   * Please send roadmap for the release
> 
> 
> -rc3 status
> -----------
> 
> * Intel testing 80% done, can be completed tomorrow
>   * Only a PMD issue and a few medium priority defects found
>   * Overall looks good
> 
> 
> Subtrees
> --------
> 
> * main
>   * Some gcc10 fixes will be merged
>     * gcc10 may be causing ABI compatibility issues
>   * Started to work on release notes, John will support
>     * Good to have tested HW information from all vendors
> 
> * next-net
>   * Only a few fixes for -rc4
>   * Vendor sub-trees has patches, will pull from them today
> 
> * next-crypto
>   * No update
> 
> * next-eventdev
>   * No update
> 
> * next-virtio
>   * No update
> 
> * next-net-intel
>   * Some fixes in sub-tree already
> 
> * LTS
> 
>   * A set of security releases done
>     * 19.11.2: https://mails.dpdk.org/archives/dev/2020-May/168103.html
>     * 18.11.8: https://mails.dpdk.org/archives/dev/2020-May/168110.html
>     * 20.02.1: https://mails.dpdk.org/archives/dev/2020-May/168102.html
> 
> 
> Opens
> -----
> 
> * Fuzz testing can be used to capture some security issues in advance.
>   * This can be done in the CI.
>   * Luca shared oss-fuzz as reference:
>     * https://oss-fuzz.com/
>     * https://github.com/google/oss-fuzz
> 
> 
> 
> DPDK Release Status Meetings
> ============================
> 
> The DPDK Release Status Meeting is intended for DPDK Committers to discuss
> the status of the master tree and sub-trees, and for project managers to
> track progress or milestone dates.
> 
> The meeting occurs on Thursdays at 8:30 UTC. If you wish to attend just
> send an email to "John McNamara <john.mcnamara@intel.com>" for the invite.
> 


We forgot to mention it, but Ajit sent the "DPDK bugs against 20.05" list
offline; let me put it here:


https://bugs.dpdk.org/show_bug.cgi?id=475
https://bugs.dpdk.org/show_bug.cgi?id=474
https://bugs.dpdk.org/show_bug.cgi?id=473
https://bugs.dpdk.org/show_bug.cgi?id=472
https://bugs.dpdk.org/show_bug.cgi?id=470
https://bugs.dpdk.org/show_bug.cgi?id=465 (Duplicate of 471)
https://bugs.dpdk.org/show_bug.cgi?id=481

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] DPDK Release Status Meeting 21/05/2020
@ 2020-05-21 11:20  3% Ferruh Yigit
  2020-05-21 11:24  0% ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-05-21 11:20 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Thomas Monjalon

Minutes 21 May 2020
-------------------

Agenda:
* Release Dates
* -rc3 status
* Subtrees
* Opens

Participants:
* Arm
* Debian/Microsoft
* Intel
* Marvell
* Mellanox
* NXP
* Red Hat


Release Dates
-------------

* v20.05 dates:
  * -rc3 is released on Tuesday, 19 May
    * https://mails.dpdk.org/archives/dev/2020-May/168313.html
  * -rc4 pushed to		*Sunday 24 May 2020*
  * Release pushed to		*Tuesday 26 May 2020*

* v20.08 proposal dates *updated*, please comment:
  * Proposal/V1:		Friday, 12 June 2020
  * -rc1:			Wednesday, 8 July 2020
  * -rc2:			Monday, 20 July 2020
  * Release:			Tuesday, 4 August 2020

  * Please send roadmap for the release


-rc3 status
-----------

* Intel testing 80% done, can be completed tomorrow
  * Only a PMD issue and a few medium priority defects found
  * Overall looks good


Subtrees
--------

* main
  * Some gcc10 fixes will be merged
    * gcc10 may be causing ABI compatibility issues
  * Started to work on release notes, John will support
    * Good to have tested HW information from all vendors

* next-net
  * Only a few fixes for -rc4
  * Vendor sub-trees has patches, will pull from them today

* next-crypto
  * No update

* next-eventdev
  * No update

* next-virtio
  * No update

* next-net-intel
  * Some fixes in sub-tree already

* LTS

  * A set of security releases done
    * 19.11.2: https://mails.dpdk.org/archives/dev/2020-May/168103.html
    * 18.11.8: https://mails.dpdk.org/archives/dev/2020-May/168110.html
    * 20.02.1: https://mails.dpdk.org/archives/dev/2020-May/168102.html


Opens
-----

* Fuzz testing can be used to capture some security issues in advance.
  * This can be done in the CI.
  * Luca shared oss-fuzz as reference:
    * https://oss-fuzz.com/
    * https://github.com/google/oss-fuzz



DPDK Release Status Meetings
============================

The DPDK Release Status Meeting is intended for DPDK Committers to discuss
the status of the master tree and sub-trees, and for project managers to
track progress or milestone dates.

The meeting occurs on Thursdays at 8:30 UTC. If you wish to attend just
send an email to "John McNamara <john.mcnamara@intel.com>" for the invite.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2] doc: deprecation notice to mark tm spec as experimental
  @ 2020-05-21 10:49  0%     ` Jerin Jacob
  2020-05-24 20:58  0%       ` Nithin Kumar D
  2020-05-24 23:33  0%       ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Jerin Jacob @ 2020-05-21 10:49 UTC (permalink / raw)
  To: Dumitrescu, Cristian
  Cc: Nithin Dabilpuram, Yigit, Ferruh, Richardson, Bruce, thomas,
	bluca, Singh, Jasvinder, arybchenko, Kinsella, Ray, nhorman,
	ktraynor, david.marchand, Mcnamara, John, Kovacevic, Marko, dev,
	jerinj, kkanas, Nithin Dabilpuram

On Tue, May 5, 2020 at 2:25 PM Dumitrescu, Cristian
<cristian.dumitrescu@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Nithin Dabilpuram <nithind1988@gmail.com>
> > Sent: Tuesday, May 5, 2020 9:08 AM
> > To: Yigit, Ferruh <ferruh.yigit@intel.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; Dumitrescu, Cristian
> > <cristian.dumitrescu@intel.com>; thomas@monjalon.net;
> > bluca@debian.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> > arybchenko@solarflare.com; Kinsella, Ray <ray.kinsella@intel.com>;
> > nhorman@tuxdriver.com; ktraynor@redhat.com;
> > david.marchand@redhat.com; Mcnamara, John
> > <john.mcnamara@intel.com>; Kovacevic, Marko
> > <marko.kovacevic@intel.com>
> > Cc: dev@dpdk.org; jerinj@marvell.com; kkanas@marvell.com; Nithin
> > Dabilpuram <ndabilpuram@marvell.com>
> > Subject: [PATCH v2] doc: deprication notice to mark tm spec as experimental
> >
> > From: Nithin Dabilpuram <ndabilpuram@marvell.com>
> >
> > Based on the discussion in the mail thread, it was concluded that
> > all traffic manager APIs (rte_tm.h) need to be kept experimental
> > for a few more releases to support further improvements to the spec.
> >
> > https://mails.dpdk.org/archives/dev/2020-April/164970.html
> >
> > Adding a deprecation notice for the same in advance.
> >
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 1339f54..2c76f36 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -118,3 +118,10 @@ Deprecation Notices
> >    Python 2 support will be completely removed in 20.11.
> >    In 20.08, explicit deprecation warnings will be displayed when running
> >    scripts with Python 2.
> > +
> > +* traffic manager: All traffic manager APIs in ``rte_tm.h`` were mistakenly
> > +  made ABI stable in the v19.11 release. The TM maintainer and other
> > +  contributors have agreed to keep the TM APIs as experimental in
> > +  expectation of additional spec improvements. Therefore, all APIs in
> > +  ``rte_tm.h`` will be marked back as experimental in the v20.11 DPDK
> > +  release. For more details, please see `the thread
> > +  <https://mails.dpdk.org/archives/dev/2020-April/164970.html>`_.
> > --
> > 2.8.4
>
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>


>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 0/4] fix build with GCC 10
  2020-05-20 13:58  3% [dpdk-dev] [PATCH 0/4] fix build with GCC 10 Thomas Monjalon
  2020-05-20 13:58 14% ` [dpdk-dev] [PATCH 4/4] devtools: allow warnings in ABI reference build Thomas Monjalon
  2020-05-20 14:52  0% ` [dpdk-dev] [PATCH 0/4] fix build with GCC 10 David Marchand
@ 2020-05-20 16:45  0% ` Kevin Traynor
  2020-05-21 13:39  0%   ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Kevin Traynor @ 2020-05-20 16:45 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: david.marchand

On 20/05/2020 14:58, Thomas Monjalon wrote:
> These are supposed to be the last patches to support GCC 10.
> 
> Thomas Monjalon (4):
>   net/mvpp2: fix build with gcc 10
>   examples/vm_power: fix build with -fno-common
>   examples/vm_power: drop Unix path limit redefinition
>   devtools: allow warnings in ABI reference build
> 
>  devtools/test-build.sh                      | 6 ++----
>  devtools/test-meson-builds.sh               | 3 +--
>  drivers/net/mvpp2/mrvl_flow.c               | 4 ++--
>  examples/vm_power_manager/channel_manager.c | 3 ++-
>  examples/vm_power_manager/channel_manager.h | 9 ++-------
>  examples/vm_power_manager/power_manager.c   | 1 -
>  6 files changed, 9 insertions(+), 17 deletions(-)
> 
For series:
Acked-by: Kevin Traynor <ktraynor@redhat.com>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 0/4] fix build with GCC 10
  2020-05-20 13:58  3% [dpdk-dev] [PATCH 0/4] fix build with GCC 10 Thomas Monjalon
  2020-05-20 13:58 14% ` [dpdk-dev] [PATCH 4/4] devtools: allow warnings in ABI reference build Thomas Monjalon
@ 2020-05-20 14:52  0% ` David Marchand
  2020-05-20 16:45  0% ` Kevin Traynor
  2 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-20 14:52 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On Wed, May 20, 2020 at 3:58 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> These are supposed to be the last patches to support GCC 10.
>
> Thomas Monjalon (4):
>   net/mvpp2: fix build with gcc 10
>   examples/vm_power: fix build with -fno-common
>   examples/vm_power: drop Unix path limit redefinition
>   devtools: allow warnings in ABI reference build
>
>  devtools/test-build.sh                      | 6 ++----
>  devtools/test-meson-builds.sh               | 3 +--
>  drivers/net/mvpp2/mrvl_flow.c               | 4 ++--
>  examples/vm_power_manager/channel_manager.c | 3 ++-
>  examples/vm_power_manager/channel_manager.h | 9 ++-------
>  examples/vm_power_manager/power_manager.c   | 1 -
>  6 files changed, 9 insertions(+), 17 deletions(-)

For the series,
Acked-by: David Marchand <david.marchand@redhat.com>


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 4/4] devtools: allow warnings in ABI reference build
  2020-05-20 13:58  3% [dpdk-dev] [PATCH 0/4] fix build with GCC 10 Thomas Monjalon
@ 2020-05-20 13:58 14% ` Thomas Monjalon
  2020-05-20 14:52  0% ` [dpdk-dev] [PATCH 0/4] fix build with GCC 10 David Marchand
  2020-05-20 16:45  0% ` Kevin Traynor
  2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-20 13:58 UTC (permalink / raw)
  To: dev; +Cc: david.marchand

There is no point in forcing warning-free compilation when building
an ABI reference. It only prevents compiling the ABI reference of
old releases with recent compilers.

Note: DPDK 20.02 is built (with warnings) by GCC 10 if using -fcommon.
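
In pseudo-driver form (a sketch only, the real logic is in the shell
scripts below): warnings stay fatal for the tree under test, while the
reference build merely keeps the debuginfo needed for the ABI dumps.

import subprocess

def legacy_make(build_dir, extra_cflags):
    # minimal sketch of driving the legacy make build
    subprocess.run(['make', '-j4', 'EXTRA_CFLAGS=' + extra_cflags,
                    'O=' + build_dir], check=True)

legacy_make('build', '-Wfatal-errors -g')     # tree under test
legacy_make('abiref/build', '-Wno-error -g')  # ABI reference, may be old code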

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 devtools/test-build.sh        | 6 ++----
 devtools/test-meson-builds.sh | 3 +--
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/devtools/test-build.sh b/devtools/test-build.sh
index 6e53f86fc8..f013656024 100755
--- a/devtools/test-build.sh
+++ b/devtools/test-build.sh
@@ -68,8 +68,6 @@ J=$DPDK_MAKE_JOBS
 builds_dir=${DPDK_BUILD_TEST_DIR:-.}
 short=false
 unset verbose
-# for ABI checks, we need debuginfo
-test_cflags="-Wfatal-errors -g"
 while getopts hj:sv ARG ; do
 	case $ARG in
 		j ) J=$OPTARG ;;
@@ -248,7 +246,7 @@ for conf in $configs ; do
 	config $dir $target $options
 
 	echo "================== Build $conf"
-	${MAKE} -j$J EXTRA_CFLAGS="$test_cflags $DPDK_DEP_CFLAGS" \
+	${MAKE} -j$J EXTRA_CFLAGS="-Wfatal-errors -g $DPDK_DEP_CFLAGS" \
 		EXTRA_LDFLAGS="$DPDK_DEP_LDFLAGS" $verbose O=$dir
 	! $short || break
 	export RTE_TARGET=$target
@@ -282,7 +280,7 @@ for conf in $configs ; do
 			echo -n "================== Build $conf "
 			echo "($DPDK_ABI_REF_VERSION)"
 			${MAKE} -j$J \
-				EXTRA_CFLAGS="$test_cflags $DPDK_DEP_CFLAGS" \
+				EXTRA_CFLAGS="-Wno-error -g $DPDK_DEP_CFLAGS" \
 				EXTRA_LDFLAGS="$DPDK_DEP_LDFLAGS" $verbose \
 				O=$abirefdir/build
 			export RTE_TARGET=$target
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index e8df017596..18b874fac5 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -74,7 +74,6 @@ config () # <dir> <builddir> <meson options>
 		return
 	fi
 	options=
-	options="$options --werror"
 	if echo $* | grep -qw -- '--default-library=shared' ; then
 		options="$options -Dexamples=all"
 	else
@@ -127,7 +126,7 @@ build () # <directory> <target compiler> <meson options>
 	# skip build if compiler not available
 	command -v ${CC##* } >/dev/null 2>&1 || return 0
 	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $*
+	config $srcdir $builds_dir/$targetdir --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" ]; then
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
-- 
2.26.2


^ permalink raw reply	[relevance 14%]

* [dpdk-dev] [PATCH 0/4] fix build with GCC 10
@ 2020-05-20 13:58  3% Thomas Monjalon
  2020-05-20 13:58 14% ` [dpdk-dev] [PATCH 4/4] devtools: allow warnings in ABI reference build Thomas Monjalon
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Thomas Monjalon @ 2020-05-20 13:58 UTC (permalink / raw)
  To: dev; +Cc: david.marchand

These are supposed to be the last patches to support GCC 10.

Thomas Monjalon (4):
  net/mvpp2: fix build with gcc 10
  examples/vm_power: fix build with -fno-common
  examples/vm_power: drop Unix path limit redefinition
  devtools: allow warnings in ABI reference build

 devtools/test-build.sh                      | 6 ++----
 devtools/test-meson-builds.sh               | 3 +--
 drivers/net/mvpp2/mrvl_flow.c               | 4 ++--
 examples/vm_power_manager/channel_manager.c | 3 ++-
 examples/vm_power_manager/channel_manager.h | 9 ++-------
 examples/vm_power_manager/power_manager.c   | 1 -
 6 files changed, 9 insertions(+), 17 deletions(-)

-- 
2.26.2


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v1 1/2] devtools: add internal ABI version check
  2020-05-19 15:35  4% ` [dpdk-dev] [PATCH v1 1/2] devtools: add internal ABI version check David Marchand
@ 2020-05-19 16:54  4%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-19 16:54 UTC (permalink / raw)
  To: Haiyue Wang
  Cc: dev, Thomas Monjalon, Bruce Richardson, Burakov, Anatoly,
	Neil Horman, Ray Kinsella

On Tue, May 19, 2020 at 5:35 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Thu, Apr 30, 2020 at 7:54 AM Haiyue Wang <haiyue.wang@intel.com> wrote:
> >
> INTERNAL is a newly introduced version; update the shell script that checks
> > whether built libraries are versioned with expected ABI (current ABI,
> > current ABI + 1, EXPERIMENTAL, or INTERNAL).
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> Acked-by: David Marchand <david.marchand@redhat.com>

Series applied, thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v1 1/2] devtools: add internal ABI version check
    @ 2020-05-19 15:35  4% ` David Marchand
  2020-05-19 16:54  4%   ` David Marchand
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2020-05-19 15:35 UTC (permalink / raw)
  To: Haiyue Wang
  Cc: dev, Thomas Monjalon, Bruce Richardson, Burakov, Anatoly,
	Neil Horman, Ray Kinsella

On Thu, Apr 30, 2020 at 7:54 AM Haiyue Wang <haiyue.wang@intel.com> wrote:
>
> INTERNAL is new introduced version, update the shell script that checks
> whether built libraries are versioned with expected ABI (current ABI,
> current ABI + 1, EXPERIMENTAL, or INTERNAL).
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
>  devtools/check-abi-version.sh | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/devtools/check-abi-version.sh b/devtools/check-abi-version.sh
> index 9a3d13546..f0cca42a9 100755
> --- a/devtools/check-abi-version.sh
> +++ b/devtools/check-abi-version.sh
> @@ -4,7 +4,7 @@
>
>  # Check whether library symbols have correct
>  # version (provided ABI number or provided ABI
> -# number + 1 or EXPERIMENTAL).
> +# number + 1 or EXPERIMENTAL or INTERNAL).
>  # Args:
>  #   $1: path of the library .so file
>  #   $2: ABI major version number to check
> @@ -12,7 +12,7 @@
>
>  if [ -z "$1" ]; then
>      echo "Script checks whether library symbols have"
> -    echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL)"
> +    echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL/INTERNAL)"
>      echo "Usage:"
>      echo "  $0 SO_FILE_PATH [ABI_VER]"
>      exit 1
> @@ -41,11 +41,11 @@ for SYM in $(echo "${OBJ_DUMP_OUTPUT}" | awk '{print $(NF-1) "-" $NF}')
>  do
>      version=$(echo $SYM | cut -d'-' -f 1)
>      symbol=$(echo $SYM | cut -d'-' -f 2)
> -    case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
> +    case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL"|"INTERNAL")
>          ;;
>      (*)
>          echo "Warning: symbol $symbol ($version) should be annotated " \
> -             "as ABI version $ABIVER / $NEXT_ABIVER, or EXPERIMENTAL."
> +             "as ABI version $ABIVER / $NEXT_ABIVER, EXPERIMENTAL, or INTERNAL."
>          ret=1
>      ;;
>      esac
> --
> 2.26.2
>

LGTM + tested current master before and after the patch.
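
For reference, the acceptance rule of the updated script boils down to
this predicate (my Python rendering of the shell case statement, keeping
its substring-style matching on purpose):

def version_ok(version, abi_ver):
    next_ver = str(int(abi_ver) + 1)
    return (abi_ver in version or next_ver in version
            or version in ('EXPERIMENTAL', 'INTERNAL'))

assert version_ok('DPDK_20.0', '20')
assert version_ok('INTERNAL', '20')
assert not version_ok('DPDK_19.11', '20')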

Acked-by: David Marchand <david.marchand@redhat.com>

-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v1 2/2] devtools: updating internal symbols ABI version
  @ 2020-05-19 15:10  9%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2020-05-19 15:10 UTC (permalink / raw)
  To: Haiyue Wang, Ray Kinsella
  Cc: dev, Thomas Monjalon, Bruce Richardson, Burakov, Anatoly, Neil Horman

On Thu, Apr 30, 2020 at 7:54 AM Haiyue Wang <haiyue.wang@intel.com> wrote:
>
> INTERNAL is a newly introduced version; update the script so that it
> automatically leaves the internal section exactly as it is.
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
>  devtools/update_version_map_abi.py | 37 +++++++++++++++++++++++++++---
>  1 file changed, 34 insertions(+), 3 deletions(-)
>
> diff --git a/devtools/update_version_map_abi.py b/devtools/update_version_map_abi.py
> index 616412a1c..e2104e61e 100755
> --- a/devtools/update_version_map_abi.py
> +++ b/devtools/update_version_map_abi.py
> @@ -50,7 +50,10 @@ def __parse_map_file(f_in):
>      stable_lines = set()
>      # copy experimental section as is
>      experimental_lines = []
> +    # copy internal section as is
> +    internal_lines = []
>      in_experimental = False
> +    in_internal = False
>      has_stable = False
>
>      # gather all functions
> @@ -63,6 +66,7 @@ def __parse_map_file(f_in):
>          if match:
>              # whatever section this was, it's not active any more
>              in_experimental = False
> +            in_internal = False
>              continue
>
>          # if we're in the middle of experimental section, we need to copy
> @@ -71,6 +75,12 @@ def __parse_map_file(f_in):
>              experimental_lines += [line]
>              continue
>
> +        # if we're in the middle of internal section, we need to copy
> +        # the section verbatim, so just add the line
> +        if in_internal:
> +            internal_lines += [line]
> +            continue
> +
>          # skip empty lines
>          if not line:
>              continue
> @@ -81,7 +91,9 @@ def __parse_map_file(f_in):
>              cur_section = match.group("version")
>              # is it experimental?
>              in_experimental = cur_section == "EXPERIMENTAL"
> -            if not in_experimental:
> +            # is it internal?
> +            in_internal = cur_section == "INTERNAL"
> +            if not in_experimental and not in_internal:
>                  has_stable = True
>              continue
>
> @@ -90,7 +102,7 @@ def __parse_map_file(f_in):
>          if match:
>              stable_lines.add(match.group("func"))
>
> -    return has_stable, stable_lines, experimental_lines
> +    return has_stable, stable_lines, experimental_lines, internal_lines
>
>
>  def __generate_stable_abi(f_out, abi_version, lines):
> @@ -132,6 +144,20 @@ def __generate_experimental_abi(f_out, lines):
>      # end section
>      print("};", file=f_out)
>
> +def __generate_internal_abi(f_out, lines):
> +    # start internal section
> +    print("INTERNAL {", file=f_out)
> +
> +    # print all internal lines as they were
> +    for line in lines:
> +        # don't print empty whitespace
> +        if not line:
> +            print("", file=f_out)
> +        else:
> +            print("\t{}".format(line), file=f_out)
> +
> +    # end section
> +    print("};", file=f_out)
>
>  def __main():
>      arg_parser = argparse.ArgumentParser(
> @@ -158,7 +184,7 @@ def __main():
>          sys.exit(1)
>
>      with open(parsed.map_file) as f_in:
> -        has_stable, stable_lines, experimental_lines = __parse_map_file(f_in)
> +        has_stable, stable_lines, experimental_lines, internal_lines = __parse_map_file(f_in)
>
>      with open(parsed.map_file, 'w') as f_out:
>          need_newline = has_stable and experimental_lines
> @@ -169,6 +195,11 @@ def __main():
>              print(file=f_out)
>          if experimental_lines:
>              __generate_experimental_abi(f_out, experimental_lines)
> +        if internal_lines:
> +            if has_stable or experimental_lines:
> +              # separate sections with a newline
> +              print(file=f_out)
> +            __generate_internal_abi(f_out, internal_lines)
>
>
>  if __name__ == "__main__":
> --
> 2.26.2
>

LGTM.
Acked-by: David Marchand <david.marchand@redhat.com>


One comment, trying to update to ABI 21, the script refuses and
expects a 21.X format:
$ ./devtools/update-abi.sh 21
ABI version must be formatted as MAJOR.MINOR version

And passing 21.0 then generates DPDK_21.0 blocks which I understand
are incorrect.
$ git diff drivers/common/iavf/rte_common_iavf_version.map
diff --git a/drivers/common/iavf/rte_common_iavf_version.map
b/drivers/common/iavf/rte_common_iavf_version.map
index 92ceac108d..9a1ef076aa 100644
--- a/drivers/common/iavf/rte_common_iavf_version.map
+++ b/drivers/common/iavf/rte_common_iavf_version.map
@@ -1,11 +1,11 @@
-DPDK_21 {
+DPDK_21.0 {
        global:
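
One possible relaxation (an untested sketch, not something in the tree):
accept a bare MAJOR and normalize '21' and '21.0' to the same block name,
so neither input produces a DPDK_21.0 block:

import re

def abi_block_name(ver):
    m = re.match(r'^(\d+)(?:\.(\d+))?$', ver)
    if m is None:
        raise ValueError('ABI version must be MAJOR or MAJOR.MINOR')
    major, minor = m.groups()
    if minor in (None, '0'):
        return 'DPDK_' + major
    return 'DPDK_%s.%s' % (major, minor)

assert abi_block_name('21') == 'DPDK_21'
assert abi_block_name('21.0') == 'DPDK_21'
assert abi_block_name('21.2') == 'DPDK_21.2'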


-- 
David Marchand


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [PATCH v6] meter: provide experimental alias of API for old apps
  2020-05-19 13:26  0%   ` Dumitrescu, Cristian
@ 2020-05-19 14:24  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-19 14:24 UTC (permalink / raw)
  To: Yigit, Ferruh, Dumitrescu, Cristian
  Cc: Ray Kinsella, Neil Horman, Eelco Chaudron, dev, David Marchand,
	stable, Luca Boccassi, Richardson, Bruce, Stokes, Ian,
	Andrzej Ostruszka, techboard

19/05/2020 15:26, Dumitrescu, Cristian:
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > 
> > On v20.02 some meter APIs have been matured and symbols moved from
> > EXPERIMENTAL to DPDK_20.0.1 block.
> > 
> > This can break the applications that were using these mentioned APIs on
> > v19.11. Although there is no modification on the APIs and the action is
> > positive and matures the APIs, the effect can be negative to
> > applications.
> > 
> > This patch provides aliasing by duplicating the existing and versioned
> > symbols as experimental.
> > 
> > Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
> > aliasing done between EXPERIMENTAL and DPDK_21.
> > 
> > With DPDK_21 ABI (DPDK v20.11) all aliasing will be removed and only
> > stable version of the APIs will remain.
> > 
> > Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> 
> Thanks, Ferruh and Ray!
> 
> I am OK to let this stay temporarily until release 20.11.

Applied, thanks



> Regarding the larger API breakage problem,
> this method only fixes the case of experimental APIs transitioning
> to non-experimental status with no modifications,

Yes

> but it does not handle the following possible cases:
> 
> 	1. Experimental APIs transitioning to non-experimental status with some modifications.

If there is a modification, it should mature as experimental first.

> 	2. Experimental APIs being removed.

No guarantee that an experimental API remains forever.

> 	3. Non-experimental APIs transitioning to deprecated status.

I don't think we need to deprecate experimental API.
We can change or remove them freely at any time.

> We need a clear procedure & timing for all these cases
> to avoid similar situations in the future.
> Likely a good topic for techboard discussion.

We can ask if there are different opinions.
I think the experimental status is quite clear: no guarantee.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6] meter: provide experimental alias of API for old apps
  2020-05-19 12:16 10% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
  2020-05-19 13:26  0%   ` Dumitrescu, Cristian
@ 2020-05-19 14:22  0%   ` Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 14:22 UTC (permalink / raw)
  To: Ferruh Yigit, Cristian Dumitrescu, Neil Horman, Eelco Chaudron
  Cc: dev, Thomas Monjalon, David Marchand, stable, Luca Boccassi,
	Bruce Richardson, Ian Stokes, Andrzej Ostruszka



On 19/05/2020 13:16, Ferruh Yigit wrote:
> On v20.02 some meter APIs have been matured and symbols moved from
> EXPERIMENTAL to DPDK_20.0.1 block.
> 
> This can break the applications that were using these mentioned APIs on
> v19.11. Although there is no modification on the APIs and the action is
> positive and matures the APIs, the effect can be negative to
> applications.
> 
> This patch provides aliasing by duplicating the existing and versioned
> symbols as experimental.
> 
> Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
> aliasing done between EXPERIMENTAL and DPDK_21.
> 
> With DPDK_21 ABI (DPDK v20.11) all aliasing will be removed and only
> stable version of the APIs will remain.
> 
> Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Luca Boccassi <bluca@debian.org>
> Cc: David Marchand <david.marchand@redhat.com>
> Cc: Bruce Richardson <bruce.richardson@intel.com>
> Cc: Ian Stokes <ian.stokes@intel.com>
> Cc: Eelco Chaudron <echaudro@redhat.com>
> Cc: Andrzej Ostruszka <amo@semihalf.com>
> Cc: Ray Kinsella <mdr@ashroe.eu>
> Cc: cristian.dumitrescu@intel.com
> 
> v2:
> * Commit log updated
> 
> v3:
> * added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro
> 
> v4:
> * update script name in commit log, remove empty line
> 
> v5:
> * Patch has only meter library changes
> * Aliasing moved into rte_meter_compat.c
> 
> v6:
> * Move aliasing back to rte_meter.c
> * Rename static function to have '__' prefix
> * Add comment to alias code
> ---
>  lib/librte_meter/meson.build           |  1 +
>  lib/librte_meter/rte_meter.c           | 73 ++++++++++++++++++++++++--
>  lib/librte_meter/rte_meter_version.map |  8 +++
>  3 files changed, 79 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
> index 646fd4d43f..fce0368437 100644
> --- a/lib/librte_meter/meson.build
> +++ b/lib/librte_meter/meson.build
> @@ -3,3 +3,4 @@
>  
>  sources = files('rte_meter.c')
>  headers = files('rte_meter.h')
> +use_function_versioning = true
> diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
> index da01429a8b..149cf58bdd 100644
> --- a/lib/librte_meter/rte_meter.c
> +++ b/lib/librte_meter/rte_meter.c
> @@ -9,6 +9,7 @@
>  #include <rte_common.h>
>  #include <rte_log.h>
>  #include <rte_cycles.h>
> +#include <rte_function_versioning.h>
>  
>  #include "rte_meter.h"
>  
> @@ -119,8 +120,15 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
>  	return 0;
>  }
>  
> -int
> -rte_meter_trtcm_rfc4115_profile_config(
> +/*
> + *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_profile_config'
> + *  to support both EXPERIMENTAL and DPDK_21 versions
> + *  This versioning will be removed on next ABI version (v20.11)
> + *  and '__rte_meter_trtcm_rfc4115_profile_config' will be restored back to
> + *  'rte_meter_trtcm_rfc4115_profile_config' without versioning.
> + */
> +static int
> +__rte_meter_trtcm_rfc4115_profile_config(
>  	struct rte_meter_trtcm_rfc4115_profile *p,
>  	struct rte_meter_trtcm_rfc4115_params *params)
>  {
> @@ -145,7 +153,42 @@ rte_meter_trtcm_rfc4115_profile_config(
>  }
>  
>  int
> -rte_meter_trtcm_rfc4115_config(
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
> +		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
> +
> +/*
> + *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_config'
> + *  to support both EXPERIMENTAL and DPDK_21 versions
> + *  This versioning will be removed on next ABI version (v20.11)
> + *  and '__rte_meter_trtcm_rfc4115_config' will be restored back to
> + *  'rte_meter_trtcm_rfc4115_config' without versioning.
> + */
> +static int
> +__rte_meter_trtcm_rfc4115_config(
>  	struct rte_meter_trtcm_rfc4115 *m,
>  	struct rte_meter_trtcm_rfc4115_profile *p)
>  {
> @@ -160,3 +203,27 @@ rte_meter_trtcm_rfc4115_config(
>  
>  	return 0;
>  }
> +
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return __rte_meter_trtcm_rfc4115_config(m, p);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
> +		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return __rte_meter_trtcm_rfc4115_config(m, p);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
> diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
> index 2c7dadbcac..b493bcebe9 100644
> --- a/lib/librte_meter/rte_meter_version.map
> +++ b/lib/librte_meter/rte_meter_version.map
> @@ -20,4 +20,12 @@ DPDK_21 {
>  	rte_meter_trtcm_rfc4115_color_blind_check;
>  	rte_meter_trtcm_rfc4115_config;
>  	rte_meter_trtcm_rfc4115_profile_config;
> +
>  } DPDK_20.0;
> +
> +EXPERIMENTAL {
> +       global:
> +
> +	rte_meter_trtcm_rfc4115_config;
> +	rte_meter_trtcm_rfc4115_profile_config;
> +};
> 
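
As a side note on verifying the aliasing (my own sketch, with an assumed
meson build path): after the build, each rfc4115 entry point should show
up twice in the dynamic symbol table, once per version tag.

import subprocess

# assumed build layout: build/lib/librte_meter.so
out = subprocess.run(['objdump', '-T', 'build/lib/librte_meter.so'],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if 'rfc4115' in line:
        print(line)
# expected, roughly, for each function:
#   ... EXPERIMENTAL  rte_meter_trtcm_rfc4115_config
#   ... DPDK_21       rte_meter_trtcm_rfc4115_config
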
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-18 17:18  4%       ` Thomas Monjalon
  2020-05-18 17:34  4%         ` Ferruh Yigit
@ 2020-05-19 14:14  4%         ` Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 14:14 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Ferruh Yigit, Luca Boccassi, David Marchand,
	Bruce Richardson, Ian Stokes, Eelco Chaudron, Andrzej Ostruszka,
	Kevin Traynor, John McNamara, Marko Kovacevic,
	Cristian Dumitrescu, Neil Horman



On 18/05/2020 18:18, Thomas Monjalon wrote:
> 16/05/2020 13:53, Neil Horman:
>> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>
>>> On v20.02 some APIs matured and symbols moved from EXPERIMENTAL to
>>> DPDK_20.0.1 block.
>>>
>>> This had the effect of breaking the applications that were using these
>>> APIs on v19.11. Although there is no modification of the APIs and the
>>> action is positive and matures the APIs, the effect can be negative to
>>> applications.
>>>
>>> When a maintainer is promoting an API to become part of the next major
>>> ABI version by removing the experimental tag. The maintainer may
>>> choose to offer an alias to the experimental tag, to prevent these
>>> breakages in future.
>>>
>>> The following changes are made to enabling aliasing:
>>>
>>> Updated to the abi policy and abi versioning documents.
>>>
>>> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
>>>
>>> Updated the 'check-symbols.sh' buildtool, which was complaining that the
>>> symbol is in EXPERIMENTAL tag in .map file but it is not in the
>>> .experimental section (__rte_experimental tag is missing).
>>> Updated tool in a way it won't complain if the symbol in the
>>> EXPERIMENTAL tag duplicated in some other block in .map file (versioned)
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>>>
>> Acked-by: Neil Horman <nhorman@tuxdriver.com>
> 
> Applied with few typos fixed, thanks.
> 
> 
> 

Thanks Thomas. 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-18 18:32  4%             ` Ferruh Yigit
@ 2020-05-19 14:13  4%               ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 14:13 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon
  Cc: dev, Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu, Neil Horman



On 18/05/2020 19:32, Ferruh Yigit wrote:
> On 5/18/2020 6:51 PM, Thomas Monjalon wrote:
>> 18/05/2020 19:34, Ferruh Yigit:
>>> On 5/18/2020 6:18 PM, Thomas Monjalon wrote:
>>>> 16/05/2020 13:53, Neil Horman:
>>>>> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
>>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>>
>>>>>> On v20.02 some APIs matured and symbols moved from EXPERIMENTAL to
>>>>>> DPDK_20.0.1 block.
>>>>>>
>>>>>> This had the effect of breaking the applications that were using these
>>>>>> APIs on v19.11. Although there is no modification of the APIs and the
>>>>>> action is positive and matures the APIs, the effect can be negative to
>>>>>> applications.
>>>>>>
>>>>>> When a maintainer is promoting an API to become part of the next major
>>>>>> ABI version by removing the experimental tag. The maintainer may
>>>>>> choose to offer an alias to the experimental tag, to prevent these
>>>>>> breakages in future.
>>>>>>
>>>>>> The following changes are made to enabling aliasing:
>>>>>>
>>>>>> Updated to the abi policy and abi versioning documents.
>>>>>>
>>>>>> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
>>>>>>
>>>>>> Updated the 'check-symbols.sh' buildtool, which was complaining that the
>>>>>> symbol is in EXPERIMENTAL tag in .map file but it is not in the
>>>>>> .experimental section (__rte_experimental tag is missing).
>>>>>> Updated tool in a way it won't complain if the symbol in the
>>>>>> EXPERIMENTAL tag duplicated in some other block in .map file (versioned)
>>>>>>
>>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>>>>>>
>>>>> Acked-by: Neil Horman <nhorman@tuxdriver.com>
>>>>
>>>> Applied with few typos fixed, thanks.
>>>>
>>>
>>> Is a new version of the meter library required?
>>
>> I think yes, Cristian is asking for some changes.
>>
> 
> done: https://patches.dpdk.org/patch/70399/
> 

Thanks Ferruh.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6] meter: provide experimental alias of API for old apps
  2020-05-19 12:16 10% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
@ 2020-05-19 13:26  0%   ` Dumitrescu, Cristian
  2020-05-19 14:24  0%     ` Thomas Monjalon
  2020-05-19 14:22  0%   ` Ray Kinsella
  1 sibling, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2020-05-19 13:26 UTC (permalink / raw)
  To: Yigit, Ferruh, Ray Kinsella, Neil Horman, Eelco Chaudron
  Cc: dev, Thomas Monjalon, David Marchand, stable, Luca Boccassi,
	Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka, techboard



> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Tuesday, May 19, 2020 1:16 PM
> To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Ray Kinsella
> <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>; Eelco
> Chaudron <echaudro@redhat.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; David Marchand <david.marchand@redhat.com>;
> stable@dpdk.org; Luca Boccassi <bluca@debian.org>; Richardson, Bruce
> <bruce.richardson@intel.com>; Stokes, Ian <ian.stokes@intel.com>; Andrzej
> Ostruszka <amo@semihalf.com>
> Subject: [PATCH v6] meter: provide experimental alias of API for old apps
> 
> On v20.02 some meter APIs have been matured and symbols moved from
> EXPERIMENTAL to DPDK_20.0.1 block.
> 
> This can break the applications that were using these mentioned APIs on
> v19.11. Although there is no modification on the APIs and the action is
> positive and matures the APIs, the effect can be negative to
> applications.
> 
> This patch provides aliasing by duplicating the existing and versioned
> symbols as experimental.
> 
> Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
> aliasing done between EXPERIMENTAL and DPDK_21.
> 
> With DPDK_21 ABI (DPDK v20.11) all aliasing will be removed and only
> stable version of the APIs will remain.
> 
> Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Luca Boccassi <bluca@debian.org>
> Cc: David Marchand <david.marchand@redhat.com>
> Cc: Bruce Richardson <bruce.richardson@intel.com>
> Cc: Ian Stokes <ian.stokes@intel.com>
> Cc: Eelco Chaudron <echaudro@redhat.com>
> Cc: Andrzej Ostruszka <amo@semihalf.com>
> Cc: Ray Kinsella <mdr@ashroe.eu>
> Cc: cristian.dumitrescu@intel.com
> 
> v2:
> * Commit log updated
> 
> v3:
> * added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro
> 
> v4:
> * update script name in commit log, remove empty line
> 
> v5:
> * Patch has only meter library changes
> * Aliasing moved into rte_meter_compat.c
> 
> v6:
> * Move aliasing back to rte_meter.c
> * Rename static function to have '__' prefix
> * Add comment to alias code
> ---
>  lib/librte_meter/meson.build           |  1 +
>  lib/librte_meter/rte_meter.c           | 73 ++++++++++++++++++++++++--
>  lib/librte_meter/rte_meter_version.map |  8 +++
>  3 files changed, 79 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
> index 646fd4d43f..fce0368437 100644
> --- a/lib/librte_meter/meson.build
> +++ b/lib/librte_meter/meson.build
> @@ -3,3 +3,4 @@
> 
>  sources = files('rte_meter.c')
>  headers = files('rte_meter.h')
> +use_function_versioning = true
> diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
> index da01429a8b..149cf58bdd 100644
> --- a/lib/librte_meter/rte_meter.c
> +++ b/lib/librte_meter/rte_meter.c
> @@ -9,6 +9,7 @@
>  #include <rte_common.h>
>  #include <rte_log.h>
>  #include <rte_cycles.h>
> +#include <rte_function_versioning.h>
> 
>  #include "rte_meter.h"
> 
> @@ -119,8 +120,15 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
>  	return 0;
>  }
> 
> -int
> -rte_meter_trtcm_rfc4115_profile_config(
> +/*
> + *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_profile_config'
> + *  to support both EXPERIMENTAL and DPDK_21 versions
> + *  This versioning will be removed on next ABI version (v20.11)
> + *  and '__rte_meter_trtcm_rfc4115_profile_config' will be restored back to
> + *  'rte_meter_trtcm_rfc4115_profile_config' without versioning.
> + */
> +static int
> +__rte_meter_trtcm_rfc4115_profile_config(
>  	struct rte_meter_trtcm_rfc4115_profile *p,
>  	struct rte_meter_trtcm_rfc4115_params *params)
>  {
> @@ -145,7 +153,42 @@ rte_meter_trtcm_rfc4115_profile_config(
>  }
> 
>  int
> -rte_meter_trtcm_rfc4115_config(
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
> +		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
> +
> +/*
> + *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_config'
> + *  to support both EXPERIMENTAL and DPDK_21 versions
> + *  This versioning will be removed on next ABI version (v20.11)
> + *  and '__rte_meter_trtcm_rfc4115_config' will be restored back to
> + *  'rte_meter_trtcm_rfc4115_config' without versioning.
> + */
> +static int
> +__rte_meter_trtcm_rfc4115_config(
>  	struct rte_meter_trtcm_rfc4115 *m,
>  	struct rte_meter_trtcm_rfc4115_profile *p)
>  {
> @@ -160,3 +203,27 @@ rte_meter_trtcm_rfc4115_config(
> 
>  	return 0;
>  }
> +
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return __rte_meter_trtcm_rfc4115_config(m, p);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
> +		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return __rte_meter_trtcm_rfc4115_config(m, p);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
> diff --git a/lib/librte_meter/rte_meter_version.map
> b/lib/librte_meter/rte_meter_version.map
> index 2c7dadbcac..b493bcebe9 100644
> --- a/lib/librte_meter/rte_meter_version.map
> +++ b/lib/librte_meter/rte_meter_version.map
> @@ -20,4 +20,12 @@ DPDK_21 {
>  	rte_meter_trtcm_rfc4115_color_blind_check;
>  	rte_meter_trtcm_rfc4115_config;
>  	rte_meter_trtcm_rfc4115_profile_config;
> +
>  } DPDK_20.0;
> +
> +EXPERIMENTAL {
> +       global:
> +
> +	rte_meter_trtcm_rfc4115_config;
> +	rte_meter_trtcm_rfc4115_profile_config;
> +};
> --
> 2.25.4

Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>

Thanks, Ferruh and Ray!

I am OK to leave this in temporarily until release 20.11.

Regarding the larger API breakage problem, this method only fixes the case of experimental APIs transitioning to non-experimental status with no modifications; it does not handle the following possible cases:

	1. Experimental APIs transitioning to non-experimental status with some modifications.
	2. Experimental APIs being removed.
	3. Non-experimental APIs transitioning to deprecated status.

We need a clear procedure & timing for all these cases to avoid similar situations in the future. Likely a good topic for techboard discussion.
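
For case 1, as an illustration only, the same rte_function_versioning.h
macros could keep the old experimental signature alive while the new one
becomes the stable default. A minimal sketch, where the rte_foo names,
the extra 'flags' parameter and the _v21 suffix are all hypothetical
(not from any DPDK library):

#include <stdint.h>
#include <rte_function_versioning.h>

struct rte_foo;

/* New, stable signature: becomes the default DPDK_21 symbol. */
int rte_foo_config_v21(struct rte_foo *f, uint32_t flags);
int
rte_foo_config_v21(struct rte_foo *f, uint32_t flags)
{
	(void)f;
	(void)flags;
	return 0; /* real implementation goes here */
}
BIND_DEFAULT_SYMBOL(rte_foo_config, _v21, 21);

/* Old experimental signature: kept as an EXPERIMENTAL alias that
 * forwards to the new implementation with a default 'flags' value.
 */
int rte_foo_config_e(struct rte_foo *f);
int
rte_foo_config_e(struct rte_foo *f)
{
	return rte_foo_config_v21(f, 0);
}
VERSION_SYMBOL_EXPERIMENTAL(rte_foo_config, _e);

The matching EXPERIMENTAL and DPDK_21 blocks would still be needed in
the .map file, as in the patch above.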

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6] meter: provide experimental alias of API for old apps
                     ` (2 preceding siblings ...)
  2020-05-18 18:30  2% ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
@ 2020-05-19 12:16 10% ` Ferruh Yigit
  2020-05-19 13:26  0%   ` Dumitrescu, Cristian
  2020-05-19 14:22  0%   ` Ray Kinsella
  3 siblings, 2 replies; 200+ results
From: Ferruh Yigit @ 2020-05-19 12:16 UTC (permalink / raw)
  To: Cristian Dumitrescu, Ray Kinsella, Neil Horman, Eelco Chaudron
  Cc: dev, Ferruh Yigit, Thomas Monjalon, David Marchand, stable,
	Luca Boccassi, Bruce Richardson, Ian Stokes, Andrzej Ostruszka

On v20.02 some meter APIs have matured and their symbols moved from
EXPERIMENTAL to the DPDK_20.0.1 block.

This can break applications that were using these APIs on v19.11.
Although there is no modification of the APIs and the action is
positive and matures the APIs, the effect on applications can be
negative.

This patch provides aliasing by duplicating the existing and versioned
symbols as experimental.

Since the symbols moved from the DPDK_20.0.1 to the DPDK_21 block in
v20.05, the aliasing is done between EXPERIMENTAL and DPDK_21.

With the DPDK_21 ABI (DPDK v20.11) all aliasing will be removed and
only the stable version of the APIs will remain.

Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Luca Boccassi <bluca@debian.org>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ian Stokes <ian.stokes@intel.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Andrzej Ostruszka <amo@semihalf.com>
Cc: Ray Kinsella <mdr@ashroe.eu>
Cc: cristian.dumitrescu@intel.com

v2:
* Commit log updated

v3:
* added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro

v4:
* update script name in commit log, remove empty line

v5:
* Patch has only meter library changes
* Aliasing moved into rte_meter_compat.c

v6:
* Move aliasing back to rte_meter.c
* Rename static function to have '__' prefix
* Add comment to alias code
---
 lib/librte_meter/meson.build           |  1 +
 lib/librte_meter/rte_meter.c           | 73 ++++++++++++++++++++++++--
 lib/librte_meter/rte_meter_version.map |  8 +++
 3 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
index 646fd4d43f..fce0368437 100644
--- a/lib/librte_meter/meson.build
+++ b/lib/librte_meter/meson.build
@@ -3,3 +3,4 @@
 
 sources = files('rte_meter.c')
 headers = files('rte_meter.h')
+use_function_versioning = true
diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
index da01429a8b..149cf58bdd 100644
--- a/lib/librte_meter/rte_meter.c
+++ b/lib/librte_meter/rte_meter.c
@@ -9,6 +9,7 @@
 #include <rte_common.h>
 #include <rte_log.h>
 #include <rte_cycles.h>
+#include <rte_function_versioning.h>
 
 #include "rte_meter.h"
 
@@ -119,8 +120,15 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	return 0;
 }
 
-int
-rte_meter_trtcm_rfc4115_profile_config(
+/*
+ *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_profile_config'
+ *  to support both EXPERIMENTAL and DPDK_21 versions
+ *  This versioning will be removed on next ABI version (v20.11)
+ *  and '__rte_meter_trtcm_rfc4115_profile_config' will be restored back to
+ *  'rte_meter_trtcm_rfc4115_profile_config' without versioning.
+ */
+static int
+__rte_meter_trtcm_rfc4115_profile_config(
 	struct rte_meter_trtcm_rfc4115_profile *p,
 	struct rte_meter_trtcm_rfc4115_params *params)
 {
@@ -145,7 +153,42 @@ rte_meter_trtcm_rfc4115_profile_config(
 }
 
 int
-rte_meter_trtcm_rfc4115_config(
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
+		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
+
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return __rte_meter_trtcm_rfc4115_profile_config(p, params);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
+
+/*
+ *  ABI aliasing done for 'rte_meter_trtcm_rfc4115_config'
+ *  to support both EXPERIMENTAL and DPDK_21 versions
+ *  This versioning will be removed on next ABI version (v20.11)
+ *  and '__rte_meter_trtcm_rfc4115_config' will be restored back to
+ *  'rte_meter_trtcm_rfc4115_config' without versioning.
+ */
+static int
+__rte_meter_trtcm_rfc4115_config(
 	struct rte_meter_trtcm_rfc4115 *m,
 	struct rte_meter_trtcm_rfc4115_profile *p)
 {
@@ -160,3 +203,27 @@ rte_meter_trtcm_rfc4115_config(
 
 	return 0;
 }
+
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return __rte_meter_trtcm_rfc4115_config(m, p);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
+		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
+
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return __rte_meter_trtcm_rfc4115_config(m, p);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 2c7dadbcac..b493bcebe9 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -20,4 +20,12 @@ DPDK_21 {
 	rte_meter_trtcm_rfc4115_color_blind_check;
 	rte_meter_trtcm_rfc4115_config;
 	rte_meter_trtcm_rfc4115_profile_config;
+
 } DPDK_20.0;
+
+EXPERIMENTAL {
+       global:
+
+	rte_meter_trtcm_rfc4115_config;
+	rte_meter_trtcm_rfc4115_profile_config;
+};
-- 
2.25.4
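
For readers unfamiliar with the mechanism: the _s/_e wrappers rely on
GNU symbol versioning. Roughly (a sketch of what the macros emit, not
the exact preprocessor expansion), the result is a pair of .symver
directives per API:

__asm__(".symver rte_meter_trtcm_rfc4115_config_s, "
	"rte_meter_trtcm_rfc4115_config@@DPDK_21");
__asm__(".symver rte_meter_trtcm_rfc4115_config_e, "
	"rte_meter_trtcm_rfc4115_config@EXPERIMENTAL");

The @@ form marks DPDK_21 as the default version, so newly linked
applications bind to it, while binaries built against the v19.11
experimental symbol keep resolving through the @EXPERIMENTAL alias.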


^ permalink raw reply	[relevance 10%]

* Re: [dpdk-dev] [PATCH v8 05/13] net/dpaa: move internal symbols into INTERNAL section
  2020-05-19 11:39  0%       ` Hemant Agrawal
@ 2020-05-19 11:41  0%         ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 11:41 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 19/05/2020 12:39, Hemant Agrawal wrote:
> Hi Ray,
> 
>> On 15/05/2020 10:47, Hemant Agrawal wrote:
>>> This patch moves the internal symbols to INTERNAL sections so that any
>>> change in them is not reported as ABI breakage.
>>>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> ---
>>>  devtools/libabigail.abignore              | 2 ++
>>>  drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
>>>  drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
>>>  3 files changed, 11 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/devtools/libabigail.abignore
>>> b/devtools/libabigail.abignore index 42f9469221..7b6358c394 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -63,3 +63,5 @@
>>>  	name_regexp = ^rte_dpaa_bpid_info
>>>  [suppress_variable]
>>>  	name_regexp = ^rte_dpaa2_bpid_info
>>> +[suppress_function]
>>> +        name_regexp = ^dpaa
>>
>> This rule ends up being very general.
>> Could we do something more specific like ...
>>
>> ^dpaa_\.attach
> 
> [Hemant] I am not sure how much time I have to check and test again before David closes RC3
> - meanwhile I am experimenting with these.
> These are not serious issues. dpaa is always internal for the libraries. No symbol starting with the dpaa name is exposed to applications.
> This change will anyway go away in 20.11.
>

if dpaa is always internal - I can live with a general rule.

> 
>>
>> it should catch
>>
>> dpaa_eth_eventq_attach;
>> dpaa_eth_eventq_detach;
>> dpaa2_eth_eventq_attach;
>> dpaa2_eth_eventq_detach;
>>
>> which is, I think, what you are after.
>>
>>> diff --git a/drivers/net/dpaa/dpaa_ethdev.h
>>> b/drivers/net/dpaa/dpaa_ethdev.h index af9fc2105d..7393a9df05 100644
>>> --- a/drivers/net/dpaa/dpaa_ethdev.h
>>> +++ b/drivers/net/dpaa/dpaa_ethdev.h
>>> @@ -160,12 +160,14 @@ struct dpaa_if_stats {
>>>  	uint64_t tund;		/**<Tx Undersized */
>>>  };
>>>
>>> +__rte_internal
>>>  int
>>>  dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
>>>  		int eth_rx_queue_id,
>>>  		u16 ch_id,
>>>  		const struct rte_event_eth_rx_adapter_queue_conf
>> *queue_conf);
>>>
>>> +__rte_internal
>>>  int
>>>  dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
>>>  			   int eth_rx_queue_id);
>>> diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map
>>> b/drivers/net/dpaa/rte_pmd_dpaa_version.map
>>> index f403a1526d..774aa0de45 100644
>>> --- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
>>> +++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
>>> @@ -1,9 +1,14 @@
>>>  DPDK_20.0 {
>>>  	global:
>>>
>>> -	dpaa_eth_eventq_attach;
>>> -	dpaa_eth_eventq_detach;
>>>  	rte_pmd_dpaa_set_tx_loopback;
>>>
>>>  	local: *;
>>>  };
>>> +
>>> +INTERNAL {
>>> +	global:
>>> +
>>> +	dpaa_eth_eventq_attach;
>>> +	dpaa_eth_eventq_detach;
>>> +};
>>>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 05/13] net/dpaa: move internal symbols into INTERNAL section
  2020-05-19 11:14  0%     ` Ray Kinsella
@ 2020-05-19 11:39  0%       ` Hemant Agrawal
  2020-05-19 11:41  0%         ` Ray Kinsella
  0 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-19 11:39 UTC (permalink / raw)
  To: Ray Kinsella, dev, david.marchand

Hi Ray,

> On 15/05/2020 10:47, Hemant Agrawal wrote:
> > This patch moves the internal symbols to INTERNAL sections so that any
> > change in them is not reported as ABI breakage.
> >
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
> >  devtools/libabigail.abignore              | 2 ++
> >  drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
> >  drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
> >  3 files changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/devtools/libabigail.abignore
> > b/devtools/libabigail.abignore index 42f9469221..7b6358c394 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -63,3 +63,5 @@
> >  	name_regexp = ^rte_dpaa_bpid_info
> >  [suppress_variable]
> >  	name_regexp = ^rte_dpaa2_bpid_info
> > +[suppress_function]
> > +        name_regexp = ^dpaa
> 
> This rule ends up being very general.
> Could we do something more specific like ...
> 
> ^dpaa_\.attach

[Hemant] I am not sure how much time I have to check and test again before David closes RC3
- meanwhile I am experimenting with these.
These are not serious issues. dpaa is always internal for the libraries. No symbol starting with the dpaa name is exposed to applications.
This change will anyway go away in 20.11.


> 
> it should catch
> 
> dpaa_eth_eventq_attach;
> dpaa_eth_eventq_detach;
> dpaa2_eth_eventq_attach;
> dpaa2_eth_eventq_detach;
> 
> which is, I think, what you are after.
> 
> > diff --git a/drivers/net/dpaa/dpaa_ethdev.h
> > b/drivers/net/dpaa/dpaa_ethdev.h index af9fc2105d..7393a9df05 100644
> > --- a/drivers/net/dpaa/dpaa_ethdev.h
> > +++ b/drivers/net/dpaa/dpaa_ethdev.h
> > @@ -160,12 +160,14 @@ struct dpaa_if_stats {
> >  	uint64_t tund;		/**<Tx Undersized */
> >  };
> >
> > +__rte_internal
> >  int
> >  dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
> >  		int eth_rx_queue_id,
> >  		u16 ch_id,
> >  		const struct rte_event_eth_rx_adapter_queue_conf
> *queue_conf);
> >
> > +__rte_internal
> >  int
> >  dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
> >  			   int eth_rx_queue_id);
> > diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map
> > b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> > index f403a1526d..774aa0de45 100644
> > --- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
> > +++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> > @@ -1,9 +1,14 @@
> >  DPDK_20.0 {
> >  	global:
> >
> > -	dpaa_eth_eventq_attach;
> > -	dpaa_eth_eventq_detach;
> >  	rte_pmd_dpaa_set_tx_loopback;
> >
> >  	local: *;
> >  };
> > +
> > +INTERNAL {
> > +	global:
> > +
> > +	dpaa_eth_eventq_attach;
> > +	dpaa_eth_eventq_detach;
> > +};
> >

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: move internal symbols into INTERNAL section
  2020-05-19 11:16  0%       ` Hemant Agrawal
@ 2020-05-19 11:30  0%         ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 11:30 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 19/05/2020 12:16, Hemant Agrawal wrote:
>  
>> On 15/05/2020 10:47, Hemant Agrawal wrote:
>>> This patch moves the internal symbols to INTERNAL sections so that any
>>> change in them is not reported as ABI breakage.
>>>
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> ---
>>>  devtools/libabigail.abignore                        | 8 ++++++++
>>>  drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 6 ++++--
>>>  drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
>>>  drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
>>>  4 files changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/devtools/libabigail.abignore
>>> b/devtools/libabigail.abignore index ab34302d0c..42f9469221 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -55,3 +55,11 @@
>>>  	file_name_regexp = ^librte_bus_fslmc\.
>>>  [suppress_file]
>>>  	file_name_regexp = ^librte_bus_dpaa\.
>>> +[suppress_function]
>>> +	name = rte_dpaa2_mbuf_alloc_bulk
>>> +[suppress_variable]
>>> +	name_regexp = ^rte_dpaa_memsegs
>>> +[suppress_variable]
>>> +	name_regexp = ^rte_dpaa_bpid_info
>>> +[suppress_variable]
>>> +	name_regexp = ^rte_dpaa2_bpid_info
>>
>> Is there a specific reason you are using name_regexp here?
>> There is only a single variable involved in each case - would "name" not work
>> equally as well?
> 
> [Hemant] I remember getting some errors in the case of variables, but now "name" is also working OK.
> So, yes, "name" will also work in this case.
> If I need to do another version of this series, I will improve it. Is that OK for you?

Yes - that is perfect, I will give it one more look over then.

> 
> 
>>
>>> diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
>>> b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
>>> index 9eebaf7ffd..89d7cf4957 100644
>>> --- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
>>> +++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
>>> @@ -1,8 +1,10 @@
>>>  DPDK_20.0 {
>>> +	local: *;
>>> +};
>>> +
>>> +INTERNAL {
>>>  	global:
>>>
>>>  	rte_dpaa_bpid_info;
>>>  	rte_dpaa_memsegs;
>>> -
>>> -	local: *;
>>>  };
>>> diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
>>> b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
>>> index fa0f2280d5..53fa1552d1 100644
>>> --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
>>> +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
>>> @@ -61,6 +61,7 @@ struct dpaa2_bp_info {
>>>
>>>  extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
>>>
>>> +__rte_internal
>>>  int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
>>>  		       void **obj_table, unsigned int count);
>>>
>>> diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
>>> b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
>>> index cd4bc88273..686b024624 100644
>>> --- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
>>> +++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
>>> @@ -1,10 +1,15 @@
>>>  DPDK_20.0 {
>>>  	global:
>>>
>>> -	rte_dpaa2_bpid_info;
>>> -	rte_dpaa2_mbuf_alloc_bulk;
>>>  	rte_dpaa2_mbuf_from_buf_addr;
>>>  	rte_dpaa2_mbuf_pool_bpid;
>>>
>>>  	local: *;
>>>  };
>>> +
>>> +INTERNAL {
>>> +	global:
>>> +
>>> +	rte_dpaa2_bpid_info;
>>> +	rte_dpaa2_mbuf_alloc_bulk;
>>> +};
>>>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 07/13] crypto: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 07/13] crypto: " Hemant Agrawal
@ 2020-05-19 11:17  0%     ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 11:17 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  drivers/crypto/dpaa2_sec/dpaa2_sec_event.h             | 5 +++--
>  drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 6 ++++--
>  drivers/crypto/dpaa_sec/dpaa_sec_event.h               | 8 ++++----
>  drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map   | 6 ++++--
>  4 files changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
> index c779d5d837..675cbbb81d 100644
> --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
> @@ -6,12 +6,13 @@
>  #ifndef _DPAA2_SEC_EVENT_H_
>  #define _DPAA2_SEC_EVENT_H_
>  
> -int
> -dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
> +__rte_internal
> +int dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
>  		int qp_id,
>  		struct dpaa2_dpcon_dev *dpcon,
>  		const struct rte_event *event);
>  
> +__rte_internal
>  int dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
>  		int qp_id);
>  
> diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> index 5952d645fd..3d863aff4d 100644
> --- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> +++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
> @@ -1,8 +1,10 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {
>  	global:
>  
>  	dpaa2_sec_eventq_attach;
>  	dpaa2_sec_eventq_detach;
> -
> -	local: *;
>  };
> diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_event.h b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
> index 8d1a018096..0b09fa8f75 100644
> --- a/drivers/crypto/dpaa_sec/dpaa_sec_event.h
> +++ b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
> @@ -6,14 +6,14 @@
>  #ifndef _DPAA_SEC_EVENT_H_
>  #define _DPAA_SEC_EVENT_H_
>  
> -int
> -dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
> +__rte_internal
> +int dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
>  		int qp_id,
>  		uint16_t ch_id,
>  		const struct rte_event *event);
>  
> -int
> -dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
> +__rte_internal
> +int dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
>  		int qp_id);
>  
>  #endif /* _DPAA_SEC_EVENT_H_ */
> diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
> index 8580fa13db..023e120516 100644
> --- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
> +++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
> @@ -1,8 +1,10 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {
>  	global:
>  
>  	dpaa_sec_eventq_attach;
>  	dpaa_sec_eventq_detach;
> -
> -	local: *;
>  };
> 

As comments on [PATCH v8 05/13]

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: move internal symbols into INTERNAL section
  2020-05-19 11:03  0%     ` Ray Kinsella
@ 2020-05-19 11:16  0%       ` Hemant Agrawal
  2020-05-19 11:30  0%         ` Ray Kinsella
  0 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-19 11:16 UTC (permalink / raw)
  To: Ray Kinsella, dev, david.marchand

 
> On 15/05/2020 10:47, Hemant Agrawal wrote:
> > This patch moves the internal symbols to INTERNAL sections so that any
> > change in them is not reported as ABI breakage.
> >
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
> >  devtools/libabigail.abignore                        | 8 ++++++++
> >  drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 6 ++++--
> >  drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
> >  drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
> >  4 files changed, 20 insertions(+), 4 deletions(-)
> >
> > diff --git a/devtools/libabigail.abignore
> > b/devtools/libabigail.abignore index ab34302d0c..42f9469221 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -55,3 +55,11 @@
> >  	file_name_regexp = ^librte_bus_fslmc\.
> >  [suppress_file]
> >  	file_name_regexp = ^librte_bus_dpaa\.
> > +[suppress_function]
> > +	name = rte_dpaa2_mbuf_alloc_bulk
> > +[suppress_variable]
> > +	name_regexp = ^rte_dpaa_memsegs
> > +[suppress_variable]
> > +	name_regexp = ^rte_dpaa_bpid_info
> > +[suppress_variable]
> > +	name_regexp = ^rte_dpaa2_bpid_info
> 
> Is there a specific reason you are using name_regexp here?
> There is only a single variable involved in each case - would "name" not work
> equally as well?

[Hemant] I remember getting some errors in the case of variables, but now "name" is also working OK.
So, yes, "name" will also work in this case.
If I need to do another version of this series, I will improve it. Is that OK for you?


> 
> > diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> > b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> > index 9eebaf7ffd..89d7cf4957 100644
> > --- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> > +++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> > @@ -1,8 +1,10 @@
> >  DPDK_20.0 {
> > +	local: *;
> > +};
> > +
> > +INTERNAL {
> >  	global:
> >
> >  	rte_dpaa_bpid_info;
> >  	rte_dpaa_memsegs;
> > -
> > -	local: *;
> >  };
> > diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> > b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> > index fa0f2280d5..53fa1552d1 100644
> > --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> > +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> > @@ -61,6 +61,7 @@ struct dpaa2_bp_info {
> >
> >  extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
> >
> > +__rte_internal
> >  int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
> >  		       void **obj_table, unsigned int count);
> >
> > diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> > b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> > index cd4bc88273..686b024624 100644
> > --- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> > +++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> > @@ -1,10 +1,15 @@
> >  DPDK_20.0 {
> >  	global:
> >
> > -	rte_dpaa2_bpid_info;
> > -	rte_dpaa2_mbuf_alloc_bulk;
> >  	rte_dpaa2_mbuf_from_buf_addr;
> >  	rte_dpaa2_mbuf_pool_bpid;
> >
> >  	local: *;
> >  };
> > +
> > +INTERNAL {
> > +	global:
> > +
> > +	rte_dpaa2_bpid_info;
> > +	rte_dpaa2_mbuf_alloc_bulk;
> > +};
> >

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 06/13] net/dpaa2: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 06/13] net/dpaa2: " Hemant Agrawal
@ 2020-05-19 11:15  0%     ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 11:15 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  drivers/net/dpaa2/dpaa2_ethdev.h            |  2 ++
>  drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +++++++-----
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
> index 2c49a7f01f..c7fb6539ff 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.h
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.h
> @@ -164,11 +164,13 @@ int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
>  
>  int dpaa2_attach_bp_list(struct dpaa2_dev_priv *priv, void *blist);
>  
> +__rte_internal
>  int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
>  		int eth_rx_queue_id,
>  		struct dpaa2_dpcon_dev *dpcon,
>  		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
>  
> +__rte_internal
>  int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
>  		int eth_rx_queue_id);
>  
> diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
> index f2bb793319..b633fdc2a8 100644
> --- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
> +++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
> @@ -1,9 +1,4 @@
>  DPDK_20.0 {
> -	global:
> -
> -	dpaa2_eth_eventq_attach;
> -	dpaa2_eth_eventq_detach;
> -
>  	local: *;
>  };
>  
> @@ -14,3 +9,10 @@ EXPERIMENTAL {
>  	rte_pmd_dpaa2_set_custom_hash;
>  	rte_pmd_dpaa2_set_timestamp;
>  };
> +
> +INTERNAL {
> +	global:
> +
> +	dpaa2_eth_eventq_attach;
> +	dpaa2_eth_eventq_detach;
> +};
> 

As comments on [PATCH v8 05/13]

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 05/13] net/dpaa: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 05/13] net/dpaa: " Hemant Agrawal
@ 2020-05-19 11:14  0%     ` Ray Kinsella
  2020-05-19 11:39  0%       ` Hemant Agrawal
  0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-19 11:14 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  devtools/libabigail.abignore              | 2 ++
>  drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
>  drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
>  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 42f9469221..7b6358c394 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -63,3 +63,5 @@
>  	name_regexp = ^rte_dpaa_bpid_info
>  [suppress_variable]
>  	name_regexp = ^rte_dpaa2_bpid_info
> +[suppress_function]
> +        name_regexp = ^dpaa

This rule ends up being very general.
Could we do something more specific like ... 

^dpaa_\.attach

it should catch

dpaa_eth_eventq_attach;
dpaa_eth_eventq_detach;
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;

which is, I think, what you are after.
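
For reference, a tighter rule along those lines could look like the
following in devtools/libabigail.abignore (the pattern below is only an
illustrative sketch, assuming libabigail's usual extended-regex
matching):

[suppress_function]
	name_regexp = ^dpaa2?_eth_eventq_(attach|detach)$

which matches exactly those four event-queue symbols rather than every
function whose name starts with dpaa.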

> diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
> index af9fc2105d..7393a9df05 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.h
> +++ b/drivers/net/dpaa/dpaa_ethdev.h
> @@ -160,12 +160,14 @@ struct dpaa_if_stats {
>  	uint64_t tund;		/**<Tx Undersized */
>  };
>  
> +__rte_internal
>  int
>  dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
>  		int eth_rx_queue_id,
>  		u16 ch_id,
>  		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
>  
> +__rte_internal
>  int
>  dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
>  			   int eth_rx_queue_id);
> diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> index f403a1526d..774aa0de45 100644
> --- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
> +++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
> @@ -1,9 +1,14 @@
>  DPDK_20.0 {
>  	global:
>  
> -	dpaa_eth_eventq_attach;
> -	dpaa_eth_eventq_detach;
>  	rte_pmd_dpaa_set_tx_loopback;
>  
>  	local: *;
>  };
> +
> +INTERNAL {
> +	global:
> +
> +	dpaa_eth_eventq_attach;
> +	dpaa_eth_eventq_detach;
> +};
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: " Hemant Agrawal
@ 2020-05-19 11:03  0%     ` Ray Kinsella
  2020-05-19 11:16  0%       ` Hemant Agrawal
  0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-19 11:03 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  devtools/libabigail.abignore                        | 8 ++++++++
>  drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 6 ++++--
>  drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
>  drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
>  4 files changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index ab34302d0c..42f9469221 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -55,3 +55,11 @@
>  	file_name_regexp = ^librte_bus_fslmc\.
>  [suppress_file]
>  	file_name_regexp = ^librte_bus_dpaa\.
> +[suppress_function]
> +	name = rte_dpaa2_mbuf_alloc_bulk
> +[suppress_variable]
> +	name_regexp = ^rte_dpaa_memsegs
> +[suppress_variable]
> +	name_regexp = ^rte_dpaa_bpid_info
> +[suppress_variable]
> +	name_regexp = ^rte_dpaa2_bpid_info

Is there a specific reason you are using name_regexp here?
There is only a single variable involved in each case - would "name" not work equally as well?
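
For a single fixed symbol, a plain name match would be, as a sketch:

[suppress_variable]
	name = rte_dpaa_memsegs

with one stanza per symbol; the regexp form is equivalent here, just
looser than needed.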

> diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> index 9eebaf7ffd..89d7cf4957 100644
> --- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> +++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
> @@ -1,8 +1,10 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {
>  	global:
>  
>  	rte_dpaa_bpid_info;
>  	rte_dpaa_memsegs;
> -
> -	local: *;
>  };
> diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> index fa0f2280d5..53fa1552d1 100644
> --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
> @@ -61,6 +61,7 @@ struct dpaa2_bp_info {
>  
>  extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
>  
> +__rte_internal
>  int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
>  		       void **obj_table, unsigned int count);
>  
> diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> index cd4bc88273..686b024624 100644
> --- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> +++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
> @@ -1,10 +1,15 @@
>  DPDK_20.0 {
>  	global:
>  
> -	rte_dpaa2_bpid_info;
> -	rte_dpaa2_mbuf_alloc_bulk;
>  	rte_dpaa2_mbuf_from_buf_addr;
>  	rte_dpaa2_mbuf_pool_bpid;
>  
>  	local: *;
>  };
> +
> +INTERNAL {
> +	global:
> +
> +	rte_dpaa2_bpid_info;
> +	rte_dpaa2_mbuf_alloc_bulk;
> +};
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 03/13] bus/dpaa: move internal symbols into INTERNAL section
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 03/13] bus/dpaa: " Hemant Agrawal
@ 2020-05-19 10:56  0%     ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 10:56 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> This patch also removes two symbols, which are not
> to be exported:
> rte_dpaa_mem_ptov - a static inline in the header file
> fman_ccsr_map_fd - a local shared variable.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  devtools/libabigail.abignore              |  2 ++
>  drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
>  drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
>  drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
>  drivers/bus/dpaa/include/fsl_usd.h        |  8 +++++-
>  drivers/bus/dpaa/include/netcfg.h         |  2 ++
>  drivers/bus/dpaa/rte_bus_dpaa_version.map |  8 +++---
>  drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
>  8 files changed, 85 insertions(+), 5 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 877c6d5be8..ab34302d0c 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -53,3 +53,5 @@
>  	file_name_regexp = ^librte_common_dpaax\.
>  [suppress_file]
>  	file_name_regexp = ^librte_bus_fslmc\.
> +[suppress_file]
> +	file_name_regexp = ^librte_bus_dpaa\.
> diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
> index f9cd972153..82da2fcfe0 100644
> --- a/drivers/bus/dpaa/include/fsl_bman.h
> +++ b/drivers/bus/dpaa/include/fsl_bman.h
> @@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
>   * the structure provided by the caller can be released or reused after the
>   * function returns.
>   */
> +__rte_internal
>  struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
>  
>  /**
>   * bman_free_pool - Deallocates a Buffer Pool object
>   * @pool: the pool object to release
>   */
> +__rte_internal
>  void bman_free_pool(struct bman_pool *pool);
>  
>  /**
> @@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
>   * The returned pointer refers to state within the pool object so must not be
>   * modified and can no longer be read once the pool object is destroyed.
>   */
> +__rte_internal
>  const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
>  
>  /**
> @@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
>   * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
>   *
>   */
> +__rte_internal
>  int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
>  		 u32 flags);
>  
> @@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
>   * The return value will be the number of buffers obtained from the pool, or a
>   * negative error code if a h/w error or pool starvation was encountered.
>   */
> +__rte_internal
>  int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
>  		 u32 flags);
>  
> @@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
>   *
>   * Return the number of the free buffers
>   */
> +__rte_internal
>  u32 bman_query_free_buffers(struct bman_pool *pool);
>  
>  /**
> diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
> index 5705ebfdce..6c87c8db0d 100644
> --- a/drivers/bus/dpaa/include/fsl_fman.h
> +++ b/drivers/bus/dpaa/include/fsl_fman.h
> @@ -7,6 +7,8 @@
>  #ifndef __FSL_FMAN_H
>  #define __FSL_FMAN_H
>  
> +#include <rte_compat.h>
> +
>  #ifdef __cplusplus
>  extern "C" {
>  #endif
> @@ -43,18 +45,23 @@ struct fm_status_t {
>  } __rte_packed;
>  
>  /* Set MAC address for a particular interface */
> +__rte_internal
>  int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
>  
>  /* Remove a MAC address for a particular interface */
> +__rte_internal
>  void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
>  
>  /* Get the FMAN statistics */
> +__rte_internal
>  void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
>  
>  /* Reset the FMAN statistics */
> +__rte_internal
>  void fman_if_stats_reset(struct fman_if *p);
>  
>  /* Get all of the FMAN statistics */
> +__rte_internal
>  void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
>  
>  /* Set ignore pause option for a specific interface */
> @@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
>  void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
>  
>  /* Enable/disable Rx promiscuous mode on specified interface */
> +__rte_internal
>  void fman_if_promiscuous_enable(struct fman_if *p);
> +__rte_internal
>  void fman_if_promiscuous_disable(struct fman_if *p);
>  
>  /* Enable/disable Rx on specific interfaces */
> +__rte_internal
>  void fman_if_enable_rx(struct fman_if *p);
> +__rte_internal
>  void fman_if_disable_rx(struct fman_if *p);
>  
>  /* Enable/disable loopback on specific interfaces */
> +__rte_internal
>  void fman_if_loopback_enable(struct fman_if *p);
> +__rte_internal
>  void fman_if_loopback_disable(struct fman_if *p);
>  
>  /* Set buffer pool on specific interface */
> +__rte_internal
>  void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
>  		    size_t bufsize);
>  
>  /* Get Flow Control threshold parameters on specific interface */
> +__rte_internal
>  int fman_if_get_fc_threshold(struct fman_if *fm_if);
>  
>  /* Enable and Set Flow Control threshold parameters on specific interface */
> +__rte_internal
>  int fman_if_set_fc_threshold(struct fman_if *fm_if,
>  			u32 high_water, u32 low_water, u32 bpid);
>  
>  /* Get Flow Control pause quanta on specific interface */
> +__rte_internal
>  int fman_if_get_fc_quanta(struct fman_if *fm_if);
>  
>  /* Set Flow Control pause quanta on specific interface */
> +__rte_internal
>  int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
>  
>  /* Set default error fqid on specific interface */
> @@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
>  int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
>  
>  /* Set IC transfer params */
> +__rte_internal
>  int fman_if_set_ic_params(struct fman_if *fm_if,
>  			  const struct fman_if_ic_params *icp);
>  
>  /* Get interface fd->offset value */
> +__rte_internal
>  int fman_if_get_fdoff(struct fman_if *fm_if);
>  
>  /* Set interface fd->offset value */
> +__rte_internal
>  void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
>  
>  /* Get interface SG enable status value */
> +__rte_internal
>  int fman_if_get_sg_enable(struct fman_if *fm_if);
>  
>  /* Set interface SG support mode */
> +__rte_internal
>  void fman_if_set_sg(struct fman_if *fm_if, int enable);
>  
>  /* Get interface Max Frame length (MTU) */
>  uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
>  
>  /* Set interface  Max Frame length (MTU) */
> +__rte_internal
>  void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
>  
>  /* Set interface next invoked action for dequeue operation */
>  void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
>  
>  /* discard error packets on rx */
> +__rte_internal
>  void fman_if_discard_rx_errors(struct fman_if *fm_if);
>  
> +__rte_internal
>  void fman_if_set_mcast_filter_table(struct fman_if *p);
>  
> +__rte_internal
>  void fman_if_reset_mcast_filter_table(struct fman_if *p);
>  
>  int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
> diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
> index 1b3342e7e6..4411bb0a79 100644
> --- a/drivers/bus/dpaa/include/fsl_qman.h
> +++ b/drivers/bus/dpaa/include/fsl_qman.h
> @@ -1314,6 +1314,7 @@ struct qman_cgr {
>  #define QMAN_CGR_MODE_FRAME          0x00000001
>  
>  #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
> +__rte_internal
>  void qman_set_fq_lookup_table(void **table);
>  #endif
>  
> @@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
>   */
>  int qman_get_portal_index(void);
>  
> +__rte_internal
>  u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
>  			void **bufs);
>  
> @@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
>   * processed via qman_poll_***() functions). Returns zero for success, or
>   * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
>   */
> +__rte_internal
>  int qman_irqsource_add(u32 bits);
>  
>  /**
> @@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
>   * takes portal (fq specific) as input rather than using the thread affined
>   * portal.
>   */
> +__rte_internal
>  int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
>  
>  /**
> @@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
>   * instead be processed via qman_poll_***() functions. Returns zero for success,
>   * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
>   */
> +__rte_internal
>  int qman_irqsource_remove(u32 bits);
>  
>  /**
> @@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
>   * takes portal (fq specific) as input rather than using the thread affined
>   * portal.
>   */
> +__rte_internal
>  int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
>  
>  /**
> @@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
>   */
>  u16 qman_affine_channel(int cpu);
>  
> +__rte_internal
>  unsigned int qman_portal_poll_rx(unsigned int poll_limit,
>  				 void **bufs, struct qman_portal *q);
>  
> @@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
>   *
>   * This function will issue a volatile dequeue command to the QMAN.
>   */
> +__rte_internal
>  int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
>  
>  /**
> @@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
>   * is issued. It will keep returning NULL until there is no packet available on
>   * the DQRR.
>   */
> +__rte_internal
>  struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
>  
>  /**
> @@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
>   * This will consume the DQRR enrey and make it available for next volatile
>   * dequeue.
>   */
> +__rte_internal
>  void qman_dqrr_consume(struct qman_fq *fq,
>  		       struct qm_dqrr_entry *dq);
>  
> @@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
>   * this function will return -EINVAL, otherwise the return value is >=0 and
>   * represents the number of DQRR entries processed.
>   */
> +__rte_internal
>  int qman_poll_dqrr(unsigned int limit);
>  
>  /**
> @@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
>   * (SDQCR). The requested pools are limited to those the portal has dequeue
>   * access to.
>   */
> +__rte_internal
>  void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
>  
>  /**
> @@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
>   * function must be called from the same CPU as that which processed the DQRR
>   * entry in the first place.
>   */
> +__rte_internal
>  void qman_dca_index(u8 index, int park_request);
>  
>  /**
> @@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
>   * a frame queue object based on that, rather than assuming/requiring that it be
>   * Out of Service.
>   */
> +__rte_internal
>  int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
>  
>  /**
> @@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
>   * qman_fq_fqid - Queries the frame queue ID of a FQ object
>   * @fq: the frame queue object to query
>   */
> +__rte_internal
>  u32 qman_fq_fqid(struct qman_fq *fq);
>  
>  /**
> @@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
>   * This captures the state, as seen by the driver, at the time the function
>   * executes.
>   */
> +__rte_internal
>  void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
>  
>  /**
> @@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
>   * context_a.address fields and will leave the stashing fields provided by the
>   * user alone, otherwise it will zero out the context_a.stashing fields.
>   */
> +__rte_internal
>  int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
>  
>  /**
> @@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
>   * caller should be prepared to accept the callback as the function is called,
>   * not only once it has returned.
>   */
> +__rte_internal
>  int qman_retire_fq(struct qman_fq *fq, u32 *flags);
>  
>  /**
> @@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
>   * The frame queue must be retired and empty, and if any order restoration list
>   * was released as ERNs at the time of retirement, they must all be consumed.
>   */
> +__rte_internal
>  int qman_oos_fq(struct qman_fq *fq);
>  
>  /**
> @@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
>   * @fq: the frame queue object to be queried
>   * @np: storage for the queried FQD fields
>   */
> +__rte_internal
>  int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
>  
>  /**
> @@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
>   * @fq: the frame queue object to be queried
>   * @frm_cnt: number of frames in the queue
>   */
> +__rte_internal
>  int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
>  
>  /**
> @@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
>   * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
>   * "flags" retrieved from qman_fq_state().
>   */
> +__rte_internal
>  int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
>  
>  /**
> @@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
>   * of an already busy hardware resource by throttling many of the to-be-dropped
>   * enqueues "at the source".
>   */
> +__rte_internal
>  int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
>  
> +__rte_internal
>  int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
>  		       int frames_to_send);
>  
> @@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
>   * This API is similar to qman_enqueue_multi(), but it takes fd which needs
>   * to be processed by different frame queues.
>   */
> +__rte_internal
>  int
>  qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
>  		      u32 *flags, int frames_to_send);
> @@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
>   * @fqid: the base FQID of the range to deallocate
>   * @count: the number of FQIDs in the range
>   */
> +__rte_internal
>  int qman_reserve_fqid_range(u32 fqid, unsigned int count);
>  static inline int qman_reserve_fqid(u32 fqid)
>  {
> @@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
>   * than requested (though alignment will be as requested). If @partial is zero,
>   * the return value will either be 'count' or negative.
>   */
> +__rte_internal
>  int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
>  static inline int qman_alloc_pool(u32 *result)
>  {
> @@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
>   * any unspecified parameters) will be used rather than a modify hw hardware
>   * (which only modifies the specified parameters).
>   */
> +__rte_internal
>  int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
>  		    struct qm_mcc_initcgr *opts);
>  
> @@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
>   * is executed. This must be executed on the same affine portal on which it was
>   * created.
>   */
> +__rte_internal
>  int qman_delete_cgr(struct qman_cgr *cgr);
>  
>  /**
> @@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
>   * unspecified parameters) will be used rather than a modify hw hardware (which
>   * only modifies the specified parameters).
>   */
> +__rte_internal
>  int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
>  		    struct qm_mcc_initcgr *opts);
>  
> @@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
>   * than requested (though alignment will be as requested). If @partial is zero,
>   * the return value will either be 'count' or negative.
>   */
> +__rte_internal
>  int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
>  static inline int qman_alloc_cgrid(u32 *result)
>  {
> @@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
>   * @id: the base CGR ID of the range to deallocate
>   * @count: the number of CGR IDs in the range
>   */
> +__rte_internal
>  void qman_release_cgrid_range(u32 id, unsigned int count);
>  static inline void qman_release_cgrid(u32 id)
>  {
> diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
> index 263d9bb976..dcf35e4adb 100644
> --- a/drivers/bus/dpaa/include/fsl_usd.h
> +++ b/drivers/bus/dpaa/include/fsl_usd.h
> @@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
>  int bman_free_raw_portal(struct dpaa_raw_portal *portal);
>  
>  /* Obtain thread-local UIO file-descriptors */
> +__rte_internal
>  int qman_thread_fd(void);
>  int bman_thread_fd(void);
>  
> @@ -66,10 +67,14 @@ int bman_thread_fd(void);
>   * processing is complete. As such, it is essential to call this before going
>   * into another blocking read/select/poll.
>   */
> +__rte_internal
>  void qman_thread_irq(void);
> +
> +__rte_internal
>  void bman_thread_irq(void);
> +__rte_internal
>  void qman_fq_portal_thread_irq(struct qman_portal *qp);
> -
> +__rte_internal
>  void qman_clear_irq(void);
>  
>  /* Global setup */
> @@ -77,6 +82,7 @@ int qman_global_init(void);
>  int bman_global_init(void);
>  
>  /* Direct portal create and destroy */
> +__rte_internal
>  struct qman_portal *fsl_qman_fq_portal_create(int *fd);
>  int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
>  int fsl_qman_fq_portal_init(struct qman_portal *qp);
> diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
> index bf7bfae8cb..d7d1befd24 100644
> --- a/drivers/bus/dpaa/include/netcfg.h
> +++ b/drivers/bus/dpaa/include/netcfg.h
> @@ -46,11 +46,13 @@ struct netcfg_interface {
>   * cfg_file: FMC config XML file
>   * Returns the configuration information in newly allocated memory.
>   */
> +__rte_internal
>  struct netcfg_info *netcfg_acquire(void);
>  
>  /* cfg_ptr: configuration information pointer.
>   * Frees the resources allocated by the configuration layer.
>   */
> +__rte_internal
>  void netcfg_release(struct netcfg_info *cfg_ptr);
>  
>  #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
> diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> index e6ca4361e0..53732289d3 100644
> --- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
> +++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
> @@ -1,4 +1,8 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {
>  	global:
>  
>  	bman_acquire;
> @@ -13,7 +17,6 @@ DPDK_20.0 {
>  	dpaa_logtype_pmd;
>  	dpaa_netcfg;
>  	dpaa_svr_family;
> -	fman_ccsr_map_fd;
>  	fman_dealloc_bufs_mask_hi;
>  	fman_dealloc_bufs_mask_lo;
>  	fman_if_add_mac_addr;
> @@ -87,10 +90,7 @@ DPDK_20.0 {
>  	qman_volatile_dequeue;
>  	rte_dpaa_driver_register;
>  	rte_dpaa_driver_unregister;
> -	rte_dpaa_mem_ptov;
>  	rte_dpaa_portal_fq_close;
>  	rte_dpaa_portal_fq_init;
>  	rte_dpaa_portal_init;
> -
> -	local: *;
>  };
> diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
> index 373aca9785..d4aee132ef 100644
> --- a/drivers/bus/dpaa/rte_dpaa_bus.h
> +++ b/drivers/bus/dpaa/rte_dpaa_bus.h
> @@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
>   *   A pointer to a rte_dpaa_driver structure describing the driver
>   *   to be registered.
>   */
> +__rte_internal
>  void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
>  
>  /**
> @@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
>   *	A pointer to a rte_dpaa_driver structure describing the driver
>   *	to be unregistered.
>   */
> +__rte_internal
>  void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
>  
>  /**
> @@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
>   * @return
>   *	0 in case of success, error otherwise
>   */
> +__rte_internal
>  int rte_dpaa_portal_init(void *arg);
>  
> +__rte_internal
>  int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
>  
> +__rte_internal
>  int rte_dpaa_portal_fq_close(struct qman_fq *fq);
>  
>  /**
> 
Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 02/13] bus/fslmc: move internal symbols into INTERNAL section
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 02/13] bus/fslmc: " Hemant Agrawal
@ 2020-05-19 10:00  0%     ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19 10:00 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> This patch also removes two symbols, which were not used
> anywhere else, i.e. rte_fslmc_vfio_dmamap & dpaa2_get_qbman_swp.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
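
As background for the INTERNAL section used throughout this series: a
simplified sketch of the mechanism, paraphrased from rte_compat.h (the
exact definition in a given release may differ). A symbol tagged this way
is still exported for other DPDK components but is excluded from the
stable ABI contract.

/* Sketch only, not the verbatim DPDK definition. */
#ifdef ALLOW_INTERNAL_API
#define __rte_internal \
	__attribute__((section(".text.internal")))
#else
#define __rte_internal \
	__attribute__((error("Symbol is not public ABI"), \
		section(".text.internal")))
#endif

/* The tagged function must also be listed under the INTERNAL version
 * node of the library's .map file, for example:
 *
 *   INTERNAL {
 *           global:
 *           dpaa2_affine_qbman_swp;
 *   };
 */
__rte_internal
int dpaa2_affine_qbman_swp(void);
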
> ---
>  devtools/libabigail.abignore                  |  2 +
>  drivers/bus/fslmc/fslmc_vfio.h                |  5 +++
>  drivers/bus/fslmc/mc/fsl_dpbp.h               |  7 ++++
>  drivers/bus/fslmc/mc/fsl_dpci.h               |  3 ++
>  drivers/bus/fslmc/mc/fsl_dpcon.h              |  2 +
>  drivers/bus/fslmc/mc/fsl_dpdmai.h             | 10 +++++
>  drivers/bus/fslmc/mc/fsl_dpio.h               | 11 +++++
>  drivers/bus/fslmc/mc/fsl_dpmng.h              |  4 ++
>  drivers/bus/fslmc/mc/fsl_mc_cmd.h             |  2 +
>  drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  5 +++
>  drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  8 ++++
>  .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  8 ++++
>  .../fslmc/qbman/include/fsl_qbman_portal.h    | 42 +++++++++++++++++++
>  drivers/bus/fslmc/rte_bus_fslmc_version.map   | 20 ++++-----
>  drivers/bus/fslmc/rte_fslmc.h                 |  4 ++
>  15 files changed, 123 insertions(+), 10 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index b1488d5549..877c6d5be8 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -51,3 +51,5 @@
>  ; Ignore moving DPAAx stable functions to INTERNAL tag
>  [suppress_file]
>  	file_name_regexp = ^librte_common_dpaax\.
> +[suppress_file]
> +	file_name_regexp = ^librte_bus_fslmc\.
> diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
> index c988121294..bc7c6f62d7 100644
> --- a/drivers/bus/fslmc/fslmc_vfio.h
> +++ b/drivers/bus/fslmc/fslmc_vfio.h
> @@ -8,6 +8,7 @@
>  #ifndef _FSLMC_VFIO_H_
>  #define _FSLMC_VFIO_H_
>  
> +#include <rte_compat.h>
>  #include <rte_vfio.h>
>  
>  /* Pathname of FSL-MC devices directory. */
> @@ -41,7 +42,11 @@ typedef struct fslmc_vfio_container {
>  } fslmc_vfio_container;
>  
>  extern char *fslmc_container;
> +
> +__rte_internal
>  int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index);
> +
> +__rte_internal
>  int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index);
>  
>  int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
> index 9d405b42c4..0d590a2647 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpbp.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
> @@ -7,6 +7,7 @@
>  #ifndef __FSL_DPBP_H
>  #define __FSL_DPBP_H
>  
> +#include <rte_compat.h>
>  /*
>   * Data Path Buffer Pool API
>   * Contains initialization APIs and runtime control APIs for DPBP
> @@ -14,6 +15,7 @@
>  
>  struct fsl_mc_io;
>  
> +__rte_internal
>  int dpbp_open(struct fsl_mc_io *mc_io,
>  	      uint32_t cmd_flags,
>  	      int dpbp_id,
> @@ -42,10 +44,12 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint32_t obj_id);
>  
> +__rte_internal
>  int dpbp_enable(struct fsl_mc_io *mc_io,
>  		uint32_t cmd_flags,
>  		uint16_t token);
>  
> +__rte_internal
>  int dpbp_disable(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint16_t token);
> @@ -55,6 +59,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
>  		    uint16_t token,
>  		    int *en);
>  
> +__rte_internal
>  int dpbp_reset(struct fsl_mc_io *mc_io,
>  	       uint32_t cmd_flags,
>  	       uint16_t token);
> @@ -70,6 +75,7 @@ struct dpbp_attr {
>  	uint16_t bpid;
>  };
>  
> +__rte_internal
>  int dpbp_get_attributes(struct fsl_mc_io *mc_io,
>  			uint32_t cmd_flags,
>  			uint16_t token,
> @@ -88,6 +94,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
>  			 uint16_t *major_ver,
>  			 uint16_t *minor_ver);
>  
> +__rte_internal
>  int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
>  			   uint32_t cmd_flags,
>  			   uint16_t token,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
> index a0ee5bfe69..81fd3438aa 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpci.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpci.h
> @@ -181,6 +181,7 @@ struct dpci_rx_queue_cfg {
>  	int order_preservation_en;
>  };
>  
> +__rte_internal
>  int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
>  		      uint32_t cmd_flags,
>  		      uint16_t token,
> @@ -228,6 +229,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
>  			 uint16_t *major_ver,
>  			 uint16_t *minor_ver);
>  
> +__rte_internal
>  int dpci_set_opr(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint16_t token,
> @@ -235,6 +237,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
>  		 uint8_t options,
>  		 struct opr_cfg *cfg);
>  
> +__rte_internal
>  int dpci_get_opr(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint16_t token,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
> index af81d51195..7caa6c68a1 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpcon.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
> @@ -20,6 +20,7 @@ struct fsl_mc_io;
>   */
>  #define DPCON_INVALID_DPIO_ID		(int)(-1)
>  
> +__rte_internal
>  int dpcon_open(struct fsl_mc_io *mc_io,
>  	       uint32_t cmd_flags,
>  	       int dpcon_id,
> @@ -77,6 +78,7 @@ struct dpcon_attr {
>  	uint8_t num_priorities;
>  };
>  
> +__rte_internal
>  int dpcon_get_attributes(struct fsl_mc_io *mc_io,
>  			 uint32_t cmd_flags,
>  			 uint16_t token,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
> index 40469cc139..19328c00a0 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
> @@ -5,6 +5,8 @@
>  #ifndef __FSL_DPDMAI_H
>  #define __FSL_DPDMAI_H
>  
> +#include <rte_compat.h>
> +
>  struct fsl_mc_io;
>  
>  /* Data Path DMA Interface API
> @@ -23,11 +25,13 @@ struct fsl_mc_io;
>   */
>  #define DPDMAI_ALL_QUEUES	(uint8_t)(-1)
>  
> +__rte_internal
>  int dpdmai_open(struct fsl_mc_io *mc_io,
>  		uint32_t cmd_flags,
>  		int dpdmai_id,
>  		uint16_t *token);
>  
> +__rte_internal
>  int dpdmai_close(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint16_t token);
> @@ -54,10 +58,12 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
>  		   uint32_t cmd_flags,
>  		   uint32_t object_id);
>  
> +__rte_internal
>  int dpdmai_enable(struct fsl_mc_io *mc_io,
>  		  uint32_t cmd_flags,
>  		  uint16_t token);
>  
> +__rte_internal
>  int dpdmai_disable(struct fsl_mc_io *mc_io,
>  		   uint32_t cmd_flags,
>  		   uint16_t token);
> @@ -82,6 +88,7 @@ struct dpdmai_attr {
>  	uint8_t num_of_queues;
>  };
>  
> +__rte_internal
>  int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
>  			  uint32_t cmd_flags,
>  			  uint16_t token,
> @@ -148,6 +155,7 @@ struct dpdmai_rx_queue_cfg {
>  
>  };
>  
> +__rte_internal
>  int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
>  			uint32_t cmd_flags,
>  			uint16_t token,
> @@ -168,6 +176,7 @@ struct dpdmai_rx_queue_attr {
>  	uint32_t fqid;
>  };
>  
> +__rte_internal
>  int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
>  			uint32_t cmd_flags,
>  			uint16_t token,
> @@ -184,6 +193,7 @@ struct dpdmai_tx_queue_attr {
>  	uint32_t fqid;
>  };
>  
> +__rte_internal
>  int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
>  			uint32_t cmd_flags,
>  			uint16_t token,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
> index 3158f53191..c2db76bdf8 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpio.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpio.h
> @@ -7,17 +7,21 @@
>  #ifndef __FSL_DPIO_H
>  #define __FSL_DPIO_H
>  
> +#include <rte_compat.h>
> +
>  /* Data Path I/O Portal API
>   * Contains initialization APIs and runtime control APIs for DPIO
>   */
>  
>  struct fsl_mc_io;
>  
> +__rte_internal
>  int dpio_open(struct fsl_mc_io *mc_io,
>  	      uint32_t cmd_flags,
>  	      int dpio_id,
>  	      uint16_t *token);
>  
> +__rte_internal
>  int dpio_close(struct fsl_mc_io *mc_io,
>  	       uint32_t cmd_flags,
>  	       uint16_t token);
> @@ -57,10 +61,12 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint32_t object_id);
>  
> +__rte_internal
>  int dpio_enable(struct fsl_mc_io *mc_io,
>  		uint32_t cmd_flags,
>  		uint16_t token);
>  
> +__rte_internal
>  int dpio_disable(struct fsl_mc_io *mc_io,
>  		 uint32_t cmd_flags,
>  		 uint16_t token);
> @@ -70,10 +76,12 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
>  		    uint16_t token,
>  		    int *en);
>  
> +__rte_internal
>  int dpio_reset(struct fsl_mc_io *mc_io,
>  	       uint32_t cmd_flags,
>  	       uint16_t token);
>  
> +__rte_internal
>  int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
>  				  uint32_t cmd_flags,
>  				  uint16_t token,
> @@ -84,12 +92,14 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
>  				  uint16_t token,
>  				  uint8_t *sdest);
>  
> +__rte_internal
>  int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
>  				    uint32_t cmd_flags,
>  				    uint16_t token,
>  				    int dpcon_id,
>  				    uint8_t *channel_index);
>  
> +__rte_internal
>  int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
>  				       uint32_t cmd_flags,
>  				       uint16_t token,
> @@ -119,6 +129,7 @@ struct dpio_attr {
>  	uint32_t clk;
>  };
>  
> +__rte_internal
>  int dpio_get_attributes(struct fsl_mc_io *mc_io,
>  			uint32_t cmd_flags,
>  			uint16_t token,
> diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
> index 36c387af27..8764ceaed9 100644
> --- a/drivers/bus/fslmc/mc/fsl_dpmng.h
> +++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
> @@ -7,6 +7,8 @@
>  #ifndef __FSL_DPMNG_H
>  #define __FSL_DPMNG_H
>  
> +#include <rte_compat.h>
> +
>  /*
>   * Management Complex General API
>   * Contains general API for the Management Complex firmware
> @@ -34,6 +36,7 @@ struct mc_version {
>  	uint32_t revision;
>  };
>  
> +__rte_internal
>  int mc_get_version(struct fsl_mc_io *mc_io,
>  		   uint32_t cmd_flags,
>  		   struct mc_version *mc_ver_info);
> @@ -48,6 +51,7 @@ struct mc_soc_version {
>  	uint32_t pvr;
>  };
>  
> +__rte_internal
>  int mc_get_soc_version(struct fsl_mc_io *mc_io,
>  		       uint32_t cmd_flags,
>  		       struct mc_soc_version *mc_platform_info);
> diff --git a/drivers/bus/fslmc/mc/fsl_mc_cmd.h b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
> index ac919610cf..7c0ca6b73a 100644
> --- a/drivers/bus/fslmc/mc/fsl_mc_cmd.h
> +++ b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
> @@ -7,6 +7,7 @@
>  #ifndef __FSL_MC_CMD_H
>  #define __FSL_MC_CMD_H
>  
> +#include <rte_compat.h>
>  #include <rte_byteorder.h>
>  #include <stdint.h>
>  
> @@ -80,6 +81,7 @@ enum mc_cmd_status {
>  
>  #define MC_CMD_HDR_FLAGS_MASK	0xFF00FF00
>  
> +__rte_internal
>  int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd);
>  
>  static inline uint64_t mc_encode_cmd_header(uint16_t cmd_id,
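
The dpbp/dpci/dpcon/dpdmai/dpio headers above all expose the same
open/enable/disable command pattern on top of mc_send_command(). A hedged
sketch of a typical bring-up, using the dpbp signatures quoted above;
CMD_PRI_LOW and the attr argument of dpbp_get_attributes() are
assumptions taken from the surrounding driver code.

#include <fsl_mc_cmd.h>
#include <fsl_dpbp.h>

static int dpbp_bring_up(struct fsl_mc_io *mc_io, int dpbp_id)
{
	struct dpbp_attr attr;
	uint16_t token;
	int ret;

	ret = dpbp_open(mc_io, CMD_PRI_LOW, dpbp_id, &token);
	if (ret)
		return ret;

	ret = dpbp_enable(mc_io, CMD_PRI_LOW, token);
	if (ret)
		return ret;

	/* Each call above is one mc_send_command() exchange with the
	 * Management Complex firmware. */
	return dpbp_get_attributes(mc_io, CMD_PRI_LOW, token, &attr);
}
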
> diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
> index 2829c93806..7c5966241a 100644
> --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
> +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
> @@ -36,20 +36,25 @@ extern uint8_t dpaa2_eqcr_size;
>  extern struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
>  
>  /* Affine a DPIO portal to current processing thread */
> +__rte_internal
>  int dpaa2_affine_qbman_swp(void);
>  
>  /* Affine additional DPIO portal to current crypto processing thread */
> +__rte_internal
>  int dpaa2_affine_qbman_ethrx_swp(void);
>  
>  /* allocate memory for FQ - dq storage */
> +__rte_internal
>  int
>  dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage);
>  
>  /* free memory for FQ- dq storage */
> +__rte_internal
>  void
>  dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage);
>  
>  /* free the enqueue response descriptors */
> +__rte_internal
>  uint32_t
>  dpaa2_free_eq_descriptors(void);
>  
> diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
> index 368fe7c688..33b191f823 100644
> --- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
> +++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
> @@ -426,11 +426,19 @@ void set_swp_active_dqs(uint16_t dpio_index, struct qbman_result *dqs)
>  {
>  	rte_global_active_dqs_list[dpio_index].global_active_dqs = dqs;
>  }
> +__rte_internal
>  struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
> +
> +__rte_internal
>  void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
> +
> +__rte_internal
>  int dpaa2_dpbp_supported(void);
>  
> +__rte_internal
>  struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
> +
> +__rte_internal
>  void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci);
>  
>  #endif
> diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
> index e010b1b6ae..f0c2f9fcb3 100644
> --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
> +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
> @@ -1,6 +1,10 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright (C) 2015 Freescale Semiconductor, Inc.
>   */
> +#ifndef _FSL_QBMAN_DEBUG_H
> +#define _FSL_QBMAN_DEBUG_H
> +
> +#include <rte_compat.h>
>  
>  struct qbman_swp;
>  
> @@ -24,7 +28,11 @@ uint8_t verb;
>  	uint8_t reserved2[29];
>  };
>  
> +__rte_internal
>  int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
>  			 struct qbman_fq_query_np_rslt *r);
> +
> +__rte_internal
>  uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
>  uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
> +#endif
> diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
> index 88f0a99686..f820077d2b 100644
> --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
> +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
> @@ -7,6 +7,7 @@
>  #ifndef _FSL_QBMAN_PORTAL_H
>  #define _FSL_QBMAN_PORTAL_H
>  
> +#include <rte_compat.h>
>  #include <fsl_qbman_base.h>
>  
>  #define SVR_LS1080A	0x87030000
> @@ -117,6 +118,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
>   * @p: the given software portal object.
>   * @mask: The value to set in SWP_ISR register.
>   */
> +__rte_internal
>  void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
>  
>  /**
> @@ -286,6 +288,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
>   * rather by specifying the index (from 0 to 15) that has been mapped to the
>   * desired channel.
>   */
> +__rte_internal
>  void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable);
>  
>  /* ------------------- */
> @@ -325,6 +328,7 @@ enum qbman_pull_type_e {
>   * default/starting state.
>   * @d: the pull dequeue descriptor to be cleared.
>   */
> +__rte_internal
>  void qbman_pull_desc_clear(struct qbman_pull_desc *d);
>  
>  /**
> @@ -340,6 +344,7 @@ void qbman_pull_desc_clear(struct qbman_pull_desc *d);
>   * the caller provides in 'storage_phys'), and 'stash' controls whether or not
>   * those writes to main-memory express a cache-warming attribute.
>   */
> +__rte_internal
>  void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
>  				 struct qbman_result *storage,
>  				 uint64_t storage_phys,
> @@ -349,6 +354,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
>   * @d: the pull dequeue descriptor to be set.
>   * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
>   */
> +__rte_internal
>  void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
>  				   uint8_t numframes);
>  /**
> @@ -372,6 +378,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
>   * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
>   * @fqid: the frame queue index of the given FQ.
>   */
> +__rte_internal
>  void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
>  
>  /**
> @@ -407,6 +414,7 @@ void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
>   * Return 0 for success, and -EBUSY if the software portal is not ready
>   * to do pull dequeue.
>   */
> +__rte_internal
>  int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
>  
>  /* -------------------------------- */
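
A short sketch of the volatile (pull) dequeue flow that the descriptor
APIs above describe. The signatures come from the quoted header; the
storage buffer, its physical address and the frame count are assumed to
be prepared by the caller.

static int pull_from_fq(struct qbman_swp *swp, uint32_t fqid,
			struct qbman_result *storage, uint64_t storage_phys)
{
	struct qbman_pull_desc pulldesc;

	qbman_pull_desc_clear(&pulldesc);
	qbman_pull_desc_set_numframes(&pulldesc, 16);	/* 1..16 frames */
	qbman_pull_desc_set_fq(&pulldesc, fqid);
	/* Results land in 'storage'; the last argument enables stashing. */
	qbman_pull_desc_set_storage(&pulldesc, storage, storage_phys, 1);

	/* -EBUSY means the portal is not ready; callers typically retry. */
	return qbman_swp_pull(swp, &pulldesc);
}
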
> @@ -421,12 +429,14 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
>   * only once, so repeated calls can return a sequence of DQRR entries, without
>   * requiring they be consumed immediately or in any particular order.
>   */
> +__rte_internal
>  const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p);
>  
>  /**
>   * qbman_swp_prefetch_dqrr_next() - prefetch the next DQRR entry.
>   * @s: the software portal object.
>   */
> +__rte_internal
>  void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
>  
>  /**
> @@ -435,6 +445,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
>   * @s: the software portal object.
>   * @dq: the DQRR entry to be consumed.
>   */
> +__rte_internal
>  void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
>  
>  /**
> @@ -442,6 +453,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
>   * @s: the software portal object.
>   * @dqrr_index: the DQRR index entry to be consumed.
>   */
> +__rte_internal
>  void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
>  
>  /**
> @@ -450,6 +462,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
>   *
>   * Return dqrr index.
>   */
> +__rte_internal
>  uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
>  
>  /**
> @@ -460,6 +473,7 @@ uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
>   *
>   * Return dqrr entry object.
>   */
> +__rte_internal
>  struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
>  
>  /* ------------------------------------------------- */
> @@ -485,6 +499,7 @@ struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
>   * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
>   * dequeue result.
>   */
> +__rte_internal
>  int qbman_result_has_new_result(struct qbman_swp *s,
>  				struct qbman_result *dq);
>  
> @@ -497,8 +512,10 @@ int qbman_result_has_new_result(struct qbman_swp *s,
>   * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
>   * dequeue result.
>   */
> +__rte_internal
>  int qbman_check_command_complete(struct qbman_result *dq);
>  
> +__rte_internal
>  int qbman_check_new_result(struct qbman_result *dq);
>  
>  /* -------------------------------------------------------- */
> @@ -624,6 +641,7 @@ int qbman_result_is_FQPN(const struct qbman_result *dq);
>   *
>   * Return the state field.
>   */
> +__rte_internal
>  uint8_t qbman_result_DQ_flags(const struct qbman_result *dq);
>  
>  /**
> @@ -658,6 +676,7 @@ static inline int qbman_result_DQ_is_pull_complete(
>   *
>   * Return seqnum.
>   */
> +__rte_internal
>  uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
>  
>  /**
> @@ -667,6 +686,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
>   *
>   * Return odpid.
>   */
> +__rte_internal
>  uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
>  
>  /**
> @@ -699,6 +719,7 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
>   *
>   * Return the frame queue context.
>   */
> +__rte_internal
>  uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
>  
>  /**
> @@ -707,6 +728,7 @@ uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
>   *
>   * Return the frame descriptor.
>   */
> +__rte_internal
>  const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
>  
>  /* State-change notifications (FQDAN/CDAN/CSCN/...). */
> @@ -717,6 +739,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
>   *
>   * Return the state in the notification.
>   */
> +__rte_internal
>  uint8_t qbman_result_SCN_state(const struct qbman_result *scn);
>  
>  /**
> @@ -850,6 +873,7 @@ struct qbman_eq_response {
>   * default/starting state.
>   * @d: the given enqueue descriptor.
>   */
> +__rte_internal
>  void qbman_eq_desc_clear(struct qbman_eq_desc *d);
>  
>  /* Exactly one of the following descriptor "actions" should be set. (Calling
> @@ -870,6 +894,7 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d);
>   * @response_success: 1 = enqueue with response always; 0 = enqueue with
>   * rejections returned on a FQ.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
>  /**
>   * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
> @@ -881,6 +906,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
>   * @incomplete: indicates whether this is the last fragment using the same
>   * sequence number.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
>  			   uint16_t opr_id, uint16_t seqnum, int incomplete);
>  
> @@ -915,6 +941,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
>   * data structure.) 'stash' controls whether or not the write to main-memory
>   * expresses a cache-warming attribute.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
>  				uint64_t storage_phys,
>  				int stash);
> @@ -929,6 +956,7 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
>   * result "storage" before issuing an enqueue, and use any non-zero 'token'
>   * value.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
>  
>  /**
> @@ -944,6 +972,7 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
>   * @d: the enqueue descriptor
>   * @fqid: the id of the frame queue to be enqueued.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
>  
>  /**
> @@ -953,6 +982,7 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
>   * @qd_bin: the queuing destination bin
>   * @qd_prio: the queuing destination priority.
>   */
> +__rte_internal
>  void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
>  			  uint16_t qd_bin, uint8_t qd_prio);
>  
> @@ -978,6 +1008,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
>   * held-active (order-preserving) FQ, whether the FQ should be parked instead of
>   * being rescheduled.)
>   */
> +__rte_internal
>  void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
>  			   uint8_t dqrr_idx, int park);
>  
> @@ -987,6 +1018,7 @@ void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
>   *
>   * Return the fd pointer.
>   */
> +__rte_internal
>  struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
>  
>  /**
> @@ -997,6 +1029,7 @@ struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
>   * This value is set into the response id before the enqueue command, which
>   * gets overwritten by qbman once the enqueue command is complete.
>   */
> +__rte_internal
>  void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
>  
>  /**
> @@ -1009,6 +1042,7 @@ void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
>   * copied into the enqueue response to determine if the command has been
>   * completed, and response has been updated.
>   */
> +__rte_internal
>  uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
>  
>  /**
> @@ -1017,6 +1051,7 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
>   *
>   * Return 0 when the command is successful.
>   */
> +__rte_internal
>  uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
>  
>  /**
> @@ -1043,6 +1078,7 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
>   *
>   * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
>   */
> +__rte_internal
>  int qbman_swp_enqueue_multiple(struct qbman_swp *s,
>  			       const struct qbman_eq_desc *d,
>  			       const struct qbman_fd *fd,
> @@ -1060,6 +1096,7 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
>   *
>   * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
>   */
> +__rte_internal
>  int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
>  				  const struct qbman_eq_desc *d,
>  				  struct qbman_fd **fd,
> @@ -1076,6 +1113,7 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
>   *
>   * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
>   */
> +__rte_internal
>  int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
>  				    const struct qbman_eq_desc *d,
>  				    const struct qbman_fd *fd,
> @@ -1117,12 +1155,14 @@ struct qbman_release_desc {
>   * default/starting state.
>   * @d: the qbman release descriptor.
>   */
> +__rte_internal
>  void qbman_release_desc_clear(struct qbman_release_desc *d);
>  
>  /**
>   * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
>   * @d: the qbman release descriptor.
>   */
> +__rte_internal
>  void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
>  
>  /**
> @@ -1141,6 +1181,7 @@ void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
>   *
>   * Return 0 for success, -EBUSY if the release command ring is not ready.
>   */
> +__rte_internal
>  int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
>  		      const uint64_t *buffers, unsigned int num_buffers);
>  
> @@ -1166,6 +1207,7 @@ int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh);
>   * Return 0 for success, or negative error code if the acquire command
>   * fails.
>   */
> +__rte_internal
>  int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
>  		      unsigned int num_buffers);
>  
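
Likewise, a hedged sketch of a burst enqueue with the descriptor APIs
above. The trailing flags/count arguments of qbman_swp_enqueue_multiple()
are truncated in the quoted hunk and written here from the upstream
header, so treat them as an assumption.

static int enqueue_burst(struct qbman_swp *swp, uint32_t fqid,
			 const struct qbman_fd *fd, int nb_frames)
{
	struct qbman_eq_desc eqdesc;

	qbman_eq_desc_clear(&eqdesc);
	qbman_eq_desc_set_no_orp(&eqdesc, 0);	/* no response on success */
	qbman_eq_desc_set_fq(&eqdesc, fqid);

	/* Returns the number of frames enqueued, or -EBUSY when the
	 * EQCR has no room; callers typically retry the remainder. */
	return qbman_swp_enqueue_multiple(swp, &eqdesc, fd, NULL, nb_frames);
}
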
> diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
> index fe45575046..1b7a5a45e9 100644
> --- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
> +++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
> @@ -1,4 +1,14 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +EXPERIMENTAL {
> +	global:
> +
> +	rte_fslmc_vfio_mem_dmamap;
> +};
> +
> +INTERNAL {
>  	global:
>  
>  	dpaa2_affine_qbman_ethrx_swp;
> @@ -11,7 +21,6 @@ DPDK_20.0 {
>  	dpaa2_free_dpbp_dev;
>  	dpaa2_free_dq_storage;
>  	dpaa2_free_eq_descriptors;
> -	dpaa2_get_qbman_swp;
>  	dpaa2_io_portal;
>  	dpaa2_svr_family;
>  	dpaa2_virt_mode;
> @@ -101,15 +110,6 @@ DPDK_20.0 {
>  	rte_fslmc_driver_unregister;
>  	rte_fslmc_get_device_count;
>  	rte_fslmc_object_register;
> -	rte_fslmc_vfio_dmamap;
>  	rte_global_active_dqs_list;
>  	rte_mcp_ptr_list;
> -
> -	local: *;
> -};
> -
> -EXPERIMENTAL {
> -	global:
> -
> -	rte_fslmc_vfio_mem_dmamap;
>  };
> diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
> index 96ba8dc259..5078b48ee1 100644
> --- a/drivers/bus/fslmc/rte_fslmc.h
> +++ b/drivers/bus/fslmc/rte_fslmc.h
> @@ -162,6 +162,7 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
>   *   A pointer to a rte_dpaa2_driver structure describing the driver
>   *   to be registered.
>   */
> +__rte_internal
>  void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
>  
>  /**
> @@ -171,6 +172,7 @@ void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
>   *   A pointer to a rte_dpaa2_driver structure describing the driver
>   *   to be unregistered.
>   */
> +__rte_internal
>  void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
>  
>  /** Helper for DPAA2 device registration from driver (eth, crypto) instance */
> @@ -189,6 +191,7 @@ RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
>   *   A pointer to a rte_dpaa_object structure describing the mc object
>   *   to be registered.
>   */
> +__rte_internal
>  void rte_fslmc_object_register(struct rte_dpaa2_object *object);
>  
>  /**
> @@ -200,6 +203,7 @@ void rte_fslmc_object_register(struct rte_dpaa2_object *object);
>   *   >=0 for count; 0 indicates either no device of the given type was
>   *   scanned or an invalid device type.
>   */
> +__rte_internal
>  uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type);
>  
>  /** Helper for DPAA2 object registration */
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
  2020-05-19  6:43  0%     ` Hemant Agrawal
@ 2020-05-19  9:51  0%     ` Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19  9:51 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand



On 15/05/2020 10:47, Hemant Agrawal wrote:
> This patch moves the internal symbols to INTERNAL sections
> so that any change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  devtools/libabigail.abignore                      |  3 +++
>  drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
>  drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
>  drivers/common/dpaax/rte_common_dpaax_version.map |  6 ++++--
>  4 files changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index c9ee73cb3c..b1488d5549 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -48,3 +48,6 @@
>          changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
>  [suppress_variable]
>          name = rte_crypto_aead_algorithm_strings
> +; Ignore moving DPAAx stable functions to INTERNAL tag
> +[suppress_file]
> +	file_name_regexp = ^librte_common_dpaax\.
> diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
> index 960b421766..38d91a1afe 100644
> --- a/drivers/common/dpaax/dpaa_of.h
> +++ b/drivers/common/dpaax/dpaa_of.h
> @@ -24,6 +24,7 @@
>  #include <limits.h>
>  #include <rte_common.h>
>  #include <dpaa_list.h>
> +#include <rte_compat.h>
>  
>  #ifndef OF_INIT_DEFAULT_PATH
>  #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
> @@ -102,6 +103,7 @@ struct dt_file {
>  	uint64_t buf[OF_FILE_BUF_MAX >> 3];
>  };
>  
> +__rte_internal
>  const struct device_node *of_find_compatible_node(
>  					const struct device_node *from,
>  					const char *type __rte_unused,
> @@ -113,32 +115,44 @@ const struct device_node *of_find_compatible_node(
>  		dev_node != NULL; \
>  		dev_node = of_find_compatible_node(dev_node, type, compatible))
>  
> +__rte_internal
>  const void *of_get_property(const struct device_node *from, const char *name,
>  			    size_t *lenp) __attribute__((nonnull(2)));
> +__rte_internal
>  bool of_device_is_available(const struct device_node *dev_node);
>  
> +
> +__rte_internal
>  const struct device_node *of_find_node_by_phandle(uint64_t ph);
>  
> +__rte_internal
>  const struct device_node *of_get_parent(const struct device_node *dev_node);
>  
> +__rte_internal
>  const struct device_node *of_get_next_child(const struct device_node *dev_node,
>  					    const struct device_node *prev);
>  
> +__rte_internal
>  const void *of_get_mac_address(const struct device_node *np);
>  
>  #define for_each_child_node(parent, child) \
>  	for (child = of_get_next_child(parent, NULL); child != NULL; \
>  			child = of_get_next_child(parent, child))
>  
> +
> +__rte_internal
>  uint32_t of_n_addr_cells(const struct device_node *dev_node);
>  uint32_t of_n_size_cells(const struct device_node *dev_node);
>  
> +__rte_internal
>  const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
>  			       uint64_t *size, uint32_t *flags);
>  
> +__rte_internal
>  uint64_t of_translate_address(const struct device_node *dev_node,
>  			      const uint32_t *addr) __attribute__((nonnull));
>  
> +__rte_internal
>  bool of_device_is_compatible(const struct device_node *dev_node,
>  			     const char *compatible);
>  
> @@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct device_node *dev_node,
>   * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
>   * The path should usually be "/proc/device-tree".
>   */
> +__rte_internal
>  int of_init_path(const char *dt_path);
>  
>  /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
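
A minimal sketch of how a driver walks the device tree with the APIs
above. The compatible string and property name are examples from the
dpaa platform; error handling is trimmed.

static int scan_portals(void)
{
	const struct device_node *np;
	size_t len;

	/* Point the layer at the device tree before any lookups. */
	if (of_init_path("/proc/device-tree") != 0)
		return -1;

	for_each_compatible_node(np, NULL, "fsl,qman-portal") {
		const void *idx = of_get_property(np, "cell-index", &len);

		if (idx == NULL || !of_device_is_available(np))
			continue;
		/* ... record the portal ... */
	}
	return 0;
}
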
> diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
> index fc3b9e7a8f..230fba8ba0 100644
> --- a/drivers/common/dpaax/dpaax_iova_table.h
> +++ b/drivers/common/dpaax/dpaax_iova_table.h
> @@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
>  #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
>  
>  /* APIs exposed */
> +__rte_internal
>  int dpaax_iova_table_populate(void);
> +__rte_internal
>  void dpaax_iova_table_depopulate(void);
> +__rte_internal
>  int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
> +__rte_internal
>  void dpaax_iova_table_dump(void);
>  
>  static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
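
For context, a sketch of how the PA->VA table above is used; the
addresses and the caller are illustrative, the signatures come from the
quoted header.

static void *map_and_translate(phys_addr_t paddr, void *vaddr, size_t len)
{
	/* Build the initial table once (typically at bus probe). */
	if (dpaax_iova_table_populate() != 0)
		return NULL;

	/* Teach the table a new PA/VA window, e.g. after memory hotplug. */
	if (dpaax_iova_table_update(paddr, vaddr, len) != 0)
		return NULL;

	/* Fast-path physical-to-virtual translation used by the PMDs. */
	return dpaax_iova_table_get_va(paddr);
}
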
> diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
> index f72eba761d..14b507ad13 100644
> --- a/drivers/common/dpaax/rte_common_dpaax_version.map
> +++ b/drivers/common/dpaax/rte_common_dpaax_version.map
> @@ -1,4 +1,8 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {

you may need to rebase.
rte_common_dpaax_version.map already has an INTERNAL section. 

>  	global:
>  
>  	dpaax_iova_table_depopulate;
> @@ -18,6 +22,4 @@ DPDK_20.0 {
>  	of_init_path;
>  	of_n_addr_cells;
>  	of_translate_address;
> -
> -	local: *;
>  };
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v1] doc: fix typos and errors in abi policy doc
  @ 2020-05-19  9:46  4%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-19  9:46 UTC (permalink / raw)
  To: Gaetan Rivet
  Cc: dev, Neil Horman, Mcnamara, John, Marko Kovacevic, Ray Kinsella

14/05/2020 08:40, Ray Kinsella:
> On 13/05/2020 11:43, Gaetan Rivet wrote:
> > Some errors in the document:
> > 
> >   * API instead of ABI once.
> > 
> > Some typos:
> > 
> >   * __rte_depreciated instead of __rte_deprecated.
> >   * missing ```` around value.
> >   * inconsistent reference to major ABI version, most
> >     of the time described without the minor appended, except once.
> > 
> > Verbosity and grammar:
> > 
> >   * Long sentences that would be better cut short.
> >   * Comma abuse.
> >   * 'May' used where 'can' seems more fitting.
> > 
> > I'm not a native speaker though, so grain of salt applies.
> > 
> > Fixes: fdf7471cccb8 ("doc: introduce major ABI versions")
> > Cc: Ray Kinsella <mdr@ashroe.eu>
> > cc: Neil Horman <nhorman@tuxdriver.com>
> > Signed-off-by: Gaetan Rivet <grive@u256.net>
> 
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Applied, thanks




^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: fix doc build failure
  2020-05-19  7:36  3% [dpdk-dev] [PATCH] doc: fix doc build failure Raslan Darawsheh
  2020-05-19  7:39  0% ` Ray Kinsella
@ 2020-05-19  8:03  0% ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2020-05-19  8:03 UTC (permalink / raw)
  To: Raslan Darawsheh; +Cc: Yigit, Ferruh, dev, Ray Kinsella

On Tue, May 19, 2020 at 9:37 AM Raslan Darawsheh <rasland@mellanox.com> wrote:
>
> doc/guides/contributing/abi_versioning.rst:416:
>  ERROR: Error in "code-block" directive:
> 1 argument(s) required, 0 supplied.
>
> .. code-block::
>
>    use_function_versioning = true
>
> Fixes: 45a4103e680d ("doc: fix default symbol binding in ABI guide")

> Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: David Marchand <david.marchand@redhat.com>

Applied, thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: fix doc build failure
  2020-05-19  7:36  3% [dpdk-dev] [PATCH] doc: fix doc build failure Raslan Darawsheh
@ 2020-05-19  7:39  0% ` Ray Kinsella
  2020-05-19  8:03  0% ` David Marchand
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19  7:39 UTC (permalink / raw)
  To: Raslan Darawsheh, ferruh.yigit; +Cc: dev

Strange - I didn't get the error, but the change makes perfect sense.
Thanks, 

On 19/05/2020 08:36, Raslan Darawsheh wrote:
> doc/guides/contributing/abi_versioning.rst:416:
>  ERROR: Error in "code-block" directive:
> 1 argument(s) required, 0 supplied.
> 
> .. code-block::
> 
>    use_function_versioning = true
> 
> Fixes: 45a4103e680d ("doc: fix default symbol binding in ABI guide")
> Cc: mdr@ashroe.eu
> Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
> ---
>  doc/guides/contributing/abi_versioning.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
> index ef881877f..f4a9273af 100644
> --- a/doc/guides/contributing/abi_versioning.rst
> +++ b/doc/guides/contributing/abi_versioning.rst
> @@ -413,7 +413,7 @@ Finally, we need to indicate to the :doc:`meson/ninja build system
>  library or driver. In the libraries or driver where we have added symbol
>  versioning, in the ``meson.build`` file we add the following
>  
> -.. code-block::
> +.. code-block:: none
>  
>     use_function_versioning = true
>  
> 

Acked-by: Ray Kinsella <mdr@ashroe.eu>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: fix doc build failure
@ 2020-05-19  7:36  3% Raslan Darawsheh
  2020-05-19  7:39  0% ` Ray Kinsella
  2020-05-19  8:03  0% ` David Marchand
  0 siblings, 2 replies; 200+ results
From: Raslan Darawsheh @ 2020-05-19  7:36 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, mdr

doc/guides/contributing/abi_versioning.rst:416:
 ERROR: Error in "code-block" directive:
1 argument(s) required, 0 supplied.

.. code-block::

   use_function_versioning = true

Fixes: 45a4103e680d ("doc: fix default symbol binding in ABI guide")
Cc: mdr@ashroe.eu
Signed-off-by: Raslan Darawsheh <rasland@mellanox.com>
---
 doc/guides/contributing/abi_versioning.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index ef881877f..f4a9273af 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -413,7 +413,7 @@ Finally, we need to indicate to the :doc:`meson/ninja build system
 library or driver. In the libraries or driver where we have added symbol
 versioning, in the ``meson.build`` file we add the following
 
-.. code-block::
+.. code-block:: none
 
    use_function_versioning = true
 
-- 
2.26.0


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section
  2020-05-19  6:43  0%     ` Hemant Agrawal
@ 2020-05-19  6:44  0%       ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-19  6:44 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand

Working on it at the moment, Hemant.

Ray K

On 19/05/2020 07:43, Hemant Agrawal wrote:
> Hi Ray,
> 	Will you please review and ack this series?
> 
> Regards,
> Hemant
> 
>> -----Original Message-----
>> From: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Sent: Friday, May 15, 2020 3:18 PM
>> To: dev@dpdk.org; david.marchand@redhat.com; mdr@ashroe.eu
>> Cc: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Subject: [PATCH v8 01/13] common/dpaax: move internal symbols into
>> INTERNAL section
>> Importance: High
>>
>> This patch moves the internal symbols to INTERNAL sections so that any
>> change in them is not reported as ABI breakage.
>>
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> ---
>>  devtools/libabigail.abignore                      |  3 +++
>>  drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
>>  drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
>>  drivers/common/dpaax/rte_common_dpaax_version.map |  6 ++++--
>>  4 files changed, 26 insertions(+), 2 deletions(-)
>>
>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index
>> c9ee73cb3c..b1488d5549 100644
>> --- a/devtools/libabigail.abignore
>> +++ b/devtools/libabigail.abignore
>> @@ -48,3 +48,6 @@
>>          changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
>> [suppress_variable]
>>          name = rte_crypto_aead_algorithm_strings
>> +; Ignore moving DPAAx stable functions to INTERNAL tag [suppress_file]
>> +	file_name_regexp = ^librte_common_dpaax\.
>> diff --git a/drivers/common/dpaax/dpaa_of.h
>> b/drivers/common/dpaax/dpaa_of.h index 960b421766..38d91a1afe 100644
>> --- a/drivers/common/dpaax/dpaa_of.h
>> +++ b/drivers/common/dpaax/dpaa_of.h
>> @@ -24,6 +24,7 @@
>>  #include <limits.h>
>>  #include <rte_common.h>
>>  #include <dpaa_list.h>
>> +#include <rte_compat.h>
>>
>>  #ifndef OF_INIT_DEFAULT_PATH
>>  #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
>> @@ -102,6 +103,7 @@ struct dt_file {
>>  	uint64_t buf[OF_FILE_BUF_MAX >> 3];
>>  };
>>
>> +__rte_internal
>>  const struct device_node *of_find_compatible_node(
>>  					const struct device_node *from,
>>  					const char *type __rte_unused,
>> @@ -113,32 +115,44 @@ const struct device_node
>> *of_find_compatible_node(
>>  		dev_node != NULL; \
>>  		dev_node = of_find_compatible_node(dev_node, type,
>> compatible))
>>
>> +__rte_internal
>>  const void *of_get_property(const struct device_node *from, const char
>> *name,
>>  			    size_t *lenp) __attribute__((nonnull(2)));
>> +__rte_internal
>>  bool of_device_is_available(const struct device_node *dev_node);
>>
>> +
>> +__rte_internal
>>  const struct device_node *of_find_node_by_phandle(uint64_t ph);
>>
>> +__rte_internal
>>  const struct device_node *of_get_parent(const struct device_node
>> *dev_node);
>>
>> +__rte_internal
>>  const struct device_node *of_get_next_child(const struct device_node
>> *dev_node,
>>  					    const struct device_node *prev);
>>
>> +__rte_internal
>>  const void *of_get_mac_address(const struct device_node *np);
>>
>>  #define for_each_child_node(parent, child) \
>>  	for (child = of_get_next_child(parent, NULL); child != NULL; \
>>  			child = of_get_next_child(parent, child))
>>
>> +
>> +__rte_internal
>>  uint32_t of_n_addr_cells(const struct device_node *dev_node);  uint32_t
>> of_n_size_cells(const struct device_node *dev_node);
>>
>> +__rte_internal
>>  const uint32_t *of_get_address(const struct device_node *dev_node, size_t
>> idx,
>>  			       uint64_t *size, uint32_t *flags);
>>
>> +__rte_internal
>>  uint64_t of_translate_address(const struct device_node *dev_node,
>>  			      const uint32_t *addr) __attribute__((nonnull));
>>
>> +__rte_internal
>>  bool of_device_is_compatible(const struct device_node *dev_node,
>>  			     const char *compatible);
>>
>> @@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct
>> device_node *dev_node,
>>   * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers,
>> etc.
>>   * The path should usually be "/proc/device-tree".
>>   */
>> +__rte_internal
>>  int of_init_path(const char *dt_path);
>>
>>  /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
>> diff --git a/drivers/common/dpaax/dpaax_iova_table.h
>> b/drivers/common/dpaax/dpaax_iova_table.h
>> index fc3b9e7a8f..230fba8ba0 100644
>> --- a/drivers/common/dpaax/dpaax_iova_table.h
>> +++ b/drivers/common/dpaax/dpaax_iova_table.h
>> @@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
>> #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
>>
>>  /* APIs exposed */
>> +__rte_internal
>>  int dpaax_iova_table_populate(void);
>> +__rte_internal
>>  void dpaax_iova_table_depopulate(void);
>> +__rte_internal
>>  int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
>> +__rte_internal
>>  void dpaax_iova_table_dump(void);
>>
>>  static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
>> diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map
>> b/drivers/common/dpaax/rte_common_dpaax_version.map
>> index f72eba761d..14b507ad13 100644
>> --- a/drivers/common/dpaax/rte_common_dpaax_version.map
>> +++ b/drivers/common/dpaax/rte_common_dpaax_version.map
>> @@ -1,4 +1,8 @@
>>  DPDK_20.0 {
>> +	local: *;
>> +};
>> +
>> +INTERNAL {
>>  	global:
>>
>>  	dpaax_iova_table_depopulate;
>> @@ -18,6 +22,4 @@ DPDK_20.0 {
>>  	of_init_path;
>>  	of_n_addr_cells;
>>  	of_translate_address;
>> -
>> -	local: *;
>>  };
>> --
>> 2.17.1
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
@ 2020-05-19  6:43  0%     ` Hemant Agrawal
  2020-05-19  6:44  0%       ` Ray Kinsella
  2020-05-19  9:51  0%     ` Ray Kinsella
  1 sibling, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-19  6:43 UTC (permalink / raw)
  To: Hemant Agrawal, dev, david.marchand, mdr

Hi Ray,
	Will you please review and ack this series?

Regards,
Hemant

> -----Original Message-----
> From: Hemant Agrawal <hemant.agrawal@nxp.com>
> Sent: Friday, May 15, 2020 3:18 PM
> To: dev@dpdk.org; david.marchand@redhat.com; mdr@ashroe.eu
> Cc: Hemant Agrawal <hemant.agrawal@nxp.com>
> Subject: [PATCH v8 01/13] common/dpaax: move internal symbols into
> INTERNAL section
> Importance: High
> 
> This patch moves the internal symbols to INTERNAL sections so that any
> change in them is not reported as ABI breakage.
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>  devtools/libabigail.abignore                      |  3 +++
>  drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
>  drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
>  drivers/common/dpaax/rte_common_dpaax_version.map |  6 ++++--
>  4 files changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index
> c9ee73cb3c..b1488d5549 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -48,3 +48,6 @@
>          changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
> [suppress_variable]
>          name = rte_crypto_aead_algorithm_strings
> +; Ignore moving DPAAx stable functions to INTERNAL tag [suppress_file]
> +	file_name_regexp = ^librte_common_dpaax\.
> diff --git a/drivers/common/dpaax/dpaa_of.h
> b/drivers/common/dpaax/dpaa_of.h index 960b421766..38d91a1afe 100644
> --- a/drivers/common/dpaax/dpaa_of.h
> +++ b/drivers/common/dpaax/dpaa_of.h
> @@ -24,6 +24,7 @@
>  #include <limits.h>
>  #include <rte_common.h>
>  #include <dpaa_list.h>
> +#include <rte_compat.h>
> 
>  #ifndef OF_INIT_DEFAULT_PATH
>  #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
> @@ -102,6 +103,7 @@ struct dt_file {
>  	uint64_t buf[OF_FILE_BUF_MAX >> 3];
>  };
> 
> +__rte_internal
>  const struct device_node *of_find_compatible_node(
>  					const struct device_node *from,
>  					const char *type __rte_unused,
> @@ -113,32 +115,44 @@ const struct device_node
> *of_find_compatible_node(
>  		dev_node != NULL; \
>  		dev_node = of_find_compatible_node(dev_node, type,
> compatible))
> 
> +__rte_internal
>  const void *of_get_property(const struct device_node *from, const char
> *name,
>  			    size_t *lenp) __attribute__((nonnull(2)));
> +__rte_internal
>  bool of_device_is_available(const struct device_node *dev_node);
> 
> +
> +__rte_internal
>  const struct device_node *of_find_node_by_phandle(uint64_t ph);
> 
> +__rte_internal
>  const struct device_node *of_get_parent(const struct device_node
> *dev_node);
> 
> +__rte_internal
>  const struct device_node *of_get_next_child(const struct device_node
> *dev_node,
>  					    const struct device_node *prev);
> 
> +__rte_internal
>  const void *of_get_mac_address(const struct device_node *np);
> 
>  #define for_each_child_node(parent, child) \
>  	for (child = of_get_next_child(parent, NULL); child != NULL; \
>  			child = of_get_next_child(parent, child))
> 
> +
> +__rte_internal
>  uint32_t of_n_addr_cells(const struct device_node *dev_node);  uint32_t
> of_n_size_cells(const struct device_node *dev_node);
> 
> +__rte_internal
>  const uint32_t *of_get_address(const struct device_node *dev_node, size_t
> idx,
>  			       uint64_t *size, uint32_t *flags);
> 
> +__rte_internal
>  uint64_t of_translate_address(const struct device_node *dev_node,
>  			      const uint32_t *addr) __attribute__((nonnull));
> 
> +__rte_internal
>  bool of_device_is_compatible(const struct device_node *dev_node,
>  			     const char *compatible);
> 
> @@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct
> device_node *dev_node,
>   * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers,
> etc.
>   * The path should usually be "/proc/device-tree".
>   */
> +__rte_internal
>  int of_init_path(const char *dt_path);
> 
>  /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
> diff --git a/drivers/common/dpaax/dpaax_iova_table.h
> b/drivers/common/dpaax/dpaax_iova_table.h
> index fc3b9e7a8f..230fba8ba0 100644
> --- a/drivers/common/dpaax/dpaax_iova_table.h
> +++ b/drivers/common/dpaax/dpaax_iova_table.h
> @@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
> #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
> 
>  /* APIs exposed */
> +__rte_internal
>  int dpaax_iova_table_populate(void);
> +__rte_internal
>  void dpaax_iova_table_depopulate(void);
> +__rte_internal
>  int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
> +__rte_internal
>  void dpaax_iova_table_dump(void);
> 
>  static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
> diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map
> b/drivers/common/dpaax/rte_common_dpaax_version.map
> index f72eba761d..14b507ad13 100644
> --- a/drivers/common/dpaax/rte_common_dpaax_version.map
> +++ b/drivers/common/dpaax/rte_common_dpaax_version.map
> @@ -1,4 +1,8 @@
>  DPDK_20.0 {
> +	local: *;
> +};
> +
> +INTERNAL {
>  	global:
> 
>  	dpaax_iova_table_depopulate;
> @@ -18,6 +22,4 @@ DPDK_20.0 {
>  	of_init_path;
>  	of_n_addr_cells;
>  	of_translate_address;
> -
> -	local: *;
>  };
> --
> 2.17.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-18 17:51  4%           ` Thomas Monjalon
@ 2020-05-18 18:32  4%             ` Ferruh Yigit
  2020-05-19 14:13  4%               ` Ray Kinsella
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-05-18 18:32 UTC (permalink / raw)
  To: Thomas Monjalon, Ray Kinsella
  Cc: dev, Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu, Neil Horman

On 5/18/2020 6:51 PM, Thomas Monjalon wrote:
> 18/05/2020 19:34, Ferruh Yigit:
>> On 5/18/2020 6:18 PM, Thomas Monjalon wrote:
>>> 16/05/2020 13:53, Neil Horman:
>>>> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>
>>>>> On v20.02 some APIs matured and symbols moved from EXPERIMENTAL to
>>>>> DPDK_20.0.1 block.
>>>>>
>>>>> This had the effect of breaking the applications that were using these
>>>>> APIs on v19.11. Although there is no modification of the APIs and the
>>>>> action is positive and matures the APIs, the effect can be negative to
>>>>> applications.
>>>>>
>>>>> When a maintainer is promoting an API to become part of the next major
>>>>> ABI version by removing the experimental tag, the maintainer may
>>>>> choose to offer an alias to the experimental tag, to prevent these
>>>>> breakages in future.
>>>>>
>>>>> The following changes are made to enable aliasing:
>>>>>
>>>>> Updated the abi policy and abi versioning documents.
>>>>>
>>>>> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
>>>>>
>>>>> Updated the 'check-symbols.sh' buildtool, which was complaining that the
>>>>> symbol is in EXPERIMENTAL tag in .map file but it is not in the
>>>>> .experimental section (__rte_experimental tag is missing).
>>>>> Updated the tool so it won't complain if the symbol in the
>>>>> EXPERIMENTAL tag is duplicated in some other block in the .map file (versioned).
>>>>>
>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>>>>>
>>>> Acked-by: Neil Horman <nhorman@tuxdriver.com>
>>>
>>> Applied with few typos fixed, thanks.
>>>
>>
>> Is a new version of the meter library required?
> 
> I think yes, Cristian is asking for some changes.
> 

done: https://patches.dpdk.org/patch/70399/


^ permalink raw reply	[relevance 4%]
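
The v6 patch above adds VERSION_SYMBOL_EXPERIMENTAL, and the meter patch
below uses it together with BIND_DEFAULT_SYMBOL. A hedged sketch of the
resulting pattern; the function name is hypothetical and the .symver
strings are paraphrased from rte_function_versioning.h.

#include <rte_function_versioning.h>

int rte_foo_v21(void);	/* the real implementation, name illustrative */

/* Emits ".symver rte_foo_v21, rte_foo@@DPDK_21": the default binding
 * that newly built applications link against. */
BIND_DEFAULT_SYMBOL(rte_foo, _v21, 21);

/* Emits ".symver rte_foo_v21, rte_foo@EXPERIMENTAL": keeps binaries
 * built against the v19.11 experimental symbol resolving at runtime.
 * The symbol must also appear under both the EXPERIMENTAL and DPDK_21
 * nodes of the library's .map file, and the library must set
 * use_function_versioning = true in its meson.build. */
VERSION_SYMBOL_EXPERIMENTAL(rte_foo, _v21);
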

* [dpdk-dev] [PATCH v5] meter: provide experimental alias of API for old apps
      2020-05-14 16:11  4% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
@ 2020-05-18 18:30  2% ` Ferruh Yigit
  2020-05-19 12:16 10% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
  3 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-05-18 18:30 UTC (permalink / raw)
  To: Cristian Dumitrescu, Ray Kinsella, Neil Horman, Eelco Chaudron
  Cc: dev, Ferruh Yigit, Thomas Monjalon, David Marchand, stable,
	Luca Boccassi, Bruce Richardson, Ian Stokes, Andrzej Ostruszka

In v20.02 some meter APIs matured and their symbols moved from the
EXPERIMENTAL to the DPDK_20.0.1 block.

This can break applications that were using these APIs on v19.11.
Although there is no modification of the APIs and the action is positive
and matures the APIs, the effect can be negative for applications.

This patch provides aliasing by duplicating the existing and versioned
symbols as experimental.

Since the symbols moved from the DPDK_20.0.1 to the DPDK_21 block in
v20.05, the aliasing is done between EXPERIMENTAL and DPDK_21.

With the DPDK_21 ABI (DPDK v20.11) all aliasing will be removed and only
the stable versions of the APIs will remain.

Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Luca Boccassi <bluca@debian.org>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ian Stokes <ian.stokes@intel.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Andrzej Ostruszka <amo@semihalf.com>
Cc: Ray Kinsella <mdr@ashroe.eu>
Cc: cristian.dumitrescu@intel.com

v2:
* Commit log updated

v3:
* added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro

v4:
* update script name in commit log, remove empty line

v5:
* Patch has only meter library changes
* Aliasing moved into rte_meter_compat.c
---
 lib/librte_meter/Makefile              |  2 +-
 lib/librte_meter/meson.build           |  3 +-
 lib/librte_meter/rte_meter.c           |  5 +--
 lib/librte_meter/rte_meter_compat.c    | 47 ++++++++++++++++++++++++++
 lib/librte_meter/rte_meter_compat.h    | 26 ++++++++++++++
 lib/librte_meter/rte_meter_version.map |  7 ++++
 6 files changed, 86 insertions(+), 4 deletions(-)
 create mode 100644 lib/librte_meter/rte_meter_compat.c
 create mode 100644 lib/librte_meter/rte_meter_compat.h

diff --git a/lib/librte_meter/Makefile b/lib/librte_meter/Makefile
index 48366e82b0..e2f59fee7c 100644
--- a/lib/librte_meter/Makefile
+++ b/lib/librte_meter/Makefile
@@ -19,7 +19,7 @@ EXPORT_MAP := rte_meter_version.map
 #
 # all source are stored in SRCS-y
 #
-SRCS-$(CONFIG_RTE_LIBRTE_METER) := rte_meter.c
+SRCS-$(CONFIG_RTE_LIBRTE_METER) := rte_meter.c rte_meter_compat.c
 
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_METER)-include := rte_meter.h
diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
index 646fd4d43f..fdc97dc4c9 100644
--- a/lib/librte_meter/meson.build
+++ b/lib/librte_meter/meson.build
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-sources = files('rte_meter.c')
+sources = files('rte_meter.c', 'rte_meter_compat.c')
 headers = files('rte_meter.h')
+use_function_versioning = true
diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
index da01429a8b..b5378f615e 100644
--- a/lib/librte_meter/rte_meter.c
+++ b/lib/librte_meter/rte_meter.c
@@ -11,6 +11,7 @@
 #include <rte_cycles.h>
 
 #include "rte_meter.h"
+#include "rte_meter_compat.h"
 
 #ifndef RTE_METER_TB_PERIOD_MIN
 #define RTE_METER_TB_PERIOD_MIN      100
@@ -120,7 +121,7 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 }
 
 int
-rte_meter_trtcm_rfc4115_profile_config(
+rte_meter_trtcm_rfc4115_profile_config_(
 	struct rte_meter_trtcm_rfc4115_profile *p,
 	struct rte_meter_trtcm_rfc4115_params *params)
 {
@@ -145,7 +146,7 @@ rte_meter_trtcm_rfc4115_profile_config(
 }
 
 int
-rte_meter_trtcm_rfc4115_config(
+rte_meter_trtcm_rfc4115_config_(
 	struct rte_meter_trtcm_rfc4115 *m,
 	struct rte_meter_trtcm_rfc4115_profile *p)
 {
diff --git a/lib/librte_meter/rte_meter_compat.c b/lib/librte_meter/rte_meter_compat.c
new file mode 100644
index 0000000000..ab04b9c244
--- /dev/null
+++ b/lib/librte_meter/rte_meter_compat.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+#include <rte_function_versioning.h>
+
+#include "rte_meter.h"
+#include "rte_meter_compat.h"
+
+int
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
+		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
+
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
+
+
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return rte_meter_trtcm_rfc4115_config_(m, p);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
+		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
+
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return rte_meter_trtcm_rfc4115_config_(m, p);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
diff --git a/lib/librte_meter/rte_meter_compat.h b/lib/librte_meter/rte_meter_compat.h
new file mode 100644
index 0000000000..63c282b015
--- /dev/null
+++ b/lib/librte_meter/rte_meter_compat.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+
+int
+rte_meter_trtcm_rfc4115_profile_config_(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_(
+	struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 2c7dadbcac..3fef20366a 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -21,3 +21,10 @@ DPDK_21 {
 	rte_meter_trtcm_rfc4115_config;
 	rte_meter_trtcm_rfc4115_profile_config;
 } DPDK_20.0;
+
+EXPERIMENTAL {
+       global:
+
+	rte_meter_trtcm_rfc4115_config;
+	rte_meter_trtcm_rfc4115_profile_config;
+};
-- 
2.25.4
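
For context, here is a minimal sketch of application code exercising the
aliased RFC4115 trTCM API above. This is not part of the patch; the field
names of struct rte_meter_trtcm_rfc4115_params (cir, eir, cbs, ebs) are
assumed here for illustration:

	#include <rte_meter.h>

	static int
	setup_meter(struct rte_meter_trtcm_rfc4115 *m,
		struct rte_meter_trtcm_rfc4115_profile *p)
	{
		struct rte_meter_trtcm_rfc4115_params params = {
			.cir = 1000000,	/* committed information rate [bytes/s] */
			.eir = 1000000,	/* excess information rate [bytes/s] */
			.cbs = 2048,	/* committed burst size [bytes] */
			.ebs = 2048,	/* excess burst size [bytes] */
		};

		/* A binary built against v19.11 resolves these calls to the
		 * @EXPERIMENTAL versions; one built against v20.05 resolves
		 * them to @DPDK_21. The source code is unchanged either way.
		 */
		if (rte_meter_trtcm_rfc4115_profile_config(p, &params) != 0)
			return -1;
		return rte_meter_trtcm_rfc4115_config(m, p);
	}

Both version tags of each symbol point at the same underlying
implementation, so the behavior is identical; only the version a binary
was linked against differs.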


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-18 17:34  4%         ` Ferruh Yigit
@ 2020-05-18 17:51  4%           ` Thomas Monjalon
  2020-05-18 18:32  4%             ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-18 17:51 UTC (permalink / raw)
  To: Ray Kinsella, Ferruh Yigit
  Cc: dev, Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu, Neil Horman

18/05/2020 19:34, Ferruh Yigit:
> On 5/18/2020 6:18 PM, Thomas Monjalon wrote:
> > 16/05/2020 13:53, Neil Horman:
> >> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
> >>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>
> >>> In v20.02 some APIs matured and symbols moved from the EXPERIMENTAL
> >>> to the DPDK_20.0.1 block.
> >>>
> >>> This had the effect of breaking the applications that were using these
> >>> APIs on v19.11. Although there is no modification of the APIs and the
> >>> action is positive and matures the APIs, the effect can be negative for
> >>> applications.
> >>>
> >>> When a maintainer is promoting an API to become part of the next major
> >>> ABI version by removing the experimental tag, the maintainer may
> >>> choose to offer an alias to the experimental tag, to prevent these
> >>> breakages in the future.
> >>>
> >>> The following changes are made to enable aliasing:
> >>>
> >>> Updated the ABI policy and ABI versioning documents.
> >>>
> >>> Created the VERSION_SYMBOL_EXPERIMENTAL helper macro.
> >>>
> >>> Updated the 'check-symbols.sh' buildtool, which was complaining that a
> >>> symbol is in the EXPERIMENTAL tag in the .map file but not in the
> >>> .experimental section (__rte_experimental tag missing).
> >>> Updated the tool so it won't complain if a symbol in the EXPERIMENTAL
> >>> tag is duplicated in some other (versioned) block of the .map file.
> >>>
> >>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> >>>
> >> Acked-by: Neil Horman <nhorman@tuxdriver.com>
> > 
> > Applied with few typos fixed, thanks.
> > 
> 
> Is a new version of the meter library required?

I think yes, Cristian is asking for some changes.



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-18 17:18  4%       ` Thomas Monjalon
@ 2020-05-18 17:34  4%         ` Ferruh Yigit
  2020-05-18 17:51  4%           ` Thomas Monjalon
  2020-05-19 14:14  4%         ` Ray Kinsella
  1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2020-05-18 17:34 UTC (permalink / raw)
  To: Thomas Monjalon, Ray Kinsella
  Cc: dev, Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu, Neil Horman

On 5/18/2020 6:18 PM, Thomas Monjalon wrote:
> 16/05/2020 13:53, Neil Horman:
>> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>
>>> In v20.02 some APIs matured and symbols moved from the EXPERIMENTAL
>>> to the DPDK_20.0.1 block.
>>>
>>> This had the effect of breaking the applications that were using these
>>> APIs on v19.11. Although there is no modification of the APIs and the
>>> action is positive and matures the APIs, the effect can be negative for
>>> applications.
>>>
>>> When a maintainer is promoting an API to become part of the next major
>>> ABI version by removing the experimental tag, the maintainer may
>>> choose to offer an alias to the experimental tag, to prevent these
>>> breakages in the future.
>>>
>>> The following changes are made to enable aliasing:
>>>
>>> Updated the ABI policy and ABI versioning documents.
>>>
>>> Created the VERSION_SYMBOL_EXPERIMENTAL helper macro.
>>>
>>> Updated the 'check-symbols.sh' buildtool, which was complaining that a
>>> symbol is in the EXPERIMENTAL tag in the .map file but not in the
>>> .experimental section (__rte_experimental tag missing).
>>> Updated the tool so it won't complain if a symbol in the EXPERIMENTAL
>>> tag is duplicated in some other (versioned) block of the .map file.
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>>>
>> Acked-by: Neil Horman <nhorman@tuxdriver.com>
> 
> Applied with few typos fixed, thanks.
> 

Is a new version of the meter library required?


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-16 11:53  4%     ` Neil Horman
@ 2020-05-18 17:18  4%       ` Thomas Monjalon
  2020-05-18 17:34  4%         ` Ferruh Yigit
  2020-05-19 14:14  4%         ` Ray Kinsella
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2020-05-18 17:18 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: dev, Ferruh Yigit, Luca Boccassi, David Marchand,
	Bruce Richardson, Ian Stokes, Eelco Chaudron, Andrzej Ostruszka,
	Kevin Traynor, John McNamara, Marko Kovacevic,
	Cristian Dumitrescu, Neil Horman

16/05/2020 13:53, Neil Horman:
> On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
> > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > 
> > In v20.02 some APIs matured and symbols moved from the EXPERIMENTAL
> > to the DPDK_20.0.1 block.
> > 
> > This had the effect of breaking the applications that were using these
> > APIs on v19.11. Although there is no modification of the APIs and the
> > action is positive and matures the APIs, the effect can be negative for
> > applications.
> > 
> > When a maintainer is promoting an API to become part of the next major
> > ABI version by removing the experimental tag, the maintainer may
> > choose to offer an alias to the experimental tag, to prevent these
> > breakages in the future.
> > 
> > The following changes are made to enable aliasing:
> > 
> > Updated the ABI policy and ABI versioning documents.
> > 
> > Created the VERSION_SYMBOL_EXPERIMENTAL helper macro.
> > 
> > Updated the 'check-symbols.sh' buildtool, which was complaining that a
> > symbol is in the EXPERIMENTAL tag in the .map file but not in the
> > .experimental section (__rte_experimental tag missing).
> > Updated the tool so it won't complain if a symbol in the EXPERIMENTAL
> > tag is duplicated in some other (versioned) block of the .map file.
> > 
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> > 
> Acked-by: Neil Horman <nhorman@tuxdriver.com>

Applied with few typos fixed, thanks.
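
For readers following the thread: the helper created by this patch is a thin
wrapper around the linker's .symver directive. Below is a minimal sketch of
its use in a library source file, with a hypothetical rte_foo symbol (the
real macro definition is visible in the v4 meter patch quoted later in this
thread):

	#include <rte_function_versioning.h>

	/* Implementation carrying the experimental alias. */
	int rte_foo_e(int x);
	int
	rte_foo_e(int x)
	{
		return x;
	}
	/* Expands to: __asm__(".symver rte_foo_e, rte_foo@EXPERIMENTAL") */
	VERSION_SYMBOL_EXPERIMENTAL(rte_foo, _e);

The aliased symbol must also be listed in an EXPERIMENTAL block of the
library's .map file; that duplication is exactly the case the updated
check-symbols.sh now tolerates when the same symbol is additionally
versioned in a stable block.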




^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3] doc: fix references to bind_default_symbol
  @ 2020-05-18 16:54  3%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-18 16:54 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: dev, arkadiuszx.kusztal, bruce.richardson, Ray Kinsella,
	Neil Horman, John McNamara, Marko Kovacevic

06/05/2020 17:41, Ray Kinsella:
> The document abi_versioning.rst incorrectly instructs the developer to
> add BIND_DEFAULT_SYMBOL to the public header, not the source file. This
> commit fixes the issue and adds some clarifications.
> 
> The commit also clarifies the use of use_function_versioning in the
> meson/ninja build system, and does some minor re-organization of the
> document.
> 
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> v3:
>  * added a note to ask contributors to ensure they review the meson/ninja
>    section when adding "rte_function_versioning.h".

Fixes: f1ef9794f9bd ("doc: add ABI guidelines")
Cc: stable@dpdk.org

Applied with few typos fixed, thanks.
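
To illustrate the corrected guidance: the versioning macros accompany the
implementations in the .c file, while the public header only declares the
function. A minimal sketch with a hypothetical rte_foo symbol and invented
version numbers follows (the corresponding .map file entries and, under
meson/ninja, use_function_versioning = true are also required):

	/* foo.c -- not the public header */
	#include <rte_function_versioning.h>

	/* Old implementation, kept for binaries linked against DPDK_20.0. */
	int __vsym rte_foo_v20(int x);
	int __vsym
	rte_foo_v20(int x)
	{
		return x;
	}
	VERSION_SYMBOL(rte_foo, _v20, 20.0);

	/* New implementation, the default for newly linked binaries. */
	int __vsym rte_foo_v21(int x, int y);
	int __vsym
	rte_foo_v21(int x, int y)
	{
		return x + y;
	}
	BIND_DEFAULT_SYMBOL(rte_foo, _v21, 21);
	MAP_STATIC_SYMBOL(int rte_foo(int x, int y), rte_foo_v21);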



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2] abi: document reasons behind the three part versioning
  @ 2020-05-18 16:20  4%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-18 16:20 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: dev, stable, Bruce Richardson, Neil Horman, John McNamara,
	Marko Kovacevic

05/05/2020 10:56, Ray Kinsella:
> Clarify the reasons behind the three-part version numbering scheme.
> Document the fixes made in f26c2b3.
> 
> Fixes: f26c2b39b271 ("build: fix soname info for 19.11 compatibility")
> 
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> v2:
> * Added "fixes" to commit message.
> 
>  doc/guides/contributing/abi_policy.rst |  3 ++-
>  doc/guides/rel_notes/release_20_05.rst | 12 ++++++++++++

Moved to release_20_02.rst and applied, thanks.
Note: the updated release notes will be published as part of 20.05.



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18 12:13  3%               ` Thomas Monjalon
@ 2020-05-18 13:06  0%                 ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-18 13:06 UTC (permalink / raw)
  To: Thomas Monjalon, Yigit, Ferruh, Dumitrescu, Cristian
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka



On 18/05/2020 13:13, Thomas Monjalon wrote:
> And I think, because of this goal, you will try to maintain ABI compat
> of *ALL* experimental symbols maturing into stable symbols.

I think that is a fair point; what we will ultimately need is a way to filter
test cases that touch experimental APIs out of the unit test framework.

That doesn't exist for v19.11, nor can we retrospectively invent it.
We should look at that for v20.11, as a better solution, to avoid supporting
all "experimental symbols".

That said, I think having an alias-to-experimental tool in our toolbox
stands on its own merit.

Ray K

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18 11:48  4%             ` Ray Kinsella
@ 2020-05-18 12:13  3%               ` Thomas Monjalon
  2020-05-18 13:06  0%                 ` Ray Kinsella
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-18 12:13 UTC (permalink / raw)
  To: Yigit, Ferruh, Dumitrescu, Cristian, Ray Kinsella
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka

18/05/2020 13:48, Ray Kinsella:
> On 18/05/2020 11:46, Thomas Monjalon wrote:
> > 18/05/2020 11:30, Ray Kinsella:
> >> On 18/05/2020 10:22, Thomas Monjalon wrote:
> >>> 18/05/2020 08:29, Ray Kinsella:
> >>>> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
> >>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> >>>>>>
> >>>>>> In v20.02 some meter APIs matured and their symbols moved from the
> >>>>>> EXPERIMENTAL to the DPDK_20.0.1 block.
> >>>>>>
> >>>>>> This can break the applications that were using these APIs on
> >>>>>> v19.11. Although there is no modification of the APIs and the action is
> >>>>>> positive and matures the APIs, the effect can be negative for
> >>>>>> applications.
> >>>>>>
> >>>>>> Since experimental APIs can change or go away without notice as part of
> >>>>>> the contract, to prevent this negative effect that may occur when
> >>>>>> maturing an experimental API, a process update has already been
> >>>>>> suggested, which enables aliasing without forcing it:
> >>>>>> https://patches.dpdk.org/patch/65863/
> >>>>>>
> >>>>>
> >>>>> Personally, I am not convinced this is really needed.
> >>>>>
> >>>>> Are there any users asking for this?
> >>>>
> >>>> As it happens, it is breaking our ABI regression test suite.
> >>>> One of the things we do is to run the unit test binary from v19.11 against the latest release.
> >>>>
> >>>>> Is there any other library where this is also applied, or is librte_meter the only library?
> >>>>
> >>>> librte_meter is the only example AFAIK.
> >>>> But then we have only one example of needing symbol versioning at the moment (Cryptodev).
> >>>>
> >>>> This is going to happen with experimental symbols that have been around a while
> >>>> and have become used in applications. It is a non-mandatory tool a maintainer
> >>>> can use to preserve ABI compatibility.
> >>>
> >>> If you want to maintain ABI compatibility of experimental symbols,
> >>> it IS a mandatory tool.
> >>> You cannot enforce your "ABI regression test suite" and at the same time
> >>> say it is "non-mandatory".
> >>>
> >>> The real question here is to know whether we want to maintain compatibility
> >>> of experimental symbols. We said no. Then we said we can.
> >>> The main concern is the message clarity in my opinion.
> >>>
> >>
> >> There is complete clarity: there is no obligation.
> >> Our lack of obligation around experimental is upfront in the policy:
> >>
> >> "Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."
> >>
> >> Later we give the _option_, without obligation, to add an alias to experimental. Please see the v6:
> >>
> >> +   - In situations in which an ``experimental`` symbol has been stable for some
> >> +     time. When promoting the symbol to become part of the next ABI version, the
> >> +     maintainer may choose to provide an alias to the ``experimental`` tag, so
> >> +     as not to break consuming applications.
> >>
> >> So it is something a maintainer _may_ choose to do.
> >> I use the word "may", not "will", as there are no obligations associated with experimental.
> > 
> > 
> > OK Ray, this is my understanding as well.
> > 
> > The only difficult part to understand is when claiming
> > "it is all breaking our abi regression test suite"
> > to justify the choice.
> 
> The justification is the same as for any other consumer of DPDK saying you broke my app.
> 
> > As the maintainer (Cristian) says he does not like this change,
> > it means the regression test suite should skip this case, right?
> 
> So the regression tests run the v19.11 unit tests against the v20.05 RC.
> My thought was that this would provide reasonably good coverage of the ABI, to catch the more subtle regressions:
> those regressions that affect the behavior of the ABI (the contract), instead of the ABI itself.

I understand the goal.
And I think, because of this goal, you will try to maintain ABI compat
of *ALL* experimental symbols maturing into stable symbols.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18 11:18  0%             ` Dumitrescu, Cristian
@ 2020-05-18 11:49  0%               ` Ray Kinsella
  0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-18 11:49 UTC (permalink / raw)
  To: Dumitrescu, Cristian, Thomas Monjalon, Yigit, Ferruh
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka



On 18/05/2020 12:18, Dumitrescu, Cristian wrote:
> 
> 
>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Monday, May 18, 2020 11:46 AM
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Ray Kinsella <mdr@ashroe.eu>;
>> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
>> Cc: Neil Horman <nhorman@tuxdriver.com>; Eelco Chaudron
>> <echaudro@redhat.com>; dev@dpdk.org; David Marchand
>> <david.marchand@redhat.com>; stable@dpdk.org; Luca Boccassi
>> <bluca@debian.org>; Richardson, Bruce <bruce.richardson@intel.com>;
>> Stokes, Ian <ian.stokes@intel.com>; Andrzej Ostruszka
>> <amo@semihalf.com>
>> Subject: Re: [PATCH v4] meter: provide experimental alias of API for old apps
>>
>> 18/05/2020 11:30, Ray Kinsella:
>>> On 18/05/2020 10:22, Thomas Monjalon wrote:
>>>> 18/05/2020 08:29, Ray Kinsella:
>>>>> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
>>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>>
>>>>>>> In v20.02 some meter APIs matured and their symbols moved from
>>>>>>> the EXPERIMENTAL to the DPDK_20.0.1 block.
>>>>>>>
>>>>>>> This can break the applications that were using these APIs on
>>>>>>> v19.11. Although there is no modification of the APIs and the action is
>>>>>>> positive and matures the APIs, the effect can be negative for
>>>>>>> applications.
>>>>>>>
>>>>>>> Since experimental APIs can change or go away without notice as part
>>>>>>> of the contract, to prevent this negative effect that may occur when
>>>>>>> maturing an experimental API, a process update has already been
>>>>>>> suggested, which enables aliasing without forcing it:
>>>>>>> https://patches.dpdk.org/patch/65863/
>>>>>>>
>>>>>>
>>>>>> Personally, I am not convinced this is really needed.
>>>>>>
>>>>>> Are there any users asking for this?
>>>>>
>>>>> As it happens, it is breaking our ABI regression test suite.
>>>>> One of the things we do is to run the unit test binary from v19.11
>>>>> against the latest release.
>>>>>
>>>>>> Is there any other library where this is also applied, or is librte_meter
>>>>>> the only library?
>>>>>
>>>>> librte_meter is the only example AFAIK.
>>>>> But then we have only one example of needing symbol versioning
>>>>> at the moment (Cryptodev).
>>>>>
>>>>> This is going to happen with experimental symbols that have been
>>>>> around a while and have become used in applications. It is a
>>>>> non-mandatory tool a maintainer can use to preserve ABI compatibility.
>>>>
>>>> If you want to maintain ABI compatibility of experimental symbols,
>>>> it IS a mandatory tool.
>>>> You cannot enforce your "ABI regression test suite" and at the same time
>>>> say it is "non-mandatory".
>>>>
>>>> The real question here is to know whether we want to maintain
>> compatibility
>>>> of experimental symbols. We said no. Then we said we can.
>>>> The main concern is the message clarity in my opinion.
>>>>
>>>
>>> There is complete clarity: there is no obligation.
>>> Our lack of obligation around experimental is upfront in the policy:
>>>
>>> "Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."
>>>
>>> Later we give the _option_, without obligation, to add an alias to
>>> experimental. Please see the v6:
>>>
>>> +   - In situations in which an ``experimental`` symbol has been stable for some
>>> +     time. When promoting the symbol to become part of the next ABI version, the
>>> +     maintainer may choose to provide an alias to the ``experimental`` tag, so
>>> +     as not to break consuming applications.
>>>
>>> So it is something a maintainer _may_ choose to do.
>>> I use the word "may", not "will", as there are no obligations associated
>>> with experimental.
>>
>>
>> OK Ray, this is my understanding as well.
>>
>> The only difficult part to understand is when claiming
>> "it is all breaking our abi regression test suite"
>> to justify the choice.
>> As the maintainer (Cristian) says he does not like this change,
>> it means the regression test suite should skip this case, right?
>>
> 
> I am yet to be convinced of the value of this, but if some people think it is useful, I am willing to compromise. This is subject to this code being temporary, to be removed for the 20.11 release, which Ray already confirmed.
> 
> Ray, a few more suggestions, are you OK with them?
> 1. Move this code to a separate file in the library (suggest rte_meter_abi_compat.c as the file name)
> 2. Clearly state in the patch description this is temporary code to be removed for 20.11 release.
> 3. Agree that you or Ferruh take the AR to send a patch prior to the 20.11 release to remove this code.
> 
> Thanks,
> Cristian

Hi Cristian - I am good with all of the above.

Ray K


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18 10:46  3%           ` Thomas Monjalon
  2020-05-18 11:18  0%             ` Dumitrescu, Cristian
@ 2020-05-18 11:48  4%             ` Ray Kinsella
  2020-05-18 12:13  3%               ` Thomas Monjalon
  1 sibling, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-18 11:48 UTC (permalink / raw)
  To: Thomas Monjalon, Yigit, Ferruh, Dumitrescu, Cristian
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka



On 18/05/2020 11:46, Thomas Monjalon wrote:
> 18/05/2020 11:30, Ray Kinsella:
>> On 18/05/2020 10:22, Thomas Monjalon wrote:
>>> 18/05/2020 08:29, Ray Kinsella:
>>>> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
>>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>>>
>>>>>> In v20.02 some meter APIs matured and their symbols moved from the
>>>>>> EXPERIMENTAL to the DPDK_20.0.1 block.
>>>>>>
>>>>>> This can break the applications that were using these APIs on
>>>>>> v19.11. Although there is no modification of the APIs and the action is
>>>>>> positive and matures the APIs, the effect can be negative for
>>>>>> applications.
>>>>>>
>>>>>> Since experimental APIs can change or go away without notice as part of
>>>>>> the contract, to prevent this negative effect that may occur when
>>>>>> maturing an experimental API, a process update has already been
>>>>>> suggested, which enables aliasing without forcing it:
>>>>>> https://patches.dpdk.org/patch/65863/
>>>>>>
>>>>>
>>>>> Personally, I am not convinced this is really needed.
>>>>>
>>>>> Are there any users asking for this?
>>>>
>>>> As it happens, it is breaking our ABI regression test suite.
>>>> One of the things we do is to run the unit test binary from v19.11 against the latest release.
>>>>
>>>>> Is there any other library where this is also applied, or is librte_meter the only library?
>>>>
>>>> librte_meter is the only example AFAIK.
>>>> But then we have only one example of needing symbol versioning at the moment (Cryptodev).
>>>>
>>>> This is going to happen with experimental symbols that have been around a while
>>>> and have become used in applications. It is a non-mandatory tool a maintainer
>>>> can use to preserve ABI compatibility.
>>>
>>> If you want to maintain ABI compatibility of experimental symbols,
>>> it IS a mandatory tool.
>>> You cannot enforce your "ABI regression test suite" and at the same time
>>> say it is "non-mandatory".
>>>
>>> The real question here is to know whether we want to maintain compatibility
>>> of experimental symbols. We said no. Then we said we can.
>>> The main concern is the message clarity in my opinion.
>>>
>>
>> There is complete clarity: there is no obligation.
>> Our lack of obligation around experimental is upfront in the policy:
>>
>> "Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."
>>
>> Later we give the _option_, without obligation, to add an alias to experimental. Please see the v6:
>>
>> +   - In situations in which an ``experimental`` symbol has been stable for some
>> +     time. When promoting the symbol to become part of the next ABI version, the
>> +     maintainer may choose to provide an alias to the ``experimental`` tag, so
>> +     as not to break consuming applications.
>>
>> So it is something a maintainer _may_ choose to do.
>> I use the word "may", not "will", as there are no obligations associated with experimental.
> 
> 
> OK Ray, this is my understanding as well.
> 
> The only difficult part to understand is when claiming
> "it is all breaking our abi regression test suite"
> to justify the choice.

The justification is the same as for any other consumer of DPDK saying you broke my app.

> As the maintainer (Cristian) says he does not like this change,
> it means the regression test suite should skip this case, right?

So the regression tests run the v19.11 unit tests against the v20.05 RC.
My thought was that this would provide reasonably good coverage of the ABI, to catch the more subtle regressions:
those regressions that affect the behavior of the ABI (the contract), instead of the ABI itself.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18 10:46  3%           ` Thomas Monjalon
@ 2020-05-18 11:18  0%             ` Dumitrescu, Cristian
  2020-05-18 11:49  0%               ` Ray Kinsella
  2020-05-18 11:48  4%             ` Ray Kinsella
  1 sibling, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2020-05-18 11:18 UTC (permalink / raw)
  To: Thomas Monjalon, Yigit, Ferruh, Ray Kinsella
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, May 18, 2020 11:46 AM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Ray Kinsella <mdr@ashroe.eu>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Neil Horman <nhorman@tuxdriver.com>; Eelco Chaudron
> <echaudro@redhat.com>; dev@dpdk.org; David Marchand
> <david.marchand@redhat.com>; stable@dpdk.org; Luca Boccassi
> <bluca@debian.org>; Richardson, Bruce <bruce.richardson@intel.com>;
> Stokes, Ian <ian.stokes@intel.com>; Andrzej Ostruszka
> <amo@semihalf.com>
> Subject: Re: [PATCH v4] meter: provide experimental alias of API for old apps
> 
> 18/05/2020 11:30, Ray Kinsella:
> > On 18/05/2020 10:22, Thomas Monjalon wrote:
> > > 18/05/2020 08:29, Ray Kinsella:
> > >> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
> > >>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > >>>>
> > >>>> In v20.02 some meter APIs matured and their symbols moved from
> > >>>> the EXPERIMENTAL to the DPDK_20.0.1 block.
> > >>>>
> > >>>> This can break the applications that were using these APIs on
> > >>>> v19.11. Although there is no modification of the APIs and the action is
> > >>>> positive and matures the APIs, the effect can be negative for
> > >>>> applications.
> > >>>>
> > >>>> Since experimental APIs can change or go away without notice as part
> > >>>> of the contract, to prevent this negative effect that may occur when
> > >>>> maturing an experimental API, a process update has already been
> > >>>> suggested, which enables aliasing without forcing it:
> > >>>> https://patches.dpdk.org/patch/65863/
> > >>>>
> > >>>
> > >>> Personally, I am not convinced this is really needed.
> > >>>
> > >>> Are there any users asking for this?
> > >>
> > >> As it happens, it is breaking our ABI regression test suite.
> > >> One of the things we do is to run the unit test binary from v19.11
> > >> against the latest release.
> > >>
> > >>> Is there any other library where this is also applied, or is librte_meter
> > >>> the only library?
> > >>
> > >> librte_meter is the only example AFAIK.
> > >> But then we have only one example of needing symbol versioning
> > >> at the moment (Cryptodev).
> > >>
> > >> This is going to happen with experimental symbols that have been
> > >> around a while and have become used in applications. It is a
> > >> non-mandatory tool a maintainer can use to preserve ABI compatibility.
> > >
> > > If you want to maintain ABI compatibility of experimental symbols,
> > > it IS a mandatory tool.
> > > You cannot enforce your "ABI regression test suite" and at the same time
> > > say it is "non-mandatory".
> > >
> > > The real question here is to know whether we want to maintain
> compatibility
> > > of experimental symbols. We said no. Then we said we can.
> > > The main concern is the message clarity in my opinion.
> > >
> >
> > There is complete clarity: there is no obligation.
> > Our lack of obligation around experimental is upfront in the policy:
> >
> > "Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."
> >
> > Later we give the _option_, without obligation, to add an alias to
> > experimental. Please see the v6:
> >
> > +   - In situations in which an ``experimental`` symbol has been stable for some
> > +     time. When promoting the symbol to become part of the next ABI version, the
> > +     maintainer may choose to provide an alias to the ``experimental`` tag, so
> > +     as not to break consuming applications.
> >
> > So it is something a maintainer _may_ choose to do.
> > I use the word "may", not "will", as there are no obligations associated
> > with experimental.
> 
> 
> OK Ray, this is my understanding as well.
> 
> The only difficult part to understand is when claiming
> "it is all breaking our abi regression test suite"
> to justify the choice.
> As the maintainer (Cristian) says he does not like this change,
> it means the regression test suite should skip this case, right?
> 

I am yet to be convinced of the value of this, but if some people think it is useful, I am willing to compromise. This is subject to this code being temporary, to be removed for the 20.11 release, which Ray already confirmed.

Ray, a few more suggestions, are you OK with them?
1. Move this code to a separate file in the library (suggest rte_meter_abi_compat.c as the file name)
2. Clearly state in the patch description this is temporary code to be removed for 20.11 release.
3. Agree that you or Ferruh take the AR to send a patch prior to the 20.11 release to remove this code.

Thanks,
Cristian

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18  9:30  4%         ` Ray Kinsella
@ 2020-05-18 10:46  3%           ` Thomas Monjalon
  2020-05-18 11:18  0%             ` Dumitrescu, Cristian
  2020-05-18 11:48  4%             ` Ray Kinsella
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2020-05-18 10:46 UTC (permalink / raw)
  To: Yigit, Ferruh, Ray Kinsella, Dumitrescu, Cristian
  Cc: Neil Horman, Eelco Chaudron, dev, David Marchand, stable,
	Luca Boccassi, Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka

18/05/2020 11:30, Ray Kinsella:
> On 18/05/2020 10:22, Thomas Monjalon wrote:
> > 18/05/2020 08:29, Ray Kinsella:
> >> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
> >>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> >>>>
> >>>> In v20.02 some meter APIs matured and their symbols moved from the
> >>>> EXPERIMENTAL to the DPDK_20.0.1 block.
> >>>>
> >>>> This can break the applications that were using these APIs on
> >>>> v19.11. Although there is no modification of the APIs and the action is
> >>>> positive and matures the APIs, the effect can be negative for
> >>>> applications.
> >>>>
> >>>> Since experimental APIs can change or go away without notice as part of
> >>>> the contract, to prevent this negative effect that may occur when
> >>>> maturing an experimental API, a process update has already been
> >>>> suggested, which enables aliasing without forcing it:
> >>>> https://patches.dpdk.org/patch/65863/
> >>>>
> >>>
> >>> Personally, I am not convinced this is really needed.
> >>>
> >>> Are there any users asking for this?
> >>
> >> As it happens, it is breaking our ABI regression test suite.
> >> One of the things we do is to run the unit test binary from v19.11 against the latest release.
> >>
> >>> Is there any other library where this is also applied, or is librte_meter the only library?
> >>
> >> librte_meter is the only example AFAIK.
> >> But then we have only one example of needing symbol versioning at the moment (Cryptodev).
> >>
> >> This is going to happen with experimental symbols that have been around a while
> >> and have become used in applications. It is a non-mandatory tool a maintainer
> >> can use to preserve ABI compatibility.
> > 
> > If you want to maintain ABI compatibility of experimental symbols,
> > it IS a mandatory tool.
> > You cannot enforce your "ABI regression test suite" and at the same time
> > say it is "non-mandatory".
> > 
> > The real question here is to know whether we want to maintain compatibility
> > of experimental symbols. We said no. Then we said we can.
> > The main concern is the message clarity in my opinion.
> > 
> 
> There is complete clarity: there is no obligation.
> Our lack of obligation around experimental is upfront in the policy:
> 
> "Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."
> 
> Later we give the _option_, without obligation, to add an alias to experimental. Please see the v6:
> 
> +   - In situations in which an ``experimental`` symbol has been stable for some
> +     time. When promoting the symbol to become part of the next ABI version, the
> +     maintainer may choose to provide an alias to the ``experimental`` tag, so
> +     as not to break consuming applications.
> 
> So it is something a maintainer _may_ choose to do.
> I use the word "may", not "will", as there are no obligations associated with experimental.


OK Ray, this is my understanding as well.

The only difficult part to understand is when claiming
"it is all breaking our abi regression test suite"
to justify the choice.
As the maintainer (Cristian) says he does not like this change,
it means the regression test suite should skip this case, right?



^ permalink raw reply	[relevance 3%]

* [dpdk-dev] DPDK 20.05 RC2 Test Report
@ 2020-05-18 10:39  3% Peng, Yuan
  0 siblings, 0 replies; 200+ results
From: Peng, Yuan @ 2020-05-18 10:39 UTC (permalink / raw)
  To: dev

RC2 testing is finished. Here is the DPDK 20.05 RC2 validation report:

  *   Created ~400 new test cases for DPDK 20.05 new features.
  *   Ran 10143 cases in total; debug progress is around 99%, the pass rate is about 97%; 6 new issues were found, no critical issue found.
  *   Checked daily build: all pass.
  *   Checked basic NIC PMD (i40e, ixgbe, ice) PF & VF regression: newly found 1 PF issue and 1 VF issue.
  *   Checked virtio regression: 1 bug found.
  *   Checked cryptodev and compressdev regression: no new issues found.
  *   Checked NIC performance: no new issue found.
  *   Checked ABI test: no new issue found.
  *   Checked 20.05 new features: found 3 new issues.

Thank you.
Yuan.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18  9:22  4%       ` Thomas Monjalon
@ 2020-05-18  9:30  4%         ` Ray Kinsella
  2020-05-18 10:46  3%           ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-18  9:30 UTC (permalink / raw)
  To: Thomas Monjalon, Yigit, Ferruh
  Cc: Dumitrescu, Cristian, Neil Horman, Eelco Chaudron, dev,
	David Marchand, stable, Luca Boccassi, Richardson, Bruce, Stokes,
	Ian, Andrzej Ostruszka



On 18/05/2020 10:22, Thomas Monjalon wrote:
> 18/05/2020 08:29, Ray Kinsella:
>> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>>
>>>> In v20.02 some meter APIs matured and their symbols moved from the
>>>> EXPERIMENTAL to the DPDK_20.0.1 block.
>>>>
>>>> This can break the applications that were using these APIs on
>>>> v19.11. Although there is no modification of the APIs and the action is
>>>> positive and matures the APIs, the effect can be negative for
>>>> applications.
>>>>
>>>> Since experimental APIs can change or go away without notice as part of
>>>> the contract, to prevent this negative effect that may occur when
>>>> maturing an experimental API, a process update has already been
>>>> suggested, which enables aliasing without forcing it:
>>>> https://patches.dpdk.org/patch/65863/
>>>>
>>>
>>> Personally, I am not convinced this is really needed.
>>>
>>> Are there any users asking for this?
>>
>> As it happens, it is breaking our ABI regression test suite.
>> One of the things we do is to run the unit test binary from v19.11 against the latest release.
>>
>>> Is there any other library where this is also applied, or is librte_meter the only library?
>>
>> librte_meter is the only example AFAIK.
>> But then we have only one example of needing symbol versioning at the moment (Cryptodev).
>>
>> This is going to happen with experimental symbols that have been around a while
>> and have become used in applications. It is a non-mandatory tool a maintainer
>> can use to preserve ABI compatibility.
> 
> If you want to maintain ABI compatibility of experimental symbols,
> it IS a mandatory tool.
> You cannot enforce your "ABI regression test suite" and at the same time
> say it is "non-mandatory".> 
> The real question here is to know whether we want to maintain compatibility
> of experimental symbols. We said no. Then we said we can.
> The main concern is the message clarity in my opinion.
> 

There is complete clarity: there is no obligation.
Our lack of obligation around experimental is upfront in the policy:

"Libraries or APIs marked as experimental may change without constraint, as they are not considered part of an ABI version. Experimental libraries have the major ABI version 0."

Later we give the _option_, without obligation, to add an alias to experimental. Please see the v6:

+   - In situations in which an ``experimental`` symbol has been stable for some
+     time. When promoting the symbol to become part of the next ABI version, the
+     maintainer may choose to provide an alias to the ``experimental`` tag, so
+     as not to break consuming applications.

So it is something a maintainer _may_ choose to do.
I use the word "may", not "will", as there are no obligations associated with experimental.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-18  6:29  4%     ` Ray Kinsella
@ 2020-05-18  9:22  4%       ` Thomas Monjalon
  2020-05-18  9:30  4%         ` Ray Kinsella
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-18  9:22 UTC (permalink / raw)
  To: Yigit, Ferruh, Ray Kinsella
  Cc: Dumitrescu, Cristian, Neil Horman, Eelco Chaudron, dev,
	David Marchand, stable, Luca Boccassi, Richardson, Bruce, Stokes,
	Ian, Andrzej Ostruszka

18/05/2020 08:29, Ray Kinsella:
> On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
> > From: Yigit, Ferruh <ferruh.yigit@intel.com>
> >>
> >> In v20.02 some meter APIs matured and their symbols moved from the
> >> EXPERIMENTAL to the DPDK_20.0.1 block.
> >>
> >> This can break the applications that were using these APIs on
> >> v19.11. Although there is no modification of the APIs and the action is
> >> positive and matures the APIs, the effect can be negative for
> >> applications.
> >>
> >> Since experimental APIs can change or go away without notice as part of
> >> the contract, to prevent this negative effect that may occur when
> >> maturing an experimental API, a process update has already been
> >> suggested, which enables aliasing without forcing it:
> >> https://patches.dpdk.org/patch/65863/
> >>
> > 
> > Personally, I am not convinced this is really needed.
> > 
> > Are there any users asking for this?
> 
> As it happens, it is breaking our ABI regression test suite.
> One of the things we do is to run the unit test binary from v19.11 against the latest release.
> 
> > Is there any other library where this is also applied, or is librte_meter the only library?
> 
> librte_meter is the only example AFAIK.
> But then we have only one example of needing symbol versioning at the moment (Cryptodev).
> 
> This is going to happen with experimental symbols that have been around a while
> and have become used in applications. It is a non-mandatory tool a maintainer
> can use to preserve ABI compatibility.

If you want to maintain ABI compatibility of experimental symbols,
it IS a mandatory tool.
You cannot enforce your "ABI regression test suite" and at the same time
say it is "non-mandatory".

The real question here is to know whether we want to maintain compatibility
of experimental symbols. We said no. Then we said we can.
The main concern is the message clarity in my opinion.



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-17 19:52  0%   ` [dpdk-dev] [PATCH v4] meter: " Dumitrescu, Cristian
@ 2020-05-18  6:29  4%     ` Ray Kinsella
  2020-05-18  9:22  4%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-18  6:29 UTC (permalink / raw)
  To: Dumitrescu, Cristian, Yigit, Ferruh, Neil Horman, Eelco Chaudron
  Cc: dev, Thomas Monjalon, David Marchand, stable, Luca Boccassi,
	Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka



On 17/05/2020 20:52, Dumitrescu, Cristian wrote:
> Hi Ferruh,
> 
>> -----Original Message-----
>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Sent: Thursday, May 14, 2020 5:11 PM
>> To: Ray Kinsella <mdr@ashroe.eu>; Neil Horman
>> <nhorman@tuxdriver.com>; Dumitrescu, Cristian
>> <cristian.dumitrescu@intel.com>; Eelco Chaudron <echaudro@redhat.com>
>> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Thomas Monjalon
>> <thomas@monjalon.net>; David Marchand <david.marchand@redhat.com>;
>> stable@dpdk.org; Luca Boccassi <bluca@debian.org>; Richardson, Bruce
>> <bruce.richardson@intel.com>; Stokes, Ian <ian.stokes@intel.com>; Andrzej
>> Ostruszka <amo@semihalf.com>
>> Subject: [PATCH v4] meter: provide experimental alias of API for old apps
>>
>> In v20.02 some meter APIs matured and their symbols moved from the
>> EXPERIMENTAL to the DPDK_20.0.1 block.
>>
>> This can break the applications that were using these APIs on
>> v19.11. Although there is no modification of the APIs and the action is
>> positive and matures the APIs, the effect can be negative for
>> applications.
>>
>> Since experimental APIs can change or go away without notice as part of
>> the contract, to prevent this negative effect that may occur when
>> maturing an experimental API, a process update has already been
>> suggested, which enables aliasing without forcing it:
>> https://patches.dpdk.org/patch/65863/
>>
> 
> Personally, I am not convinced this is really needed.
> 
> Are there any users asking for this?

As it happens, it is breaking our ABI regression test suite.
One of the things we do is to run the unit test binary from v19.11 against the latest release.

> Is there any other library where this is also applied, or is librte_meter the only library?

librte_meter is the only example AFAIK.
But then we have only one example of needing symbol versioning at the moment (Cryptodev).

This is going to happen with experimental symbols that have been around a while
and have become used in applications. It is a non-mandatory tool a maintainer
can use to preserve ABI compatibility.

> 
>> This patch provides aliasing by duplicating the existing, versioned
>> symbols as experimental.
>>
>> Since the symbols moved from the DPDK_20.0.1 to the DPDK_21 block in
>> v20.05, the aliasing is done between EXPERIMENTAL and DPDK_21.
>>
>> Also, the following changes were done to enable aliasing:
>>
>> Created the VERSION_SYMBOL_EXPERIMENTAL helper macro.
>>
>> Updated the 'check-symbols.sh' buildtool, which was complaining that a
>> symbol is in the EXPERIMENTAL tag in the .map file but not in the
>> .experimental section (__rte_experimental tag missing).
>> Updated the tool so it won't complain if a symbol in the EXPERIMENTAL
>> tag is duplicated in some other (versioned) block of the .map file.
>>
>> Enabled function versioning in the meson build for the library.
>>
>> Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM
>> API")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> Cc: Neil Horman <nhorman@tuxdriver.com>
>> Cc: Thomas Monjalon <thomas@monjalon.net>
>> Cc: Luca Boccassi <bluca@debian.org>
>> Cc: David Marchand <david.marchand@redhat.com>
>> Cc: Bruce Richardson <bruce.richardson@intel.com>
>> Cc: Ian Stokes <ian.stokes@intel.com>
>> Cc: Eelco Chaudron <echaudro@redhat.com>
>> Cc: Andrzej Ostruszka <amo@semihalf.com>
>> Cc: Ray Kinsella <mdr@ashroe.eu>
>>
>> v2:
>> * Commit log updated
>>
>> v3:
>> * added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro
>>
>> v4:
>> * update script name in commit log, remove empty line
>> ---
>>  buildtools/check-symbols.sh                   |  3 +-
>>  .../include/rte_function_versioning.h         |  9 +++
>>  lib/librte_meter/meson.build                  |  1 +
>>  lib/librte_meter/rte_meter.c                  | 59 ++++++++++++++++++-
>>  lib/librte_meter/rte_meter_version.map        |  8 +++
>>  5 files changed, 76 insertions(+), 4 deletions(-)
>>
>> diff --git a/buildtools/check-symbols.sh b/buildtools/check-symbols.sh
>> index 3df57c322c..e407553a34 100755
>> --- a/buildtools/check-symbols.sh
>> +++ b/buildtools/check-symbols.sh
>> @@ -26,7 +26,8 @@ ret=0
>>  for SYM in `$LIST_SYMBOL -S EXPERIMENTAL $MAPFILE |cut -d ' ' -f 3`
>>  do
>>  	if grep -q "\.text.*[[:space:]]$SYM$" $DUMPFILE &&
>> -		! grep -q "\.text\.experimental.*[[:space:]]$SYM$"
>> $DUMPFILE
>> +		! grep -q "\.text\.experimental.*[[:space:]]$SYM$"
>> $DUMPFILE &&
>> +		$LIST_SYMBOL -s $SYM $MAPFILE | grep -q EXPERIMENTAL
>>  	then
>>  		cat >&2 <<- END_OF_MESSAGE
>>  		$SYM is not flagged as experimental
>> diff --git a/lib/librte_eal/include/rte_function_versioning.h
>> b/lib/librte_eal/include/rte_function_versioning.h
>> index b9f862d295..f588f2643b 100644
>> --- a/lib/librte_eal/include/rte_function_versioning.h
>> +++ b/lib/librte_eal/include/rte_function_versioning.h
>> @@ -46,6 +46,14 @@
>>   */
>>  #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b)
>> RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
>>
>> +/*
>> + * VERSION_SYMBOL_EXPERIMENTAL
>> + * Creates a symbol version table entry binding the symbol
>> <b>@EXPERIMENTAL to the internal
>> + * function name <b><e>. The macro is used when a symbol matures to
>> become part of the stable ABI,
>> + * to provide an alias to experimental for some time.
>> + */
>> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver "
>> RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
>> +
>>  /*
>>   * BIND_DEFAULT_SYMBOL
>>   * Creates a symbol version entry instructing the linker to bind references to
>> @@ -79,6 +87,7 @@
>>   * No symbol versioning in use
>>   */
>>  #define VERSION_SYMBOL(b, e, n)
>> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e)
>>  #define __vsym
>>  #define BIND_DEFAULT_SYMBOL(b, e, n)
>>  #define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))
>> diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
>> index 646fd4d43f..fce0368437 100644
>> --- a/lib/librte_meter/meson.build
>> +++ b/lib/librte_meter/meson.build
>> @@ -3,3 +3,4 @@
>>
>>  sources = files('rte_meter.c')
>>  headers = files('rte_meter.h')
>> +use_function_versioning = true
>> diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
>> index da01429a8b..c600b05064 100644
>> --- a/lib/librte_meter/rte_meter.c
>> +++ b/lib/librte_meter/rte_meter.c
>> @@ -9,6 +9,7 @@
>>  #include <rte_common.h>
>>  #include <rte_log.h>
>>  #include <rte_cycles.h>
>> +#include <rte_function_versioning.h>
>>
>>  #include "rte_meter.h"
>>
>> @@ -119,8 +120,8 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
>>  	return 0;
>>  }
>>
>> -int
>> -rte_meter_trtcm_rfc4115_profile_config(
>> +static int
>> +rte_meter_trtcm_rfc4115_profile_config_(
>>  	struct rte_meter_trtcm_rfc4115_profile *p,
>>  	struct rte_meter_trtcm_rfc4115_params *params)
>>  {
>> @@ -145,7 +146,35 @@ rte_meter_trtcm_rfc4115_profile_config(
>>  }
>>
>>  int
>> -rte_meter_trtcm_rfc4115_config(
>> +rte_meter_trtcm_rfc4115_profile_config_s(
>> +	struct rte_meter_trtcm_rfc4115_profile *p,
>> +	struct rte_meter_trtcm_rfc4115_params *params);
>> +int
>> +rte_meter_trtcm_rfc4115_profile_config_s(
>> +	struct rte_meter_trtcm_rfc4115_profile *p,
>> +	struct rte_meter_trtcm_rfc4115_params *params)
>> +{
>> +	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
>> +}
>> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
>> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct
>> rte_meter_trtcm_rfc4115_profile *p,
>> +		struct rte_meter_trtcm_rfc4115_params *params),
>> rte_meter_trtcm_rfc4115_profile_config_s);
>> +
>> +int
>> +rte_meter_trtcm_rfc4115_profile_config_e(
>> +	struct rte_meter_trtcm_rfc4115_profile *p,
>> +	struct rte_meter_trtcm_rfc4115_params *params);
>> +int
>> +rte_meter_trtcm_rfc4115_profile_config_e(
>> +	struct rte_meter_trtcm_rfc4115_profile *p,
>> +	struct rte_meter_trtcm_rfc4115_params *params)
>> +{
>> +	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
>> +}
>> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_conf
>> ig, _e);
>> +
>> +static int
>> +rte_meter_trtcm_rfc4115_config_(
>>  	struct rte_meter_trtcm_rfc4115 *m,
>>  	struct rte_meter_trtcm_rfc4115_profile *p)
>>  {
>> @@ -160,3 +189,27 @@ rte_meter_trtcm_rfc4115_config(
>>
>>  	return 0;
>>  }
>> +
>> +int
>> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
>> +	struct rte_meter_trtcm_rfc4115_profile *p);
>> +int
>> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
>> +	struct rte_meter_trtcm_rfc4115_profile *p)
>> +{
>> +	return rte_meter_trtcm_rfc4115_config_(m, p);
>> +}
>> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
>> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct
>> rte_meter_trtcm_rfc4115 *m,
>> +		 struct rte_meter_trtcm_rfc4115_profile *p),
>> rte_meter_trtcm_rfc4115_config_s);
>> +
>> +int
>> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
>> +	struct rte_meter_trtcm_rfc4115_profile *p);
>> +int
>> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
>> +	struct rte_meter_trtcm_rfc4115_profile *p)
>> +{
>> +	return rte_meter_trtcm_rfc4115_config_(m, p);
>> +}
>> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
> 
> To me, this is a significant amount of dead code that does not add any functionality and does not bring any added value to the library for any user. I am not a build system expert, but I would definitely prefer avoiding adding any C code to the library for this purpose and just modifying the map file instead. Would this approach be possible?

The approach is exactly the same as the rest of symbol versioning.
 
> Also, very important, is this C code to be added permanently or is it added just on a temporary basis? If temporary, when is it going to be removed?

It will be removed in the v21 (20.11 LTS) release,
when we officially rev the ABI and start afresh.

> 
>> diff --git a/lib/librte_meter/rte_meter_version.map
>> b/lib/librte_meter/rte_meter_version.map
>> index 2c7dadbcac..b493bcebe9 100644
>> --- a/lib/librte_meter/rte_meter_version.map
>> +++ b/lib/librte_meter/rte_meter_version.map
>> @@ -20,4 +20,12 @@ DPDK_21 {
>>  	rte_meter_trtcm_rfc4115_color_blind_check;
>>  	rte_meter_trtcm_rfc4115_config;
>>  	rte_meter_trtcm_rfc4115_profile_config;
>> +
>>  } DPDK_20.0;
>> +
>> +EXPERIMENTAL {
>> +       global:
>> +
>> +	rte_meter_trtcm_rfc4115_config;
>> +	rte_meter_trtcm_rfc4115_profile_config;
>> +};
>> --
>> 2.25.4
> 
> Regards,
> Cristian
> 

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
  2020-05-14 16:11  4% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
  2020-05-15 14:36 12%   ` [dpdk-dev] [PATCH v5] abi: " Ray Kinsella
  2020-05-15 15:01 12%   ` [dpdk-dev] [PATCH v6] " Ray Kinsella
@ 2020-05-17 19:52  0%   ` Dumitrescu, Cristian
  2020-05-18  6:29  4%     ` Ray Kinsella
  2 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2020-05-17 19:52 UTC (permalink / raw)
  To: Yigit, Ferruh, Ray Kinsella, Neil Horman, Eelco Chaudron
  Cc: dev, Thomas Monjalon, David Marchand, stable, Luca Boccassi,
	Richardson, Bruce, Stokes, Ian, Andrzej Ostruszka

Hi Ferruh,

> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Thursday, May 14, 2020 5:11 PM
> To: Ray Kinsella <mdr@ashroe.eu>; Neil Horman
> <nhorman@tuxdriver.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Eelco Chaudron <echaudro@redhat.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; David Marchand <david.marchand@redhat.com>;
> stable@dpdk.org; Luca Boccassi <bluca@debian.org>; Richardson, Bruce
> <bruce.richardson@intel.com>; Stokes, Ian <ian.stokes@intel.com>; Andrzej
> Ostruszka <amo@semihalf.com>
> Subject: [PATCH v4] meter: provide experimental alias of API for old apps
> 
> On v20.02 some meter APIs matured and their symbols moved from the
> EXPERIMENTAL to the DPDK_20.0.1 block.
> 
> This can break applications that were using these APIs on v19.11.
> Although there is no modification of the APIs and the action is positive
> and matures the APIs, the effect on applications can be negative.
> 
> Since experimental APIs can change or go away without notice as part of
> the contract, a process update has already been suggested to prevent this
> negative effect that maturing an experimental API may cause; it enables
> aliasing without forcing it:
> https://patches.dpdk.org/patch/65863/
> 

Personally, I am not convinced this is really needed.

Are there any users asking for this?

Is there any other library where this is also applied, or is librte_meter the only library?

> This patch provides aliasing by duplicating the existing and versioned
> symbols as experimental.
> 
> Since the symbols moved from the DPDK_20.0.1 to the DPDK_21 block in
> v20.05, the aliasing is done between EXPERIMENTAL and DPDK_21.
> 
> Also, the following changes were done to enable aliasing:
> 
> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
> 
> Updated the 'check-symbols.sh' buildtool, which was complaining when a
> symbol is in the EXPERIMENTAL tag in the .map file but is not in the
> .experimental section (the __rte_experimental tag is missing).
> The tool is updated so that it won't complain if the symbol in the
> EXPERIMENTAL tag is duplicated in some other (versioned) block of the
> .map file.
> 
> Enabled function versioning for the meson build of the library.
> 
> Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM
> API")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Luca Boccassi <bluca@debian.org>
> Cc: David Marchand <david.marchand@redhat.com>
> Cc: Bruce Richardson <bruce.richardson@intel.com>
> Cc: Ian Stokes <ian.stokes@intel.com>
> Cc: Eelco Chaudron <echaudro@redhat.com>
> Cc: Andrzej Ostruszka <amo@semihalf.com>
> Cc: Ray Kinsella <mdr@ashroe.eu>
> 
> v2:
> * Commit log updated
> 
> v3:
> * added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro
> 
> v4:
> * update script name in commit log, remove empty line
> ---
>  buildtools/check-symbols.sh                   |  3 +-
>  .../include/rte_function_versioning.h         |  9 +++
>  lib/librte_meter/meson.build                  |  1 +
>  lib/librte_meter/rte_meter.c                  | 59 ++++++++++++++++++-
>  lib/librte_meter/rte_meter_version.map        |  8 +++
>  5 files changed, 76 insertions(+), 4 deletions(-)
> 
> diff --git a/buildtools/check-symbols.sh b/buildtools/check-symbols.sh
> index 3df57c322c..e407553a34 100755
> --- a/buildtools/check-symbols.sh
> +++ b/buildtools/check-symbols.sh
> @@ -26,7 +26,8 @@ ret=0
>  for SYM in `$LIST_SYMBOL -S EXPERIMENTAL $MAPFILE |cut -d ' ' -f 3`
>  do
>  	if grep -q "\.text.*[[:space:]]$SYM$" $DUMPFILE &&
> -		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE
> +		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE &&
> +		$LIST_SYMBOL -s $SYM $MAPFILE | grep -q EXPERIMENTAL
>  	then
>  		cat >&2 <<- END_OF_MESSAGE
>  		$SYM is not flagged as experimental
> diff --git a/lib/librte_eal/include/rte_function_versioning.h
> b/lib/librte_eal/include/rte_function_versioning.h
> index b9f862d295..f588f2643b 100644
> --- a/lib/librte_eal/include/rte_function_versioning.h
> +++ b/lib/librte_eal/include/rte_function_versioning.h
> @@ -46,6 +46,14 @@
>   */
>  #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
> 
> +/*
> + * VERSION_SYMBOL_EXPERIMENTAL
> + * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
> + * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
> + * to provide an alias to experimental for some time.
> + */
> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
> +
>  /*
>   * BIND_DEFAULT_SYMBOL
>   * Creates a symbol version entry instructing the linker to bind references to
> @@ -79,6 +87,7 @@
>   * No symbol versioning in use
>   */
>  #define VERSION_SYMBOL(b, e, n)
> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e)
>  #define __vsym
>  #define BIND_DEFAULT_SYMBOL(b, e, n)
>  #define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))
> diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
> index 646fd4d43f..fce0368437 100644
> --- a/lib/librte_meter/meson.build
> +++ b/lib/librte_meter/meson.build
> @@ -3,3 +3,4 @@
> 
>  sources = files('rte_meter.c')
>  headers = files('rte_meter.h')
> +use_function_versioning = true
> diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
> index da01429a8b..c600b05064 100644
> --- a/lib/librte_meter/rte_meter.c
> +++ b/lib/librte_meter/rte_meter.c
> @@ -9,6 +9,7 @@
>  #include <rte_common.h>
>  #include <rte_log.h>
>  #include <rte_cycles.h>
> +#include <rte_function_versioning.h>
> 
>  #include "rte_meter.h"
> 
> @@ -119,8 +120,8 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
>  	return 0;
>  }
> 
> -int
> -rte_meter_trtcm_rfc4115_profile_config(
> +static int
> +rte_meter_trtcm_rfc4115_profile_config_(
>  	struct rte_meter_trtcm_rfc4115_profile *p,
>  	struct rte_meter_trtcm_rfc4115_params *params)
>  {
> @@ -145,7 +146,35 @@ rte_meter_trtcm_rfc4115_profile_config(
>  }
> 
>  int
> -rte_meter_trtcm_rfc4115_config(
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_s(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
> +		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params);
> +int
> +rte_meter_trtcm_rfc4115_profile_config_e(
> +	struct rte_meter_trtcm_rfc4115_profile *p,
> +	struct rte_meter_trtcm_rfc4115_params *params)
> +{
> +	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
> +
> +static int
> +rte_meter_trtcm_rfc4115_config_(
>  	struct rte_meter_trtcm_rfc4115 *m,
>  	struct rte_meter_trtcm_rfc4115_profile *p)
>  {
> @@ -160,3 +189,27 @@ rte_meter_trtcm_rfc4115_config(
> 
>  	return 0;
>  }
> +
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return rte_meter_trtcm_rfc4115_config_(m, p);
> +}
> +BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
> +MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
> +		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
> +
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p);
> +int
> +rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
> +	struct rte_meter_trtcm_rfc4115_profile *p)
> +{
> +	return rte_meter_trtcm_rfc4115_config_(m, p);
> +}
> +VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);

To me, this is a significant amount of dead code that does not add any functionality and does not bring any added value to the library for any user. I am not a build system expert, but I would definitely prefer avoiding adding any C code to the library for this purpose and just modifying the map file instead; would this approach be possible?

Also, very important, is this C code to be added permanently or is it added just on a temporary basis? If temporary, when is it going to be removed?

> diff --git a/lib/librte_meter/rte_meter_version.map
> b/lib/librte_meter/rte_meter_version.map
> index 2c7dadbcac..b493bcebe9 100644
> --- a/lib/librte_meter/rte_meter_version.map
> +++ b/lib/librte_meter/rte_meter_version.map
> @@ -20,4 +20,12 @@ DPDK_21 {
>  	rte_meter_trtcm_rfc4115_color_blind_check;
>  	rte_meter_trtcm_rfc4115_config;
>  	rte_meter_trtcm_rfc4115_profile_config;
> +
>  } DPDK_20.0;
> +
> +EXPERIMENTAL {
> +       global:
> +
> +	rte_meter_trtcm_rfc4115_config;
> +	rte_meter_trtcm_rfc4115_profile_config;
> +};
> --
> 2.25.4

Regards,
Cristian

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-15 15:01 12%   ` [dpdk-dev] [PATCH v6] " Ray Kinsella
@ 2020-05-16 11:53  4%     ` Neil Horman
  2020-05-18 17:18  4%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Neil Horman @ 2020-05-16 11:53 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: dev, Ferruh Yigit, Thomas Monjalon, Luca Boccassi,
	David Marchand, Bruce Richardson, Ian Stokes, Eelco Chaudron,
	Andrzej Ostruszka, Kevin Traynor, John McNamara, Marko Kovacevic,
	Cristian Dumitrescu

On Fri, May 15, 2020 at 04:01:53PM +0100, Ray Kinsella wrote:
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> On v20.02 some APIs matured and their symbols moved from the EXPERIMENTAL
> to the DPDK_20.0.1 block.
> 
> This had the effect of breaking applications that were using these
> APIs on v19.11. Although there is no modification of the APIs and the
> action is positive and matures the APIs, the effect on applications
> can be negative.
> 
> When a maintainer is promoting an API to become part of the next major
> ABI version by removing the experimental tag, the maintainer may
> choose to offer an alias to the experimental tag, to prevent these
> breakages in the future.
> 
> The following changes are made to enable aliasing:
> 
> Updated the ABI policy and ABI versioning documents.
> 
> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
> 
> Updated the 'check-symbols.sh' buildtool, which was complaining when a
> symbol is in the EXPERIMENTAL tag in the .map file but is not in the
> .experimental section (the __rte_experimental tag is missing).
> The tool is updated so that it won't complain if the symbol in the
> EXPERIMENTAL tag is duplicated in some other (versioned) block of the
> .map file.
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> 
> This patch depends on "doc: fix references to bind_default_symbol".
> https://patches.dpdk.org/patch/69850/
> 
> Cc: Neil Horman <nhorman@tuxdriver.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>
> Cc: Luca Boccassi <bluca@debian.org>
> Cc: David Marchand <david.marchand@redhat.com>
> Cc: Bruce Richardson <bruce.richardson@intel.com>
> Cc: Ian Stokes <ian.stokes@intel.com>
> Cc: Eelco Chaudron <echaudro@redhat.com>
> Cc: Andrzej Ostruszka <amo@semihalf.com>
> Cc: Ray Kinsella <mdr@ashroe.eu>
> CC: Kevin Traynor <ktraynor@redhat.com>
> 
> v2:
> * Commit log updated
> 
> v3:
> * added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro
> 
> v4:
> * update script name in commit log, remove empty line
> 
> v5:
> * incorporate policy and version doc changes
> * remove changes to librte_meter
> 
> v6:
> * clarified dependency chain includes "doc: fix references to bind_default_symbol".
> 
>  buildtools/check-symbols.sh                      |   3 +-
>  doc/guides/contributing/abi_policy.rst           |  10 ++
>  doc/guides/contributing/abi_versioning.rst       | 158 ++++++++++++++++++++++-
>  lib/librte_eal/include/rte_function_versioning.h |   9 ++
>  4 files changed, 178 insertions(+), 2 deletions(-)
> 
Acked-by: Neil Horman <nhorman@tuxdriver.com>

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v6] abi: provide experimental alias of API for old apps
  2020-05-14 16:11  4% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
  2020-05-15 14:36 12%   ` [dpdk-dev] [PATCH v5] abi: " Ray Kinsella
@ 2020-05-15 15:01 12%   ` Ray Kinsella
  2020-05-16 11:53  4%     ` Neil Horman
  2020-05-17 19:52  0%   ` [dpdk-dev] [PATCH v4] meter: " Dumitrescu, Cristian
  2 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2020-05-15 15:01 UTC (permalink / raw)
  To: dev
  Cc: Ferruh Yigit, Ray Kinsella, Neil Horman, Thomas Monjalon,
	Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu

From: Ferruh Yigit <ferruh.yigit@intel.com>

On v20.02 some APIs matured and their symbols moved from the EXPERIMENTAL
to the DPDK_20.0.1 block.

This had the effect of breaking applications that were using these
APIs on v19.11. Although there is no modification of the APIs and the
action is positive and matures the APIs, the effect on applications
can be negative.

When a maintainer is promoting an API to become part of the next major
ABI version by removing the experimental tag, the maintainer may
choose to offer an alias to the experimental tag, to prevent these
breakages in the future.

The following changes are made to enable aliasing:

Updated the ABI policy and ABI versioning documents.

Created VERSION_SYMBOL_EXPERIMENTAL helper macro.

Updated the 'check-symbols.sh' buildtool, which was complaining when a
symbol is in the EXPERIMENTAL tag in the .map file but is not in the
.experimental section (the __rte_experimental tag is missing).
The tool is updated so that it won't complain if the symbol in the
EXPERIMENTAL tag is duplicated in some other (versioned) block of the
.map file.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---

This patch depends on "doc: fix references to bind_default_symbol".
https://patches.dpdk.org/patch/69850/

Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Luca Boccassi <bluca@debian.org>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ian Stokes <ian.stokes@intel.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Andrzej Ostruszka <amo@semihalf.com>
Cc: Ray Kinsella <mdr@ashroe.eu>
CC: Kevin Traynor <ktraynor@redhat.com>

v2:
* Commit log updated

v3:
* added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro

v4:
* update script name in commit log, remove empty line

v5:
* incorporate policy and version doc changes
* remove changes to librte_meter

v6:
* clarified dependency chain includes "doc: fix references to bind_default_symbol".

 buildtools/check-symbols.sh                      |   3 +-
 doc/guides/contributing/abi_policy.rst           |  10 ++
 doc/guides/contributing/abi_versioning.rst       | 158 ++++++++++++++++++++++-
 lib/librte_eal/include/rte_function_versioning.h |   9 ++
 4 files changed, 178 insertions(+), 2 deletions(-)

diff --git a/buildtools/check-symbols.sh b/buildtools/check-symbols.sh
index 3df57c3..e407553 100755
--- a/buildtools/check-symbols.sh
+++ b/buildtools/check-symbols.sh
@@ -26,7 +26,8 @@ ret=0
 for SYM in `$LIST_SYMBOL -S EXPERIMENTAL $MAPFILE |cut -d ' ' -f 3`
 do
 	if grep -q "\.text.*[[:space:]]$SYM$" $DUMPFILE &&
-		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE
+		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE &&
+		$LIST_SYMBOL -s $SYM $MAPFILE | grep -q EXPERIMENTAL
 	then
 		cat >&2 <<- END_OF_MESSAGE
 		$SYM is not flagged as experimental
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index 86e7dd9..c33bff1 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -160,6 +160,11 @@ The requirements for changing the ABI are:
      ``experimental``, as described in the section on :ref:`Experimental APIs
      and Libraries <experimental_apis>`.

+   - In situations in which an ``experimental`` symbol has been stable for some
+     time, when promoting the symbol to become part of the next ABI version, the
+     maintainer may choose to provide an alias to the ``experimental`` tag, so
+     as not to break consuming applications.
+
 #. If a newly proposed API functionally replaces an existing one, when the new
    API becomes non-experimental, then the old one is marked with
    ``__rte_deprecated``.
@@ -318,6 +323,11 @@ not required. Though, an API should remain in experimental state for at least
 one release. Thereafter, the normal process of posting patch for review to
 mailing list can be followed.

+After the experimental tag has been formally removed, a tree/sub-tree maintainer
+may choose to offer an alias to the experimental tag so as not to break
+applications using the symbol. The alias is then dropped at the declaration
+of the next major ABI version.
+
 Libraries
 ~~~~~~~~~

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 7065979..27b5231 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -156,6 +156,11 @@ The macros exported are:
   ``be`` to signal that it is being used as an implementation of a particular
   version of symbol ``b``.

+* ``VERSION_SYMBOL_EXPERIMENTAL(b, e)``: Creates a symbol version table entry
+  binding versioned symbol ``b@EXPERIMENTAL`` to the internal function ``be``.
+  The macro is used when a symbol matures to become part of the stable ABI, to
+  provide an alias to experimental for some time.
+
 .. _example_abi_macro_usage:

 Examples of ABI Macro use
@@ -361,7 +366,7 @@ and a new DPDK_21 version, used by future built applications.
 .. note::

    **Before you leave**, please take care to the review the sections on
-   :ref:`Mapping static symbols <mapping_static_symbols>`, :ref:`Enabling
+   :ref:`mapping static symbols <mapping_static_symbols>`, :ref:`enabling
    versioning macros <enabling_versioning_macros>` and :ref:`ABI deprecation
    <abi_decprecation>`.

@@ -415,6 +420,157 @@ at the start of the head of the file. This will indicate to the tool-chain to
 enable the function version macros when building. There is no corresponding
 directive required for the ``make`` build system.

+.. _aliasing_experimental_symbols:
+
+Aliasing experimental symbols
+_____________________________
+
+In situations in which an ``experimental`` symbol has been stable for some time
+and becomes a candidate for promotion to the stable ABI, the maintainer may
+choose, when promoting the symbol, to provide an alias to the ``experimental``
+symbol version, so as not to break consuming applications.
+
+The process to provide an alias to ``experimental`` is similar to that of
+:ref:`symbol versioning <example_abi_macro_usage>` described above. Assume we
+have an experimental function ``rte_acl_create`` as follows:
+
+.. code-block:: c
+
+ #include <rte_compat.h>;
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ __rte_experimental
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+ ...
+ }
+
+In the map file, experimental symbols are listed as part of the ``experimental``
+version node.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   EXPERIMENTAL {
+        global:
+
+        rte_acl_create;
+   };
+
+When we promote the symbol to the stable ABI, we simply strip the
+``__rte_experimental`` annotation from the function and move the symbol from
+the ``experimental`` node to the node of the next major ABI version, as follows.
+
+.. code-block:: c
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+        ...
+ }
+
+We then update the map file, adding the symbol ``rte_acl_create`` to the ``v21``
+version node.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   DPDK_21 {
+        global:
+
+        rte_acl_create;
+   } DPDK_20;
+
+
+Although there are strictly no guarantees or commitments associated with
+:ref:`experimental symbols <experimental_apis>`, a maintainer may wish to offer
+an alias to experimental. The process to add an alias to experimental is
+similar to the symbol versioning process. Assuming we have an experimental
+symbol as before, we now add the symbol to both the ``experimental`` and ``v21``
+version nodes.
+
+.. code-block:: c
+
+ #include <rte_compat.h>;
+ #include <rte_function_versioning.h>;
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+ ...
+ }
+
+ __rte_experimental
+ struct rte_acl_ctx *
+ rte_acl_create_e(const struct rte_acl_param *param)
+ {
+    return rte_acl_create(param);
+ }
+ VERSION_SYMBOL_EXPERIMENTAL(rte_acl_create, _e);
+
+ struct rte_acl_ctx *
+ rte_acl_create_v21(const struct rte_acl_param *param)
+ {
+    return rte_acl_create(param);
+ }
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+
+
+In the map file, we map the symbol to both the experimental and ``v21`` version
+nodes.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   DPDK_21 {
+        global:
+
+        rte_acl_create;
+   } DPDK_20;
+
+   EXPERIMENTAL {
+        global:
+
+        rte_acl_create;
+   };
+
+.. note::
+
+   Please note, similar to :ref:`symbol versioning <example_abi_macro_usage>`,
+   when aliasing to experimental you will also need to take care of
+   :ref:`mapping static symbols <mapping_static_symbols>`.
+
+
 .. _abi_decprecation:

 Deprecating part of a public API
diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
index b9f862d..f588f26 100644
--- a/lib/librte_eal/include/rte_function_versioning.h
+++ b/lib/librte_eal/include/rte_function_versioning.h
@@ -47,6 +47,14 @@
 #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))

 /*
+ * VERSION_SYMBOL_EXPERIMENTAL
+ * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
+ * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
+ * to provide an alias to experimental for some time.
+ */
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
+
+/*
  * BIND_DEFAULT_SYMBOL
  * Creates a symbol version entry instructing the linker to bind references to
  * symbol <b> to the internal symbol <b><e>
@@ -79,6 +87,7 @@
  * No symbol versioning in use
  */
 #define VERSION_SYMBOL(b, e, n)
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e)
 #define __vsym
 #define BIND_DEFAULT_SYMBOL(b, e, n)
 #define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))
--
2.7.4

^ permalink raw reply	[relevance 12%]

* [dpdk-dev] [PATCH v5] abi: provide experimental alias of API for old apps
  2020-05-14 16:11  4% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
@ 2020-05-15 14:36 12%   ` Ray Kinsella
  2020-05-15 15:01 12%   ` [dpdk-dev] [PATCH v6] " Ray Kinsella
  2020-05-17 19:52  0%   ` [dpdk-dev] [PATCH v4] meter: " Dumitrescu, Cristian
  2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-15 14:36 UTC (permalink / raw)
  To: dev
  Cc: Ferruh Yigit, Ray Kinsella, Neil Horman, Thomas Monjalon,
	Luca Boccassi, David Marchand, Bruce Richardson, Ian Stokes,
	Eelco Chaudron, Andrzej Ostruszka, Kevin Traynor, John McNamara,
	Marko Kovacevic, Cristian Dumitrescu

From: Ferruh Yigit <ferruh.yigit@intel.com>

On v20.02 some APIs matured and their symbols moved from the EXPERIMENTAL
to the DPDK_20.0.1 block.

This had the effect of breaking applications that were using these
APIs on v19.11. Although there is no modification of the APIs and the
action is positive and matures the APIs, the effect on applications
can be negative.

When a maintainer is promoting an API to become part of the next major
ABI version by removing the experimental tag, the maintainer may
choose to offer an alias to the experimental tag, to prevent these
breakages in the future.

The following changes are made to enable aliasing:

Updated the ABI policy and ABI versioning documents.

Created VERSION_SYMBOL_EXPERIMENTAL helper macro.

Updated the 'check-symbols.sh' buildtool, which was complaining when a
symbol is in the EXPERIMENTAL tag in the .map file but is not in the
.experimental section (the __rte_experimental tag is missing).
The tool is updated so that it won't complain if the symbol in the
EXPERIMENTAL tag is duplicated in some other (versioned) block of the
.map file.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---

Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Luca Boccassi <bluca@debian.org>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ian Stokes <ian.stokes@intel.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Andrzej Ostruszka <amo@semihalf.com>
Cc: Ray Kinsella <mdr@ashroe.eu>
CC: Kevin Traynor <ktraynor@redhat.com>

v2:
* Commit log updated

v3:
* added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro

v4:
* update script name in commit log, remove empty line

v5:
* incorporate policy and version doc changes
* remove changes to librte_meter

 buildtools/check-symbols.sh                      |   3 +-
 doc/guides/contributing/abi_policy.rst           |  10 ++
 doc/guides/contributing/abi_versioning.rst       | 158 ++++++++++++++++++++++-
 lib/librte_eal/include/rte_function_versioning.h |   9 ++
 4 files changed, 178 insertions(+), 2 deletions(-)

diff --git a/buildtools/check-symbols.sh b/buildtools/check-symbols.sh
index 3df57c3..e407553 100755
--- a/buildtools/check-symbols.sh
+++ b/buildtools/check-symbols.sh
@@ -26,7 +26,8 @@ ret=0
 for SYM in `$LIST_SYMBOL -S EXPERIMENTAL $MAPFILE |cut -d ' ' -f 3`
 do
 	if grep -q "\.text.*[[:space:]]$SYM$" $DUMPFILE &&
-		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE
+		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE &&
+		$LIST_SYMBOL -s $SYM $MAPFILE | grep -q EXPERIMENTAL
 	then
 		cat >&2 <<- END_OF_MESSAGE
 		$SYM is not flagged as experimental
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index 86e7dd9..c33bff1 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -160,6 +160,11 @@ The requirements for changing the ABI are:
      ``experimental``, as described in the section on :ref:`Experimental APIs
      and Libraries <experimental_apis>`.

+   - In situations in which an ``experimental`` symbol has been stable for some
+     time, when promoting the symbol to become part of the next ABI version, the
+     maintainer may choose to provide an alias to the ``experimental`` tag, so
+     as not to break consuming applications.
+
 #. If a newly proposed API functionally replaces an existing one, when the new
    API becomes non-experimental, then the old one is marked with
    ``__rte_deprecated``.
@@ -318,6 +323,11 @@ not required. Though, an API should remain in experimental state for at least
 one release. Thereafter, the normal process of posting patch for review to
 mailing list can be followed.

+After the experimental tag has been formally removed, a tree/sub-tree maintainer
+may choose to offer an alias to the experimental tag so as not to break
+applications using the symbol. The alias is then dropped at the declaration
+of the next major ABI version.
+
 Libraries
 ~~~~~~~~~

diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 7065979..27b5231 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -156,6 +156,11 @@ The macros exported are:
   ``be`` to signal that it is being used as an implementation of a particular
   version of symbol ``b``.

+* ``VERSION_SYMBOL_EXPERIMENTAL(b, e)``: Creates a symbol version table entry
+  binding versioned symbol ``b@EXPERIMENTAL`` to the internal function ``be``.
+  The macro is used when a symbol matures to become part of the stable ABI, to
+  provide an alias to experimental for some time.
+
 .. _example_abi_macro_usage:

 Examples of ABI Macro use
@@ -361,7 +366,7 @@ and a new DPDK_21 version, used by future built applications.
 .. note::

    **Before you leave**, please take care to the review the sections on
-   :ref:`Mapping static symbols <mapping_static_symbols>`, :ref:`Enabling
+   :ref:`mapping static symbols <mapping_static_symbols>`, :ref:`enabling
    versioning macros <enabling_versioning_macros>` and :ref:`ABI deprecation
    <abi_decprecation>`.

@@ -415,6 +420,157 @@ at the start of the head of the file. This will indicate to the tool-chain to
 enable the function version macros when building. There is no corresponding
 directive required for the ``make`` build system.

+.. _aliasing_experimental_symbols:
+
+Aliasing experimental symbols
+_____________________________
+
+In situations in which an ``experimental`` symbol has been stable for some time
+and becomes a candidate for promotion to the stable ABI, the maintainer may
+choose, when promoting the symbol, to provide an alias to the ``experimental``
+symbol version, so as not to break consuming applications.
+
+The process to provide an alias to ``experimental`` is similar to that of
+:ref:`symbol versioning <example_abi_macro_usage>` described above. Assume we
+have an experimental function ``rte_acl_create`` as follows:
+
+.. code-block:: c
+
+ #include <rte_compat.h>;
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ __rte_experimental
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+ ...
+ }
+
+In the map file, experimental symbols are listed as part of the ``experimental``
+version node.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   EXPERIMENTAL {
+        global:
+
+        rte_acl_create;
+   };
+
+When we promote the symbol to the stable ABI, we simply strip the
+``__rte_experimental`` annotation from the function and move the symbol from
+the ``experimental`` node to the node of the next major ABI version, as follows.
+
+.. code-block:: c
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+        ...
+ }
+
+We then update the map file, adding the symbol ``rte_acl_create`` to the ``v21``
+version node.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   DPDK_21 {
+        global:
+
+        rte_acl_create;
+   } DPDK_20;
+
+
+Although there are strictly no guarantees or commitments associated with
+:ref:`experimental symbols <experimental_apis>`, a maintainer may wish to offer
+an alias to experimental. The process to add an alias to experimental is
+similar to the symbol versioning process. Assuming we have an experimental
+symbol as before, we now add the symbol to both the ``experimental`` and ``v21``
+version nodes.
+
+.. code-block:: c
+
+ #include <rte_compat.h>;
+ #include <rte_function_versioning.h>;
+
+ /*
+  * Create an acl context object for apps to
+  * manipulate
+  */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+ ...
+ }
+
+ __rte_experimental
+ struct rte_acl_ctx *
+ rte_acl_create_e(const struct rte_acl_param *param)
+ {
+    return rte_acl_create(param);
+ }
+ VERSION_SYMBOL_EXPERIMENTAL(rte_acl_create, _e);
+
+ struct rte_acl_ctx *
+ rte_acl_create_v21(const struct rte_acl_param *param)
+ {
+    return rte_acl_create(param);
+ }
+ BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
+
+
+In the map file, we map the symbol to both the experimental and ``v21`` version
+nodes.
+
+.. code-block:: none
+
+   DPDK_20 {
+        global:
+        ...
+
+        local: *;
+   };
+
+   DPDK_21 {
+        global:
+
+        rte_acl_create;
+   } DPDK_20;
+
+   EXPERIMENTAL {
+        global:
+
+        rte_acl_create;
+   };
+
+.. note::
+
+   Please note, similar to :ref:`symbol versioning <example_abi_macro_usage>`,
+   when aliasing to experimental you will also need to take care of
+   :ref:`mapping static symbols <mapping_static_symbols>`.
+
+
 .. _abi_decprecation:

 Deprecating part of a public API
diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
index b9f862d..f588f26 100644
--- a/lib/librte_eal/include/rte_function_versioning.h
+++ b/lib/librte_eal/include/rte_function_versioning.h
@@ -47,6 +47,14 @@
 #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))

 /*
+ * VERSION_SYMBOL_EXPERIMENTAL
+ * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
+ * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
+ * to provide an alias to experimental for some time.
+ */
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
+
+/*
  * BIND_DEFAULT_SYMBOL
  * Creates a symbol version entry instructing the linker to bind references to
  * symbol <b> to the internal symbol <b><e>
@@ -79,6 +87,7 @@
  * No symbol versioning in use
  */
 #define VERSION_SYMBOL(b, e, n)
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e)
 #define __vsym
 #define BIND_DEFAULT_SYMBOL(b, e, n)
 #define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))
--
2.7.4

^ permalink raw reply	[relevance 12%]

* Re: [dpdk-dev] [PATCH v3 01/12] common/dpaax: move internal symbols into INTERNAL section
  2020-05-15  9:26  0%                             ` Thomas Monjalon
@ 2020-05-15 11:19  5%                               ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2020-05-15 11:19 UTC (permalink / raw)
  To: techboard; +Cc: David Marchand, hemant.agrawal, Ray Kinsella, dev, nhorman

Adding a bit more definitions to better understand.

A "stable" library exports at least one symbol in the current stable ABI.
Its soname is suffixed with the current ABI version.
If the library exports no symbol in the current ABI, but has a symbol in
the next ABI, the soname is also suffixed with the current ABI version.

A "pure experimental" library exports only experimental symbols.
Its soname is suffixed with 0. and the stable ABI version.

A "pure internal" library has only internal symbols,
or no exported symbols at all, like in most PMDs.
Its soname is suffixed with the current ABI version.

An "experimental & internal" library exports experimental and internal
symbols, but none in current or next stable ABI.
We don't have such case yet.

I think the original intent was to use the suffix 0.x for libs
which export no stable ABI. But it is inconsistent currently.
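
For illustration (library names hypothetical, taking 21 as the current
stable ABI version), the definitions above give sonames such as:

	librte_foo.so.21	stable: exports a symbol in DPDK_21
	librte_bar.so.0.21	pure experimental: 0. plus the ABI version
	librte_baz.so.21	pure internal, or no exported symbols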

Please read the options below, and give your opinion.
Thanks

15/05/2020 11:26, Thomas Monjalon:
> Now the question is: what to do in v20.11?
> This question will have to be voted on by the Technical Board, which voted
> for the "pure experimental" versioning rule.
> We have 3 options:
> 
> a) "Pure internal" libs are versioned as "stable" libs,
> while "pure experimental" libs have version 0.x.
> It looks inconsistent and nonsensical.
> 
> b) "Pure internal" libs are versioned as
> "pure experimental" libs: version 0.x.
> It makes "pure internal" libs version decreasing in 20.11.
> 
> c) Forget about the different versioning scheme,
> i.e. increase the 0.x versions to x, as for "stable" libs.
> 
> Of course, I vote for option c.




^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v8 07/13] crypto: move internal symbols into INTERNAL section
                       ` (5 preceding siblings ...)
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 06/13] net/dpaa2: " Hemant Agrawal
@ 2020-05-15  9:47  3%   ` Hemant Agrawal
  2020-05-19 11:17  0%     ` Ray Kinsella
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_event.h             | 5 +++--
 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 6 ++++--
 drivers/crypto/dpaa_sec/dpaa_sec_event.h               | 8 ++++----
 drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map   | 6 ++++--
 4 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
index c779d5d837..675cbbb81d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
@@ -6,12 +6,13 @@
 #ifndef _DPAA2_SEC_EVENT_H_
 #define _DPAA2_SEC_EVENT_H_
 
-int
-dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event *event);
 
+__rte_internal
 int dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 5952d645fd..3d863aff4d 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaa2_sec_eventq_attach;
 	dpaa2_sec_eventq_detach;
-
-	local: *;
 };
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_event.h b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
index 8d1a018096..0b09fa8f75 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_event.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
@@ -6,14 +6,14 @@
 #ifndef _DPAA_SEC_EVENT_H_
 #define _DPAA_SEC_EVENT_H_
 
-int
-dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		uint16_t ch_id,
 		const struct rte_event *event);
 
-int
-dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
 #endif /* _DPAA_SEC_EVENT_H_ */
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index 8580fa13db..023e120516 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaa_sec_eventq_attach;
 	dpaa_sec_eventq_detach;
-
-	local: *;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8 06/13] net/dpaa2: move internal symbols into INTERNAL section
                       ` (4 preceding siblings ...)
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 05/13] net/dpaa: " Hemant Agrawal
@ 2020-05-15  9:47  3%   ` Hemant Agrawal
  2020-05-19 11:15  0%     ` Ray Kinsella
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 07/13] crypto: " Hemant Agrawal
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 ++
 drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +++++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 2c49a7f01f..c7fb6539ff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -164,11 +164,13 @@ int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 
 int dpaa2_attach_bp_list(struct dpaa2_dev_priv *priv, void *blist);
 
+__rte_internal
 int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id);
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index f2bb793319..b633fdc2a8 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,9 +1,4 @@
 DPDK_20.0 {
-	global:
-
-	dpaa2_eth_eventq_attach;
-	dpaa2_eth_eventq_detach;
-
 	local: *;
 };
 
@@ -14,3 +9,10 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_set_custom_hash;
 	rte_pmd_dpaa2_set_timestamp;
 };
+
+INTERNAL {
+	global:
+
+	dpaa2_eth_eventq_attach;
+	dpaa2_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8 05/13] net/dpaa: move internal symbols into INTERNAL section
                       ` (3 preceding siblings ...)
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: " Hemant Agrawal
@ 2020-05-15  9:47  3%   ` Hemant Agrawal
  2020-05-19 11:14  0%     ` Ray Kinsella
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 06/13] net/dpaa2: " Hemant Agrawal
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 07/13] crypto: " Hemant Agrawal
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore              | 2 ++
 drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
 drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 42f9469221..7b6358c394 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -63,3 +63,5 @@
 	name_regexp = ^rte_dpaa_bpid_info
 [suppress_variable]
 	name_regexp = ^rte_dpaa2_bpid_info
+[suppress_function]
+        name_regexp = ^dpaa
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index af9fc2105d..7393a9df05 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -160,12 +160,14 @@ struct dpaa_if_stats {
 	uint64_t tund;		/**<Tx Undersized */
 };
 
+__rte_internal
 int
 dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		u16 ch_id,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int
 dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 			   int eth_rx_queue_id);
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index f403a1526d..774aa0de45 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,9 +1,14 @@
 DPDK_20.0 {
 	global:
 
-	dpaa_eth_eventq_attach;
-	dpaa_eth_eventq_detach;
 	rte_pmd_dpaa_set_tx_loopback;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	dpaa_eth_eventq_attach;
+	dpaa_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: move internal symbols into INTERNAL section
                       ` (2 preceding siblings ...)
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 03/13] bus/dpaa: " Hemant Agrawal
@ 2020-05-15  9:47  3%   ` Hemant Agrawal
  2020-05-19 11:03  0%     ` Ray Kinsella
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 05/13] net/dpaa: " Hemant Agrawal
                     ` (2 subsequent siblings)
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore                        | 8 ++++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 6 ++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
 drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
 4 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ab34302d0c..42f9469221 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -55,3 +55,11 @@
 	file_name_regexp = ^librte_bus_fslmc\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_dpaa\.
+[suppress_function]
+	name = rte_dpaa2_mbuf_alloc_bulk
+[suppress_variable]
+	name_regexp = ^rte_dpaa_memsegs
+[suppress_variable]
+	name_regexp = ^rte_dpaa_bpid_info
+[suppress_variable]
+	name_regexp = ^rte_dpaa2_bpid_info
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 9eebaf7ffd..89d7cf4957 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	rte_dpaa_bpid_info;
 	rte_dpaa_memsegs;
-
-	local: *;
 };
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
index fa0f2280d5..53fa1552d1 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
@@ -61,6 +61,7 @@ struct dpaa2_bp_info {
 
 extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
 
+__rte_internal
 int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 		       void **obj_table, unsigned int count);
 
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index cd4bc88273..686b024624 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,10 +1,15 @@
 DPDK_20.0 {
 	global:
 
-	rte_dpaa2_bpid_info;
-	rte_dpaa2_mbuf_alloc_bulk;
 	rte_dpaa2_mbuf_from_buf_addr;
 	rte_dpaa2_mbuf_pool_bpid;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	rte_dpaa2_bpid_info;
+	rte_dpaa2_mbuf_alloc_bulk;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8 03/13] bus/dpaa: move internal symbols into INTERNAL section
    2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 02/13] bus/fslmc: " Hemant Agrawal
@ 2020-05-15  9:47  1%   ` Hemant Agrawal
  2020-05-19 10:56  0%     ` Ray Kinsella
  2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: " Hemant Agrawal
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

This patch also removes two symbols which are not
to be exported:
rte_dpaa_mem_ptov - a static inline function in the header file
fman_ccsr_map_fd - a local shared variable.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore              |  2 ++
 drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
 drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |  8 +++++-
 drivers/bus/dpaa/include/netcfg.h         |  2 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  8 +++---
 drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
 8 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 877c6d5be8..ab34302d0c 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -53,3 +53,5 @@
 	file_name_regexp = ^librte_common_dpaax\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_fslmc\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_dpaa\.
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index f9cd972153..82da2fcfe0 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
  * the structure provided by the caller can be released or reused after the
  * function returns.
  */
+__rte_internal
 struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
 
 /**
  * bman_free_pool - Deallocates a Buffer Pool object
  * @pool: the pool object to release
  */
+__rte_internal
 void bman_free_pool(struct bman_pool *pool);
 
 /**
@@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
  * The returned pointer refers to state within the pool object so must not be
  * modified and can no longer be read once the pool object is destroyed.
  */
+__rte_internal
 const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 
 /**
@@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
  * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
  *
  */
+__rte_internal
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
  * The return value will be the number of buffers obtained from the pool, or a
  * negative error code if a h/w error or pool starvation was encountered.
  */
+__rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
  *
  * Return the number of the free buffers
  */
+__rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
 /**
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 5705ebfdce..6c87c8db0d 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_FMAN_H
 #define __FSL_FMAN_H
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -43,18 +45,23 @@ struct fm_status_t {
 } __rte_packed;
 
 /* Set MAC address for a particular interface */
+__rte_internal
 int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
 
 /* Remove a MAC address for a particular interface */
+__rte_internal
 void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
 
 /* Get the FMAN statistics */
+__rte_internal
 void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
 
 /* Reset the FMAN statistics */
+__rte_internal
 void fman_if_stats_reset(struct fman_if *p);
 
 /* Get all of the FMAN statistics */
+__rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
 /* Set ignore pause option for a specific interface */
@@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
 void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
 
 /* Enable/disable Rx promiscuous mode on specified interface */
+__rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
+__rte_internal
 void fman_if_promiscuous_disable(struct fman_if *p);
 
 /* Enable/disable Rx on specific interfaces */
+__rte_internal
 void fman_if_enable_rx(struct fman_if *p);
+__rte_internal
 void fman_if_disable_rx(struct fman_if *p);
 
 /* Enable/disable loopback on specific interfaces */
+__rte_internal
 void fman_if_loopback_enable(struct fman_if *p);
+__rte_internal
 void fman_if_loopback_disable(struct fman_if *p);
 
 /* Set buffer pool on specific interface */
+__rte_internal
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
 /* Get Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_get_fc_threshold(struct fman_if *fm_if);
 
 /* Enable and Set Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_set_fc_threshold(struct fman_if *fm_if,
 			u32 high_water, u32 low_water, u32 bpid);
 
 /* Get Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
 /* Set Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 
 /* Set default error fqid on specific interface */
@@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
 
 /* Set IC transfer params */
+__rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
 /* Get interface fd->offset value */
+__rte_internal
 int fman_if_get_fdoff(struct fman_if *fm_if);
 
 /* Set interface fd->offset value */
+__rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
 
 /* Get interface SG enable status value */
+__rte_internal
 int fman_if_get_sg_enable(struct fman_if *fm_if);
 
 /* Set interface SG support mode */
+__rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
 /* Get interface Max Frame length (MTU) */
 uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
 
 /* Set interface  Max Frame length (MTU) */
+__rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
 /* Set interface next invoked action for dequeue operation */
 void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
 
 /* discard error packets on rx */
+__rte_internal
 void fman_if_discard_rx_errors(struct fman_if *fm_if);
 
+__rte_internal
 void fman_if_set_mcast_filter_table(struct fman_if *p);
 
+__rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
 int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 1b3342e7e6..4411bb0a79 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1314,6 +1314,7 @@ struct qman_cgr {
 #define QMAN_CGR_MODE_FRAME          0x00000001
 
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+__rte_internal
 void qman_set_fq_lookup_table(void **table);
 #endif
 
@@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
  */
 int qman_get_portal_index(void);
 
+__rte_internal
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs);
 
@@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
  * processed via qman_poll_***() functions). Returns zero for success, or
  * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_add(u32 bits);
 
 /**
@@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 
 /**
@@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
  * instead be processed via qman_poll_***() functions. Returns zero for success,
  * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_remove(u32 bits);
 
 /**
@@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
 /**
@@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
  */
 u16 qman_affine_channel(int cpu);
 
+__rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
 
@@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
  *
  * This function will issue a volatile dequeue command to the QMAN.
  */
+__rte_internal
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 
 /**
@@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
  * is issued. It will keep returning NULL until there is no packet available on
  * the DQRR.
  */
+__rte_internal
 struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 
 /**
@@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 * This will consume the DQRR entry and make it available for the next volatile
  * dequeue.
  */
+__rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
@@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
  * this function will return -EINVAL, otherwise the return value is >=0 and
  * represents the number of DQRR entries processed.
  */
+__rte_internal
 int qman_poll_dqrr(unsigned int limit);
 
 /**
@@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
  * (SDQCR). The requested pools are limited to those the portal has dequeue
  * access to.
  */
+__rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
@@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
  * function must be called from the same CPU as that which processed the DQRR
  * entry in the first place.
  */
+__rte_internal
 void qman_dca_index(u8 index, int park_request);
 
 /**
@@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
  * a frame queue object based on that, rather than assuming/requiring that it be
  * Out of Service.
  */
+__rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
 /**
@@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
  * qman_fq_fqid - Queries the frame queue ID of a FQ object
  * @fq: the frame queue object to query
  */
+__rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
 /**
@@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
  * This captures the state, as seen by the driver, at the time the function
  * executes.
  */
+__rte_internal
 void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 
 /**
@@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
  * context_a.address fields and will leave the stashing fields provided by the
  * user alone, otherwise it will zero out the context_a.stashing fields.
  */
+__rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
 /**
@@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
  * caller should be prepared to accept the callback as the function is called,
  * not only once it has returned.
  */
+__rte_internal
 int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 
 /**
@@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
  * The frame queue must be retired and empty, and if any order restoration list
  * was released as ERNs at the time of retirement, they must all be consumed.
  */
+__rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
 /**
@@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
  * @fq: the frame queue object to be queried
  * @np: storage for the queried FQD fields
  */
+__rte_internal
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 
 /**
@@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
  * @fq: the frame queue object to be queried
  * @frm_cnt: number of frames in the queue
  */
+__rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
 /**
@@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
  * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
  * "flags" retrieved from qman_fq_state().
  */
+__rte_internal
 int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 
 /**
@@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
  * of an already busy hardware resource by throttling many of the to-be-dropped
  * enqueues "at the source".
  */
+__rte_internal
 int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 
+__rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
@@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
  * This API is similar to qman_enqueue_multi(), but it takes fd which needs
  * to be processed by different frame queues.
  */
+__rte_internal
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		      u32 *flags, int frames_to_send);
@@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
  * @fqid: the base FQID of the range to deallocate
  * @count: the number of FQIDs in the range
  */
+__rte_internal
 int qman_reserve_fqid_range(u32 fqid, unsigned int count);
 static inline int qman_reserve_fqid(u32 fqid)
 {
@@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_pool(u32 *result)
 {
@@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
 * any unspecified parameters) will be used rather than a modify hw command
  * (which only modifies the specified parameters).
  */
+__rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
 * is executed. This must be executed on the same affine portal on which it was
  * created.
  */
+__rte_internal
 int qman_delete_cgr(struct qman_cgr *cgr);
 
 /**
@@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
 * unspecified parameters) will be used rather than a modify hw command (which
  * only modifies the specified parameters).
  */
+__rte_internal
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_cgrid(u32 *result)
 {
@@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
  * @id: the base CGR ID of the range to deallocate
  * @count: the number of CGR IDs in the range
  */
+__rte_internal
 void qman_release_cgrid_range(u32 id, unsigned int count);
 static inline void qman_release_cgrid(u32 id)
 {
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 263d9bb976..dcf35e4adb 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
 /* Obtain thread-local UIO file-descriptors */
+__rte_internal
 int qman_thread_fd(void);
 int bman_thread_fd(void);
 
@@ -66,10 +67,14 @@ int bman_thread_fd(void);
  * processing is complete. As such, it is essential to call this before going
  * into another blocking read/select/poll.
  */
+__rte_internal
 void qman_thread_irq(void);
+
+__rte_internal
 void bman_thread_irq(void);
+__rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
-
+__rte_internal
 void qman_clear_irq(void);
 
 /* Global setup */
@@ -77,6 +82,7 @@ int qman_global_init(void);
 int bman_global_init(void);
 
 /* Direct portal create and destroy */
+__rte_internal
 struct qman_portal *fsl_qman_fq_portal_create(int *fd);
 int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
 int fsl_qman_fq_portal_init(struct qman_portal *qp);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index bf7bfae8cb..d7d1befd24 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -46,11 +46,13 @@ struct netcfg_interface {
  * cfg_file: FMC config XML file
  * Returns the configuration information in newly allocated memory.
  */
+__rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
 /* cfg_ptr: configuration information pointer.
  * Frees the resources allocated by the configuration layer.
  */
+__rte_internal
 void netcfg_release(struct netcfg_info *cfg_ptr);
 
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e0..53732289d3 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,8 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	bman_acquire;
@@ -13,7 +17,6 @@ DPDK_20.0 {
 	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	dpaa_svr_family;
-	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
 	fman_dealloc_bufs_mask_lo;
 	fman_if_add_mac_addr;
@@ -87,10 +90,7 @@ DPDK_20.0 {
 	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
-	rte_dpaa_mem_ptov;
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
-
-	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca9785..d4aee132ef 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
  *   A pointer to a rte_dpaa_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
 /**
@@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  *	A pointer to a rte_dpaa_driver structure describing the driver
  *	to be unregistered.
  */
+__rte_internal
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
 /**
@@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
  * @return
  *	0 in case of success, error otherwise
  */
+__rte_internal
 int rte_dpaa_portal_init(void *arg);
 
+__rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
+__rte_internal
 int rte_dpaa_portal_fq_close(struct qman_fq *fq);
 
 /**
-- 
2.17.1
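
A sketch of the in-tree registration path that keeps
rte_dpaa_driver_register exported (now under INTERNAL). The constructor
wrapper and driver object below are hypothetical; only the registration
call comes from the header above:

	/* each DPAA PMD registers itself from a constructor */
	static struct rte_dpaa_driver example_dpaa_drv; /* fields omitted */

	RTE_INIT(example_dpaa_drv_init)
	{
		rte_dpaa_driver_register(&example_dpaa_drv);
	}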


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v8 02/13] bus/fslmc: move internal symbols into INTERNAL section
    2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
@ 2020-05-15  9:47  1%   ` Hemant Agrawal
  2020-05-19 10:00  0%     ` Ray Kinsella
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 03/13] bus/dpaa: " Hemant Agrawal
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

This patch also removes two symbols that were not used
anywhere else, i.e. rte_fslmc_vfio_dmamap and dpaa2_get_qbman_swp.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
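For reviewers unfamiliar with the tag, a minimal sketch of what
__rte_internal expands to, assuming the rte_compat.h definition of this
period (the exact attribute wording is an assumption, quoted from
memory):

	/* rte_compat.h (sketch): the DPDK build defines ALLOW_INTERNAL_API
	 * for in-tree components, so only out-of-tree callers of a tagged
	 * symbol fail at compile time with "Symbol is not public ABI".
	 */
	#ifdef ALLOW_INTERNAL_API
	#define __rte_internal \
		__attribute__((section(".text.internal")))
	#else
	#define __rte_internal \
		__attribute__((error("Symbol is not public ABI"), \
		section(".text.internal")))
	#endif
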
 devtools/libabigail.abignore                  |  2 +
 drivers/bus/fslmc/fslmc_vfio.h                |  5 +++
 drivers/bus/fslmc/mc/fsl_dpbp.h               |  7 ++++
 drivers/bus/fslmc/mc/fsl_dpci.h               |  3 ++
 drivers/bus/fslmc/mc/fsl_dpcon.h              |  2 +
 drivers/bus/fslmc/mc/fsl_dpdmai.h             | 10 +++++
 drivers/bus/fslmc/mc/fsl_dpio.h               | 11 +++++
 drivers/bus/fslmc/mc/fsl_dpmng.h              |  4 ++
 drivers/bus/fslmc/mc/fsl_mc_cmd.h             |  2 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  5 +++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  8 ++++
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  8 ++++
 .../fslmc/qbman/include/fsl_qbman_portal.h    | 42 +++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map   | 20 ++++-----
 drivers/bus/fslmc/rte_fslmc.h                 |  4 ++
 15 files changed, 123 insertions(+), 10 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index b1488d5549..877c6d5be8 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -51,3 +51,5 @@
 ; Ignore moving DPAAx stable functions to INTERNAL tag
 [suppress_file]
 	file_name_regexp = ^librte_common_dpaax\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_fslmc\.
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index c988121294..bc7c6f62d7 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -8,6 +8,7 @@
 #ifndef _FSLMC_VFIO_H_
 #define _FSLMC_VFIO_H_
 
+#include <rte_compat.h>
 #include <rte_vfio.h>
 
 /* Pathname of FSL-MC devices directory. */
@@ -41,7 +42,11 @@ typedef struct fslmc_vfio_container {
 } fslmc_vfio_container;
 
 extern char *fslmc_container;
+
+__rte_internal
 int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index);
+
+__rte_internal
 int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index);
 
 int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
index 9d405b42c4..0d590a2647 100644
--- a/drivers/bus/fslmc/mc/fsl_dpbp.h
+++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
@@ -7,6 +7,7 @@
 #ifndef __FSL_DPBP_H
 #define __FSL_DPBP_H
 
+#include <rte_compat.h>
 /*
  * Data Path Buffer Pool API
  * Contains initialization APIs and runtime control APIs for DPBP
@@ -14,6 +15,7 @@
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpbp_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpbp_id,
@@ -42,10 +44,12 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t obj_id);
 
+__rte_internal
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -55,6 +59,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -70,6 +75,7 @@ struct dpbp_attr {
 	uint16_t bpid;
 };
 
+__rte_internal
 int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -88,6 +94,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
index a0ee5bfe69..81fd3438aa 100644
--- a/drivers/bus/fslmc/mc/fsl_dpci.h
+++ b/drivers/bus/fslmc/mc/fsl_dpci.h
@@ -181,6 +181,7 @@ struct dpci_rx_queue_cfg {
 	int order_preservation_en;
 };
 
+__rte_internal
 int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
@@ -228,6 +229,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -235,6 +237,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint8_t options,
 		 struct opr_cfg *cfg);
 
+__rte_internal
 int dpci_get_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index af81d51195..7caa6c68a1 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -20,6 +20,7 @@ struct fsl_mc_io;
  */
 #define DPCON_INVALID_DPIO_ID		(int)(-1)
 
+__rte_internal
 int dpcon_open(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       int dpcon_id,
@@ -77,6 +78,7 @@ struct dpcon_attr {
 	uint8_t num_priorities;
 };
 
+__rte_internal
 int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint32_t cmd_flags,
 			 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 40469cc139..19328c00a0 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -5,6 +5,8 @@
 #ifndef __FSL_DPDMAI_H
 #define __FSL_DPDMAI_H
 
+#include <rte_compat.h>
+
 struct fsl_mc_io;
 
 /* Data Path DMA Interface API
@@ -23,11 +25,13 @@ struct fsl_mc_io;
  */
 #define DPDMAI_ALL_QUEUES	(uint8_t)(-1)
 
+__rte_internal
 int dpdmai_open(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		int dpdmai_id,
 		uint16_t *token);
 
+__rte_internal
 int dpdmai_close(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -54,10 +58,12 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint32_t object_id);
 
+__rte_internal
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
 
+__rte_internal
 int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token);
@@ -82,6 +88,7 @@ struct dpdmai_attr {
 	uint8_t num_of_queues;
 };
 
+__rte_internal
 int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
 			  uint32_t cmd_flags,
 			  uint16_t token,
@@ -148,6 +155,7 @@ struct dpdmai_rx_queue_cfg {
 
 };
 
+__rte_internal
 int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -168,6 +176,7 @@ struct dpdmai_rx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -184,6 +193,7 @@ struct dpdmai_tx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index 3158f53191..c2db76bdf8 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -7,17 +7,21 @@
 #ifndef __FSL_DPIO_H
 #define __FSL_DPIO_H
 
+#include <rte_compat.h>
+
 /* Data Path I/O Portal API
  * Contains initialization APIs and runtime control APIs for DPIO
  */
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpio_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpio_id,
 	      uint16_t *token);
 
+__rte_internal
 int dpio_close(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -57,10 +61,12 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t object_id);
 
+__rte_internal
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -70,10 +76,12 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
 
+__rte_internal
 int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -84,12 +92,14 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
 				    uint16_t token,
 				    int dpcon_id,
 				    uint8_t *channel_index);
 
+__rte_internal
 int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				       uint32_t cmd_flags,
 				       uint16_t token,
@@ -119,6 +129,7 @@ struct dpio_attr {
 	uint32_t clk;
 };
 
+__rte_internal
 int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 36c387af27..8764ceaed9 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_DPMNG_H
 #define __FSL_DPMNG_H
 
+#include <rte_compat.h>
+
 /*
  * Management Complex General API
  * Contains general API for the Management Complex firmware
@@ -34,6 +36,7 @@ struct mc_version {
 	uint32_t revision;
 };
 
+__rte_internal
 int mc_get_version(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   struct mc_version *mc_ver_info);
@@ -48,6 +51,7 @@ struct mc_soc_version {
 	uint32_t pvr;
 };
 
+__rte_internal
 int mc_get_soc_version(struct fsl_mc_io *mc_io,
 		       uint32_t cmd_flags,
 		       struct mc_soc_version *mc_platform_info);
diff --git a/drivers/bus/fslmc/mc/fsl_mc_cmd.h b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
index ac919610cf..7c0ca6b73a 100644
--- a/drivers/bus/fslmc/mc/fsl_mc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
@@ -7,6 +7,7 @@
 #ifndef __FSL_MC_CMD_H
 #define __FSL_MC_CMD_H
 
+#include <rte_compat.h>
 #include <rte_byteorder.h>
 #include <stdint.h>
 
@@ -80,6 +81,7 @@ enum mc_cmd_status {
 
 #define MC_CMD_HDR_FLAGS_MASK	0xFF00FF00
 
+__rte_internal
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd);
 
 static inline uint64_t mc_encode_cmd_header(uint16_t cmd_id,
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c93806..7c5966241a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -36,20 +36,25 @@ extern uint8_t dpaa2_eqcr_size;
 extern struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
 
 /* Affine a DPIO portal to current processing thread */
+__rte_internal
 int dpaa2_affine_qbman_swp(void);
 
 /* Affine additional DPIO portal to current crypto processing thread */
+__rte_internal
 int dpaa2_affine_qbman_ethrx_swp(void);
 
 /* allocate memory for FQ - dq storage */
+__rte_internal
 int
 dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free memory for FQ- dq storage */
+__rte_internal
 void
 dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free the enqueue response descriptors */
+__rte_internal
 uint32_t
 dpaa2_free_eq_descriptors(void);
 
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 368fe7c688..33b191f823 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -426,11 +426,19 @@ void set_swp_active_dqs(uint16_t dpio_index, struct qbman_result *dqs)
 {
 	rte_global_active_dqs_list[dpio_index].global_active_dqs = dqs;
 }
+__rte_internal
 struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
+
+__rte_internal
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
+
+__rte_internal
 int dpaa2_dpbp_supported(void);
 
+__rte_internal
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
+
+__rte_internal
 void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci);
 
 #endif
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index e010b1b6ae..f0c2f9fcb3 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -1,6 +1,10 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2015 Freescale Semiconductor, Inc.
  */
+#ifndef _FSL_QBMAN_DEBUG_H
+#define _FSL_QBMAN_DEBUG_H
+
+#include <rte_compat.h>
 
 struct qbman_swp;
 
@@ -24,7 +28,11 @@ uint8_t verb;
 	uint8_t reserved2[29];
 };
 
+__rte_internal
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r);
+
+__rte_internal
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
 uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
+#endif
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index 88f0a99686..f820077d2b 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -7,6 +7,7 @@
 #ifndef _FSL_QBMAN_PORTAL_H
 #define _FSL_QBMAN_PORTAL_H
 
+#include <rte_compat.h>
 #include <fsl_qbman_base.h>
 
 #define SVR_LS1080A	0x87030000
@@ -117,6 +118,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
  * @p: the given software portal object.
  * @mask: The value to set in SWP_ISR register.
  */
+__rte_internal
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
 
 /**
@@ -286,6 +288,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
  * rather by specifying the index (from 0 to 15) that has been mapped to the
  * desired channel.
  */
+__rte_internal
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable);
 
 /* ------------------- */
@@ -325,6 +328,7 @@ enum qbman_pull_type_e {
  * default/starting state.
  * @d: the pull dequeue descriptor to be cleared.
  */
+__rte_internal
 void qbman_pull_desc_clear(struct qbman_pull_desc *d);
 
 /**
@@ -340,6 +344,7 @@ void qbman_pull_desc_clear(struct qbman_pull_desc *d);
  * the caller provides in 'storage_phys'), and 'stash' controls whether or not
  * those writes to main-memory express a cache-warming attribute.
  */
+__rte_internal
 void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 				 struct qbman_result *storage,
 				 uint64_t storage_phys,
@@ -349,6 +354,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
  * @d: the pull dequeue descriptor to be set.
  * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
  */
+__rte_internal
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes);
 /**
@@ -372,6 +378,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
  * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
  * @fqid: the frame queue index of the given FQ.
  */
+__rte_internal
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
 
 /**
@@ -407,6 +414,7 @@ void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
  * Return 0 for success, and -EBUSY if the software portal is not ready
  * to do pull dequeue.
  */
+__rte_internal
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 /* -------------------------------- */
@@ -421,12 +429,14 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
  * only once, so repeated calls can return a sequence of DQRR entries, without
  * requiring they be consumed immediately or in any particular order.
  */
+__rte_internal
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p);
 
 /**
  * qbman_swp_prefetch_dqrr_next() - prefetch the next DQRR entry.
  * @s: the software portal object.
  */
+__rte_internal
 void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
 
 /**
@@ -435,6 +445,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
  * @s: the software portal object.
  * @dq: the DQRR entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
 
 /**
@@ -442,6 +453,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
  * @s: the software portal object.
  * @dqrr_index: the DQRR index entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
 
 /**
@@ -450,6 +462,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
  *
  * Return dqrr index.
  */
+__rte_internal
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
 
 /**
@@ -460,6 +473,7 @@ uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
  *
  * Return dqrr entry object.
  */
+__rte_internal
 struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
 
 /* ------------------------------------------------- */
@@ -485,6 +499,7 @@ struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_result_has_new_result(struct qbman_swp *s,
 				struct qbman_result *dq);
 
@@ -497,8 +512,10 @@ int qbman_result_has_new_result(struct qbman_swp *s,
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_check_command_complete(struct qbman_result *dq);
 
+__rte_internal
 int qbman_check_new_result(struct qbman_result *dq);
 
 /* -------------------------------------------------------- */
@@ -624,6 +641,7 @@ int qbman_result_is_FQPN(const struct qbman_result *dq);
  *
  * Return the state field.
  */
+__rte_internal
 uint8_t qbman_result_DQ_flags(const struct qbman_result *dq);
 
 /**
@@ -658,6 +676,7 @@ static inline int qbman_result_DQ_is_pull_complete(
  *
  * Return seqnum.
  */
+__rte_internal
 uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
 
 /**
@@ -667,6 +686,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
  *
  * Return odpid.
  */
+__rte_internal
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
 
 /**
@@ -699,6 +719,7 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
  *
  * Return the frame queue context.
  */
+__rte_internal
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
 
 /**
@@ -707,6 +728,7 @@ uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
  *
  * Return the frame descriptor.
  */
+__rte_internal
 const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
 
 /* State-change notifications (FQDAN/CDAN/CSCN/...). */
@@ -717,6 +739,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
  *
 * Return the state in the notification.
  */
+__rte_internal
 uint8_t qbman_result_SCN_state(const struct qbman_result *scn);
 
 /**
@@ -850,6 +873,7 @@ struct qbman_eq_response {
  * default/starting state.
  * @d: the given enqueue descriptor.
  */
+__rte_internal
 void qbman_eq_desc_clear(struct qbman_eq_desc *d);
 
 /* Exactly one of the following descriptor "actions" should be set. (Calling
@@ -870,6 +894,7 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d);
  * @response_success: 1 = enqueue with response always; 0 = enqueue with
  * rejections returned on a FQ.
  */
+__rte_internal
 void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
 /**
 * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
@@ -881,6 +906,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
 * @incomplete: indicates whether this is the last fragment using the same
 * sequence number.
  */
+__rte_internal
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete);
 
@@ -915,6 +941,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
  * data structure.) 'stash' controls whether or not the write to main-memory
  * expresses a cache-warming attribute.
  */
+__rte_internal
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				uint64_t storage_phys,
 				int stash);
@@ -929,6 +956,7 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
  * result "storage" before issuing an enqueue, and use any non-zero 'token'
  * value.
  */
+__rte_internal
 void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
 
 /**
@@ -944,6 +972,7 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
  * @d: the enqueue descriptor
  * @fqid: the id of the frame queue to be enqueued.
  */
+__rte_internal
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
 
 /**
@@ -953,6 +982,7 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
  * @qd_bin: the queuing destination bin
  * @qd_prio: the queuing destination priority.
  */
+__rte_internal
 void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
 			  uint16_t qd_bin, uint8_t qd_prio);
 
@@ -978,6 +1008,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
  * held-active (order-preserving) FQ, whether the FQ should be parked instead of
  * being rescheduled.)
  */
+__rte_internal
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park);
 
@@ -987,6 +1018,7 @@ void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
  *
  * Return the fd pointer.
  */
+__rte_internal
 struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
 
 /**
@@ -997,6 +1029,7 @@ struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
 * This value is set into the response id before the enqueue command, which
 * gets overwritten by qbman once the enqueue command is complete.
  */
+__rte_internal
 void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
 
 /**
@@ -1009,6 +1042,7 @@ void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
  * copied into the enqueue response to determine if the command has been
 * completed, and the response has been updated.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
 
 /**
@@ -1017,6 +1051,7 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
  *
 * Return 0 when the command is successful.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
 
 /**
@@ -1043,6 +1078,7 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 			       const struct qbman_eq_desc *d,
 			       const struct qbman_fd *fd,
@@ -1060,6 +1096,7 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 				  const struct qbman_eq_desc *d,
 				  struct qbman_fd **fd,
@@ -1076,6 +1113,7 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 				    const struct qbman_eq_desc *d,
 				    const struct qbman_fd *fd,
@@ -1117,12 +1155,14 @@ struct qbman_release_desc {
  * default/starting state.
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_clear(struct qbman_release_desc *d);
 
 /**
  * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
 
 /**
@@ -1141,6 +1181,7 @@ void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
  *
  * Return 0 for success, -EBUSY if the release command ring is not ready.
  */
+__rte_internal
 int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
 		      const uint64_t *buffers, unsigned int num_buffers);
 
@@ -1166,6 +1207,7 @@ int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh);
  * Return 0 for success, or negative error code if the acquire command
  * fails.
  */
+__rte_internal
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers);
 
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index fe45575046..1b7a5a45e9 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,4 +1,14 @@
 DPDK_20.0 {
+	local: *;
+};
+
+EXPERIMENTAL {
+	global:
+
+	rte_fslmc_vfio_mem_dmamap;
+};
+
+INTERNAL {
 	global:
 
 	dpaa2_affine_qbman_ethrx_swp;
@@ -11,7 +21,6 @@ DPDK_20.0 {
 	dpaa2_free_dpbp_dev;
 	dpaa2_free_dq_storage;
 	dpaa2_free_eq_descriptors;
-	dpaa2_get_qbman_swp;
 	dpaa2_io_portal;
 	dpaa2_svr_family;
 	dpaa2_virt_mode;
@@ -101,15 +110,6 @@ DPDK_20.0 {
 	rte_fslmc_driver_unregister;
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
-	rte_fslmc_vfio_dmamap;
 	rte_global_active_dqs_list;
 	rte_mcp_ptr_list;
-
-	local: *;
-};
-
-EXPERIMENTAL {
-	global:
-
-	rte_fslmc_vfio_mem_dmamap;
 };
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc259..5078b48ee1 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -162,6 +162,7 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
 
 /**
@@ -171,6 +172,7 @@ void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be unregistered.
  */
+__rte_internal
 void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
 
 /** Helper for DPAA2 device registration from driver (eth, crypto) instance */
@@ -189,6 +191,7 @@ RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
  *   A pointer to a rte_dpaa_object structure describing the mc object
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_object_register(struct rte_dpaa2_object *object);
 
 /**
@@ -200,6 +203,7 @@ void rte_fslmc_object_register(struct rte_dpaa2_object *object);
  *   >=0 for count; 0 indicates either no device of the said type scanned or
  *   invalid device type.
  */
+__rte_internal
 uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type);
 
 /** Helper for DPAA2 object registration */
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section
  @ 2020-05-15  9:47  3%   ` Hemant Agrawal
  2020-05-19  6:43  0%     ` Hemant Agrawal
  2020-05-19  9:51  0%     ` Ray Kinsella
  2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 02/13] bus/fslmc: " Hemant Agrawal
                     ` (5 subsequent siblings)
  6 siblings, 2 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  9:47 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
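The pattern applied throughout this series, sketched on a hypothetical
symbol (the names below are illustrative, not from the tree):

	/* in the header: tag the declaration */
	__rte_internal
	int dpaax_example_query(void);

	/* in rte_common_dpaax_version.map: export it from the INTERNAL
	 * node, which the ABI checker skips, instead of a versioned
	 * DPDK_xx.y node */
	INTERNAL {
		global:

		dpaax_example_query;
	};

In-tree callers are unaffected; the symbol merely leaves the checked
ABI surface.
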
 devtools/libabigail.abignore                      |  3 +++
 drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
 drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
 drivers/common/dpaax/rte_common_dpaax_version.map |  6 ++++--
 4 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index c9ee73cb3c..b1488d5549 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -48,3 +48,6 @@
         changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
 [suppress_variable]
         name = rte_crypto_aead_algorithm_strings
+; Ignore moving DPAAx stable functions to INTERNAL tag
+[suppress_file]
+	file_name_regexp = ^librte_common_dpaax\.
diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
index 960b421766..38d91a1afe 100644
--- a/drivers/common/dpaax/dpaa_of.h
+++ b/drivers/common/dpaax/dpaa_of.h
@@ -24,6 +24,7 @@
 #include <limits.h>
 #include <rte_common.h>
 #include <dpaa_list.h>
+#include <rte_compat.h>
 
 #ifndef OF_INIT_DEFAULT_PATH
 #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
@@ -102,6 +103,7 @@ struct dt_file {
 	uint64_t buf[OF_FILE_BUF_MAX >> 3];
 };
 
+__rte_internal
 const struct device_node *of_find_compatible_node(
 					const struct device_node *from,
 					const char *type __rte_unused,
@@ -113,32 +115,44 @@ const struct device_node *of_find_compatible_node(
 		dev_node != NULL; \
 		dev_node = of_find_compatible_node(dev_node, type, compatible))
 
+__rte_internal
 const void *of_get_property(const struct device_node *from, const char *name,
 			    size_t *lenp) __attribute__((nonnull(2)));
+__rte_internal
 bool of_device_is_available(const struct device_node *dev_node);
 
+
+__rte_internal
 const struct device_node *of_find_node_by_phandle(uint64_t ph);
 
+__rte_internal
 const struct device_node *of_get_parent(const struct device_node *dev_node);
 
+__rte_internal
 const struct device_node *of_get_next_child(const struct device_node *dev_node,
 					    const struct device_node *prev);
 
+__rte_internal
 const void *of_get_mac_address(const struct device_node *np);
 
 #define for_each_child_node(parent, child) \
 	for (child = of_get_next_child(parent, NULL); child != NULL; \
 			child = of_get_next_child(parent, child))
 
+
+__rte_internal
 uint32_t of_n_addr_cells(const struct device_node *dev_node);
 uint32_t of_n_size_cells(const struct device_node *dev_node);
 
+__rte_internal
 const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
 			       uint64_t *size, uint32_t *flags);
 
+__rte_internal
 uint64_t of_translate_address(const struct device_node *dev_node,
 			      const uint32_t *addr) __attribute__((nonnull));
 
+__rte_internal
 bool of_device_is_compatible(const struct device_node *dev_node,
 			     const char *compatible);
 
@@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct device_node *dev_node,
  * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
  * The path should usually be "/proc/device-tree".
  */
+__rte_internal
 int of_init_path(const char *dt_path);
 
 /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index fc3b9e7a8f..230fba8ba0 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
 #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
 
 /* APIs exposed */
+__rte_internal
 int dpaax_iova_table_populate(void);
+__rte_internal
 void dpaax_iova_table_depopulate(void);
+__rte_internal
 int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
+__rte_internal
 void dpaax_iova_table_dump(void);
 
 static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index f72eba761d..14b507ad13 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,4 +1,8 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaax_iova_table_depopulate;
@@ -18,6 +22,4 @@ DPDK_20.0 {
 	of_init_path;
 	of_n_addr_cells;
 	of_translate_address;
-
-	local: *;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 01/12] common/dpaax: move internal symbols into INTERNAL section
  2020-05-14 17:15  0%                           ` Hemant Agrawal (OSS)
@ 2020-05-15  9:26  0%                             ` Thomas Monjalon
  2020-05-15 11:19  5%                               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2020-05-15  9:26 UTC (permalink / raw)
  To: David Marchand, hemant.agrawal; +Cc: Ray Kinsella, dev, neil.horman, techboard

14/05/2020 19:15, Hemant Agrawal (OSS):
> > On Thu, May 14, 2020 at 3:31 PM David Marchand
> > <david.marchand@redhat.com> wrote:
> > > On Thu, May 14, 2020 at 2:39 PM Hemant Agrawal (OSS)
> > > <hemant.agrawal@oss.nxp.com> wrote:
> > > >
> > > > [Hemant] this is working fine for pmd_dpaa but not for pmd_dpaa2
> > > >
> > > > I removed the filename_exp and introduced a function-based name.
> > > > Now the issue is the following warning: SONAME changed from
> > > > 'librte_pmd_dpaa2.so.20.0' to 'librte_pmd_dpaa2.so.0.200.2'
> > > >
> > > > The primary reason is that now pmd_dpaa2 has no symbol left in the
> > > > 20.0 section.
> > > > The following is not helping:
> > > > [suppress_file]
> > > >         soname_regexp = ^librte_pmd_dpaa2
> > > > So it seems that, for now, filename_exp is the only option.
> > >
> > > That's interesting.
> > > Because I wondered about this point when reviewing __rte_internal.
> > > For components providing only internal symbols, like components
> > > providing only experimental symbols, the build framework will select a
> > > soname with .0.200.x.

I will remind once again that I was against this rule.
Distinguishing "stable or partially stable" and "completely non-stable"
libraries is an useless complication.


> > > Here, your dpaa2 driver was seen as a stable library so far.
> > > Moving everything to internal changes this and the build framework
> > > changes the soname to a non-stable one.
> > 
> > Looking at a v19.11 testpmd binary:
> > $ readelf -d $HOME/abi/v19.11/build-gcc-shared/usr/local/bin/dpdk-testpmd
> > |grep dpaa
> >  0x0000000000000001 (NEEDED)             Shared library:
> > [librte_bus_dpaa.so.20.0]
> >  0x0000000000000001 (NEEDED)             Shared library:
> > [librte_common_dpaax.so.20.0]
> >  0x0000000000000001 (NEEDED)             Shared library:
> > [librte_mempool_dpaa.so.20.0]
> >  0x0000000000000001 (NEEDED)             Shared library:
> > [librte_pmd_dpaa.so.20.0]
> > 
> > Changing the soname would break this.
> > 
> > > You could keep an empty DPDK_20.0 block to avoid this and the soname
> > > will be kept as is.
> 
> [Hemant] Yes, I was thinking about it but missed making this change while sending the patch. Will do it asap.
> > 
> > We will have to maintain such soname for all dpaa libraries until 20.11.

Thank you for maintaining the soname compatibility in v7.

Now the question is: what to do in v20.11?
This question will have to be voted on by the Technical Board, which voted
in the "pure experimental" versioning rule.
We have 3 options:

a) "Pure internal" libs are versioned as "stable" libs,
while "pure experimental" libs have version 0.x.
It looks inconsistent and nonsensical.

b) "Pure internal" libs are versioned as
"pure experimental" libs: version 0.x.
It makes "pure internal" libs version decreasing in 20.11.

c) Forget about the different versioning scheme,
i.e. increase 0.x versions to x, as for "stable" libs.

Of course, I vote for option c.



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v7 07/13] crypto: move internal symbols into INTERNAL section
                         ` (5 preceding siblings ...)
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 06/13] net/dpaa2: " Hemant Agrawal
@ 2020-05-15  5:08  3%     ` Hemant Agrawal
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
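A sketch of the consumer side of the change below; the wrapper name and
its placement are hypothetical, while the callee and its signature come
from the header being modified:

	/* the dpaa2 event PMD is the in-tree caller kept working by the
	 * INTERNAL export; out-of-tree code now trips the __rte_internal
	 * guard at compile time instead of relying on unstable ABI. */
	static int
	example_bind_qp_to_event(const struct rte_cryptodev *dev, int qp_id,
				 struct dpaa2_dpcon_dev *dpcon,
				 const struct rte_event *ev)
	{
		return dpaa2_sec_eventq_attach(dev, qp_id, dpcon, ev);
	}
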
 drivers/crypto/dpaa2_sec/dpaa2_sec_event.h             | 5 +++--
 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 6 ++++--
 drivers/crypto/dpaa_sec/dpaa_sec_event.h               | 8 ++++----
 drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map   | 6 ++++--
 4 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
index c779d5d837..675cbbb81d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
@@ -6,12 +6,13 @@
 #ifndef _DPAA2_SEC_EVENT_H_
 #define _DPAA2_SEC_EVENT_H_
 
-int
-dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event *event);
 
+__rte_internal
 int dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 5952d645fd..3d863aff4d 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaa2_sec_eventq_attach;
 	dpaa2_sec_eventq_detach;
-
-	local: *;
 };
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_event.h b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
index 8d1a018096..0b09fa8f75 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_event.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
@@ -6,14 +6,14 @@
 #ifndef _DPAA_SEC_EVENT_H_
 #define _DPAA_SEC_EVENT_H_
 
-int
-dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		uint16_t ch_id,
 		const struct rte_event *event);
 
-int
-dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
 #endif /* _DPAA_SEC_EVENT_H_ */
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index 8580fa13db..023e120516 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaa_sec_eventq_attach;
 	dpaa_sec_eventq_detach;
-
-	local: *;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 06/13] net/dpaa2: move internal symbols into INTERNAL section
                         ` (4 preceding siblings ...)
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 05/13] net/dpaa: " Hemant Agrawal
@ 2020-05-15  5:08  3%     ` Hemant Agrawal
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 07/13] crypto: " Hemant Agrawal
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
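For reference, a sketch of how the pair below is consumed in-tree; the
wrapper names are hypothetical, the signatures match the header change
below:

	static int
	example_rxq_to_event(const struct rte_eth_dev *dev, int rxq,
		struct dpaa2_dpcon_dev *dpcon,
		const struct rte_event_eth_rx_adapter_queue_conf *conf)
	{
		/* bind the ethdev Rx queue to an event dequeue channel */
		return dpaa2_eth_eventq_attach(dev, rxq, dpcon, conf);
	}

	static int
	example_rxq_to_poll(const struct rte_eth_dev *dev, int rxq)
	{
		/* return the queue to plain polled Rx */
		return dpaa2_eth_eventq_detach(dev, rxq);
	}
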
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 ++
 drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +++++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 2c49a7f01f..c7fb6539ff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -164,11 +164,13 @@ int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 
 int dpaa2_attach_bp_list(struct dpaa2_dev_priv *priv, void *blist);
 
+__rte_internal
 int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id);
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index f2bb793319..b633fdc2a8 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,9 +1,4 @@
 DPDK_20.0 {
-	global:
-
-	dpaa2_eth_eventq_attach;
-	dpaa2_eth_eventq_detach;
-
 	local: *;
 };
 
@@ -14,3 +9,10 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_set_custom_hash;
 	rte_pmd_dpaa2_set_timestamp;
 };
+
+INTERNAL {
+	global:
+
+	dpaa2_eth_eventq_attach;
+	dpaa2_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 05/13] net/dpaa: move internal symbols into INTERNAL section
                         ` (3 preceding siblings ...)
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 04/13] mempool/dpaa2: " Hemant Agrawal
@ 2020-05-15  5:08  3%     ` Hemant Agrawal
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 06/13] net/dpaa2: " Hemant Agrawal
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 07/13] crypto: " Hemant Agrawal
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
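On the libabigail change below: unlike the file-level suppressions used
for the bus and common libraries, a [suppress_function] rule matches by
symbol name, so one regexp covers both event queue symbols moved here.
An illustrative reading of the rule (comment lines added):

	; ignore ABI reports for any function whose name starts with
	; "dpaa", e.g. dpaa_eth_eventq_attach and dpaa_eth_eventq_detach
	[suppress_function]
		name_regexp = ^dpaa
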
 devtools/libabigail.abignore              | 2 ++
 drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
 drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 42f9469221..7b6358c394 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -63,3 +63,5 @@
 	name_regexp = ^rte_dpaa_bpid_info
 [suppress_variable]
 	name_regexp = ^rte_dpaa2_bpid_info
+[suppress_function]
+        name_regexp = ^dpaa
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index af9fc2105d..7393a9df05 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -160,12 +160,14 @@ struct dpaa_if_stats {
 	uint64_t tund;		/**<Tx Undersized */
 };
 
+__rte_internal
 int
 dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		u16 ch_id,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int
 dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 			   int eth_rx_queue_id);
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index f403a1526d..774aa0de45 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,9 +1,14 @@
 DPDK_20.0 {
 	global:
 
-	dpaa_eth_eventq_attach;
-	dpaa_eth_eventq_detach;
 	rte_pmd_dpaa_set_tx_loopback;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	dpaa_eth_eventq_attach;
+	dpaa_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 03/13] bus/dpaa: move internal symbols into INTERNAL section
    2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
  2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 02/13] bus/fslmc: " Hemant Agrawal
@ 2020-05-15  5:08  1%     ` Hemant Agrawal
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 04/13] mempool/dpaa2: " Hemant Agrawal
                       ` (3 subsequent siblings)
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

This patch also removes two symbols that are not meant
to be exported:
rte_dpaa_mem_ptov - a static inline function in the header file
fman_ccsr_map_fd - a locally shared variable

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
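On why rte_dpaa_mem_ptov needs no .map entry: a static inline from the
header is compiled into each caller and never appears in the shared
object's dynamic symbol table. A generic illustration (the helper below
is hypothetical):

	/* lives only in a header; every caller gets an inlined copy */
	static inline int dpaa_example_inline(int x)
	{
		return x + 1;
	}
	/* hence: nm -D librte_bus_dpaa.so | grep dpaa_example_inline
	 * prints nothing - there is no exported symbol to version. */
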
 devtools/libabigail.abignore              |  2 ++
 drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
 drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |  8 +++++-
 drivers/bus/dpaa/include/netcfg.h         |  2 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  8 +++---
 drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
 8 files changed, 85 insertions(+), 5 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 877c6d5be8..ab34302d0c 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -53,3 +53,5 @@
 	file_name_regexp = ^librte_common_dpaax\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_fslmc\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_dpaa\.
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index f9cd972153..82da2fcfe0 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
  * the structure provided by the caller can be released or reused after the
  * function returns.
  */
+__rte_internal
 struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
 
 /**
  * bman_free_pool - Deallocates a Buffer Pool object
  * @pool: the pool object to release
  */
+__rte_internal
 void bman_free_pool(struct bman_pool *pool);
 
 /**
@@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
  * The returned pointer refers to state within the pool object so must not be
  * modified and can no longer be read once the pool object is destroyed.
  */
+__rte_internal
 const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 
 /**
@@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
  * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
  *
  */
+__rte_internal
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
  * The return value will be the number of buffers obtained from the pool, or a
  * negative error code if a h/w error or pool starvation was encountered.
  */
+__rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
  *
  * Return the number of the free buffers
  */
+__rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
 /**
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 5705ebfdce..6c87c8db0d 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_FMAN_H
 #define __FSL_FMAN_H
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -43,18 +45,23 @@ struct fm_status_t {
 } __rte_packed;
 
 /* Set MAC address for a particular interface */
+__rte_internal
 int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
 
 /* Remove a MAC address for a particular interface */
+__rte_internal
 void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
 
 /* Get the FMAN statistics */
+__rte_internal
 void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
 
 /* Reset the FMAN statistics */
+__rte_internal
 void fman_if_stats_reset(struct fman_if *p);
 
 /* Get all of the FMAN statistics */
+__rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
 /* Set ignore pause option for a specific interface */
@@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
 void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
 
 /* Enable/disable Rx promiscuous mode on specified interface */
+__rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
+__rte_internal
 void fman_if_promiscuous_disable(struct fman_if *p);
 
 /* Enable/disable Rx on specific interfaces */
+__rte_internal
 void fman_if_enable_rx(struct fman_if *p);
+__rte_internal
 void fman_if_disable_rx(struct fman_if *p);
 
 /* Enable/disable loopback on specific interfaces */
+__rte_internal
 void fman_if_loopback_enable(struct fman_if *p);
+__rte_internal
 void fman_if_loopback_disable(struct fman_if *p);
 
 /* Set buffer pool on specific interface */
+__rte_internal
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
 /* Get Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_get_fc_threshold(struct fman_if *fm_if);
 
 /* Enable and Set Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_set_fc_threshold(struct fman_if *fm_if,
 			u32 high_water, u32 low_water, u32 bpid);
 
 /* Get Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
 /* Set Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 
 /* Set default error fqid on specific interface */
@@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
 
 /* Set IC transfer params */
+__rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
 /* Get interface fd->offset value */
+__rte_internal
 int fman_if_get_fdoff(struct fman_if *fm_if);
 
 /* Set interface fd->offset value */
+__rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
 
 /* Get interface SG enable status value */
+__rte_internal
 int fman_if_get_sg_enable(struct fman_if *fm_if);
 
 /* Set interface SG support mode */
+__rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
 /* Get interface Max Frame length (MTU) */
 uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
 
 /* Set interface  Max Frame length (MTU) */
+__rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
 /* Set interface next invoked action for dequeue operation */
 void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
 
 /* discard error packets on rx */
+__rte_internal
 void fman_if_discard_rx_errors(struct fman_if *fm_if);
 
+__rte_internal
 void fman_if_set_mcast_filter_table(struct fman_if *p);
 
+__rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
 int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 1b3342e7e6..4411bb0a79 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1314,6 +1314,7 @@ struct qman_cgr {
 #define QMAN_CGR_MODE_FRAME          0x00000001
 
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+__rte_internal
 void qman_set_fq_lookup_table(void **table);
 #endif
 
@@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
  */
 int qman_get_portal_index(void);
 
+__rte_internal
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs);
 
@@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
  * processed via qman_poll_***() functions). Returns zero for success, or
  * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_add(u32 bits);
 
 /**
@@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 
 /**
@@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
  * instead be processed via qman_poll_***() functions. Returns zero for success,
  * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_remove(u32 bits);
 
 /**
@@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
 /**
@@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
  */
 u16 qman_affine_channel(int cpu);
 
+__rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
 
@@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
  *
  * This function will issue a volatile dequeue command to the QMAN.
  */
+__rte_internal
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 
 /**
@@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
  * is issued. It will keep returning NULL until there is no packet available on
  * the DQRR.
  */
+__rte_internal
 struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 
 /**
@@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
  * This will consume the DQRR entry and make it available for the next volatile
  * dequeue.
  */
+__rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
@@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
  * this function will return -EINVAL, otherwise the return value is >=0 and
  * represents the number of DQRR entries processed.
  */
+__rte_internal
 int qman_poll_dqrr(unsigned int limit);
 
 /**
@@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
  * (SDQCR). The requested pools are limited to those the portal has dequeue
  * access to.
  */
+__rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
@@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
  * function must be called from the same CPU as that which processed the DQRR
  * entry in the first place.
  */
+__rte_internal
 void qman_dca_index(u8 index, int park_request);
 
 /**
@@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
  * a frame queue object based on that, rather than assuming/requiring that it be
  * Out of Service.
  */
+__rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
 /**
@@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
  * qman_fq_fqid - Queries the frame queue ID of a FQ object
  * @fq: the frame queue object to query
  */
+__rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
 /**
@@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
  * This captures the state, as seen by the driver, at the time the function
  * executes.
  */
+__rte_internal
 void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 
 /**
@@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
  * context_a.address fields and will leave the stashing fields provided by the
  * user alone, otherwise it will zero out the context_a.stashing fields.
  */
+__rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
 /**
@@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
  * caller should be prepared to accept the callback as the function is called,
  * not only once it has returned.
  */
+__rte_internal
 int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 
 /**
@@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
  * The frame queue must be retired and empty, and if any order restoration list
  * was released as ERNs at the time of retirement, they must all be consumed.
  */
+__rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
 /**
@@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
  * @fq: the frame queue object to be queried
  * @np: storage for the queried FQD fields
  */
+__rte_internal
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 
 /**
@@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
  * @fq: the frame queue object to be queried
  * @frm_cnt: number of frames in the queue
  */
+__rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
 /**
@@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
  * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
  * "flags" retrieved from qman_fq_state().
  */
+__rte_internal
 int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 
 /**
@@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
  * of an already busy hardware resource by throttling many of the to-be-dropped
  * enqueues "at the source".
  */
+__rte_internal
 int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 
+__rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
@@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
  * This API is similar to qman_enqueue_multi(), but it takes fd which needs
  * to be processed by different frame queues.
  */
+__rte_internal
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		      u32 *flags, int frames_to_send);
@@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
  * @fqid: the base FQID of the range to deallocate
  * @count: the number of FQIDs in the range
  */
+__rte_internal
 int qman_reserve_fqid_range(u32 fqid, unsigned int count);
 static inline int qman_reserve_fqid(u32 fqid)
 {
@@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_pool(u32 *result)
 {
@@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
  * any unspecified parameters) will be used rather than a modify-hardware
  * command (which only modifies the specified parameters).
  */
+__rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
  * is executed. This must be executed on the same affine portal on which it was
  * created.
  */
+__rte_internal
 int qman_delete_cgr(struct qman_cgr *cgr);
 
 /**
@@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
  * unspecified parameters) will be used rather than a modify-hardware command
  * (which only modifies the specified parameters).
  */
+__rte_internal
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_cgrid(u32 *result)
 {
@@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
  * @id: the base CGR ID of the range to deallocate
  * @count: the number of CGR IDs in the range
  */
+__rte_internal
 void qman_release_cgrid_range(u32 id, unsigned int count);
 static inline void qman_release_cgrid(u32 id)
 {
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 263d9bb976..dcf35e4adb 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
 /* Obtain thread-local UIO file-descriptors */
+__rte_internal
 int qman_thread_fd(void);
 int bman_thread_fd(void);
 
@@ -66,10 +67,14 @@ int bman_thread_fd(void);
  * processing is complete. As such, it is essential to call this before going
  * into another blocking read/select/poll.
  */
+__rte_internal
 void qman_thread_irq(void);
+
+__rte_internal
 void bman_thread_irq(void);
+__rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
-
+__rte_internal
 void qman_clear_irq(void);
 
 /* Global setup */
@@ -77,6 +82,7 @@ int qman_global_init(void);
 int bman_global_init(void);
 
 /* Direct portal create and destroy */
+__rte_internal
 struct qman_portal *fsl_qman_fq_portal_create(int *fd);
 int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
 int fsl_qman_fq_portal_init(struct qman_portal *qp);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index bf7bfae8cb..d7d1befd24 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -46,11 +46,13 @@ struct netcfg_interface {
  * cfg_file: FMC config XML file
  * Returns the configuration information in newly allocated memory.
  */
+__rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
 /* cfg_ptr: configuration information pointer.
  * Frees the resources allocated by the configuration layer.
  */
+__rte_internal
 void netcfg_release(struct netcfg_info *cfg_ptr);
 
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e0..53732289d3 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,8 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	bman_acquire;
@@ -13,7 +17,6 @@ DPDK_20.0 {
 	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	dpaa_svr_family;
-	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
 	fman_dealloc_bufs_mask_lo;
 	fman_if_add_mac_addr;
@@ -87,10 +90,7 @@ DPDK_20.0 {
 	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
-	rte_dpaa_mem_ptov;
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
-
-	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca9785..d4aee132ef 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
  *   A pointer to a rte_dpaa_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
 /**
@@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  *	A pointer to a rte_dpaa_driver structure describing the driver
  *	to be unregistered.
  */
+__rte_internal
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
 /**
@@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
  * @return
  *	0 in case of success, error otherwise
  */
+__rte_internal
 int rte_dpaa_portal_init(void *arg);
 
+__rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
+__rte_internal
 int rte_dpaa_portal_fq_close(struct qman_fq *fq);
 
 /**
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v7 04/13] mempool/dpaa2: move internal symbols into INTERNAL section
                         ` (2 preceding siblings ...)
  2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 03/13] bus/dpaa: " Hemant Agrawal
@ 2020-05-15  5:08  3%     ` Hemant Agrawal
  2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 05/13] net/dpaa: " Hemant Agrawal
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
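Note for reviewers: the suppression rules added below are consumed by
the ABI checker, which passes this file to abidiff via its --suppr
option, e.g. "abidiff --suppr devtools/libabigail.abignore old.so
new.so" (paths illustrative). With the [suppress_function] and
[suppress_variable] entries in place, changes to the matched internal
symbols are no longer flagged as ABI breakage.
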
 devtools/libabigail.abignore                        | 8 ++++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 6 ++++--
 drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
 drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
 4 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ab34302d0c..42f9469221 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -55,3 +55,11 @@
 	file_name_regexp = ^librte_bus_fslmc\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_dpaa\.
+[suppress_function]
+	name = rte_dpaa2_mbuf_alloc_bulk
+[suppress_variable]
+	name_regexp = ^rte_dpaa_memsegs
+[suppress_variable]
+	name_regexp = ^rte_dpaa_bpid_info
+[suppress_variable]
+	name_regexp = ^rte_dpaa2_bpid_info
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 9eebaf7ffd..89d7cf4957 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,8 +1,10 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	rte_dpaa_bpid_info;
 	rte_dpaa_memsegs;
-
-	local: *;
 };
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
index fa0f2280d5..53fa1552d1 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
@@ -61,6 +61,7 @@ struct dpaa2_bp_info {
 
 extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
 
+__rte_internal
 int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 		       void **obj_table, unsigned int count);
 
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index cd4bc88273..686b024624 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,10 +1,15 @@
 DPDK_20.0 {
 	global:
 
-	rte_dpaa2_bpid_info;
-	rte_dpaa2_mbuf_alloc_bulk;
 	rte_dpaa2_mbuf_from_buf_addr;
 	rte_dpaa2_mbuf_pool_bpid;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	rte_dpaa2_bpid_info;
+	rte_dpaa2_mbuf_alloc_bulk;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v7 02/13] bus/fslmc: move internal symbols into INTERNAL section
    2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
@ 2020-05-15  5:08  1%     ` Hemant Agrawal
  2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 03/13] bus/dpaa: " Hemant Agrawal
                       ` (4 subsequent siblings)
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

This patch also removes two symbols which were not used
anywhere else: rte_fslmc_vfio_dmamap and dpaa2_get_qbman_swp.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
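Aside for reviewers: removing rte_fslmc_vfio_dmamap and
dpaa2_get_qbman_swp is safe only if nothing outside the driver still
references them; one quick way to confirm is
"git grep -lw rte_fslmc_vfio_dmamap" across the tree (command shown
for illustration).
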
 devtools/libabigail.abignore                  |  2 +
 drivers/bus/fslmc/fslmc_vfio.h                |  4 ++
 drivers/bus/fslmc/mc/fsl_dpbp.h               |  6 +++
 drivers/bus/fslmc/mc/fsl_dpci.h               |  3 ++
 drivers/bus/fslmc/mc/fsl_dpcon.h              |  2 +
 drivers/bus/fslmc/mc/fsl_dpdmai.h             |  8 ++++
 drivers/bus/fslmc/mc/fsl_dpio.h               |  9 ++++
 drivers/bus/fslmc/mc/fsl_dpmng.h              |  2 +
 drivers/bus/fslmc/mc/fsl_mc_cmd.h             |  1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  5 +++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  8 ++++
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  3 ++
 .../fslmc/qbman/include/fsl_qbman_portal.h    | 41 +++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map   | 20 ++++-----
 drivers/bus/fslmc/rte_fslmc.h                 |  4 ++
 15 files changed, 108 insertions(+), 10 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index b1488d5549..877c6d5be8 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -51,3 +51,5 @@
 ; Ignore moving DPAAx stable functions to INTERNAL tag
 [suppress_file]
 	file_name_regexp = ^librte_common_dpaax\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_fslmc\.
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index c988121294..609e48aea3 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -41,7 +41,11 @@ typedef struct fslmc_vfio_container {
 } fslmc_vfio_container;
 
 extern char *fslmc_container;
+
+__rte_internal
 int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index);
+
+__rte_internal
 int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index);
 
 int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
index 9d405b42c4..7b537a21be 100644
--- a/drivers/bus/fslmc/mc/fsl_dpbp.h
+++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
@@ -14,6 +14,7 @@
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpbp_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpbp_id,
@@ -42,10 +43,12 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t obj_id);
 
+__rte_internal
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -55,6 +58,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -70,6 +74,7 @@ struct dpbp_attr {
 	uint16_t bpid;
 };
 
+__rte_internal
 int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -88,6 +93,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
index a0ee5bfe69..81fd3438aa 100644
--- a/drivers/bus/fslmc/mc/fsl_dpci.h
+++ b/drivers/bus/fslmc/mc/fsl_dpci.h
@@ -181,6 +181,7 @@ struct dpci_rx_queue_cfg {
 	int order_preservation_en;
 };
 
+__rte_internal
 int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
@@ -228,6 +229,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -235,6 +237,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint8_t options,
 		 struct opr_cfg *cfg);
 
+__rte_internal
 int dpci_get_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index af81d51195..7caa6c68a1 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -20,6 +20,7 @@ struct fsl_mc_io;
  */
 #define DPCON_INVALID_DPIO_ID		(int)(-1)
 
+__rte_internal
 int dpcon_open(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       int dpcon_id,
@@ -77,6 +78,7 @@ struct dpcon_attr {
 	uint8_t num_priorities;
 };
 
+__rte_internal
 int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint32_t cmd_flags,
 			 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 40469cc139..e7e8a5dda9 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -23,11 +23,13 @@ struct fsl_mc_io;
  */
 #define DPDMAI_ALL_QUEUES	(uint8_t)(-1)
 
+__rte_internal
 int dpdmai_open(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		int dpdmai_id,
 		uint16_t *token);
 
+__rte_internal
 int dpdmai_close(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -54,10 +56,12 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint32_t object_id);
 
+__rte_internal
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
 
+__rte_internal
 int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token);
@@ -82,6 +86,7 @@ struct dpdmai_attr {
 	uint8_t num_of_queues;
 };
 
+__rte_internal
 int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
 			  uint32_t cmd_flags,
 			  uint16_t token,
@@ -148,6 +153,7 @@ struct dpdmai_rx_queue_cfg {
 
 };
 
+__rte_internal
 int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -168,6 +174,7 @@ struct dpdmai_rx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -184,6 +191,7 @@ struct dpdmai_tx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index 3158f53191..92e97db94b 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -13,11 +13,13 @@
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpio_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpio_id,
 	      uint16_t *token);
 
+__rte_internal
 int dpio_close(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -57,10 +59,12 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t object_id);
 
+__rte_internal
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -70,10 +74,12 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
 
+__rte_internal
 int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -84,12 +90,14 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
 				    uint16_t token,
 				    int dpcon_id,
 				    uint8_t *channel_index);
 
+__rte_internal
 int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				       uint32_t cmd_flags,
 				       uint16_t token,
@@ -119,6 +127,7 @@ struct dpio_attr {
 	uint32_t clk;
 };
 
+__rte_internal
 int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 36c387af27..cdd8506625 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -34,6 +34,7 @@ struct mc_version {
 	uint32_t revision;
 };
 
+__rte_internal
 int mc_get_version(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   struct mc_version *mc_ver_info);
@@ -48,6 +49,7 @@ struct mc_soc_version {
 	uint32_t pvr;
 };
 
+__rte_internal
 int mc_get_soc_version(struct fsl_mc_io *mc_io,
 		       uint32_t cmd_flags,
 		       struct mc_soc_version *mc_platform_info);
diff --git a/drivers/bus/fslmc/mc/fsl_mc_cmd.h b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
index ac919610cf..06ea41a3b2 100644
--- a/drivers/bus/fslmc/mc/fsl_mc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
@@ -80,6 +80,7 @@ enum mc_cmd_status {
 
 #define MC_CMD_HDR_FLAGS_MASK	0xFF00FF00
 
+__rte_internal
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd);
 
 static inline uint64_t mc_encode_cmd_header(uint16_t cmd_id,
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c93806..7c5966241a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -36,20 +36,25 @@ extern uint8_t dpaa2_eqcr_size;
 extern struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
 
 /* Affine a DPIO portal to current processing thread */
+__rte_internal
 int dpaa2_affine_qbman_swp(void);
 
 /* Affine additional DPIO portal to current crypto processing thread */
+__rte_internal
 int dpaa2_affine_qbman_ethrx_swp(void);
 
 /* allocate memory for FQ - dq storage */
+__rte_internal
 int
 dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free memory for FQ- dq storage */
+__rte_internal
 void
 dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free the enqueue response descriptors */
+__rte_internal
 uint32_t
 dpaa2_free_eq_descriptors(void);
 
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 368fe7c688..33b191f823 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -426,11 +426,19 @@ void set_swp_active_dqs(uint16_t dpio_index, struct qbman_result *dqs)
 {
 	rte_global_active_dqs_list[dpio_index].global_active_dqs = dqs;
 }
+__rte_internal
 struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
+
+__rte_internal
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
+
+__rte_internal
 int dpaa2_dpbp_supported(void);
 
+__rte_internal
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
+
+__rte_internal
 void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci);
 
 #endif
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index e010b1b6ae..328f2022fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -24,7 +24,10 @@ uint8_t verb;
 	uint8_t reserved2[29];
 };
 
+__rte_internal
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r);
+
+__rte_internal
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
 uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index 88f0a99686..7ac0f82106 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -117,6 +117,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
  * @p: the given software portal object.
  * @mask: The value to set in SWP_ISR register.
  */
+__rte_internal
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
 
 /**
@@ -286,6 +287,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
  * rather by specifying the index (from 0 to 15) that has been mapped to the
  * desired channel.
  */
+__rte_internal
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable);
 
 /* ------------------- */
@@ -325,6 +327,7 @@ enum qbman_pull_type_e {
  * default/starting state.
  * @d: the pull dequeue descriptor to be cleared.
  */
+__rte_internal
 void qbman_pull_desc_clear(struct qbman_pull_desc *d);
 
 /**
@@ -340,6 +343,7 @@ void qbman_pull_desc_clear(struct qbman_pull_desc *d);
  * the caller provides in 'storage_phys'), and 'stash' controls whether or not
  * those writes to main-memory express a cache-warming attribute.
  */
+__rte_internal
 void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 				 struct qbman_result *storage,
 				 uint64_t storage_phys,
@@ -349,6 +353,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
  * @d: the pull dequeue descriptor to be set.
  * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
  */
+__rte_internal
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes);
 /**
@@ -372,6 +377,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
  * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
  * @fqid: the frame queue index of the given FQ.
  */
+__rte_internal
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
 
 /**
@@ -407,6 +413,7 @@ void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
  * Return 0 for success, and -EBUSY if the software portal is not ready
  * to do pull dequeue.
  */
+__rte_internal
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 /* -------------------------------- */
@@ -421,12 +428,14 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
  * only once, so repeated calls can return a sequence of DQRR entries, without
  * requiring they be consumed immediately or in any particular order.
  */
+__rte_internal
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p);
 
 /**
  * qbman_swp_prefetch_dqrr_next() - prefetch the next DQRR entry.
  * @s: the software portal object.
  */
+__rte_internal
 void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
 
 /**
@@ -435,6 +444,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
  * @s: the software portal object.
  * @dq: the DQRR entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
 
 /**
@@ -442,6 +452,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
  * @s: the software portal object.
  * @dqrr_index: the DQRR index entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
 
 /**
@@ -450,6 +461,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
  *
  * Return dqrr index.
  */
+__rte_internal
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
 
 /**
@@ -460,6 +472,7 @@ uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
  *
  * Return dqrr entry object.
  */
+__rte_internal
 struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
 
 /* ------------------------------------------------- */
@@ -485,6 +498,7 @@ struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_result_has_new_result(struct qbman_swp *s,
 				struct qbman_result *dq);
 
@@ -497,8 +511,10 @@ int qbman_result_has_new_result(struct qbman_swp *s,
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_check_command_complete(struct qbman_result *dq);
 
+__rte_internal
 int qbman_check_new_result(struct qbman_result *dq);
 
 /* -------------------------------------------------------- */
@@ -624,6 +640,7 @@ int qbman_result_is_FQPN(const struct qbman_result *dq);
  *
  * Return the state field.
  */
+__rte_internal
 uint8_t qbman_result_DQ_flags(const struct qbman_result *dq);
 
 /**
@@ -658,6 +675,7 @@ static inline int qbman_result_DQ_is_pull_complete(
  *
  * Return seqnum.
  */
+__rte_internal
 uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
 
 /**
@@ -667,6 +685,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
  *
  * Return odpid.
  */
+__rte_internal
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
 
 /**
@@ -699,6 +718,7 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
  *
  * Return the frame queue context.
  */
+__rte_internal
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
 
 /**
@@ -707,6 +727,7 @@ uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
  *
  * Return the frame descriptor.
  */
+__rte_internal
 const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
 
 /* State-change notifications (FQDAN/CDAN/CSCN/...). */
@@ -717,6 +738,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
  *
  * Return the state in the notification.
  */
+__rte_internal
 uint8_t qbman_result_SCN_state(const struct qbman_result *scn);
 
 /**
@@ -850,6 +872,7 @@ struct qbman_eq_response {
  * default/starting state.
  * @d: the given enqueue descriptor.
  */
+__rte_internal
 void qbman_eq_desc_clear(struct qbman_eq_desc *d);
 
 /* Exactly one of the following descriptor "actions" should be set. (Calling
@@ -870,6 +893,7 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d);
  * @response_success: 1 = enqueue with response always; 0 = enqueue with
  * rejections returned on a FQ.
  */
+__rte_internal
 void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
 /**
  * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
@@ -881,6 +905,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
  * @incomplete: indicates whether this is the last fragment using the same
  * sequence number.
  */
+__rte_internal
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete);
 
@@ -915,6 +940,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
  * data structure.) 'stash' controls whether or not the write to main-memory
  * expresses a cache-warming attribute.
  */
+__rte_internal
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				uint64_t storage_phys,
 				int stash);
@@ -929,6 +955,7 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
  * result "storage" before issuing an enqueue, and use any non-zero 'token'
  * value.
  */
+__rte_internal
 void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
 
 /**
@@ -944,6 +971,7 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
  * @d: the enqueue descriptor
  * @fqid: the id of the frame queue to be enqueued.
  */
+__rte_internal
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
 
 /**
@@ -953,6 +981,7 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
  * @qd_bin: the queuing destination bin
  * @qd_prio: the queuing destination priority.
  */
+__rte_internal
 void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
 			  uint16_t qd_bin, uint8_t qd_prio);
 
@@ -978,6 +1007,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
  * held-active (order-preserving) FQ, whether the FQ should be parked instead of
  * being rescheduled.)
  */
+__rte_internal
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park);
 
@@ -987,6 +1017,7 @@ void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
  *
  * Return the fd pointer.
  */
+__rte_internal
 struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
 
 /**
@@ -997,6 +1028,7 @@ struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
  * This value is set into the response id before the enqueue command, which
  * gets overwritten by qbman once the enqueue command is complete.
  */
+__rte_internal
 void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
 
 /**
@@ -1009,6 +1041,7 @@ void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
  * copied into the enqueue response to determine if the command has been
  * completed, and response has been updated.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
 
 /**
@@ -1017,6 +1050,7 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
  *
  * Return 0 when the command is successful.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
 
 /**
@@ -1043,6 +1077,7 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 			       const struct qbman_eq_desc *d,
 			       const struct qbman_fd *fd,
@@ -1060,6 +1095,7 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 				  const struct qbman_eq_desc *d,
 				  struct qbman_fd **fd,
@@ -1076,6 +1112,7 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 				    const struct qbman_eq_desc *d,
 				    const struct qbman_fd *fd,
@@ -1117,12 +1154,14 @@ struct qbman_release_desc {
  * default/starting state.
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_clear(struct qbman_release_desc *d);
 
 /**
  * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
 
 /**
@@ -1141,6 +1180,7 @@ void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
  *
  * Return 0 for success, -EBUSY if the release command ring is not ready.
  */
+__rte_internal
 int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
 		      const uint64_t *buffers, unsigned int num_buffers);
 
@@ -1166,6 +1206,7 @@ int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh);
  * Return 0 for success, or negative error code if the acquire command
  * fails.
  */
+__rte_internal
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers);
 
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index fe45575046..1b7a5a45e9 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,4 +1,14 @@
 DPDK_20.0 {
+	local: *;
+};
+
+EXPERIMENTAL {
+	global:
+
+	rte_fslmc_vfio_mem_dmamap;
+};
+
+INTERNAL {
 	global:
 
 	dpaa2_affine_qbman_ethrx_swp;
@@ -11,7 +21,6 @@ DPDK_20.0 {
 	dpaa2_free_dpbp_dev;
 	dpaa2_free_dq_storage;
 	dpaa2_free_eq_descriptors;
-	dpaa2_get_qbman_swp;
 	dpaa2_io_portal;
 	dpaa2_svr_family;
 	dpaa2_virt_mode;
@@ -101,15 +110,6 @@ DPDK_20.0 {
 	rte_fslmc_driver_unregister;
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
-	rte_fslmc_vfio_dmamap;
 	rte_global_active_dqs_list;
 	rte_mcp_ptr_list;
-
-	local: *;
-};
-
-EXPERIMENTAL {
-	global:
-
-	rte_fslmc_vfio_mem_dmamap;
 };
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc259..5078b48ee1 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -162,6 +162,7 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
 
 /**
@@ -171,6 +172,7 @@ void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be unregistered.
  */
+__rte_internal
 void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
 
 /** Helper for DPAA2 device registration from driver (eth, crypto) instance */
@@ -189,6 +191,7 @@ RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
  *   A pointer to a rte_dpaa2_object structure describing the mc object
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_object_register(struct rte_dpaa2_object *object);
 
 /**
@@ -200,6 +203,7 @@ void rte_fslmc_object_register(struct rte_dpaa2_object *object);
  *   >=0 for count; 0 indicates either no device of the said type scanned or
  *   invalid device type.
  */
+__rte_internal
 uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type);
 
 /** Helper for DPAA2 object registration */
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v7 01/13] common/dpaax: move internal symbols into INTERNAL section
  @ 2020-05-15  5:08  3%     ` Hemant Agrawal
  2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 02/13] bus/fslmc: " Hemant Agrawal
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-15  5:08 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
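Note for reviewers: after this change the listed symbols carry the
INTERNAL version instead of DPDK_20.0, which can be verified with
"nm -D --with-symbol-versions librte_common_dpaax.so"; the functions
then show up as, e.g., dpaax_iova_table_update@@INTERNAL (illustrative
output).
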
 devtools/libabigail.abignore                      |  3 +++
 drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
 drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
 drivers/common/dpaax/rte_common_dpaax_version.map |  6 ++++--
 4 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index c9ee73cb3c..b1488d5549 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -48,3 +48,6 @@
         changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
 [suppress_variable]
         name = rte_crypto_aead_algorithm_strings
+; Ignore moving DPAAx stable functions to INTERNAL tag
+[suppress_file]
+	file_name_regexp = ^librte_common_dpaax\.
diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
index 960b421766..38d91a1afe 100644
--- a/drivers/common/dpaax/dpaa_of.h
+++ b/drivers/common/dpaax/dpaa_of.h
@@ -24,6 +24,7 @@
 #include <limits.h>
 #include <rte_common.h>
 #include <dpaa_list.h>
+#include <rte_compat.h>
 
 #ifndef OF_INIT_DEFAULT_PATH
 #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
@@ -102,6 +103,7 @@ struct dt_file {
 	uint64_t buf[OF_FILE_BUF_MAX >> 3];
 };
 
+__rte_internal
 const struct device_node *of_find_compatible_node(
 					const struct device_node *from,
 					const char *type __rte_unused,
@@ -113,32 +115,44 @@ const struct device_node *of_find_compatible_node(
 		dev_node != NULL; \
 		dev_node = of_find_compatible_node(dev_node, type, compatible))
 
+__rte_internal
 const void *of_get_property(const struct device_node *from, const char *name,
 			    size_t *lenp) __attribute__((nonnull(2)));
+__rte_internal
 bool of_device_is_available(const struct device_node *dev_node);
 
+
+__rte_internal
 const struct device_node *of_find_node_by_phandle(uint64_t ph);
 
+__rte_internal
 const struct device_node *of_get_parent(const struct device_node *dev_node);
 
+__rte_internal
 const struct device_node *of_get_next_child(const struct device_node *dev_node,
 					    const struct device_node *prev);
 
+__rte_internal
 const void *of_get_mac_address(const struct device_node *np);
 
 #define for_each_child_node(parent, child) \
 	for (child = of_get_next_child(parent, NULL); child != NULL; \
 			child = of_get_next_child(parent, child))
 
+
+__rte_internal
 uint32_t of_n_addr_cells(const struct device_node *dev_node);
 uint32_t of_n_size_cells(const struct device_node *dev_node);
 
+__rte_internal
 const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
 			       uint64_t *size, uint32_t *flags);
 
+__rte_internal
 uint64_t of_translate_address(const struct device_node *dev_node,
 			      const uint32_t *addr) __attribute__((nonnull));
 
+__rte_internal
 bool of_device_is_compatible(const struct device_node *dev_node,
 			     const char *compatible);
 
@@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct device_node *dev_node,
  * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
  * The path should usually be "/proc/device-tree".
  */
+__rte_internal
 int of_init_path(const char *dt_path);
 
 /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index fc3b9e7a8f..230fba8ba0 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
 #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
 
 /* APIs exposed */
+__rte_internal
 int dpaax_iova_table_populate(void);
+__rte_internal
 void dpaax_iova_table_depopulate(void);
+__rte_internal
 int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
+__rte_internal
 void dpaax_iova_table_dump(void);
 
 static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index f72eba761d..14b507ad13 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,4 +1,8 @@
 DPDK_20.0 {
+	local: *;
+};
+
+INTERNAL {
 	global:
 
 	dpaax_iova_table_depopulate;
@@ -18,6 +22,4 @@ DPDK_20.0 {
 	of_init_path;
 	of_n_addr_cells;
 	of_translate_address;
-
-	local: *;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics
  2020-05-14 20:16  0%                   ` Mattias Rönnblom
@ 2020-05-14 21:00  0%                     ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2020-05-14 21:00 UTC (permalink / raw)
  To: Mattias Rönnblom, Morten Brørup, Stephen Hemminger, Phil Yang
  Cc: thomas, dev, bruce.richardson, ferruh.yigit, hemant.agrawal,
	jerinj, ktraynor, konstantin.ananyev, maxime.coquelin,
	olivier.matz, harry.van.haaren, erik.g.carrillo, nd,
	David Christensen, david.marchand, Song Zhu, Gavin Hu,
	Jeff Brownlee, Philippe Robin, Pravin Kantak, Chen, Zhaoyan,
	Honnappa Nagarahalli, nd

<snip>

> Subject: Re: [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics
> 
> On 2020-05-14 10:34, Morten Brørup wrote:
> > + Added people from the related discussion regarding the ARM roadmap
> [https://mails.dpdk.org/archives/dev/2020-April/162580.html].
> >
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Wednesday, May 13, 2020 10:17 PM
> >>
> >> On 2020-05-13 21:40, Honnappa Nagarahalli wrote:
> >>> <snip>
> >>>
> >>>>>> Subject: Re: [PATCH v4 4/4] eal/atomic: add wrapper for c11
> >> atomics
> >>>>>> On Tue, May 12, 2020 at 4:03 pm, Phil Yang
> >> <mailto:phil.yang@arm.com>
> >>>>>> wrote:
> >>>>>>
> >>>>>> parameter. Signed-off-by: Phil Yang <mailto:phil.yang@arm.com>
> >>>>>>
> >>>>>>
> >>>>>> What is the purpose of having rte_atomic at all?
> >>>>>> Is this level of indirection really helping?
> >>>>>> [HONNAPPA] (not sure why this email has html format, converted to
> >>>>>> text
> >>>>>> format)
> >>>>>> I believe you meant, why not use the __atomic_xxx built-ins
> >> directly?
> >>>>>> The only reason for now is handling of
> >>>>>> __atomic_thread_fence(__ATOMIC_SEQ_CST) for x86. This is
> >> equivalent
> >>>>>> to rte_smp_mb which has an optimized implementation for x86.
> >>>>>> According to Konstantin, the compiler does not generate optimal
> >> code.
> >>>>>> Wrapping that built-in alone is going to be confusing.
> >>>>>>
> >>>>>> The wrappers also allow us to have our own implementation using
> >>>>>> inline assembly for compilers versions that do not support C11
> >> atomic
> >>>>>> built- ins. But, I do not know if there is a need to support
> >>>>>> those
> >> versions.
> >>>>> If I recall correctly, someone mentioned that one (or more) of the
> >> aging
> >>>> enterprise Linux distributions don't include a compiler with C11
> >> atomics.
> >>>>> I think Stephen is onto something here...
> >>>>>
> >>>>> It is silly to add wrappers like this, if the only purpose is to
> >> support
> >>>> compilers and distributions that don't properly support an official
> >> C standard
> >>>> which is nearly a decade old. The quality and quantity of the DPDK
> >>>> documentation for these functions (including examples, discussions
> >> on Stack
> >>>> Overflow, etc.) will be inferior to the documentation of the
> >> standard C11
> >>>> atomics, which increases the probability of incorrect use.
> >>>>
> >>>>
> >>>> What's being used in DPDK today, and what's being wrapped here, is
> >> not
> >>>> standard C11 atomics - it's a bunch of GCC built-ins. Nothing in
> >>>> the
> >> __
> >>>> namespace is in the standard. It's reserved for the implementation
> >> (e.g.
> >>>> compiler).
> >>> I have tried to understand what it mean by 'built-ins', but I have
> >> not got a good answer. So, does it mean that the built-in function
> >> (same symbol and API interface) may not be available in another C
> >> compiler? IMO, this is what matters for DPDK.
> >>> Currently, the same built-in functions are available in GCC and
> >> Clang.
> >>
> >>
> >>   From what I understand, "built-ins" is GCC terminology for
> >> non-standard, implementation-specific intrinsic functions, built into
> >> the compiler. They all reside in the __* namespace.
> >>
> >>
> >> Since GCC is the industry standard, other compilers are likely to
> >> follow, including built-in functions.
> >>
> > Timeline:
> >
> > December 2011: The C11 standard was published
> > [http://www.open-std.org/jtc1/sc22/wg14/www/standards.html].
> >
> > March 2012: GCC 4.7 was released, introducing the __atomic built-ins
> [https://gcc.gnu.org/gcc-4.7/changes.html,
> https://www.gnu.org/software/gcc/gcc-4.7/].
> >
> > March 2013: GCC 4.8 was released [https://www.gnu.org/software/gcc/gcc-
> 4.8/].
> >
> > April 2014: GCC 4.9 was released, introducing C11 atomics (incl.
> <stdatomic.h>) [https://gcc.gnu.org/gcc-4.9/changes.html,
> https://www.gnu.org/software/gcc/gcc-4.9/].
> >
> > June 2014: RHEL7 was released
> > [https://access.redhat.com/articles/3078]. (RHEL7 Beta was released in
> > December 2013, which probably explains why the GA release doesn’t
> > include GCC 4.9.)
> >
> > May 2019 (i.e. one year ago): RHEL8 was released
> [https://access.redhat.com/articles/3078].
> >
> >
> > RHEL7 includes GCC 4.8 only [https://access.redhat.com/solutions/19458],
> and apparently RHEL7 has not been updated to GCC 4.9 with any of its minor
> releases.
> >
> > Should the DPDK project be stuck on "industry standard" GCC atomics,
> unable to use the decade old "official standard" C11 atomics, only because
> we want to support a six year old enterprise Linux distribution? Red Hat
> released a new enterprise version a year ago... perhaps it's time for their
> customers to upgrade, if they want to use the latest and greatest version of
> DPDK.
> 
> 
> Just to be clear - I wasn't arguing for the direct use of GCC built-ins.
> 
> 
> The GCC __atomic built-ins (called directly, or via a DPDK wrapper) do have
> some advantages over C11 atomics. One is that GCC supports 128-bit atomic
> operations, on certain architectures. <rte_atomic.h> already has a 128-bit
> compare-exchange. Also, since the GCC built-ins seem not to bother with
> architectures where atomics would be implemented by means of a lock, they
> are a little easier to use than <stdatomic.h>.
IMO, we should not focus on built-ins vs APIs.

1) Built-ins are supported by both GCC and Clang today. If there is a new compiler in the future, most likely it will support these built-ins.
2) I like the fact that the built-ins always require the memory order parameter. stdatomic.h provides some APIs which do not take a memory order (just like the rte_atomicNN_xxx APIs), which would require us to add checks to the checkpatch script to avoid using such APIs (see the sketch after this list).
3) If we need to replace the built-ins with APIs in the future, it is a simple search and replace.
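For illustration, a minimal sketch of the contrast in 2) (the names here are illustrative only, not DPDK code):

#include <stdatomic.h>
#include <stdint.h>

static atomic_uint_fast32_t c11_counter;
static uint32_t builtin_counter;

static void increment_both(void)
{
	/* C11 convenience API: the memory order is implicitly
	 * __ATOMIC_SEQ_CST, easy to pick up by accident in fast-path code. */
	atomic_fetch_add(&c11_counter, 1);

	/* GCC/Clang built-in: the memory order must be spelled out. */
	__atomic_fetch_add(&builtin_counter, 1, __ATOMIC_RELAXED);
}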

If the decision to go with the built-ins turns out to be a bad one, it can be corrected easily.

I think we should focus on the compiler not generating optimal code for __atomic_thread_fence(__ATOMIC_SEQ_CST) for x86. This is the main reason for these wrappers. From what I have seen, DPDK has tried to provide solutions internally for performance issues caused by compilers.
Given that we have provided 'rte_atomic128_cmp_exchange' (added because neither compiler was generating the 128b compare-exchange), I would say we should just provide a wrapper for the '__atomic_thread_fence' built-in.
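As a rough sketch of that single wrapper, assuming the x86 fast path reuses the locked-add idiom rte_smp_mb() is built on (the function name is illustrative, not a committed API):

static inline void
rte_atomic_thread_fence(int memorder)
{
#ifdef RTE_ARCH_X86_64
	if (memorder == __ATOMIC_SEQ_CST) {
		/* Full barrier via a locked RMW below the stack pointer,
		 * cheaper than the mfence the compiler emits for a
		 * seq_cst fence. */
		__asm__ volatile("lock addl $0, -128(%%rsp)" ::: "memory");
		return;
	}
#endif
	__atomic_thread_fence(memorder);
}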

> 
> 
> > Are all the other tools required for building DPDK (in the required versions)
> included in RHEL7, or do we require developers to install/upgrade any other
> tools anyway? If so, why not also GCC? DPDK can be used in a cross
> compilation environment, so we are not requiring RHEL7 users to replace
> their GCC 4.7 default compiler.
I have not used RHEL7. The Intel CI uses RHEL7; maybe they can answer.

> >
> >
> > Furthermore, the DPDK Documentation specifies GCC 4.9+ as a system
> > requirement
> > [https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#compilation-of-the-dpdk].
> > If we are stuck on GCC 4.8, the documentation should be updated.
This is interesting. Then the CI systems should be upgraded to use GCC 4.9+.

> >
> >
> >>>>> And if some compiler generates code that is suboptimal for a user,
> >> then it
> >>>> should be the choice of the user to either accept it or use a
> >>>> better
> >> compiler.
> >>>> Using a suboptimal compiler will not only affect the user's DPDK
> >> applications,
> >>>> but all applications developed by the user. And if he accepts it
> >>>> for
> >> his other
> >>>> applications, he will also accept it for his DPDK applications.
> >>>>> We could introduce some sort of marker or standardized comment to
> >>>> indicate when functions only exist for backwards compatibility with
> >> ancient
> >>>> compilers and similar, with a reference to documentation describing
> >> why. And
> >>>> when the documented preconditions are no longer relevant, e.g. when
> >> those
> >>>> particular enterprise Linux distributions become obsolete, these
> >> functions
> >>>> become obsolete too, and should be removed. However, getting rid of
> >>>> obsolete cruft will break the ABI. In other words: Added cruft will
> >> never be
> >>>> removed again, so think twice before adding.
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics
  @ 2020-05-14 20:16  0%                   ` Mattias Rönnblom
  2020-05-14 21:00  0%                     ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2020-05-14 20:16 UTC (permalink / raw)
  To: Morten Brørup, Honnappa Nagarahalli, Stephen Hemminger, Phil Yang
  Cc: thomas, dev, bruce.richardson, ferruh.yigit, hemant.agrawal,
	jerinj, ktraynor, konstantin.ananyev, maxime.coquelin,
	olivier.matz, harry.van.haaren, erik.g.carrillo, nd,
	David Christensen, david.marchand, Song Zhu, Gavin Hu,
	Jeff Brownlee, Philippe Robin, Pravin Kantak, Chen, Zhaoyan

On 2020-05-14 10:34, Morten Brørup wrote:
> + Added people from the related discussion regarding the ARM roadmap [https://mails.dpdk.org/archives/dev/2020-April/162580.html].
>
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Wednesday, May 13, 2020 10:17 PM
>>
>> On 2020-05-13 21:40, Honnappa Nagarahalli wrote:
>>> <snip>
>>>
>>>>>> Subject: Re: [PATCH v4 4/4] eal/atomic: add wrapper for c11
>> atomics
>>>>>> On Tue, May 12, 2020 at 4:03 pm, Phil Yang
>> <mailto:phil.yang@arm.com>
>>>>>> wrote:
>>>>>>
>>>>>> parameter. Signed-off-by: Phil Yang <mailto:phil.yang@arm.com>
>>>>>>
>>>>>>
>>>>>> What is the purpose of having rte_atomic at all?
>>>>>> Is this level of indirection really helping?
>>>>>> [HONNAPPA] (not sure why this email has html format, converted to
>>>>>> text
>>>>>> format)
>>>>>> I believe you meant, why not use the __atomic_xxx built-ins
>> directly?
>>>>>> The only reason for now is handling of
>>>>>> __atomic_thread_fence(__ATOMIC_SEQ_CST) for x86. This is
>> equivalent
>>>>>> to rte_smp_mb which has an optimized implementation for x86.
>>>>>> According to Konstantin, the compiler does not generate optimal
>> code.
>>>>>> Wrapping that built-in alone is going to be confusing.
>>>>>>
>>>>>> The wrappers also allow us to have our own implementation using
>>>>>> inline assembly for compiler versions that do not support C11
>> atomic
>>>>>> built-ins. But, I do not know if there is a need to support those
>> versions.
>>>>> If I recall correctly, someone mentioned that one (or more) of the
>> aging
>>>> enterprise Linux distributions don't include a compiler with C11
>> atomics.
>>>>> I think Stephen is onto something here...
>>>>>
>>>>> It is silly to add wrappers like this, if the only purpose is to
>> support
>>>> compilers and distributions that don't properly support an official
>> C standard
>>>> which is nearly a decade old. The quality and quantity of the DPDK
>>>> documentation for these functions (including examples, discussions
>> on Stack
>>>> Overflow, etc.) will be inferior to the documentation of the
>> standard C11
>>>> atomics, which increases the probability of incorrect use.
>>>>
>>>>
>>>> What's being used in DPDK today, and what's being wrapped here, is
>> not
>>>> standard C11 atomics - it's a bunch of GCC built-ins. Nothing in the
>> __
>>>> namespace is in the standard. It's reserved for the implementation
>> (e.g.
>>>> compiler).
>>> I have tried to understand what is meant by 'built-ins', but I have
>> not got a good answer. So, does it mean that the built-in function
>> (same symbol and API interface) may not be available in another C
>> compiler? IMO, this is what matters for DPDK.
>>> Currently, the same built-in functions are available in GCC and
>> Clang.
>>
>>
>>   From what I understand, "built-ins" is GCC terminology for
>> non-standard, implementation-specific intrinsic functions, built into
>> the compiler. They all reside in the __* namespace.
>>
>>
>> Since GCC is the industry standard, other compilers are likely to
>> follow, including built-in functions.
>>
> Timeline:
>
> December 2011: The C11 standard was published [http://www.open-std.org/jtc1/sc22/wg14/www/standards.html].
>
> March 2012: GCC 4.7 was released, introducing the __atomic built-ins [https://gcc.gnu.org/gcc-4.7/changes.html, https://www.gnu.org/software/gcc/gcc-4.7/].
>
> March 2013: GCC 4.8 was released [https://www.gnu.org/software/gcc/gcc-4.8/].
>
> April 2014: GCC 4.9 was released, introducing C11 atomics (incl. <stdatomic.h>) [https://gcc.gnu.org/gcc-4.9/changes.html, https://www.gnu.org/software/gcc/gcc-4.9/].
>
> June 2014: RHEL7 was released [https://access.redhat.com/articles/3078]. (RHEL7 Beta was released in December 2013, which probably explains why the GA release doesn’t include GCC 4.9.)
>
> May 2019 (i.e. one year ago): RHEL8 was released [https://access.redhat.com/articles/3078].
>
>
> RHEL7 includes GCC 4.8 only [https://access.redhat.com/solutions/19458], and apparently RHEL7 has not been updated to GCC 4.9 with any of its minor releases.
>
> Should the DPDK project be stuck on "industry standard" GCC atomics, unable to use the decade old "official standard" C11 atomics, only because we want to support a six year old enterprise Linux distribution? Red Hat released a new enterprise version a year ago... perhaps it's time for their customers to upgrade, if they want to use the latest and greatest version of DPDK.


Just to be clear - I wasn't arguing for the direct use of GCC built-ins.


The GCC __atomic built-ins (called directly, or via a DPDK wrapper) do 
have some advantages over C11 atomics. One is that GCC supports 128-bit 
atomic operations, on certain architectures. <rte_atomic.h> already has 
a 128-bit compare-exchange. Also, since the GCC built-ins seem not to 
bother with architectures where atomics would be implemented by means of 
a lock, they are a little easier to use than <stdatomic.h>.
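For reference, a minimal sketch of using that 128-bit compare-exchange (the surrounding function is illustrative; signature and rte_int128_t layout as in <rte_atomic.h> at the time):

#include <rte_atomic.h>

/* Atomically install a new {pointer, tag} pair in a 128-bit slot. */
static int
swap_pair(rte_int128_t *slot, uint64_t ptr, uint64_t tag)
{
	rte_int128_t expected = *slot;	/* non-atomic snapshot */
	rte_int128_t desired;

	desired.val[0] = ptr;
	desired.val[1] = tag;

	/* weak=0 requests a strong CAS; on failure 'expected' is updated
	 * with the value actually observed in 'slot'. */
	return rte_atomic128_cmp_exchange(slot, &expected, &desired, 0,
					  __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}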


> Are all the other tools required for building DPDK (in the required versions) included in RHEL7, or do we require developers to install/upgrade any other tools anyway? If so, why not also GCC? DPDK can be used in a cross compilation environment, so we are not requiring RHEL7 users to replace their GCC 4.7 default compiler.
>
>
> Furthermore, the DPDK Documentation specifies GCC 4.9+ as a system requirement [https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#compilation-of-the-dpdk]. If we are stuck on GCC 4.8, the documentation should be updated.
>
>
>>>>> And if some compiler generates code that is suboptimal for a user,
>> then it
>>>> should be the choice of the user to either accept it or use a better
>> compiler.
>>>> Using a suboptimal compiler will not only affect the user's DPDK
>> applications,
>>>> but all applications developed by the user. And if he accepts it for
>> his other
>>>> applications, he will also accept it for his DPDK applications.
>>>>> We could introduce some sort of marker or standardized comment to
>>>> indicate when functions only exist for backwards compatibility with
>> ancient
>>>> compilers and similar, with a reference to documentation describing
>> why. And
>>>> when the documented preconditions are no longer relevant, e.g. when
>> those
>>>> particular enterprise Linux distributions become obsolete, these
>> functions
>>>> become obsolete too, and should be removed. However, getting rid of
>>>> obsolete cruft will break the ABI. In other words: Added cruft will
>> never be
>>>> removed again, so think twice before adding.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 01/12] common/dpaax: move internal symbols into INTERNAL section
  2020-05-14 16:28  3%                         ` David Marchand
@ 2020-05-14 17:15  0%                           ` Hemant Agrawal (OSS)
  2020-05-15  9:26  0%                             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Hemant Agrawal (OSS) @ 2020-05-14 17:15 UTC (permalink / raw)
  To: David Marchand, Hemant Agrawal (OSS), Ray Kinsella; +Cc: dev, Thomas Monjalon

> 
> On Thu, May 14, 2020 at 3:31 PM David Marchand
> <david.marchand@redhat.com> wrote:
> >
> > On Thu, May 14, 2020 at 2:39 PM Hemant Agrawal (OSS)
> > <hemant.agrawal@oss.nxp.com> wrote:
> > >
> > > [Hemant] this is working fine for pmd_dpaa but not for pmd_dpaa2
> > >
> > > I removed the filename_exp and introduced function-based names.
> > > Now the issue is the following warning:
> > > SONAME changed from 'librte_pmd_dpaa2.so.20.0' to
> > > 'librte_pmd_dpaa2.so.0.200.2'
> > >
> > > The primary reason is that now pmd_dpaa2 has no symbol left for the
> > > 20.0 section.
> > > The following is not helping:
> > > [suppress_file]
> > >         soname_regexp = ^librte_pmd_dpaa2
> > > so, it seems for now, the filename_exp is the only option
> >
> > That's interesting.
> > Because I wondered about this point when reviewing __rte_internal.
> > For components providing only internal symbols like components
> > providing only experimental symbols, the build framework will select a
> > soname with .0.200.x.
> >
> > Here, your dpaa2 driver was seen as a stable library so far.
> > Moving everything to internal changes this and the build framework
> > changes the soname to non stable.
> 
> Looking at a v19.11 testpmd binary:
> $ readelf -d $HOME/abi/v19.11/build-gcc-shared/usr/local/bin/dpdk-testpmd
> |grep dpaa
>  0x0000000000000001 (NEEDED)             Shared library:
> [librte_bus_dpaa.so.20.0]
>  0x0000000000000001 (NEEDED)             Shared library:
> [librte_common_dpaax.so.20.0]
>  0x0000000000000001 (NEEDED)             Shared library:
> [librte_mempool_dpaa.so.20.0]
>  0x0000000000000001 (NEEDED)             Shared library:
> [librte_pmd_dpaa.so.20.0]
> 
> Changing the soname would break this.
> 
> > You could keep an empty DPDK_20.0 block to avoid this and the soname
> > will be kept as is.

[Hemant] Yes, I was thinking about it but missed making this change while sending the patch. Will do it asap.
> 
> We will have to maintain such soname for all dpaa libraries until 20.11.
> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 01/12] common/dpaax: move internal symbols into INTERNAL section
  @ 2020-05-14 16:28  3%                         ` David Marchand
  2020-05-14 17:15  0%                           ` Hemant Agrawal (OSS)
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2020-05-14 16:28 UTC (permalink / raw)
  To: Hemant Agrawal (OSS), Ray Kinsella; +Cc: dev, Thomas Monjalon

On Thu, May 14, 2020 at 3:31 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Thu, May 14, 2020 at 2:39 PM Hemant Agrawal (OSS)
> <hemant.agrawal@oss.nxp.com> wrote:
> >
> > [Hemant] this is working fine for pmd_dpaa but not for pmd_dpaa2
> >
> > I removed the filename_exp and introduced function-based names.
> > Now the issue is the following warning:
> > SONAME changed from 'librte_pmd_dpaa2.so.20.0' to 'librte_pmd_dpaa2.so.0.200.2'
> >
> > The primary reason is that now pmd_dpaa2 has no symbol left for the 20.0 section.
> > The following is not helping:
> > [suppress_file]
> >         soname_regexp = ^librte_pmd_dpaa2
> > so, it seems for now, the filename_exp is the only option
>
> That's interesting.
> Because I wondered about this point when reviewing __rte_internal.
> For components providing only internal symbols like components
> providing only experimental symbols, the build framework will select a
> soname with .0.200.x.
>
> Here, your dpaa2 driver was seen as a stable library so far.
> Moving everything to internal changes this and the build framework
> changes the soname to non stable.

Looking at a v19.11 testpmd binary:
$ readelf -d $HOME/abi/v19.11/build-gcc-shared/usr/local/bin/dpdk-testpmd
|grep dpaa
 0x0000000000000001 (NEEDED)             Shared library:
[librte_bus_dpaa.so.20.0]
 0x0000000000000001 (NEEDED)             Shared library:
[librte_common_dpaax.so.20.0]
 0x0000000000000001 (NEEDED)             Shared library:
[librte_mempool_dpaa.so.20.0]
 0x0000000000000001 (NEEDED)             Shared library:
[librte_pmd_dpaa.so.20.0]

Changing the soname would break this.

> You could keep an empty DPDK_20.0 block to avoid this and the soname
> will be kept as is.

We will have to maintain such sonames for all the dpaa libraries until 20.11.
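For reference, keeping the soname stable then boils down to an empty stable block next to the new INTERNAL one in the .map file (a sketch, with the dpaa2 symbols as an example):

DPDK_20.0 {
	local: *;
};

INTERNAL {
	global:

	dpaa2_eth_eventq_attach;
	dpaa2_eth_eventq_detach;
};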


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4] meter: provide experimental alias of API for old apps
    @ 2020-05-14 16:11  4% ` Ferruh Yigit
  2020-05-15 14:36 12%   ` [dpdk-dev] [PATCH v5] abi: " Ray Kinsella
                     ` (2 more replies)
  2020-05-18 18:30  2% ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
  2020-05-19 12:16 10% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
  3 siblings, 3 replies; 200+ results
From: Ferruh Yigit @ 2020-05-14 16:11 UTC (permalink / raw)
  To: Ray Kinsella, Neil Horman, Cristian Dumitrescu, Eelco Chaudron
  Cc: dev, Ferruh Yigit, Thomas Monjalon, David Marchand, stable,
	Luca Boccassi, Bruce Richardson, Ian Stokes, Andrzej Ostruszka

In v20.02 some meter APIs were matured and their symbols moved from the
EXPERIMENTAL to the DPDK_20.0.1 block.

This can break applications that were using the mentioned APIs on
v19.11. Although there is no modification of the APIs and the action is
positive and matures the APIs, the effect can be negative for
applications.

Since experimental APIs can change or go away without notice as part of
the contract, to prevent the negative effect that maturing an
experimental API may have, a process update has already been suggested,
which enables aliasing without forcing it:
https://patches.dpdk.org/patch/65863/

This patch provides aliasing by duplicating the existing and versioned
symbols as experimental.

Since the symbols moved from the DPDK_20.0.1 to the DPDK_21 block in
v20.05, the aliasing is done between EXPERIMENTAL and DPDK_21.

Also, the following changes were done to enable aliasing:

Created VERSION_SYMBOL_EXPERIMENTAL helper macro.

Updated the 'check-symbols.sh' buildtool, which was complaining that a
symbol is in the EXPERIMENTAL tag in the .map file but not in the
.experimental section (i.e. the __rte_experimental tag is missing).
Updated the tool so it won't complain if the symbol in the EXPERIMENTAL
tag is duplicated in some other (versioned) block of the .map file.

Enabled function versioning for meson build for the library.

Fixes: 30512af820fe ("meter: remove experimental flag from RFC4115 trTCM API")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Thomas Monjalon <thomas@monjalon.net>
Cc: Luca Boccassi <bluca@debian.org>
Cc: David Marchand <david.marchand@redhat.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>
Cc: Ian Stokes <ian.stokes@intel.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Andrzej Ostruszka <amo@semihalf.com>
Cc: Ray Kinsella <mdr@ashroe.eu>

v2:
* Commit log updated

v3:
* added suggested comment to VERSION_SYMBOL_EXPERIMENTAL macro

v4:
* update script name in commit log, remove empty line
---
 buildtools/check-symbols.sh                   |  3 +-
 .../include/rte_function_versioning.h         |  9 +++
 lib/librte_meter/meson.build                  |  1 +
 lib/librte_meter/rte_meter.c                  | 59 ++++++++++++++++++-
 lib/librte_meter/rte_meter_version.map        |  8 +++
 5 files changed, 76 insertions(+), 4 deletions(-)

diff --git a/buildtools/check-symbols.sh b/buildtools/check-symbols.sh
index 3df57c322c..e407553a34 100755
--- a/buildtools/check-symbols.sh
+++ b/buildtools/check-symbols.sh
@@ -26,7 +26,8 @@ ret=0
 for SYM in `$LIST_SYMBOL -S EXPERIMENTAL $MAPFILE |cut -d ' ' -f 3`
 do
 	if grep -q "\.text.*[[:space:]]$SYM$" $DUMPFILE &&
-		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE
+		! grep -q "\.text\.experimental.*[[:space:]]$SYM$" $DUMPFILE &&
+		$LIST_SYMBOL -s $SYM $MAPFILE | grep -q EXPERIMENTAL
 	then
 		cat >&2 <<- END_OF_MESSAGE
 		$SYM is not flagged as experimental
diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
index b9f862d295..f588f2643b 100644
--- a/lib/librte_eal/include/rte_function_versioning.h
+++ b/lib/librte_eal/include/rte_function_versioning.h
@@ -46,6 +46,14 @@
  */
 #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
 
+/*
+ * VERSION_SYMBOL_EXPERIMENTAL
+ * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
+ * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
+ * to provide an alias to experimental for some time.
+ */
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
+
 /*
  * BIND_DEFAULT_SYMBOL
  * Creates a symbol version entry instructing the linker to bind references to
@@ -79,6 +87,7 @@
  * No symbol versioning in use
  */
 #define VERSION_SYMBOL(b, e, n)
+#define VERSION_SYMBOL_EXPERIMENTAL(b, e)
 #define __vsym
 #define BIND_DEFAULT_SYMBOL(b, e, n)
 #define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))
diff --git a/lib/librte_meter/meson.build b/lib/librte_meter/meson.build
index 646fd4d43f..fce0368437 100644
--- a/lib/librte_meter/meson.build
+++ b/lib/librte_meter/meson.build
@@ -3,3 +3,4 @@
 
 sources = files('rte_meter.c')
 headers = files('rte_meter.h')
+use_function_versioning = true
diff --git a/lib/librte_meter/rte_meter.c b/lib/librte_meter/rte_meter.c
index da01429a8b..c600b05064 100644
--- a/lib/librte_meter/rte_meter.c
+++ b/lib/librte_meter/rte_meter.c
@@ -9,6 +9,7 @@
 #include <rte_common.h>
 #include <rte_log.h>
 #include <rte_cycles.h>
+#include <rte_function_versioning.h>
 
 #include "rte_meter.h"
 
@@ -119,8 +120,8 @@ rte_meter_trtcm_config(struct rte_meter_trtcm *m,
 	return 0;
 }
 
-int
-rte_meter_trtcm_rfc4115_profile_config(
+static int
+rte_meter_trtcm_rfc4115_profile_config_(
 	struct rte_meter_trtcm_rfc4115_profile *p,
 	struct rte_meter_trtcm_rfc4115_params *params)
 {
@@ -145,7 +146,35 @@ rte_meter_trtcm_rfc4115_profile_config(
 }
 
 int
-rte_meter_trtcm_rfc4115_config(
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_s(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_profile_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_profile_config(struct rte_meter_trtcm_rfc4115_profile *p,
+		struct rte_meter_trtcm_rfc4115_params *params), rte_meter_trtcm_rfc4115_profile_config_s);
+
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params);
+int
+rte_meter_trtcm_rfc4115_profile_config_e(
+	struct rte_meter_trtcm_rfc4115_profile *p,
+	struct rte_meter_trtcm_rfc4115_params *params)
+{
+	return rte_meter_trtcm_rfc4115_profile_config_(p, params);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_profile_config, _e);
+
+static int
+rte_meter_trtcm_rfc4115_config_(
 	struct rte_meter_trtcm_rfc4115 *m,
 	struct rte_meter_trtcm_rfc4115_profile *p)
 {
@@ -160,3 +189,27 @@ rte_meter_trtcm_rfc4115_config(
 
 	return 0;
 }
+
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_s(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return rte_meter_trtcm_rfc4115_config_(m, p);
+}
+BIND_DEFAULT_SYMBOL(rte_meter_trtcm_rfc4115_config, _s, 21);
+MAP_STATIC_SYMBOL(int rte_meter_trtcm_rfc4115_config(struct rte_meter_trtcm_rfc4115 *m,
+		 struct rte_meter_trtcm_rfc4115_profile *p), rte_meter_trtcm_rfc4115_config_s);
+
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p);
+int
+rte_meter_trtcm_rfc4115_config_e(struct rte_meter_trtcm_rfc4115 *m,
+	struct rte_meter_trtcm_rfc4115_profile *p)
+{
+	return rte_meter_trtcm_rfc4115_config_(m, p);
+}
+VERSION_SYMBOL_EXPERIMENTAL(rte_meter_trtcm_rfc4115_config, _e);
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 2c7dadbcac..b493bcebe9 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -20,4 +20,12 @@ DPDK_21 {
 	rte_meter_trtcm_rfc4115_color_blind_check;
 	rte_meter_trtcm_rfc4115_config;
 	rte_meter_trtcm_rfc4115_profile_config;
+
 } DPDK_20.0;
+
+EXPERIMENTAL {
+       global:
+
+	rte_meter_trtcm_rfc4115_config;
+	rte_meter_trtcm_rfc4115_profile_config;
+};
-- 
2.25.4


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3] meter: provide experimental alias of API for old apps
  2020-05-14 15:32  0%   ` David Marchand
  2020-05-14 15:56  0%     ` Ray Kinsella
@ 2020-05-14 16:07  0%     ` Ferruh Yigit
  1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2020-05-14 16:07 UTC (permalink / raw)
  To: David Marchand, Ray Kinsella
  Cc: Neil Horman, Cristian Dumitrescu, Eelco Chaudron, dev,
	Thomas Monjalon, dpdk stable, Luca Boccassi, Bruce Richardson,
	Ian Stokes, Andrzej Ostruszka

On 5/14/2020 4:32 PM, David Marchand wrote:
> On Thu, May 14, 2020 at 1:52 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>> On v20.02 some meter APIs have been matured and symbols moved from
>> EXPERIMENTAL to DPDK_20.0.1 block.
>>
>> This can break the applications that were using these mentioned APIs on
>> v19.11. Although there is no modification on the APIs and the action is
>> positive and matures the APIs, the effect can be negative for
>> applications.
>>
>> Since experimental APIs can change or go away without notice as part of
>> contract, to prevent this negative effect that may occur by maturing
>> experimental API, a process update already suggested, which enables
>> aliasing without forcing it:
>> https://patches.dpdk.org/patch/65863/
>>
>> This patch provides aliasing by duplicating the existing and versioned
>> symbols as experimental.
>>
>> Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
>> aliasing is done between EXPERIMENTAL and DPDK_21.
>>
>> Also, the following changes were done to enable aliasing:
>>
>> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
> 
> This helper (+ script update) must come with the process update: the
> macro is referenced in its v5 revision.

The macro is an implementation detail, and this patch does the implementation.
There is a dependency on the process update patch, but that patch doesn't need
to define what the macro should look like.

Let me send a new version with the updates below; we can discuss this more if
required. A sketch of what the aliasing does at the symbol level follows.
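Stripped of the macros, the aliasing boils down to the following (a minimal sketch with a hypothetical symbol 'foo'; both version nodes must also exist in the linker version script):

/* One shared implementation. */
static int foo_impl(int x) { return x + 1; }

int foo_v21(int x);
int foo_v21(int x) { return foo_impl(x); }
/* Default version: '@@' binds newly linked applications to foo@DPDK_21. */
__asm__(".symver foo_v21, foo@@DPDK_21");

int foo_exp(int x);
int foo_exp(int x) { return foo_impl(x); }
/* Alias: a single '@' keeps foo@EXPERIMENTAL resolvable for old binaries. */
__asm__(".symver foo_exp, foo@EXPERIMENTAL");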

> 
> 
>>
>> Updated the 'check-experimental-syms.sh' buildtool, which was
> 
> Nit: the script name changed.
> 
> 
>> diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
>> index b9f862d295..534a8bff95 100644
>> --- a/lib/librte_eal/include/rte_function_versioning.h
>> +++ b/lib/librte_eal/include/rte_function_versioning.h
>> @@ -46,6 +46,15 @@
>>   */
>>  #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
>>
>> +
> 
> No need for this newline.
> 
>> +/*
>> + * VERSION_SYMBOL_EXPERIMENTAL
>> + * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
>> + * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
>> + * to provide an alias to experimental for some time.
>> + */
>> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
>> +
>>  /*
>>   * BIND_DEFAULT_SYMBOL
>>   * Creates a symbol version entry instructing the linker to bind references to
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] meter: provide experimental alias of API for old apps
  2020-05-14 15:32  0%   ` David Marchand
@ 2020-05-14 15:56  0%     ` Ray Kinsella
  2020-05-14 16:07  0%     ` Ferruh Yigit
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2020-05-14 15:56 UTC (permalink / raw)
  To: David Marchand, Ferruh Yigit
  Cc: Neil Horman, Cristian Dumitrescu, Eelco Chaudron, dev,
	Thomas Monjalon, dpdk stable, Luca Boccassi, Bruce Richardson,
	Ian Stokes, Andrzej Ostruszka



On 14/05/2020 16:32, David Marchand wrote:
> On Thu, May 14, 2020 at 1:52 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>> On v20.02 some meter APIs have been matured and symbols moved from
>> EXPERIMENTAL to DPDK_20.0.1 block.
>>
>> This can break the applications that were using these mentioned APIs on
>> v19.11. Although there is no modification on the APIs and the action is
>> positive and matures the APIs, the effect can be negative for
>> applications.
>>
>> Since experimental APIs can change or go away without notice as part of
>> contract, to prevent this negative effect that may occur by maturing
>> experimental API, a process update already suggested, which enables
>> aliasing without forcing it:
>> https://patches.dpdk.org/patch/65863/
>>
>> This patch provides aliasing by duplicating the existing and versioned
>> symbols as experimental.
>>
>> Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
>> aliasing is done between EXPERIMENTAL and DPDK_21.
>>
>> Also, the following changes were done to enable aliasing:
>>
>> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.
> 
> This helper (+ script update) must come with the process update: the
> macro is referenced in its v5 revision.

You mean you want to consolidate both into a single commit?

> 
> 
>>
>> Updated the 'check-experimental-syms.sh' buildtool, which was
> 
> Nit: the script name changed.
> 
> 
>> diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
>> index b9f862d295..534a8bff95 100644
>> --- a/lib/librte_eal/include/rte_function_versioning.h
>> +++ b/lib/librte_eal/include/rte_function_versioning.h
>> @@ -46,6 +46,15 @@
>>   */
>>  #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
>>
>> +
> 
> No need for this newline.
> 
>> +/*
>> + * VERSION_SYMBOL_EXPERIMENTAL
>> + * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
>> + * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
>> + * to provide an alias to experimental for some time.
>> + */
>> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
>> +
>>  /*
>>   * BIND_DEFAULT_SYMBOL
>>   * Creates a symbol version entry instructing the linker to bind references to
> 
> 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] meter: provide experimental alias of API for old apps
  @ 2020-05-14 15:32  0%   ` David Marchand
  2020-05-14 15:56  0%     ` Ray Kinsella
  2020-05-14 16:07  0%     ` Ferruh Yigit
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2020-05-14 15:32 UTC (permalink / raw)
  To: Ferruh Yigit, Ray Kinsella
  Cc: Neil Horman, Cristian Dumitrescu, Eelco Chaudron, dev,
	Thomas Monjalon, dpdk stable, Luca Boccassi, Bruce Richardson,
	Ian Stokes, Andrzej Ostruszka

On Thu, May 14, 2020 at 1:52 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On v20.02 some meter APIs have been matured and symbols moved from
> EXPERIMENTAL to DPDK_20.0.1 block.
>
> This can break the applications that were using these mentioned APIs on
> v19.11. Although there is no modification on the APIs and the action is
> positive and matures the APIs, the effect can be negative for
> applications.
>
> Since experimental APIs can change or go away without notice as part of
> contract, to prevent this negative effect that may occur by maturing
> experimental API, a process update already suggested, which enables
> aliasing without forcing it:
> https://patches.dpdk.org/patch/65863/
>
> This patch provides aliasing by duplicating the existing and versioned
> symbols as experimental.
>
> Since symbols moved from DPDK_20.0.1 to DPDK_21 block in the v20.05, the
> aliasing is done between EXPERIMENTAL and DPDK_21.
>
> Also, the following changes were done to enable aliasing:
>
> Created VERSION_SYMBOL_EXPERIMENTAL helper macro.

This helper (+ script update) must come with the process update: the
macro is referenced in its v5 revision.


>
> Updated the 'check-experimental-syms.sh' buildtool, which was

Nit: the script name changed.


> diff --git a/lib/librte_eal/include/rte_function_versioning.h b/lib/librte_eal/include/rte_function_versioning.h
> index b9f862d295..534a8bff95 100644
> --- a/lib/librte_eal/include/rte_function_versioning.h
> +++ b/lib/librte_eal/include/rte_function_versioning.h
> @@ -46,6 +46,15 @@
>   */
>  #define VERSION_SYMBOL(b, e, n) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@DPDK_" RTE_STR(n))
>
> +

No need for this newline.

> +/*
> + * VERSION_SYMBOL_EXPERIMENTAL
> + * Creates a symbol version table entry binding the symbol <b>@EXPERIMENTAL to the internal
> + * function name <b><e>. The macro is used when a symbol matures to become part of the stable ABI,
> + * to provide an alias to experimental for some time.
> + */
> +#define VERSION_SYMBOL_EXPERIMENTAL(b, e) __asm__(".symver " RTE_STR(b) RTE_STR(e) ", " RTE_STR(b) "@EXPERIMENTAL")
> +
>  /*
>   * BIND_DEFAULT_SYMBOL
>   * Creates a symbol version entry instructing the linker to bind references to


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6 07/13] net/dpaa2: move internal symbols into INTERNAL section
                       ` (5 preceding siblings ...)
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 06/13] net/dpaa: " Hemant Agrawal
@ 2020-05-14 14:29  3%   ` Hemant Agrawal
    7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h            |  2 ++
 drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +++++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 2c49a7f01f..c7fb6539ff 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -164,11 +164,13 @@ int dpaa2_remove_flow_dist(struct rte_eth_dev *eth_dev,
 
 int dpaa2_attach_bp_list(struct dpaa2_dev_priv *priv, void *blist);
 
+__rte_internal
 int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id);
 
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index f2bb793319..b633fdc2a8 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,9 +1,4 @@
 DPDK_20.0 {
-	global:
-
-	dpaa2_eth_eventq_attach;
-	dpaa2_eth_eventq_detach;
-
 	local: *;
 };
 
@@ -14,3 +9,10 @@ EXPERIMENTAL {
 	rte_pmd_dpaa2_set_custom_hash;
 	rte_pmd_dpaa2_set_timestamp;
 };
+
+INTERNAL {
+	global:
+
+	dpaa2_eth_eventq_attach;
+	dpaa2_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 06/13] net/dpaa: move internal symbols into INTERNAL section
                       ` (4 preceding siblings ...)
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 05/13] mempool/dpaa2: " Hemant Agrawal
@ 2020-05-14 14:29  3%   ` Hemant Agrawal
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 07/13] net/dpaa2: " Hemant Agrawal
    7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore              | 4 +++-
 drivers/net/dpaa/dpaa_ethdev.h            | 2 ++
 drivers/net/dpaa/rte_pmd_dpaa_version.map | 9 +++++++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 02b7a973cb..87c0a918bc 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -64,4 +64,6 @@
 [suppress_function]
 	name = rte_dpaa2_mbuf_alloc_bulk
 [suppress_function]
-	name = rte_dpaa2_bpid_info
+	name_regexp = ^rte_dpaa2_bpid_info
+[suppress_function]
+        name_regexp = ^dpaa
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index af9fc2105d..7393a9df05 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -160,12 +160,14 @@ struct dpaa_if_stats {
 	uint64_t tund;		/**<Tx Undersized */
 };
 
+__rte_internal
 int
 dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id,
 		u16 ch_id,
 		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
 
+__rte_internal
 int
 dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
 			   int eth_rx_queue_id);
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index f403a1526d..774aa0de45 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,9 +1,14 @@
 DPDK_20.0 {
 	global:
 
-	dpaa_eth_eventq_attach;
-	dpaa_eth_eventq_detach;
 	rte_pmd_dpaa_set_tx_loopback;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	dpaa_eth_eventq_attach;
+	dpaa_eth_eventq_detach;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 05/13] mempool/dpaa2: move internal symbols into INTERNAL section
                       ` (3 preceding siblings ...)
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 04/13] crypto: " Hemant Agrawal
@ 2020-05-14 14:29  3%   ` Hemant Agrawal
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 06/13] net/dpaa: " Hemant Agrawal
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore                        | 6 ++++++
 drivers/mempool/dpaa/rte_mempool_dpaa_version.map   | 2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.h            | 1 +
 drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map | 9 +++++++--
 4 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 8db64f267d..02b7a973cb 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -59,3 +59,9 @@
 	file_name_regexp = ^librte_pmd_dpaa2_sec\.
 [suppress_file]
 	file_name_regexp = ^librte_pmd_dpaa_sec\.
+[suppress_file]
+	file_name_regexp = ^librte_mempool_dpaa\.
+[suppress_function]
+	name = rte_dpaa2_mbuf_alloc_bulk
+[suppress_function]
+	name = rte_dpaa2_bpid_info
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 9eebaf7ffd..142547ee38 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	rte_dpaa_bpid_info;
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
index fa0f2280d5..53fa1552d1 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
@@ -61,6 +61,7 @@ struct dpaa2_bp_info {
 
 extern struct dpaa2_bp_info *rte_dpaa2_bpid_info;
 
+__rte_internal
 int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 		       void **obj_table, unsigned int count);
 
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index cd4bc88273..686b024624 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,10 +1,15 @@
 DPDK_20.0 {
 	global:
 
-	rte_dpaa2_bpid_info;
-	rte_dpaa2_mbuf_alloc_bulk;
 	rte_dpaa2_mbuf_from_buf_addr;
 	rte_dpaa2_mbuf_pool_bpid;
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	rte_dpaa2_bpid_info;
+	rte_dpaa2_mbuf_alloc_bulk;
+};
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 04/13] crypto: move internal symbols into INTERNAL section
                       ` (2 preceding siblings ...)
  2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 03/13] bus/dpaa: " Hemant Agrawal
@ 2020-05-14 14:29  3%   ` Hemant Agrawal
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 05/13] mempool/dpaa2: " Hemant Agrawal
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore                           | 4 ++++
 drivers/crypto/dpaa2_sec/dpaa2_sec_event.h             | 5 +++--
 drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 2 +-
 drivers/crypto/dpaa_sec/dpaa_sec_event.h               | 8 ++++----
 drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map   | 4 +---
 5 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ab34302d0c..8db64f267d 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -55,3 +55,7 @@
 	file_name_regexp = ^librte_bus_fslmc\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_dpaa\.
+[suppress_file]
+	file_name_regexp = ^librte_pmd_dpaa2_sec\.
+[suppress_file]
+	file_name_regexp = ^librte_pmd_dpaa_sec\.
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
index c779d5d837..675cbbb81d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_event.h
@@ -6,12 +6,13 @@
 #ifndef _DPAA2_SEC_EVENT_H_
 #define _DPAA2_SEC_EVENT_H_
 
-int
-dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa2_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		struct dpaa2_dpcon_dev *dpcon,
 		const struct rte_event *event);
 
+__rte_internal
 int dpaa2_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 5952d645fd..1352f576e5 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	dpaa2_sec_eventq_attach;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_event.h b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
index 8d1a018096..0b09fa8f75 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_event.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
@@ -6,14 +6,14 @@
 #ifndef _DPAA_SEC_EVENT_H_
 #define _DPAA_SEC_EVENT_H_
 
-int
-dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		int qp_id,
 		uint16_t ch_id,
 		const struct rte_event *event);
 
-int
-dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
+__rte_internal
+int dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
 		int qp_id);
 
 #endif /* _DPAA_SEC_EVENT_H_ */
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index 8580fa13db..aed07fb371 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,8 +1,6 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	dpaa_sec_eventq_attach;
 	dpaa_sec_eventq_detach;
-
-	local: *;
 };
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v6 03/13] bus/dpaa: move internal symbols into INTERNAL section
    2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
  2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 02/13] bus/fslmc: " Hemant Agrawal
@ 2020-05-14 14:29  1%   ` Hemant Agrawal
  2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 04/13] crypto: " Hemant Agrawal
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to INTERNAL sections
so that any change in them is not reported as ABI breakage.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore              |  2 ++
 drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
 drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |  6 +++++
 drivers/bus/dpaa/include/netcfg.h         |  2 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  7 +----
 drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
 8 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 877c6d5be8..ab34302d0c 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -53,3 +53,5 @@
 	file_name_regexp = ^librte_common_dpaax\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_fslmc\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_dpaa\.
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index f9cd972153..82da2fcfe0 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
  * the structure provided by the caller can be released or reused after the
  * function returns.
  */
+__rte_internal
 struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
 
 /**
  * bman_free_pool - Deallocates a Buffer Pool object
  * @pool: the pool object to release
  */
+__rte_internal
 void bman_free_pool(struct bman_pool *pool);
 
 /**
@@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
  * The returned pointer refers to state within the pool object so must not be
  * modified and can no longer be read once the pool object is destroyed.
  */
+__rte_internal
 const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 
 /**
@@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
  * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
  *
  */
+__rte_internal
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
  * The return value will be the number of buffers obtained from the pool, or a
  * negative error code if a h/w error or pool starvation was encountered.
  */
+__rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
  *
  * Return the number of the free buffers
  */
+__rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
 /**
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 5705ebfdce..6c87c8db0d 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_FMAN_H
 #define __FSL_FMAN_H
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -43,18 +45,23 @@ struct fm_status_t {
 } __rte_packed;
 
 /* Set MAC address for a particular interface */
+__rte_internal
 int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
 
 /* Remove a MAC address for a particular interface */
+__rte_internal
 void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
 
 /* Get the FMAN statistics */
+__rte_internal
 void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
 
 /* Reset the FMAN statistics */
+__rte_internal
 void fman_if_stats_reset(struct fman_if *p);
 
 /* Get all of the FMAN statistics */
+__rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
 /* Set ignore pause option for a specific interface */
@@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
 void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
 
 /* Enable/disable Rx promiscuous mode on specified interface */
+__rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
+__rte_internal
 void fman_if_promiscuous_disable(struct fman_if *p);
 
 /* Enable/disable Rx on specific interfaces */
+__rte_internal
 void fman_if_enable_rx(struct fman_if *p);
+__rte_internal
 void fman_if_disable_rx(struct fman_if *p);
 
 /* Enable/disable loopback on specific interfaces */
+__rte_internal
 void fman_if_loopback_enable(struct fman_if *p);
+__rte_internal
 void fman_if_loopback_disable(struct fman_if *p);
 
 /* Set buffer pool on specific interface */
+__rte_internal
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
 /* Get Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_get_fc_threshold(struct fman_if *fm_if);
 
 /* Enable and Set Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_set_fc_threshold(struct fman_if *fm_if,
 			u32 high_water, u32 low_water, u32 bpid);
 
 /* Get Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
 /* Set Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 
 /* Set default error fqid on specific interface */
@@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
 
 /* Set IC transfer params */
+__rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
 /* Get interface fd->offset value */
+__rte_internal
 int fman_if_get_fdoff(struct fman_if *fm_if);
 
 /* Set interface fd->offset value */
+__rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
 
 /* Get interface SG enable status value */
+__rte_internal
 int fman_if_get_sg_enable(struct fman_if *fm_if);
 
 /* Set interface SG support mode */
+__rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
 /* Get interface Max Frame length (MTU) */
 uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
 
 /* Set interface  Max Frame length (MTU) */
+__rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
 /* Set interface next invoked action for dequeue operation */
 void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
 
 /* discard error packets on rx */
+__rte_internal
 void fman_if_discard_rx_errors(struct fman_if *fm_if);
 
+__rte_internal
 void fman_if_set_mcast_filter_table(struct fman_if *p);
 
+__rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
 int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 1b3342e7e6..4411bb0a79 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1314,6 +1314,7 @@ struct qman_cgr {
 #define QMAN_CGR_MODE_FRAME          0x00000001
 
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+__rte_internal
 void qman_set_fq_lookup_table(void **table);
 #endif
 
@@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
  */
 int qman_get_portal_index(void);
 
+__rte_internal
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs);
 
@@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
  * processed via qman_poll_***() functions). Returns zero for success, or
  * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_add(u32 bits);
 
 /**
@@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 
 /**
@@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
  * instead be processed via qman_poll_***() functions. Returns zero for success,
  * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_remove(u32 bits);
 
 /**
@@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
 /**
@@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
  */
 u16 qman_affine_channel(int cpu);
 
+__rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
 
@@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
  *
  * This function will issue a volatile dequeue command to the QMAN.
  */
+__rte_internal
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 
 /**
@@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
  * is issued. It will keep returning NULL until there is no packet available on
  * the DQRR.
  */
+__rte_internal
 struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 
 /**
@@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
  * This will consume the DQRR enrey and make it available for next volatile
  * dequeue.
  */
+__rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
@@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
  * this function will return -EINVAL, otherwise the return value is >=0 and
  * represents the number of DQRR entries processed.
  */
+__rte_internal
 int qman_poll_dqrr(unsigned int limit);
 
 /**
@@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
  * (SDQCR). The requested pools are limited to those the portal has dequeue
  * access to.
  */
+__rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
@@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
  * function must be called from the same CPU as that which processed the DQRR
  * entry in the first place.
  */
+__rte_internal
 void qman_dca_index(u8 index, int park_request);
 
 /**
@@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
  * a frame queue object based on that, rather than assuming/requiring that it be
  * Out of Service.
  */
+__rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
 /**
@@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
  * qman_fq_fqid - Queries the frame queue ID of a FQ object
  * @fq: the frame queue object to query
  */
+__rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
 /**
@@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
  * This captures the state, as seen by the driver, at the time the function
  * executes.
  */
+__rte_internal
 void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 
 /**
@@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
  * context_a.address fields and will leave the stashing fields provided by the
  * user alone, otherwise it will zero out the context_a.stashing fields.
  */
+__rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
 /**
@@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
  * caller should be prepared to accept the callback as the function is called,
  * not only once it has returned.
  */
+__rte_internal
 int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 
 /**
@@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
  * The frame queue must be retired and empty, and if any order restoration list
  * was released as ERNs at the time of retirement, they must all be consumed.
  */
+__rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
 /**
@@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
  * @fq: the frame queue object to be queried
  * @np: storage for the queried FQD fields
  */
+__rte_internal
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 
 /**
@@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
  * @fq: the frame queue object to be queried
  * @frm_cnt: number of frames in the queue
  */
+__rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
 /**
@@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
  * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
  * "flags" retrieved from qman_fq_state().
  */
+__rte_internal
 int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 
 /**
@@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
  * of an already busy hardware resource by throttling many of the to-be-dropped
  * enqueues "at the source".
  */
+__rte_internal
 int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 
+__rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
@@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
  * This API is similar to qman_enqueue_multi(), but it takes fd which needs
  * to be processed by different frame queues.
  */
+__rte_internal
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		      u32 *flags, int frames_to_send);
@@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
  * @fqid: the base FQID of the range to deallocate
  * @count: the number of FQIDs in the range
  */
+__rte_internal
 int qman_reserve_fqid_range(u32 fqid, unsigned int count);
 static inline int qman_reserve_fqid(u32 fqid)
 {
@@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_pool(u32 *result)
 {
@@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
  * any unspecified parameters) will be used rather than a modify hardware command
  * (which only modifies the specified parameters).
  */
+__rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
  * is executed. This must be executed on the same affine portal on which it was
  * created.
  */
+__rte_internal
 int qman_delete_cgr(struct qman_cgr *cgr);
 
 /**
@@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
  * unspecified parameters) will be used rather than a modify hardware command (which
  * only modifies the specified parameters).
  */
+__rte_internal
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_cgrid(u32 *result)
 {
@@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
  * @id: the base CGR ID of the range to deallocate
  * @count: the number of CGR IDs in the range
  */
+__rte_internal
 void qman_release_cgrid_range(u32 id, unsigned int count);
 static inline void qman_release_cgrid(u32 id)
 {
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 263d9bb976..30ec63a09d 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
 /* Obtain thread-local UIO file-descriptors */
+__rte_internal
 int qman_thread_fd(void);
 int bman_thread_fd(void);
 
@@ -66,8 +67,12 @@ int bman_thread_fd(void);
  * processing is complete. As such, it is essential to call this before going
  * into another blocking read/select/poll.
  */
+__rte_internal
 void qman_thread_irq(void);
+
+__rte_internal
 void bman_thread_irq(void);
+__rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
 
 void qman_clear_irq(void);
@@ -77,6 +82,7 @@ int qman_global_init(void);
 int bman_global_init(void);
 
 /* Direct portal create and destroy */
+__rte_internal
 struct qman_portal *fsl_qman_fq_portal_create(int *fd);
 int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
 int fsl_qman_fq_portal_init(struct qman_portal *qp);
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index bf7bfae8cb..d7d1befd24 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -46,11 +46,13 @@ struct netcfg_interface {
  * cfg_file: FMC config XML file
  * Returns the configuration information in newly allocated memory.
  */
+__rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
 /* cfg_ptr: configuration information pointer.
  * Frees the resources allocated by the configuration layer.
  */
+__rte_internal
 void netcfg_release(struct netcfg_info *cfg_ptr);
 
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e0..f4947fac41 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	bman_acquire;
@@ -13,7 +13,6 @@ DPDK_20.0 {
 	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	dpaa_svr_family;
-	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
 	fman_dealloc_bufs_mask_lo;
 	fman_if_add_mac_addr;
@@ -51,7 +50,6 @@ DPDK_20.0 {
 	qm_channel_pool1;
 	qman_alloc_cgrid_range;
 	qman_alloc_pool_range;
-	qman_clear_irq;
 	qman_create_cgr;
 	qman_create_fq;
 	qman_dca_index;
@@ -87,10 +85,7 @@ DPDK_20.0 {
 	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
-	rte_dpaa_mem_ptov;
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
-
-	local: *;
 };
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca9785..d4aee132ef 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
  *   A pointer to a rte_dpaa_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
 /**
@@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  *	A pointer to a rte_dpaa_driver structure describing the driver
  *	to be unregistered.
  */
+__rte_internal
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
 /**
@@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
  * @return
  *	0 in case of success, error otherwise
  */
+__rte_internal
 int rte_dpaa_portal_init(void *arg);
 
+__rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
+__rte_internal
 int rte_dpaa_portal_fq_close(struct qman_fq *fq);
 
 /**
-- 
2.17.1


^ permalink raw reply	[relevance 1%]
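
The mechanics behind all of these hunks are worth one concrete illustration.
Tagging a declaration with __rte_internal is what turns an out-of-tree call
into a hard build failure. A minimal sketch of such a guard macro, assuming
an error-attribute scheme in the spirit of rte_compat.h (the exact upstream
definition may differ):

#ifdef ALLOW_INTERNAL_API
#define __rte_internal
#else
#define __rte_internal \
	__attribute__((error("Symbol is not public ABI")))
#endif

/* A tagged declaration then reads as in the fsl_fman.h hunks above: */
__rte_internal
int fman_if_get_sg_enable(struct fman_if *fm_if);

With such a guard in place, in-tree code that defines ALLOW_INTERNAL_API
compiles unchanged, while external applications linking against the bus
library can no longer call these symbols silently.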

* [dpdk-dev] [PATCH v6 02/13] bus/fslmc: move internal symbols into INTERNAL section
    2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
@ 2020-05-14 14:29  1%   ` Hemant Agrawal
  2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 03/13] bus/dpaa: " Hemant Agrawal
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to the INTERNAL section
so that changes to them are not reported as ABI breakage.

This patch also removes two symbols that were not used
anywhere else, i.e. rte_fslmc_vfio_dmamap and dpaa2_get_qbman_swp.
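
To see the effect from the consumer side: a hypothetical out-of-tree unit
calling one of the newly internal MC APIs now has to opt in explicitly.
The wrapper below is invented for illustration (bring_up_pool is not a
real DPDK function); dpbp_enable() is the symbol tagged in the fsl_dpbp.h
hunk of this patch:

#include <fsl_dpbp.h>

/* Expected to fail to compile without -DALLOW_INTERNAL_API once
 * dpbp_enable() carries __rte_internal -- exactly the fence this
 * series puts around bus-internal APIs.
 */
int bring_up_pool(struct fsl_mc_io *mc_io, uint16_t token)
{
	return dpbp_enable(mc_io, 0, token);
}

In-tree consumers keep building because the DPDK build defines the opt-in
for internal components.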

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore                  |  2 +
 drivers/bus/fslmc/fslmc_vfio.h                |  4 ++
 drivers/bus/fslmc/mc/fsl_dpbp.h               |  6 +++
 drivers/bus/fslmc/mc/fsl_dpci.h               |  3 ++
 drivers/bus/fslmc/mc/fsl_dpcon.h              |  2 +
 drivers/bus/fslmc/mc/fsl_dpdmai.h             |  8 ++++
 drivers/bus/fslmc/mc/fsl_dpio.h               |  9 ++++
 drivers/bus/fslmc/mc/fsl_dpmng.h              |  2 +
 drivers/bus/fslmc/mc/fsl_mc_cmd.h             |  1 +
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.h      |  5 +++
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |  8 ++++
 .../bus/fslmc/qbman/include/fsl_qbman_debug.h |  3 ++
 .../fslmc/qbman/include/fsl_qbman_portal.h    | 41 +++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map   |  4 +-
 drivers/bus/fslmc/rte_fslmc.h                 |  4 ++
 15 files changed, 99 insertions(+), 3 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index b1488d5549..877c6d5be8 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -51,3 +51,5 @@
 ; Ignore moving DPAAx stable functions to INTERNAL tag
 [suppress_file]
 	file_name_regexp = ^librte_common_dpaax\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_fslmc\.
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index c988121294..609e48aea3 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -41,7 +41,11 @@ typedef struct fslmc_vfio_container {
 } fslmc_vfio_container;
 
 extern char *fslmc_container;
+
+__rte_internal
 int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index);
+
+__rte_internal
 int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index);
 
 int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
diff --git a/drivers/bus/fslmc/mc/fsl_dpbp.h b/drivers/bus/fslmc/mc/fsl_dpbp.h
index 9d405b42c4..7b537a21be 100644
--- a/drivers/bus/fslmc/mc/fsl_dpbp.h
+++ b/drivers/bus/fslmc/mc/fsl_dpbp.h
@@ -14,6 +14,7 @@
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpbp_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpbp_id,
@@ -42,10 +43,12 @@ int dpbp_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t obj_id);
 
+__rte_internal
 int dpbp_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpbp_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -55,6 +58,7 @@ int dpbp_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpbp_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -70,6 +74,7 @@ struct dpbp_attr {
 	uint16_t bpid;
 };
 
+__rte_internal
 int dpbp_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -88,6 +93,7 @@ int dpbp_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpbp_get_num_free_bufs(struct fsl_mc_io *mc_io,
 			   uint32_t cmd_flags,
 			   uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpci.h b/drivers/bus/fslmc/mc/fsl_dpci.h
index a0ee5bfe69..81fd3438aa 100644
--- a/drivers/bus/fslmc/mc/fsl_dpci.h
+++ b/drivers/bus/fslmc/mc/fsl_dpci.h
@@ -181,6 +181,7 @@ struct dpci_rx_queue_cfg {
 	int order_preservation_en;
 };
 
+__rte_internal
 int dpci_set_rx_queue(struct fsl_mc_io *mc_io,
 		      uint32_t cmd_flags,
 		      uint16_t token,
@@ -228,6 +229,7 @@ int dpci_get_api_version(struct fsl_mc_io *mc_io,
 			 uint16_t *major_ver,
 			 uint16_t *minor_ver);
 
+__rte_internal
 int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
@@ -235,6 +237,7 @@ int dpci_set_opr(struct fsl_mc_io *mc_io,
 		 uint8_t options,
 		 struct opr_cfg *cfg);
 
+__rte_internal
 int dpci_get_opr(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpcon.h b/drivers/bus/fslmc/mc/fsl_dpcon.h
index af81d51195..7caa6c68a1 100644
--- a/drivers/bus/fslmc/mc/fsl_dpcon.h
+++ b/drivers/bus/fslmc/mc/fsl_dpcon.h
@@ -20,6 +20,7 @@ struct fsl_mc_io;
  */
 #define DPCON_INVALID_DPIO_ID		(int)(-1)
 
+__rte_internal
 int dpcon_open(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       int dpcon_id,
@@ -77,6 +78,7 @@ struct dpcon_attr {
 	uint8_t num_priorities;
 };
 
+__rte_internal
 int dpcon_get_attributes(struct fsl_mc_io *mc_io,
 			 uint32_t cmd_flags,
 			 uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpdmai.h b/drivers/bus/fslmc/mc/fsl_dpdmai.h
index 40469cc139..e7e8a5dda9 100644
--- a/drivers/bus/fslmc/mc/fsl_dpdmai.h
+++ b/drivers/bus/fslmc/mc/fsl_dpdmai.h
@@ -23,11 +23,13 @@ struct fsl_mc_io;
  */
 #define DPDMAI_ALL_QUEUES	(uint8_t)(-1)
 
+__rte_internal
 int dpdmai_open(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		int dpdmai_id,
 		uint16_t *token);
 
+__rte_internal
 int dpdmai_close(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -54,10 +56,12 @@ int dpdmai_destroy(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint32_t object_id);
 
+__rte_internal
 int dpdmai_enable(struct fsl_mc_io *mc_io,
 		  uint32_t cmd_flags,
 		  uint16_t token);
 
+__rte_internal
 int dpdmai_disable(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   uint16_t token);
@@ -82,6 +86,7 @@ struct dpdmai_attr {
 	uint8_t num_of_queues;
 };
 
+__rte_internal
 int dpdmai_get_attributes(struct fsl_mc_io *mc_io,
 			  uint32_t cmd_flags,
 			  uint16_t token,
@@ -148,6 +153,7 @@ struct dpdmai_rx_queue_cfg {
 
 };
 
+__rte_internal
 int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -168,6 +174,7 @@ struct dpdmai_rx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
@@ -184,6 +191,7 @@ struct dpdmai_tx_queue_attr {
 	uint32_t fqid;
 };
 
+__rte_internal
 int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpio.h b/drivers/bus/fslmc/mc/fsl_dpio.h
index 3158f53191..92e97db94b 100644
--- a/drivers/bus/fslmc/mc/fsl_dpio.h
+++ b/drivers/bus/fslmc/mc/fsl_dpio.h
@@ -13,11 +13,13 @@
 
 struct fsl_mc_io;
 
+__rte_internal
 int dpio_open(struct fsl_mc_io *mc_io,
 	      uint32_t cmd_flags,
 	      int dpio_id,
 	      uint16_t *token);
 
+__rte_internal
 int dpio_close(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
@@ -57,10 +59,12 @@ int dpio_destroy(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint32_t object_id);
 
+__rte_internal
 int dpio_enable(struct fsl_mc_io *mc_io,
 		uint32_t cmd_flags,
 		uint16_t token);
 
+__rte_internal
 int dpio_disable(struct fsl_mc_io *mc_io,
 		 uint32_t cmd_flags,
 		 uint16_t token);
@@ -70,10 +74,12 @@ int dpio_is_enabled(struct fsl_mc_io *mc_io,
 		    uint16_t token,
 		    int *en);
 
+__rte_internal
 int dpio_reset(struct fsl_mc_io *mc_io,
 	       uint32_t cmd_flags,
 	       uint16_t token);
 
+__rte_internal
 int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint32_t cmd_flags,
 				  uint16_t token,
@@ -84,12 +90,14 @@ int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
 				  uint16_t token,
 				  uint8_t *sdest);
 
+__rte_internal
 int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				    uint32_t cmd_flags,
 				    uint16_t token,
 				    int dpcon_id,
 				    uint8_t *channel_index);
 
+__rte_internal
 int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
 				       uint32_t cmd_flags,
 				       uint16_t token,
@@ -119,6 +127,7 @@ struct dpio_attr {
 	uint32_t clk;
 };
 
+__rte_internal
 int dpio_get_attributes(struct fsl_mc_io *mc_io,
 			uint32_t cmd_flags,
 			uint16_t token,
diff --git a/drivers/bus/fslmc/mc/fsl_dpmng.h b/drivers/bus/fslmc/mc/fsl_dpmng.h
index 36c387af27..cdd8506625 100644
--- a/drivers/bus/fslmc/mc/fsl_dpmng.h
+++ b/drivers/bus/fslmc/mc/fsl_dpmng.h
@@ -34,6 +34,7 @@ struct mc_version {
 	uint32_t revision;
 };
 
+__rte_internal
 int mc_get_version(struct fsl_mc_io *mc_io,
 		   uint32_t cmd_flags,
 		   struct mc_version *mc_ver_info);
@@ -48,6 +49,7 @@ struct mc_soc_version {
 	uint32_t pvr;
 };
 
+__rte_internal
 int mc_get_soc_version(struct fsl_mc_io *mc_io,
 		       uint32_t cmd_flags,
 		       struct mc_soc_version *mc_platform_info);
diff --git a/drivers/bus/fslmc/mc/fsl_mc_cmd.h b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
index ac919610cf..06ea41a3b2 100644
--- a/drivers/bus/fslmc/mc/fsl_mc_cmd.h
+++ b/drivers/bus/fslmc/mc/fsl_mc_cmd.h
@@ -80,6 +80,7 @@ enum mc_cmd_status {
 
 #define MC_CMD_HDR_FLAGS_MASK	0xFF00FF00
 
+__rte_internal
 int mc_send_command(struct fsl_mc_io *mc_io, struct mc_command *cmd);
 
 static inline uint64_t mc_encode_cmd_header(uint16_t cmd_id,
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
index 2829c93806..7c5966241a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.h
@@ -36,20 +36,25 @@ extern uint8_t dpaa2_eqcr_size;
 extern struct dpaa2_io_portal_t dpaa2_io_portal[RTE_MAX_LCORE];
 
 /* Affine a DPIO portal to current processing thread */
+__rte_internal
 int dpaa2_affine_qbman_swp(void);
 
 /* Affine additional DPIO portal to current crypto processing thread */
+__rte_internal
 int dpaa2_affine_qbman_ethrx_swp(void);
 
 /* allocate memory for FQ - dq storage */
+__rte_internal
 int
 dpaa2_alloc_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free memory for FQ- dq storage */
+__rte_internal
 void
 dpaa2_free_dq_storage(struct queue_storage_info_t *q_storage);
 
 /* free the enqueue response descriptors */
+__rte_internal
 uint32_t
 dpaa2_free_eq_descriptors(void);
 
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 368fe7c688..33b191f823 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -426,11 +426,19 @@ void set_swp_active_dqs(uint16_t dpio_index, struct qbman_result *dqs)
 {
 	rte_global_active_dqs_list[dpio_index].global_active_dqs = dqs;
 }
+__rte_internal
 struct dpaa2_dpbp_dev *dpaa2_alloc_dpbp_dev(void);
+
+__rte_internal
 void dpaa2_free_dpbp_dev(struct dpaa2_dpbp_dev *dpbp);
+
+__rte_internal
 int dpaa2_dpbp_supported(void);
 
+__rte_internal
 struct dpaa2_dpci_dev *rte_dpaa2_alloc_dpci_dev(void);
+
+__rte_internal
 void rte_dpaa2_free_dpci_dev(struct dpaa2_dpci_dev *dpci);
 
 #endif
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
index e010b1b6ae..328f2022fc 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_debug.h
@@ -24,7 +24,10 @@ uint8_t verb;
 	uint8_t reserved2[29];
 };
 
+__rte_internal
 int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
 			 struct qbman_fq_query_np_rslt *r);
+
+__rte_internal
 uint32_t qbman_fq_state_frame_count(const struct qbman_fq_query_np_rslt *r);
 uint32_t qbman_fq_state_byte_count(const struct qbman_fq_query_np_rslt *r);
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index 88f0a99686..7ac0f82106 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -117,6 +117,7 @@ uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
  * @p: the given software portal object.
  * @mask: The value to set in SWP_ISR register.
  */
+__rte_internal
 void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
 
 /**
@@ -286,6 +287,7 @@ void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled);
  * rather by specifying the index (from 0 to 15) that has been mapped to the
  * desired channel.
  */
+__rte_internal
 void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable);
 
 /* ------------------- */
@@ -325,6 +327,7 @@ enum qbman_pull_type_e {
  * default/starting state.
  * @d: the pull dequeue descriptor to be cleared.
  */
+__rte_internal
 void qbman_pull_desc_clear(struct qbman_pull_desc *d);
 
 /**
@@ -340,6 +343,7 @@ void qbman_pull_desc_clear(struct qbman_pull_desc *d);
  * the caller provides in 'storage_phys'), and 'stash' controls whether or not
  * those writes to main-memory express a cache-warming attribute.
  */
+__rte_internal
 void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
 				 struct qbman_result *storage,
 				 uint64_t storage_phys,
@@ -349,6 +353,7 @@ void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
  * @d: the pull dequeue descriptor to be set.
  * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
  */
+__rte_internal
 void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d,
 				   uint8_t numframes);
 /**
@@ -372,6 +377,7 @@ void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token);
  * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
  * @fqid: the frame queue index of the given FQ.
  */
+__rte_internal
 void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid);
 
 /**
@@ -407,6 +413,7 @@ void qbman_pull_desc_set_rad(struct qbman_pull_desc *d, int rad);
  * Return 0 for success, and -EBUSY if the software portal is not ready
  * to do pull dequeue.
  */
+__rte_internal
 int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
 
 /* -------------------------------- */
@@ -421,12 +428,14 @@ int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d);
  * only once, so repeated calls can return a sequence of DQRR entries, without
  * requiring they be consumed immediately or in any particular order.
  */
+__rte_internal
 const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *p);
 
 /**
  * qbman_swp_prefetch_dqrr_next() - prefetch the next DQRR entry.
  * @s: the software portal object.
  */
+__rte_internal
 void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
 
 /**
@@ -435,6 +444,7 @@ void qbman_swp_prefetch_dqrr_next(struct qbman_swp *s);
  * @s: the software portal object.
  * @dq: the DQRR entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
 
 /**
@@ -442,6 +452,7 @@ void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct qbman_result *dq);
  * @s: the software portal object.
  * @dqrr_index: the DQRR index entry to be consumed.
  */
+__rte_internal
 void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
 
 /**
@@ -450,6 +461,7 @@ void qbman_swp_dqrr_idx_consume(struct qbman_swp *s, uint8_t dqrr_index);
  *
  * Return dqrr index.
  */
+__rte_internal
 uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
 
 /**
@@ -460,6 +472,7 @@ uint8_t qbman_get_dqrr_idx(const struct qbman_result *dqrr);
  *
  * Return dqrr entry object.
  */
+__rte_internal
 struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
 
 /* ------------------------------------------------- */
@@ -485,6 +498,7 @@ struct qbman_result *qbman_get_dqrr_from_idx(struct qbman_swp *s, uint8_t idx);
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_result_has_new_result(struct qbman_swp *s,
 				struct qbman_result *dq);
 
@@ -497,8 +511,10 @@ int qbman_result_has_new_result(struct qbman_swp *s,
  * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
  * dequeue result.
  */
+__rte_internal
 int qbman_check_command_complete(struct qbman_result *dq);
 
+__rte_internal
 int qbman_check_new_result(struct qbman_result *dq);
 
 /* -------------------------------------------------------- */
@@ -624,6 +640,7 @@ int qbman_result_is_FQPN(const struct qbman_result *dq);
  *
  * Return the state field.
  */
+__rte_internal
 uint8_t qbman_result_DQ_flags(const struct qbman_result *dq);
 
 /**
@@ -658,6 +675,7 @@ static inline int qbman_result_DQ_is_pull_complete(
  *
  * Return seqnum.
  */
+__rte_internal
 uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
 
 /**
@@ -667,6 +685,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq);
  *
  * Return odpid.
  */
+__rte_internal
 uint16_t qbman_result_DQ_odpid(const struct qbman_result *dq);
 
 /**
@@ -699,6 +718,7 @@ uint32_t qbman_result_DQ_frame_count(const struct qbman_result *dq);
  *
  * Return the frame queue context.
  */
+__rte_internal
 uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
 
 /**
@@ -707,6 +727,7 @@ uint64_t qbman_result_DQ_fqd_ctx(const struct qbman_result *dq);
  *
  * Return the frame descriptor.
  */
+__rte_internal
 const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
 
 /* State-change notifications (FQDAN/CDAN/CSCN/...). */
@@ -717,6 +738,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq);
  *
  * Return the state in the notification.
  */
+__rte_internal
 uint8_t qbman_result_SCN_state(const struct qbman_result *scn);
 
 /**
@@ -850,6 +872,7 @@ struct qbman_eq_response {
  * default/starting state.
  * @d: the given enqueue descriptor.
  */
+__rte_internal
 void qbman_eq_desc_clear(struct qbman_eq_desc *d);
 
 /* Exactly one of the following descriptor "actions" should be set. (Calling
@@ -870,6 +893,7 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d);
  * @response_success: 1 = enqueue with response always; 0 = enqueue with
  * rejections returned on a FQ.
  */
+__rte_internal
 void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
 /**
  * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
@@ -881,6 +905,7 @@ void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
  * @incomplete: indicates whether this is the last fragment using the same
  * sequence number.
  */
+__rte_internal
 void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
 			   uint16_t opr_id, uint16_t seqnum, int incomplete);
 
@@ -915,6 +940,7 @@ void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint16_t opr_id,
  * data structure.) 'stash' controls whether or not the write to main-memory
  * expresses a cache-warming attribute.
  */
+__rte_internal
 void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
 				uint64_t storage_phys,
 				int stash);
@@ -929,6 +955,7 @@ void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
  * result "storage" before issuing an enqueue, and use any non-zero 'token'
  * value.
  */
+__rte_internal
 void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
 
 /**
@@ -944,6 +971,7 @@ void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
  * @d: the enqueue descriptor
  * @fqid: the id of the frame queue to be enqueued.
  */
+__rte_internal
 void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
 
 /**
@@ -953,6 +981,7 @@ void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid);
  * @qd_bin: the queuing destination bin
  * @qd_prio: the queuing destination priority.
  */
+__rte_internal
 void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
 			  uint16_t qd_bin, uint8_t qd_prio);
 
@@ -978,6 +1007,7 @@ void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable);
  * held-active (order-preserving) FQ, whether the FQ should be parked instead of
  * being rescheduled.)
  */
+__rte_internal
 void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
 			   uint8_t dqrr_idx, int park);
 
@@ -987,6 +1017,7 @@ void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
  *
  * Return the fd pointer.
  */
+__rte_internal
 struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
 
 /**
@@ -997,6 +1028,7 @@ struct qbman_fd *qbman_result_eqresp_fd(struct qbman_result *eqresp);
  * This value is set into the response id before the enqueue command, which
  * gets overwritten by qbman once the enqueue command is complete.
  */
+__rte_internal
 void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
 
 /**
@@ -1009,6 +1041,7 @@ void qbman_result_eqresp_set_rspid(struct qbman_result *eqresp, uint8_t val);
  * copied into the enqueue response to determine if the command has been
  * completed, and response has been updated.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
 
 /**
@@ -1017,6 +1050,7 @@ uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp);
  *
  * Return 0 when the command is successful.
  */
+__rte_internal
 uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp);
 
 /**
@@ -1043,6 +1077,7 @@ int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 			       const struct qbman_eq_desc *d,
 			       const struct qbman_fd *fd,
@@ -1060,6 +1095,7 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
 				  const struct qbman_eq_desc *d,
 				  struct qbman_fd **fd,
@@ -1076,6 +1112,7 @@ int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
  *
  * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
  */
+__rte_internal
 int qbman_swp_enqueue_multiple_desc(struct qbman_swp *s,
 				    const struct qbman_eq_desc *d,
 				    const struct qbman_fd *fd,
@@ -1117,12 +1154,14 @@ struct qbman_release_desc {
  * default/starting state.
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_clear(struct qbman_release_desc *d);
 
 /**
  * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
  * @d: the qbman release descriptor.
  */
+__rte_internal
 void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint16_t bpid);
 
 /**
@@ -1141,6 +1180,7 @@ void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable);
  *
  * Return 0 for success, -EBUSY if the release command ring is not ready.
  */
+__rte_internal
 int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
 		      const uint64_t *buffers, unsigned int num_buffers);
 
@@ -1166,6 +1206,7 @@ int qbman_swp_release_thresh(struct qbman_swp *s, unsigned int thresh);
  * Return 0 for success, or negative error code if the acquire command
  * fails.
  */
+__rte_internal
 int qbman_swp_acquire(struct qbman_swp *s, uint16_t bpid, uint64_t *buffers,
 		      unsigned int num_buffers);
 
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index fe45575046..04e61156c3 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	dpaa2_affine_qbman_ethrx_swp;
@@ -11,7 +11,6 @@ DPDK_20.0 {
 	dpaa2_free_dpbp_dev;
 	dpaa2_free_dq_storage;
 	dpaa2_free_eq_descriptors;
-	dpaa2_get_qbman_swp;
 	dpaa2_io_portal;
 	dpaa2_svr_family;
 	dpaa2_virt_mode;
@@ -101,7 +100,6 @@ DPDK_20.0 {
 	rte_fslmc_driver_unregister;
 	rte_fslmc_get_device_count;
 	rte_fslmc_object_register;
-	rte_fslmc_vfio_dmamap;
 	rte_global_active_dqs_list;
 	rte_mcp_ptr_list;
 
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 96ba8dc259..5078b48ee1 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -162,6 +162,7 @@ RTE_DECLARE_PER_LCORE(struct dpaa2_portal_dqrr, dpaa2_held_bufs);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
 
 /**
@@ -171,6 +172,7 @@ void rte_fslmc_driver_register(struct rte_dpaa2_driver *driver);
  *   A pointer to a rte_dpaa2_driver structure describing the driver
  *   to be unregistered.
  */
+__rte_internal
 void rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver);
 
 /** Helper for DPAA2 device registration from driver (eth, crypto) instance */
@@ -189,6 +191,7 @@ RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
  *   A pointer to a rte_dpaa_object structure describing the mc object
  *   to be registered.
  */
+__rte_internal
 void rte_fslmc_object_register(struct rte_dpaa2_object *object);
 
 /**
@@ -200,6 +203,7 @@ void rte_fslmc_object_register(struct rte_dpaa2_object *object);
  *   >=0 for count; 0 indicates either that no device of the given type was
  *   scanned or that the device type is invalid.
  */
+__rte_internal
 uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type);
 
 /** Helper for DPAA2 object registration */
-- 
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v6 01/13] common/dpaax: move internal symbols into INTERNAL section
  @ 2020-05-14 14:29  3%   ` Hemant Agrawal
  2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 02/13] bus/fslmc: " Hemant Agrawal
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:29 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to the INTERNAL section
so that changes to them are not reported as ABI breakage.
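
The libabigail suppression added here is the other half of the change: it
tells the ABI checker to skip this library while its symbols migrate. As a
rough illustration of how such a check is driven (the build directories are
made up for the example; abidiff is libabigail's comparison tool):

  abidiff --suppressions devtools/libabigail.abignore \
          reference-build/lib/librte_common_dpaax.so \
          patched-build/lib/librte_common_dpaax.so

With the suppress_file entry in place, the wholesale DPDK_20.0 -> INTERNAL
move in rte_common_dpaax_version.map is no longer reported as a break.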

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore                      |  3 +++
 drivers/common/dpaax/dpaa_of.h                    | 15 +++++++++++++++
 drivers/common/dpaax/dpaax_iova_table.h           |  4 ++++
 drivers/common/dpaax/rte_common_dpaax_version.map |  2 +-
 4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index c9ee73cb3c..b1488d5549 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -48,3 +48,6 @@
         changed_enumerators = RTE_CRYPTO_AEAD_LIST_END
 [suppress_variable]
         name = rte_crypto_aead_algorithm_strings
+; Ignore moving DPAAx stable functions to INTERNAL tag
+[suppress_file]
+	file_name_regexp = ^librte_common_dpaax\.
diff --git a/drivers/common/dpaax/dpaa_of.h b/drivers/common/dpaax/dpaa_of.h
index 960b421766..38d91a1afe 100644
--- a/drivers/common/dpaax/dpaa_of.h
+++ b/drivers/common/dpaax/dpaa_of.h
@@ -24,6 +24,7 @@
 #include <limits.h>
 #include <rte_common.h>
 #include <dpaa_list.h>
+#include <rte_compat.h>
 
 #ifndef OF_INIT_DEFAULT_PATH
 #define OF_INIT_DEFAULT_PATH "/proc/device-tree"
@@ -102,6 +103,7 @@ struct dt_file {
 	uint64_t buf[OF_FILE_BUF_MAX >> 3];
 };
 
+__rte_internal
 const struct device_node *of_find_compatible_node(
 					const struct device_node *from,
 					const char *type __rte_unused,
@@ -113,32 +115,44 @@ const struct device_node *of_find_compatible_node(
 		dev_node != NULL; \
 		dev_node = of_find_compatible_node(dev_node, type, compatible))
 
+__rte_internal
 const void *of_get_property(const struct device_node *from, const char *name,
 			    size_t *lenp) __attribute__((nonnull(2)));
+__rte_internal
 bool of_device_is_available(const struct device_node *dev_node);
 
+
+__rte_internal
 const struct device_node *of_find_node_by_phandle(uint64_t ph);
 
+__rte_internal
 const struct device_node *of_get_parent(const struct device_node *dev_node);
 
+__rte_internal
 const struct device_node *of_get_next_child(const struct device_node *dev_node,
 					    const struct device_node *prev);
 
+__rte_internal
 const void *of_get_mac_address(const struct device_node *np);
 
 #define for_each_child_node(parent, child) \
 	for (child = of_get_next_child(parent, NULL); child != NULL; \
 			child = of_get_next_child(parent, child))
 
+
+__rte_internal
 uint32_t of_n_addr_cells(const struct device_node *dev_node);
 uint32_t of_n_size_cells(const struct device_node *dev_node);
 
+__rte_internal
 const uint32_t *of_get_address(const struct device_node *dev_node, size_t idx,
 			       uint64_t *size, uint32_t *flags);
 
+__rte_internal
 uint64_t of_translate_address(const struct device_node *dev_node,
 			      const uint32_t *addr) __attribute__((nonnull));
 
+__rte_internal
 bool of_device_is_compatible(const struct device_node *dev_node,
 			     const char *compatible);
 
@@ -146,6 +160,7 @@ bool of_device_is_compatible(const struct device_node *dev_node,
  * subsystem that is device-tree-dependent. Eg. Qman/Bman, config layers, etc.
  * The path should usually be "/proc/device-tree".
  */
+__rte_internal
 int of_init_path(const char *dt_path);
 
 /* of_finish() allows a controlled tear-down of the device-tree layer, eg. if a
diff --git a/drivers/common/dpaax/dpaax_iova_table.h b/drivers/common/dpaax/dpaax_iova_table.h
index fc3b9e7a8f..230fba8ba0 100644
--- a/drivers/common/dpaax/dpaax_iova_table.h
+++ b/drivers/common/dpaax/dpaax_iova_table.h
@@ -61,9 +61,13 @@ extern struct dpaax_iova_table *dpaax_iova_table_p;
 #define DPAAX_MEM_SPLIT_MASK_OFF (DPAAX_MEM_SPLIT - 1) /**< Offset */
 
 /* APIs exposed */
+__rte_internal
 int dpaax_iova_table_populate(void);
+__rte_internal
 void dpaax_iova_table_depopulate(void);
+__rte_internal
 int dpaax_iova_table_update(phys_addr_t paddr, void *vaddr, size_t length);
+__rte_internal
 void dpaax_iova_table_dump(void);
 
 static inline void *dpaax_iova_table_get_va(phys_addr_t paddr) __rte_hot;
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index f72eba761d..ad2b2b3fec 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	dpaax_iova_table_depopulate;
-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v5 03/13] bus/dpaa: move internal symbols into INTERNAL section
  @ 2020-05-14 14:24  1%   ` Hemant Agrawal
  0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2020-05-14 14:24 UTC (permalink / raw)
  To: dev, david.marchand, mdr; +Cc: Hemant Agrawal

This patch moves the internal symbols to the INTERNAL section
so that changes to them are not reported as ABI breakage.
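
For context on the surface being fenced off, a rough sketch of the
frame-queue lifecycle these qman_* symbols implement -- the flag values
are illustrative placeholders, not a tested configuration:

#include <string.h>
#include <fsl_qman.h>

static int fq_roundtrip(u32 fqid, const struct qm_fd *fd)
{
	struct qman_fq fq;
	struct qm_mcc_initfq opts;
	u32 flags;
	int ret;

	memset(&fq, 0, sizeof(fq));
	memset(&opts, 0, sizeof(opts));

	ret = qman_create_fq(fqid, 0, &fq);	/* claim an FQ object */
	if (ret)
		return ret;
	ret = qman_init_fq(&fq, QMAN_INITFQ_FLAG_SCHED, &opts);
	if (!ret)
		ret = qman_enqueue(&fq, fd, 0);	/* hand a frame to QMan */
	qman_retire_fq(&fq, &flags);		/* retire the FQ... */
	qman_oos_fq(&fq);			/* ...and take it out of service */
	qman_destroy_fq(&fq, 0);
	return ret;
}

Every call in that sketch is one of the symbols tagged below, which is why
the whole set moves to INTERNAL together rather than piecemeal.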

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 devtools/libabigail.abignore              |  2 ++
 drivers/bus/dpaa/include/fsl_bman.h       |  6 +++++
 drivers/bus/dpaa/include/fsl_fman.h       | 27 +++++++++++++++++++
 drivers/bus/dpaa/include/fsl_qman.h       | 32 +++++++++++++++++++++++
 drivers/bus/dpaa/include/fsl_usd.h        |  6 +++++
 drivers/bus/dpaa/include/netcfg.h         |  2 ++
 drivers/bus/dpaa/rte_bus_dpaa_version.map |  7 +----
 drivers/bus/dpaa/rte_dpaa_bus.h           |  5 ++++
 8 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 877c6d5be8..ab34302d0c 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -53,3 +53,5 @@
 	file_name_regexp = ^librte_common_dpaax\.
 [suppress_file]
 	file_name_regexp = ^librte_bus_fslmc\.
+[suppress_file]
+	file_name_regexp = ^librte_bus_dpaa\.
diff --git a/drivers/bus/dpaa/include/fsl_bman.h b/drivers/bus/dpaa/include/fsl_bman.h
index f9cd972153..82da2fcfe0 100644
--- a/drivers/bus/dpaa/include/fsl_bman.h
+++ b/drivers/bus/dpaa/include/fsl_bman.h
@@ -264,12 +264,14 @@ int bman_shutdown_pool(u32 bpid);
  * the structure provided by the caller can be released or reused after the
  * function returns.
  */
+__rte_internal
 struct bman_pool *bman_new_pool(const struct bman_pool_params *params);
 
 /**
  * bman_free_pool - Deallocates a Buffer Pool object
  * @pool: the pool object to release
  */
+__rte_internal
 void bman_free_pool(struct bman_pool *pool);
 
 /**
@@ -279,6 +281,7 @@ void bman_free_pool(struct bman_pool *pool);
  * The returned pointer refers to state within the pool object so must not be
  * modified and can no longer be read once the pool object is destroyed.
  */
+__rte_internal
 const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
 
 /**
@@ -289,6 +292,7 @@ const struct bman_pool_params *bman_get_params(const struct bman_pool *pool);
  * @flags: bit-mask of BMAN_RELEASE_FLAG_*** options
  *
  */
+__rte_internal
 int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -302,6 +306,7 @@ int bman_release(struct bman_pool *pool, const struct bm_buffer *bufs, u8 num,
  * The return value will be the number of buffers obtained from the pool, or a
  * negative error code if a h/w error or pool starvation was encountered.
  */
+__rte_internal
 int bman_acquire(struct bman_pool *pool, struct bm_buffer *bufs, u8 num,
 		 u32 flags);
 
@@ -317,6 +322,7 @@ int bman_query_pools(struct bm_pool_state *state);
  *
  * Return the number of the free buffers
  */
+__rte_internal
 u32 bman_query_free_buffers(struct bman_pool *pool);
 
 /**
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 5705ebfdce..6c87c8db0d 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -7,6 +7,8 @@
 #ifndef __FSL_FMAN_H
 #define __FSL_FMAN_H
 
+#include <rte_compat.h>
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -43,18 +45,23 @@ struct fm_status_t {
 } __rte_packed;
 
 /* Set MAC address for a particular interface */
+__rte_internal
 int fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num);
 
 /* Remove a MAC address for a particular interface */
+__rte_internal
 void fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num);
 
 /* Get the FMAN statistics */
+__rte_internal
 void fman_if_stats_get(struct fman_if *p, struct rte_eth_stats *stats);
 
 /* Reset the FMAN statistics */
+__rte_internal
 void fman_if_stats_reset(struct fman_if *p);
 
 /* Get all of the FMAN statistics */
+__rte_internal
 void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
 
 /* Set ignore pause option for a specific interface */
@@ -64,32 +71,43 @@ void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
 void fman_if_conf_max_frame_len(struct fman_if *p, unsigned int max_frame_len);
 
 /* Enable/disable Rx promiscuous mode on specified interface */
+__rte_internal
 void fman_if_promiscuous_enable(struct fman_if *p);
+__rte_internal
 void fman_if_promiscuous_disable(struct fman_if *p);
 
 /* Enable/disable Rx on specific interfaces */
+__rte_internal
 void fman_if_enable_rx(struct fman_if *p);
+__rte_internal
 void fman_if_disable_rx(struct fman_if *p);
 
 /* Enable/disable loopback on specific interfaces */
+__rte_internal
 void fman_if_loopback_enable(struct fman_if *p);
+__rte_internal
 void fman_if_loopback_disable(struct fman_if *p);
 
 /* Set buffer pool on specific interface */
+__rte_internal
 void fman_if_set_bp(struct fman_if *fm_if, unsigned int num, int bpid,
 		    size_t bufsize);
 
 /* Get Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_get_fc_threshold(struct fman_if *fm_if);
 
 /* Enable and Set Flow Control threshold parameters on specific interface */
+__rte_internal
 int fman_if_set_fc_threshold(struct fman_if *fm_if,
 			u32 high_water, u32 low_water, u32 bpid);
 
 /* Get Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_get_fc_quanta(struct fman_if *fm_if);
 
 /* Set Flow Control pause quanta on specific interface */
+__rte_internal
 int fman_if_set_fc_quanta(struct fman_if *fm_if, u16 pause_quanta);
 
 /* Set default error fqid on specific interface */
@@ -99,35 +117,44 @@ void fman_if_set_err_fqid(struct fman_if *fm_if, uint32_t err_fqid);
 int fman_if_get_ic_params(struct fman_if *fm_if, struct fman_if_ic_params *icp);
 
 /* Set IC transfer params */
+__rte_internal
 int fman_if_set_ic_params(struct fman_if *fm_if,
 			  const struct fman_if_ic_params *icp);
 
 /* Get interface fd->offset value */
+__rte_internal
 int fman_if_get_fdoff(struct fman_if *fm_if);
 
 /* Set interface fd->offset value */
+__rte_internal
 void fman_if_set_fdoff(struct fman_if *fm_if, uint32_t fd_offset);
 
 /* Get interface SG enable status value */
+__rte_internal
 int fman_if_get_sg_enable(struct fman_if *fm_if);
 
 /* Set interface SG support mode */
+__rte_internal
 void fman_if_set_sg(struct fman_if *fm_if, int enable);
 
 /* Get interface Max Frame length (MTU) */
 uint16_t fman_if_get_maxfrm(struct fman_if *fm_if);
 
 /* Set interface  Max Frame length (MTU) */
+__rte_internal
 void fman_if_set_maxfrm(struct fman_if *fm_if, uint16_t max_frm);
 
 /* Set interface next invoked action for dequeue operation */
 void fman_if_set_dnia(struct fman_if *fm_if, uint32_t nia);
 
 /* discard error packets on rx */
+__rte_internal
 void fman_if_discard_rx_errors(struct fman_if *fm_if);
 
+__rte_internal
 void fman_if_set_mcast_filter_table(struct fman_if *p);
 
+__rte_internal
 void fman_if_reset_mcast_filter_table(struct fman_if *p);
 
 int fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth);
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 1b3342e7e6..4411bb0a79 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1314,6 +1314,7 @@ struct qman_cgr {
 #define QMAN_CGR_MODE_FRAME          0x00000001
 
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+__rte_internal
 void qman_set_fq_lookup_table(void **table);
 #endif
 
@@ -1322,6 +1323,7 @@ void qman_set_fq_lookup_table(void **table);
  */
 int qman_get_portal_index(void);
 
+__rte_internal
 u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
 			void **bufs);
 
@@ -1333,6 +1335,7 @@ u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
  * processed via qman_poll_***() functions). Returns zero for success, or
  * -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_add(u32 bits);
 
 /**
@@ -1340,6 +1343,7 @@ int qman_irqsource_add(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
 
 /**
@@ -1350,6 +1354,7 @@ int qman_fq_portal_irqsource_add(struct qman_portal *p, u32 bits);
  * instead be processed via qman_poll_***() functions. Returns zero for success,
  * or -EINVAL if the current CPU is sharing a portal hosted on another CPU.
  */
+__rte_internal
 int qman_irqsource_remove(u32 bits);
 
 /**
@@ -1357,6 +1362,7 @@ int qman_irqsource_remove(u32 bits);
  * takes portal (fq specific) as input rather than using the thread affined
  * portal.
  */
+__rte_internal
 int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
 
 /**
@@ -1369,6 +1375,7 @@ int qman_fq_portal_irqsource_remove(struct qman_portal *p, u32 bits);
  */
 u16 qman_affine_channel(int cpu);
 
+__rte_internal
 unsigned int qman_portal_poll_rx(unsigned int poll_limit,
 				 void **bufs, struct qman_portal *q);
 
@@ -1380,6 +1387,7 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
  *
  * This function will issue a volatile dequeue command to the QMAN.
  */
+__rte_internal
 int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
 
 /**
@@ -1390,6 +1398,7 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags);
  * is issued. It returns NULL when no packet is available on the DQRR.
  */
+__rte_internal
 struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
 
 /**
@@ -1401,6 +1410,7 @@ struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);
  * This will consume the DQRR entry and make it available for the next volatile
  * dequeue.
  */
+__rte_internal
 void qman_dqrr_consume(struct qman_fq *fq,
 		       struct qm_dqrr_entry *dq);
 
@@ -1414,6 +1424,7 @@ void qman_dqrr_consume(struct qman_fq *fq,
  * this function will return -EINVAL, otherwise the return value is >=0 and
  * represents the number of DQRR entries processed.
  */
+__rte_internal
 int qman_poll_dqrr(unsigned int limit);
 
 /**
@@ -1460,6 +1471,7 @@ void qman_start_dequeues(void);
  * (SDQCR). The requested pools are limited to those the portal has dequeue
  * access to.
  */
+__rte_internal
 void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
@@ -1507,6 +1519,7 @@ void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
  * function must be called from the same CPU as that which processed the DQRR
  * entry in the first place.
  */
+__rte_internal
 void qman_dca_index(u8 index, int park_request);
 
 /**
@@ -1564,6 +1577,7 @@ void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);
  * a frame queue object based on that, rather than assuming/requiring that it be
  * Out of Service.
  */
+__rte_internal
 int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);
 
 /**
@@ -1582,6 +1596,7 @@ void qman_destroy_fq(struct qman_fq *fq, u32 flags);
  * qman_fq_fqid - Queries the frame queue ID of a FQ object
  * @fq: the frame queue object to query
  */
+__rte_internal
 u32 qman_fq_fqid(struct qman_fq *fq);
 
 /**
@@ -1594,6 +1609,7 @@ u32 qman_fq_fqid(struct qman_fq *fq);
  * This captures the state, as seen by the driver, at the time the function
  * executes.
  */
+__rte_internal
 void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
 
 /**
@@ -1630,6 +1646,7 @@ void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);
  * context_a.address fields and will leave the stashing fields provided by the
  * user alone, otherwise it will zero out the context_a.stashing fields.
  */
+__rte_internal
 int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);
 
 /**
@@ -1659,6 +1676,7 @@ int qman_schedule_fq(struct qman_fq *fq);
  * caller should be prepared to accept the callback as the function is called,
  * not only once it has returned.
  */
+__rte_internal
 int qman_retire_fq(struct qman_fq *fq, u32 *flags);
 
 /**
@@ -1668,6 +1686,7 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags);
  * The frame queue must be retired and empty, and if any order restoration list
  * was released as ERNs at the time of retirement, they must all be consumed.
  */
+__rte_internal
 int qman_oos_fq(struct qman_fq *fq);
 
 /**
@@ -1701,6 +1720,7 @@ int qman_query_fq_has_pkts(struct qman_fq *fq);
  * @fq: the frame queue object to be queried
  * @np: storage for the queried FQD fields
  */
+__rte_internal
 int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
 
 /**
@@ -1708,6 +1728,7 @@ int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);
  * @fq: the frame queue object to be queried
  * @frm_cnt: number of frames in the queue
  */
+__rte_internal
 int qman_query_fq_frm_cnt(struct qman_fq *fq, u32 *frm_cnt);
 
 /**
@@ -1738,6 +1759,7 @@ int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);
  * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the
  * "flags" retrieved from qman_fq_state().
  */
+__rte_internal
 int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
 
 /**
@@ -1773,8 +1795,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
  * of an already busy hardware resource by throttling many of the to-be-dropped
  * enqueues "at the source".
  */
+__rte_internal
 int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
 
+__rte_internal
 int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
 		       int frames_to_send);
 
@@ -1788,6 +1812,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
  * This API is similar to qman_enqueue_multi(), but it takes fd which needs
  * to be processed by different frame queues.
  */
+__rte_internal
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		      u32 *flags, int frames_to_send);
@@ -1876,6 +1901,7 @@ int qman_shutdown_fq(u32 fqid);
  * @fqid: the base FQID of the range to deallocate
  * @count: the number of FQIDs in the range
  */
+__rte_internal
 int qman_reserve_fqid_range(u32 fqid, unsigned int count);
 static inline int qman_reserve_fqid(u32 fqid)
 {
@@ -1895,6 +1921,7 @@ static inline int qman_reserve_fqid(u32 fqid)
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_pool(u32 *result)
 {
@@ -1942,6 +1969,7 @@ void qman_seed_pool_range(u32 id, unsigned int count);
  * any unspecified parameters) will be used rather than a modify hardware command
  * (which only modifies the specified parameters).
  */
+__rte_internal
 int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -1964,6 +1992,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,
  * is executed. This must be executed on the same affine portal on which it was
  * created.
  */
+__rte_internal
 int qman_delete_cgr(struct qman_cgr *cgr);
 
 /**
@@ -1980,6 +2009,7 @@ int qman_delete_cgr(struct qman_cgr *cgr);
  * unspecified parameters) will be used rather than a modify hardware command (which
  * only modifies the specified parameters).
  */
+__rte_internal
 int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,
 		    struct qm_mcc_initcgr *opts);
 
@@ -2008,6 +2038,7 @@ int qman_query_congestion(struct qm_mcr_querycongestion *congestion);
  * than requested (though alignment will be as requested). If @partial is zero,
  * the return value will either be 'count' or negative.
  */
+__rte_internal
 int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);
 static inline int qman_alloc_cgrid(u32 *result)
 {
@@ -2021,6 +2052,7 @@ static inline int qman_alloc_cgrid(u32 *result)
  * @id: the base CGR ID of the range to deallocate
  * @count: the number of CGR IDs in the range
  */
+__rte_internal
 void qman_release_cgrid_range(u32 id, unsigned int count);
 static inline void qman_release_cgrid(u32 id)
 {
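
As context for the enqueue path being tagged above, a rough usage sketch
(assuming the caller has already created and initialized the frame queue and
its affine portal; error handling trimmed):

	#include <fsl_qman.h>

	/* Enqueue one frame descriptor on an initialized frame queue.
	 * qman_enqueue() returns 0 on success and negative when the enqueue
	 * ring is busy, in which case callers typically retry or drop.
	 */
	static int send_one_fd(struct qman_fq *fq, const struct qm_fd *fd)
	{
		return qman_enqueue(fq, fd, 0);
	}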
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index 263d9bb976..30ec63a09d 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -58,6 +58,7 @@ int bman_allocate_raw_portal(struct dpaa_raw_portal *portal);
 int bman_free_raw_portal(struct dpaa_raw_portal *portal);
 
 /* Obtain thread-local UIO file-descriptors */
+__rte_internal
 int qman_thread_fd(void);
 int bman_thread_fd(void);
 
@@ -66,8 +67,12 @@ int bman_thread_fd(void);
  * processing is complete. As such, it is essential to call this before going
  * into another blocking read/select/poll.
  */
+__rte_internal
 void qman_thread_irq(void);
+
+__rte_internal
 void bman_thread_irq(void);
+__rte_internal
 void qman_fq_portal_thread_irq(struct qman_portal *qp);
 
 void qman_clear_irq(void);
@@ -77,6 +82,7 @@ int qman_global_init(void);
 int bman_global_init(void);
 
 /* Direct portal create and destroy */
+__rte_internal
 struct qman_portal *fsl_qman_fq_portal_create(int *fd);
 int fsl_qman_fq_portal_destroy(struct qman_portal *qp);
 int fsl_qman_fq_portal_init(struct qman_portal *qp);
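
The portal create/init/destroy trio above pairs naturally; a minimal sketch
of the expected call order (assuming the DPAA bus has been probed; checks
trimmed):

	#include <fsl_usd.h>

	int fd;
	struct qman_portal *qp = fsl_qman_fq_portal_create(&fd);

	if (qp != NULL) {
		fsl_qman_fq_portal_init(qp);
		/* ... service the portal, waking on 'fd' readiness ... */
		fsl_qman_fq_portal_destroy(qp);
	}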
diff --git a/drivers/bus/dpaa/include/netcfg.h b/drivers/bus/dpaa/include/netcfg.h
index bf7bfae8cb..d7d1befd24 100644
--- a/drivers/bus/dpaa/include/netcfg.h
+++ b/drivers/bus/dpaa/include/netcfg.h
@@ -46,11 +46,13 @@ struct netcfg_interface {
  * cfg_file: FMC config XML file
  * Returns the configuration information in newly allocated memory.
  */
+__rte_internal
 struct netcfg_info *netcfg_acquire(void);
 
 /* cfg_ptr: configuration information pointer.
  * Frees the resources allocated by the configuration layer.
  */
+__rte_internal
 void netcfg_release(struct netcfg_info *cfg_ptr);
 
 #ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
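
The acquire/release pair above owns the allocation end to end; a minimal
lifecycle sketch (the inspection step is illustrative only):

	#include <netcfg.h>

	struct netcfg_info *cfg = netcfg_acquire();

	if (cfg != NULL) {
		/* walk the discovered FMAN interfaces here */
		netcfg_release(cfg);	/* frees what netcfg_acquire() allocated */
	}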
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index e6ca4361e0..f4947fac41 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_20.0 {
+INTERNAL {
 	global:
 
 	bman_acquire;
@@ -13,7 +13,6 @@ DPDK_20.0 {
 	dpaa_logtype_pmd;
 	dpaa_netcfg;
 	dpaa_svr_family;
-	fman_ccsr_map_fd;
 	fman_dealloc_bufs_mask_hi;
 	fman_dealloc_bufs_mask_lo;
 	fman_if_add_mac_addr;
@@ -51,7 +50,6 @@ DPDK_20.0 {
 	qm_channel_pool1;
 	qman_alloc_cgrid_range;
 	qman_alloc_pool_range;
-	qman_clear_irq;
 	qman_create_cgr;
 	qman_create_fq;
 	qman_dca_index;
@@ -87,10 +85,7 @@ DPDK_20.0 {
 	qman_volatile_dequeue;
 	rte_dpaa_driver_register;
 	rte_dpaa_driver_unregister;
-	rte_dpaa_mem_ptov;
 	rte_dpaa_portal_fq_close;
 	rte_dpaa_portal_fq_init;
 	rte_dpaa_portal_init;
-
-	local: *;
 };
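
The DPDK_20.0 -> INTERNAL rename above is the version-map half of the
pattern; the __rte_internal tag in the headers is the other half. A minimal
sketch of the two together, with a hypothetical symbol name (not from this
patch):

	/* header: mark the symbol as internal to DPDK components */
	#include <rte_compat.h>

	__rte_internal
	int dpaa_hypothetical_helper(void);

	/* version.map: export it under the INTERNAL version node */
	INTERNAL {
		global:

		dpaa_hypothetical_helper;
	};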
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 373aca9785..d4aee132ef 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -158,6 +158,7 @@ rte_dpaa_mem_vtop(void *vaddr)
  *   A pointer to a rte_dpaa_driver structure describing the driver
  *   to be registered.
  */
+__rte_internal
 void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
 
 /**
@@ -167,6 +168,7 @@ void rte_dpaa_driver_register(struct rte_dpaa_driver *driver);
  *	A pointer to a rte_dpaa_driver structure describing the driver
  *	to be unregistered.
  */
+__rte_internal
 void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 
 /**
@@ -178,10 +180,13 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
  * @return
  *	0 in case of success, error otherwise
  */
+__rte_internal
 int rte_dpaa_portal_init(void *arg);
 
+__rte_internal
 int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
 
+__rte_internal
 int rte_dpaa_portal_fq_close(struct qman_fq *fq);
 
 /**
-- 
2.17.1
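
For completeness, the register/unregister pair tagged in rte_dpaa_bus.h is
normally driven from a constructor in each DPAA PMD; a hedged sketch (driver
object fields elided, names hypothetical):

	#include <rte_common.h>
	#include <rte_dpaa_bus.h>

	static struct rte_dpaa_driver hypothetical_pmd; /* .probe/.remove set elsewhere */

	RTE_INIT(hypothetical_pmd_register)
	{
		rte_dpaa_driver_register(&hypothetical_pmd);
	}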



-- links below jump to the message on this page --
2019-09-06  9:45     [dpdk-dev] [PATCH v2 0/6] RCU integration with LPM library Ruifeng Wang
2020-06-08  5:16     ` [dpdk-dev] [PATCH v4 0/3] " Ruifeng Wang
2020-06-08  5:16       ` [dpdk-dev] [PATCH v4 1/3] lib/lpm: integrate RCU QSBR Ruifeng Wang
2020-06-08 18:46  3%     ` Honnappa Nagarahalli
2020-06-18 17:36  0%       ` Medvedkin, Vladimir
2020-02-12 23:08     [dpdk-dev] [RFC 0/4] Enforce flag checking in API's Stephen Hemminger
2020-04-27 23:16     ` [dpdk-dev] [PATCH v3 0/4] Enforce checking on flag values " Stephen Hemminger
2020-04-28 10:28       ` Bruce Richardson
2020-06-16 15:47  0%     ` Thomas Monjalon
2020-02-17 15:38     [dpdk-dev] [PATCH] doc: plan splitting the ethdev ops struct Ferruh Yigit
2020-02-18  5:07     ` Jerin Jacob
2020-05-26 13:01  0%   ` Thomas Monjalon
2020-02-25 12:44     [dpdk-dev] [PATCH v2] " Ferruh Yigit
2020-03-04  9:57     ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2020-05-24 23:18  0%   ` Thomas Monjalon
2020-05-25  9:11  0%     ` Andrew Rybchenko
2020-05-26 13:55  0%       ` Thomas Monjalon
2020-05-25 10:24  0%   ` David Marchand
2020-03-05  4:33     [dpdk-dev] [RFC v1 1/1] vfio: set vf token and gain vf device access vattunuru
2020-05-28  1:22  4% ` [dpdk-dev] [PATCH v14 0/2] support for VFIO-PCI VF token interface Haiyue Wang
2020-05-29  1:37  4% ` [dpdk-dev] [PATCH v15 " Haiyue Wang
2020-06-17  6:33  4% ` [dpdk-dev] [PATCH v16 " Haiyue Wang
2020-03-17  1:17     [dpdk-dev] [PATCH v3 00/12] generic rte atomic APIs deprecate proposal Phil Yang
2020-05-12  8:03     ` [dpdk-dev] [PATCH v4 0/4] " Phil Yang
2020-05-12  8:03       ` [dpdk-dev] [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics Phil Yang
2020-05-12 18:20         ` Stephen Hemminger
2020-05-12 19:23           ` Honnappa Nagarahalli
2020-05-13  8:57             ` Morten Brørup
2020-05-13 19:04               ` Mattias Rönnblom
2020-05-13 19:40                 ` Honnappa Nagarahalli
2020-05-13 20:17                   ` Mattias Rönnblom
2020-05-14  8:34                     ` Morten Brørup
2020-05-14 20:16  0%                   ` Mattias Rönnblom
2020-05-14 21:00  0%                     ` Honnappa Nagarahalli
2020-04-16 14:54     [dpdk-dev] [PATCHv3] Remove validate-abi.sh from tree Neil Horman
2020-05-24 20:34 39% ` [dpdk-dev] [PATCH v4] devtools: remove old ABI validation script Thomas Monjalon
2020-04-21  2:04     [dpdk-dev] [PATCH] devtools: remove useless files from ABI reference Thomas Monjalon
2020-05-24 17:43 13% ` [dpdk-dev] [PATCH v2] " Thomas Monjalon
2020-05-28 13:16  4%   ` David Marchand
2020-04-23 10:12     [dpdk-dev] [PATCH v1] abi: document reasons behind the three part versioning Ray Kinsella
2020-05-05  8:56     ` [dpdk-dev] [PATCH v2] " Ray Kinsella
2020-05-18 16:20  4%   ` Thomas Monjalon
2020-04-28 23:50     [dpdk-dev] [PATCH v4 0/8] Windows basic memory management Dmitry Kozlyuk
2020-05-25  0:37     ` [dpdk-dev] [PATCH v5 " Dmitry Kozlyuk
2020-05-25  0:37  4%   ` [dpdk-dev] [PATCH v5 01/11] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
2020-06-02 23:03       ` [dpdk-dev] [PATCH v6 00/11] Windows basic memory management Dmitry Kozlyuk
2020-06-02 23:03  9%     ` [dpdk-dev] [PATCH v6 01/11] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
2020-06-03  1:59  0%       ` Stephen Hemminger
2020-06-02 23:03         ` [dpdk-dev] [PATCH v6 02/11] eal: introduce internal wrappers for file operations Dmitry Kozlyuk
2020-06-03 12:07           ` Neil Horman
2020-06-03 12:34             ` Dmitry Kozlyuk
2020-06-04 21:07               ` Neil Horman
2020-06-05  0:16                 ` Dmitry Kozlyuk
2020-06-05 11:19  3%               ` Neil Horman
2020-06-08  7:41         ` [dpdk-dev] [PATCH v6 00/11] Windows basic memory management Dmitry Kozlyuk
2020-06-08  7:41  9%       ` [dpdk-dev] [PATCH v7 01/11] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
2020-06-10 14:27           ` [dpdk-dev] [PATCH v8 00/11] Windows basic memory management Dmitry Kozlyuk
2020-06-10 14:27  9%         ` [dpdk-dev] [PATCH v8 01/11] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
2020-06-15  0:43             ` [dpdk-dev] [PATCH v9 00/12] Windows basic memory management Dmitry Kozlyuk
2020-06-15  0:43  9%           ` [dpdk-dev] [PATCH v9 01/12] eal: replace rte_page_sizes with a set of constants Dmitry Kozlyuk
2020-04-30  5:46     [dpdk-dev] [PATCH v1 1/2] devtools: add internal ABI version check Haiyue Wang
2020-04-30  5:46     ` [dpdk-dev] [PATCH v1 2/2] devtools: updating internal symbols ABI version Haiyue Wang
2020-05-19 15:10  9%   ` David Marchand
2020-05-19 15:35  4% ` [dpdk-dev] [PATCH v1 1/2] devtools: add internal ABI version check David Marchand
2020-05-19 16:54  4%   ` David Marchand
2020-05-01 17:16     [dpdk-dev] [PATCH] doc: deprication notice to mark tm spec as experimental Nithin Dabilpuram
2020-05-05  8:07     ` [dpdk-dev] [PATCH v2] " Nithin Dabilpuram
2020-05-05  8:55       ` Dumitrescu, Cristian
2020-05-21 10:49  0%     ` Jerin Jacob
2020-05-24 20:58  0%       ` Nithin Kumar D
2020-05-24 23:33  0%       ` Thomas Monjalon
2020-05-05 17:49     [dpdk-dev] [PATCH v1] doc: fix references to bind_default_symbol Ray Kinsella
2020-05-06 15:41     ` [dpdk-dev] [PATCH v3] " Ray Kinsella
2020-05-18 16:54  3%   ` Thomas Monjalon
2020-05-12 14:00     [dpdk-dev] [PATCH v2 01/12] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-13 13:27     ` [dpdk-dev] [PATCH v3 00/12] NXP DPAAx: move internal symbols to INTERNAL Hemant Agrawal
2020-05-13 13:27       ` [dpdk-dev] [PATCH v3 01/12] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-13 14:06         ` Hemant Agrawal (OSS)
2020-05-14  7:13           ` Ray Kinsella
2020-05-14  9:53             ` Hemant Agrawal (OSS)
2020-05-14 10:09               ` Ray Kinsella
2020-05-14 11:06                 ` Hemant Agrawal (OSS)
2020-05-14 11:10                   ` Ray Kinsella
2020-05-14 11:19                     ` David Marchand
2020-05-14 11:23                       ` Hemant Agrawal (OSS)
2020-05-14 12:38                         ` Hemant Agrawal (OSS)
2020-05-14 13:31                           ` David Marchand
2020-05-14 16:28  3%                         ` David Marchand
2020-05-14 17:15  0%                           ` Hemant Agrawal (OSS)
2020-05-15  9:26  0%                             ` Thomas Monjalon
2020-05-15 11:19  5%                               ` Thomas Monjalon
2020-05-13 10:43     [dpdk-dev] [PATCH v1] doc: fix typos and errors in abi policy doc Gaetan Rivet
2020-05-14  6:40     ` Ray Kinsella
2020-05-19  9:46  4%   ` Thomas Monjalon
2020-05-13 12:11     [dpdk-dev] [PATCH] meter: provide experimental alias of API for old apps Ferruh Yigit
2020-05-14 11:52     ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2020-05-14 15:32  0%   ` David Marchand
2020-05-14 15:56  0%     ` Ray Kinsella
2020-05-14 16:07  0%     ` Ferruh Yigit
2020-05-14 16:11  4% ` [dpdk-dev] [PATCH v4] " Ferruh Yigit
2020-05-15 14:36 12%   ` [dpdk-dev] [PATCH v5] abi: " Ray Kinsella
2020-05-15 15:01 12%   ` [dpdk-dev] [PATCH v6] " Ray Kinsella
2020-05-16 11:53  4%     ` Neil Horman
2020-05-18 17:18  4%       ` Thomas Monjalon
2020-05-18 17:34  4%         ` Ferruh Yigit
2020-05-18 17:51  4%           ` Thomas Monjalon
2020-05-18 18:32  4%             ` Ferruh Yigit
2020-05-19 14:13  4%               ` Ray Kinsella
2020-05-19 14:14  4%         ` Ray Kinsella
2020-05-17 19:52  0%   ` [dpdk-dev] [PATCH v4] meter: " Dumitrescu, Cristian
2020-05-18  6:29  4%     ` Ray Kinsella
2020-05-18  9:22  4%       ` Thomas Monjalon
2020-05-18  9:30  4%         ` Ray Kinsella
2020-05-18 10:46  3%           ` Thomas Monjalon
2020-05-18 11:18  0%             ` Dumitrescu, Cristian
2020-05-18 11:49  0%               ` Ray Kinsella
2020-05-18 11:48  4%             ` Ray Kinsella
2020-05-18 12:13  3%               ` Thomas Monjalon
2020-05-18 13:06  0%                 ` Ray Kinsella
2020-05-18 18:30  2% ` [dpdk-dev] [PATCH v5] " Ferruh Yigit
2020-05-19 12:16 10% ` [dpdk-dev] [PATCH v6] " Ferruh Yigit
2020-05-19 13:26  0%   ` Dumitrescu, Cristian
2020-05-19 14:24  0%     ` Thomas Monjalon
2020-05-19 14:22  0%   ` Ray Kinsella
2020-05-14 13:25     [dpdk-dev] [PATCH v4 00/13]NXP DPAAx: move internal symbols to INTERNAL Hemant Agrawal
2020-05-14 14:24     ` [dpdk-dev] [PATCH v5 00/13] NXP " Hemant Agrawal
2020-05-14 14:24  1%   ` [dpdk-dev] [PATCH v5 03/13] bus/dpaa: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-14 14:29     ` [dpdk-dev] [PATCH v6 00/13] NXP DPAAx: move internal symbols to INTERNAL Hemant Agrawal
2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 02/13] bus/fslmc: " Hemant Agrawal
2020-05-14 14:29  1%   ` [dpdk-dev] [PATCH v6 03/13] bus/dpaa: " Hemant Agrawal
2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 04/13] crypto: " Hemant Agrawal
2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 05/13] mempool/dpaa2: " Hemant Agrawal
2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 06/13] net/dpaa: " Hemant Agrawal
2020-05-14 14:29  3%   ` [dpdk-dev] [PATCH v6 07/13] net/dpaa2: " Hemant Agrawal
2020-05-15  5:08       ` [dpdk-dev] [PATCH v7 00/13] NXP DPAAx: move internal symbols to INTERNAL Hemant Agrawal
2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 02/13] bus/fslmc: " Hemant Agrawal
2020-05-15  5:08  1%     ` [dpdk-dev] [PATCH v7 03/13] bus/dpaa: " Hemant Agrawal
2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 04/13] mempool/dpaa2: " Hemant Agrawal
2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 05/13] net/dpaa: " Hemant Agrawal
2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 06/13] net/dpaa2: " Hemant Agrawal
2020-05-15  5:08  3%     ` [dpdk-dev] [PATCH v7 07/13] crypto: " Hemant Agrawal
2020-05-15  9:47     ` [dpdk-dev] [PATCH v8 00/13] NXP DPAAx: move internal symbols to INTERNAL Hemant Agrawal
2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 01/13] common/dpaax: move internal symbols into INTERNAL section Hemant Agrawal
2020-05-19  6:43  0%     ` Hemant Agrawal
2020-05-19  6:44  0%       ` Ray Kinsella
2020-05-19  9:51  0%     ` Ray Kinsella
2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 02/13] bus/fslmc: " Hemant Agrawal
2020-05-19 10:00  0%     ` Ray Kinsella
2020-05-15  9:47  1%   ` [dpdk-dev] [PATCH v8 03/13] bus/dpaa: " Hemant Agrawal
2020-05-19 10:56  0%     ` Ray Kinsella
2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 04/13] mempool/dpaa2: " Hemant Agrawal
2020-05-19 11:03  0%     ` Ray Kinsella
2020-05-19 11:16  0%       ` Hemant Agrawal
2020-05-19 11:30  0%         ` Ray Kinsella
2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 05/13] net/dpaa: " Hemant Agrawal
2020-05-19 11:14  0%     ` Ray Kinsella
2020-05-19 11:39  0%       ` Hemant Agrawal
2020-05-19 11:41  0%         ` Ray Kinsella
2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 06/13] net/dpaa2: " Hemant Agrawal
2020-05-19 11:15  0%     ` Ray Kinsella
2020-05-15  9:47  3%   ` [dpdk-dev] [PATCH v8 07/13] crypto: " Hemant Agrawal
2020-05-19 11:17  0%     ` Ray Kinsella
2020-05-18 10:39  3% [dpdk-dev] DPDK 20.05 RC2 Test Report Peng, Yuan
2020-05-19  7:36  3% [dpdk-dev] [PATCH] doc: fix doc build failure Raslan Darawsheh
2020-05-19  7:39  0% ` Ray Kinsella
2020-05-19  8:03  0% ` David Marchand
2020-05-20 13:58  3% [dpdk-dev] [PATCH 0/4] fix build with GCC 10 Thomas Monjalon
2020-05-20 13:58 14% ` [dpdk-dev] [PATCH 4/4] devtools: allow warnings in ABI reference build Thomas Monjalon
2020-05-20 14:52  0% ` [dpdk-dev] [PATCH 0/4] fix build with GCC 10 David Marchand
2020-05-20 16:45  0% ` Kevin Traynor
2020-05-21 13:39  0%   ` Thomas Monjalon
2020-05-21 11:20  3% [dpdk-dev] DPDK Release Status Meeting 21/05/2020 Ferruh Yigit
2020-05-21 11:24  0% ` Ferruh Yigit
2020-05-21 11:24  3% [dpdk-dev] DPDK-20.05 RC3 day2 quick report Peng, Yuan
2020-05-22  6:58  4% [dpdk-dev] [PATCH 0/3] Experimental/internal libraries cleanup David Marchand
2020-05-22  6:58 17% ` [dpdk-dev] [PATCH 1/3] build: remove special versioning for non stable libraries David Marchand
2020-05-22  6:58  3% ` [dpdk-dev] [PATCH 2/3] drivers: drop workaround for internal libraries David Marchand
2020-05-22  6:58     ` [dpdk-dev] [PATCH 3/3] lib: remind experimental status in library headers David Marchand
2020-05-22 14:15       ` Honnappa Nagarahalli
2020-05-28  6:53  5%     ` David Marchand
2020-05-28 18:40  3%       ` Honnappa Nagarahalli
2020-05-22 13:23     [dpdk-dev] [PATCH 20.08 0/9] adding support for python 3 only Louise Kilheeney
2020-05-22 13:23  4% ` [dpdk-dev] [PATCH 20.08 8/9] devtools: support python3 only Louise Kilheeney
2020-05-27  6:15  0%   ` Ray Kinsella
2020-06-17 15:10     ` [dpdk-dev] [PATCH v2 0/9] adding support for python 3 only Louise Kilheeney
2020-06-17 15:10  4%   ` [dpdk-dev] [PATCH v2 8/9] devtools: support python3 only Louise Kilheeney
2020-05-22 14:06  4% [dpdk-dev] [PATCH v1] doc: update release notes for 20.05 John McNamara
2020-05-22 15:17  0% ` Kevin Traynor
2020-05-25  4:16  3% [dpdk-dev] DPDK-20.05 RC3 quick report Peng, Yuan
2020-05-25 19:11  4% [dpdk-dev] [PATCH v2] doc: update release notes for 20.05 John McNamara
2020-05-26  8:33  3% [dpdk-dev] DPDK-20.05 RC4 quick report Peng, Yuan
2020-05-26 19:53  3% [dpdk-dev] [dpdk-announce] DPDK 20.05 released Thomas Monjalon
2020-05-27  8:41  7% [dpdk-dev] [PATCH] version: 20.08-rc0 David Marchand
2020-05-27 10:41  7% [dpdk-dev] ABI versioning in Windows Fady Bader
2020-05-27 12:50  7% ` Thomas Monjalon
2020-05-27 14:32  4%   ` Ray Kinsella
2020-05-27 20:35  7%   ` Neil Horman
2020-05-27 21:27  4%     ` Thomas Monjalon
2020-05-27 21:43  4%       ` Thomas Monjalon
2020-05-28  0:28  4%         ` Neil Horman
2020-05-28  0:21  4%       ` Neil Horman
2020-05-28  1:04  3% [dpdk-dev] [PATCH] stack: remove experimental tag from API Gage Eads
2020-05-28  5:46  3% ` Ray Kinsella
2020-05-28 15:04  8% ` [dpdk-dev] [20.11] [PATCH v2] " Gage Eads
2020-05-31 14:43  3% [dpdk-dev] [RFC] ethdev: add fragment attribute to IPv6 item Dekel Peled
2020-06-01  5:38  3% ` Stephen Hemminger
2020-06-01  6:11  0%   ` Dekel Peled
2020-06-02 14:32  0% ` Andrew Rybchenko
2020-06-02 18:28  0%   ` Ori Kam
2020-06-02 19:04  3%     ` Adrien Mazarguil
2020-06-03  8:16  0%       ` Ori Kam
2020-06-03 12:10  0%         ` Dekel Peled
2020-06-18  6:58  0%           ` Dekel Peled
2020-06-01 10:31     [dpdk-dev] [PATCH v2 0/4] build mempool on Windows Fady Bader
2020-06-01 21:46     ` [dpdk-dev] [EXTERNAL] Re: [PATCH v2 1/4] eal: disable function versioning " Omar Cardona
2020-06-02 10:27       ` Neil Horman
2020-06-02 10:40         ` Thomas Monjalon
2020-06-11 10:09  4%       ` Kinsella, Ray
2020-06-03 19:43  3% [dpdk-dev] 19.11.3 patches review and test luca.boccassi
2020-06-10  7:19  0% ` Yu, PingX
2020-06-10  8:35  0%   ` Luca Boccassi
2020-06-15  2:05  3% ` Pei Zhang
2020-06-15  9:09  0%   ` Luca Boccassi
2020-06-16 14:20  0% ` Govindharajan, Hariprasad
2020-06-18 18:11  3% ` [dpdk-dev] [EXTERNAL] " Abhishek Marathe
2020-06-18 18:17  0%   ` Luca Boccassi
2020-06-04 21:02  6% [dpdk-dev] [RFC] doc: change to diverse and inclusive language Stephen Hemminger
2020-06-05  7:54  0% ` Luca Boccassi
2020-06-05  8:35  0%   ` Bruce Richardson
2020-06-05 21:40  4% ` Aaron Conole
2020-06-07 12:26     [dpdk-dev] Handling missing export functions in MSVC linkage Tal Shnaiderman
2020-06-08  0:09     ` Dmitry Kozlyuk
2020-06-08  8:33  3%   ` David Marchand
2020-06-07 17:01     [dpdk-dev] [PATCH 0/9] Rename blacklist/whitelist to blocklist/allowlist Stephen Hemminger
2020-06-07 17:01  4% ` [dpdk-dev] [PATCH 9/9] doc: add note about blacklist/whitelist changes Stephen Hemminger
2020-06-08 19:25     ` [dpdk-dev] [PATCH v2 00/10] Rename blacklist/whitelist to Stephen Hemminger
2020-06-08 19:25  4%   ` [dpdk-dev] [PATCH v2 09/10] doc: add note about blacklist/whitelist changes Stephen Hemminger
2020-06-09  5:29     [dpdk-dev] [PATCH] mbuf: remove unused next member Xiaolong Ye
2020-06-09  7:17     ` Olivier Matz
2020-06-09  7:15       ` Ye Xiaolong
2020-06-09 15:29  3%     ` Stephen Hemminger
2020-06-10  0:54  3%       ` Ye Xiaolong
2020-06-09 10:31     [dpdk-dev] [PATCH v5 1/8] eal: move OS common functions to single file talshn
2020-06-18 21:15     ` [dpdk-dev] [PATCH v6 0/9] Windows bus/pci support talshn
2020-06-18 21:15  3%   ` [dpdk-dev] [PATCH v6 9/9] build: generate version.map file for MingW on Windows talshn
2020-06-10  6:38  2% [dpdk-dev] [RFC] mbuf: accurate packet Tx scheduling Viacheslav Ovsiienko
2020-06-10 13:33  0% ` Harman Kalra
2020-06-10 15:16  0%   ` Slava Ovsiienko
2020-06-17 15:57  0%     ` [dpdk-dev] [EXT] " Harman Kalra
2020-06-10 14:44     [dpdk-dev] [PATCH 0/7] Register external threads as lcore David Marchand
2020-06-10 14:45  3% ` [dpdk-dev] [PATCH 2/7] eal: fix multiple definition of per lcore thread id David Marchand
2020-06-15  6:46  0%   ` Kinsella, Ray
2020-06-10 14:45     ` [dpdk-dev] [PATCH 7/7] eal: add lcore hotplug notifications David Marchand
2020-06-15  6:34  3%   ` Kinsella, Ray
2020-06-15  7:13  0%     ` David Marchand
2020-06-19 16:22     ` [dpdk-dev] [PATCH v2 0/9] Register non-EAL threads as lcore David Marchand
2020-06-19 16:22  3%   ` [dpdk-dev] [PATCH v2 2/9] eal: fix multiple definition of per lcore thread id David Marchand
2020-06-12  0:20     [dpdk-dev] [PATCH v2 00/10] Rename blacklsit/whitelist to block/allow list Stephen Hemminger
2020-06-12  0:20  4% ` [dpdk-dev] [PATCH v2 09/10] doc: add note about blacklist/whitelist changes Stephen Hemminger
2020-06-12 21:24     [dpdk-dev] [PATCH 00/27] V1 event/dlb add Intel DLB PMD McDaniel, Timothy
2020-06-12 21:24     ` [dpdk-dev] [PATCH 01/27] eventdev: dlb upstream prerequisites McDaniel, Timothy
2020-06-13  3:59  5%   ` Jerin Jacob
2020-06-13 10:43  0%     ` Mattias Rönnblom
2020-06-18 15:51  0%       ` McDaniel, Timothy
2020-06-18 15:44  5%     ` McDaniel, Timothy
2020-06-12 21:24  1% ` [dpdk-dev] [PATCH 03/27] event/dlb: add shared code version 10.7.9 McDaniel, Timothy
2020-06-12 21:24  1% ` [dpdk-dev] [PATCH 08/27] event/dlb: add definitions shared with LKM or shared code McDaniel, Timothy
2020-06-13  0:00     [dpdk-dev] [PATCH v3 00/10] rename blacklist/whitelist to block/allow Stephen Hemminger
2020-06-13  0:00  4% ` [dpdk-dev] [PATCH v3 09/10] doc: add note about blacklist/whitelist changes Stephen Hemminger
2020-06-14 22:57     [dpdk-dev] [PATCH 0/4] add PPC and Windows to meson test Thomas Monjalon
2020-06-15 22:22  3% ` [dpdk-dev] [PATCH v2 0/4] add PPC and Windows cross-compilation " Thomas Monjalon
2020-06-15 22:22  7%   ` [dpdk-dev] [PATCH v2 1/4] devtools: shrink cross-compilation test definition Thomas Monjalon
2020-06-17 21:05  0%     ` David Christensen
2020-06-15 22:52     [dpdk-dev] Aligning DPDK Link bonding with current standards terminology Stephen Hemminger
2020-06-16 11:48     ` Jay Rolette
2020-06-16 13:52       ` Chas Williams
2020-06-16 15:45  3%     ` Stephen Hemminger
2020-06-16 20:27  3%       ` Chas Williams
2020-06-18 10:26  4% [dpdk-dev] DPDK Release Status Meeting 18/06/2020 Ferruh Yigit
2020-06-18 14:09  3% ` Trahe, Fiona
2020-06-18 15:26  3%   ` Ferruh Yigit
2020-06-18 15:30  0%     ` Thomas Monjalon
2020-06-18 16:28     [dpdk-dev] [PATCH v1 0/4] vhost: improve ready state Matan Azrad
2020-06-18 16:28  4% ` [dpdk-dev] [PATCH v1 1/4] vhost: support host notifier queue configuration Matan Azrad
2020-06-19  6:44  0%   ` Maxime Coquelin
2020-06-19 13:28  0%     ` Matan Azrad
2020-06-19 14:01  4%       ` Maxime Coquelin
2020-06-21  6:26  0%         ` Matan Azrad
2020-06-22  8:06  0%           ` Maxime Coquelin
2020-06-18 19:06  2% [dpdk-dev] [dpdk-announce] DPDK 19.11.3 released luca.boccassi
2020-06-18 21:15     [dpdk-dev] [PATCH v6 1/9] eal: move OS common functions to single file talshn
2020-06-21 10:26     ` [dpdk-dev] [PATCH v7 0/9] Windows bus/pci support talshn
2020-06-21 10:26  4%   ` [dpdk-dev] [PATCH v7 9/9] build: generate version.map file for MinGW on Windows talshn
2020-06-21 10:26     [dpdk-dev] [PATCH v7 1/9] eal: move OS common functions to single file talshn
2020-06-22  7:55     ` [dpdk-dev] [PATCH v8 0/9] Windows bus/pci support talshn
2020-06-22  7:55  4%   ` [dpdk-dev] [PATCH v8 9/9] build: generate version.map file for MinGW on Windows talshn
