* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-15 7:52 0% ` Maxime Coquelin
@ 2023-06-15 19:30 5% ` Chautru, Nicolas
0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2023-06-15 19:30 UTC (permalink / raw)
To: Maxime Coquelin, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> On 6/14/23 20:18, Chautru, Nicolas wrote:
> > Hi Maxime,
> >
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com> Hi,
> >>
> >> On 6/13/23 19:16, Chautru, Nicolas wrote:
> >>> Hi Maxime,
> >>>
> >>>> -----Original Message-----
> >>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>
> >>>>
> >>>> On 6/12/23 22:53, Chautru, Nicolas wrote:
> >>>>> Hi Maxime, David,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>>>>
> >>>>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
> >>>>>>> Hi David,
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: David Marchand <david.marchand@redhat.com>> >> On
> >> Mon, Jun
> >>>> 5,
> >>>>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
> >>>>>>>> wrote:
> >>>>>>>>> Wrt the MLD functions: these are new in the related series
> >>>>>>>>> but they still
> >>>>>>>> break the ABI since the struct rte_bbdev includes these
> >>>>>>>> functions hence causing offset changes.
> >>>>>>>>>
> >>>>>>>>> Should I then just rephrase as:
> >>>>>>>>>
> >>>>>>>>> +* bbdev: Will extend the API to support the new operation
> >>>>>>>>> +type ``RTE_BBDEV_OP_MLDTS`` as per this `v1
> >>>>>>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> >>>>>>>>> + This will notably introduce new symbols for
> >>>>>>>>> +``rte_bbdev_dequeue_mldts_ops``, ``rte_bbdev_enqueue_mldts_ops``
> >>>>>>>>> +into the struct rte_bbdev.
> >>>>>>>>
> >>>>>>>> I don't think we need this deprecation notice.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Do you need to expose those new mldts ops in rte_bbdev?
> >>>>>>>> Can't they go to dev_ops?
> >>>>>>>> If you can't, at least moving those new ops at the end of the
> >>>>>>>> structure would avoid the breakage on rte_bbdev.
> >>>>>>>
> >>>>>>> It would probably be best to move all these ops at the end of
> >>>>>>> the structure
> >>>>>> (ie. keep them together).
> >>>>>>> In that case the deprecation notice would call out that the
> >>>>>>> rte_bbdev
> >>>>>> structure content is more generally modified. Probably best for
> >>>>>> the longer run.
> >>>>>>> David, Maxime, ok with that option?
> >>>>>>>
> >>>>>>> struct __rte_cache_aligned rte_bbdev {
> >>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> >>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> >>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> >>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> >>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> >>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> >>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> >>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> >>>>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> >>>>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> >>>>>>> const struct rte_bbdev_ops *dev_ops;
> >>>>>>> struct rte_bbdev_data *data;
> >>>>>>> enum rte_bbdev_state state;
> >>>>>>> struct rte_device *device;
> >>>>>>> struct rte_bbdev_cb_list list_cbs;
> >>>>>>> struct rte_intr_handle *intr_handle;
> >>>>>>> };
> >>>>>>
> >>>>>> The best thing, as suggested by David, would be to move all the
> >>>>>> ops out of struct rte_bbdev, as these should not be visible to
> >>>>>> the
> >> application.
> >>>>>
> >>>>> That would be quite disruptive across all PMDs and a possible perf
> >>>>> impact to validate. I don't think it is realistic to consider such
> >>>>> a change in 23.11.
> >>>>> I believe moving these functions to the end of the structure is a
> >>>>> good compromise to avoid future breakage of the rte_bbdev structure
> >>>>> with almost seamless impact (purely an ABI break when moving into
> >>>>> 23.11, which is not avoidable). Retrospectively we should have done
> >>>>> that in 22.11 really.
> >>>>
> >>>> If we are going to break the ABI, better to do the right rework
> >>>> directly. Otherwise we'll end-up breaking it again next year.
> >>>
> >>> With the suggested change, this will not break ABI next year. Any
> >>> future
> >> functions are added at the end of the structure anyway.
> >>
> >> I'm not so sure, it depends if adding a new field at the end cross a
> >> cacheline boundary or not:
> >>
> >> /*
> >> * Global array of all devices. This is not static because it's used by the
> >> * inline enqueue and dequeue functions
> >> */
> >> struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
> >>
> >> If the older inlined functions used by the application retrieve the
> >> dev pointer from the array directly (they do) and the fields added in
> >> the new version cross a cacheline, then there will be a misalignment
> >> between the new lib version and the application using the older
> >> inlined functions.
> >>
> >> ABI-wise, this is not really future proof.
> >>
> >>>
> >>>>
> >>>> IMHO, moving these ops should be quite trivial and not much work.
> >>>>
> >>>> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
> >>>> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it
> >>>> may not break the ABI, but that's a bit fragile:
> >>>> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
> >>>> should be OK
> >>>> - struct rte_bbdev is cache-aligned, so it may work if adding these two
> >>>> ops do not overlap a cacheline which depends on the CPU
> architecture.
> >>>
> >>> If you prefer to add only the 2 new functions at the end of the
> >>> structure, that is okay. I believe it would be cleaner to move all
> >>> these enqueue/dequeue functions down together, without any drawback
> >>> I can think of. Let me know.
> >>
> >> Adding the new ones at the end is not future proof, but at least it
> >> does not break ABI just for cosmetic reasons (that's a big drawback
> IMHO).
> >>
> >> I just checked using pahole:
> >>
> >> struct rte_bbdev {
> >> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
> >> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
> >> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
> >> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
> >> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8 */
> >> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8 */
> >> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8 */
> >> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8 */
> >> /* --- cacheline 1 boundary (64 bytes) --- */
> >> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
> >> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
> >> const struct rte_bbdev_ops * dev_ops; /* 80 8 */
> >> struct rte_bbdev_data * data; /* 88 8 */
> >> enum rte_bbdev_state state; /* 96 4 */
> >>
> >> /* XXX 4 bytes hole, try to pack */
> >>
> >> struct rte_device * device; /* 104 8 */
> >> struct rte_bbdev_cb_list list_cbs; /* 112 16 */
> >> /* --- cacheline 2 boundary (128 bytes) --- */
> >> struct rte_intr_handle * intr_handle; /* 128 8 */
> >>
> >> /* size: 192, cachelines: 3, members: 16 */
> >> /* sum members: 132, holes: 1, sum holes: 4 */
> >> /* padding: 56 */
> >> } __attribute__((__aligned__(64)));
> >>
> >> We're lucky on x86, we still have 56 bytes, so we can add 7 new ops
> >> at the end before breaking the ABI if I'm not mistaken.
> >>
> >> I checked the other architecture, and it seems we don't support any
> >> with 32B cacheline size so we're good for a while.
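(As an aside, here is a tiny standalone sketch of the padding arithmetic above, using toy types rather than the real bbdev ones: appending 8-byte ops is harmless while they fit in the tail padding of the 64-byte-aligned struct, but the array element stride changes once they spill past it.)

/* Toy illustration only -- not DPDK code. */
#include <stdio.h>

typedef void (*op_fn)(void);

struct __attribute__((aligned(64))) dev_old {
	op_fn ops[10];          /* 80 bytes of members -> padded to 128 */
};

struct __attribute__((aligned(64))) dev_new_ok {
	op_fn ops[10];
	op_fn extra[2];         /* 96 bytes, still within the padding: size stays 128 */
};

struct __attribute__((aligned(64))) dev_new_bad {
	op_fn ops[10];
	op_fn extra[7];         /* 136 bytes, crosses the boundary: size grows to 192 */
};

int main(void)
{
	printf("%zu %zu %zu\n", sizeof(struct dev_old),
			sizeof(struct dev_new_ok), sizeof(struct dev_new_bad));
	return 0;
}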
> >
> > OK then just adding the new functions at the end, no other cosmetic
> changes. Will update the patch to match this.
> > In terms of the deprecation notice, are you okay with the latest draft?
> >
> > +* bbdev: Will extend the API to support the new operation type
> > +``RTE_BBDEV_OP_MLDTS`` as per this `v1
> > +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> > + This will notably introduce new symbols for
> > +``rte_bbdev_dequeue_mldts_ops``, ``rte_bbdev_enqueue_mldts_ops``
> into the struct rte_bbdev.
>
> This is not needed in the deprecation notice.
> If you are willing to announce it, it could be part of the Intel roadmap.
>
I still see this ABI failure as we extend the struct (see below); what is the harm in calling it out in the deprecation notice?
1 function with some indirect sub-type change:
[C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at rte_bbdev.c:174:1 has some indirect sub-type changes:
return type changed:
in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
type size hasn't changed
2 data member insertions:
'rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops', at offset 1088 (in bits) at rte_bbdev.h:527:1
'rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops', at offset 1152 (in bits) at rte_bbdev.h:529:1
no data member changes (12 filtered);
Error: ABI issue reported for abidiff --suppr /home-local/jenkins-local/jenkins-agent/workspace/Generic-DPDK-Compile-ABI@2/dpdk/devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2 build_install/usr/local/include reference/usr/local/lib64/librte_bbdev.so.23.0 build_install/usr/local/lib64/librte_bbdev.so.23.2
ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue).
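For reference, a quick sketch (based on the abidiff output above, not the final patch text) of where the two appended members land, inside the existing tail padding so that no pre-existing offset moves:

/* Sketch only, derived from the abidiff/pahole numbers above. */
struct __rte_cache_aligned rte_bbdev {
	/* ... all existing members unchanged, ending with ... */
	struct rte_intr_handle *intr_handle;              /* offset 128, as today */
	/* new members appended within the former padding: */
	rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops;  /* offset 136 (1088 bits) */
	rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops;  /* offset 144 (1152 bits) */
};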
>
> To be on the safe side, could you try to dynamically link an application with
> a DPDK version before this change, then rebuild DPDK with adding these two
> fields. Then test with at least 2 devices with test-bbdev and see if it does not
> crash or fail?
This is not something we would really validate. But I agree the chance of that ABI change having an actual impact is slim, based on the address alignment.
Still, by process, we can use the ABI warning above as a litmus test that the ABI has some minor change.
Also, we introduce this change in the new LTS version, so I am unsure this is controversial or impactful.
Let me know if you have a different opinion.
Thanks
Nic
>
> Thanks,
> Maxime
>
> >
> >>
> >> Maxime
> >>
> >>>
> >>>>
> >>>> Maxime
> >>>>
> >>>>> What do you think Maxime, David? Based on this I can adjust the
> >>>>> change for
> >>>> 23.11 and update slightly the deprecation notice accordingly.
> >>>>>
> >>>>> Thanks
> >>>>> Nic
> >>>>>
> >>>
> >
^ permalink raw reply [relevance 5%]
* DPDK Release Status Meeting 2023-06-15
@ 2023-06-15 17:59 3% Mcnamara, John
0 siblings, 0 replies; 200+ results
From: Mcnamara, John @ 2023-06-15 17:59 UTC (permalink / raw)
To: dev; +Cc: thomas, david.marchand
[-- Attachment #1: Type: text/plain, Size: 3005 bytes --]
Release status meeting minutes 2023-06-15
=========================================
Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens
Participants:
* AMD
* ARM
* Debian/Microsoft
* Intel
* Marvell
* Nvidia
* Red Hat
Release Dates
-------------
The following are the proposed current dates for 23.07:
* V1: 22 April 2023
* RC1: 13 June 2023
* RC2: 23 June 2023 - Moved from 21
* RC3: 30 June 2023 - Moved from 28
* Release: 12 July 2023
Subtrees
--------
* next-net
* Looking at larger patches from SFC and Netronome.
* next-net-intel
* Mostly merged for RC1.
* next-net-mlx
* No update.
* next-net-mvl
* 20 patches merged and ready for RC2.
* next-eventdev
* 3 patches merged and ready for RC2.
* next-baseband
* Some discussion on API/ABI changes in BBDev.
* next-virtio
* Some patches, mostly fixes, under review.
* Majority of the larger features merged in RC1.
* next-crypto
* Comments on some of the patches for RC2: needs review
* MLX5 crypto driver.
* OpenSSL and QAT: from Kai
* Some compilation issues on IPsecMB blocking the merge.
* main
* RC1 is out.
* Graph series has a new revision.
* Meson build script updates.
* Patch from Bruce on tracing/logging
* PCIe maintainership
* Overall we need more maintainers and/or more trees
* Trees for graph and pipeline
* Tree for core and other
Proposed Schedule for 2023
--------------------------
See also http://core.dpdk.org/roadmap/#dates
23.07
* Proposal deadline (RFC/v1 patches): 22 April 2023
* API freeze (-rc1): 7 June 2023
* PMD features freeze (-rc2): 23 June 2023
* Builtin applications features freeze (-rc3): 30 June 2023
* Release: 12 July 2023
23.11
* Proposal deadline (RFC/v1 patches): 12 August 2023
* API freeze (-rc1): 29 September 2023
* PMD features freeze (-rc2): 20 October 2023
* Builtin applications features freeze (-rc3): 27 October 2023
* Release: 15 November 2023
LTS
---
Backporting in progress.
Next LTS releases:
* 22.11.2
* 21.11.5
* 20.11.9
* 19.11.15
* Will be updated with CVE and critical fixes only.
* Distros
* v22.11 in Debian 12
* Ubuntu 22.04-LTS contains 21.11
* Ubuntu 23.04 contains 22.11
Defects
-------
* Bugzilla links, 'Bugs', added for hosted projects
* https://www.dpdk.org/hosted-projects/
Opens
-----
* None
DPDK Release Status Meetings
----------------------------
The DPDK Release Status Meeting is intended for DPDK Committers to discuss the
status of the master tree and sub-trees, and for project managers to track
progress or milestone dates.
The meeting occurs on every Thursday at 9:30 UTC over Jitsi on https://meet.jit.si/DPDK
You don't need an invite to join the meeting but if you want a calendar reminder just
send an email to "John McNamara john.mcnamara@intel.com" for the invite.
[-- Attachment #2: Type: text/html, Size: 16270 bytes --]
^ permalink raw reply [relevance 3%]
* [PATCH v2 5/5] devtools: ignore changes into bbdev experimental API
2023-06-15 16:48 5% [PATCH v2 0/5] bbdev: API extension for 23.11 Nicolas Chautru
@ 2023-06-15 16:49 8% ` Nicolas Chautru
0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2023-06-15 16:49 UTC (permalink / raw)
To: dev, maxime.coquelin
Cc: trix, hemant.agrawal, david.marchand, hernan.vargas, Nicolas Chautru
Developers are warned that the related FFT experimental functions
do not preserve ABI, hence these warnings can be waived.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
devtools/libabigail.abignore | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..09b8f156b5 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -30,7 +30,9 @@
[suppress_type]
type_kind = enum
changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
-
+; Ignore changes to bbdev FFT API which is experimental
+[suppress_type]
+ name = rte_bbdev_fft_op
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
--
2.34.1
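Note: with this suppression in place, abidiff (invoked with --suppr devtools/libabigail.abignore, as in the CI command quoted in the earlier thread) stops reporting changes to the rte_bbdev_fft_op type, which is what lets the experimental FFT layout keep evolving without failing the ABI check.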
^ permalink raw reply [relevance 8%]
* [PATCH v2 0/5] bbdev: API extension for 23.11
@ 2023-06-15 16:48 5% Nicolas Chautru
2023-06-15 16:49 8% ` [PATCH v2 5/5] devtools: ignore changes into bbdev experimental API Nicolas Chautru
0 siblings, 1 reply; 200+ results
From: Nicolas Chautru @ 2023-06-15 16:48 UTC (permalink / raw)
To: dev, maxime.coquelin
Cc: trix, hemant.agrawal, david.marchand, hernan.vargas, Nicolas Chautru
v2: moving the new MLD functions to the end of struct rte_bbdev to avoid
ABI offset changes, based on feedback from Maxime.
Adding a commit to waive the FFT ABI warning since that experimental function
could break ABI (let me know if it is preferred to merge it with the FFT
commit causing the FFT change).
Including v1 for extending the bbdev API for 23.11.
The new MLD-TS operation is expected to not be ABI compatible; the other ones
should not break ABI.
I will send a deprecation notice in parallel.
This introduces a new operation (on top of FEC and FFT) to support
equalization for MLD-TS. There are also more modular API extensions for the
existing FFT and FEC operations.
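As a rough usage sketch (the op struct name and the exact signatures below are assumptions modelled on the existing FFT enqueue/dequeue pattern; the series itself defines the real API):

/* Hypothetical sketch: struct rte_bbdev_mldts_op and these signatures are
 * assumed to mirror the FFT ops API; see the series for the actual code. */
#include <rte_bbdev.h>

static void
process_mldts_burst(uint16_t dev_id, uint16_t queue_id,
		struct rte_bbdev_mldts_op **ops, uint16_t nb_ops)
{
	uint16_t nb_enq = rte_bbdev_enqueue_mldts_ops(dev_id, queue_id, ops, nb_ops);
	uint16_t nb_deq = 0;

	while (nb_deq < nb_enq)
		nb_deq += rte_bbdev_dequeue_mldts_ops(dev_id, queue_id,
				&ops[nb_deq], nb_enq - nb_deq);
}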
Thanks
Nic
Nicolas Chautru (5):
bbdev: add operation type for MLDTS processing
bbdev: add new capabilities for FFT processing
bbdev: add new capability for FEC 5G UL processing
bbdev: improving error handling for queue configuration
devtools: ignore changes into bbdev experimental API
devtools/libabigail.abignore | 4 +-
doc/guides/prog_guide/bbdev.rst | 83 ++++++++++++++++++
lib/bbdev/rte_bbdev.c | 26 +++---
lib/bbdev/rte_bbdev.h | 76 +++++++++++++++++
lib/bbdev/rte_bbdev_op.h | 143 +++++++++++++++++++++++++++++++-
lib/bbdev/version.map | 5 ++
6 files changed, 323 insertions(+), 14 deletions(-)
--
2.34.1
^ permalink raw reply [relevance 5%]
* [PATCH v5] build: prevent accidentally building without NUMA support
2023-06-13 16:58 4% [PATCH v3] build: prevent accidentally building without NUMA support Bruce Richardson
2023-06-13 17:08 4% ` [PATCH v4] " Bruce Richardson
@ 2023-06-15 14:38 4% ` Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-06-15 14:38 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, David Marchand
When libnuma development package is missing on a system, DPDK can still
be built but will be missing much-needed support for NUMA memory
management. This may later cause issues at runtime if the resulting
binary is run on a NUMA system.
We can reduce the incidence of such runtime errors by ensuring that, for
native builds*, libnuma is present - unless the user actually specifies
via "max_numa_nodes" that they don't require NUMA support. Having this
as an error condition is also in keeping with what is documented in the
Linux GSG doc, where libnuma is listed as a requirement for building
DPDK [1].
* NOTE: cross-compilation builds have a different logic set, with a
separate "numa" value indicating if numa support is necessary.
[1] https://doc.dpdk.org/guides-23.03/linux_gsg/sys_reqs.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
V5: Rebase on main, since dependencies merged
V4: Add Depends-on tag so CI picks up dependency
V3:
- install 32-bit libnuma packages on CI systems [thanks to David
for the changes]
- split the patch out of the previous patchset, so it can be tracked
separately from the more minor fixup changes.
V2: Limit check to linux only
---
.github/workflows/build.yml | 5 ++++-
config/meson.build | 9 +++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..a479783bbc 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -91,6 +91,9 @@ jobs:
with:
path: reference
key: ${{ steps.get_ref_keys.outputs.abi }}
+ - name: Configure i386 architecture
+ if: env.BUILD_32BIT == 'true'
+ run: sudo dpkg --add-architecture i386
- name: Update APT cache
run: sudo apt update || true
- name: Install packages
@@ -104,7 +107,7 @@ jobs:
pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
- run: sudo apt install -y gcc-multilib g++-multilib
+ run: sudo apt install -y gcc-multilib g++-multilib libnuma-dev:i386
- name: Install aarch64 cross compiling packages
if: env.AARCH64 == 'true'
run: sudo apt install -y crossbuild-essential-arm64
diff --git a/config/meson.build b/config/meson.build
index 22d7d908b7..d8223718e4 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -381,6 +381,15 @@ endif
if not dpdk_conf.has('RTE_MAX_NUMA_NODES')
error('Number of NUMA nodes not specified.')
endif
+if (is_linux and
+ dpdk_conf.get('RTE_MAX_NUMA_NODES') > 1 and
+ not meson.is_cross_build() and
+ not has_libnuma)
+ error('''
+No NUMA library (development package) found, yet DPDK configured for multiple NUMA nodes.
+Please install libnuma, or set 'max_numa_nodes' option to '1' to build without NUMA support.
+''')
+endif
# set the install path for the drivers
dpdk_conf.set_quoted('RTE_EAL_PMD_PATH', eal_pmd_path)
--
2.39.2
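Note: a user who genuinely wants a non-NUMA native build on Linux can still configure one with something like "meson setup build -Dmax_numa_nodes=1" (the option named in the error message above); with libnuma-dev installed, the default behaviour is unchanged.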
^ permalink raw reply [relevance 4%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-14 18:18 0% ` Chautru, Nicolas
@ 2023-06-15 7:52 0% ` Maxime Coquelin
2023-06-15 19:30 5% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-15 7:52 UTC (permalink / raw)
To: Chautru, Nicolas, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
On 6/14/23 20:18, Chautru, Nicolas wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Hi,
>>
>> On 6/13/23 19:16, Chautru, Nicolas wrote:
>>> Hi Maxime,
>>>
>>>> -----Original Message-----
>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>
>>>>
>>>> On 6/12/23 22:53, Chautru, Nicolas wrote:
>>>>> Hi Maxime, David,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>>>
>>>>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
>>>>>>> Hi David,
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: David Marchand <david.marchand@redhat.com>> >> On
>> Mon, Jun
>>>> 5,
>>>>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
>>>>>>>> wrote:
>>>>>>>>> Wrt the MLD functions: these are new into the related serie but
>>>>>>>>> still the
>>>>>>>> break the ABI since the struct rte_bbdev includes these functions
>>>>>>>> hence causing offset changes.
>>>>>>>>>
>>>>>>>>> Should I then just rephrase as:
>>>>>>>>>
>>>>>>>>> +* bbdev: Will extend the API to support the new operation type
>>>>>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
>>>>>>>>> + this `v1
>>>>>>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
>>>>>>>>> This
>>>>>>>>> + will notably introduce + new symbols for
>>>>>>>>> ``rte_bbdev_dequeue_mldts_ops``,
>>>>>>>>> +``rte_bbdev_enqueue_mldts_ops`` into the stuct rte_bbdev.
>>>>>>>>
>>>>>>>> I don't think we need this deprecation notice.
>>>>>>>>
>>>>>>>>
>>>>>>>> Do you need to expose those new mldts ops in rte_bbdev?
>>>>>>>> Can't they go to dev_ops?
>>>>>>>> If you can't, at least moving those new ops at the end of the
>>>>>>>> structure would avoid the breakage on rte_bbdev.
>>>>>>>
>>>>>>> It would probably be best to move all these ops at the end of the
>>>>>>> structure
>>>>>> (ie. keep them together).
>>>>>>> In that case the deprecation notice would call out that the
>>>>>>> rte_bbdev
>>>>>> structure content is more generally modified. Probably best for the
>>>>>> longer run.
>>>>>>> David, Maxime, ok with that option?
>>>>>>>
>>>>>>> struct __rte_cache_aligned rte_bbdev {
>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
>>>>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
>>>>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
>>>>>>> const struct rte_bbdev_ops *dev_ops;
>>>>>>> struct rte_bbdev_data *data;
>>>>>>> enum rte_bbdev_state state;
>>>>>>> struct rte_device *device;
>>>>>>> struct rte_bbdev_cb_list list_cbs;
>>>>>>> struct rte_intr_handle *intr_handle;
>>>>>>> };
>>>>>>
>>>>>> The best thing, as suggested by David, would be to move all the ops
>>>>>> out of struct rte_bbdev, as these should not be visible to the
>> application.
>>>>>
>>>>> That would be quite disruptive across all PMDs and possible perf
>>>>> impact to
>>>> validate. I don’t think this is anywhere realistic to consider such a
>>>> change in 23.11.
>>>>> I believe moving these function at the end of the structure is a
>>>>> good
>>>> compromise to avoid future breakage of rte_bbdev structure with
>>>> almost seamless impact (purely a ABI break when moving into 23.11
>>>> which is not avoidable. Retrospectively we should have done that in 22.11
>> really.
>>>>
>>>> If we are going to break the ABI, better to do the right rework
>>>> directly. Otherwise we'll end-up breaking it again next year.
>>>
>>> With the suggested change, this will not break ABI next year. Any future
>> functions are added at the end of the structure anyway.
>>
>> I'm not so sure, it depends if adding a new field at the end cross a cacheline
>> boundary or not:
>>
>> /*
>> * Global array of all devices. This is not static because it's used by the
>> * inline enqueue and dequeue functions
>> */
>> struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
>>
>> If the older inlined functions used by the application retrieve the dev pointer
>> from the array directly (they do) and added new fields in new version cross
>> a cacheline, then there will be a misalignement between the new lib version
>> and the application using the older inlined functions.
>>
>> ABI-wise, this is not really future proof.
>>
>>>
>>>>
>>>> IMHO, moving these ops should be quite trivial and not much work.
>>>>
>>>> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
>>>> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it may
>>>> not break the ABI, but that's a bit fragile:
>>>> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
>>>> should be OK
>>>> - struct rte_bbdev is cache-aligned, so it may work if adding these two
>>>> ops do not overlap a cacheline which depends on the CPU architecture.
>>>
>>> If you prefer to add the only 2 new functions at the end of the structure
>> that is okay. I believe it would be cleaner to move all these
>> enqueue/dequeue funs down together without drawback I can think of. Let
>> me know.
>>
>> Adding the new ones at the end is not future proof, but at least it does not
>> break ABI just for cosmetic reasons (that's a big drawback IMHO).
>>
>> I just checked using pahole:
>>
>> struct rte_bbdev {
>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8
>> */
>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8
>> */
>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8
>> */
>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8
>> */
>> /* --- cacheline 1 boundary (64 bytes) --- */
>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
>> const struct rte_bbdev_ops * dev_ops; /* 80 8 */
>> struct rte_bbdev_data * data; /* 88 8 */
>> enum rte_bbdev_state state; /* 96 4 */
>>
>> /* XXX 4 bytes hole, try to pack */
>>
>> struct rte_device * device; /* 104 8 */
>> struct rte_bbdev_cb_list list_cbs; /* 112 16 */
>> /* --- cacheline 2 boundary (128 bytes) --- */
>> struct rte_intr_handle * intr_handle; /* 128 8 */
>>
>> /* size: 192, cachelines: 3, members: 16 */
>> /* sum members: 132, holes: 1, sum holes: 4 */
>> /* padding: 56 */
>> } __attribute__((__aligned__(64)));
>>
>> We're lucky on x86, we still have 56 bytes, so we can add 7 new ops at the
>> end before breaking the ABI if I'm not mistaken.
>>
>> I checked the other architecture, and it seems we don't support any with
>> 32B cacheline size so we're good for a while.
>
> OK then just adding the new functions at the end, no other cosmetic changes. Will update the patch to match this.
> In term of deprecation notice, you are okay with latest draft?
>
> +* bbdev: Will extend the API to support the new operation type
> +``RTE_BBDEV_OP_MLDTS`` as per this `v1
> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> + This will notably introduce new symbols for ``rte_bbdev_dequeue_mldts_ops``,
> +``rte_bbdev_enqueue_mldts_ops`` into the stuct rte_bbdev.
This is not needed in the deprecation notice.
If you are willing to announce it, it could be part of the Intel
roadmap.
To be on the safe side, could you try to dynamically link an application
with a DPDK version before this change, then rebuild DPDK with these two
fields added. Then test with at least 2 devices with test-bbdev and see
if it does not crash or fail?
Thanks,
Maxime
>
>>
>> Maxime
>>
>>>
>>>>
>>>> Maxime
>>>>
>>>>> What do you think Maxime, David? Based on this I can adjust the
>>>>> change for
>>>> 23.11 and update slightly the deprecation notice accordingly.
>>>>>
>>>>> Thanks
>>>>> Nic
>>>>>
>>>
>
^ permalink raw reply [relevance 0%]
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-13 20:00 4% ` Maxime Coquelin
2023-06-13 21:22 3% ` Stephen Hemminger
@ 2023-06-14 18:18 0% ` Chautru, Nicolas
2023-06-15 7:52 0% ` Maxime Coquelin
1 sibling, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-14 18:18 UTC (permalink / raw)
To: Maxime Coquelin, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Hi,
>
> On 6/13/23 19:16, Chautru, Nicolas wrote:
> > Hi Maxime,
> >
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >
> >>
> >> On 6/12/23 22:53, Chautru, Nicolas wrote:
> >>> Hi Maxime, David,
> >>>
> >>>> -----Original Message-----
> >>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>>
> >>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
> >>>>> Hi David,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: David Marchand <david.marchand@redhat.com>> >> On
> Mon, Jun
> >> 5,
> >>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
> >>>>>> wrote:
> >>>>>>> Wrt the MLD functions: these are new into the related serie but
> >>>>>>> still the
> >>>>>> break the ABI since the struct rte_bbdev includes these functions
> >>>>>> hence causing offset changes.
> >>>>>>>
> >>>>>>> Should I then just rephrase as:
> >>>>>>>
> >>>>>>> +* bbdev: Will extend the API to support the new operation type
> >>>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
> >>>>>>> + this `v1
> >>>>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> >>>>>>> This
> >>>>>>> + will notably introduce + new symbols for
> >>>>>>> ``rte_bbdev_dequeue_mldts_ops``,
> >>>>>>> +``rte_bbdev_enqueue_mldts_ops`` into the stuct rte_bbdev.
> >>>>>>
> >>>>>> I don't think we need this deprecation notice.
> >>>>>>
> >>>>>>
> >>>>>> Do you need to expose those new mldts ops in rte_bbdev?
> >>>>>> Can't they go to dev_ops?
> >>>>>> If you can't, at least moving those new ops at the end of the
> >>>>>> structure would avoid the breakage on rte_bbdev.
> >>>>>
> >>>>> It would probably be best to move all these ops at the end of the
> >>>>> structure
> >>>> (ie. keep them together).
> >>>>> In that case the deprecation notice would call out that the
> >>>>> rte_bbdev
> >>>> structure content is more generally modified. Probably best for the
> >>>> longer run.
> >>>>> David, Maxime, ok with that option?
> >>>>>
> >>>>> struct __rte_cache_aligned rte_bbdev {
> >>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> >>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> >>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> >>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> >>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> >>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> >>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> >>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> >>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> >>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> >>>>> const struct rte_bbdev_ops *dev_ops;
> >>>>> struct rte_bbdev_data *data;
> >>>>> enum rte_bbdev_state state;
> >>>>> struct rte_device *device;
> >>>>> struct rte_bbdev_cb_list list_cbs;
> >>>>> struct rte_intr_handle *intr_handle;
> >>>>> };
> >>>>
> >>>> The best thing, as suggested by David, would be to move all the ops
> >>>> out of struct rte_bbdev, as these should not be visible to the
> application.
> >>>
> >>> That would be quite disruptive across all PMDs and possible perf
> >>> impact to
> >> validate. I don’t think this is anywhere realistic to consider such a
> >> change in 23.11.
> >>> I believe moving these function at the end of the structure is a
> >>> good
> >> compromise to avoid future breakage of rte_bbdev structure with
> >> almost seamless impact (purely a ABI break when moving into 23.11
> >> which is not avoidable. Retrospectively we should have done that in 22.11
> really.
> >>
> >> If we are going to break the ABI, better to do the right rework
> >> directly. Otherwise we'll end-up breaking it again next year.
> >
> > With the suggested change, this will not break ABI next year. Any future
> functions are added at the end of the structure anyway.
>
> I'm not so sure, it depends if adding a new field at the end cross a cacheline
> boundary or not:
>
> /*
> * Global array of all devices. This is not static because it's used by the
> * inline enqueue and dequeue functions
> */
> struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
>
> If the older inlined functions used by the application retrieve the dev pointer
> from the array directly (they do) and added new fields in new version cross
> a cacheline, then there will be a misalignement between the new lib version
> and the application using the older inlined functions.
>
> ABI-wise, this is not really future proof.
>
> >
> >>
> >> IMHO, moving these ops should be quite trivial and not much work.
> >>
> >> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
> >> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it may
> >> not break the ABI, but that's a bit fragile:
> >> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
> >> should be OK
> >> - struct rte_bbdev is cache-aligned, so it may work if adding these two
> >> ops do not overlap a cacheline which depends on the CPU architecture.
> >
> > If you prefer to add the only 2 new functions at the end of the structure
> that is okay. I believe it would be cleaner to move all these
> enqueue/dequeue funs down together without drawback I can think of. Let
> me know.
>
> Adding the new ones at the end is not future proof, but at least it does not
> break ABI just for cosmetic reasons (that's a big drawback IMHO).
>
> I just checked using pahole:
>
> struct rte_bbdev {
> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8
> */
> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8
> */
> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8
> */
> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8
> */
> /* --- cacheline 1 boundary (64 bytes) --- */
> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
> const struct rte_bbdev_ops * dev_ops; /* 80 8 */
> struct rte_bbdev_data * data; /* 88 8 */
> enum rte_bbdev_state state; /* 96 4 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct rte_device * device; /* 104 8 */
> struct rte_bbdev_cb_list list_cbs; /* 112 16 */
> /* --- cacheline 2 boundary (128 bytes) --- */
> struct rte_intr_handle * intr_handle; /* 128 8 */
>
> /* size: 192, cachelines: 3, members: 16 */
> /* sum members: 132, holes: 1, sum holes: 4 */
> /* padding: 56 */
> } __attribute__((__aligned__(64)));
>
> We're lucky on x86, we still have 56 bytes, so we can add 7 new ops at the
> end before breaking the ABI if I'm not mistaken.
>
> I checked the other architecture, and it seems we don't support any with
> 32B cacheline size so we're good for a while.
OK, then just adding the new functions at the end, no other cosmetic changes. I will update the patch to match this.
In terms of the deprecation notice, are you okay with the latest draft?
+* bbdev: Will extend the API to support the new operation type
+``RTE_BBDEV_OP_MLDTS`` as per this `v1
+<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
+ This will notably introduce new symbols for ``rte_bbdev_dequeue_mldts_ops``,
+``rte_bbdev_enqueue_mldts_ops`` into the struct rte_bbdev.
>
> Maxime
>
> >
> >>
> >> Maxime
> >>
> >>> What do you think Maxime, David? Based on this I can adjust the
> >>> change for
> >> 23.11 and update slightly the deprecation notice accordingly.
> >>>
> >>> Thanks
> >>> Nic
> >>>
> >
^ permalink raw reply [relevance 0%]
* [PATCH 1/5] lib: remove blank line ending comment blocks
@ 2023-06-14 14:26 1% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-06-14 14:26 UTC (permalink / raw)
To: dev
Cc: Nicolas Chautru, Cristian Dumitrescu, Olivier Matz, Fan Zhang,
Ashish Gupta, Akhil Goyal, Ruifeng Wang, Anatoly Burakov,
Harman Kalra, Joyce Kong, Jerin Jacob, Sunil Kumar Kori,
Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
Pallavi Kadam, Byron Marohn, Yipeng Wang, Ferruh Yigit,
Andrew Rybchenko, Ori Kam, Erik Gabriel Carrillo, Kiran Kumar K,
Nithin Dabilpuram, Sameh Gobriel, Bruce Richardson,
Vladimir Medvedkin, Konstantin Ananyev, Srikanth Yalavarthi,
Pavan Nikhilesh, Reshma Pattan, Stephen Hemminger, Anoob Joseph,
Volodymyr Fialko, David Hunt, Sachin Saxena, Hemant Agrawal,
Honnappa Nagarahalli, Ciara Power
At the end of a comment, no need for an extra line.
This pattern was fixed with the following command:
git ls lib | xargs sed -i '/^ *\* *$/{N;/ *\*\/ *$/D;}'
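For reference, the expression works in two steps: /^ *\* *$/ selects a comment line containing nothing but " * ", N appends the following line to the pattern space, and / *\*\/ *$/D deletes only that first (blank) comment line when the appended line ends with the closing " */".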
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
lib/bbdev/rte_bbdev.h | 2 --
lib/bbdev/rte_bbdev_op.h | 1 -
lib/bbdev/rte_bbdev_pmd.h | 1 -
lib/cfgfile/rte_cfgfile.h | 1 -
lib/cmdline/cmdline_parse.h | 1 -
lib/compressdev/rte_comp.h | 1 -
lib/compressdev/rte_compressdev.h | 1 -
lib/cryptodev/cryptodev_pmd.h | 1 -
lib/cryptodev/rte_crypto.h | 1 -
lib/cryptodev/rte_crypto_asym.h | 12 ------------
lib/cryptodev/rte_cryptodev.h | 3 ---
lib/cryptodev/rte_cryptodev_core.h | 1 -
lib/eal/arm/include/rte_cycles_64.h | 1 -
lib/eal/common/eal_common_proc.c | 1 -
lib/eal/include/generic/rte_pause.h | 1 -
lib/eal/include/generic/rte_spinlock.h | 1 -
lib/eal/include/rte_bitmap.h | 1 -
lib/eal/include/rte_branch_prediction.h | 2 --
lib/eal/include/rte_eal.h | 2 --
lib/eal/include/rte_interrupts.h | 1 -
lib/eal/include/rte_keepalive.h | 1 -
lib/eal/include/rte_lcore.h | 1 -
lib/eal/include/rte_tailq.h | 1 -
lib/eal/include/rte_thread.h | 1 -
lib/eal/include/rte_ticketlock.h | 1 -
lib/eal/include/rte_trace_point.h | 1 -
lib/eal/windows/include/sys/queue.h | 1 -
lib/efd/rte_efd.c | 1 -
lib/ethdev/ethdev_driver.h | 1 -
lib/ethdev/rte_ethdev.c | 1 -
lib/ethdev/rte_ethdev.h | 1 -
lib/ethdev/rte_ethdev_core.h | 1 -
lib/ethdev/rte_flow.h | 11 -----------
lib/ethdev/rte_mtr.h | 2 --
lib/ethdev/sff_8636.h | 2 --
lib/ethdev/sff_common.h | 1 -
lib/eventdev/event_timer_adapter_pmd.h | 1 -
lib/eventdev/eventdev_pmd.h | 17 -----------------
lib/eventdev/rte_event_timer_adapter.h | 2 --
lib/eventdev/rte_eventdev.h | 11 -----------
lib/graph/rte_graph.h | 2 --
lib/hash/rte_hash.h | 1 -
lib/hash/rte_hash_crc.h | 1 -
lib/ip_frag/rte_ipv6_fragmentation.c | 1 -
lib/ip_frag/rte_ipv6_reassembly.c | 1 -
lib/ipsec/ipsec_telemetry.c | 1 -
lib/member/rte_member.h | 1 -
lib/mempool/rte_mempool.c | 1 -
lib/mempool/rte_mempool.h | 1 -
lib/meter/rte_meter.h | 3 ---
lib/mldev/mldev_utils.h | 1 -
lib/node/rte_node_eth_api.h | 1 -
lib/node/rte_node_ip4_api.h | 1 -
lib/node/rte_node_ip6_api.h | 1 -
lib/pcapng/rte_pcapng.h | 1 -
lib/pdcp/rte_pdcp_group.h | 1 -
lib/pipeline/rte_pipeline.c | 4 ----
lib/pipeline/rte_pipeline.h | 7 -------
lib/pipeline/rte_swx_ipsec.h | 2 --
lib/pipeline/rte_swx_pipeline_spec.c | 11 -----------
lib/port/rte_port.h | 3 ---
lib/port/rte_port_ethdev.h | 1 -
lib/port/rte_port_eventdev.h | 1 -
lib/port/rte_port_fd.h | 1 -
lib/port/rte_port_frag.h | 1 -
lib/port/rte_port_kni.h | 1 -
lib/port/rte_port_ras.h | 1 -
lib/port/rte_port_ring.h | 1 -
lib/port/rte_port_sched.h | 1 -
lib/port/rte_port_source_sink.h | 1 -
lib/port/rte_port_sym_crypto.h | 1 -
lib/port/rte_swx_port_fd.h | 1 -
lib/power/guest_channel.h | 1 -
lib/rawdev/rte_rawdev.h | 2 --
lib/rawdev/rte_rawdev_pmd.h | 3 ---
lib/rcu/rte_rcu_qsbr.h | 1 -
lib/regexdev/rte_regexdev_core.h | 1 -
lib/reorder/rte_reorder.h | 1 -
lib/ring/rte_ring.h | 1 -
lib/sched/rte_approx.h | 1 -
lib/sched/rte_pie.h | 2 --
lib/sched/rte_red.h | 4 ----
lib/sched/rte_sched.c | 1 -
lib/sched/rte_sched.h | 3 ---
lib/sched/rte_sched_common.h | 2 --
lib/security/rte_security.h | 1 -
lib/security/rte_security_driver.h | 1 -
lib/table/rte_swx_table_em.c | 1 -
lib/table/rte_table.h | 1 -
lib/table/rte_table_acl.h | 1 -
lib/table/rte_table_array.h | 1 -
lib/table/rte_table_hash.h | 1 -
lib/table/rte_table_hash_cuckoo.h | 1 -
lib/table/rte_table_hash_ext.c | 2 --
lib/table/rte_table_hash_lru.c | 2 --
lib/table/rte_table_lpm.h | 1 -
lib/table/rte_table_lpm_ipv6.h | 1 -
lib/table/rte_table_stub.h | 1 -
lib/telemetry/telemetry_internal.h | 1 -
lib/telemetry/telemetry_json.h | 1 -
100 files changed, 192 deletions(-)
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 52f6ed9b01..f124e1f5db 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -963,7 +963,6 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
*
* @returns
* Device status as string or NULL if invalid.
- *
*/
__rte_experimental
const char*
@@ -977,7 +976,6 @@ rte_bbdev_device_status_str(enum rte_bbdev_device_status status);
*
* @returns
* Queue status as string or NULL if op_type is invalid.
- *
*/
__rte_experimental
const char*
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index 96a390cd9b..cb17a17cbc 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -936,7 +936,6 @@ struct rte_bbdev_op_pool_private {
*
* @returns
* Operation type as string or NULL if op_type is invalid
- *
*/
const char*
rte_bbdev_op_type_str(enum rte_bbdev_op_type op_type);
diff --git a/lib/bbdev/rte_bbdev_pmd.h b/lib/bbdev/rte_bbdev_pmd.h
index 3da7a2bdf5..442b23943d 100644
--- a/lib/bbdev/rte_bbdev_pmd.h
+++ b/lib/bbdev/rte_bbdev_pmd.h
@@ -64,7 +64,6 @@ rte_bbdev_release(struct rte_bbdev *bbdev);
* @return
* - The device structure pointer, or
* - NULL otherwise
- *
*/
struct rte_bbdev *
rte_bbdev_get_named_dev(const char *name);
diff --git a/lib/cfgfile/rte_cfgfile.h b/lib/cfgfile/rte_cfgfile.h
index b2030fa66c..ea7244bcac 100644
--- a/lib/cfgfile/rte_cfgfile.h
+++ b/lib/cfgfile/rte_cfgfile.h
@@ -17,7 +17,6 @@ extern "C" {
*
* This library allows reading application defined parameters from standard
* format configuration file.
-*
***/
#ifndef CFG_NAME_LEN
diff --git a/lib/cmdline/cmdline_parse.h b/lib/cmdline/cmdline_parse.h
index afc2fcd3dc..a852ac411c 100644
--- a/lib/cmdline/cmdline_parse.h
+++ b/lib/cmdline/cmdline_parse.h
@@ -142,7 +142,6 @@ typedef struct cmdline_inst cmdline_parse_inst_t;
/**
* A context is identified by its name, and contains a list of
* instruction
- *
*/
typedef cmdline_parse_inst_t *cmdline_parse_ctx_t;
diff --git a/lib/compressdev/rte_comp.h b/lib/compressdev/rte_comp.h
index 026a2814b5..bf896d0722 100644
--- a/lib/compressdev/rte_comp.h
+++ b/lib/compressdev/rte_comp.h
@@ -9,7 +9,6 @@
* @file rte_comp.h
*
* RTE definitions for Data Compression Service
- *
*/
#ifdef __cplusplus
diff --git a/lib/compressdev/rte_compressdev.h b/lib/compressdev/rte_compressdev.h
index 7eb5c58798..13a4186318 100644
--- a/lib/compressdev/rte_compressdev.h
+++ b/lib/compressdev/rte_compressdev.h
@@ -495,7 +495,6 @@ rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
* - Returns -ENOTSUP if comp device does not support STATEFUL operations.
* - Returns -ENOTSUP if comp device does not support the comp transform.
* - Returns -ENOMEM if the private stream could not be allocated.
- *
*/
__rte_experimental
int
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 0dfad9e24f..585c29df8a 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -390,7 +390,6 @@ typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
*
* @return
* - Returns number of successfully processed packets.
- *
*/
typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t)
(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess,
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index f9644d29ec..7c9c413349 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -9,7 +9,6 @@
* @file rte_crypto.h
*
* RTE Cryptography Common Definitions
- *
*/
#ifdef __cplusplus
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 083320e555..fc3f331393 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -76,7 +76,6 @@ enum rte_crypto_curve_id {
* Asymmetric crypto transformation types.
* Each xform type maps to one asymmetric algorithm
* performing specific operation
- *
*/
enum rte_crypto_asym_xform_type {
RTE_CRYPTO_ASYM_XFORM_UNSPECIFIED = 0,
@@ -181,7 +180,6 @@ enum rte_crypto_rsa_padding_type {
*
* enumerates private key format required to perform RSA crypto
* transform.
- *
*/
enum rte_crypto_rsa_priv_key_type {
RTE_RSA_KEY_TYPE_EXP,
@@ -303,7 +301,6 @@ struct rte_crypto_rsa_padding {
* Asymmetric RSA transform data
*
* Structure describing RSA xform params
- *
*/
struct rte_crypto_rsa_xform {
rte_crypto_uint n;
@@ -326,7 +323,6 @@ struct rte_crypto_rsa_xform {
* Asymmetric Modular exponentiation transform data
*
* Structure describing modular exponentiation xform param
- *
*/
struct rte_crypto_modex_xform {
rte_crypto_uint modulus;
@@ -339,7 +335,6 @@ struct rte_crypto_modex_xform {
* Asymmetric modular multiplicative inverse transform operation
*
* Structure describing modular multiplicative inverse transform
- *
*/
struct rte_crypto_modinv_xform {
rte_crypto_uint modulus;
@@ -350,7 +345,6 @@ struct rte_crypto_modinv_xform {
* Asymmetric DH transform data
*
* Structure describing deffie-hellman xform params
- *
*/
struct rte_crypto_dh_xform {
rte_crypto_uint p;
@@ -363,7 +357,6 @@ struct rte_crypto_dh_xform {
* Asymmetric Digital Signature transform operation
*
* Structure describing DSA xform params
- *
*/
struct rte_crypto_dsa_xform {
rte_crypto_uint p;
@@ -380,7 +373,6 @@ struct rte_crypto_dsa_xform {
* Asymmetric elliptic curve transform data
*
* Structure describing all EC based xform params
- *
*/
struct rte_crypto_ec_xform {
enum rte_crypto_curve_id curve_id;
@@ -400,7 +392,6 @@ struct rte_crypto_sm2_xform {
/**
* Operations params for modular operations:
* exponentiation and multiplicative inverse
- *
*/
struct rte_crypto_mod_op_param {
rte_crypto_uint base;
@@ -411,7 +402,6 @@ struct rte_crypto_mod_op_param {
/**
* RSA operation params
- *
*/
struct rte_crypto_rsa_op_param {
enum rte_crypto_asym_op_type op_type;
@@ -545,7 +535,6 @@ struct rte_crypto_ecdh_op_param {
/**
* DSA Operations params
- *
*/
struct rte_crypto_dsa_op_param {
enum rte_crypto_asym_op_type op_type;
@@ -734,7 +723,6 @@ struct rte_crypto_sm2_op_param {
* Asymmetric Cryptographic Operation.
*
* Structure describing asymmetric crypto operation params.
- *
*/
struct rte_crypto_asym_op {
RTE_STD_C11
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 228273df90..1674945229 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -162,7 +162,6 @@ struct rte_cryptodev_symmetric_capability {
/**
* Asymmetric Xform Crypto Capability
- *
*/
struct rte_cryptodev_asymmetric_xform_capability {
enum rte_crypto_asym_xform_type xform_type;
@@ -188,7 +187,6 @@ struct rte_cryptodev_asymmetric_xform_capability {
/**
* Asymmetric Crypto Capability
- *
*/
struct rte_cryptodev_asymmetric_capability {
struct rte_cryptodev_asymmetric_xform_capability xform_capa;
@@ -222,7 +220,6 @@ struct rte_cryptodev_sym_capability_idx {
/**
* Structure used to describe asymmetric crypto xforms
* Each xform maps to one asym algorithm.
- *
*/
struct rte_cryptodev_asym_capability_idx {
enum rte_crypto_asym_xform_type type;
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 16832f645d..5de89d099f 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -14,7 +14,6 @@
* public API because they are used by inline functions in the published API.
*
* Applications should not use these directly.
- *
*/
typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
diff --git a/lib/eal/arm/include/rte_cycles_64.h b/lib/eal/arm/include/rte_cycles_64.h
index 029fdc4355..8b05302f47 100644
--- a/lib/eal/arm/include/rte_cycles_64.h
+++ b/lib/eal/arm/include/rte_cycles_64.h
@@ -72,7 +72,6 @@ rte_rdtsc(void)
* val |= (BIT(0) | BIT(2));
* isb();
* asm volatile("msr pmcr_el0, %0" : : "r" (val));
- *
*/
/** Read PMU cycle counter */
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 1fc1d6c53b..7a038e0a3c 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -669,7 +669,6 @@ rte_mp_channel_cleanup(void)
* Return -1, as fail to send message and it's caused by the local side.
* Return 0, as fail to send message and it's caused by the remote side.
* Return 1, as succeed to send message.
- *
*/
static int
send_msg(const char *dst_path, struct rte_mp_msg *msg, int type)
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 2173a544d5..ec1f41819c 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -10,7 +10,6 @@
* @file
*
* CPU pause operation.
- *
*/
#include <stdint.h>
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index 8ca47bbfaa..c50ebaaa80 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -15,7 +15,6 @@
* a loop repeatedly checking until the lock becomes available.
*
* All locks must be initialised before use, and only initialised once.
- *
*/
#include <rte_lcore.h>
diff --git a/lib/eal/include/rte_bitmap.h b/lib/eal/include/rte_bitmap.h
index 27ee3d18a4..25897ed512 100644
--- a/lib/eal/include/rte_bitmap.h
+++ b/lib/eal/include/rte_bitmap.h
@@ -32,7 +32,6 @@ extern "C" {
* serialization of the bit set/clear and bitmap scan operations needs to be
* enforced by the caller, while the bit get operation does not require locking
* the bitmap.
- *
***/
#include <string.h>
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9de60..414cd921ba 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -22,7 +22,6 @@ extern "C" {
*
* if (likely(x > 1))
* do_stuff();
- *
*/
#ifndef likely
#define likely(x) __builtin_expect(!!(x), 1)
@@ -36,7 +35,6 @@ extern "C" {
*
* if (unlikely(x < 1))
* do_stuff();
- *
*/
#ifndef unlikely
#define unlikely(x) __builtin_expect(!!(x), 0)
diff --git a/lib/eal/include/rte_eal.h b/lib/eal/include/rte_eal.h
index d52f79f63d..b612577b38 100644
--- a/lib/eal/include/rte_eal.h
+++ b/lib/eal/include/rte_eal.h
@@ -240,7 +240,6 @@ rte_mp_action_register(const char *name, rte_mp_t action);
*
* @param name
* The name argument plays as the nonredundant key to find the action.
- *
*/
void
rte_mp_action_unregister(const char *name);
@@ -463,7 +462,6 @@ uint64_t rte_eal_get_baseaddr(void);
* IOVA mapping mode is iommu programming mode of a device.
* That device (for example: IOMMU backed DMA device) based
* on rte_iova_mode will generate physical or virtual address.
- *
*/
enum rte_iova_mode {
RTE_IOVA_DC = 0, /* Don't care mode */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index 487e3c8875..bcafdd58a9 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -234,7 +234,6 @@ rte_intr_instance_alloc(uint32_t flags);
* @param intr_handle
* Interrupt handle allocated with rte_intr_instance_alloc().
* If intr_handle is NULL, no operation is performed.
- *
*/
__rte_experimental
void
diff --git a/lib/eal/include/rte_keepalive.h b/lib/eal/include/rte_keepalive.h
index 538fb09095..28c3064493 100644
--- a/lib/eal/include/rte_keepalive.h
+++ b/lib/eal/include/rte_keepalive.h
@@ -5,7 +5,6 @@
/**
* @file rte_keepalive.h
* DPDK RTE LCore Keepalive Monitor.
- *
**/
#ifndef _KEEPALIVE_H_
diff --git a/lib/eal/include/rte_lcore.h b/lib/eal/include/rte_lcore.h
index 6a355e9986..6ce810b876 100644
--- a/lib/eal/include/rte_lcore.h
+++ b/lib/eal/include/rte_lcore.h
@@ -9,7 +9,6 @@
* @file
*
* API for lcore and socket manipulation
- *
*/
#include <stdio.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index 0f67f9e4db..931d549e59 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -8,7 +8,6 @@
/**
* @file
* Here defines rte_tailq APIs for only internal use
- *
*/
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_thread.h b/lib/eal/include/rte_thread.h
index ddd411e067..369e2375f6 100644
--- a/lib/eal/include/rte_thread.h
+++ b/lib/eal/include/rte_thread.h
@@ -356,7 +356,6 @@ int rte_thread_set_affinity(rte_cpuset_t *cpusetp);
* @param cpusetp
* Pointer to CPU affinity of current thread.
* It presumes input is not NULL, otherwise it causes panic.
- *
*/
void rte_thread_get_affinity(rte_cpuset_t *cpusetp);
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 693c67b517..5db0d8ae92 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -15,7 +15,6 @@
* serviced.
*
* All locks must be initialised before use, and only initialised once.
- *
*/
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index 4d6b5700dd..c6b6fccda5 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -234,7 +234,6 @@ __rte_trace_point_fp_is_enabled(void)
* @internal
*
* Allocate trace memory buffer per thread.
- *
*/
__rte_experimental
void __rte_trace_mem_per_thread_alloc(void);
diff --git a/lib/eal/windows/include/sys/queue.h b/lib/eal/windows/include/sys/queue.h
index 9756bee6fb..917526531b 100644
--- a/lib/eal/windows/include/sys/queue.h
+++ b/lib/eal/windows/include/sys/queue.h
@@ -83,7 +83,6 @@
* _REMOVE_HEAD + - + -
* _REMOVE s + s +
* _SWAP + + + +
- *
*/
#ifdef QUEUE_MACRO_DEBUG
#warn Use QUEUE_MACRO_DEBUG_TRACE and/or QUEUE_MACRO_DEBUG_TRASH
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 686a137757..dad962ce29 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -344,7 +344,6 @@ efd_get_choice(const struct rte_efd_table * const table,
* Computed chunk ID
* @param bin_id
* Computed bin ID
- *
*/
static inline void
efd_compute_ids(const struct rte_efd_table * const table,
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 367c0c4878..980f837ab6 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -16,7 +16,6 @@ extern "C" {
*
* These APIs for the use from Ethernet drivers, user applications shouldn't
* use them.
- *
*/
#include <dev_driver.h>
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 7317015895..4428428adc 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1109,7 +1109,6 @@ eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
* @return
* - (0) if validation successful.
* - (-EINVAL) if requested offload has been silently disabled.
- *
*/
static int
eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 34ca25bbc0..e550f09889 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -6372,7 +6372,6 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
* - EINVAL: offload flags are not correctly set
* - ENOTSUP: the offload feature is not supported by the hardware
* - ENODEV: if *port_id* is invalid (with debug enabled only)
- *
*/
#ifndef RTE_ETHDEV_TX_PREPARE_NOOP
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index dcf8adab92..46e9721e07 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -16,7 +16,6 @@
* public API because they are used by inline functions in the published API.
*
* Applications should not use these directly.
- *
*/
struct rte_eth_dev_callback;
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index dec454275f..f1d6b4de30 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1726,7 +1726,6 @@ static const struct rte_flow_item_mark rte_flow_item_mark_mask = {
* RTE_FLOW_ITEM_TYPE_NSH
*
* Match network service header (NSH), RFC 8300
- *
*/
struct rte_flow_item_nsh {
uint32_t version:2;
@@ -1758,7 +1757,6 @@ static const struct rte_flow_item_nsh rte_flow_item_nsh_mask = {
* RTE_FLOW_ITEM_TYPE_IGMP
*
* Match Internet Group Management Protocol (IGMP), RFC 2236
- *
*/
struct rte_flow_item_igmp {
uint32_t type:8;
@@ -1781,7 +1779,6 @@ static const struct rte_flow_item_igmp rte_flow_item_igmp_mask = {
* RTE_FLOW_ITEM_TYPE_AH
*
* Match IP Authentication Header (AH), RFC 4302
- *
*/
struct rte_flow_item_ah {
uint32_t next_hdr:8;
@@ -3028,7 +3025,6 @@ struct rte_flow_action_quota {
* Query indirect QUOTA action.
*
* @see RTE_FLOW_ACTION_TYPE_QUOTA
- *
*/
struct rte_flow_query_quota {
int64_t quota; /**< Quota value. */
@@ -3422,7 +3418,6 @@ struct rte_flow_action_of_push_mpls {
* - ETH / IPV4 / UDP / VXLAN / END
* - ETH / IPV6 / UDP / VXLAN / END
* - ETH / VLAN / IPV4 / UDP / VXLAN / END
- *
*/
struct rte_flow_action_vxlan_encap {
/**
@@ -3456,7 +3451,6 @@ struct rte_flow_action_vxlan_encap {
*
* - ETH / IPV4 / NVGRE / END
* - ETH / VLAN / IPV6 / NVGRE / END
- *
*/
struct rte_flow_action_nvgre_encap {
/**
@@ -4041,7 +4035,6 @@ struct rte_flow_action_meter_mark {
* RTE_FLOW_ACTION_TYPE_METER_MARK
*
* Wrapper structure for the context update interface.
- *
*/
struct rte_flow_update_meter_mark {
/** New meter_mark parameters to be updated. */
@@ -5285,7 +5278,6 @@ rte_flow_flex_item_release(uint16_t port_id,
*
* Information about flow engine resources.
* The zero value means a resource is not supported.
- *
*/
struct rte_flow_port_info {
/**
@@ -5329,7 +5321,6 @@ struct rte_flow_port_info {
*
* Information about flow engine asynchronous queues.
* The value only valid if @p port_attr.max_nb_queues is not zero.
- *
*/
struct rte_flow_queue_info {
/**
@@ -5379,7 +5370,6 @@ rte_flow_info_get(uint16_t port_id,
*
* Flow engine resources settings.
* The zero value means on demand resource allocations only.
- *
*/
struct rte_flow_port_attr {
/**
@@ -5423,7 +5413,6 @@ struct rte_flow_port_attr {
*
* Flow engine asynchronous queues settings.
* The value means default value picked by PMD.
- *
*/
struct rte_flow_queue_attr {
/**
diff --git a/lib/ethdev/rte_mtr.h b/lib/ethdev/rte_mtr.h
index 46c398dd8b..7e6a66b938 100644
--- a/lib/ethdev/rte_mtr.h
+++ b/lib/ethdev/rte_mtr.h
@@ -223,7 +223,6 @@ struct rte_mtr_meter_policy_params {
* applicable for the current input packet wins;
* if none is both enabled and applicable, the default input color is used.
* @see function rte_mtr_color_in_protocol_set()
- *
*/
enum rte_mtr_color_in_protocol {
/**
@@ -1043,7 +1042,6 @@ rte_mtr_color_in_protocol_set(uint16_t port_id, uint32_t mtr_id,
* Error details. Filled in only on error, when not NULL.
* @return
* 0 on success, non-zero error code otherwise.
- *
*/
__rte_experimental
int
diff --git a/lib/ethdev/sff_8636.h b/lib/ethdev/sff_8636.h
index 70f70353d0..cf11a2c247 100644
--- a/lib/ethdev/sff_8636.h
+++ b/lib/ethdev/sff_8636.h
@@ -17,7 +17,6 @@
*
* Lower Memory Page 00h
* Measurement, Diagnostic and Control Functions
- *
*/
/* Identifier - 0 */
/* Values are defined under SFF_8024_ID_OFFSET */
@@ -221,7 +220,6 @@
*
* Upper Memory Page 00h
* Serial ID - Base ID, Extended ID and Vendor Specific ID fields
- *
*/
/* Identifier - 128 */
/* Identifier values same as Lower Memory Page 00h */
diff --git a/lib/ethdev/sff_common.h b/lib/ethdev/sff_common.h
index e44f3c7bf3..2e42cbe8be 100644
--- a/lib/ethdev/sff_common.h
+++ b/lib/ethdev/sff_common.h
@@ -3,7 +3,6 @@
*
* Implements SFF-8024 Rev 4.0 of pluggable I/O configuration and some
* common utilities for SFF-8436/8636 and SFF-8472/8079
- *
*/
#ifndef _SFF_COMMON_H_
diff --git a/lib/eventdev/event_timer_adapter_pmd.h b/lib/eventdev/event_timer_adapter_pmd.h
index c7d4a4f0f6..8f3e6c4851 100644
--- a/lib/eventdev/event_timer_adapter_pmd.h
+++ b/lib/eventdev/event_timer_adapter_pmd.h
@@ -14,7 +14,6 @@
* This file provides implementation helpers for internal use by PMDs. They
* are not intended to be exposed to applications and are not subject to ABI
* versioning.
- *
*/
#ifdef __cplusplus
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index aebab26852..c68c3a2262 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -310,7 +310,6 @@ typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
* Event queue index
* @param[out] queue_conf
* Event queue configuration structure
- *
*/
typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
@@ -339,7 +338,6 @@ typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
* Event device pointer
* @param queue_id
* Event queue index
- *
*/
typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
uint8_t queue_id);
@@ -373,7 +371,6 @@ typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
* Event port index
* @param[out] port_conf
* Event port configuration structure
- *
*/
typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
uint8_t port_id, struct rte_event_port_conf *port_conf);
@@ -400,7 +397,6 @@ typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
*
* @param port
* Event port pointer
- *
*/
typedef void (*eventdev_port_release_t)(void *port);
@@ -415,7 +411,6 @@ typedef void (*eventdev_port_release_t)(void *port);
* User-provided event flush function.
* @param args
* Arguments to be passed to the user-provided event flush function.
- *
*/
typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,
rte_eventdev_port_flush_t flush_cb,
@@ -439,7 +434,6 @@ typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,
*
* @return
* Returns 0 on success.
- *
*/
typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
const uint8_t queues[], const uint8_t priorities[],
@@ -459,7 +453,6 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
*
* @return
* Returns 0 on success.
- *
*/
typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port,
uint8_t queues[], uint16_t nb_unlinks);
@@ -493,7 +486,6 @@ typedef int (*eventdev_port_unlinks_in_progress_t)(struct rte_eventdev *dev,
*
* @return
* Returns 0 on success.
- *
*/
typedef int (*eventdev_dequeue_timeout_ticks_t)(struct rte_eventdev *dev,
uint64_t ns, uint64_t *timeout_ticks);
@@ -505,7 +497,6 @@ typedef int (*eventdev_dequeue_timeout_ticks_t)(struct rte_eventdev *dev,
* Event device pointer
* @param f
* A pointer to a file for output
- *
*/
typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
@@ -602,7 +593,6 @@ typedef uint64_t (*eventdev_xstats_get_by_name)(const struct rte_eventdev *dev,
* - 0: Success, driver provides Rx event adapter capabilities for the
* ethernet device.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_eth_rx_adapter_caps_get_t)
(const struct rte_eventdev *dev,
@@ -634,7 +624,6 @@ struct rte_event_eth_rx_adapter_queue_conf;
* - 0: Success, driver provides Rx event adapter capabilities for the
* ethernet device.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_timer_adapter_caps_get_t)(
const struct rte_eventdev *dev, uint64_t flags, uint32_t *caps,
@@ -660,7 +649,6 @@ typedef int (*eventdev_timer_adapter_caps_get_t)(
* @return
* - 0: Success, ethernet receive queue added successfully.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_eth_rx_adapter_queue_add_t)(
const struct rte_eventdev *dev,
@@ -685,7 +673,6 @@ typedef int (*eventdev_eth_rx_adapter_queue_add_t)(
* @return
* - 0: Success, ethernet receive queue deleted successfully.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_eth_rx_adapter_queue_del_t)
(const struct rte_eventdev *dev,
@@ -932,7 +919,6 @@ struct rte_event_crypto_adapter_queue_conf;
* - 0: Success, driver provides event adapter capabilities for the
* cryptodev.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_crypto_adapter_caps_get_t)
(const struct rte_eventdev *dev,
@@ -963,7 +949,6 @@ typedef int (*eventdev_crypto_adapter_caps_get_t)
* @return
* - 0: Success, cryptodev queue pair added successfully.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_crypto_adapter_queue_pair_add_t)(
const struct rte_eventdev *dev,
@@ -991,7 +976,6 @@ typedef int (*eventdev_crypto_adapter_queue_pair_add_t)(
* @return
* - 0: Success, cryptodev queue pair deleted successfully.
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_crypto_adapter_queue_pair_del_t)
(const struct rte_eventdev *dev,
@@ -1114,7 +1098,6 @@ typedef int (*eventdev_crypto_adapter_vector_limits_get_t)(
* @return
* - 0: Success, driver provides eth Tx adapter capabilities
* - <0: Error code returned by the driver function.
- *
*/
typedef int (*eventdev_eth_tx_adapter_caps_get_t)
(const struct rte_eventdev *dev,
diff --git a/lib/eventdev/rte_event_timer_adapter.h b/lib/eventdev/rte_event_timer_adapter.h
index 9a771ac679..9ac35b7d5a 100644
--- a/lib/eventdev/rte_event_timer_adapter.h
+++ b/lib/eventdev/rte_event_timer_adapter.h
@@ -310,7 +310,6 @@ struct rte_event_timer_adapter_info {
*
* @see RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES,
* struct rte_event_timer_adapter_info
- *
*/
int
rte_event_timer_adapter_get_info(
@@ -596,7 +595,6 @@ struct rte_event_timer_adapter {
* - EALREADY A timer was encountered that was already armed
*
* @see RTE_EVENT_TIMER_ADAPTER_F_PERIODIC
- *
*/
static inline uint16_t
rte_event_timer_arm_burst(const struct rte_event_timer_adapter *adapter,
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a90e23ac8b..b6a4fa1495 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -203,7 +203,6 @@
* rte_event_enqueue_burst(...);
* }
* \endcode
- *
*/
#ifdef __cplusplus
@@ -462,7 +461,6 @@ struct rte_event_dev_info {
* @return
* - 0: Success, driver updates the contextual information of the event device
* - <0: Error code returned by the driver info get function.
- *
*/
int
rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
@@ -679,7 +677,6 @@ struct rte_event_queue_conf {
* - <0: Error code returned by the driver info get function.
*
* @see rte_event_queue_setup()
- *
*/
int
rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
@@ -881,7 +878,6 @@ struct rte_event_port_conf {
* - <0: Error code returned by the driver info get function.
*
* @see rte_event_port_setup()
- *
*/
int
rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
@@ -1271,7 +1267,6 @@ struct rte_event_vector {
*
* This operation must only be enqueued to the same port that the
* event to be released was dequeued from.
- *
*/
/**
@@ -1388,7 +1383,6 @@ struct rte_event {
* - 0: Success, driver provides Rx event adapter capabilities for the
* ethernet device.
* - <0: Error code returned by the driver function.
- *
*/
int
rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
@@ -1464,7 +1458,6 @@ rte_event_timer_adapter_caps_get(uint8_t dev_id, uint32_t *caps);
* - 0: Success, driver provides event adapter capabilities for the
* cryptodev device.
* - <0: Error code returned by the driver function.
- *
*/
int
rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
@@ -1494,7 +1487,6 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id,
* @return
* - 0: Success, driver provides eth Tx adapter capabilities.
* - <0: Error code returned by the driver function.
- *
*/
int
rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
@@ -1523,7 +1515,6 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
*
* @see rte_event_dequeue_burst(), RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
* @see rte_event_dev_configure()
- *
*/
int
rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
@@ -1587,7 +1578,6 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
* (EDQUOT) Quota exceeded(Application tried to link the queue configured with
* RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports)
* (EINVAL) Invalid parameter
- *
*/
int
rte_event_port_link(uint8_t dev_id, uint8_t port_id,
@@ -1686,7 +1676,6 @@ rte_event_port_unlinks_in_progress(uint8_t dev_id, uint8_t port_id);
* The number of links established on the event port designated by its
* *port_id*.
* - <0 on failure.
- *
*/
int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h
index c9a77297fc..c30de0c5bb 100644
--- a/lib/graph/rte_graph.h
+++ b/lib/graph/rte_graph.h
@@ -20,7 +20,6 @@
* dump and destroy on graph and node operations such as clone,
* edge update, and edge shrink, etc. The API also allows to create the stats
* cluster to monitor per graph and per node stats.
- *
*/
#include <stdbool.h>
@@ -95,7 +94,6 @@ struct rte_graph_cluster_node_stats; /**< Node stats within cluster of graphs */
* Number of objects processed.
*
* @see rte_graph_walk()
- *
*/
typedef uint16_t (*rte_node_process_t)(struct rte_graph *graph,
struct rte_node *node, void **objs,
diff --git a/lib/hash/rte_hash.h b/lib/hash/rte_hash.h
index a399346d02..7ecc021111 100644
--- a/lib/hash/rte_hash.h
+++ b/lib/hash/rte_hash.h
@@ -176,7 +176,6 @@ rte_hash_find_existing(const char *name);
*
* @param h
* Hash table to free, if NULL, the function does nothing.
- *
*/
void
rte_hash_free(struct rte_hash *h);
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5..60bf42ce1d 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -51,7 +51,6 @@ static uint8_t crc32_alg = CRC32_SW;
* - (CRC32_SSE42) Use SSE4.2 intrinsics if available
* - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
* - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
- *
*/
static inline void
rte_hash_crc_set_alg(uint8_t alg)
diff --git a/lib/ip_frag/rte_ipv6_fragmentation.c b/lib/ip_frag/rte_ipv6_fragmentation.c
index 2e692418b5..56696f32f8 100644
--- a/lib/ip_frag/rte_ipv6_fragmentation.c
+++ b/lib/ip_frag/rte_ipv6_fragmentation.c
@@ -14,7 +14,6 @@
* RTE IPv6 Fragmentation
*
* Implementation of IPv6 fragmentation.
- *
*/
static inline void
diff --git a/lib/ip_frag/rte_ipv6_reassembly.c b/lib/ip_frag/rte_ipv6_reassembly.c
index d4019e87e6..88863a98d1 100644
--- a/lib/ip_frag/rte_ipv6_reassembly.c
+++ b/lib/ip_frag/rte_ipv6_reassembly.c
@@ -13,7 +13,6 @@
* IPv6 reassemble
*
* Implementation of IPv6 reassembly.
- *
*/
static inline void
diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c
index 90d4b67156..68a91108dd 100644
--- a/lib/ipsec/ipsec_telemetry.c
+++ b/lib/ipsec/ipsec_telemetry.c
@@ -41,7 +41,6 @@ handle_telemetry_cmd_ipsec_sa_list(const char *cmd __rte_unused,
* "SA_SPI_XX": {"count": 0, "bytes": 0, "errors": 0},
* "SA_SPI_YY": {"count": 0, "bytes": 0, "errors": 0}
* }
- *
*/
static int
handle_telemetry_cmd_ipsec_sa_stats(const char *cmd __rte_unused,
diff --git a/lib/member/rte_member.h b/lib/member/rte_member.h
index 072a253c89..237d403714 100644
--- a/lib/member/rte_member.h
+++ b/lib/member/rte_member.h
@@ -213,7 +213,6 @@ struct rte_member_setsum {
* Parameters used when create the set summary table. Currently user can
* specify two types of setsummary: HT based and vBF. For HT based, user can
* specify cache or non-cache mode. Here is a table to describe some differences
- *
*/
struct rte_member_parameters {
const char *name; /**< Name of the hash. */
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index cf5dea2304..4d337fca8d 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -55,7 +55,6 @@ mempool_event_callback_invoke(enum rte_mempool_event event,
#if defined(RTE_ARCH_X86)
/*
* return the greatest common divisor between a and b (fast algorithm)
- *
*/
static unsigned get_gcd(unsigned a, unsigned b)
{
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index e53f9e7abd..160975a7e7 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1867,7 +1867,6 @@ void rte_mempool_list_dump(FILE *f);
* NULL on error
* with rte_errno set appropriately. Possible rte_errno values include:
* - ENOENT - required entry not available to return.
- *
*/
struct rte_mempool *rte_mempool_lookup(const char *name);
diff --git a/lib/meter/rte_meter.h b/lib/meter/rte_meter.h
index 0932645d0a..d2c91b3a40 100644
--- a/lib/meter/rte_meter.h
+++ b/lib/meter/rte_meter.h
@@ -18,7 +18,6 @@ extern "C" {
* 1. Single Rate Three Color Marker (srTCM): defined by IETF RFC 2697
* 2. Two Rate Three Color Marker (trTCM): defined by IETF RFC 2698
* 3. Two Rate Three Color Marker (trTCM): defined by IETF RFC 4115
- *
***/
#include <stdint.h>
@@ -26,7 +25,6 @@ extern "C" {
/*
* Application Programmer's Interface (API)
- *
***/
/**
@@ -328,7 +326,6 @@ rte_meter_trtcm_rfc4115_color_aware_check(
/*
* Inline implementation of run-time methods
- *
***/
struct rte_meter_srtcm_profile {
diff --git a/lib/mldev/mldev_utils.h b/lib/mldev/mldev_utils.h
index 7d798a92a5..5bc8020453 100644
--- a/lib/mldev/mldev_utils.h
+++ b/lib/mldev/mldev_utils.h
@@ -15,7 +15,6 @@ extern "C" {
* ML Device PMD utility API
*
* These APIs for the use from ML drivers, user applications shouldn't use them.
- *
*/
#include <rte_compat.h>
diff --git a/lib/node/rte_node_eth_api.h b/lib/node/rte_node_eth_api.h
index 1e7477349c..40b2021f01 100644
--- a/lib/node/rte_node_eth_api.h
+++ b/lib/node/rte_node_eth_api.h
@@ -14,7 +14,6 @@
*
* This API allows to setup ethdev_rx and ethdev_tx nodes
* and its queue associations.
- *
*/
#ifdef __cplusplus
diff --git a/lib/node/rte_node_ip4_api.h b/lib/node/rte_node_ip4_api.h
index 46d0d8976b..3397da0ae8 100644
--- a/lib/node/rte_node_ip4_api.h
+++ b/lib/node/rte_node_ip4_api.h
@@ -14,7 +14,6 @@
*
* This API allows to do control path functions of ip4_* nodes
* like ip4_lookup, ip4_rewrite.
- *
*/
#ifdef __cplusplus
extern "C" {
diff --git a/lib/node/rte_node_ip6_api.h b/lib/node/rte_node_ip6_api.h
index 1459ccef47..f3b5a1002a 100644
--- a/lib/node/rte_node_ip6_api.h
+++ b/lib/node/rte_node_ip6_api.h
@@ -14,7 +14,6 @@
*
* This API allows to do control path functions of ip6_* nodes
* like ip6_lookup, ip6_rewrite.
- *
*/
#ifdef __cplusplus
extern "C" {
diff --git a/lib/pcapng/rte_pcapng.h b/lib/pcapng/rte_pcapng.h
index 4afdec22ef..d93cc9f73a 100644
--- a/lib/pcapng/rte_pcapng.h
+++ b/lib/pcapng/rte_pcapng.h
@@ -131,7 +131,6 @@ enum rte_pcapng_direction {
* @return
* - The pointer to the new mbuf formatted for pcapng_write
* - NULL if allocation fails.
- *
*/
__rte_experimental
struct rte_mbuf *
diff --git a/lib/pdcp/rte_pdcp_group.h b/lib/pdcp/rte_pdcp_group.h
index 2ac2af9b36..f6a94ff587 100644
--- a/lib/pdcp/rte_pdcp_group.h
+++ b/lib/pdcp/rte_pdcp_group.h
@@ -78,7 +78,6 @@ rte_pdcp_en_from_cop(const struct rte_crypto_op *cop)
* The maximum number of crypto-ops to process.
* @return
* Number of filled elements in *grp* array.
- *
*/
static inline uint16_t
rte_pdcp_pkt_crypto_group(struct rte_crypto_op *cop[], struct rte_mbuf *mb[],
diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c
index ff86c7cf96..1fa9f9c47e 100644
--- a/lib/pipeline/rte_pipeline.c
+++ b/lib/pipeline/rte_pipeline.c
@@ -155,7 +155,6 @@ rte_pipeline_port_out_free(struct rte_port_out *port);
/*
* Pipeline
- *
*/
static int
rte_pipeline_check_params(struct rte_pipeline_params *params)
@@ -267,7 +266,6 @@ rte_pipeline_free(struct rte_pipeline *p)
/*
* Table
- *
*/
static int
rte_table_check_params(struct rte_pipeline *p,
@@ -682,7 +680,6 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p,
/*
* Port
- *
*/
static int
rte_pipeline_port_in_check_params(struct rte_pipeline *p,
@@ -1030,7 +1027,6 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
/*
* Pipeline run-time
- *
*/
int
rte_pipeline_check(struct rte_pipeline *p)
diff --git a/lib/pipeline/rte_pipeline.h b/lib/pipeline/rte_pipeline.h
index 3cfb6868f7..267dbcfa2c 100644
--- a/lib/pipeline/rte_pipeline.h
+++ b/lib/pipeline/rte_pipeline.h
@@ -51,7 +51,6 @@ extern "C" {
* <B>Thread safety.</B> It is possible to have multiple pipelines running on
* the same CPU core, but it is not allowed (for thread safety reasons) to have
* multiple CPU cores running the same pipeline instance.
- *
***/
#include <stdint.h>
@@ -64,7 +63,6 @@ struct rte_mbuf;
/*
* Pipeline
- *
*/
/** Opaque data type for pipeline */
struct rte_pipeline;
@@ -175,7 +173,6 @@ int rte_pipeline_flush(struct rte_pipeline *p);
/*
* Actions
- *
*/
/** Reserved actions */
enum rte_pipeline_action {
@@ -197,7 +194,6 @@ enum rte_pipeline_action {
/*
* Table
- *
*/
/** Maximum number of tables allowed for any given pipeline instance. The
value of this parameter cannot be changed. */
@@ -530,7 +526,6 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id,
/*
* Port IN
- *
*/
/** Maximum number of input ports allowed for any given pipeline instance. The
value of this parameter cannot be changed. */
@@ -662,7 +657,6 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
/*
* Port OUT
- *
*/
/** Maximum number of output ports allowed for any given pipeline instance. The
value of this parameter cannot be changed. */
@@ -753,7 +747,6 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
/*
* Functions to be called as part of the port IN/OUT or table action handlers
- *
*/
/**
* Action handler packet insert to output port
diff --git a/lib/pipeline/rte_swx_ipsec.h b/lib/pipeline/rte_swx_ipsec.h
index ebfb7ea5ea..a43e341cac 100644
--- a/lib/pipeline/rte_swx_ipsec.h
+++ b/lib/pipeline/rte_swx_ipsec.h
@@ -87,7 +87,6 @@ struct rte_swx_ipsec_burst_size {
/**
* IPsec instance configuration parameters
- *
*/
struct rte_swx_ipsec_params {
/** Input packet queue. */
@@ -111,7 +110,6 @@ struct rte_swx_ipsec_params {
/**
* IPsec input packet meta-data
- *
*/
struct rte_swx_ipsec_input_packet_metadata {
/* SA ID. */
diff --git a/lib/pipeline/rte_swx_pipeline_spec.c b/lib/pipeline/rte_swx_pipeline_spec.c
index 006b24082a..2bba0d0524 100644
--- a/lib/pipeline/rte_swx_pipeline_spec.c
+++ b/lib/pipeline/rte_swx_pipeline_spec.c
@@ -102,7 +102,6 @@ extobj_statement_parse(struct extobj_spec *s,
/*
* struct.
- *
*/
static void
struct_spec_free(struct struct_spec *s)
@@ -279,7 +278,6 @@ struct_block_parse(struct struct_spec *s,
/*
* header.
- *
*/
static void
header_spec_free(struct header_spec *s)
@@ -331,7 +329,6 @@ header_statement_parse(struct header_spec *s,
/*
* metadata.
- *
*/
static void
metadata_spec_free(struct metadata_spec *s)
@@ -375,7 +372,6 @@ metadata_statement_parse(struct metadata_spec *s,
/*
* action.
- *
*/
static void
action_spec_free(struct action_spec *s)
@@ -504,7 +500,6 @@ action_block_parse(struct action_spec *s,
/*
* table.
- *
*/
static void
table_spec_free(struct table_spec *s)
@@ -1059,7 +1054,6 @@ table_block_parse(struct table_spec *s,
/*
* selector.
- *
*/
static void
selector_spec_free(struct selector_spec *s)
@@ -1345,7 +1339,6 @@ selector_block_parse(struct selector_spec *s,
/*
* learner.
- *
*/
static void
learner_spec_free(struct learner_spec *s)
@@ -1927,7 +1920,6 @@ learner_block_parse(struct learner_spec *s,
/*
* regarray.
- *
*/
static void
regarray_spec_free(struct regarray_spec *s)
@@ -1995,7 +1987,6 @@ regarray_statement_parse(struct regarray_spec *s,
/*
* metarray.
- *
*/
static void
metarray_spec_free(struct metarray_spec *s)
@@ -2052,7 +2043,6 @@ metarray_statement_parse(struct metarray_spec *s,
/*
*
* rss
- *
*/
static void
@@ -2097,7 +2087,6 @@ rss_statement_parse(struct rss_spec *s,
/*
* apply.
- *
*/
static void
apply_spec_free(struct apply_spec *s)
diff --git a/lib/port/rte_port.h b/lib/port/rte_port.h
index 6b6a2cdd17..f001ffbacf 100644
--- a/lib/port/rte_port.h
+++ b/lib/port/rte_port.h
@@ -15,7 +15,6 @@ extern "C" {
*
* This tool is part of the DPDK Packet Framework tool suite and provides
* a standard interface to implement different types of packet ports.
- *
***/
#include <stdint.h>
@@ -46,7 +45,6 @@ extern "C" {
/*
* Port IN
- *
*/
/** Maximum number of packets read from any input port in a single burst.
Cannot be changed. */
@@ -125,7 +123,6 @@ struct rte_port_in_ops {
/*
* Port OUT
- *
*/
/** Output port statistics */
struct rte_port_out_stats {
diff --git a/lib/port/rte_port_ethdev.h b/lib/port/rte_port_ethdev.h
index 7f28d512f1..b607365c98 100644
--- a/lib/port/rte_port_ethdev.h
+++ b/lib/port/rte_port_ethdev.h
@@ -15,7 +15,6 @@ extern "C" {
*
* ethdev_reader: input port built on top of pre-initialized NIC RX queue
* ethdev_writer: output port built on top of pre-initialized NIC TX queue
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_eventdev.h b/lib/port/rte_port_eventdev.h
index 966e9cdafb..bf3f8254af 100644
--- a/lib/port/rte_port_eventdev.h
+++ b/lib/port/rte_port_eventdev.h
@@ -17,7 +17,6 @@ extern "C" {
* interface
* eventdev_writer: output port built on top of pre-initialized eventdev
* interface
- *
**/
#include <stdint.h>
diff --git a/lib/port/rte_port_fd.h b/lib/port/rte_port_fd.h
index c8cfd9765a..810b0e9cc0 100644
--- a/lib/port/rte_port_fd.h
+++ b/lib/port/rte_port_fd.h
@@ -15,7 +15,6 @@ extern "C" {
*
* fd_reader: input port built on top of valid non-blocking file descriptor
* fd_writer: output port built on top of valid non-blocking file descriptor
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_frag.h b/lib/port/rte_port_frag.h
index 07060297f6..69fdc000f1 100644
--- a/lib/port/rte_port_frag.h
+++ b/lib/port/rte_port_frag.h
@@ -24,7 +24,6 @@ extern "C" {
* packets read from the ring are all non-jumbo frames. The complete IP
* datagrams written to the ring are not changed. The jumbo frames are
* fragmented into several IP packets with length less or equal to MTU.
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_kni.h b/lib/port/rte_port_kni.h
index 35c6253806..54227251f6 100644
--- a/lib/port/rte_port_kni.h
+++ b/lib/port/rte_port_kni.h
@@ -16,7 +16,6 @@ extern "C" {
*
* kni_reader: input port built on top of pre-initialized KNI interface
* kni_writer: output port built on top of pre-initialized KNI interface
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_ras.h b/lib/port/rte_port_ras.h
index ee1d8ae21e..8fd8ae8444 100644
--- a/lib/port/rte_port_ras.h
+++ b/lib/port/rte_port_ras.h
@@ -25,7 +25,6 @@ extern "C" {
* The complete IP datagrams written to the ring are not changed. The IP
* fragments written to the ring are first reassembled and into complete IP
* datagrams or dropped on error or IP reassembly time-out.
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_ring.h b/lib/port/rte_port_ring.h
index ba609b3436..9532ac1d22 100644
--- a/lib/port/rte_port_ring.h
+++ b/lib/port/rte_port_ring.h
@@ -21,7 +21,6 @@ extern "C" {
* input port built on top of pre-initialized multi consumers ring
* ring_multi_writer:
* output port built on top of pre-initialized multi producers ring
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_sched.h b/lib/port/rte_port_sched.h
index 953451463f..5f46297a60 100644
--- a/lib/port/rte_port_sched.h
+++ b/lib/port/rte_port_sched.h
@@ -15,7 +15,6 @@ extern "C" {
*
* sched_reader: input port built on top of pre-initialized rte_sched_port
* sched_writer: output port built on top of pre-initialized rte_sched_port
- *
***/
#include <stdint.h>
diff --git a/lib/port/rte_port_source_sink.h b/lib/port/rte_port_source_sink.h
index 16b8318e52..c2ddb297f4 100644
--- a/lib/port/rte_port_source_sink.h
+++ b/lib/port/rte_port_source_sink.h
@@ -15,7 +15,6 @@ extern "C" {
*
* source: input port that can be used to generate packets
* sink: output port that drops all packets written to it
- *
***/
#include "rte_port.h"
diff --git a/lib/port/rte_port_sym_crypto.h b/lib/port/rte_port_sym_crypto.h
index 181f6ce01d..4cf8566633 100644
--- a/lib/port/rte_port_sym_crypto.h
+++ b/lib/port/rte_port_sym_crypto.h
@@ -15,7 +15,6 @@ extern "C" {
*
* crypto_reader: input port built on top of pre-initialized crypto interface
* crypto_writer: output port built on top of pre-initialized crypto interface
- *
**/
#include <stdint.h>
diff --git a/lib/port/rte_swx_port_fd.h b/lib/port/rte_swx_port_fd.h
index c1a9200a4f..2dd1480a42 100644
--- a/lib/port/rte_swx_port_fd.h
+++ b/lib/port/rte_swx_port_fd.h
@@ -12,7 +12,6 @@ extern "C" {
/**
* @file
* RTE SWX FD Input and Output Ports
- *
***/
#include <stdint.h>
diff --git a/lib/power/guest_channel.h b/lib/power/guest_channel.h
index bdda1367f0..409fa67b74 100644
--- a/lib/power/guest_channel.h
+++ b/lib/power/guest_channel.h
@@ -39,7 +39,6 @@ int guest_channel_host_connect(const char *path, unsigned int lcore_id);
*
* @param lcore_id
* lcore_id.
- *
*/
void guest_channel_host_disconnect(unsigned int lcore_id);
diff --git a/lib/rawdev/rte_rawdev.h b/lib/rawdev/rte_rawdev.h
index 66080eae9e..36c9c9a9b4 100644
--- a/lib/rawdev/rte_rawdev.h
+++ b/lib/rawdev/rte_rawdev.h
@@ -90,7 +90,6 @@ struct rte_rawdev_info;
* @return
* - 0: Success, driver updates the contextual information of the raw device
* - <0: Error code returned by the driver info get function.
- *
*/
int
rte_rawdev_info_get(uint16_t dev_id, struct rte_rawdev_info *dev_info,
@@ -152,7 +151,6 @@ rte_rawdev_configure(uint16_t dev_id, struct rte_rawdev_info *dev_conf,
* - <0: Error code returned by the driver info get function.
*
* @see rte_raw_queue_setup()
- *
*/
int
rte_rawdev_queue_conf_get(uint16_t dev_id,
diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h
index a51944c8ff..7b9ef1d09f 100644
--- a/lib/rawdev/rte_rawdev_pmd.h
+++ b/lib/rawdev/rte_rawdev_pmd.h
@@ -254,7 +254,6 @@ typedef int (*rawdev_queue_setup_t)(struct rte_rawdev *dev,
* Raw device pointer
* @param queue_id
* Raw queue index
- *
*/
typedef int (*rawdev_queue_release_t)(struct rte_rawdev *dev,
uint16_t queue_id);
@@ -273,7 +272,6 @@ typedef int (*rawdev_queue_release_t)(struct rte_rawdev *dev,
* Raw device pointer
* @return
* Number of queues; 0 is assumed to be a valid response.
- *
*/
typedef uint16_t (*rawdev_queue_count_t)(struct rte_rawdev *dev);
@@ -339,7 +337,6 @@ typedef int (*rawdev_dequeue_bufs_t)(struct rte_rawdev *dev,
* @return
* 0 for success,
* !0 Error
- *
*/
typedef int (*rawdev_dump_t)(struct rte_rawdev *dev, FILE *f);
diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h
index 1aa078e2c2..87e1b55153 100644
--- a/lib/rcu/rte_rcu_qsbr.h
+++ b/lib/rcu/rte_rcu_qsbr.h
@@ -226,7 +226,6 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads);
* On error - 1 with error code set in rte_errno.
* Possible rte_errno codes are:
* - EINVAL - max_threads is 0 or 'v' is NULL.
- *
*/
int
rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads);
diff --git a/lib/regexdev/rte_regexdev_core.h b/lib/regexdev/rte_regexdev_core.h
index a5576d353f..15ba712b86 100644
--- a/lib/regexdev/rte_regexdev_core.h
+++ b/lib/regexdev/rte_regexdev_core.h
@@ -14,7 +14,6 @@
* in order to expose their ops to the class.
*
* Applications should not use these API directly.
- *
*/
struct rte_regexdev;
diff --git a/lib/reorder/rte_reorder.h b/lib/reorder/rte_reorder.h
index 06a8d56e3f..56a6507f9f 100644
--- a/lib/reorder/rte_reorder.h
+++ b/lib/reorder/rte_reorder.h
@@ -12,7 +12,6 @@
* Reorder library is a component which is designed to
* provide ordering of out of ordered packets based on
* sequence number present in mbuf.
- *
*/
#include <rte_compat.h>
diff --git a/lib/ring/rte_ring.h b/lib/ring/rte_ring.h
index 7e4cd60650..c709f30497 100644
--- a/lib/ring/rte_ring.h
+++ b/lib/ring/rte_ring.h
@@ -32,7 +32,6 @@
* Note: the ring implementation is not preemptible. Refer to Programmer's
* guide/Environment Abstraction Layer/Multiple pthread/Known Issues/rte_ring
* for more information.
- *
*/
#ifdef __cplusplus
diff --git a/lib/sched/rte_approx.h b/lib/sched/rte_approx.h
index 74d55f457b..0200fa1521 100644
--- a/lib/sched/rte_approx.h
+++ b/lib/sched/rte_approx.h
@@ -16,7 +16,6 @@ extern "C" {
* Given a rational number alpha with 0 < alpha < 1 and a precision d, the goal
* is to find positive integers p, q such that alpha - d < p/q < alpha + d, and
* q is minimal.
- *
***/
#include <stdint.h>
diff --git a/lib/sched/rte_pie.h b/lib/sched/rte_pie.h
index 2236b98a71..cb52fd933e 100644
--- a/lib/sched/rte_pie.h
+++ b/lib/sched/rte_pie.h
@@ -32,7 +32,6 @@ extern "C" {
/**
* PIE configuration parameters passed by user
- *
*/
struct rte_pie_params {
uint16_t qdelay_ref; /**< Latency Target (milliseconds) */
@@ -43,7 +42,6 @@ struct rte_pie_params {
/**
* PIE configuration parameters
- *
*/
struct rte_pie_config {
uint64_t qdelay_ref; /**< Latency Target (in CPU cycles.) */
diff --git a/lib/sched/rte_red.h b/lib/sched/rte_red.h
index 80b43b6da0..13a9ad24a8 100644
--- a/lib/sched/rte_red.h
+++ b/lib/sched/rte_red.h
@@ -12,8 +12,6 @@ extern "C" {
/**
* @file
* RTE Random Early Detection (RED)
- *
- *
***/
#include <stdint.h>
@@ -35,7 +33,6 @@ extern "C" {
/**
* Externs
- *
*/
extern uint32_t rte_red_rand_val;
extern uint32_t rte_red_rand_seed;
@@ -44,7 +41,6 @@ extern uint16_t rte_red_pow2_frac_inv[16];
/**
* RED configuration parameters passed by user
- *
*/
struct rte_red_params {
uint16_t min_th; /**< Minimum threshold for queue (max_th) */
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
index 19768d8c38..e7787344b3 100644
--- a/lib/sched/rte_sched.c
+++ b/lib/sched/rte_sched.c
@@ -2043,7 +2043,6 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port,
* | 0 | | 1 | | 2 | | 3 |
* ----->|_______|----->|_______|----->|_______|----->|_______|----->
* p01 p11 p21 p31
- *
*/
int
rte_sched_port_enqueue(struct rte_sched_port *port, struct rte_mbuf **pkts,
diff --git a/lib/sched/rte_sched.h b/lib/sched/rte_sched.h
index a33292b066..c7081ceeef 100644
--- a/lib/sched/rte_sched.h
+++ b/lib/sched/rte_sched.h
@@ -53,7 +53,6 @@ extern "C" {
* the same user;
* - Weighted Round Robin (WRR) is used to service the
* queues within same pipe lowest priority traffic class (best-effort).
- *
*/
#include <rte_compat.h>
@@ -310,7 +309,6 @@ struct rte_sched_port_params {
/*
* Configuration
- *
***/
/**
@@ -436,7 +434,6 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
struct rte_sched_subport_params **subport_params);
/*
* Statistics
- *
***/
/**
diff --git a/lib/sched/rte_sched_common.h b/lib/sched/rte_sched_common.h
index e4cbbd9077..419700b1a5 100644
--- a/lib/sched/rte_sched_common.h
+++ b/lib/sched/rte_sched_common.h
@@ -49,7 +49,6 @@ rte_min_pos_4_u16(uint16_t *x)
* This implementation uses Euclid's algorithm:
* gcd(a, 0) = a
* gcd(a, b) = gcd(b, a mod b)
- *
*/
static inline uint64_t
rte_get_gcd64(uint64_t a, uint64_t b)
@@ -89,7 +88,6 @@ rte_get_gcd(uint32_t a, uint32_t b)
* Compute the Lowest Common Denominator (LCD) of two numbers.
* This implementation computes GCD first:
* LCD(a, b) = (a * b) / GCD(a, b)
- *
*/
static inline uint32_t
rte_get_lcd(uint32_t a, uint32_t b)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 30bac4e25a..daef846d40 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -10,7 +10,6 @@
* @file rte_security.h
*
* RTE Security Common Definitions
- *
*/
#ifdef __cplusplus
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 677c7d1f91..31444a05d3 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -10,7 +10,6 @@
* @file rte_security_driver.h
*
* RTE Security Common Definitions
- *
*/
#ifdef __cplusplus
diff --git a/lib/table/rte_swx_table_em.c b/lib/table/rte_swx_table_em.c
index 2b5201e006..84837c8be4 100644
--- a/lib/table/rte_swx_table_em.c
+++ b/lib/table/rte_swx_table_em.c
@@ -489,7 +489,6 @@ table_mailbox_size_get(void)
* match = 1111_1111_1111_1110 = 0xFFFE
* match_many = 1111_1110_1110_1000 = 0xFEE8
* match_pos = 0001_0010_0001_0011__0001_0010_0001_0000 = 0x12131210
- *
*/
#define LUT_MATCH 0xFFFE
diff --git a/lib/table/rte_table.h b/lib/table/rte_table.h
index 096ab8a7c8..3f592d1aa2 100644
--- a/lib/table/rte_table.h
+++ b/lib/table/rte_table.h
@@ -22,7 +22,6 @@ extern "C" {
* use-case, the lookup key is an n-tuple of packet fields that uniquely
* identifies a traffic flow, while data represents actions and action
* meta-data associated with the same traffic flow.
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_acl.h b/lib/table/rte_table_acl.h
index 516819213c..673725f82e 100644
--- a/lib/table/rte_table_acl.h
+++ b/lib/table/rte_table_acl.h
@@ -17,7 +17,6 @@ extern "C" {
* associate data to lookup keys.
*
* Use-cases: Firewall rule database, etc.
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_array.h b/lib/table/rte_table_array.h
index b16c5dfe5b..ac3ca59c75 100644
--- a/lib/table/rte_table_array.h
+++ b/lib/table/rte_table_array.h
@@ -14,7 +14,6 @@ extern "C" {
* RTE Table Array
*
* Simple array indexing. Lookup key is the array entry index.
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_hash.h b/lib/table/rte_table_hash.h
index 61a0eed6c5..5d1a0e2bea 100644
--- a/lib/table/rte_table_hash.h
+++ b/lib/table/rte_table_hash.h
@@ -46,7 +46,6 @@ extern "C" {
* 2. Key size:
* a. Configurable key size
* b. Single key size (8-byte, 16-byte or 32-byte key size)
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_hash_cuckoo.h b/lib/table/rte_table_hash_cuckoo.h
index d9d4312190..cb5a771c8b 100644
--- a/lib/table/rte_table_hash_cuckoo.h
+++ b/lib/table/rte_table_hash_cuckoo.h
@@ -12,7 +12,6 @@ extern "C" {
/**
* @file
* RTE Table Hash Cuckoo
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c
index 70ea84fa2e..bd97dc5eba 100644
--- a/lib/table/rte_table_hash_ext.c
+++ b/lib/table/rte_table_hash_ext.c
@@ -552,7 +552,6 @@ static int rte_table_hash_ext_lookup_unoptimized(
* match = 0xFFFELLU
* match_many = 0xFEE8LLU
* match_pos = 0x12131210LLU
- *
***/
#define LUT_MATCH 0xFFFELLU
@@ -844,7 +843,6 @@ static int rte_table_hash_ext_lookup_unoptimized(
*
* The naming convention is:
* pXY = packet Y of stage X, X = 0 .. 3, Y = 0 .. 1
-*
***/
static int rte_table_hash_ext_lookup(
void *table,
diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c
index c31acc11cf..354cd5aee4 100644
--- a/lib/table/rte_table_hash_lru.c
+++ b/lib/table/rte_table_hash_lru.c
@@ -489,7 +489,6 @@ static int rte_table_hash_lru_lookup_unoptimized(
* match = 0xFFFELLU
* match_many = 0xFEE8LLU
* match_pos = 0x12131210LLU
-*
***/
#define LUT_MATCH 0xFFFELLU
@@ -796,7 +795,6 @@ static int rte_table_hash_lru_lookup_unoptimized(
*
* The naming convention is:
* pXY = packet Y of stage X, X = 0 .. 3, Y = 0 .. 1
-*
***/
static int rte_table_hash_lru_lookup(
void *table,
diff --git a/lib/table/rte_table_lpm.h b/lib/table/rte_table_lpm.h
index 571ff2f009..135f07384c 100644
--- a/lib/table/rte_table_lpm.h
+++ b/lib/table/rte_table_lpm.h
@@ -39,7 +39,6 @@ extern "C" {
* has to carefully manage the format of the LPM table entry (i.e. the next
* hop information) so that any next hop data that changes value during
* run-time (e.g. counters) is placed outside of this area.
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_lpm_ipv6.h b/lib/table/rte_table_lpm_ipv6.h
index eadc316ee1..a6cea2c707 100644
--- a/lib/table/rte_table_lpm_ipv6.h
+++ b/lib/table/rte_table_lpm_ipv6.h
@@ -39,7 +39,6 @@ extern "C" {
* has to carefully manage the format of the LPM table entry (i.e. the next
* hop information) so that any next hop data that changes value during
* run-time (e.g. counters) is placed outside of this area.
- *
***/
#include <stdint.h>
diff --git a/lib/table/rte_table_stub.h b/lib/table/rte_table_stub.h
index 9086e4edcc..b4aa0a16a2 100644
--- a/lib/table/rte_table_stub.h
+++ b/lib/table/rte_table_stub.h
@@ -14,7 +14,6 @@ extern "C" {
* RTE Table Stub
*
* The stub table lookup operation produces lookup miss for all input packets.
- *
***/
diff --git a/lib/telemetry/telemetry_internal.h b/lib/telemetry/telemetry_internal.h
index d085c492dc..37d79bcb24 100644
--- a/lib/telemetry/telemetry_internal.h
+++ b/lib/telemetry/telemetry_internal.h
@@ -14,7 +14,6 @@
*
* @file
* RTE Telemetry Legacy and internal definitions
- *
***/
/**
diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h
index 7a246deacb..31a3d56756 100644
--- a/lib/telemetry/telemetry_json.h
+++ b/lib/telemetry/telemetry_json.h
@@ -18,7 +18,6 @@
*
* This file contains small inline functions to make it easier for applications
* to build up valid JSON responses to telemetry requests.
- *
***/
/**
--
2.40.1
^ permalink raw reply [relevance 1%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-13 20:00 4% ` Maxime Coquelin
@ 2023-06-13 21:22 3% ` Stephen Hemminger
2023-06-14 18:18 0% ` Chautru, Nicolas
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-13 21:22 UTC (permalink / raw)
To: Maxime Coquelin
Cc: Chautru, Nicolas, David Marchand, dev, Rix, Tom, hemant.agrawal,
Vargas, Hernan
On Tue, 13 Jun 2023 22:00:25 +0200
Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
> >>
> >> If we are going to break the ABI, better to do the right rework directly. Otherwise
> >> we'll end-up breaking it again next year.
> >
> > With the suggested change, this will not break ABI next year. Any future functions are added at the end of the structure anyway.
Do it right in 23.11, break the ABI and fix the few drivers.
It is not hard to have one ops struct (and it can/should be const) that is pointed
to by the bbdev. That will hide the ops from the application.
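A minimal sketch of that kind of indirection could look like the
following (hypothetical struct and member names, not the actual bbdev
rework; the callback typedefs are the existing ones from rte_bbdev.h):

/* Hypothetical sketch: gather the fast-path callbacks into one
 * internal, const ops table filled in by the PMD, instead of embedding
 * each function pointer in the application-visible struct rte_bbdev. */
struct rte_bbdev_enq_deq_ops {
	rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
	rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
	/* ... the remaining enqueue/dequeue callbacks ... */
};

struct rte_bbdev {
	const struct rte_bbdev_enq_deq_ops *enq_deq_ops; /* set by the PMD */
	const struct rte_bbdev_ops *dev_ops;
	struct rte_bbdev_data *data;
	/* ... other existing fields unchanged ... */
};

The enqueue/dequeue wrappers would then dereference the const table, and
the callbacks themselves would no longer be part of the
application-visible ABI.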
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-13 17:16 3% ` Chautru, Nicolas
@ 2023-06-13 20:00 4% ` Maxime Coquelin
2023-06-13 21:22 3% ` Stephen Hemminger
2023-06-14 18:18 0% ` Chautru, Nicolas
0 siblings, 2 replies; 200+ results
From: Maxime Coquelin @ 2023-06-13 20:00 UTC (permalink / raw)
To: Chautru, Nicolas, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
Hi,
On 6/13/23 19:16, Chautru, Nicolas wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>
>>
>> On 6/12/23 22:53, Chautru, Nicolas wrote:
>>> Hi Maxime, David,
>>>
>>>> -----Original Message-----
>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>
>>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
>>>>> Hi David,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: David Marchand <david.marchand@redhat.com>> >> On Mon, Jun
>> 5,
>>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
>>>>>> wrote:
>>>>>>> Wrt the MLD functions: these are new into the related serie but
>>>>>>> still the
>>>>>> break the ABI since the struct rte_bbdev includes these functions
>>>>>> hence causing offset changes.
>>>>>>>
>>>>>>> Should I then just rephrase as:
>>>>>>>
>>>>>>> +* bbdev: Will extend the API to support the new operation type
>>>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
>>>>>>> + this `v1
>>>>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
>>>>>>> This
>>>>>>> + will notably introduce + new symbols for
>>>>>>> ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
>>>>>>> into the stuct rte_bbdev.
>>>>>>
>>>>>> I don't think we need this deprecation notice.
>>>>>>
>>>>>>
>>>>>> Do you need to expose those new mldts ops in rte_bbdev?
>>>>>> Can't they go to dev_ops?
>>>>>> If you can't, at least moving those new ops at the end of the
>>>>>> structure would avoid the breakage on rte_bbdev.
>>>>>
>>>>> It would probably be best to move all these ops at the end of the
>>>>> structure
>>>> (ie. keep them together).
>>>>> In that case the deprecation notice would call out that the
>>>>> rte_bbdev
>>>> structure content is more generally modified. Probably best for the
>>>> longer run.
>>>>> David, Maxime, ok with that option?
>>>>>
>>>>> struct __rte_cache_aligned rte_bbdev {
>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
>>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
>>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
>>>>> const struct rte_bbdev_ops *dev_ops;
>>>>> struct rte_bbdev_data *data;
>>>>> enum rte_bbdev_state state;
>>>>> struct rte_device *device;
>>>>> struct rte_bbdev_cb_list list_cbs;
>>>>> struct rte_intr_handle *intr_handle;
>>>>> };
>>>>
>>>> The best thing, as suggested by David, would be to move all the ops
>>>> out of struct rte_bbdev, as these should not be visible to the application.
>>>
>>> That would be quite disruptive across all PMDs and possible perf impact to
>> validate. I don’t think this is anywhere realistic to consider such a change in
>> 23.11.
>>> I believe moving these function at the end of the structure is a good
>> compromise to avoid future breakage of rte_bbdev structure with almost
>> seamless impact (purely a ABI break when moving into 23.11 which is not
>> avoidable. Retrospectively we should have done that in 22.11 really.
>>
>> If we are going to break the ABI, better to do the right rework directly. Otherwise
>> we'll end-up breaking it again next year.
>
> With the suggested change, this will not break ABI next year. Any future functions are added at the end of the structure anyway.
I'm not so sure; it depends on whether adding a new field at the end
crosses a cacheline boundary or not:
/*
* Global array of all devices. This is not static because it's used by the
* inline enqueue and dequeue functions
*/
struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
If the older inlined functions used by the application retrieve the dev
pointer from the array directly (they do), and the new fields added in a
new version cross a cacheline, then there will be a misalignment between
the new lib version and an application using the older inlined
functions.
ABI-wise, this is not really future-proof.
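As a hypothetical reduction of the real inline helpers (not the actual
fast-path code), the hazard is that the array stride gets baked into
old binaries:

/* Hypothetical: what an application effectively inlines when built
 * against the old header. The lookup compiles to
 * base + dev_id * sizeof(struct rte_bbdev) as it was at build time, so
 * it stays correct only while new fields fit inside the existing tail
 * padding of the cache-aligned struct. */
static inline struct rte_bbdev *
old_get_dev(uint16_t dev_id)
{
	return &rte_bbdev_devices[dev_id];
}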
>
>>
>> IMHO, moving these ops should be quite trivial and not much work.
>>
>> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
>> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it may not
>> break the ABI, but that's a bit fragile:
>> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
>> should be OK
>> - struct rte_bbdev is cache-aligned, so it may work if adding these two
>> ops do not overlap a cacheline which depends on the CPU architecture.
>
> If you prefer to add only the 2 new functions at the end of the structure, that is okay. I believe it would be cleaner to move all these enqueue/dequeue functions down together; I can't think of any drawback. Let me know.
Adding the new ones at the end is not future-proof, but at least it does
not break the ABI just for cosmetic reasons (which would be a big
drawback IMHO).
I just checked using pahole:
struct rte_bbdev {
rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8 */
rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8 */
rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8 */
rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
const struct rte_bbdev_ops * dev_ops; /* 80 8 */
struct rte_bbdev_data * data; /* 88 8 */
enum rte_bbdev_state state; /* 96 4 */
/* XXX 4 bytes hole, try to pack */
struct rte_device * device; /* 104 8 */
struct rte_bbdev_cb_list list_cbs; /* 112 16 */
/* --- cacheline 2 boundary (128 bytes) --- */
struct rte_intr_handle * intr_handle; /* 128 8 */
/* size: 192, cachelines: 3, members: 16 */
/* sum members: 132, holes: 1, sum holes: 4 */
/* padding: 56 */
} __attribute__((__aligned__(64)));
We're lucky on x86: we still have 56 bytes, so we can add 7 new ops at
the end before breaking the ABI, if I'm not mistaken.
I checked the other architectures, and it seems we don't support any
with a 32B cacheline size, so we're good for a while.
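So an ABI-compatible extension would only append the new members after
the last existing field, along the lines of the sketch below
(hypothetical typedef names, following the style of the existing ones):

struct __rte_cache_aligned rte_bbdev {
	/* ... all existing fields unchanged, ending with ... */
	struct rte_intr_handle *intr_handle;	/* offset 128 */
	/* new callbacks land in the existing 56 bytes of tail padding,
	 * so sizeof(struct rte_bbdev) stays 192 on x86 */
	rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops;
	rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops;
};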
Maxime
>
>>
>> Maxime
>>
>>> What do you think Maxime, David? Based on this I can adjust the change for
>> 23.11 and update slightly the deprecation notice accordingly.
>>>
>>> Thanks
>>> Nic
>>>
>
^ permalink raw reply [relevance 4%]
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-13 8:14 4% ` Maxime Coquelin
@ 2023-06-13 17:16 3% ` Chautru, Nicolas
2023-06-13 20:00 4% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-13 17:16 UTC (permalink / raw)
To: Maxime Coquelin, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> On 6/12/23 22:53, Chautru, Nicolas wrote:
> > Hi Maxime, David,
> >
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>
> >> On 6/6/23 23:01, Chautru, Nicolas wrote:
> >>> Hi David,
> >>>
> >>>> -----Original Message-----
> >>>> From: David Marchand <david.marchand@redhat.com>> >> On Mon, Jun
> 5,
> >>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
> >>>> wrote:
> >>>>> Wrt the MLD functions: these are new into the related serie but
> >>>>> still the
> >>>> break the ABI since the struct rte_bbdev includes these functions
> >>>> hence causing offset changes.
> >>>>>
> >>>>> Should I then just rephrase as:
> >>>>>
> >>>>> +* bbdev: Will extend the API to support the new operation type
> >>>>> +``RTE_BBDEV_OP_MLDTS`` as per
> >>>>> + this `v1
> >>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> >>>>> This
> >>>>> + will notably introduce + new symbols for
> >>>>> ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
> >>>>> into the stuct rte_bbdev.
> >>>>
> >>>> I don't think we need this deprecation notice.
> >>>>
> >>>>
> >>>> Do you need to expose those new mldts ops in rte_bbdev?
> >>>> Can't they go to dev_ops?
> >>>> If you can't, at least moving those new ops at the end of the
> >>>> structure would avoid the breakage on rte_bbdev.
> >>>
> >>> It would probably be best to move all these ops at the end of the
> >>> structure
> >> (ie. keep them together).
> >>> In that case the deprecation notice would call out that the
> >>> rte_bbdev
> >> structure content is more generally modified. Probably best for the
> >> longer run.
> >>> David, Maxime, ok with that option?
> >>>
> >>> struct __rte_cache_aligned rte_bbdev {
> >>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> >>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> >>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> >>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> >>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> >>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> >>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> >>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> >>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> >>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> >>> const struct rte_bbdev_ops *dev_ops;
> >>> struct rte_bbdev_data *data;
> >>> enum rte_bbdev_state state;
> >>> struct rte_device *device;
> >>> struct rte_bbdev_cb_list list_cbs;
> >>> struct rte_intr_handle *intr_handle;
> >>> };
> >>
> >> The best thing, as suggested by David, would be to move all the ops
> >> out of struct rte_bbdev, as these should not be visible to the application.
> >
> > That would be quite disruptive across all PMDs and possible perf impact to
> validate. I don’t think this is anywhere realistic to consider such a change in
> 23.11.
> > I believe moving these function at the end of the structure is a good
> compromise to avoid future breakage of rte_bbdev structure with almost
> seamless impact (purely a ABI break when moving into 23.11 which is not
> avoidable. Retrospectively we should have done that in 22.11 really.
>
> If we are going to break the ABI, better to do the right rework directly. Otherwise
> we'll end-up breaking it again next year.
With the suggested change, this will not break ABI next year. Any future functions are added at the end of the structure anyway.
>
> IMHO, moving these ops should be quite trivial and not much work.
>
> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it may not
> break the ABI, but that's a bit fragile:
> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
> should be OK
> - struct rte_bbdev is cache-aligned, so it may work if adding these two
> ops do not overlap a cacheline which depends on the CPU architecture.
If you prefer to add only the 2 new functions at the end of the structure, that is okay. I believe it would be cleaner to move all these enqueue/dequeue functions down together; I can't think of any drawback. Let me know.
>
> Maxime
>
> > What do you think Maxime, David? Based on this I can adjust the change for
> 23.11 and update slightly the deprecation notice accordingly.
> >
> > Thanks
> > Nic
> >
^ permalink raw reply [relevance 3%]
* [PATCH v4] build: prevent accidentally building without NUMA support
2023-06-13 16:58 4% [PATCH v3] build: prevent accidentally building without NUMA support Bruce Richardson
@ 2023-06-13 17:08 4% ` Bruce Richardson
2023-06-15 14:38 4% ` [PATCH v5] " Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-06-13 17:08 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, David Marchand
When the libnuma development package is missing on a system, DPDK can still
be built but will be missing much-needed support for NUMA memory
management. This may later cause issues at runtime if the resulting
binary is run on a NUMA system.
We can reduce the incidence of such runtime errors by ensuring that, for
native builds*, libnuma is present - unless the user actually specifies
via "max_numa_nodes" that they don't require NUMA support. Having this
as an error condition is also in keeping with what is documented in the
Linux GSG doc, where libnuma is listed as a requirement for building
DPDK [1].
* NOTE: cross-compilation builds have a different logic set, with a
separate "numa" value indicating if numa support is necessary.
Depends-on: series-28489 ("replace int flags with booleans")
[1] https://doc.dpdk.org/guides-23.03/linux_gsg/sys_reqs.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
V4: Add Depends-on tag so CI picks up dependency
V3:
- install 32-bit libnuma packages on CI systems [thanks to David
for the changes]
- split the patch out of the previous patchset, so it can be tracked
separately from the more minor fixup changes.
V2: Limit check to linux only
---
.github/workflows/build.yml | 5 ++++-
config/meson.build | 9 +++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..a479783bbc 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -91,6 +91,9 @@ jobs:
with:
path: reference
key: ${{ steps.get_ref_keys.outputs.abi }}
+ - name: Configure i386 architecture
+ if: env.BUILD_32BIT == 'true'
+ run: sudo dpkg --add-architecture i386
- name: Update APT cache
run: sudo apt update || true
- name: Install packages
@@ -104,7 +107,7 @@ jobs:
pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
- run: sudo apt install -y gcc-multilib g++-multilib
+ run: sudo apt install -y gcc-multilib g++-multilib libnuma-dev:i386
- name: Install aarch64 cross compiling packages
if: env.AARCH64 == 'true'
run: sudo apt install -y crossbuild-essential-arm64
diff --git a/config/meson.build b/config/meson.build
index 22d7d908b7..d8223718e4 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -381,6 +381,15 @@ endif
if not dpdk_conf.has('RTE_MAX_NUMA_NODES')
error('Number of NUMA nodes not specified.')
endif
+if (is_linux and
+ dpdk_conf.get('RTE_MAX_NUMA_NODES') > 1 and
+ not meson.is_cross_build() and
+ not has_libnuma)
+ error('''
+No NUMA library (development package) found, yet DPDK configured for multiple NUMA nodes.
+Please install libnuma, or set 'max_numa_nodes' option to '1' to build without NUMA support.
+''')
+endif
# set the install path for the drivers
dpdk_conf.set_quoted('RTE_EAL_PMD_PATH', eal_pmd_path)
--
2.39.2
^ permalink raw reply [relevance 4%]
* [PATCH v3] build: prevent accidentally building without NUMA support
@ 2023-06-13 16:58 4% Bruce Richardson
2023-06-13 17:08 4% ` [PATCH v4] " Bruce Richardson
2023-06-15 14:38 4% ` [PATCH v5] " Bruce Richardson
0 siblings, 2 replies; 200+ results
From: Bruce Richardson @ 2023-06-13 16:58 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, David Marchand
When libnuma development package is missing on a system, DPDK can still
be built but will be missing much-needed support for NUMA memory
management. This may later cause issues at runtime if the resulting
binary is run on a NUMA system.
We can reduce the incidence of such runtime errors by ensuring that, for
native builds*, libnuma is present - unless the user actually specifies
via "max_numa_nodes" that they don't require NUMA support. Having this
as an error condition is also in keeping with what is documented in the
Linux GSG doc, where libnuma is listed as a requirement for building
DPDK [1].
* NOTE: cross-compilation builds have a different logic set, with a
separate "numa" value indicating if numa support is necessary.
[1] https://doc.dpdk.org/guides-23.03/linux_gsg/sys_reqs.html
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
V3:
- install 32-bit libnuma packages on CI systems [thanks to David
for the changes]
- split the patch out of the previous patchset, so it can be tracked
separately from the more minor fixup changes.
V2: Limit check to linux only
---
.github/workflows/build.yml | 5 ++++-
config/meson.build | 9 +++++++++
2 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..a479783bbc 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -91,6 +91,9 @@ jobs:
with:
path: reference
key: ${{ steps.get_ref_keys.outputs.abi }}
+ - name: Configure i386 architecture
+ if: env.BUILD_32BIT == 'true'
+ run: sudo dpkg --add-architecture i386
- name: Update APT cache
run: sudo apt update || true
- name: Install packages
@@ -104,7 +107,7 @@ jobs:
pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
- run: sudo apt install -y gcc-multilib g++-multilib
+ run: sudo apt install -y gcc-multilib g++-multilib libnuma-dev:i386
- name: Install aarch64 cross compiling packages
if: env.AARCH64 == 'true'
run: sudo apt install -y crossbuild-essential-arm64
diff --git a/config/meson.build b/config/meson.build
index 22d7d908b7..d8223718e4 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -381,6 +381,15 @@ endif
if not dpdk_conf.has('RTE_MAX_NUMA_NODES')
error('Number of NUMA nodes not specified.')
endif
+if (is_linux and
+ dpdk_conf.get('RTE_MAX_NUMA_NODES') > 1 and
+ not meson.is_cross_build() and
+ not has_libnuma)
+ error('''
+No NUMA library (development package) found, yet DPDK configured for multiple NUMA nodes.
+Please install libnuma, or set 'max_numa_nodes' option to '1' to build without NUMA support.
+''')
+endif
# set the install path for the drivers
dpdk_conf.set_quoted('RTE_EAL_PMD_PATH', eal_pmd_path)
--
2.39.2
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] fbarray: get fbarrays from containerized secondary
@ 2023-06-13 16:51 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-13 16:51 UTC (permalink / raw)
To: ogawa.yasufumi; +Cc: anatoly.burakov, dev, stable
On Tue, 16 Apr 2019 10:59:12 +0900
ogawa.yasufumi@lab.ntt.co.jp wrote:
> From: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>
>
> In secondary_msl_create_walk(), it creates a file for fbarrays with its
> PID to reserve a unique name among secondary processes. However, it
> does not work as expected if the secondary is run as an app container because
> each containerized secondary has PID 1. To reserve a unique name, use
> the hostname instead of the PID if the value is 1.
>
> Cc: stable@dpdk.org
>
> Signed-off-by: Yasufumi Ogawa <ogawa.yasufumi@lab.ntt.co.jp>
> ---
Since this is an ABI break, I propose that a more invasive solution
would be better: either change to something more unique such as a GUID,
or change the fbarray structure to be a variable-length array.
The internals of fbarray should also be hidden (i.e. not exposed in rte_fbarray.h)
and the init() function changed into something that allocates the array.
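A rough sketch of that opaque-handle direction, with hypothetical names (not
the current rte_fbarray API): the structure definition would live only inside
the library, and an allocating constructor would replace the in-place init().

/* rte_fbarray.h would expose only a forward declaration */
struct rte_fbarray;

/* hypothetical allocating constructor and matching release function */
struct rte_fbarray *rte_fbarray_alloc(const char *name,
                unsigned int len, unsigned int elt_sz);
void rte_fbarray_free(struct rte_fbarray *arr);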
The current patch has not gotten any followup or acceptance in 4 years.
So marking it as changes requested.
^ permalink raw reply [relevance 3%]
* [PATCH v2 4/4] ci: build examples externally
@ 2023-06-13 14:06 10% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-06-13 14:06 UTC (permalink / raw)
To: dev; +Cc: thomas, bruce.richardson, Aaron Conole, Michael Santana
Enhance our CI coverage by building examples against an installed DPDK.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Changes since v1:
- reworked built examples discovery,
- added comment for people who are not sed fluent,
---
.ci/linux-build.sh | 27 ++++++++++++++++++++++++++-
.github/workflows/build.yml | 6 +++---
2 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 9631e342b5..b8f80760c2 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -1,7 +1,7 @@
#!/bin/sh -xe
if [ -z "${DEF_LIB:-}" ]; then
- DEF_LIB=static ABI_CHECKS= BUILD_DOCS= RUN_TESTS= $0
+ DEF_LIB=static ABI_CHECKS= BUILD_DOCS= BUILD_EXAMPLES= RUN_TESTS= $0
DEF_LIB=shared $0
exit
fi
@@ -99,6 +99,7 @@ if [ "$MINI" = "true" ]; then
else
OPTS="$OPTS -Ddisable_libs="
fi
+OPTS="$OPTS -Dlibdir=lib"
if [ "$ASAN" = "true" ]; then
OPTS="$OPTS -Db_sanitize=address"
@@ -168,3 +169,27 @@ if [ "$RUN_TESTS" = "true" ]; then
catch_coredump
[ "$failed" != "true" ]
fi
+
+# Test examples compilation with an installed dpdk
+if [ "$BUILD_EXAMPLES" = "true" ]; then
+ [ -d install ] || DESTDIR=$(pwd)/install ninja -C build install
+ export LD_LIBRARY_PATH=$(dirname $(find $(pwd)/install -name librte_eal.so)):$LD_LIBRARY_PATH
+ export PKG_CONFIG_PATH=$(dirname $(find $(pwd)/install -name libdpdk.pc)):$PKG_CONFIG_PATH
+ export PKGCONF="pkg-config --define-prefix"
+ find build/examples -maxdepth 1 -type f -name "dpdk-*" |
+ while read target; do
+ target=${target%%:*}
+ target=${target#build/examples/dpdk-}
+ if [ -e examples/$target/Makefile ]; then
+ echo $target
+ continue
+ fi
+ # Some examples binaries are built from an example sub
+ # directory, discover the "top level" example name.
+ find examples -name Makefile |
+ sed -ne "s,examples/\([^/]*\)\(/.*\|\)/$target/Makefile,\1,p"
+ done | sort -u |
+ while read example; do
+ make -C install/usr/local/share/dpdk/examples/$example clean shared
+ done
+fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..414dd089e0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -20,6 +20,7 @@ jobs:
BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
+ BUILD_EXAMPLES: ${{ contains(matrix.config.checks, 'examples') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
LIBABIGAIL_VERSION: libabigail-2.1
@@ -39,7 +40,7 @@ jobs:
mini: mini
- os: ubuntu-20.04
compiler: gcc
- checks: abi+debug+doc+tests
+ checks: abi+debug+doc+examples+tests
- os: ubuntu-20.04
compiler: clang
checks: asan+doc+tests
@@ -96,12 +97,11 @@ jobs:
- name: Install packages
run: sudo apt install -y ccache libarchive-dev libbsd-dev libfdt-dev
libibverbs-dev libjansson-dev libnuma-dev libpcap-dev libssl-dev
- ninja-build python3-pip python3-pyelftools python3-setuptools
+ ninja-build pkg-config python3-pip python3-pyelftools python3-setuptools
python3-wheel zlib1g-dev
- name: Install libabigail build dependencies if no cache is available
if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
run: sudo apt install -y autoconf automake libdw-dev libtool libxml2-dev
- pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
run: sudo apt install -y gcc-multilib g++-multilib
--
2.40.1
^ permalink raw reply [relevance 10%]
* [PATCH 4/4] ci: build examples externally
@ 2023-06-13 8:17 10% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2023-06-13 8:17 UTC (permalink / raw)
To: dev; +Cc: thomas, bruce.richardson, Aaron Conole, Michael Santana
Enhance our CI coverage by building examples against an installed DPDK.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.ci/linux-build.sh | 25 ++++++++++++++++++++++++-
.github/workflows/build.yml | 6 +++---
2 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 9631e342b5..1b1f9d07f3 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -1,7 +1,7 @@
#!/bin/sh -xe
if [ -z "${DEF_LIB:-}" ]; then
- DEF_LIB=static ABI_CHECKS= BUILD_DOCS= RUN_TESTS= $0
+ DEF_LIB=static ABI_CHECKS= BUILD_DOCS= BUILD_EXAMPLES= RUN_TESTS= $0
DEF_LIB=shared $0
exit
fi
@@ -99,6 +99,7 @@ if [ "$MINI" = "true" ]; then
else
OPTS="$OPTS -Ddisable_libs="
fi
+OPTS="$OPTS -Dlibdir=lib"
if [ "$ASAN" = "true" ]; then
OPTS="$OPTS -Db_sanitize=address"
@@ -168,3 +169,25 @@ if [ "$RUN_TESTS" = "true" ]; then
catch_coredump
[ "$failed" != "true" ]
fi
+
+# Test examples compilation with an installed dpdk
+if [ "$BUILD_EXAMPLES" = "true" ]; then
+ [ -d install ] || DESTDIR=$(pwd)/install ninja -C build install
+ export LD_LIBRARY_PATH=$(dirname $(find $(pwd)/install -name librte_eal.so)):$LD_LIBRARY_PATH
+ export PKG_CONFIG_PATH=$(dirname $(find $(pwd)/install -name libdpdk.pc)):$PKG_CONFIG_PATH
+ export PKGCONF="pkg-config --define-prefix"
+ ninja -C build -t targets all | grep 'examples/.*:.*c_LINKER' |
+ while read target; do
+ target=${target%%:*}
+ target=${target#examples/dpdk-}
+ if [ -e examples/$target/Makefile ]; then
+ echo $target
+ continue
+ fi
+ find examples -name Makefile |
+ sed -ne "s,examples/\([^/]*\)\(/.*\|\)/$target/Makefile,\1,p"
+ done | sort -u |
+ while read example; do
+ make -C install/usr/local/share/dpdk/examples/$example clean shared
+ done
+fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..414dd089e0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -20,6 +20,7 @@ jobs:
BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
+ BUILD_EXAMPLES: ${{ contains(matrix.config.checks, 'examples') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
LIBABIGAIL_VERSION: libabigail-2.1
@@ -39,7 +40,7 @@ jobs:
mini: mini
- os: ubuntu-20.04
compiler: gcc
- checks: abi+debug+doc+tests
+ checks: abi+debug+doc+examples+tests
- os: ubuntu-20.04
compiler: clang
checks: asan+doc+tests
@@ -96,12 +97,11 @@ jobs:
- name: Install packages
run: sudo apt install -y ccache libarchive-dev libbsd-dev libfdt-dev
libibverbs-dev libjansson-dev libnuma-dev libpcap-dev libssl-dev
- ninja-build python3-pip python3-pyelftools python3-setuptools
+ ninja-build pkg-config python3-pip python3-pyelftools python3-setuptools
python3-wheel zlib1g-dev
- name: Install libabigail build dependencies if no cache is available
if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
run: sudo apt install -y autoconf automake libdw-dev libtool libxml2-dev
- pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
run: sudo apt install -y gcc-multilib g++-multilib
--
2.40.1
^ permalink raw reply [relevance 10%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-12 20:53 3% ` Chautru, Nicolas
@ 2023-06-13 8:14 4% ` Maxime Coquelin
2023-06-13 17:16 3% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-13 8:14 UTC (permalink / raw)
To: Chautru, Nicolas, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
On 6/12/23 22:53, Chautru, Nicolas wrote:
> Hi Maxime, David,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>
>> On 6/6/23 23:01, Chautru, Nicolas wrote:
>>> Hi David,
>>>
>>>> -----Original Message-----
>>>> From: David Marchand <david.marchand@redhat.com>> >>
>>>> On Mon, Jun 5, 2023 at 10:08 PM Chautru, Nicolas
>>>> <nicolas.chautru@intel.com> wrote:
>>>>> Wrt the MLD functions: these are new into the related serie but
>>>>> still the
>>>> break the ABI since the struct rte_bbdev includes these functions
>>>> hence causing offset changes.
>>>>>
>>>>> Should I then just rephrase as:
>>>>>
>>>>> +* bbdev: Will extend the API to support the new operation type
>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
>>>>> + this `v1
>>>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This
>>>>> + will notably introduce + new symbols for
>>>>> ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
>>>>> into the struct rte_bbdev.
>>>>
>>>> I don't think we need this deprecation notice.
>>>>
>>>>
>>>> Do you need to expose those new mldts ops in rte_bbdev?
>>>> Can't they go to dev_ops?
>>>> If you can't, at least moving those new ops at the end of the
>>>> structure would avoid the breakage on rte_bbdev.
>>>
>>> It would probably be best to move all these ops at the end of the structure
>> (ie. keep them together).
>>> In that case the deprecation notice would call out that the rte_bbdev
>> structure content is more generally modified. Probably best for the longer
>> run.
>>> David, Maxime, ok with that option?
>>>
>>> struct __rte_cache_aligned rte_bbdev {
>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
>>> const struct rte_bbdev_ops *dev_ops;
>>> struct rte_bbdev_data *data;
>>> enum rte_bbdev_state state;
>>> struct rte_device *device;
>>> struct rte_bbdev_cb_list list_cbs;
>>> struct rte_intr_handle *intr_handle;
>>> };
>>
>> The best thing, as suggested by David, would be to move all the ops out of
>> struct rte_bbdev, as these should not be visible to the application.
>
> That would be quite disruptive across all PMDs, with a possible performance impact to validate. I don’t think it is realistic to consider such a change in 23.11.
> I believe moving these functions to the end of the structure is a good compromise to avoid future breakage of the rte_bbdev structure with almost seamless impact (purely an ABI break when moving into 23.11, which is not avoidable). Retrospectively we should have done that in 22.11 really.
If we are going to break the ABI, better to do the right rework
directly. Otherwise we'll end up breaking it again next year.
IMHO, moving these ops should be quite trivial and not much work.
Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it may
not break the ABI, but that's a bit fragile:
- rte_bbdev_devices[] is not static, but is placed in the BSS section so
should be OK
- struct rte_bbdev is cache-aligned, so it may work if adding these two
ops does not overlap a cacheline, which depends on the CPU architecture.
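For reference, a minimal sketch of that "place at the bottom" option, reusing
the member names from the ABI report quoted earlier in this thread
(illustration only, not a committed layout):

struct __rte_cache_aligned rte_bbdev {
        rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
        rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
        rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
        rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
        rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
        rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
        rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
        rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
        rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
        rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
        const struct rte_bbdev_ops *dev_ops;
        struct rte_bbdev_data *data;
        enum rte_bbdev_state state;
        struct rte_device *device;
        struct rte_bbdev_cb_list list_cbs;
        struct rte_intr_handle *intr_handle;
        /* New MLD-TS ops appended last: the offsets of the existing members
         * stay unchanged, but whether the struct size grows depends on the
         * cacheline padding noted above.
         */
        rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops;
        rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops;
};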
Maxime
> What do you think Maxime, David? Based on this I can adjust the change for 23.11 and update slightly the deprecation notice accordingly.
>
> Thanks
> Nic
>
^ permalink raw reply [relevance 4%]
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-08 8:47 0% ` Maxime Coquelin
@ 2023-06-12 20:53 3% ` Chautru, Nicolas
2023-06-13 8:14 4% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-12 20:53 UTC (permalink / raw)
To: Maxime Coquelin, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
Hi Maxime, David,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> On 6/6/23 23:01, Chautru, Nicolas wrote:
> > Hi David,
> >
> >> -----Original Message-----
> >> From: David Marchand <david.marchand@redhat.com>> >>
> >> On Mon, Jun 5, 2023 at 10:08 PM Chautru, Nicolas
> >> <nicolas.chautru@intel.com> wrote:
> >>> Wrt the MLD functions: these are new into the related serie but
> >>> still the
> >> break the ABI since the struct rte_bbdev includes these functions
> >> hence causing offset changes.
> >>>
> >>> Should I then just rephrase as:
> >>>
> >>> +* bbdev: Will extend the API to support the new operation type
> >>> +``RTE_BBDEV_OP_MLDTS`` as per
> >>> + this `v1
> >>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This
> >>> + will notably introduce + new symbols for
> >>> ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
> >>> into the struct rte_bbdev.
> >>
> >> I don't think we need this deprecation notice.
> >>
> >>
> >> Do you need to expose those new mldts ops in rte_bbdev?
> >> Can't they go to dev_ops?
> >> If you can't, at least moving those new ops at the end of the
> >> structure would avoid the breakage on rte_bbdev.
> >
> > It would probably be best to move all these ops at the end of the structure
> (ie. keep them together).
> > In that case the deprecation notice would call out that the rte_bbdev
> structure content is more generally modified. Probably best for the longer
> run.
> > David, Maxime, ok with that option?
> >
> > struct __rte_cache_aligned rte_bbdev {
> > rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> > rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> > rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> > rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> > rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> > rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> > rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> > rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> > rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> > rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> > const struct rte_bbdev_ops *dev_ops;
> > struct rte_bbdev_data *data;
> > enum rte_bbdev_state state;
> > struct rte_device *device;
> > struct rte_bbdev_cb_list list_cbs;
> > struct rte_intr_handle *intr_handle;
> > };
>
> The best thing, as suggested by David, would be to move all the ops out of
> struct rte_bbdev, as these should not be visible to the application.
That would be quite disruptive across all PMDs, with a possible performance impact to validate. I don’t think it is realistic to consider such a change in 23.11.
I believe moving these functions to the end of the structure is a good compromise to avoid future breakage of the rte_bbdev structure with almost seamless impact (purely an ABI break when moving into 23.11, which is not avoidable). Retrospectively we should have done that in 22.11 really.
What do you think Maxime, David? Based on this I can adjust the change for 23.11 and update slightly the deprecation notice accordingly.
Thanks
Nic
> >>>
> >>> Pasting below the ABI results for reference
> >>>
> >>> [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at
> >> rte_bbdev.c:174:1 has some indirect sub-type changes:
> >>> return type changed:
> >>> in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
> >>> type size hasn't changed
> >>> 2 data member insertions:
> >>> 'rte_bbdev_enqueue_mldts_ops_t
> >>> rte_bbdev::enqueue_mldts_ops',
> >> at offset 640 (in bits) at rte_bbdev.h:520:1
> >>> 'rte_bbdev_dequeue_mldts_ops_t
> >>> rte_bbdev::dequeue_mldts_ops',
> >> at offset 704 (in bits) at rte_bbdev.h:522:1
> >>> 7 data member changes (9 filtered):
> >>> type of 'rte_bbdev_dequeue_fft_ops_t
> rte_bbdev::dequeue_fft_ops'
> >> changed:
> >>> underlying type 'typedef uint16_t
> >>> (rte_bbdev_queue_data*,
> >> rte_bbdev_fft_op**, typedef uint16_t)*' changed:
> >>> in pointed to type 'function type typedef uint16_t
> >> (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)':
> >>> parameter 2 of type 'rte_bbdev_fft_op**' has sub-type
> changes:
> >>> in pointed to type 'rte_bbdev_fft_op*':
> >>> in pointed to type 'struct rte_bbdev_fft_op' at
> >> rte_bbdev_op.h:978:1:
> >>> type size changed from 832 to 1664 (in bits)
> >>> 1 data member change:
> >>> type of 'rte_bbdev_op_fft rte_bbdev_fft_op::fft'
> changed:
> >>> type size changed from 640 to 1472 (in bits)
> >>> 6 data member insertions:
> >>> 'rte_bbdev_op_data
> >> rte_bbdev_op_fft::dewindowing_input', at offset 256 (in bits) at
> >> rte_bbdev_op.h:771:1
> >>> 'int8_t
> >>> rte_bbdev_op_fft::freq_resample_mode', at offset
> >> 768 (in bits) at rte_bbdev_op.h:807:1
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::output_depadded_size', at
> >> offset 784 (in bits) at rte_bbdev_op.h:809:1
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::cs_theta_0[12]', at offset 800
> >> (in bits) at rte_bbdev_op.h:811:1
> >>> 'uint32_t
> >>> rte_bbdev_op_fft::cs_theta_d[12]', at offset 992
> >> (in bits) at rte_bbdev_op.h:813:1
> >>> 'int8_t
> >>> rte_bbdev_op_fft::time_offset[12]', at offset 1376
> >> (in bits) at rte_bbdev_op.h:815:1
> >>> 17 data member changes:
> >>> 'rte_bbdev_op_data
> >> rte_bbdev_op_fft::power_meas_output' offset changed from 256 to 384
> >> (in
> >> bits) (by +128 bits)
> >>> 'uint32_t rte_bbdev_op_fft::op_flags'
> >>> offset changed from
> >> 384 to 512 (in bits) (by +128 bits)
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::input_sequence_size' offset
> >> changed from 416 to 544 (in bits) (by +128 bits)
> >>> 'uint16_t rte_bbdev_op_fft::input_leading_padding'
> >> offset changed from 432 to 560 (in bits) (by +128 bits)
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::output_sequence_size' offset
> >> changed from 448 to 576 (in bits) (by +128 bits)
> >>> 'uint16_t
> rte_bbdev_op_fft::output_leading_depadding'
> >> offset changed from 464 to 592 (in bits) (by +128 bits)
> >>> 'uint8_t
> >>> rte_bbdev_op_fft::window_index[6]' offset
> >> changed from 480 to 608 (in bits) (by +128 bits)
> >>> 'uint16_t rte_bbdev_op_fft::cs_bitmap'
> >>> offset changed
> >> from 528 to 656 (in bits) (by +128 bits)
> >>> 'uint8_t
> >>> rte_bbdev_op_fft::num_antennas_log2' offset
> >> changed from 544 to 672 (in bits) (by +128 bits)
> >>> 'uint8_t rte_bbdev_op_fft::idft_log2'
> >>> offset changed from
> >> 552 to 680 (in bits) (by +128 bits)
> >>> 'uint8_t rte_bbdev_op_fft::dft_log2'
> >>> offset changed from
> >> 560 to 688 (in bits) (by +128 bits)
> >>> 'int8_t
> >>> rte_bbdev_op_fft::cs_time_adjustment' offset
> >> changed from 568 to 696 (in bits) (by +128 bits)
> >>> 'int8_t rte_bbdev_op_fft::idft_shift'
> >>> offset changed from
> >> 576 to 704 (in bits) (by +128 bits)
> >>> 'int8_t rte_bbdev_op_fft::dft_shift'
> >>> offset changed from
> >> 584 to 712 (in bits) (by +128 bits)
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::ncs_reciprocal' offset
> >> changed from 592 to 720 (in bits) (by +128 bits)
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::power_shift' offset changed
> >> from 608 to 736 (in bits) (by +128 bits)
> >>> 'uint16_t
> >>> rte_bbdev_op_fft::fp16_exp_adjust' offset
> >> changed from 624 to 752 (in bits) (by +128 bits)
> >>> 'const rte_bbdev_ops* rte_bbdev::dev_ops' offset changed
> >>> from 640
> >> to 768 (in bits) (by +128 bits)
> >>> 'rte_bbdev_data* rte_bbdev::data' offset changed from 704
> >>> to 832
> >> (in bits) (by +128 bits)
> >>> 'rte_bbdev_state rte_bbdev::state' offset changed from
> >>> 768 to 896
> >> (in bits) (by +128 bits)
> >>> 'rte_device* rte_bbdev::device' offset changed from 832
> >>> to 960 (in
> >> bits) (by +128 bits)
> >>> 'rte_bbdev_cb_list rte_bbdev::list_cbs' offset changed
> >>> from 896 to
> >> 1024 (in bits) (by +128 bits)
> >>> 'rte_intr_handle* rte_bbdev::intr_handle' offset changed
> >>> from 1024 to 1152 (in bits) (by +128 bits)
> >>
> >> As for the report on the rte_bbdev_op_fft structure changes:
> >> - wrt to its size, I think it is okay to waive it, rte_bbdev_fft_op
> >> objects are coming from a bbdev mempool which is created by the bbdev
> >> library itself (with the right element size if the application asked
> >> for RTE_BBDEV_OP_FFT type),
> >> - wrt to the fields locations, an application may have been touching
> >> those fields, so moving all the added fields at the end of the
> >> structure would be better.
> >> But on the other hand, an application will have to call an fft_ops
> >> experimental API at some point, and the application developer is
> >> already warned that ABI is not preserved on this part of the API,
> >>
> >> So I would waive the changes on rte_bbdev_fft_op with something like:
> >>
> >> diff --git a/devtools/libabigail.abignore
> >> b/devtools/libabigail.abignore index
> >> 3ff51509de..3cdce69418 100644
> >> --- a/devtools/libabigail.abignore
> >> +++ b/devtools/libabigail.abignore
> >> @@ -36,6 +36,8 @@
> >> [suppress_type]
> >> type_kind = enum
> >> changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM,
> >> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> >> +[suppress_type]
> >> + name = rte_bbdev_fft_op
> >
> >
> > OK I did not know about this method. Shouldn't this apply more generally
> to all experimental structures?
> > This can be added into the serie for 23.11.
> >
> >
> > Thanks
> > Nic
> >
> >
> >
> >>
> >> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> >> ; Temporary exceptions till next major ABI version ;
> >>
> >>
> >> --
^ permalink raw reply [relevance 3%]
* RE: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
2023-06-07 0:00 0% ` Ferruh Yigit
@ 2023-06-12 3:25 0% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-06-12 3:25 UTC (permalink / raw)
To: Ferruh Yigit, Konstantin Ananyev, Константин Ананьев,
thomas, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, nd
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Wednesday, June 7, 2023 8:01 AM
> To: Konstantin Ananyev <konstantin.ananyev@huawei.com>; Feifei Wang
> <Feifei.Wang2@arm.com>; Константин Ананьев
> <konstantin.v.ananyev@yandex.ru>; thomas@monjalon.net; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
>
> On 6/6/2023 9:34 AM, Konstantin Ananyev wrote:
> >
> >
> >>
> >> [...]
> >>>> Probably I am missing something, but why it is not possible to do
> >>>> something
> >>> like that:
> >>>>
> >>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> >>>> tx_queue_id=M, ...); ....
> >>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> >>>> tx_queue_id=K, ...);
> >>>>
> >>>> I.E. feed rx queue from 2 tx queues?
> >>>>
> >>>> Two problems for this:
> >>>> 1. If we have 2 tx queues for rx, the thread should make the extra
> >>>> judgement to decide which one to choose in the driver layer.
> >>>
> >>> Not sure, why on the driver layer?
> >>> The example I gave above - decision is made on application layer.
> >>> Lets say first call didn't free enough mbufs, so app decided to use
> >>> second txq for rearm.
> >> [Feifei] I think currently mbuf recycle mode can support this usage. For
> examples:
> >> n = rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> >> tx_queue_id=M, ...); if (n < planned_number)
> >> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> >> tx_queue_id=K, ...);
> >>
> >> Thus, if users want, they can do like this.
> >
> > Yes, that was my thought, that's why I was surprise that in the comments we
> have:
> > " Currently, the rte_eth_recycle_mbufs() function can only support
> > one-time pairing
> > * between the receive queue and transmit queue. Do not pair one
> > receive queue with
> > * multiple transmit queues or pair one transmit queue with multiple
> > receive queues,
> > * in order to avoid memory error rewriting."
> >
>
> I guess that is from previous versions of the set, it can be good to address
> limitations/restrictions again with latest version.
[Feifei] Sorry, I think this is due to my ambiguous wording in the function description.
I wanted to show that 'mbufs_recycle' cannot support multiple threads.
I will change the description and add an extra note to tell users that they
can switch the configuration from one txq to another within a single thread.
Thanks for the comments.
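For reference, the single-thread re-pairing pattern discussed above, written
against the rte_eth_recycle_mbufs() prototype from this patch; 'planned_number'
and the queue/port ids are placeholders:

struct rte_eth_recycle_rxq_info rxq_info; /* filled beforehand via rte_eth_recycle_rx_queue_info_get() */
uint16_t nb;

nb = rte_eth_recycle_mbufs(rx_port_id, rx_queue_id, tx_port_id, txq_m, &rxq_info);
if (nb < planned_number)
        rte_eth_recycle_mbufs(rx_port_id, rx_queue_id, tx_port_id, txq_k, &rxq_info);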
>
>
> >>
> >>>
> >>>> On the other hand, current mechanism can support users to switch 1
> >>>> txq to another timely in the application layer. If user want to
> >>>> choose another txq, he just need to change the txq_queue_id parameter
> in the API.
> >>>> 2. If you want one rxq to support two txq at the same time, this
> >>>> needs to add spinlock on guard variable to avoid multi-thread conflict.
> >>>> Spinlock will decrease the data-path performance greatly. Thus, we
> >>>> do not consider
> >>>> 1 rxq mapping multiple txqs here.
> >>>
> >>> I am talking about situation when one thread controls 2 tx queues.
> >>>
> >>>> + *
> >>>> + * @param rx_port_id
> >>>> + * Port identifying the receive side.
> >>>> + * @param rx_queue_id
> >>>> + * The index of the receive queue identifying the receive side.
> >>>> + * The value must be in the range [0, nb_rx_queue - 1] previously
> >>>> +supplied
> >>>> + * to rte_eth_dev_configure().
> >>>> + * @param tx_port_id
> >>>> + * Port identifying the transmit side.
> >>>> + * @param tx_queue_id
> >>>> + * The index of the transmit queue identifying the transmit side.
> >>>> + * The value must be in the range [0, nb_tx_queue - 1] previously
> >>>> +supplied
> >>>> + * to rte_eth_dev_configure().
> >>>> + * @param recycle_rxq_info
> >>>> + * A pointer to a structure of type *rte_eth_recycle_rxq_info*
> >>>> +which contains
> >>>> + * the information of the Rx queue mbuf ring.
> >>>> + * @return
> >>>> + * The number of recycling mbufs.
> >>>> + */
> >>>> +__rte_experimental
> >>>> +static inline uint16_t
> >>>> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> >>>> +uint16_t tx_port_id, uint16_t tx_queue_id, struct
> >>>> +rte_eth_recycle_rxq_info *recycle_rxq_info) { struct
> >>>> +rte_eth_fp_ops *p; void *qd; uint16_t nb_mbufs;
> >>>> +
> >>>> +#ifdef RTE_ETHDEV_DEBUG_TX
> >>>> + if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >=
> >>>> +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> >>>> +tx_port_id=%u or tx_queue_id=%u\n", tx_port_id, tx_queue_id);
> >>>> +return 0; } #endif
> >>>> +
> >>>> + /* fetch pointer to queue data */ p =
> >>>> + &rte_eth_fp_ops[tx_port_id]; qd = p->txq.data[tx_queue_id];
> >>>> +
> >>>> +#ifdef RTE_ETHDEV_DEBUG_TX
> >>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> >>>> +
> >>>> + if (qd == NULL) {
> >>>> + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> >>>> +tx_queue_id, tx_port_id); return 0; } #endif if
> >>>> +(p->recycle_tx_mbufs_reuse == NULL) return 0;
> >>>> +
> >>>> + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> >>>> + * into Rx mbuf ring.
> >>>> + */
> >>>> + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
> >>>> +
> >>>> + /* If no recycling mbufs, return 0. */ if (nb_mbufs == 0) return
> >>>> + 0;
> >>>> +
> >>>> +#ifdef RTE_ETHDEV_DEBUG_RX
> >>>> + if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >=
> >>>> +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> >>>> +rx_port_id=%u or rx_queue_id=%u\n", rx_port_id, rx_queue_id);
> >>>> +return 0; } #endif
> >>>> +
> >>>> + /* fetch pointer to queue data */ p =
> >>>> + &rte_eth_fp_ops[rx_port_id]; qd = p->rxq.data[rx_queue_id];
> >>>> +
> >>>> +#ifdef RTE_ETHDEV_DEBUG_RX
> >>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> >>>> +
> >>>> + if (qd == NULL) {
> >>>> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> >>>> +rx_queue_id, rx_port_id); return 0; } #endif
> >>>> +
> >>>> + if (p->recycle_rx_descriptors_refill == NULL) return 0;
> >>>> +
> >>>> + /* Replenish the Rx descriptors with the recycling
> >>>> + * into Rx mbuf ring.
> >>>> + */
> >>>> + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> >>>> +
> >>>> + return nb_mbufs;
> >>>> +}
> >>>> +
> >>>> /**
> >>>> * @warning
> >>>> * @b EXPERIMENTAL: this API may change without prior notice diff
> >>>> --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> >>>> index dcf8adab92..a2e6ea6b6c 100644
> >>>> --- a/lib/ethdev/rte_ethdev_core.h
> >>>> +++ b/lib/ethdev/rte_ethdev_core.h
> >>>> @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void
> >>>> *rxq, uint16_t offset);
> >>>> /** @internal Check the status of a Tx descriptor */ typedef int
> >>>> (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >>>>
> >>>> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring
> >>>> +*/ typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
> >>>> +struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> >>>> +
> >>>> +/** @internal Refill Rx descriptors with the recycling mbufs */
> >>>> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq,
> >>>> +uint16_t nb);
> >>>> +
> >>>> /**
> >>>> * @internal
> >>>> * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> >>>> @@
> >>>> -90,9 +97,11 @@ struct rte_eth_fp_ops {
> >>>> eth_rx_queue_count_t rx_queue_count;
> >>>> /** Check the status of a Rx descriptor. */
> >>>> eth_rx_descriptor_status_t rx_descriptor_status;
> >>>> + /** Refill Rx descriptors with the recycling mbufs. */
> >>>> + eth_recycle_rx_descriptors_refill_t
> >>>> + recycle_rx_descriptors_refill;
> >>>> I am afraid we can't put new fields here without ABI breakage.
> >>>>
> >>>> Agree
> >>>>
> >>>> It has to be below rxq.
> >>>> Now thinking about current layout probably not the best one, and
> >>>> when introducing this struct, I should probably put rxq either on
> >>>> the top of the struct, or on the next cache line.
> >>>> But such change is not possible right now anyway.
> >>>> Same story for txq.
> >>>>
> >>>> Thus we should rearrange the structure like below:
> >>>> struct rte_eth_fp_ops {
> >>>> struct rte_ethdev_qdata rxq;
> >>>> eth_rx_burst_t rx_pkt_burst;
> >>>> eth_rx_queue_count_t rx_queue_count;
> >>>> eth_rx_descriptor_status_t rx_descriptor_status;
> >>>> eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> >>>> uintptr_t reserved1[2]; }
> >>>
> >>> Yes, I think such layout will be better.
> >>> The only problem here - we have to wait for 23.11 for that.
> >>>
> >> Ok, if not this change, maybe we still need to wait. Because
> >> mbufs_recycle have other ABI breakage. Such as the change for 'struct
> rte_eth_dev'.
> >
> > Ok by me.
> >
> >>>>
> >>>>
> >>>> /** Rx queues data. */
> >>>> struct rte_ethdev_qdata rxq;
> >>>> - uintptr_t reserved1[3];
> >>>> + uintptr_t reserved1[2];
> >>>> /**@}*/
> >>>>
> >>>> /**@{*/
> >>>> @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
> >>>> eth_tx_prep_t tx_pkt_prepare;
> >>>> /** Check the status of a Tx descriptor. */
> >>>> eth_tx_descriptor_status_t tx_descriptor_status;
> >>>> + /** Copy used mbufs from Tx mbuf ring into Rx. */
> >>>> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> >>>> /** Tx queues data. */
> >>>> struct rte_ethdev_qdata txq;
> >>>> - uintptr_t reserved2[3];
> >>>> + uintptr_t reserved2[2];
> >>>> /**@}*/
> >>>>
> >>>> } __rte_cache_aligned;
> >>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
> >>>> 357d1a88c0..45c417f6bd 100644
> >>>> --- a/lib/ethdev/version.map
> >>>> +++ b/lib/ethdev/version.map
> >>>> @@ -299,6 +299,10 @@ EXPERIMENTAL {
> >>>> rte_flow_action_handle_query_update;
> >>>> rte_flow_async_action_handle_query_update;
> >>>> rte_flow_async_create_by_index;
> >>>> +
> >>>> + # added in 23.07
> >>>> + rte_eth_recycle_mbufs;
> >>>> + rte_eth_recycle_rx_queue_info_get;
> >>>> };
> >>>>
> >>>> INTERNAL {
> >>>> --
> >>>> 2.25.1
> >>>>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH] mark experimental variables
@ 2023-06-12 2:49 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-12 2:49 UTC (permalink / raw)
To: David Marchand
Cc: nhorman, dev, thomas, arybchenko, stable, Ray Kinsella,
John McNamara, Marko Kovacevic, Qiming Yang, Wenzhuo Lu,
Declan Doherty, Adrien Mazarguil, Ferruh Yigit,
Cristian Dumitrescu
On Mon, 25 Nov 2019 17:13:14 +0100
David Marchand <david.marchand@redhat.com> wrote:
> So far, we did not pay attention to direct access to variables but they
> are part of the API/ABI too and should be clearly identified.
>
> Introduce a __rte_experimental_var tag and mark existing variables.
>
> Fixes: a4bcd61de82d ("buildtools: add script to check experimental API exports")
> Cc: stable@dpdk.org
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Quick patch to try to catch experimental variables.
> Not sure if we could use a single section, so please advise if there is
> better to do about this.
>
> ---
> buildtools/check-experimental-syms.sh | 17 +++++++++++++++--
> devtools/checkpatches.sh | 14 +++++++++-----
> doc/guides/contributing/abi_policy.rst | 7 ++++---
> drivers/net/ice/rte_pmd_ice.h | 3 +++
> lib/librte_cryptodev/rte_crypto_asym.h | 3 +++
> lib/librte_eal/common/include/rte_compat.h | 5 +++++
> lib/librte_ethdev/rte_flow.h | 17 +++++++++++++++++
> lib/librte_port/rte_port_eventdev.h | 5 +++++
> 8 files changed, 61 insertions(+), 10 deletions(-)
This is a good idea, but the patch has gone stale in 4 years.
Symbols have changed, directories have changed.
If someone wants to continue this, please rebase and recheck.
Marking the original patch with Changes Requested.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/2] eal/bitmap: support reverse bitmap scan
@ 2023-06-12 2:23 4% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-12 2:23 UTC (permalink / raw)
To: Vivek Sharma; +Cc: dev, cristian.dumitrescu
On Tue, 9 Oct 2018 13:24:57 +0530
Vivek Sharma <vivek.sharma@caviumnetworks.com> wrote:
> This patchset implements the support for reverse bitmap scanning along with
> test support. Reverse scanning is quite useful when bit position signifies
> an ordering according to some attribute, e.g., priority ordering.
>
> Prerequisite:
> * Note that this patchset is dependent on patch:-
> 'http://patches.dpdk.org/patch/45307/'
>
> Vivek Sharma (2):
> eal/bitmap: support bitmap reverse scanning
> test/bitmap: implement reverse bitmap scan test
>
> lib/librte_eal/common/include/rte_bitmap.h | 164 +++++++++++++++++++++++++----
> test/test/test_bitmap.c | 71 ++++++++++++-
> 2 files changed, 213 insertions(+), 22 deletions(-)
>
This patchset has had no feedback in 5 years.
* There was never an application that needed it.
* EAL directory structure changed.
* It would cause an ABI breakage.
If you have an example that needs it, then rebase and follow the ABI
rules; the next ABI-breaking release will be 23.11.
^ permalink raw reply [relevance 4%]
* [PATCH v3] doc: prefer installing using meson rather than ninja
2023-06-09 13:34 3% ` [PATCH v2] " Bruce Richardson
@ 2023-06-09 14:51 3% ` Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-06-09 14:51 UTC (permalink / raw)
To: dev; +Cc: david.marchand, Bruce Richardson
After doing a build, to install DPDK system-wide our documentation
recommended using the "ninja install" command. However, for anyone
building as a non-root user and only installing as root, the "meson
install" command is a better alternative, as it provides for
automatically dropping or elevating privileges as necessary in more
recent meson releases [1].
[1] https://mesonbuild.com/Installing.html#installing-as-the-superuser
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V3:
* correct order of arguments to meson in CI scripts. The "-C" option
must follow the meson "install" command. [This is consistent with
other uses e.g. meson compile -C ..., meson test -C ...]
V2:
* Fix one missed reference to "ninja install" in Linux GSG
* Changed CI scripts to use "meson install" to ensure step is properly
tested.
---
.ci/linux-build.sh | 4 ++--
doc/guides/contributing/coding_style.rst | 2 +-
doc/guides/cryptodevs/uadk.rst | 2 +-
doc/guides/freebsd_gsg/build_dpdk.rst | 2 +-
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_dpdk.rst | 4 ++--
doc/guides/prog_guide/build-sdk-meson.rst | 4 ++--
7 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 9631e342b5..76d3e776af 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -150,14 +150,14 @@ if [ "$ABI_CHECKS" = "true" ]; then
git clone --single-branch -b "$REF_GIT_TAG" $REF_GIT_REPO $refsrcdir
meson setup $OPTS -Dexamples= $refsrcdir $refsrcdir/build
ninja -C $refsrcdir/build
- DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
+ DESTDIR=$(pwd)/reference meson install -C $refsrcdir/build
find reference/usr/local -name '*.a' -delete
rm -rf reference/usr/local/bin
rm -rf reference/usr/local/share
echo $REF_GIT_TAG > reference/VERSION
fi
- DESTDIR=$(pwd)/install ninja -C build install
+ DESTDIR=$(pwd)/install meson install -C build
devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
fi
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 89db6260cf..00d6270624 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -956,7 +956,7 @@ ext_deps
headers
**Default Value = []**.
Used to return the list of header files for the library that should be
- installed to $PREFIX/include when ``ninja install`` is run. As with
+ installed to $PREFIX/include when ``meson install`` is run. As with
source files, these should be specified using the meson ``files()``
function.
When ``check_includes`` build option is set to ``true``, each header file
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index 9af6b88a5a..136ab4be6a 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -90,7 +90,7 @@ Test steps
meson setup build (--reconfigure)
cd build
ninja
- sudo ninja install
+ sudo meson install
#. Prepare hugepages for DPDK (see also :doc:`../tools/hugepages`)
diff --git a/doc/guides/freebsd_gsg/build_dpdk.rst b/doc/guides/freebsd_gsg/build_dpdk.rst
index 514d18c870..86e8e5a805 100644
--- a/doc/guides/freebsd_gsg/build_dpdk.rst
+++ b/doc/guides/freebsd_gsg/build_dpdk.rst
@@ -47,7 +47,7 @@ The final, install, step generally needs to be run as root::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will install the DPDK libraries and drivers to `/usr/local/lib` with a
pkg-config file `libdpdk.pc` installed to `/usr/local/lib/pkgconfig`. The
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index c87e982759..b1ab7545b1 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -22,7 +22,7 @@ the system when DPDK is installed, and so can be built using GNU make.
on the FreeBSD system.
The following shows how to compile the helloworld example app, following
-the installation of DPDK using `ninja install` as described previously::
+the installation of DPDK using `meson install` as described previously::
$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index bbd2efc9d8..9c0dd9daf6 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -68,11 +68,11 @@ Once configured, to build and then install DPDK system-wide use:
cd build
ninja
- ninja install
+ meson install
ldconfig
The last two commands above generally need to be run as root,
-with the `ninja install` step copying the built objects to their final system-wide locations,
+with the `meson install` step copying the built objects to their final system-wide locations,
and the last step causing the dynamic loader `ld.so` to update its cache to take account of the new objects.
.. note::
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 5deabbe54c..93aa1f80e3 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -12,7 +12,7 @@ following set of commands::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will compile DPDK in the ``build`` subdirectory, and then install the
resulting libraries, drivers and header files onto the system - generally
@@ -165,7 +165,7 @@ printing each command on a new line as it runs.
Installing the Compiled Files
------------------------------
-Use ``ninja install`` to install the required DPDK files onto the system.
+Use ``meson install`` to install the required DPDK files onto the system.
The install prefix defaults to ``/usr/local`` but can be used as with other
options above. The environment variable ``DESTDIR`` can be used to adjust
the root directory for the install, for example when packaging.
--
2.39.2
^ permalink raw reply [relevance 3%]
* [PATCH v2] doc: prefer installing using meson rather than ninja
@ 2023-06-09 13:34 3% ` Bruce Richardson
2023-06-09 14:51 3% ` [PATCH v3] " Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-06-09 13:34 UTC (permalink / raw)
To: dev; +Cc: david.marchand, Bruce Richardson
After doing a build, to install DPDK system-wide our documentation
recommended using the "ninja install" command. However, for anyone
building as a non-root user and only installing as root, the "meson
install" command is a better alternative, as it provides for
automatically dropping or elevating privileges as necessary in more
recent meson releases [1].
[1] https://mesonbuild.com/Installing.html#installing-as-the-superuser
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V2:
* Fix one missed reference to "ninja install" in Linux GSG
* Changed CI scripts to use "meson install" to ensure step is properly
tested.
---
.ci/linux-build.sh | 4 ++--
doc/guides/contributing/coding_style.rst | 2 +-
doc/guides/cryptodevs/uadk.rst | 2 +-
doc/guides/freebsd_gsg/build_dpdk.rst | 2 +-
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_dpdk.rst | 4 ++--
doc/guides/prog_guide/build-sdk-meson.rst | 4 ++--
7 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 9631e342b5..69ca46a6a1 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -150,14 +150,14 @@ if [ "$ABI_CHECKS" = "true" ]; then
git clone --single-branch -b "$REF_GIT_TAG" $REF_GIT_REPO $refsrcdir
meson setup $OPTS -Dexamples= $refsrcdir $refsrcdir/build
ninja -C $refsrcdir/build
- DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
+ DESTDIR=$(pwd)/reference meson -C $refsrcdir/build install
find reference/usr/local -name '*.a' -delete
rm -rf reference/usr/local/bin
rm -rf reference/usr/local/share
echo $REF_GIT_TAG > reference/VERSION
fi
- DESTDIR=$(pwd)/install ninja -C build install
+ DESTDIR=$(pwd)/install meson -C build install
devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
fi
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 89db6260cf..00d6270624 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -956,7 +956,7 @@ ext_deps
headers
**Default Value = []**.
Used to return the list of header files for the library that should be
- installed to $PREFIX/include when ``ninja install`` is run. As with
+ installed to $PREFIX/include when ``meson install`` is run. As with
source files, these should be specified using the meson ``files()``
function.
When ``check_includes`` build option is set to ``true``, each header file
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index 9af6b88a5a..136ab4be6a 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -90,7 +90,7 @@ Test steps
meson setup build (--reconfigure)
cd build
ninja
- sudo ninja install
+ sudo meson install
#. Prepare hugepages for DPDK (see also :doc:`../tools/hugepages`)
diff --git a/doc/guides/freebsd_gsg/build_dpdk.rst b/doc/guides/freebsd_gsg/build_dpdk.rst
index 514d18c870..86e8e5a805 100644
--- a/doc/guides/freebsd_gsg/build_dpdk.rst
+++ b/doc/guides/freebsd_gsg/build_dpdk.rst
@@ -47,7 +47,7 @@ The final, install, step generally needs to be run as root::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will install the DPDK libraries and drivers to `/usr/local/lib` with a
pkg-config file `libdpdk.pc` installed to `/usr/local/lib/pkgconfig`. The
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index c87e982759..b1ab7545b1 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -22,7 +22,7 @@ the system when DPDK is installed, and so can be built using GNU make.
on the FreeBSD system.
The following shows how to compile the helloworld example app, following
-the installation of DPDK using `ninja install` as described previously::
+the installation of DPDK using `meson install` as described previously::
$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index bbd2efc9d8..9c0dd9daf6 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -68,11 +68,11 @@ Once configured, to build and then install DPDK system-wide use:
cd build
ninja
- ninja install
+ meson install
ldconfig
The last two commands above generally need to be run as root,
-with the `ninja install` step copying the built objects to their final system-wide locations,
+with the `meson install` step copying the built objects to their final system-wide locations,
and the last step causing the dynamic loader `ld.so` to update its cache to take account of the new objects.
.. note::
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 5deabbe54c..93aa1f80e3 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -12,7 +12,7 @@ following set of commands::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will compile DPDK in the ``build`` subdirectory, and then install the
resulting libraries, drivers and header files onto the system - generally
@@ -165,7 +165,7 @@ printing each command on a new line as it runs.
Installing the Compiled Files
------------------------------
-Use ``ninja install`` to install the required DPDK files onto the system.
+Use ``meson install`` to install the required DPDK files onto the system.
The install prefix defaults to ``/usr/local`` but can be used as with other
options above. The environment variable ``DESTDIR`` can be used to adjust
the root directory for the install, for example when packaging.
--
2.39.2
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-06 21:01 0% ` Chautru, Nicolas
@ 2023-06-08 8:47 0% ` Maxime Coquelin
2023-06-12 20:53 3% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-08 8:47 UTC (permalink / raw)
To: Chautru, Nicolas, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
On 6/6/23 23:01, Chautru, Nicolas wrote:
> Hi David,
>
>> -----Original Message-----
>> From: David Marchand <david.marchand@redhat.com>
>> Sent: Tuesday, June 6, 2023 2:21 AM
>> To: Chautru, Nicolas <nicolas.chautru@intel.com>
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>; Stephen Hemminger
>> <stephen@networkplumber.org>; dev@dpdk.org; Rix, Tom
>> <trix@redhat.com>; hemant.agrawal@nxp.com; Vargas, Hernan
>> <hernan.vargas@intel.com>
>> Subject: Re: [PATCH v1 1/1] doc: announce change in bbdev api related to
>> operation extension
>>
>> On Mon, Jun 5, 2023 at 10:08 PM Chautru, Nicolas
>> <nicolas.chautru@intel.com> wrote:
>>> Wrt the MLD functions: these are new into the related serie but still the
>> break the ABI since the struct rte_bbdev includes these functions hence
>> causing offset changes.
>>>
>>> Should I then just rephrase as:
>>>
>>> +* bbdev: Will extend the API to support the new operation type
>>> +``RTE_BBDEV_OP_MLDTS`` as per
>>> + this `v1
>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This +
>>> will notably introduce + new symbols for
>>> ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
>>> into the struct rte_bbdev.
>>
>> I don't think we need this deprecation notice.
>>
>>
>> Do you need to expose those new mldts ops in rte_bbdev?
>> Can't they go to dev_ops?
>> If you can't, at least moving those new ops at the end of the structure
>> would avoid the breakage on rte_bbdev.
>
> It would probably be best to move all these ops at the end of the structure (ie. keep them together).
> In that case the deprecation notice would call out that the rte_bbdev structure content is more generally modified. Probably best for the longer run.
> David, Maxime, ok with that option?
>
> struct __rte_cache_aligned rte_bbdev {
> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> const struct rte_bbdev_ops *dev_ops;
> struct rte_bbdev_data *data;
> enum rte_bbdev_state state;
> struct rte_device *device;
> struct rte_bbdev_cb_list list_cbs;
> struct rte_intr_handle *intr_handle;
> };
The best thing, as suggested by David, would be to move all the ops out
of struct rte_bbdev, as these should not be visible to the application.
>
>
>
>>
>>
>>>
>>> Pasting below the ABI results for reference
>>>
>>> [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at
>> rte_bbdev.c:174:1 has some indirect sub-type changes:
>>> return type changed:
>>> in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
>>> type size hasn't changed
>>> 2 data member insertions:
>>> 'rte_bbdev_enqueue_mldts_ops_t rte_bbdev::enqueue_mldts_ops',
>> at offset 640 (in bits) at rte_bbdev.h:520:1
>>> 'rte_bbdev_dequeue_mldts_ops_t rte_bbdev::dequeue_mldts_ops',
>> at offset 704 (in bits) at rte_bbdev.h:522:1
>>> 7 data member changes (9 filtered):
>>> type of 'rte_bbdev_dequeue_fft_ops_t rte_bbdev::dequeue_fft_ops'
>> changed:
>>> underlying type 'typedef uint16_t (rte_bbdev_queue_data*,
>> rte_bbdev_fft_op**, typedef uint16_t)*' changed:
>>> in pointed to type 'function type typedef uint16_t
>> (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)':
>>> parameter 2 of type 'rte_bbdev_fft_op**' has sub-type changes:
>>> in pointed to type 'rte_bbdev_fft_op*':
>>> in pointed to type 'struct rte_bbdev_fft_op' at
>> rte_bbdev_op.h:978:1:
>>> type size changed from 832 to 1664 (in bits)
>>> 1 data member change:
>>> type of 'rte_bbdev_op_fft rte_bbdev_fft_op::fft' changed:
>>> type size changed from 640 to 1472 (in bits)
>>> 6 data member insertions:
>>> 'rte_bbdev_op_data
>> rte_bbdev_op_fft::dewindowing_input', at offset 256 (in bits) at
>> rte_bbdev_op.h:771:1
>>> 'int8_t rte_bbdev_op_fft::freq_resample_mode', at offset
>> 768 (in bits) at rte_bbdev_op.h:807:1
>>> 'uint16_t rte_bbdev_op_fft::output_depadded_size', at
>> offset 784 (in bits) at rte_bbdev_op.h:809:1
>>> 'uint16_t rte_bbdev_op_fft::cs_theta_0[12]', at offset 800
>> (in bits) at rte_bbdev_op.h:811:1
>>> 'uint32_t rte_bbdev_op_fft::cs_theta_d[12]', at offset 992
>> (in bits) at rte_bbdev_op.h:813:1
>>> 'int8_t rte_bbdev_op_fft::time_offset[12]', at offset 1376
>> (in bits) at rte_bbdev_op.h:815:1
>>> 17 data member changes:
>>> 'rte_bbdev_op_data
>> rte_bbdev_op_fft::power_meas_output' offset changed from 256 to 384 (in
>> bits) (by +128 bits)
>>> 'uint32_t rte_bbdev_op_fft::op_flags' offset changed from
>> 384 to 512 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::input_sequence_size' offset
>> changed from 416 to 544 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::input_leading_padding'
>> offset changed from 432 to 560 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::output_sequence_size' offset
>> changed from 448 to 576 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::output_leading_depadding'
>> offset changed from 464 to 592 (in bits) (by +128 bits)
>>> 'uint8_t rte_bbdev_op_fft::window_index[6]' offset
>> changed from 480 to 608 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::cs_bitmap' offset changed
>> from 528 to 656 (in bits) (by +128 bits)
>>> 'uint8_t rte_bbdev_op_fft::num_antennas_log2' offset
>> changed from 544 to 672 (in bits) (by +128 bits)
>>> 'uint8_t rte_bbdev_op_fft::idft_log2' offset changed from
>> 552 to 680 (in bits) (by +128 bits)
>>> 'uint8_t rte_bbdev_op_fft::dft_log2' offset changed from
>> 560 to 688 (in bits) (by +128 bits)
>>> 'int8_t rte_bbdev_op_fft::cs_time_adjustment' offset
>> changed from 568 to 696 (in bits) (by +128 bits)
>>> 'int8_t rte_bbdev_op_fft::idft_shift' offset changed from
>> 576 to 704 (in bits) (by +128 bits)
>>> 'int8_t rte_bbdev_op_fft::dft_shift' offset changed from
>> 584 to 712 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::ncs_reciprocal' offset
>> changed from 592 to 720 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::power_shift' offset changed
>> from 608 to 736 (in bits) (by +128 bits)
>>> 'uint16_t rte_bbdev_op_fft::fp16_exp_adjust' offset
>> changed from 624 to 752 (in bits) (by +128 bits)
>>> 'const rte_bbdev_ops* rte_bbdev::dev_ops' offset changed from 640
>> to 768 (in bits) (by +128 bits)
>>> 'rte_bbdev_data* rte_bbdev::data' offset changed from 704 to 832
>> (in bits) (by +128 bits)
>>> 'rte_bbdev_state rte_bbdev::state' offset changed from 768 to 896
>> (in bits) (by +128 bits)
>>> 'rte_device* rte_bbdev::device' offset changed from 832 to 960 (in
>> bits) (by +128 bits)
>>> 'rte_bbdev_cb_list rte_bbdev::list_cbs' offset changed from 896 to
>> 1024 (in bits) (by +128 bits)
>>> 'rte_intr_handle* rte_bbdev::intr_handle' offset changed
>>> from 1024 to 1152 (in bits) (by +128 bits)
>>
>> As for the report on the rte_bbdev_op_fft structure changes:
>> - wrt its size, I think it is okay to waive it: rte_bbdev_fft_op objects are
>> coming from a bbdev mempool which is created by the bbdev library itself
>> (with the right element size if the application asked for RTE_BBDEV_OP_FFT
>> type),
>> - wrt the field locations, an application may have been touching those
>> fields, so moving all the added fields to the end of the structure would be
>> better.
>> But on the other hand, an application will have to call an fft_ops
>> experimental API at some point, and the application developer is already
>> warned that ABI is not preserved on this part of the API.
>>
>> So I would waive the changes on rte_bbdev_fft_op with something like:
>>
>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index
>> 3ff51509de..3cdce69418 100644
>> --- a/devtools/libabigail.abignore
>> +++ b/devtools/libabigail.abignore
>> @@ -36,6 +36,8 @@
>> [suppress_type]
>> type_kind = enum
>> changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM,
>> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
>> +[suppress_type]
>> + name = rte_bbdev_fft_op
>
>
> OK I did not know about this method. Shouldn't this apply more generally to all experimental structures?
> This can be added into the series for 23.11.
>
>
> Thanks
> Nic
>
>
>
>>
>> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>> ; Temporary exceptions till next major ABI version ;
>>
>>
>> --
>> David Marchand
>
^ permalink raw reply [relevance 0%]
* RE: [EXT] Re: [PATCH v2 01/13] security: add direction in SA/SC configuration
2023-06-07 19:49 3% ` David Marchand
@ 2023-06-08 6:58 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2023-06-08 6:58 UTC (permalink / raw)
To: David Marchand
Cc: dev, thomas, olivier.matz, orika, hemant.agrawal,
Vamsi Krishna Attunuru, ferruh.yigit, andrew.rybchenko,
Jerin Jacob Kollanukkaran, Ankur Dwivedi, Dodji Seketeli
> On Wed, Jun 7, 2023 at 5:20 PM Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > MACsec SC/SA ids are created based on direction of the flow.
> > Hence, added the missing field for configuration and cleanup
> > of the SCs and SAs.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > devtools/libabigail.abignore | 7 +++++++
> > lib/security/rte_security.c | 16 ++++++++++------
> > lib/security/rte_security.h | 14 ++++++++++----
> > lib/security/rte_security_driver.h | 12 ++++++++++--
> > 4 files changed, 37 insertions(+), 12 deletions(-)
> >
>
> Looking at the report with no suppression rule:
> $ abidiff --suppr .../next-cryptodev/devtools/libabigail.abignore
> --no-added-syms --headers-dir1
> .../abi/v23.03/build-gcc-shared/usr/local/include --headers-dir2
> .../next-cryptodev/build-gcc-shared/install/usr/local/include
> .../abi/v23.03/build-gcc-shared/usr/local/lib64/librte_security.so.23.1
> .../next-cryptodev/build-gcc-
> shared/install/usr/local/lib64/librte_security.so.23.2
> Functions changes summary: 0 Removed, 1 Changed (13 filtered out), 0
> Added functions
> Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
>
> 1 function with some indirect sub-type change:
>
> [C] 'function const rte_security_capability*
> rte_security_capabilities_get(rte_security_ctx*)' at
> rte_security.c:241:1 has some indirect sub-type changes:
> parameter 1 of type 'rte_security_ctx*' has sub-type changes:
> in pointed to type 'struct rte_security_ctx' at rte_security.h:69:1:
> type size hasn't changed
> 1 data member change:
> type of 'const rte_security_ops* ops' changed:
> in pointed to type 'const rte_security_ops':
> in unqualified underlying type 'struct rte_security_ops'
> at rte_security_driver.h:230:1:
> type size hasn't changed
> 4 data member changes (4 filtered):
> type of 'security_macsec_sc_create_t
> macsec_sc_create' changed:
> underlying type 'int (void*,
> rte_security_macsec_sc*)*' changed:
> in pointed to type 'function type int (void*,
> rte_security_macsec_sc*)':
> parameter 2 of type 'rte_security_macsec_sc*'
> has sub-type changes:
> in pointed to type 'struct
> rte_security_macsec_sc' at rte_security.h:399:1:
> type size changed from 256 to 320 (in bits)
> 1 data member insertion:
> 'union {struct {uint16_t sa_id[4];
> uint8_t sa_in_use[4]; uint8_t active; uint8_t is_xpn; uint8_t
> reserved;} sc_rx; struct {uint16_t sa_id; uint16_t sa_id_rekey;
> uint64_t sci; uint8_t active; uint8_t re_key_en; uint8_t is_xpn;
> uint8_t reserved;} sc_tx;}', at offset 128 (in bits)
> 1 data member change:
> anonymous data member union {struct
> {uint16_t sa_id[4]; uint8_t sa_in_use[4]; uint8_t active; uint8_t
> reserved;} sc_rx; struct {uint16_t sa_id; uint16_t sa_id_rekey;
> uint64_t sci; uint8_t active; uint8_t re_key_en; uint8_t reserved;}
> sc_tx;} at offset 64 (in bits) became data member 'uint64_t
> pn_threshold'
> and size changed from 192 to 64 (in
> bits) (by -128 bits)
> type of 'security_macsec_sc_destroy_t
> macsec_sc_destroy' changed:
> underlying type 'int (void*, typedef uint16_t)*' changed:
> in pointed to type 'function type int (void*,
> typedef uint16_t)':
> parameter 3 of type 'enum
> rte_security_macsec_direction' was added
> type of 'security_macsec_sc_stats_get_t
> macsec_sc_stats_get' changed:
> underlying type 'int (void*, typedef uint16_t,
> rte_security_macsec_sc_stats*)*' changed:
> in pointed to type 'function type int (void*,
> typedef uint16_t, rte_security_macsec_sc_stats*)':
> parameter 3 of type
> 'rte_security_macsec_sc_stats*' changed:
> entity changed from
> 'rte_security_macsec_sc_stats*' to 'enum
> rte_security_macsec_direction' at rte_security.h:361:1
> type size changed from 64 to 32 (in bits)
> type alignment changed from 0 to 32
> parameter 4 of type
> 'rte_security_macsec_sc_stats*' was added
> type of 'security_macsec_sa_stats_get_t
> macsec_sa_stats_get' changed:
> underlying type 'int (void*, typedef uint16_t,
> rte_security_macsec_sa_stats*)*' changed:
> in pointed to type 'function type int (void*,
> typedef uint16_t, rte_security_macsec_sa_stats*)':
> parameter 3 of type
> 'rte_security_macsec_sa_stats*' changed:
> entity changed from
> 'rte_security_macsec_sa_stats*' to 'enum
> rte_security_macsec_direction' at rte_security.h:361:1
> type size changed from 64 to 32 (in bits)
> type alignment changed from 0 to 32
> parameter 4 of type
> 'rte_security_macsec_sa_stats*' was added
>
> The report complains about the macsec ops type changes.
>
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index c0361bfc7b..14d8fa4293 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -37,6 +37,13 @@
> > [suppress_type]
> > type_kind = enum
> > changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM,
> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> > +; Ignore changes to rte_security_ops MACsec APIs which are experimental
> > +[suppress_type]
> > + name = rte_security_ops
> > + has_data_member_inserted_between =
> > + {
> > + offset_of(security_macsec_sc_create_t),
> offset_of(security_macsec_sa_stats_get_t)
> > + }
>
> So I don't get the intent with this rule.
> There is no field named either security_macsec_sc_create_t or
> security_macsec_sa_stats_get_t in the rte_security_ops struct.
>
> Now.. why is this rule making the check pass... it is a mystery to me.
> I already hit a case when libabigail ignored statements that are
> invalid or make no sense, so my guess is that this rule is actually
> applied as a simple:
> [suppress_type]
> name = rte_security_ops
>
> And well, this rule is ok from my pov: this rte_security_ops struct
> does not change in size.
> An application is not supposed to know about its content (that is
> defined in a driver header) and all accesses to those ops are supposed
> to be through symbols from the security library.
> So I would go with this "larger" and simpler rule.
The intent was to make it specific to MACsec APIs only.
I am ok with both as these are internal things and will change in the next release.
Updated the v3 as per your suggestion.
>
>
> Just a small addition to this, as discussed offlist, this is going to
> be reworked in v23.11 and this rule on rte_security_ops will be
> unnecessary, so please move it to the relevant block (at the end) of
> the libabigail.abignore file.
Ack
^ permalink raw reply [relevance 0%]
* [PATCH v3 01/13] security: add direction in SA/SC configuration
@ 2023-06-08 6:54 3% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2023-06-08 6:54 UTC (permalink / raw)
To: dev
Cc: thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
vattunuru, ferruh.yigit, andrew.rybchenko, jerinj, adwivedi,
Akhil Goyal
MACsec SC/SA ids are created based on direction of the flow.
Hence, added the missing field for configuration and cleanup
of the SCs and SAs.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 4 ++++
lib/security/rte_security.c | 16 ++++++++++------
lib/security/rte_security.h | 14 ++++++++++----
lib/security/rte_security_driver.h | 12 ++++++++++--
4 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index c0361bfc7b..03bfbce259 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -41,3 +41,7 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore changes to rte_security_ops which are internal to PMD.
+[suppress_type]
+ name = rte_security_ops
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c4d64bb8e9 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -164,13 +164,14 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
}
int
-rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id)
+rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir)
{
int ret;
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sc_destroy, -EINVAL, -ENOTSUP);
- ret = instance->ops->macsec_sc_destroy(instance->device, sc_id);
+ ret = instance->ops->macsec_sc_destroy(instance->device, sc_id, dir);
if (ret != 0)
return ret;
@@ -181,13 +182,14 @@ rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id
}
int
-rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id)
+rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir)
{
int ret;
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sa_destroy, -EINVAL, -ENOTSUP);
- ret = instance->ops->macsec_sa_destroy(instance->device, sa_id);
+ ret = instance->ops->macsec_sa_destroy(instance->device, sa_id, dir);
if (ret != 0)
return ret;
@@ -199,22 +201,24 @@ rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id
int
rte_security_macsec_sc_stats_get(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats)
{
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sc_stats_get, -EINVAL, -ENOTSUP);
RTE_PTR_OR_ERR_RET(stats, -EINVAL);
- return instance->ops->macsec_sc_stats_get(instance->device, sc_id, stats);
+ return instance->ops->macsec_sc_stats_get(instance->device, sc_id, dir, stats);
}
int
rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats)
{
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sa_stats_get, -EINVAL, -ENOTSUP);
RTE_PTR_OR_ERR_RET(stats, -EINVAL);
- return instance->ops->macsec_sa_stats_get(instance->device, sa_id, stats);
+ return instance->ops->macsec_sa_stats_get(instance->device, sa_id, dir, stats);
}
int
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..c7a523b6d6 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -761,6 +761,7 @@ rte_security_macsec_sc_create(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sc_id SC ID to be destroyed
+ * @param dir direction of the SC
* @return
* - 0 if successful.
* - -EINVAL if sc_id is invalid or instance is NULL.
@@ -768,7 +769,8 @@ rte_security_macsec_sc_create(struct rte_security_ctx *instance,
*/
__rte_experimental
int
-rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id);
+rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir);
/**
* @warning
@@ -798,6 +800,7 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sa_id SA ID to be destroyed
+ * @param dir direction of the SA
* @return
* - 0 if successful.
* - -EINVAL if sa_id is invalid or instance is NULL.
@@ -805,7 +808,8 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
*/
__rte_experimental
int
-rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id);
+rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir);
/** Device-specific metadata field type */
typedef uint64_t rte_security_dynfield_t;
@@ -1077,6 +1081,7 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sa_id SA ID for which stats are needed
+ * @param dir direction of the SA
* @param stats statistics
* @return
* - On success, return 0.
@@ -1085,7 +1090,7 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
__rte_experimental
int
rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
- uint16_t sa_id,
+ uint16_t sa_id, enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats);
/**
@@ -1096,6 +1101,7 @@ rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sc_id SC ID for which stats are needed
+ * @param dir direction of the SC
* @param stats SC statistics
* @return
* - On success, return 0.
@@ -1104,7 +1110,7 @@ rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
__rte_experimental
int
rte_security_macsec_sc_stats_get(struct rte_security_ctx *instance,
- uint16_t sc_id,
+ uint16_t sc_id, enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats);
/**
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..677c7d1f91 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -106,8 +106,10 @@ typedef int (*security_macsec_sc_create_t)(void *device, struct rte_security_mac
*
* @param device Crypto/eth device pointer
* @param sc_id MACsec SC ID
+ * @param dir Direction of SC
*/
-typedef int (*security_macsec_sc_destroy_t)(void *device, uint16_t sc_id);
+typedef int (*security_macsec_sc_destroy_t)(void *device, uint16_t sc_id,
+ enum rte_security_macsec_direction dir);
/**
* Configure a MACsec security Association (SA) on a device.
@@ -128,8 +130,10 @@ typedef int (*security_macsec_sa_create_t)(void *device, struct rte_security_mac
*
* @param device Crypto/eth device pointer
* @param sa_id MACsec SA ID
+ * @param dir Direction of SA
*/
-typedef int (*security_macsec_sa_destroy_t)(void *device, uint16_t sa_id);
+typedef int (*security_macsec_sa_destroy_t)(void *device, uint16_t sa_id,
+ enum rte_security_macsec_direction dir);
/**
* Get the size of a security session
@@ -162,6 +166,7 @@ typedef int (*security_session_stats_get_t)(void *device,
*
* @param device Crypto/eth device pointer
* @param sc_id secure channel ID created by rte_security_macsec_sc_create()
+ * @param dir direction of SC
* @param stats SC stats of the driver
*
* @return
@@ -169,6 +174,7 @@ typedef int (*security_session_stats_get_t)(void *device,
* - -EINVAL if sc_id or device is invalid.
*/
typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats);
/**
@@ -176,6 +182,7 @@ typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
*
* @param device Crypto/eth device pointer
* @param sa_id secure channel ID created by rte_security_macsec_sc_create()
+ * @param dir direction of SA
* @param stats SC stats of the driver
*
* @return
@@ -183,6 +190,7 @@ typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
* - -EINVAL if sa_id or device is invalid.
*/
typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats);
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 01/13] security: add direction in SA/SC configuration
2023-06-07 15:19 3% ` [PATCH v2 01/13] security: add direction in SA/SC configuration Akhil Goyal
@ 2023-06-07 19:49 3% ` David Marchand
2023-06-08 6:58 0% ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-06-07 19:49 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, thomas, olivier.matz, orika, hemant.agrawal, vattunuru,
ferruh.yigit, andrew.rybchenko, jerinj, adwivedi, Dodji Seketeli
On Wed, Jun 7, 2023 at 5:20 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
> MACsec SC/SA ids are created based on direction of the flow.
> Hence, added the missing field for configuration and cleanup
> of the SCs and SAs.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> devtools/libabigail.abignore | 7 +++++++
> lib/security/rte_security.c | 16 ++++++++++------
> lib/security/rte_security.h | 14 ++++++++++----
> lib/security/rte_security_driver.h | 12 ++++++++++--
> 4 files changed, 37 insertions(+), 12 deletions(-)
>
Looking at the report with no suppression rule:
$ abidiff --suppr .../next-cryptodev/devtools/libabigail.abignore
--no-added-syms --headers-dir1
.../abi/v23.03/build-gcc-shared/usr/local/include --headers-dir2
.../next-cryptodev/build-gcc-shared/install/usr/local/include
.../abi/v23.03/build-gcc-shared/usr/local/lib64/librte_security.so.23.1
.../next-cryptodev/build-gcc-shared/install/usr/local/lib64/librte_security.so.23.2
Functions changes summary: 0 Removed, 1 Changed (13 filtered out), 0
Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
1 function with some indirect sub-type change:
[C] 'function const rte_security_capability*
rte_security_capabilities_get(rte_security_ctx*)' at
rte_security.c:241:1 has some indirect sub-type changes:
parameter 1 of type 'rte_security_ctx*' has sub-type changes:
in pointed to type 'struct rte_security_ctx' at rte_security.h:69:1:
type size hasn't changed
1 data member change:
type of 'const rte_security_ops* ops' changed:
in pointed to type 'const rte_security_ops':
in unqualified underlying type 'struct rte_security_ops'
at rte_security_driver.h:230:1:
type size hasn't changed
4 data member changes (4 filtered):
type of 'security_macsec_sc_create_t
macsec_sc_create' changed:
underlying type 'int (void*,
rte_security_macsec_sc*)*' changed:
in pointed to type 'function type int (void*,
rte_security_macsec_sc*)':
parameter 2 of type 'rte_security_macsec_sc*'
has sub-type changes:
in pointed to type 'struct
rte_security_macsec_sc' at rte_security.h:399:1:
type size changed from 256 to 320 (in bits)
1 data member insertion:
'union {struct {uint16_t sa_id[4];
uint8_t sa_in_use[4]; uint8_t active; uint8_t is_xpn; uint8_t
reserved;} sc_rx; struct {uint16_t sa_id; uint16_t sa_id_rekey;
uint64_t sci; uint8_t active; uint8_t re_key_en; uint8_t is_xpn;
uint8_t reserved;} sc_tx;}', at offset 128 (in bits)
1 data member change:
anonymous data member union {struct
{uint16_t sa_id[4]; uint8_t sa_in_use[4]; uint8_t active; uint8_t
reserved;} sc_rx; struct {uint16_t sa_id; uint16_t sa_id_rekey;
uint64_t sci; uint8_t active; uint8_t re_key_en; uint8_t reserved;}
sc_tx;} at offset 64 (in bits) became data member 'uint64_t
pn_threshold'
and size changed from 192 to 64 (in
bits) (by -128 bits)
type of 'security_macsec_sc_destroy_t
macsec_sc_destroy' changed:
underlying type 'int (void*, typedef uint16_t)*' changed:
in pointed to type 'function type int (void*,
typedef uint16_t)':
parameter 3 of type 'enum
rte_security_macsec_direction' was added
type of 'security_macsec_sc_stats_get_t
macsec_sc_stats_get' changed:
underlying type 'int (void*, typedef uint16_t,
rte_security_macsec_sc_stats*)*' changed:
in pointed to type 'function type int (void*,
typedef uint16_t, rte_security_macsec_sc_stats*)':
parameter 3 of type
'rte_security_macsec_sc_stats*' changed:
entity changed from
'rte_security_macsec_sc_stats*' to 'enum
rte_security_macsec_direction' at rte_security.h:361:1
type size changed from 64 to 32 (in bits)
type alignment changed from 0 to 32
parameter 4 of type
'rte_security_macsec_sc_stats*' was added
type of 'security_macsec_sa_stats_get_t
macsec_sa_stats_get' changed:
underlying type 'int (void*, typedef uint16_t,
rte_security_macsec_sa_stats*)*' changed:
in pointed to type 'function type int (void*,
typedef uint16_t, rte_security_macsec_sa_stats*)':
parameter 3 of type
'rte_security_macsec_sa_stats*' changed:
entity changed from
'rte_security_macsec_sa_stats*' to 'enum
rte_security_macsec_direction' at rte_security.h:361:1
type size changed from 64 to 32 (in bits)
type alignment changed from 0 to 32
parameter 4 of type
'rte_security_macsec_sa_stats*' was added
The report complains about the macsec ops type changes.
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index c0361bfc7b..14d8fa4293 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -37,6 +37,13 @@
> [suppress_type]
> type_kind = enum
> changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> +; Ignore changes to rte_security_ops MACsec APIs which are experimental
> +[suppress_type]
> + name = rte_security_ops
> + has_data_member_inserted_between =
> + {
> + offset_of(security_macsec_sc_create_t), offset_of(security_macsec_sa_stats_get_t)
> + }
So I don't get the intent with this rule.
There is no field named either security_macsec_sc_create_t or
security_macsec_sa_stats_get_t in the rte_security_ops struct.
Now.. why is this rule making the check pass... it is a mystery to me.
I already hit a case when libabigail ignored statements that are
invalid or make no sense, so my guess is that this rule is actually
applied as a simple:
[suppress_type]
name = rte_security_ops
And well, this rule is ok from my pov: this rte_security_ops struct
does not change in size.
An application is not supposed to know about its content (that is
defined in a driver header) and all accesses to those ops are supposed
to be through symbols from the security library.
So I would go with this "larger" and simpler rule.
Just a small addition to this, as discussed offlist, this is going to
be reworked in v23.11 and this rule on rte_security_ops will be
unnecessary, so please move it to the relevant block (at the end) of
the libabigail.abignore file.
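For reference, the minimal form of such a rule, placed in that trailing block, could look like the following (this matches what v3 of the patch ends up adding; the banner is the one already present in libabigail.abignore):

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

; Ignore changes to rte_security_ops which are internal to PMD.
[suppress_type]
        name = rte_security_ops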
--
David Marchand
^ permalink raw reply [relevance 3%]
* [PATCH v2 01/13] security: add direction in SA/SC configuration
@ 2023-06-07 15:19 3% ` Akhil Goyal
2023-06-07 19:49 3% ` David Marchand
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2023-06-07 15:19 UTC (permalink / raw)
To: dev
Cc: thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
vattunuru, ferruh.yigit, andrew.rybchenko, jerinj, adwivedi,
Akhil Goyal
MACsec SC/SA ids are created based on direction of the flow.
Hence, added the missing field for configuration and cleanup
of the SCs and SAs.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 7 +++++++
lib/security/rte_security.c | 16 ++++++++++------
lib/security/rte_security.h | 14 ++++++++++----
lib/security/rte_security_driver.h | 12 ++++++++++--
4 files changed, 37 insertions(+), 12 deletions(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index c0361bfc7b..14d8fa4293 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -37,6 +37,13 @@
[suppress_type]
type_kind = enum
changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
+; Ignore changes to rte_security_ops MACsec APIs which are experimental
+[suppress_type]
+ name = rte_security_ops
+ has_data_member_inserted_between =
+ {
+ offset_of(security_macsec_sc_create_t), offset_of(security_macsec_sa_stats_get_t)
+ }
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c4d64bb8e9 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -164,13 +164,14 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
}
int
-rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id)
+rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir)
{
int ret;
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sc_destroy, -EINVAL, -ENOTSUP);
- ret = instance->ops->macsec_sc_destroy(instance->device, sc_id);
+ ret = instance->ops->macsec_sc_destroy(instance->device, sc_id, dir);
if (ret != 0)
return ret;
@@ -181,13 +182,14 @@ rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id
}
int
-rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id)
+rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir)
{
int ret;
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sa_destroy, -EINVAL, -ENOTSUP);
- ret = instance->ops->macsec_sa_destroy(instance->device, sa_id);
+ ret = instance->ops->macsec_sa_destroy(instance->device, sa_id, dir);
if (ret != 0)
return ret;
@@ -199,22 +201,24 @@ rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id
int
rte_security_macsec_sc_stats_get(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats)
{
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sc_stats_get, -EINVAL, -ENOTSUP);
RTE_PTR_OR_ERR_RET(stats, -EINVAL);
- return instance->ops->macsec_sc_stats_get(instance->device, sc_id, stats);
+ return instance->ops->macsec_sc_stats_get(instance->device, sc_id, dir, stats);
}
int
rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats)
{
RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, macsec_sa_stats_get, -EINVAL, -ENOTSUP);
RTE_PTR_OR_ERR_RET(stats, -EINVAL);
- return instance->ops->macsec_sa_stats_get(instance->device, sa_id, stats);
+ return instance->ops->macsec_sa_stats_get(instance->device, sa_id, dir, stats);
}
int
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..c7a523b6d6 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -761,6 +761,7 @@ rte_security_macsec_sc_create(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sc_id SC ID to be destroyed
+ * @param dir direction of the SC
* @return
* - 0 if successful.
* - -EINVAL if sc_id is invalid or instance is NULL.
@@ -768,7 +769,8 @@ rte_security_macsec_sc_create(struct rte_security_ctx *instance,
*/
__rte_experimental
int
-rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id);
+rte_security_macsec_sc_destroy(struct rte_security_ctx *instance, uint16_t sc_id,
+ enum rte_security_macsec_direction dir);
/**
* @warning
@@ -798,6 +800,7 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sa_id SA ID to be destroyed
+ * @param dir direction of the SA
* @return
* - 0 if successful.
* - -EINVAL if sa_id is invalid or instance is NULL.
@@ -805,7 +808,8 @@ rte_security_macsec_sa_create(struct rte_security_ctx *instance,
*/
__rte_experimental
int
-rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id);
+rte_security_macsec_sa_destroy(struct rte_security_ctx *instance, uint16_t sa_id,
+ enum rte_security_macsec_direction dir);
/** Device-specific metadata field type */
typedef uint64_t rte_security_dynfield_t;
@@ -1077,6 +1081,7 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sa_id SA ID for which stats are needed
+ * @param dir direction of the SA
* @param stats statistics
* @return
* - On success, return 0.
@@ -1085,7 +1090,7 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
__rte_experimental
int
rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
- uint16_t sa_id,
+ uint16_t sa_id, enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats);
/**
@@ -1096,6 +1101,7 @@ rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
*
* @param instance security instance
* @param sc_id SC ID for which stats are needed
+ * @param dir direction of the SC
* @param stats SC statistics
* @return
* - On success, return 0.
@@ -1104,7 +1110,7 @@ rte_security_macsec_sa_stats_get(struct rte_security_ctx *instance,
__rte_experimental
int
rte_security_macsec_sc_stats_get(struct rte_security_ctx *instance,
- uint16_t sc_id,
+ uint16_t sc_id, enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats);
/**
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..677c7d1f91 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -106,8 +106,10 @@ typedef int (*security_macsec_sc_create_t)(void *device, struct rte_security_mac
*
* @param device Crypto/eth device pointer
* @param sc_id MACsec SC ID
+ * @param dir Direction of SC
*/
-typedef int (*security_macsec_sc_destroy_t)(void *device, uint16_t sc_id);
+typedef int (*security_macsec_sc_destroy_t)(void *device, uint16_t sc_id,
+ enum rte_security_macsec_direction dir);
/**
* Configure a MACsec security Association (SA) on a device.
@@ -128,8 +130,10 @@ typedef int (*security_macsec_sa_create_t)(void *device, struct rte_security_mac
*
* @param device Crypto/eth device pointer
* @param sa_id MACsec SA ID
+ * @param dir Direction of SA
*/
-typedef int (*security_macsec_sa_destroy_t)(void *device, uint16_t sa_id);
+typedef int (*security_macsec_sa_destroy_t)(void *device, uint16_t sa_id,
+ enum rte_security_macsec_direction dir);
/**
* Get the size of a security session
@@ -162,6 +166,7 @@ typedef int (*security_session_stats_get_t)(void *device,
*
* @param device Crypto/eth device pointer
* @param sc_id secure channel ID created by rte_security_macsec_sc_create()
+ * @param dir direction of SC
* @param stats SC stats of the driver
*
* @return
@@ -169,6 +174,7 @@ typedef int (*security_session_stats_get_t)(void *device,
* - -EINVAL if sc_id or device is invalid.
*/
typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sc_stats *stats);
/**
@@ -176,6 +182,7 @@ typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
*
* @param device Crypto/eth device pointer
* @param sa_id secure channel ID created by rte_security_macsec_sc_create()
+ * @param dir direction of SA
* @param stats SC stats of the driver
*
* @return
@@ -183,6 +190,7 @@ typedef int (*security_macsec_sc_stats_get_t)(void *device, uint16_t sc_id,
* - -EINVAL if sa_id or device is invalid.
*/
typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
+ enum rte_security_macsec_direction dir,
struct rte_security_macsec_sa_stats *stats);
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port
2023-06-06 16:26 0% ` [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port Ferruh Yigit
@ 2023-06-07 10:14 0% ` lihuisong (C)
0 siblings, 0 replies; 200+ results
From: lihuisong (C) @ 2023-06-07 10:14 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: thomas, andrew.rybchenko, liudongdong3, liuyonglong, fengchengwen
On 2023/6/7 0:26, Ferruh Yigit wrote:
> On 5/27/2023 3:11 AM, Huisong Li wrote:
>> This patchset fix some bugs and support attaching and detaching port
>> in primary and secondary.
>>
> Hi Huisong,
>
> As commented on v4, I have some concerns on this set.
please see my reply.
>
> The set does multiple ethdev/testpmd change, but the main target of the
> patch is not described clearly/simply.
The main target is to support attaching and detaching ports in the primary
and secondary processes.
Some problems are fixed along the way.
>
> It looks like intention is to be able to register NEW event callback in
> the secondary process and be able to setup device in secondary when
> primary attaches a device,
> but my question is why not multi-process communication socket can't be
> used for this?
>
> MP socket/communication/thread is developed for this reason, I am not
> convinced why it can't be used to sync primary and secondary for device
> attach/detach.
The secondary process automatically probes the device when the primary
attaches a device.
The primary process automatically probes the device before the probe
phase in the secondary when the secondary attaches a device.
The above behavior comes from the multi-process socket
communication (see hotplug_mp.c).
This series just adds support for this feature in testpmd.
But hotplug_mp cannot do application-level work, like updating
information, which is the duty of the application.
>
>
>> ---
>> -v6: adjust rte_eth_dev_is_used position based on alphabetical order
>> in version.map
>> -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
>> -v4: fix a misspelling.
>> -v3:
>> #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
>> for other bus type.
>> #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
>> the problem in patch 2/5.
>> -v2: resend due to CI unexplained failure.
>>
>> Huisong Li (5):
>> drivers/bus: restore driver assignment at front of probing
>> ethdev: fix skip valid port in probing callback
>> app/testpmd: check the validity of the port
>> app/testpmd: add attach and detach port for multiple process
>> app/testpmd: stop forwarding in new or destroy event
>>
>> app/test-pmd/testpmd.c | 47 +++++++++++++++---------
>> app/test-pmd/testpmd.h | 1 -
>> drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
>> drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
>> drivers/bus/fslmc/fslmc_bus.c | 8 +++-
>> drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
>> drivers/bus/pci/pci_common.c | 9 ++++-
>> drivers/bus/vdev/vdev.c | 10 ++++-
>> drivers/bus/vmbus/vmbus_common.c | 9 ++++-
>> drivers/net/bnxt/bnxt_ethdev.c | 3 +-
>> drivers/net/bonding/bonding_testpmd.c | 1 -
>> drivers/net/mlx5/mlx5.c | 2 +-
>> lib/ethdev/ethdev_driver.c | 13 +++++--
>> lib/ethdev/ethdev_driver.h | 12 ++++++
>> lib/ethdev/ethdev_pci.h | 2 +-
>> lib/ethdev/rte_class_eth.c | 2 +-
>> lib/ethdev/rte_ethdev.c | 4 +-
>> lib/ethdev/rte_ethdev.h | 4 +-
>> lib/ethdev/version.map | 1 +
>> 19 files changed, 114 insertions(+), 44 deletions(-)
>>
> .
^ permalink raw reply [relevance 0%]
* [PATCH 02/10] net/nfp: add a check function for the NFD version
@ 2023-06-07 1:57 12% ` Chaoyong He
0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-06-07 1:57 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add a check function for the NFD version. This frees the logic that
uses this version from validity checks, thus simplifying
the driver.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 3 +++
drivers/net/nfp/nfp_common.c | 37 ++++++++++++++++++-----------
drivers/net/nfp/nfp_common.h | 1 +
drivers/net/nfp/nfp_ethdev.c | 35 +++++++++------------------
drivers/net/nfp/nfp_ethdev_vf.c | 32 ++++++++-----------------
drivers/net/nfp/nfp_rxtx.c | 14 ++---------
6 files changed, 50 insertions(+), 72 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index c5cc537790..afb4e5b344 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -649,6 +649,9 @@ nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
/* Get some of the read-only fields from the config BAR */
nfp_net_cfg_read_version(hw);
+ if (!nfp_net_is_valid_nfd_version(hw->ver))
+ return -EINVAL;
+
hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP);
hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU);
/* Set the current MTU to the maximum supported */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 497128f6a6..08f9529ead 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -1162,22 +1162,10 @@ nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
{
uint16_t tx_dpp;
- switch (hw->ver.extend) {
- case NFP_NET_CFG_VERSION_DP_NFD3:
+ if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3)
tx_dpp = NFD3_TX_DESC_PER_PKT;
- break;
- case NFP_NET_CFG_VERSION_DP_NFDK:
- if (hw->ver.major < 5) {
- PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- hw->ver.major);
- return -EINVAL;
- }
+ else
tx_dpp = NFDK_TX_DESC_PER_SIMPLE_PKT;
- break;
- default:
- PMD_DRV_LOG(ERR, "The version of firmware is not correct.");
- return -EINVAL;
- }
*max_tx_desc = NFP_NET_MAX_TX_DESC / tx_dpp;
@@ -2106,3 +2094,24 @@ nfp_repr_firmware_version_get(struct rte_eth_dev *dev,
return 0;
}
+
+bool
+nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version)
+{
+ uint8_t nfd_version = version.extend;
+
+ if (nfd_version == NFP_NET_CFG_VERSION_DP_NFD3)
+ return true;
+
+ if (nfd_version == NFP_NET_CFG_VERSION_DP_NFDK) {
+ if (version.major < 5) {
+ PMD_INIT_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
+ version.major);
+ return false;
+ }
+
+ return true;
+ }
+
+ return false;
+}
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 2281445861..acb34535c5 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -444,6 +444,7 @@ void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
void nfp_net_cfg_read_version(struct nfp_net_hw *hw);
int nfp_net_firmware_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size);
int nfp_repr_firmware_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size);
+bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version);
#define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index e84d2ac82e..0ccb543f14 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -474,31 +474,18 @@ static const struct eth_dev_ops nfp_net_eth_dev_ops = {
.fw_version_get = nfp_net_firmware_version_get,
};
-static inline int
-nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
+static inline void
+nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw,
+ struct rte_eth_dev *eth_dev)
{
- switch (hw->ver.extend) {
- case NFP_NET_CFG_VERSION_DP_NFD3:
- eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
- break;
- case NFP_NET_CFG_VERSION_DP_NFDK:
- if (hw->ver.major < 5) {
- PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- hw->ver.major);
- return -EINVAL;
- }
- eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
- break;
- default:
- PMD_DRV_LOG(ERR, "The version of firmware is not correct.");
- return -EINVAL;
- }
+ if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3)
+ eth_dev->tx_pkt_burst = nfp_net_nfd3_xmit_pkts;
+ else
+ eth_dev->tx_pkt_burst = nfp_net_nfdk_xmit_pkts;
eth_dev->dev_ops = &nfp_net_eth_dev_ops;
eth_dev->rx_queue_count = nfp_net_rx_queue_count;
eth_dev->rx_pkt_burst = &nfp_net_recv_pkts;
-
- return 0;
}
static int
@@ -583,12 +570,13 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(DEBUG, "MAC stats: %p", hw->mac_stats);
nfp_net_cfg_read_version(hw);
+ if (!nfp_net_is_valid_nfd_version(hw->ver))
+ return -EINVAL;
if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
return -ENODEV;
- if (nfp_net_ethdev_ops_mount(hw, eth_dev))
- return -EINVAL;
+ nfp_net_ethdev_ops_mount(hw, eth_dev);
hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS);
hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS);
@@ -1133,8 +1121,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
eth_dev->process_private = cpp;
hw = NFP_NET_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
- if (nfp_net_ethdev_ops_mount(hw, eth_dev))
- return -EINVAL;
+ nfp_net_ethdev_ops_mount(hw, eth_dev);
rte_eth_dev_probing_finish(eth_dev);
}
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 71f5020ecd..f971bb8903 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -244,31 +244,18 @@ static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
.fw_version_get = nfp_net_firmware_version_get,
};
-static inline int
-nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
+static inline void
+nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw,
+ struct rte_eth_dev *eth_dev)
{
- switch (hw->ver.extend) {
- case NFP_NET_CFG_VERSION_DP_NFD3:
- eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
- break;
- case NFP_NET_CFG_VERSION_DP_NFDK:
- if (hw->ver.major < 5) {
- PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- hw->ver.major);
- return -EINVAL;
- }
- eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
- break;
- default:
- PMD_DRV_LOG(ERR, "The version of firmware is not correct.");
- return -EINVAL;
- }
+ if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3)
+ eth_dev->tx_pkt_burst = nfp_net_nfd3_xmit_pkts;
+ else
+ eth_dev->tx_pkt_burst = nfp_net_nfdk_xmit_pkts;
eth_dev->dev_ops = &nfp_netvf_eth_dev_ops;
eth_dev->rx_queue_count = nfp_net_rx_queue_count;
eth_dev->rx_pkt_burst = &nfp_net_recv_pkts;
-
- return 0;
}
static int
@@ -300,12 +287,13 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar);
nfp_net_cfg_read_version(hw);
+ if (!nfp_net_is_valid_nfd_version(hw->ver))
+ return -EINVAL;
if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
return -ENODEV;
- if (nfp_netvf_ethdev_ops_mount(hw, eth_dev))
- return -EINVAL;
+ nfp_netvf_ethdev_ops_mount(hw, eth_dev);
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 0ac9d6db03..ce9a07309e 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -889,20 +889,10 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- switch (hw->ver.extend) {
- case NFP_NET_CFG_VERSION_DP_NFD3:
+ if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3)
return nfp_net_nfd3_tx_queue_setup(dev, queue_idx,
nb_desc, socket_id, tx_conf);
- case NFP_NET_CFG_VERSION_DP_NFDK:
- if (hw->ver.major < 5) {
- PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- hw->ver.major);
- return -EINVAL;
- }
+ else
return nfp_net_nfdk_tx_queue_setup(dev, queue_idx,
nb_desc, socket_id, tx_conf);
- default:
- PMD_DRV_LOG(ERR, "The version of firmware is not correct.");
- return -EINVAL;
- }
}
--
2.39.1
^ permalink raw reply [relevance 12%]
* Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
2023-06-06 8:34 0% ` Konstantin Ananyev
@ 2023-06-07 0:00 0% ` Ferruh Yigit
2023-06-12 3:25 0% ` Feifei Wang
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-06-07 0:00 UTC (permalink / raw)
To: Konstantin Ananyev, Feifei Wang,
Константин
Ананьев,
thomas, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang
On 6/6/2023 9:34 AM, Konstantin Ananyev wrote:
>
>
>>
>> [...]
>>>> Probably I am missing something, but why it is not possible to do something
>>> like that:
>>>>
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=M, ...); ....
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=K, ...);
>>>>
>>>> I.E. feed rx queue from 2 tx queues?
>>>>
>>>> Two problems for this:
>>>> 1. If we have 2 tx queues for rx, the thread should make the extra
>>>> judgement to decide which one to choose in the driver layer.
>>>
>>> Not sure, why on the driver layer?
>>> The example I gave above - decision is made on application layer.
>>> Lets say first call didn't free enough mbufs, so app decided to use second txq
>>> for rearm.
>> [Feifei] I think the current mbuf recycle mode can support this usage. For example:
>> n = rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
>> if (n < planned_number)
>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
>>
>> Thus, if users want, they can do like this.
>
> Yes, that was my thought, that's why I was surprise that in the comments we have:
> " Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
> * between the receive queue and transmit queue. Do not pair one receive queue with
> * multiple transmit queues or pair one transmit queue with multiple receive queues,
> * in order to avoid memory error rewriting."
>
I guess that is from previous versions of the set; it would be good to
address the limitations/restrictions again in the latest version.
>>
>>>
>>>> On the other hand, current mechanism can support users to switch 1 txq
>>>> to another timely in the application layer. If user want to choose
>>>> another txq, he just need to change the txq_queue_id parameter in the API.
>>>> 2. If you want one rxq to support two txq at the same time, this needs
>>>> to add spinlock on guard variable to avoid multi-thread conflict.
>>>> Spinlock will decrease the data-path performance greatly. Thus, we do
>>>> not consider
>>>> 1 rxq mapping multiple txqs here.
>>>
>>> I am talking about situation when one thread controls 2 tx queues.
>>>
>>>> + *
>>>> + * @param rx_port_id
>>>> + * Port identifying the receive side.
>>>> + * @param rx_queue_id
>>>> + * The index of the receive queue identifying the receive side.
>>>> + * The value must be in the range [0, nb_rx_queue - 1] previously
>>>> +supplied
>>>> + * to rte_eth_dev_configure().
>>>> + * @param tx_port_id
>>>> + * Port identifying the transmit side.
>>>> + * @param tx_queue_id
>>>> + * The index of the transmit queue identifying the transmit side.
>>>> + * The value must be in the range [0, nb_tx_queue - 1] previously
>>>> +supplied
>>>> + * to rte_eth_dev_configure().
>>>> + * @param recycle_rxq_info
>>>> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which
>>>> +contains
>>>> + * the information of the Rx queue mbuf ring.
>>>> + * @return
>>>> + * The number of recycling mbufs.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
>>>> +uint16_t tx_port_id, uint16_t tx_queue_id, struct
>>>> +rte_eth_recycle_rxq_info *recycle_rxq_info) { struct rte_eth_fp_ops
>>>> +*p; void *qd; uint16_t nb_mbufs;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> + if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >=
>>>> +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
>>>> +tx_port_id=%u or tx_queue_id=%u\n", tx_port_id, tx_queue_id);
>>>> +return 0; } #endif
>>>> +
>>>> + /* fetch pointer to queue data */
>>>> + p = &rte_eth_fp_ops[tx_port_id];
>>>> + qd = p->txq.data[tx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
>>>> +
>>>> + if (qd == NULL) {
>>>> + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
>>>> +tx_queue_id, tx_port_id); return 0; } #endif if
>>>> +(p->recycle_tx_mbufs_reuse == NULL) return 0;
>>>> +
>>>> + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
>>>> + * into Rx mbuf ring.
>>>> + */
>>>> + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
>>>> +
>>>> + /* If no recycling mbufs, return 0. */ if (nb_mbufs == 0) return 0;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> + if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >=
>>>> +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
>>>> +rx_port_id=%u or rx_queue_id=%u\n", rx_port_id, rx_queue_id);
>>>> +return 0; } #endif
>>>> +
>>>> + /* fetch pointer to queue data */
>>>> + p = &rte_eth_fp_ops[rx_port_id];
>>>> + qd = p->rxq.data[rx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
>>>> +
>>>> + if (qd == NULL) {
>>>> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
>>>> +rx_queue_id, rx_port_id); return 0; } #endif
>>>> +
>>>> + if (p->recycle_rx_descriptors_refill == NULL) return 0;
>>>> +
>>>> + /* Replenish the Rx descriptors with the recycling
>>>> + * into Rx mbuf ring.
>>>> + */
>>>> + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
>>>> +
>>>> + return nb_mbufs;
>>>> +}
>>>> +
>>>> /**
>>>> * @warning
>>>> * @b EXPERIMENTAL: this API may change without prior notice diff
>>>> --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>>>> index dcf8adab92..a2e6ea6b6c 100644
>>>> --- a/lib/ethdev/rte_ethdev_core.h
>>>> +++ b/lib/ethdev/rte_ethdev_core.h
>>>> @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void
>>>> *rxq, uint16_t offset);
>>>> /** @internal Check the status of a Tx descriptor */ typedef int
>>>> (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>>>
>>>> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
>>>> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq, struct
>>>> +rte_eth_recycle_rxq_info *recycle_rxq_info);
>>>> +
>>>> +/** @internal Refill Rx descriptors with the recycling mbufs */
>>>> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq,
>>>> +uint16_t nb);
>>>> +
>>>> /**
>>>> * @internal
>>>> * Structure used to hold opaque pointers to internal ethdev Rx/Tx @@
>>>> -90,9 +97,11 @@ struct rte_eth_fp_ops {
>>>> eth_rx_queue_count_t rx_queue_count;
>>>> /** Check the status of a Rx descriptor. */
>>>> eth_rx_descriptor_status_t rx_descriptor_status;
>>>> + /** Refill Rx descriptors with the recycling mbufs. */
>>>> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>> I am afraid we can't put new fields here without ABI breakage.
>>>>
>>>> Agree
>>>>
>>>> It has to be below rxq.
>>>> Now thinking about current layout probably not the best one, and when
>>>> introducing this struct, I should probably put rxq either on the top
>>>> of the struct, or on the next cache line.
>>>> But such change is not possible right now anyway.
>>>> Same story for txq.
>>>>
>>>> Thus we should rearrange the structure like below:
>>>> struct rte_eth_fp_ops {
>>>> struct rte_ethdev_qdata rxq;
>>>> eth_rx_burst_t rx_pkt_burst;
>>>> eth_rx_queue_count_t rx_queue_count;
>>>> eth_rx_descriptor_status_t rx_descriptor_status;
>>>> eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>> uintptr_t reserved1[2];
>>>> }
>>>
>>> Yes, I think such layout will be better.
>>> The only problem here - we have to wait for 23.11 for that.
>>>
>> Ok, if not this change, maybe we still need to wait, because mbufs_recycle has other
>> ABI breakage, such as the change to 'struct rte_eth_dev'.
>
> Ok by me.
>
>>>>
>>>>
>>>> /** Rx queues data. */
>>>> struct rte_ethdev_qdata rxq;
>>>> - uintptr_t reserved1[3];
>>>> + uintptr_t reserved1[2];
>>>> /**@}*/
>>>>
>>>> /**@{*/
>>>> @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
>>>> eth_tx_prep_t tx_pkt_prepare;
>>>> /** Check the status of a Tx descriptor. */
>>>> eth_tx_descriptor_status_t tx_descriptor_status;
>>>> + /** Copy used mbufs from Tx mbuf ring into Rx. */
>>>> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
>>>> /** Tx queues data. */
>>>> struct rte_ethdev_qdata txq;
>>>> - uintptr_t reserved2[3];
>>>> + uintptr_t reserved2[2];
>>>> /**@}*/
>>>>
>>>> } __rte_cache_aligned;
>>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
>>>> 357d1a88c0..45c417f6bd 100644
>>>> --- a/lib/ethdev/version.map
>>>> +++ b/lib/ethdev/version.map
>>>> @@ -299,6 +299,10 @@ EXPERIMENTAL {
>>>> rte_flow_action_handle_query_update;
>>>> rte_flow_async_action_handle_query_update;
>>>> rte_flow_async_create_by_index;
>>>> +
>>>> + # added in 23.07
>>>> + rte_eth_recycle_mbufs;
>>>> + rte_eth_recycle_rx_queue_info_get;
>>>> };
>>>>
>>>> INTERNAL {
>>>> --
>>>> 2.25.1
>>>>
^ permalink raw reply [relevance 0%]
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-06 9:20 4% ` David Marchand
@ 2023-06-06 21:01 0% ` Chautru, Nicolas
2023-06-08 8:47 0% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-06 21:01 UTC (permalink / raw)
To: David Marchand
Cc: Maxime Coquelin, Stephen Hemminger, dev, Rix, Tom,
hemant.agrawal, Vargas, Hernan
Hi David,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, June 6, 2023 2:21 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>; Stephen Hemminger
> <stephen@networkplumber.org>; dev@dpdk.org; Rix, Tom
> <trix@redhat.com>; hemant.agrawal@nxp.com; Vargas, Hernan
> <hernan.vargas@intel.com>
> Subject: Re: [PATCH v1 1/1] doc: announce change in bbdev api related to
> operation extension
>
> On Mon, Jun 5, 2023 at 10:08 PM Chautru, Nicolas
> <nicolas.chautru@intel.com> wrote:
> > Wrt the MLD functions: these are new in the related series but they still
> break the ABI since the struct rte_bbdev includes these functions hence
> causing offset changes.
> >
> > Should I then just rephrase as:
> >
> > +* bbdev: Will extend the API to support the new operation type
> > +``RTE_BBDEV_OP_MLDTS`` as per
> > + this `v1
> > +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This +
> > will notably introduce + new symbols for
> > ``rte_bbdev_dequeue_mldts_ops``, +``rte_bbdev_enqueue_mldts_ops``
> > into the stuct rte_bbdev.
>
> I don't think we need this deprecation notice.
>
>
> Do you need to expose those new mldts ops in rte_bbdev?
> Can't they go to dev_ops?
> If you can't, at least moving those new ops to the end of the structure
> would avoid the breakage on rte_bbdev.
It would probably be best to move all these ops to the end of the structure (i.e. keep them together).
In that case the deprecation notice would call out that the rte_bbdev structure content is more generally modified. Probably best for the longer run.
David, Maxime, ok with that option?
struct __rte_cache_aligned rte_bbdev {
rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
const struct rte_bbdev_ops *dev_ops;
struct rte_bbdev_data *data;
enum rte_bbdev_state state;
struct rte_device *device;
struct rte_bbdev_cb_list list_cbs;
struct rte_intr_handle *intr_handle;
};
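For comparison, a sketch of the minimal alternative David suggests, i.e. appending only the new MLD-TS ops at the tail so the offsets of the existing members stay unchanged. The typedef names follow the series under discussion; whether this fully silences the ABI report still depends on any existing padding at the end of the struct:

struct __rte_cache_aligned rte_bbdev {
	/* ... existing members unchanged, ending with ... */
	struct rte_intr_handle *intr_handle;
	/* new ops appended at the tail, so earlier offsets are preserved */
	rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops;
	rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops;
};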
>
>
> >
> > Pasting below the ABI results for reference
> >
> > [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at
> rte_bbdev.c:174:1 has some indirect sub-type changes:
> > return type changed:
> > in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
> > type size hasn't changed
> > 2 data member insertions:
> > 'rte_bbdev_enqueue_mldts_ops_t rte_bbdev::enqueue_mldts_ops',
> at offset 640 (in bits) at rte_bbdev.h:520:1
> > 'rte_bbdev_dequeue_mldts_ops_t rte_bbdev::dequeue_mldts_ops',
> at offset 704 (in bits) at rte_bbdev.h:522:1
> > 7 data member changes (9 filtered):
> > type of 'rte_bbdev_dequeue_fft_ops_t rte_bbdev::dequeue_fft_ops'
> changed:
> > underlying type 'typedef uint16_t (rte_bbdev_queue_data*,
> rte_bbdev_fft_op**, typedef uint16_t)*' changed:
> > in pointed to type 'function type typedef uint16_t
> (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)':
> > parameter 2 of type 'rte_bbdev_fft_op**' has sub-type changes:
> > in pointed to type 'rte_bbdev_fft_op*':
> > in pointed to type 'struct rte_bbdev_fft_op' at
> rte_bbdev_op.h:978:1:
> > type size changed from 832 to 1664 (in bits)
> > 1 data member change:
> > type of 'rte_bbdev_op_fft rte_bbdev_fft_op::fft' changed:
> > type size changed from 640 to 1472 (in bits)
> > 6 data member insertions:
> > 'rte_bbdev_op_data
> rte_bbdev_op_fft::dewindowing_input', at offset 256 (in bits) at
> rte_bbdev_op.h:771:1
> > 'int8_t rte_bbdev_op_fft::freq_resample_mode', at offset
> 768 (in bits) at rte_bbdev_op.h:807:1
> > 'uint16_t rte_bbdev_op_fft::output_depadded_size', at
> offset 784 (in bits) at rte_bbdev_op.h:809:1
> > 'uint16_t rte_bbdev_op_fft::cs_theta_0[12]', at offset 800
> (in bits) at rte_bbdev_op.h:811:1
> > 'uint32_t rte_bbdev_op_fft::cs_theta_d[12]', at offset 992
> (in bits) at rte_bbdev_op.h:813:1
> > 'int8_t rte_bbdev_op_fft::time_offset[12]', at offset 1376
> (in bits) at rte_bbdev_op.h:815:1
> > 17 data member changes:
> > 'rte_bbdev_op_data
> rte_bbdev_op_fft::power_meas_output' offset changed from 256 to 384 (in
> bits) (by +128 bits)
> > 'uint32_t rte_bbdev_op_fft::op_flags' offset changed from
> 384 to 512 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::input_sequence_size' offset
> changed from 416 to 544 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::input_leading_padding'
> offset changed from 432 to 560 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::output_sequence_size' offset
> changed from 448 to 576 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::output_leading_depadding'
> offset changed from 464 to 592 (in bits) (by +128 bits)
> > 'uint8_t rte_bbdev_op_fft::window_index[6]' offset
> changed from 480 to 608 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::cs_bitmap' offset changed
> from 528 to 656 (in bits) (by +128 bits)
> > 'uint8_t rte_bbdev_op_fft::num_antennas_log2' offset
> changed from 544 to 672 (in bits) (by +128 bits)
> > 'uint8_t rte_bbdev_op_fft::idft_log2' offset changed from
> 552 to 680 (in bits) (by +128 bits)
> > 'uint8_t rte_bbdev_op_fft::dft_log2' offset changed from
> 560 to 688 (in bits) (by +128 bits)
> > 'int8_t rte_bbdev_op_fft::cs_time_adjustment' offset
> changed from 568 to 696 (in bits) (by +128 bits)
> > 'int8_t rte_bbdev_op_fft::idft_shift' offset changed from
> 576 to 704 (in bits) (by +128 bits)
> > 'int8_t rte_bbdev_op_fft::dft_shift' offset changed from
> 584 to 712 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::ncs_reciprocal' offset
> changed from 592 to 720 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::power_shift' offset changed
> from 608 to 736 (in bits) (by +128 bits)
> > 'uint16_t rte_bbdev_op_fft::fp16_exp_adjust' offset
> changed from 624 to 752 (in bits) (by +128 bits)
> > 'const rte_bbdev_ops* rte_bbdev::dev_ops' offset changed from 640
> to 768 (in bits) (by +128 bits)
> > 'rte_bbdev_data* rte_bbdev::data' offset changed from 704 to 832
> (in bits) (by +128 bits)
> > 'rte_bbdev_state rte_bbdev::state' offset changed from 768 to 896
> (in bits) (by +128 bits)
> > 'rte_device* rte_bbdev::device' offset changed from 832 to 960 (in
> bits) (by +128 bits)
> > 'rte_bbdev_cb_list rte_bbdev::list_cbs' offset changed from 896 to
> 1024 (in bits) (by +128 bits)
> > 'rte_intr_handle* rte_bbdev::intr_handle' offset changed
> > from 1024 to 1152 (in bits) (by +128 bits)
>
> As for the report on the rte_bbdev_op_fft structure changes:
> - wrt to its size, I think it is okay to waive it, rte_bbdev_fft_op objects are
> coming from a bbdev mempool which is created by the bbdev library itself
> (with the right element size if the application asked for RTE_BBDEV_OP_FFT
> type),
> - wrt to the fields locations, an application may have been touching those
> fields, so moving all the added fields at the end of the structure would be
> better.
> But on the other hand, an application will have to call an fft_ops
> experimental API at some point, and the application developer is already
> warned that ABI is not preserved on this part of the API,
>
> So I would waive the changes on rte_bbdev_fft_op with something like:
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index
> 3ff51509de..3cdce69418 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -36,6 +36,8 @@
> [suppress_type]
> type_kind = enum
> changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM,
> RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> +[suppress_type]
> + name = rte_bbdev_fft_op
OK I did not know about this method. Shouldn't this apply more generally to all experimental structures?
This can be added into the series for 23.11.
Thanks
Nic
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> ; Temporary exceptions till next major ABI version ;
>
>
> --
> David Marchand
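For reference, a hedged sketch of what a broader rule in devtools/libabigail.abignore could look like if the suppression were extended to the experimental bbdev operation structures, as Nicolas asks above. The regular expression is illustrative only and assumes those structures keep a common naming pattern:

[suppress_type]
        ; illustrative only: waive the experimental bbdev operation structs
        name_regexp = ^rte_bbdev_(fft|mldts)_op$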
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: deprecation notice to add RSS hash algorithm field
2023-06-06 15:50 3% ` Ferruh Yigit
@ 2023-06-06 16:35 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-06 16:35 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Dongdong Liu, dev
On Tue, 6 Jun 2023 16:50:53 +0100
Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> On 6/6/2023 4:39 PM, Stephen Hemminger wrote:
> > On Tue, 6 Jun 2023 20:11:26 +0800
> > Dongdong Liu <liudongdong3@huawei.com> wrote:
> >
> >> Deprecation notice to add "func" field to ``rte_eth_rss_conf``
> >> structure for RSS hash algorithm.
> >>
> >> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
> >> ---
> >
> > New fields do not require deprecation notice.
> > Since this seems to be a repeated issue, perhaps someone should
> > add this to the documentation.
> >
>
>
> Hi Stephen,
>
> This is follow up to an existing patchset:
> https://patches.dpdk.org/project/dpdk/list/?series=27400&state=*
>
> Although the field is an addition to the "struct rte_eth_rss_conf" struct,
> that struct is embedded into "struct rte_eth_conf", which is a parameter to
> an API, so the change causes a size increase in the outer struct and an ABI
> breakage, requiring a deprecation notice.
It will change the ABI, so it will have to wait for 23.11.
But the purpose of a deprecation notice is more about telling users that the API
will change.
The automated tools may give a false complaint. It is OK to add the deprecation
notice, but it is not really necessary.
^ permalink raw reply [relevance 3%]
* Re: [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port
2023-05-27 2:11 3% ` [PATCH V6 " Huisong Li
2023-05-27 2:11 2% ` [PATCH V6 2/5] ethdev: fix skip valid port in probing callback Huisong Li
@ 2023-06-06 16:26 0% ` Ferruh Yigit
2023-06-07 10:14 0% ` lihuisong (C)
1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-06-06 16:26 UTC (permalink / raw)
To: Huisong Li, dev
Cc: thomas, andrew.rybchenko, liudongdong3, liuyonglong, fengchengwen
On 5/27/2023 3:11 AM, Huisong Li wrote:
> This patchset fix some bugs and support attaching and detaching port
> in primary and secondary.
>
Hi Huisong,
As commented on v4, I have some concerns on this set.
The set makes multiple ethdev/testpmd changes, but the main target of the
patch is not described clearly or simply.
It looks like the intention is to be able to register a NEW event callback in
the secondary process and to be able to set up the device in the secondary
when the primary attaches a device,
but my question is: why can't the multi-process communication socket be
used for this?
The MP socket/communication/thread was developed for this reason, and I am not
convinced it can't be used to sync the primary and secondary for device
attach/detach.
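For context, a minimal sketch of how the existing EAL multi-process channel Ferruh refers to could carry such a notification; the message name ("port_attach"), payload and helper functions are illustrative, not an existing testpmd mechanism, and error handling is omitted:

#include <string.h>
#include <rte_eal.h>
#include <rte_common.h>
#include <rte_string_fns.h>

/* Secondary process: handler invoked when the primary announces an attach. */
static int
secondary_handle_attach(const struct rte_mp_msg *msg, const void *peer)
{
	uint16_t port_id;

	RTE_SET_USED(peer);
	memcpy(&port_id, msg->param, sizeof(port_id));
	/* ... set up the newly attached port locally ... */
	return 0;
}

/* Secondary process: register the handler once at startup. */
static void
secondary_register(void)
{
	rte_mp_action_register("port_attach", secondary_handle_attach);
}

/* Primary process: broadcast the notification after attaching the device. */
static void
primary_notify_attach(uint16_t port_id)
{
	struct rte_mp_msg msg;

	memset(&msg, 0, sizeof(msg));
	strlcpy(msg.name, "port_attach", sizeof(msg.name));
	memcpy(msg.param, &port_id, sizeof(port_id));
	msg.len_param = sizeof(port_id);
	rte_mp_sendmsg(&msg);
}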
> ---
> -v6: adjust rte_eth_dev_is_used position based on alphabetical order
> in version.map
> -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
> -v4: fix a misspelling.
> -v3:
> #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
> for other bus type.
> #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
> the problem in patch 2/5.
> -v2: resend due to CI unexplained failure.
>
> Huisong Li (5):
> drivers/bus: restore driver assignment at front of probing
> ethdev: fix skip valid port in probing callback
> app/testpmd: check the validity of the port
> app/testpmd: add attach and detach port for multiple process
> app/testpmd: stop forwarding in new or destroy event
>
> app/test-pmd/testpmd.c | 47 +++++++++++++++---------
> app/test-pmd/testpmd.h | 1 -
> drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
> drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
> drivers/bus/fslmc/fslmc_bus.c | 8 +++-
> drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
> drivers/bus/pci/pci_common.c | 9 ++++-
> drivers/bus/vdev/vdev.c | 10 ++++-
> drivers/bus/vmbus/vmbus_common.c | 9 ++++-
> drivers/net/bnxt/bnxt_ethdev.c | 3 +-
> drivers/net/bonding/bonding_testpmd.c | 1 -
> drivers/net/mlx5/mlx5.c | 2 +-
> lib/ethdev/ethdev_driver.c | 13 +++++--
> lib/ethdev/ethdev_driver.h | 12 ++++++
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 4 +-
> lib/ethdev/rte_ethdev.h | 4 +-
> lib/ethdev/version.map | 1 +
> 19 files changed, 114 insertions(+), 44 deletions(-)
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: deprecation notice to add RSS hash algorithm field
@ 2023-06-06 15:50 3% ` Ferruh Yigit
2023-06-06 16:35 3% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-06-06 15:50 UTC (permalink / raw)
To: Stephen Hemminger, Dongdong Liu; +Cc: dev
On 6/6/2023 4:39 PM, Stephen Hemminger wrote:
> On Tue, 6 Jun 2023 20:11:26 +0800
> Dongdong Liu <liudongdong3@huawei.com> wrote:
>
>> Deprecation notice to add "func" field to ``rte_eth_rss_conf``
>> structure for RSS hash algorithm.
>>
>> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
>> ---
>
> New fields do not require deprecation notice.
> Since this seems to be a repeated issue, perhaps someone should
> add this to the documentation.
>
Hi Stephen,
This is follow up to an existing patchset:
https://patches.dpdk.org/project/dpdk/list/?series=27400&state=*
Although the field is an addition to the "struct rte_eth_rss_conf" struct, that
struct is embedded into "struct rte_eth_conf", which is a parameter to an API, so
the change causes a size increase in the outer struct and an ABI breakage,
requiring a deprecation notice.
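To make the layout issue concrete, a simplified sketch follows (not the verbatim headers: the field lists are abridged and the name of the new member is only illustrative of the proposal above):

struct rte_eth_rss_conf {
	uint8_t *rss_key;
	uint8_t rss_key_len;
	uint64_t rss_hf;
	/* proposed addition, e.g. a "func"/hash-algorithm member */
};

struct rte_eth_conf {
	/* ... */
	struct {
		struct rte_eth_rss_conf rss_conf; /* embedded by value */
		/* ... members after this shift if rss_conf grows ... */
	} rx_adv_conf;
	/* ... */
};

/* rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf) receives the outer
 * struct, so an application built against the old layout and a library built
 * against the new one would disagree on member offsets, hence the ABI break.
 */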
^ permalink raw reply [relevance 3%]
* Re: [PATCH] ethdev: validate reserved fields
2023-06-06 15:24 3% ` Ferruh Yigit
@ 2023-06-06 15:38 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-06-06 15:38 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, thomas, Andrew Rybchenko
On Tue, 6 Jun 2023 16:24:36 +0100
Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> No objection to validating reserved fields, but if any application was
> passing non-zero values before (I think this was possible), it will
> break with this change, so can we get this patch in this release?
>
> Or should it wait for an ABI-break release?
> If we wait for v23.11, perhaps this should be applied to all structs
> with reserved fields, and it may be good to add a deprecation notice in
> this release, what do you think?
Yes, do this in 23.11 (early in the merge cycle).
I did ethdev because it is heavily used, and easy to test and validate.
^ permalink raw reply [relevance 0%]
* Re: [PATCH] ethdev: validate reserved fields
2023-05-25 20:39 8% [PATCH] ethdev: validate reserved fields Stephen Hemminger
2023-05-26 8:15 0% ` Bruce Richardson
@ 2023-06-06 15:24 3% ` Ferruh Yigit
2023-06-06 15:38 0% ` Stephen Hemminger
1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-06-06 15:24 UTC (permalink / raw)
To: Stephen Hemminger, dev; +Cc: thomas, Andrew Rybchenko
On 5/25/2023 9:39 PM, Stephen Hemminger wrote:
> The various reserved fields added to ethdev could not be
> safely used for future extensions because they were never
> checked on input. Therefore ABI would be broken if these
> fields were added in a future DPDK release.
>
> Fixes: 436b3a6b6e62 ("ethdev: reserve space in main structs for extension")
> Cc: thomas@monjalon.net
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> lib/ethdev/rte_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 41 insertions(+)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d0325568322..4f937a1914c9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1228,6 +1228,25 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> /* Backup mtu for rollback */
> old_mtu = dev->data->mtu;
>
> + /* fields must be zero to reserve them for future ABI changes */
> + if (dev_conf->rxmode.reserved_64s[0] != 0 ||
> + dev_conf->rxmode.reserved_64s[1] != 0 ||
> + dev_conf->rxmode.reserved_ptrs[0] != NULL ||
> + dev_conf->rxmode.reserved_ptrs[1] != NULL) {
> + RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n");
> + ret = -EINVAL;
> + goto rollback;
> + }
> +
> + if (dev_conf->txmode.reserved_64s[0] != 0 ||
> + dev_conf->txmode.reserved_64s[1] != 0 ||
> + dev_conf->txmode.reserved_ptrs[0] != NULL ||
> + dev_conf->txmode.reserved_ptrs[1] != NULL) {
> + RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n");
> + ret = -EINVAL;
> + goto rollback;
> + }
> +
>
No objection to validating reserved fields, but if any application was
passing non-zero values before (I think this was possible), it will
break with this change, so can we get this patch in this release?
Or should it wait for an ABI-break release?
If we wait for v23.11, perhaps this should be applied to all structs
with reserved fields, and it may be good to add a deprecation notice in
this release, what do you think?
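For application authors, a minimal sketch of the expectation once such validation is in place (an illustrative helper; only the reserved-field handling matters here):

#include <string.h>
#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	/* Zero-initialise the whole struct so reserved_64s[] and
	 * reserved_ptrs[] are 0/NULL, which the new checks in
	 * rte_eth_dev_configure() will require.
	 */
	memset(&conf, 0, sizeof(conf));
	/* ... set only the members the application actually needs ... */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}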
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-05 20:08 4% ` Chautru, Nicolas
@ 2023-06-06 9:20 4% ` David Marchand
2023-06-06 21:01 0% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-06-06 9:20 UTC (permalink / raw)
To: Chautru, Nicolas
Cc: Maxime Coquelin, Stephen Hemminger, dev, Rix, Tom,
hemant.agrawal, Vargas, Hernan
On Mon, Jun 5, 2023 at 10:08 PM Chautru, Nicolas
<nicolas.chautru@intel.com> wrote:
> Wrt the MLD functions: these are new into the related serie but still the break the ABI since the struct rte_bbdev includes these functions hence causing offset changes.
>
> Should I then just rephrase as:
>
> +* bbdev: Will extend the API to support the new operation type
> +``RTE_BBDEV_OP_MLDTS`` as per
> + this `v1
> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This
> + will notably introduce
> + new symbols for ``rte_bbdev_dequeue_mldts_ops``,
> +``rte_bbdev_enqueue_mldts_ops`` into the stuct rte_bbdev.
I don't think we need this deprecation notice.
Do you need to expose those new mldts ops in rte_bbdev?
Can't they go to dev_ops?
If you can't, at least moving those new ops at the end of the
structure would avoid the breakage on rte_bbdev.
>
> Pasting below the ABI results for reference
>
> [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at rte_bbdev.c:174:1 has some indirect sub-type changes:
> return type changed:
> in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
> type size hasn't changed
> 2 data member insertions:
> 'rte_bbdev_enqueue_mldts_ops_t rte_bbdev::enqueue_mldts_ops', at offset 640 (in bits) at rte_bbdev.h:520:1
> 'rte_bbdev_dequeue_mldts_ops_t rte_bbdev::dequeue_mldts_ops', at offset 704 (in bits) at rte_bbdev.h:522:1
> 7 data member changes (9 filtered):
> type of 'rte_bbdev_dequeue_fft_ops_t rte_bbdev::dequeue_fft_ops' changed:
> underlying type 'typedef uint16_t (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)*' changed:
> in pointed to type 'function type typedef uint16_t (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)':
> parameter 2 of type 'rte_bbdev_fft_op**' has sub-type changes:
> in pointed to type 'rte_bbdev_fft_op*':
> in pointed to type 'struct rte_bbdev_fft_op' at rte_bbdev_op.h:978:1:
> type size changed from 832 to 1664 (in bits)
> 1 data member change:
> type of 'rte_bbdev_op_fft rte_bbdev_fft_op::fft' changed:
> type size changed from 640 to 1472 (in bits)
> 6 data member insertions:
> 'rte_bbdev_op_data rte_bbdev_op_fft::dewindowing_input', at offset 256 (in bits) at rte_bbdev_op.h:771:1
> 'int8_t rte_bbdev_op_fft::freq_resample_mode', at offset 768 (in bits) at rte_bbdev_op.h:807:1
> 'uint16_t rte_bbdev_op_fft::output_depadded_size', at offset 784 (in bits) at rte_bbdev_op.h:809:1
> 'uint16_t rte_bbdev_op_fft::cs_theta_0[12]', at offset 800 (in bits) at rte_bbdev_op.h:811:1
> 'uint32_t rte_bbdev_op_fft::cs_theta_d[12]', at offset 992 (in bits) at rte_bbdev_op.h:813:1
> 'int8_t rte_bbdev_op_fft::time_offset[12]', at offset 1376 (in bits) at rte_bbdev_op.h:815:1
> 17 data member changes:
> 'rte_bbdev_op_data rte_bbdev_op_fft::power_meas_output' offset changed from 256 to 384 (in bits) (by +128 bits)
> 'uint32_t rte_bbdev_op_fft::op_flags' offset changed from 384 to 512 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::input_sequence_size' offset changed from 416 to 544 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::input_leading_padding' offset changed from 432 to 560 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::output_sequence_size' offset changed from 448 to 576 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::output_leading_depadding' offset changed from 464 to 592 (in bits) (by +128 bits)
> 'uint8_t rte_bbdev_op_fft::window_index[6]' offset changed from 480 to 608 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::cs_bitmap' offset changed from 528 to 656 (in bits) (by +128 bits)
> 'uint8_t rte_bbdev_op_fft::num_antennas_log2' offset changed from 544 to 672 (in bits) (by +128 bits)
> 'uint8_t rte_bbdev_op_fft::idft_log2' offset changed from 552 to 680 (in bits) (by +128 bits)
> 'uint8_t rte_bbdev_op_fft::dft_log2' offset changed from 560 to 688 (in bits) (by +128 bits)
> 'int8_t rte_bbdev_op_fft::cs_time_adjustment' offset changed from 568 to 696 (in bits) (by +128 bits)
> 'int8_t rte_bbdev_op_fft::idft_shift' offset changed from 576 to 704 (in bits) (by +128 bits)
> 'int8_t rte_bbdev_op_fft::dft_shift' offset changed from 584 to 712 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::ncs_reciprocal' offset changed from 592 to 720 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::power_shift' offset changed from 608 to 736 (in bits) (by +128 bits)
> 'uint16_t rte_bbdev_op_fft::fp16_exp_adjust' offset changed from 624 to 752 (in bits) (by +128 bits)
> 'const rte_bbdev_ops* rte_bbdev::dev_ops' offset changed from 640 to 768 (in bits) (by +128 bits)
> 'rte_bbdev_data* rte_bbdev::data' offset changed from 704 to 832 (in bits) (by +128 bits)
> 'rte_bbdev_state rte_bbdev::state' offset changed from 768 to 896 (in bits) (by +128 bits)
> 'rte_device* rte_bbdev::device' offset changed from 832 to 960 (in bits) (by +128 bits)
> 'rte_bbdev_cb_list rte_bbdev::list_cbs' offset changed from 896 to 1024 (in bits) (by +128 bits)
> 'rte_intr_handle* rte_bbdev::intr_handle' offset changed from 1024 to 1152 (in bits) (by +128 bits)
As for the report on the rte_bbdev_op_fft structure changes:
- wrt to its size, I think it is okay to waive it, rte_bbdev_fft_op
objects are coming from a bbdev mempool which is created by the bbdev
library itself (with the right element size if the application asked
for RTE_BBDEV_OP_FFT type),
- wrt to the fields locations, an application may have been touching
those fields, so moving all the added fields at the end of the
structure would be better.
But on the other hand, an application will have to call an fft_ops
experimental API at some point, and the application developer is
already warned that ABI is not preserved on this part of the API,
So I would waive the changes on rte_bbdev_fft_op with something like:
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..3cdce69418 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -36,6 +36,8 @@
[suppress_type]
type_kind = enum
changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM,
RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
+[suppress_type]
+ name = rte_bbdev_fft_op
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
--
David Marchand
^ permalink raw reply [relevance 4%]
* RE: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
2023-06-06 7:31 3% ` Feifei Wang
@ 2023-06-06 8:34 0% ` Konstantin Ananyev
2023-06-07 0:00 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2023-06-06 8:34 UTC (permalink / raw)
To: Feifei Wang, Константин Ананьев, thomas, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, nd, nd
>
> [...]
> > > Probably I am missing something, but why it is not possible to do something
> > like that:
> > >
> > > rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> > > tx_queue_id=M, ...); ....
> > > rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> > > tx_queue_id=K, ...);
> > >
> > > I.E. feed rx queue from 2 tx queues?
> > >
> > > Two problems for this:
> > > 1. If we have 2 tx queues for rx, the thread should make the extra
> > > judgement to decide which one to choose in the driver layer.
> >
> > Not sure, why on the driver layer?
> > The example I gave above - decision is made on application layer.
> > Lets say first call didn't free enough mbufs, so app decided to use second txq
> > for rearm.
> [Feifei] I think currently mbuf recycle mode can support this usage. For example:
> n = rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
> if (n < planned_number)
> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
>
> Thus, if users want, they can do like this.
Yes, that was my thought; that's why I was surprised that in the comments we have:
" Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
* between the receive queue and transmit queue. Do not pair one receive queue with
* multiple transmit queues or pair one transmit queue with multiple receive queues,
* in order to avoid memory error rewriting."
>
> >
> > > On the other hand, current mechanism can support users to switch 1 txq
> > > to another timely in the application layer. If user want to choose
> > > another txq, he just need to change the txq_queue_id parameter in the API.
> > > 2. If you want one rxq to support two txq at the same time, this needs
> > > to add spinlock on guard variable to avoid multi-thread conflict.
> > > Spinlock will decrease the data-path performance greatly. Thus, we do
> > > not consider
> > > 1 rxq mapping multiple txqs here.
> >
> > I am talking about situation when one thread controls 2 tx queues.
> >
> > > + *
> > > + * @param rx_port_id
> > > + * Port identifying the receive side.
> > > + * @param rx_queue_id
> > > + * The index of the receive queue identifying the receive side.
> > > + * The value must be in the range [0, nb_rx_queue - 1] previously
> > > +supplied
> > > + * to rte_eth_dev_configure().
> > > + * @param tx_port_id
> > > + * Port identifying the transmit side.
> > > + * @param tx_queue_id
> > > + * The index of the transmit queue identifying the transmit side.
> > > + * The value must be in the range [0, nb_tx_queue - 1] previously
> > > +supplied
> > > + * to rte_eth_dev_configure().
> > > + * @param recycle_rxq_info
> > > + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which
> > > +contains
> > > + * the information of the Rx queue mbuf ring.
> > > + * @return
> > > + * The number of recycling mbufs.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> > > +uint16_t tx_port_id, uint16_t tx_queue_id, struct
> > > +rte_eth_recycle_rxq_info *recycle_rxq_info) { struct rte_eth_fp_ops
> > > +*p; void *qd; uint16_t nb_mbufs;
> > > +
> > > +#ifdef RTE_ETHDEV_DEBUG_TX
> > > + if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >=
> > > +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> > > +tx_port_id=%u or tx_queue_id=%u\n", tx_port_id, tx_queue_id);
> > > +return 0; } #endif
> > > +
> > > + /* fetch pointer to queue data */
> > > + p = &rte_eth_fp_ops[tx_port_id];
> > > + qd = p->txq.data[tx_queue_id];
> > > +
> > > +#ifdef RTE_ETHDEV_DEBUG_TX
> > > + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> > > +
> > > + if (qd == NULL) {
> > > + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> > > +tx_queue_id, tx_port_id); return 0; } #endif if
> > > +(p->recycle_tx_mbufs_reuse == NULL) return 0;
> > > +
> > > + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> > > + * into Rx mbuf ring.
> > > + */
> > > + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
> > > +
> > > + /* If no recycling mbufs, return 0. */ if (nb_mbufs == 0) return 0;
> > > +
> > > +#ifdef RTE_ETHDEV_DEBUG_RX
> > > + if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >=
> > > +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> > > +rx_port_id=%u or rx_queue_id=%u\n", rx_port_id, rx_queue_id);
> > > +return 0; } #endif
> > > +
> > > + /* fetch pointer to queue data */
> > > + p = &rte_eth_fp_ops[rx_port_id];
> > > + qd = p->rxq.data[rx_queue_id];
> > > +
> > > +#ifdef RTE_ETHDEV_DEBUG_RX
> > > + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> > > +
> > > + if (qd == NULL) {
> > > + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> > > +rx_queue_id, rx_port_id); return 0; } #endif
> > > +
> > > + if (p->recycle_rx_descriptors_refill == NULL) return 0;
> > > +
> > > + /* Replenish the Rx descriptors with the recycling
> > > + * into Rx mbuf ring.
> > > + */
> > > + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> > > +
> > > + return nb_mbufs;
> > > +}
> > > +
> > > /**
> > > * @warning
> > > * @b EXPERIMENTAL: this API may change without prior notice diff
> > > --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > > index dcf8adab92..a2e6ea6b6c 100644
> > > --- a/lib/ethdev/rte_ethdev_core.h
> > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void
> > > *rxq, uint16_t offset);
> > > /** @internal Check the status of a Tx descriptor */ typedef int
> > > (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> > >
> > > +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
> > > +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq, struct
> > > +rte_eth_recycle_rxq_info *recycle_rxq_info);
> > > +
> > > +/** @internal Refill Rx descriptors with the recycling mbufs */
> > > +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq,
> > > +uint16_t nb);
> > > +
> > > /**
> > > * @internal
> > > * Structure used to hold opaque pointers to internal ethdev Rx/Tx @@
> > > -90,9 +97,11 @@ struct rte_eth_fp_ops {
> > > eth_rx_queue_count_t rx_queue_count;
> > > /** Check the status of a Rx descriptor. */
> > > eth_rx_descriptor_status_t rx_descriptor_status;
> > > + /** Refill Rx descriptors with the recycling mbufs. */
> > > + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > > I am afraid we can't put new fields here without ABI breakage.
> > >
> > > Agree
> > >
> > > It has to be below rxq.
> > > Now thinking about current layout probably not the best one, and when
> > > introducing this struct, I should probably put rxq either on the top
> > > of the struct, or on the next cache line.
> > > But such change is not possible right now anyway.
> > > Same story for txq.
> > >
> > > Thus we should rearrange the structure like below:
> > > struct rte_eth_fp_ops {
> > > struct rte_ethdev_qdata rxq;
> > > eth_rx_burst_t rx_pkt_burst;
> > > eth_rx_queue_count_t rx_queue_count;
> > > eth_rx_descriptor_status_t rx_descriptor_status;
> > > eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > > uintptr_t reserved1[2];
> > > }
> >
> > Yes, I think such layout will be better.
> > The only problem here - we have to wait for 23.11 for that.
> >
> Ok, if not this change, maybe we still need to wait. Because mbufs_recycle have other
> ABI breakage. Such as the change for 'struct rte_eth_dev'.
Ok by me.
> > >
> > >
> > > /** Rx queues data. */
> > > struct rte_ethdev_qdata rxq;
> > > - uintptr_t reserved1[3];
> > > + uintptr_t reserved1[2];
> > > /**@}*/
> > >
> > > /**@{*/
> > > @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
> > > eth_tx_prep_t tx_pkt_prepare;
> > > /** Check the status of a Tx descriptor. */
> > > eth_tx_descriptor_status_t tx_descriptor_status;
> > > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > > /** Tx queues data. */
> > > struct rte_ethdev_qdata txq;
> > > - uintptr_t reserved2[3];
> > > + uintptr_t reserved2[2];
> > > /**@}*/
> > >
> > > } __rte_cache_aligned;
> > > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
> > > 357d1a88c0..45c417f6bd 100644
> > > --- a/lib/ethdev/version.map
> > > +++ b/lib/ethdev/version.map
> > > @@ -299,6 +299,10 @@ EXPERIMENTAL {
> > > rte_flow_action_handle_query_update;
> > > rte_flow_async_action_handle_query_update;
> > > rte_flow_async_create_by_index;
> > > +
> > > + # added in 23.07
> > > + rte_eth_recycle_mbufs;
> > > + rte_eth_recycle_rx_queue_info_get;
> > > };
> > >
> > > INTERNAL {
> > > --
> > > 2.25.1
> > >
^ permalink raw reply [relevance 0%]
* RE: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
2023-06-06 7:10 0% ` Konstantin Ananyev
@ 2023-06-06 7:31 3% ` Feifei Wang
2023-06-06 8:34 0% ` Konstantin Ananyev
0 siblings, 1 reply; 200+ results
From: Feifei Wang @ 2023-06-06 7:31 UTC (permalink / raw)
To: Konstantin Ananyev, Константин Ананьев, thomas, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, nd, nd
[...]
> > Probably I am missing something, but why it is not possible to do something
> like that:
> >
> > rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> > tx_queue_id=M, ...); ....
> > rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
> > tx_queue_id=K, ...);
> >
> > I.E. feed rx queue from 2 tx queues?
> >
> > Two problems for this:
> > 1. If we have 2 tx queues for rx, the thread should make the extra
> > judgement to decide which one to choose in the driver layer.
>
> Not sure, why on the driver layer?
> The example I gave above - decision is made on application layer.
> Lets say first call didn't free enough mbufs, so app decided to use second txq
> for rearm.
[Feifei] I think currently mbuf recycle mode can support this usage. For example:
n = rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
if (n < planned_number)
rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
Thus, if users want, they can do like this.
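Expanded into a slightly fuller sketch (a single lcore owns both Tx queues; the queue ids, planned_number and error handling are illustrative, and the queue info would normally be retrieved once at setup time):

static void
refill_from_two_txqs(uint16_t rx_port, uint16_t rx_queue, uint16_t tx_port,
		uint16_t txq_m, uint16_t txq_k, uint16_t planned_number)
{
	struct rte_eth_recycle_rxq_info rxq_info;
	uint16_t n;

	rte_eth_recycle_rx_queue_info_get(rx_port, rx_queue, &rxq_info);

	/* Try to refill the Rx mbuf ring from the first Tx queue ... */
	n = rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, txq_m, &rxq_info);
	/* ... and fall back to a second Tx queue owned by the same lcore. */
	if (n < planned_number)
		rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, txq_k, &rxq_info);
}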
>
> > On the other hand, current mechanism can support users to switch 1 txq
> > to another timely in the application layer. If user want to choose
> > another txq, he just need to change the txq_queue_id parameter in the API.
> > 2. If you want one rxq to support two txq at the same time, this needs
> > to add spinlock on guard variable to avoid multi-thread conflict.
> > Spinlock will decrease the data-path performance greatly. Thus, we do
> > not consider
> > 1 rxq mapping multiple txqs here.
>
> I am talking about situation when one thread controls 2 tx queues.
>
> > + *
> > + * @param rx_port_id
> > + * Port identifying the receive side.
> > + * @param rx_queue_id
> > + * The index of the receive queue identifying the receive side.
> > + * The value must be in the range [0, nb_rx_queue - 1] previously
> > +supplied
> > + * to rte_eth_dev_configure().
> > + * @param tx_port_id
> > + * Port identifying the transmit side.
> > + * @param tx_queue_id
> > + * The index of the transmit queue identifying the transmit side.
> > + * The value must be in the range [0, nb_tx_queue - 1] previously
> > +supplied
> > + * to rte_eth_dev_configure().
> > + * @param recycle_rxq_info
> > + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which
> > +contains
> > + * the information of the Rx queue mbuf ring.
> > + * @return
> > + * The number of recycling mbufs.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> > +uint16_t tx_port_id, uint16_t tx_queue_id, struct
> > +rte_eth_recycle_rxq_info *recycle_rxq_info) { struct rte_eth_fp_ops
> > +*p; void *qd; uint16_t nb_mbufs;
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_TX
> > + if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >=
> > +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> > +tx_port_id=%u or tx_queue_id=%u\n", tx_port_id, tx_queue_id);
> > +return 0; } #endif
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[tx_port_id];
> > + qd = p->txq.data[tx_queue_id];
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_TX
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> > +
> > + if (qd == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> > +tx_queue_id, tx_port_id); return 0; } #endif if
> > +(p->recycle_tx_mbufs_reuse == NULL) return 0;
> > +
> > + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> > + * into Rx mbuf ring.
> > + */
> > + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
> > +
> > + /* If no recycling mbufs, return 0. */ if (nb_mbufs == 0) return 0;
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_RX
> > + if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >=
> > +RTE_MAX_QUEUES_PER_PORT) { RTE_ETHDEV_LOG(ERR, "Invalid
> > +rx_port_id=%u or rx_queue_id=%u\n", rx_port_id, rx_queue_id);
> > +return 0; } #endif
> > +
> > + /* fetch pointer to queue data */
> > + p = &rte_eth_fp_ops[rx_port_id];
> > + qd = p->rxq.data[rx_queue_id];
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_RX
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> > +
> > + if (qd == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> > +rx_queue_id, rx_port_id); return 0; } #endif
> > +
> > + if (p->recycle_rx_descriptors_refill == NULL) return 0;
> > +
> > + /* Replenish the Rx descriptors with the recycling
> > + * into Rx mbuf ring.
> > + */
> > + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> > +
> > + return nb_mbufs;
> > +}
> > +
> > /**
> > * @warning
> > * @b EXPERIMENTAL: this API may change without prior notice diff
> > --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index dcf8adab92..a2e6ea6b6c 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void
> > *rxq, uint16_t offset);
> > /** @internal Check the status of a Tx descriptor */ typedef int
> > (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >
> > +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
> > +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq, struct
> > +rte_eth_recycle_rxq_info *recycle_rxq_info);
> > +
> > +/** @internal Refill Rx descriptors with the recycling mbufs */
> > +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq,
> > +uint16_t nb);
> > +
> > /**
> > * @internal
> > * Structure used to hold opaque pointers to internal ethdev Rx/Tx @@
> > -90,9 +97,11 @@ struct rte_eth_fp_ops {
> > eth_rx_queue_count_t rx_queue_count;
> > /** Check the status of a Rx descriptor. */
> > eth_rx_descriptor_status_t rx_descriptor_status;
> > + /** Refill Rx descriptors with the recycling mbufs. */
> > + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > I am afraid we can't put new fields here without ABI breakage.
> >
> > Agree
> >
> > It has to be below rxq.
> > Now thinking about current layout probably not the best one, and when
> > introducing this struct, I should probably put rxq either on the top
> > of the struct, or on the next cache line.
> > But such change is not possible right now anyway.
> > Same story for txq.
> >
> > Thus we should rearrange the structure like below:
> > struct rte_eth_fp_ops {
> > struct rte_ethdev_qdata rxq;
> > eth_rx_burst_t rx_pkt_burst;
> > eth_rx_queue_count_t rx_queue_count;
> > eth_rx_descriptor_status_t rx_descriptor_status;
> > eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > uintptr_t reserved1[2];
> > }
>
> Yes, I think such layout will be better.
> The only problem here - we have to wait for 23.11 for that.
>
Ok, if not this change, maybe we still need to wait. Because mbufs_recycle have other
ABI breakage. Such as the change for 'struct rte_eth_dev'.
> >
> >
> > /** Rx queues data. */
> > struct rte_ethdev_qdata rxq;
> > - uintptr_t reserved1[3];
> > + uintptr_t reserved1[2];
> > /**@}*/
> >
> > /**@{*/
> > @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
> > eth_tx_prep_t tx_pkt_prepare;
> > /** Check the status of a Tx descriptor. */
> > eth_tx_descriptor_status_t tx_descriptor_status;
> > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > /** Tx queues data. */
> > struct rte_ethdev_qdata txq;
> > - uintptr_t reserved2[3];
> > + uintptr_t reserved2[2];
> > /**@}*/
> >
> > } __rte_cache_aligned;
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index
> > 357d1a88c0..45c417f6bd 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -299,6 +299,10 @@ EXPERIMENTAL {
> > rte_flow_action_handle_query_update;
> > rte_flow_async_action_handle_query_update;
> > rte_flow_async_create_by_index;
> > +
> > + # added in 23.07
> > + rte_eth_recycle_mbufs;
> > + rte_eth_recycle_rx_queue_info_get;
> > };
> >
> > INTERNAL {
> > --
> > 2.25.1
> >
^ permalink raw reply [relevance 3%]
* RE: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
2023-06-06 2:55 1% ` Feifei Wang
@ 2023-06-06 7:10 0% ` Konstantin Ananyev
2023-06-06 7:31 3% ` Feifei Wang
0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2023-06-06 7:10 UTC (permalink / raw)
To: Feifei Wang, Константин Ананьев, thomas, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, nd
>
> Thanks for the comments.
>
> From: Константин Ананьев <konstantin.v.ananyev@yandex.ru>
> Sent: Monday, June 5, 2023 8:54 PM
> To: Feifei Wang <Feifei.Wang2@arm.com>; thomas@monjalon.net; Ferruh Yigit <ferruh.yigit@amd.com>;
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
>
>
>
> Hi Feifei,
>
> few more comments from me, see below.
> Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
> APIs to recycle used mbufs from a transmit queue of an Ethernet device,
> and move these mbufs into a mbuf ring for a receive queue of an Ethernet
> device. This can bypass mempool 'put/get' operations hence saving CPU
> cycles.
>
> For each recycling mbufs, the rte_eth_recycle_mbufs() function performs
> the following operations:
> - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
> ring.
> - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
> from the Tx mbuf ring.
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
> doc/guides/rel_notes/release_23_07.rst | 7 +
> lib/ethdev/ethdev_driver.h | 10 ++
> lib/ethdev/ethdev_private.c | 2 +
> lib/ethdev/rte_ethdev.c | 31 +++++
> lib/ethdev/rte_ethdev.h | 182 +++++++++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 15 +-
> lib/ethdev/version.map | 4 +
> 7 files changed, 249 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..f279036cb9 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Add mbufs recycling support. **
> + Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
> + APIs which allow the user to copy used mbufs from the Tx mbuf ring
> + into the Rx mbuf ring. This feature supports the case that the Rx Ethernet
> + device is different from the Tx Ethernet device with respective driver
> + callback functions in ``rte_eth_recycle_mbufs``.
> +
>
> Removed Items
> -------------
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 2c9d615fb5..c6723d5277 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -59,6 +59,10 @@ struct rte_eth_dev {
> eth_rx_descriptor_status_t rx_descriptor_status;
> /** Check the status of a Tx descriptor */
> eth_tx_descriptor_status_t tx_descriptor_status;
> + /** Pointer to PMD transmit mbufs reuse function */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + /** Pointer to PMD receive descriptors refill function */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>
> /**
> * Device data that is shared between primary and secondary processes
> @@ -504,6 +508,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
> typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
> uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
>
> +typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
> + uint16_t rx_queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
> uint16_t queue_id, struct rte_eth_burst_mode *mode);
>
> @@ -1247,6 +1255,8 @@ struct eth_dev_ops {
> eth_rxq_info_get_t rxq_info_get;
> /** Retrieve Tx queue information */
> eth_txq_info_get_t txq_info_get;
> + /** Retrieve mbufs recycle Rx queue information */
> + eth_recycle_rxq_info_get_t recycle_rxq_info_get;
> eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
> eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
> eth_fw_version_get_t fw_version_get; /**< Get firmware version */
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 14ec8c6ccf..f8ab64f195 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> fpo->rx_queue_count = dev->rx_queue_count;
> fpo->rx_descriptor_status = dev->rx_descriptor_status;
> fpo->tx_descriptor_status = dev->tx_descriptor_status;
> + fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
> + fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
>
> fpo->rxq.data = dev->data->rx_queues;
> fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d03255683..7c27dcfea4 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5784,6 +5784,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> return 0;
> }
>
> +int
> +rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (queue_id >= dev->data->nb_rx_queues) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
> + return -EINVAL;
> + }
> +
> + if (dev->data->rx_queues == NULL ||
> + dev->data->rx_queues[queue_id] == NULL) {
> + RTE_ETHDEV_LOG(ERR,
> + "Rx queue %"PRIu16" of device with port_id=%"
> + PRIu16" has not been setup\n",
> + queue_id, port_id);
> + return -EINVAL;
> + }
> +
> + if (*dev->dev_ops->recycle_rxq_info_get == NULL)
> + return -ENOTSUP;
> +
> + dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
> +
> + return 0;
> +}
> +
> int
> rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
> struct rte_eth_burst_mode *mode)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 99fe9e238b..7434aa2483 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
> uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
> } __rte_cache_min_aligned;
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * Ethernet device Rx queue information structure for recycling mbufs.
> + * Used to retrieve Rx queue information when Tx queue reusing mbufs and moving
> + * them into Rx mbuf ring.
> + */
> +struct rte_eth_recycle_rxq_info {
> + struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
> + struct rte_mempool *mp; /**< mempool of Rx queue. */
> + uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
> + uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
> + uint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */
> + /**
> + * Requirement on mbuf refilling batch size of Rx mbuf ring.
> + * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
> + * should be aligned with mbuf ring size, in order to simplify
> + * ring wrapping around.
> + * Value 0 means that PMD drivers have no requirement for this.
> + */
> + uint16_t refill_requirement;
> +} __rte_cache_min_aligned;
> +
> /* Generic Burst mode flag definition, values can be ORed. */
>
> /**
> @@ -4809,6 +4833,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> struct rte_eth_txq_info *qinfo);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Retrieve information about given ports's Rx queue for recycling mbufs.
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param queue_id
> + * The Rx queue on the Ethernet devicefor which information
> + * will be retrieved.
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
> + *
> + * @return
> + * - 0: Success
> + * - -ENODEV: If *port_id* is invalid.
> + * - -ENOTSUP: routine is not supported by the device PMD.
> + * - -EINVAL: The queue_id is out of range.
> + */
> +__rte_experimental
> +int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
> + uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> /**
> * Retrieve information about the Rx packet burst mode.
> *
> @@ -6483,6 +6532,139 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
> return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Recycle used mbufs from a transmit queue of an Ethernet device, and move
> + * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
> + * This can bypass mempool path to save CPU cycles.
> + *
> + * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and
> + * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing Rx
> + * descriptors. The number of recycling mbufs depends on the request of Rx mbuf
> + * ring, with the constraint of enough used mbufs from Tx mbuf ring.
> + *
> + * For each recycling mbufs, the rte_eth_recycle_mbufs() function performs the
> + * following operations:
> + *
> + * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
> + *
> + * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
> + * from the Tx mbuf ring.
> + *
> + * This function spilts Rx and Tx path with different callback functions. The
> + * callback function recycle_tx_mbufs_reuse is for Tx driver. The callback
> + * function recycle_rx_descriptors_refill is for Rx driver. rte_eth_recycle_mbufs()
> + * can support the case that Rx Ethernet device is different from Tx Ethernet device.
> + *
> + * It is the responsibility of users to select the Rx/Tx queue pair to recycle
> + * mbufs. Before call this function, users must call rte_eth_recycle_rxq_info_get
> + * function to retrieve selected Rx queue information.
> + * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
> + *
> + * Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
> + * between the receive queue and transmit queue. Do not pair one receive queue with
> + * multiple transmit queues or pair one transmit queue with multiple receive queues,
> + * in order to avoid memory error rewriting.
> Probably I am missing something, but why it is not possible to do something like that:
>
> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
> ....
> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
>
> I.E. feed rx queue from 2 tx queues?
>
> Two problems for this:
> 1. If we have 2 tx queues for rx, the thread should make the extra judgement to
> decide which one to choose in the driver layer.
Not sure, why on the driver layer?
The example I gave above - decision is made on application layer.
Lets say first call didn't free enough mbufs, so app decided to use second txq for rearm.
> On the other hand, current mechanism can support users to switch 1 txq to another timely
> in the application layer. If user want to choose another txq, he just need to change the txq_queue_id parameter
> in the API.
> 2. If you want one rxq to support two txq at the same time, this needs to add spinlock on guard variable to
> avoid multi-thread conflict. Spinlock will decrease the data-path performance greatly. Thus, we do not consider
> 1 rxq mapping multiple txqs here.
I am talking about situation when one thread controls 2 tx queues.
> + *
> + * @param rx_port_id
> + * Port identifying the receive side.
> + * @param rx_queue_id
> + * The index of the receive queue identifying the receive side.
> + * The value must be in the range [0, nb_rx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param tx_port_id
> + * Port identifying the transmit side.
> + * @param tx_queue_id
> + * The index of the transmit queue identifying the transmit side.
> + * The value must be in the range [0, nb_tx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
> + * the information of the Rx queue mbuf ring.
> + * @return
> + * The number of recycling mbufs.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> + uint16_t tx_port_id, uint16_t tx_queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> + struct rte_eth_fp_ops *p;
> + void *qd;
> + uint16_t nb_mbufs;
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + if (tx_port_id >= RTE_MAX_ETHPORTS ||
> + tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid tx_port_id=%u or tx_queue_id=%u\n",
> + tx_port_id, tx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[tx_port_id];
> + qd = p->txq.data[tx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> +
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> + tx_queue_id, tx_port_id);
> + return 0;
> + }
> +#endif
> + if (p->recycle_tx_mbufs_reuse == NULL)
> + return 0;
> +
> + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> + * into Rx mbuf ring.
> + */
> + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
> +
> + /* If no recycling mbufs, return 0. */
> + if (nb_mbufs == 0)
> + return 0;
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + if (rx_port_id >= RTE_MAX_ETHPORTS ||
> + rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
> + rx_port_id, rx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[rx_port_id];
> + qd = p->rxq.data[rx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> +
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> + rx_queue_id, rx_port_id);
> + return 0;
> + }
> +#endif
> +
> + if (p->recycle_rx_descriptors_refill == NULL)
> + return 0;
> +
> + /* Replenish the Rx descriptors with the recycling
> + * into Rx mbuf ring.
> + */
> + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> +
> + return nb_mbufs;
> +}
> +
> /**
> * @warning
> * @b EXPERIMENTAL: this API may change without prior notice
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index dcf8adab92..a2e6ea6b6c 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> /** @internal Check the status of a Tx descriptor */
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>
> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> +/** @internal Refill Rx descriptors with the recycling mbufs */
> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
> +
> /**
> * @internal
> * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> @@ -90,9 +97,11 @@ struct rte_eth_fp_ops {
> eth_rx_queue_count_t rx_queue_count;
> /** Check the status of a Rx descriptor. */
> eth_rx_descriptor_status_t rx_descriptor_status;
> + /** Refill Rx descriptors with the recycling mbufs. */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> I am afraid we can't put new fields here without ABI breakage.
>
> Agree
>
> It has to be below rxq.
> Now thinking about current layout probably not the best one,
> and when introducing this struct, I should probably put rxq either
> on the top of the struct, or on the next cache line.
> But such change is not possible right now anyway.
> Same story for txq.
>
> Thus we should rearrange the structure like below:
> struct rte_eth_fp_ops {
> struct rte_ethdev_qdata rxq;
> eth_rx_burst_t rx_pkt_burst;
> eth_rx_queue_count_t rx_queue_count;
> eth_rx_descriptor_status_t rx_descriptor_status;
> eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> uintptr_t reserved1[2];
> }
Yes, I think such layout will be better.
The only problem here - we have to wait for 23.11 for that.
>
>
> /** Rx queues data. */
> struct rte_ethdev_qdata rxq;
> - uintptr_t reserved1[3];
> + uintptr_t reserved1[2];
> /**@}*/
>
> /**@{*/
> @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
> eth_tx_prep_t tx_pkt_prepare;
> /** Check the status of a Tx descriptor. */
> eth_tx_descriptor_status_t tx_descriptor_status;
> + /** Copy used mbufs from Tx mbuf ring into Rx. */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> /** Tx queues data. */
> struct rte_ethdev_qdata txq;
> - uintptr_t reserved2[3];
> + uintptr_t reserved2[2];
> /**@}*/
>
> } __rte_cache_aligned;
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 357d1a88c0..45c417f6bd 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -299,6 +299,10 @@ EXPERIMENTAL {
> rte_flow_action_handle_query_update;
> rte_flow_async_action_handle_query_update;
> rte_flow_async_create_by_index;
> +
> + # added in 23.07
> + rte_eth_recycle_mbufs;
> + rte_eth_recycle_rx_queue_info_get;
> };
>
> INTERNAL {
> --
> 2.25.1
>
^ permalink raw reply [relevance 0%]
* RE: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
@ 2023-06-06 2:55 1% ` Feifei Wang
2023-06-06 7:10 0% ` Konstantin Ananyev
0 siblings, 1 reply; 200+ results
From: Feifei Wang @ 2023-06-06 2:55 UTC (permalink / raw)
To: Константин Ананьев, thomas, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, nd
[-- Attachment #1: Type: text/plain, Size: 17652 bytes --]
Thanks for the comments.
From: Константин Ананьев <konstantin.v.ananyev@yandex.ru>
Sent: Monday, June 5, 2023 8:54 PM
To: Feifei Wang <Feifei.Wang2@arm.com>; thomas@monjalon.net; Ferruh Yigit <ferruh.yigit@amd.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
Subject: Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
Hi Feifei,
few more comments from me, see below.
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.
For each recycled mbuf, the rte_eth_recycle_mbufs() function performs
the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
from the Tx mbuf ring.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com<mailto:honnappa.nagarahalli@arm.com>>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com<mailto:ruifeng.wang@arm.com>>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com<mailto:feifei.wang2@arm.com>>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com<mailto:ruifeng.wang@arm.com>>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com<mailto:honnappa.nagarahalli@arm.com>>
---
doc/guides/rel_notes/release_23_07.rst | 7 +
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 31 +++++
lib/ethdev/rte_ethdev.h | 182 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 15 +-
lib/ethdev/version.map | 4 +
7 files changed, 249 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..f279036cb9 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Add mbufs recycling support. **
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. This feature supports the case that the Rx Ethernet
+ device is different from the Tx Ethernet device with respective driver
+ callback functions in ``rte_eth_recycle_mbufs``.
+
Removed Items
-------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..c6723d5277 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -59,6 +59,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -504,6 +508,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1247,6 +1255,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..7c27dcfea4 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5784,6 +5784,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->rx_queues == NULL ||
+ dev->data->rx_queues[queue_id] == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx queue %"PRIu16" of device with port_id=%"
+ PRIu16" has not been setup\n",
+ queue_id, port_id);
+ return -EINVAL;
+ }
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..7434aa2483 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when Tx queue reusing mbufs and moving
+ * them into Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */
+ /**
+ * Requirement on mbuf refilling batch size of Rx mbuf ring.
+ * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
+ * should be aligned with mbuf ring size, in order to simplify
+ * ring wrapping around.
+ * Value 0 means that PMD drivers have no requirement for this.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4809,6 +4833,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about the given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6483,6 +6532,139 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and
+ * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing Rx
+ * descriptors. The number of recycling mbufs depends on the request of Rx mbuf
+ * ring, with the constraint of enough used mbufs from Tx mbuf ring.
+ *
+ * For each recycled mbuf, the rte_eth_recycle_mbufs() function performs the
+ * following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits Rx and Tx path with different callback functions. The
+ * callback function recycle_tx_mbufs_reuse is for Tx driver. The callback
+ * function recycle_rx_descriptors_refill is for Rx driver. rte_eth_recycle_mbufs()
+ * can support the case that Rx Ethernet device is different from Tx Ethernet device.
+ *
+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle
+ * mbufs. Before calling this function, users must call the rte_eth_recycle_rxq_info_get
+ * function to retrieve selected Rx queue information.
+ * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
+ * between the receive queue and transmit queue. Do not pair one receive queue with
+ * multiple transmit queues or pair one transmit queue with multiple receive queues,
+ * in order to avoid memory error rewriting.
Probably I am missing something, but why is it not possible to do something like this:
rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
....
rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
I.E. feed rx queue from 2 tx queues?
Two problems with this:
1. If we have 2 Tx queues for one Rx queue, the thread has to make an extra judgement to
decide which one to choose in the driver layer.
On the other hand, the current mechanism allows users to switch from one txq to another in a timely manner
in the application layer. If the user wants to choose another txq, they just need to change the tx_queue_id parameter
in the API.
2. If you want one rxq to support two txqs at the same time, this needs to add a spinlock on a guard variable to
avoid multi-thread conflicts. A spinlock would decrease the data-path performance greatly. Thus, we do not consider
1 rxq mapping to multiple txqs here.
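To make the intended pairing concrete, below is a minimal usage sketch (my own illustration, not part of this patch; port/queue ids and the burst size are placeholders). One Rx queue is paired with exactly one Tx queue, and rte_eth_recycle_mbufs() runs in the same forwarding loop as rte_eth_rx_burst()/rte_eth_tx_burst():

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
forward_loop(uint16_t rx_port, uint16_t rx_q, uint16_t tx_port, uint16_t tx_q)
{
	struct rte_eth_recycle_rxq_info rxq_info;
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx;

	/* Query the Rx mbuf ring once, before entering the data path. */
	if (rte_eth_recycle_rx_queue_info_get(rx_port, rx_q, &rxq_info) != 0)
		return;

	for (;;) {
		/* Move used mbufs from the paired Tx ring straight into the
		 * Rx mbuf ring, bypassing the mempool put/get path.
		 */
		rte_eth_recycle_mbufs(rx_port, rx_q, tx_port, tx_q, &rxq_info);

		nb_rx = rte_eth_rx_burst(rx_port, rx_q, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(tx_port, tx_q, pkts, nb_rx);
		while (nb_tx < nb_rx)	/* free what could not be sent */
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}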
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycling mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p;
+ void *qd;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[tx_port_id];
+ qd = p->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
+
+ /* If no recycling mbufs, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[rx_port_id];
+ qd = p->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+
+ if (p->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycling
+ * into Rx mbuf ring.
+ */
+ p->recycle_rx_descriptors_refill(qd, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index dcf8adab92..a2e6ea6b6c 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -90,9 +97,11 @@ struct rte_eth_fp_ops {
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
I am afraid we can't put new fields here without ABI breakage.
Agree
It has to be below rxq.
Thinking about it now, the current layout is probably not the best one,
and when introducing this struct, I should probably have put rxq either
on the top of the struct, or on the next cache line.
But such a change is not possible right now anyway.
Same story for txq.
Thus we should rearrange the structure like below:
struct rte_eth_fp_ops {
struct rte_ethdev_qdata rxq;
eth_rx_burst_t rx_pkt_burst;
eth_rx_queue_count_t rx_queue_count;
eth_rx_descriptor_status_t rx_descriptor_status;
eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
uintptr_t reserved1[2];
}
/** Rx queues data. */
struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
/** Tx queues data. */
struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 357d1a88c0..45c417f6bd 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -299,6 +299,10 @@ EXPERIMENTAL {
rte_flow_action_handle_query_update;
rte_flow_async_action_handle_query_update;
rte_flow_async_create_by_index;
+
+ # added in 23.07
+ rte_eth_recycle_mbufs;
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
[-- Attachment #2: Type: text/html, Size: 29050 bytes --]
^ permalink raw reply [relevance 1%]
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
@ 2023-06-05 20:08 4% ` Chautru, Nicolas
2023-06-06 9:20 4% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-05 20:08 UTC (permalink / raw)
To: Maxime Coquelin, Stephen Hemminger
Cc: dev, Rix, Tom, hemant.agrawal, david.marchand, Vargas, Hernan
Hi Maxime,
So basically the fft structure change is okay since these are still marked as rte_experimental (it got reported in the ABI report though).
Wrt the MLD functions: these are new in the related series, but they still break the ABI since the struct rte_bbdev includes these function pointers, hence causing offset changes.
Should I then just rephrase as:
+* bbdev: Will extend the API to support the new operation type
+``RTE_BBDEV_OP_MLDTS`` as per
+ this `v1
+<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This
+ will notably introduce
+ new symbols for ``rte_bbdev_dequeue_mldts_ops``,
+``rte_bbdev_enqueue_mldts_ops`` into the struct rte_bbdev.
Pasting below the ABI results for reference
[C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at rte_bbdev.c:174:1 has some indirect sub-type changes:
return type changed:
in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
type size hasn't changed
2 data member insertions:
'rte_bbdev_enqueue_mldts_ops_t rte_bbdev::enqueue_mldts_ops', at offset 640 (in bits) at rte_bbdev.h:520:1
'rte_bbdev_dequeue_mldts_ops_t rte_bbdev::dequeue_mldts_ops', at offset 704 (in bits) at rte_bbdev.h:522:1
7 data member changes (9 filtered):
type of 'rte_bbdev_dequeue_fft_ops_t rte_bbdev::dequeue_fft_ops' changed:
underlying type 'typedef uint16_t (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)*' changed:
in pointed to type 'function type typedef uint16_t (rte_bbdev_queue_data*, rte_bbdev_fft_op**, typedef uint16_t)':
parameter 2 of type 'rte_bbdev_fft_op**' has sub-type changes:
in pointed to type 'rte_bbdev_fft_op*':
in pointed to type 'struct rte_bbdev_fft_op' at rte_bbdev_op.h:978:1:
type size changed from 832 to 1664 (in bits)
1 data member change:
type of 'rte_bbdev_op_fft rte_bbdev_fft_op::fft' changed:
type size changed from 640 to 1472 (in bits)
6 data member insertions:
'rte_bbdev_op_data rte_bbdev_op_fft::dewindowing_input', at offset 256 (in bits) at rte_bbdev_op.h:771:1
'int8_t rte_bbdev_op_fft::freq_resample_mode', at offset 768 (in bits) at rte_bbdev_op.h:807:1
'uint16_t rte_bbdev_op_fft::output_depadded_size', at offset 784 (in bits) at rte_bbdev_op.h:809:1
'uint16_t rte_bbdev_op_fft::cs_theta_0[12]', at offset 800 (in bits) at rte_bbdev_op.h:811:1
'uint32_t rte_bbdev_op_fft::cs_theta_d[12]', at offset 992 (in bits) at rte_bbdev_op.h:813:1
'int8_t rte_bbdev_op_fft::time_offset[12]', at offset 1376 (in bits) at rte_bbdev_op.h:815:1
17 data member changes:
'rte_bbdev_op_data rte_bbdev_op_fft::power_meas_output' offset changed from 256 to 384 (in bits) (by +128 bits)
'uint32_t rte_bbdev_op_fft::op_flags' offset changed from 384 to 512 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::input_sequence_size' offset changed from 416 to 544 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::input_leading_padding' offset changed from 432 to 560 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::output_sequence_size' offset changed from 448 to 576 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::output_leading_depadding' offset changed from 464 to 592 (in bits) (by +128 bits)
'uint8_t rte_bbdev_op_fft::window_index[6]' offset changed from 480 to 608 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::cs_bitmap' offset changed from 528 to 656 (in bits) (by +128 bits)
'uint8_t rte_bbdev_op_fft::num_antennas_log2' offset changed from 544 to 672 (in bits) (by +128 bits)
'uint8_t rte_bbdev_op_fft::idft_log2' offset changed from 552 to 680 (in bits) (by +128 bits)
'uint8_t rte_bbdev_op_fft::dft_log2' offset changed from 560 to 688 (in bits) (by +128 bits)
'int8_t rte_bbdev_op_fft::cs_time_adjustment' offset changed from 568 to 696 (in bits) (by +128 bits)
'int8_t rte_bbdev_op_fft::idft_shift' offset changed from 576 to 704 (in bits) (by +128 bits)
'int8_t rte_bbdev_op_fft::dft_shift' offset changed from 584 to 712 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::ncs_reciprocal' offset changed from 592 to 720 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::power_shift' offset changed from 608 to 736 (in bits) (by +128 bits)
'uint16_t rte_bbdev_op_fft::fp16_exp_adjust' offset changed from 624 to 752 (in bits) (by +128 bits)
'const rte_bbdev_ops* rte_bbdev::dev_ops' offset changed from 640 to 768 (in bits) (by +128 bits)
'rte_bbdev_data* rte_bbdev::data' offset changed from 704 to 832 (in bits) (by +128 bits)
'rte_bbdev_state rte_bbdev::state' offset changed from 768 to 896 (in bits) (by +128 bits)
'rte_device* rte_bbdev::device' offset changed from 832 to 960 (in bits) (by +128 bits)
'rte_bbdev_cb_list rte_bbdev::list_cbs' offset changed from 896 to 1024 (in bits) (by +128 bits)
'rte_intr_handle* rte_bbdev::intr_handle' offset changed from 1024 to 1152 (in bits) (by +128 bits)
Thanks
Nic
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Monday, June 5, 2023 12:08 PM
> To: Stephen Hemminger <stephen@networkplumber.org>; Chautru, Nicolas
> <nicolas.chautru@intel.com>
> Cc: dev@dpdk.org; Rix, Tom <trix@redhat.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com; Vargas, Hernan <hernan.vargas@intel.com>
> Subject: Re: [PATCH v1 1/1] doc: announce change in bbdev api related to
> operation extension
>
>
>
> On 5/26/23 05:47, Stephen Hemminger wrote:
> > On Fri, 26 May 2023 02:11:32 +0000
> > Nicolas Chautru <nicolas.chautru@intel.com> wrote:
> >
> >> +
> >> +* bbdev: Will extend the API to support the new operation type
> >> +``RTE_BBDEV_OP_MLDTS`` as per
> >> + this `v1
> >> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`. This
> >> +will also introduce
> >> + new symbols for ``rte_bbdev_dequeue_mldts_ops``,
> >> +``rte_bbdev_enqueue_mldts_ops``,
> >> + ``rte_bbdev_mldts_op_alloc_bulk`` and
> >> +``rte_bbdev_mldts_op_free_bulk``. This will also extend
> >> + the API related to the FFT operation in ``rte_bbdev_op_fft``.
> >> --
> >
> > New symbols do not require a deprecation notice.
> > Only changes and removal.
> >
> I agree with Stephen.
> There is some changes in struct rte_bbdev_op_fft, but the related API are
> experimental, so I think it is not needed to have a deprecation notice.
>
> Regards,
> Maxime
^ permalink raw reply [relevance 4%]
* Re: [PATCH v1 1/1] bbdev: extend range of allocation function
2023-06-02 14:17 3% ` Chautru, Nicolas
@ 2023-06-05 19:08 3% ` Maxime Coquelin
0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2023-06-05 19:08 UTC (permalink / raw)
To: Chautru, Nicolas, dev; +Cc: hemant.agrawal, Vargas, Hernan
On 6/2/23 16:17, Chautru, Nicolas wrote:
> Hi Maxime,
> I don't think it does, since there is no offset position change for the symbol. Also this only extends the type, so it is still fine if the application uses uint16_t.
> I did not receive an email from the CI related to an ABI change when pushing this (unlike the other series for the MLD/FFT changes pushed earlier this week).
> Still, let me know if you would like this added as well into the deprecation notice, but it doesn't look required.
If ABI checks are OK, then this is good to me.
Thanks,
Maxime
> Thanks
> Nic
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Friday, June 2, 2023 12:56 AM
>> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org
>> Cc: hemant.agrawal@nxp.com; Vargas, Hernan <hernan.vargas@intel.com>
>> Subject: Re: [PATCH v1 1/1] bbdev: extend range of allocation function
>>
>>
>>
>> On 6/2/23 04:04, Nicolas Chautru wrote:
>>> Realigning the argument to unsigned int to align with number support
>>> by underlying rte_mempool_get_bulk function.
>>>
>>> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
>>> ---
>>> lib/bbdev/rte_bbdev_op.h | 6 +++---
>>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h index
>>> 96a390cd9b..9353fd588b 100644
>>> --- a/lib/bbdev/rte_bbdev_op.h
>>> +++ b/lib/bbdev/rte_bbdev_op.h
>>> @@ -982,7 +982,7 @@ rte_bbdev_op_pool_create(const char *name,
>> enum rte_bbdev_op_type type,
>>> */
>>> static inline int
>>> rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
>>> - struct rte_bbdev_enc_op **ops, uint16_t num_ops)
>>> + struct rte_bbdev_enc_op **ops, unsigned int num_ops)
>>> {
>>> struct rte_bbdev_op_pool_private *priv;
>>>
>>> @@ -1013,7 +1013,7 @@ rte_bbdev_enc_op_alloc_bulk(struct
>> rte_mempool *mempool,
>>> */
>>> static inline int
>>> rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
>>> - struct rte_bbdev_dec_op **ops, uint16_t num_ops)
>>> + struct rte_bbdev_dec_op **ops, unsigned int num_ops)
>>> {
>>> struct rte_bbdev_op_pool_private *priv;
>>
>> Isn't it breaking the ABI?
>>
>>> @@ -1045,7 +1045,7 @@ rte_bbdev_dec_op_alloc_bulk(struct
>> rte_mempool *mempool,
>>> __rte_experimental
>>> static inline int
>>> rte_bbdev_fft_op_alloc_bulk(struct rte_mempool *mempool,
>>> - struct rte_bbdev_fft_op **ops, uint16_t num_ops)
>>> + struct rte_bbdev_fft_op **ops, unsigned int num_ops)
>>> {
>>> struct rte_bbdev_op_pool_private *priv;
>>>
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-06-02 20:19 0% ` Ferruh Yigit
@ 2023-06-05 12:34 0% ` Dongdong Liu
0 siblings, 0 replies; 200+ results
From: Dongdong Liu @ 2023-06-05 12:34 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon, Jie Hai
Cc: dev, andrew.rybchenko, reshma.pattan, stable, yisen.zhuang,
david.marchand
Hi Ferruh
On 2023/6/3 4:19, Ferruh Yigit wrote:
> On 3/16/2023 1:16 PM, Dongdong Liu wrote:
>> Hi Thomas
>> On 2023/3/15 21:43, Thomas Monjalon wrote:
>>> 15/03/2023 12:00, Dongdong Liu:
>>>> From: Jie Hai <haijie1@huawei.com>
>>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>>> -* No ABI change that would break compatibility with 22.11.
>>>> -
>>>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>>>> RSS hash
>>>> + algorithm.
>>>
>>> We cannot break ABI compatibility until 23.11.
>> Got it. Thank you for reminding.
>>
>
> Hi Dongdong,
>
> Please remember to send a deprecation notice for this release.
> Deprecation notice should be merged in this release so that it can be
> applied in v23.11
Thanks for pointing that.
Will do.
Thanks,
Dongdong
>
>
>> [PATCH 3/5] and [PATCH 4/5] do not relate with this ABI compatibility.
>> I will send them separately.
>>
>> Thanks,
>> Dongdong
>>>
>>>
>>>
>>> .
>>>
>
> .
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-16 13:16 3% ` Dongdong Liu
@ 2023-06-02 20:19 0% ` Ferruh Yigit
2023-06-05 12:34 0% ` Dongdong Liu
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-06-02 20:19 UTC (permalink / raw)
To: Dongdong Liu, Thomas Monjalon, Jie Hai
Cc: dev, andrew.rybchenko, reshma.pattan, stable, yisen.zhuang,
david.marchand
On 3/16/2023 1:16 PM, Dongdong Liu wrote:
> Hi Thomas
> On 2023/3/15 21:43, Thomas Monjalon wrote:
>> 15/03/2023 12:00, Dongdong Liu:
>>> From: Jie Hai <haijie1@huawei.com>
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> -* No ABI change that would break compatibility with 22.11.
>>> -
>>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>>> RSS hash
>>> + algorithm.
>>
>> We cannot break ABI compatibility until 23.11.
> Got it. Thank you for reminding.
>
Hi Dongdong,
Please remember to send a deprecation notice for this release.
Deprecation notice should be merged in this release so that it can be
applied in v23.11
> [PATCH 3/5] and [PATCH 4/5] do not relate with this ABI compatibility.
> I will send them separately.
>
> Thanks,
> Dongdong
>>
>>
>>
>> .
>>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4] net/bonding: replace master/slave to main/member
2023-05-18 15:39 3% ` Stephen Hemminger
@ 2023-06-02 15:05 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-06-02 15:05 UTC (permalink / raw)
To: Stephen Hemminger, Chaoyong He
Cc: dev, oss-drivers, niklas.soderlund, Long Wu, James Hershaw
On 5/18/2023 4:39 PM, Stephen Hemminger wrote:
> On Thu, 18 May 2023 16:44:58 +0800
> Chaoyong He <chaoyong.he@corigine.com> wrote:
>
>> From: Long Wu <long.wu@corigine.com>
>>
>> This patch replaces the usage of the word 'master/slave' with more
>> appropriate word 'main/member' in bonding PMD as well as in its docs
>> and examples. Also the test app and testpmd were modified to use the
>> new wording.
>>
>> The bonding PMD's public API was modified according to the changes
>> in word:
>> rte_eth_bond_8023ad_slave_info is now called
>> rte_eth_bond_8023ad_member_info,
>> rte_eth_bond_active_slaves_get is now called
>> rte_eth_bond_active_members_get,
>> rte_eth_bond_slave_add is now called
>> rte_eth_bond_member_add,
>> rte_eth_bond_slave_remove is now called
>> rte_eth_bond_member_remove,
>> rte_eth_bond_slaves_get is now called
>> rte_eth_bond_members_get.
>>
>> Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
>> RTE_ETH_DEV_BONDED_MEMBER.
>>
>> Mark the old visible API's as deprecated and remove
>> from the ABI.
>>
>> Signed-off-by: Long Wu <long.wu@corigine.com>
>> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
>> Reviewed-by: James Hershaw <james.hershaw@corigine.com>
>
> Since this will be ABI change it will have to wait for 23.11 release.
> Could you make a deprecation notice now, to foreshadow that change?
>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
>
For reference, deprecation notice is:
https://patches.dpdk.org/project/dpdk/patch/20230519063334.1645933-1-chaoyong.he@corigine.com/
Deferring the patch to v23.11 release.
^ permalink raw reply [relevance 0%]
* RE: [PATCH v1 1/1] bbdev: extend range of allocation function
2023-06-02 7:56 3% ` Maxime Coquelin
@ 2023-06-02 14:17 3% ` Chautru, Nicolas
2023-06-05 19:08 3% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-06-02 14:17 UTC (permalink / raw)
To: Maxime Coquelin, dev; +Cc: hemant.agrawal, Vargas, Hernan
Hi Maxime,
I don't think it does, since there is no offset position change for the symbol. Also this only extends the type, so it is still fine if the application uses uint16_t.
I did not receive an email from the CI related to an ABI change when pushing this (unlike the other series for the MLD/FFT changes pushed earlier this week).
Still, let me know if you would like this added as well into the deprecation notice, but it doesn't look required.
Thanks
Nic
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, June 2, 2023 12:56 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org
> Cc: hemant.agrawal@nxp.com; Vargas, Hernan <hernan.vargas@intel.com>
> Subject: Re: [PATCH v1 1/1] bbdev: extend range of allocation function
>
>
>
> On 6/2/23 04:04, Nicolas Chautru wrote:
> > Realigning the argument to unsigned int to align with number support
> > by underlying rte_mempool_get_bulk function.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> > lib/bbdev/rte_bbdev_op.h | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h index
> > 96a390cd9b..9353fd588b 100644
> > --- a/lib/bbdev/rte_bbdev_op.h
> > +++ b/lib/bbdev/rte_bbdev_op.h
> > @@ -982,7 +982,7 @@ rte_bbdev_op_pool_create(const char *name,
> enum rte_bbdev_op_type type,
> > */
> > static inline int
> > rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
> > - struct rte_bbdev_enc_op **ops, uint16_t num_ops)
> > + struct rte_bbdev_enc_op **ops, unsigned int num_ops)
> > {
> > struct rte_bbdev_op_pool_private *priv;
> >
> > @@ -1013,7 +1013,7 @@ rte_bbdev_enc_op_alloc_bulk(struct
> rte_mempool *mempool,
> > */
> > static inline int
> > rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
> > - struct rte_bbdev_dec_op **ops, uint16_t num_ops)
> > + struct rte_bbdev_dec_op **ops, unsigned int num_ops)
> > {
> > struct rte_bbdev_op_pool_private *priv;
>
> Isn't it breaking the ABI?
>
> > @@ -1045,7 +1045,7 @@ rte_bbdev_dec_op_alloc_bulk(struct
> rte_mempool *mempool,
> > __rte_experimental
> > static inline int
> > rte_bbdev_fft_op_alloc_bulk(struct rte_mempool *mempool,
> > - struct rte_bbdev_fft_op **ops, uint16_t num_ops)
> > + struct rte_bbdev_fft_op **ops, unsigned int num_ops)
> > {
> > struct rte_bbdev_op_pool_private *priv;
> >
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1 1/1] bbdev: extend range of allocation function
@ 2023-06-02 7:56 3% ` Maxime Coquelin
2023-06-02 14:17 3% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-02 7:56 UTC (permalink / raw)
To: Nicolas Chautru, dev; +Cc: hemant.agrawal, hernan.vargas
On 6/2/23 04:04, Nicolas Chautru wrote:
> Realigning the argument to unsigned int to
> align with number support by underlying
> rte_mempool_get_bulk function.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
> lib/bbdev/rte_bbdev_op.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
> index 96a390cd9b..9353fd588b 100644
> --- a/lib/bbdev/rte_bbdev_op.h
> +++ b/lib/bbdev/rte_bbdev_op.h
> @@ -982,7 +982,7 @@ rte_bbdev_op_pool_create(const char *name, enum rte_bbdev_op_type type,
> */
> static inline int
> rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
> - struct rte_bbdev_enc_op **ops, uint16_t num_ops)
> + struct rte_bbdev_enc_op **ops, unsigned int num_ops)
> {
> struct rte_bbdev_op_pool_private *priv;
>
> @@ -1013,7 +1013,7 @@ rte_bbdev_enc_op_alloc_bulk(struct rte_mempool *mempool,
> */
> static inline int
> rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
> - struct rte_bbdev_dec_op **ops, uint16_t num_ops)
> + struct rte_bbdev_dec_op **ops, unsigned int num_ops)
> {
> struct rte_bbdev_op_pool_private *priv;
Isn't it breaking the ABI?
> @@ -1045,7 +1045,7 @@ rte_bbdev_dec_op_alloc_bulk(struct rte_mempool *mempool,
> __rte_experimental
> static inline int
> rte_bbdev_fft_op_alloc_bulk(struct rte_mempool *mempool,
> - struct rte_bbdev_fft_op **ops, uint16_t num_ops)
> + struct rte_bbdev_fft_op **ops, unsigned int num_ops)
> {
> struct rte_bbdev_op_pool_private *priv;
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 0/4] vhost: add device op to offload the interrupt kick
2023-06-01 20:00 0% ` Maxime Coquelin
@ 2023-06-02 6:20 0% ` Eelco Chaudron
0 siblings, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-06-02 6:20 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: chenbo.xia, david.marchand, dev
On 1 Jun 2023, at 22:00, Maxime Coquelin wrote:
> On 5/17/23 11:08, Eelco Chaudron wrote:
>> This series adds an operation callback which gets called every time the
>> library wants to call eventfd_write(). This eventfd_write() call could
>> result in a system call, which could potentially block the PMD thread.
>>
>> The callback function can decide whether it's ok to handle the
>> eventfd_write() now or have the newly introduced function,
>> rte_vhost_notify_guest(), called at a later time.
>>
>> This can be used by 3rd party applications, like OVS, to avoid system
>> calls being called as part of the PMD threads.
>>
>> v3:
>> - Changed ABI compatibility code to no longer use a boolean
>> to avoid having to disable specific GCC warnings.
>> - Moved the fd check fix to a separate patch (patch 3/4).
>> - Fixed some coding style issues.
>>
>> v2: - Used vhost_virtqueue->index to find index for operation.
>> - Aligned function name to VDUSE RFC patchset.
>> - Added error and offload statistics counter.
>> - Mark new API as experimental.
>> - Change the virtual queue spin lock to read/write spin lock.
>> - Made shared counters atomic.
>> - Add versioned rte_vhost_driver_callback_register() for
>> ABI compliance.
>>
>> Eelco Chaudron (4):
>> vhost: change vhost_virtqueue access lock to a read/write one
>> vhost: make the guest_notifications statistic counter atomic
>> vhost: fix invalid call FD handling
>> vhost: add device op to offload the interrupt kick
>>
>>
>> lib/eal/include/generic/rte_rwlock.h | 17 +++++
>> lib/vhost/meson.build | 2 +
>> lib/vhost/rte_vhost.h | 23 ++++++-
>> lib/vhost/socket.c | 63 +++++++++++++++++--
>> lib/vhost/version.map | 9 +++
>> lib/vhost/vhost.c | 92 +++++++++++++++++++++-------
>> lib/vhost/vhost.h | 69 ++++++++++++++-------
>> lib/vhost/vhost_user.c | 14 ++---
>> lib/vhost/virtio_net.c | 90 +++++++++++++--------------
>> 9 files changed, 278 insertions(+), 101 deletions(-)
>>
>
>
> Applied to dpdk-next-virtio/main.
Thanks Maxime!
//Eelco
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 0/4] vhost: add device op to offload the interrupt kick
2023-05-17 9:08 4% [PATCH v3 0/4] vhost: add device op to offload the interrupt kick Eelco Chaudron
@ 2023-06-01 20:00 0% ` Maxime Coquelin
2023-06-02 6:20 0% ` Eelco Chaudron
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-01 20:00 UTC (permalink / raw)
To: Eelco Chaudron, chenbo.xia, david.marchand; +Cc: dev
On 5/17/23 11:08, Eelco Chaudron wrote:
> This series adds an operation callback which gets called every time the
> library wants to call eventfd_write(). This eventfd_write() call could
> result in a system call, which could potentially block the PMD thread.
>
> The callback function can decide whether it's ok to handle the
> eventfd_write() now or have the newly introduced function,
> rte_vhost_notify_guest(), called at a later time.
>
> This can be used by 3rd party applications, like OVS, to avoid system
> calls being called as part of the PMD threads.
>
> v3:
> - Changed ABI compatibility code to no longer use a boolean
> to avoid having to disable specific GCC warnings.
> - Moved the fd check fix to a separate patch (patch 3/4).
> - Fixed some coding style issues.
>
> v2: - Used vhost_virtqueue->index to find index for operation.
> - Aligned function name to VDUSE RFC patchset.
> - Added error and offload statistics counter.
> - Mark new API as experimental.
> - Change the virtual queue spin lock to read/write spin lock.
> - Made shared counters atomic.
> - Add versioned rte_vhost_driver_callback_register() for
> ABI compliance.
>
> Eelco Chaudron (4):
> vhost: change vhost_virtqueue access lock to a read/write one
> vhost: make the guest_notifications statistic counter atomic
> vhost: fix invalid call FD handling
> vhost: add device op to offload the interrupt kick
>
>
> lib/eal/include/generic/rte_rwlock.h | 17 +++++
> lib/vhost/meson.build | 2 +
> lib/vhost/rte_vhost.h | 23 ++++++-
> lib/vhost/socket.c | 63 +++++++++++++++++--
> lib/vhost/version.map | 9 +++
> lib/vhost/vhost.c | 92 +++++++++++++++++++++-------
> lib/vhost/vhost.h | 69 ++++++++++++++-------
> lib/vhost/vhost_user.c | 14 ++---
> lib/vhost/virtio_net.c | 90 +++++++++++++--------------
> 9 files changed, 278 insertions(+), 101 deletions(-)
>
Applied to dpdk-next-virtio/main.
Thanks,
Maxime
^ permalink raw reply [relevance 0%]
* Re: [PATCH] common/sfc_efx/base: update fields name for MARK and FLAG actions
2023-05-31 7:08 3% [PATCH] common/sfc_efx/base: update fields name for MARK and FLAG actions Artemii Morozov
@ 2023-06-01 15:43 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-06-01 15:43 UTC (permalink / raw)
To: Artemii Morozov, dev; +Cc: Andy Moreton, Andrew Rybchenko
On 5/31/2023 8:08 AM, Artemii Morozov wrote:
> The MCDI headers have newer, but ABI-compatible field names for
> these actions.
>
> Signed-off-by: Artemii Morozov <artemii.morozov@arknetworks.am>
> Reviewed-by: Andy Moreton <amoreton@xilinx.com>
>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
2023-04-18 9:22 3% ` Bruce Richardson
@ 2023-06-01 9:23 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-06-01 9:23 UTC (permalink / raw)
To: Bruce Richardson, Ferruh Yigit, Sivaprasad Tummala
Cc: david.hunt, dev, Thomas Monjalon
On Tue, Apr 18, 2023 at 11:22 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Apr 18, 2023 at 09:52:49AM +0100, Ferruh Yigit wrote:
> > On 4/18/2023 9:25 AM, Sivaprasad Tummala wrote:
> > > A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
> > > DPDK 23.07 release to support monitorx instruction on EPYC processors.
> > > This results in ABI breakage for legacy apps.
> > >
> > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > > index dcc1ca1696..831713983f 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -163,3 +163,6 @@ Deprecation Notices
> > > The new port library API (functions rte_swx_port_*)
> > > will gradually transition from experimental to stable status
> > > starting with DPDK 23.07 release.
> > > +
> > > +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> > > + ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
There is no need for announcing an addition.
The problem is moving/removing other elements of an enum.
> >
> >
> > OK to add new CPU flag,
> > Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
> >
> >
> > But @David, @Bruce, is it OK to break ABI whenever a new CPU flag is
> > added, should we hide CPU flags better?
> >
> > Or another option can be to drop the 'RTE_CPUFLAG_NUMFLAGS' value and allow
> > appending new flags to the end, although this may lead the enum to become
> > messier over time.
>
> +1 to drop the NUMFLAGS value. We should not break ABI each time we need a new flag.
+1.
So in 23.07 we need an announce for this removal to happen in 23.11.
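For readers following the thread, a tiny sketch of the sentinel pattern in question (flag names other than RTE_CPUFLAG_MONITORX are abbreviated placeholders, not the real contents of rte_cpuflags.h):

enum rte_cpu_flag_t {
	RTE_CPUFLAG_SSE,        /* ... existing flags ... */
	RTE_CPUFLAG_MONITORX,   /* a new flag has to go before the sentinel */
	RTE_CPUFLAG_NUMFLAGS    /* sentinel: its value changes with every insertion,
	                         * so binaries built against the old value see a
	                         * different limit -> ABI breakage */
};

Dropping the sentinel means new flags can simply be appended without shifting any existing value.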
--
David Marchand
^ permalink raw reply [relevance 0%]
* RE: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-05-30 13:51 0% ` Thomas Monjalon
@ 2023-05-31 9:26 5% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-05-31 9:26 UTC (permalink / raw)
To: Thomas Monjalon, Jerin Jacob
Cc: Stephen Hemminger, Nithin Dabilpuram, Akhil Goyal, jerinj, dev,
techboard
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, 30 May 2023 15.52
>
> 30/05/2023 11:23, Jerin Jacob:
> > > > > > > + */
> > > > > > > + uint32_t ingress_oop : 1;
> > > > > > > +
> > > > > > > /** Reserved bit fields for future extension
> > > > > > > *
> > > > > > > * User should ensure reserved_opts is cleared as it may change in
> > > > > > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > > > > > *
> > > > > > > * Note: Reduce number of bits in reserved_opts for every new option.
> > > > > > > */
> > > > > > > - uint32_t reserved_opts : 17;
> > > > > > > + uint32_t reserved_opts : 16;
> > > > > > > };
> > > > > >
> > > > > > NAK
> > > > > > Let me repeat the reserved bit rant. YAGNI
> > > > > >
> > > > > > Reserved space is not usable without ABI breakage unless the existing
> > > > > > code enforces that reserved space has to be zero.
> > > > > >
> > > > > > Just saying "User should ensure reserved_opts is cleared" is not enough.
> > > > >
> > > > > Yes. I think we need to enforce having _init functions for the
> > > > > structures which are using reserved fields.
> > > > >
> > > > > On the same note on YAGNI, I am wondering why NOT introduce an
> > > > > RTE_NEXT_ABI macro kind of scheme to compile out ABI breaking changes.
> > > > > By keeping RTE_NEXT_ABI disabled by default, and enabling it explicitly if the
> > > > > user wants it, we avoid waiting for one year for any ABI breaking changes.
> > > > > There are a lot of "fixed appliance" customers (not OS distribution
> > > > > driven customers) who are willing to recompile DPDK for a new feature.
> > > > > What are we losing with this scheme?
> > > >
> > > > RTE_NEXT_ABI is described in the ABI policy.
> > > > We are not doing it currently, but I think we could
> > > > when it is not too much complicate in the code.
> > > >
> > > > The only problems I see are:
> > > > - more #ifdef clutter
> > > > - 2 binary versions to test
> > > > - CI and checks must handle RTE_NEXT_ABI version
> > >
> > > I think, we have two buckets of ABI breakages via RTE_NEXT_ABI
> > >
> > > 1) Changes that introduces compilation failures like adding new
> > > argument to API or change API name etc
> > > 2) Structure size change which won't affect the compilation but breaks
> > > the ABI for shared library usage.
> > >
> > > I think, (1) is very distributive, and I don't see recently such
> > > changes. I think, we should avoid (1) for non XX.11 releases.(or two
> > > or three-year cycles if we decide that path)
> > >
> > > The (2) comes are very common due to the fact HW features are
> > > evolving. I think, to address the (2), we have two options
> > > a) Have reserved fields and have _init() function to initialize the structures
High probability that (a) is not going to work: There will not be enough reserved fields, and/or they will be in the wrong places in the structures.
Also, (a) is really intrusive on existing applications: They MUST be rewritten to call the _init() function instead of using pre-initialized structures, or the library will behave unexpectedly. Extreme example, to prove my point: A new field "allow_ingress" (don't drop all packets on ingress) is introduced, and _init() sets it to true. If the application doesn't call _init(), it will not receive any packets.
Are _init() functions required on all structures, or only some? And how about structures containing other structures?
How does the application developer know which structures have _init() functions, and which do not?
<irony>
We could also switch to C++, where the _init() function comes native in the form of an object constructor.
</irony>
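To put option (a) into code, here is a hypothetical sketch (rte_foo_conf and rte_foo_conf_init() are made-up names; only allow_ingress comes from my example above):

#include <string.h>
#include <stdint.h>

struct rte_foo_conf {
	uint32_t allow_ingress : 1;   /* new option with a non-zero default */
	uint32_t reserved_opts : 31;  /* kept zero for future extensions */
};

static inline void
rte_foo_conf_init(struct rte_foo_conf *conf)
{
	memset(conf, 0, sizeof(*conf));
	conf->allow_ingress = 1;      /* default: do not drop ingress packets */
}

An existing application that still fills the structure by hand (e.g. "struct rte_foo_conf conf = {0};") silently gets allow_ingress == 0 and stops receiving packets - exactly the intrusiveness described above.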
> > > b) Follow YAGNI style and introduce RTE_NEXT_ABI for structure size change.
+1 for (b), because (a) is too problematic.
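And a rough sketch of option (b), again with made-up names, where the ABI-breaking addition is compiled out unless RTE_NEXT_ABI is defined at build time:

#include <stdint.h>

struct rte_foo_conf {
	uint32_t existing_opt : 1;
	uint32_t reserved_opts : 31;
#ifdef RTE_NEXT_ABI
	uint16_t new_hw_knob;         /* grows the struct: RTE_NEXT_ABI builds only */
#endif
};

static int
rte_foo_configure(const struct rte_foo_conf *conf)
{
#ifdef RTE_NEXT_ABI
	/* New-feature path, absent from the default (ABI-stable) build. */
	if (conf->new_hw_knob != 0)
		return 1;
#endif
	return conf->existing_opt ? 0 : -1;
}

The default build keeps the current struct size and symbols, so only users who explicitly rebuild with RTE_NEXT_ABI get (and test) the new layout.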
> > >
> > > The above concerns[1] can greatly reduce with option b OR option a.
> > >
> > > [1]
> > > 1) more #ifdef clutter
> > > For option (a) this is not needed or option (b) the clutter will be
> > > limited, it will be around the structure which adds the new field and
> > > around the FULL block where new functions are added (not inside the
> > > functions)
> > >
> > > 2) 2 binary versions to test
> > > For option (a) this is not needed, for option (b) it is limited as for
> > > new features only one needs to test another binary (rather than NOT
> > > adding a new feature).
> > >
> > > 3) CI and checks must handle RTE_NEXT_ABI version
> > >
> > > I think, it is cheap to add this, at least for compilation test.
> > >
> > > IMO, We need to change the API break release to 3 year kind of time
> > > frame to have very good end user experience
> > > and allow ABI related change to get in every release and force
> > > _rebuild_ shared objects in major LTS release.
> > >
> > > I think, in this major LTS version(23.11) if we can decide (a) vs (b)
> > > then we can align the code accordingly . e.s.p for (a) we need to add
> > > _init() functions.
> > >
> > > Thoughts?
> >
> > Not much input from mailing list. Can we discuss this next TB meeting?
> > Especially how to align with next LTS release on
> > -YAGNI vs reserved fields with init()
Whichever decision is made on this, remember to also consider if it has any consequences regarding older LTS versions and possibly backporting.
> > -What it takes to Extend the API breaking release more than a year as
> > first step.
Others might disagree, but in my personal opinion, DPDK is still evolving much too rapidly to lock down its ABI/API for more than one year. For reference, consider what has been changed within the last three years, i.e. since DPDK 20.05, and if those changes could have been done within the DPDK 20.05 ABI/API without requiring a substantial additional effort, and while still providing clean and understandable APIs (and not a bunch of weird hacks to shoehorn the new features into the existing APIs).
If you want continuity, use an LTS release. If we lock down the ABI/API for multiple years at a time, what is the point of the LTS releases?
PS: If we start using the RTE_NEXT_ABI concept more, we should remember to promote the additions with each ABI/API breaking release. And we should probably have a rule of thumb to choose between using RTE_NEXT_ABI and using "experimental" marking.
>
> Yes I agree it should be discussed interactively in techboard meeting.
I'm unable to participate in today's techboard meeting, so I have provided my opinions in this email.
-Morten
^ permalink raw reply [relevance 5%]
* [PATCH] common/sfc_efx/base: update fields name for MARK and FLAG actions
@ 2023-05-31 7:08 3% Artemii Morozov
2023-06-01 15:43 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Artemii Morozov @ 2023-05-31 7:08 UTC (permalink / raw)
To: dev; +Cc: Andy Moreton, Andrew Rybchenko
The MCDI headers have newer, but ABI-compatible field names for
these actions.
Signed-off-by: Artemii Morozov <artemii.morozov@arknetworks.am>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_filter.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_filter.c b/drivers/common/sfc_efx/base/ef10_filter.c
index 6d19797d16..d6940011c0 100644
--- a/drivers/common/sfc_efx/base/ef10_filter.c
+++ b/drivers/common/sfc_efx/base/ef10_filter.c
@@ -329,13 +329,13 @@ efx_mcdi_filter_op_add(
goto fail3;
}
if (spec->efs_flags & EFX_FILTER_FLAG_ACTION_MARK) {
- MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_ACTION,
- MC_CMD_FILTER_OP_V3_IN_MATCH_ACTION_MARK);
+ MCDI_IN_SET_DWORD_FIELD(req, FILTER_OP_V3_IN_MATCH_ACTION_FLAGS,
+ FILTER_OP_V3_IN_MATCH_SET_MARK, 1);
MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_MARK_VALUE,
spec->efs_mark);
} else if (spec->efs_flags & EFX_FILTER_FLAG_ACTION_FLAG) {
- MCDI_IN_SET_DWORD(req, FILTER_OP_V3_IN_MATCH_ACTION,
- MC_CMD_FILTER_OP_V3_IN_MATCH_ACTION_FLAG);
+ MCDI_IN_SET_DWORD_FIELD(req, FILTER_OP_V3_IN_MATCH_ACTION_FLAGS,
+ FILTER_OP_V3_IN_MATCH_SET_FLAG, 1);
}
efx_mcdi_execute(enp, &req);
--
2.34.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-05-30 9:23 0% ` Jerin Jacob
@ 2023-05-30 13:51 0% ` Thomas Monjalon
2023-05-31 9:26 5% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-05-30 13:51 UTC (permalink / raw)
To: Jerin Jacob
Cc: Stephen Hemminger, Nithin Dabilpuram, Akhil Goyal, jerinj, dev,
Morten Brørup, techboard
30/05/2023 11:23, Jerin Jacob:
> > > > > > + */
> > > > > > + uint32_t ingress_oop : 1;
> > > > > > +
> > > > > > /** Reserved bit fields for future extension
> > > > > > *
> > > > > > * User should ensure reserved_opts is cleared as it may change in
> > > > > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > > > > *
> > > > > > * Note: Reduce number of bits in reserved_opts for every new option.
> > > > > > */
> > > > > > - uint32_t reserved_opts : 17;
> > > > > > + uint32_t reserved_opts : 16;
> > > > > > };
> > > > >
> > > > > NAK
> > > > > Let me repeat the reserved bit rant. YAGNI
> > > > >
> > > > > Reserved space is not usable without ABI breakage unless the existing
> > > > > code enforces that reserved space has to be zero.
> > > > >
> > > > > Just saying "User should ensure reserved_opts is cleared" is not enough.
> > > >
> > > > Yes. I think we need to enforce having _init functions for the
> > > > structures which are using reserved fields.
> > > >
> > > > On the same note on YAGNI, I am wondering why NOT introduce an
> > > > RTE_NEXT_ABI macro kind of scheme to compile out ABI breaking changes.
> > > > By keeping RTE_NEXT_ABI disabled by default, and enabling it explicitly if the
> > > > user wants it, we avoid waiting for one year for any ABI breaking changes.
> > > > There are a lot of "fixed appliance" customers (not OS distribution
> > > > driven customers) who are willing to recompile DPDK for a new feature.
> > > > What are we losing with this scheme?
> > >
> > > RTE_NEXT_ABI is described in the ABI policy.
> > > We are not doing it currently, but I think we could
> > > when it is not too much complicate in the code.
> > >
> > > The only problems I see are:
> > > - more #ifdef clutter
> > > - 2 binary versions to test
> > > - CI and checks must handle RTE_NEXT_ABI version
> >
> > I think, we have two buckets of ABI breakages via RTE_NEXT_ABI
> >
> > 1) Changes that introduces compilation failures like adding new
> > argument to API or change API name etc
> > 2) Structure size change which won't affect the compilation but breaks
> > the ABI for shared library usage.
> >
> > I think, (1) is very distributive, and I don't see recently such
> > changes. I think, we should avoid (1) for non XX.11 releases.(or two
> > or three-year cycles if we decide that path)
> >
> > The (2) comes are very common due to the fact HW features are
> > evolving. I think, to address the (2), we have two options
> > a) Have reserved fields and have _init() function to initialize the structures
> > b) Follow YAGNI style and introduce RTE_NEXT_ABI for structure size change.
> >
> > The above concerns[1] can greatly reduce with option b OR option a.
> >
> > [1]
> > 1) more #ifdef clutter
> > For option (a) this is not needed or option (b) the clutter will be
> > limited, it will be around the structure which adds the new field and
> > around the FULL block where new functions are added (not inside the
> > functions)
> >
> > 2) 2 binary versions to test
> > For option (a) this is not needed, for option (b) it is limited as for
> > new features only one needs to test another binary (rather than NOT
> > adding a new feature).
> >
> > 3) CI and checks must handle RTE_NEXT_ABI version
> >
> > I think, it is cheap to add this, at least for compilation test.
> >
> > IMO, We need to change the API break release to 3 year kind of time
> > frame to have very good end user experience
> > and allow ABI related change to get in every release and force
> > _rebuild_ shared objects in major LTS release.
> >
> > I think, in this major LTS version(23.11) if we can decide (a) vs (b)
> > then we can align the code accordingly . e.s.p for (a) we need to add
> > _init() functions.
> >
> > Thoughts?
>
> Not much input from the mailing list. Can we discuss this at the next TB
> meeting? Especially how to align with the next LTS release on
> - YAGNI vs reserved fields with init()
> - What it takes to extend the API breaking release beyond a year as a
> first step.
Yes I agree it should be discussed interactively in techboard meeting.
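For illustration, a minimal sketch of what option (a), reserved fields plus an _init() helper that clears and later validates them, could look like. The struct and function names are hypothetical and not taken from any posted patch:

#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical structure with reserved space kept for future options. */
struct example_sa_options {
	uint32_t esn : 1;
	uint32_t udp_encap : 1;
	uint32_t reserved_opts : 30; /* must stay zero until a new option claims a bit */
};

/* The _init() helper zeroes the whole structure, so reserved bits are
 * guaranteed to be cleared regardless of how the caller allocated it. */
static inline void
example_sa_options_init(struct example_sa_options *opts)
{
	memset(opts, 0, sizeof(*opts));
}

/* The consuming library or PMD can then reject garbage in reserved space,
 * which is what makes the reserved bits reusable later without an ABI break. */
static inline int
example_sa_options_check(const struct example_sa_options *opts)
{
	return opts->reserved_opts == 0 ? 0 : -EINVAL;
}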
^ permalink raw reply [relevance 0%]
* Re: [PATCH v6 0/3] add telemetry cmds for ring
2023-05-09 9:24 3% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
2023-05-09 9:24 3% ` [PATCH v6 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-05-30 9:27 0% ` Jie Hai
1 sibling, 0 replies; 200+ results
From: Jie Hai @ 2023-05-30 9:27 UTC (permalink / raw)
To: dev, thomas
Hi, Thomas and all maintainers,
Kindly ping for comments, thanks.
On 2023/5/9 17:24, Jie Hai wrote:
> This patch set supports telemetry cmd to list rings and dump information
> of a ring by its name.
>
> v1->v2:
> 1. Add space after "switch".
> 2. Fix wrong strlen parameter.
>
> v2->v3:
> 1. Remove prefix "rte_" for static function.
> 2. Add Acked-by Konstantin Ananyev for PATCH 1.
> 3. Introduce functions to return strings instead copy strings.
> 4. Check pointer to memzone of ring.
> 5. Remove redundant variable.
> 6. Hold lock when access ring data.
>
> v3->v4:
> 1. Update changelog according to reviews of Honnappa Nagarahalli.
> 2. Add Reviewed-by Honnappa Nagarahalli.
> 3. Correct grammar in help information.
> 4. Correct spell warning on "te" reported by checkpatch.pl.
> 5. Use ring_walk() to query ring info instead of rte_ring_lookup().
> 6. Fix that the type definition of the flag field of rte_ring does not match the usage.
> 7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
> for mask and flags.
>
> v4->v5:
> 1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
> 2. Add ABI change explanation for commit message of patch 1/3.
>
> v5->v6:
> 1. Add Acked-by Morten Brørup.
> 2. Fix incorrect reference of commit.
>
> Jie Hai (3):
> ring: fix unmatched type definition and usage
> ring: add telemetry cmd to list rings
> ring: add telemetry cmd for ring info
>
> lib/ring/meson.build | 1 +
> lib/ring/rte_ring.c | 139 +++++++++++++++++++++++++++++++++++++++
> lib/ring/rte_ring_core.h | 2 +-
> 3 files changed, 141 insertions(+), 1 deletion(-)
>
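For readers unfamiliar with the telemetry interface this series plugs into, here is a rough sketch of how a library registers such a command. It is simplified and only illustrative; the handler body below is not the code from the series, which walks the real ring list under its lock:

#include <rte_common.h>
#include <rte_telemetry.h>

/* Illustrative handler: report a fixed ring name instead of walking
 * the actual ring list. */
static int
handle_ring_list(const char *cmd __rte_unused, const char *params __rte_unused,
		struct rte_tel_data *d)
{
	rte_tel_data_start_array(d, RTE_TEL_STRING_VAL);
	rte_tel_data_add_array_string(d, "example_ring_0");
	return 0;
}

RTE_INIT(ring_telemetry_init)
{
	rte_telemetry_register_cmd("/ring/list", handle_ring_list,
		"Returns the names of all created rings. No parameters.");
}

Once registered, the command can be queried interactively with usertools/dpdk-telemetry.py by typing /ring/list.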
^ permalink raw reply [relevance 0%]
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-05-19 8:07 4% ` Jerin Jacob
@ 2023-05-30 9:23 0% ` Jerin Jacob
2023-05-30 13:51 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-05-30 9:23 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Stephen Hemminger, Nithin Dabilpuram, Akhil Goyal, jerinj, dev,
Morten Brørup, techboard
> > > > > + */
> > > > > + uint32_t ingress_oop : 1;
> > > > > +
> > > > > /** Reserved bit fields for future extension
> > > > > *
> > > > > * User should ensure reserved_opts is cleared as it may change in
> > > > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > > > *
> > > > > * Note: Reduce number of bits in reserved_opts for every new option.
> > > > > */
> > > > > - uint32_t reserved_opts : 17;
> > > > > + uint32_t reserved_opts : 16;
> > > > > };
> > > >
> > > > NAK
> > > > Let me repeat the reserved bit rant. YAGNI
> > > >
> > > > Reserved space is not usable without ABI breakage unless the existing
> > > > code enforces that reserved space has to be zero.
> > > >
> > > > Just saying "User should ensure reserved_opts is cleared" is not enough.
> > >
> > > Yes. I think we need to enforce having _init functions for the
> > > structures which are using reserved fields.
> > >
> > > On the same note on YAGNI, I am wondering why NOT introduce an
> > > RTE_NEXT_ABI macro kind of scheme to compile out ABI breaking changes.
> > > By keeping RTE_NEXT_ABI disabled by default, and enabling it explicitly
> > > if the user wants it, we avoid waiting for one year for any ABI breaking
> > > changes. There are a lot of "fixed appliance" customers (not OS
> > > distribution driven customers) who are willing to recompile DPDK for a
> > > new feature. What are we losing with this scheme?
> >
> > RTE_NEXT_ABI is described in the ABI policy.
> > We are not doing it currently, but I think we could
> > when it is not too complicated in the code.
> >
> > The only problems I see are:
> > - more #ifdef clutter
> > - 2 binary versions to test
> > - CI and checks must handle RTE_NEXT_ABI version
>
> I think, we have two buckets of ABI breakages via RTE_NEXT_ABI
>
> 1) Changes that introduce compilation failures, like adding a new
> argument to an API or changing an API name, etc.
> 2) Structure size changes which won't affect the compilation but break
> the ABI for shared library usage.
>
> I think (1) is very disruptive, and I don't see such changes recently.
> I think we should avoid (1) for non XX.11 releases (or two- or
> three-year cycles if we decide that path).
>
> The (2) is very common due to the fact that HW features are
> evolving. I think, to address (2), we have two options:
> a) Have reserved fields and an _init() function to initialize the structures
> b) Follow YAGNI style and introduce RTE_NEXT_ABI for structure size changes.
>
> The above concerns[1] can greatly reduce with option b OR option a.
>
> [1]
> 1) more #ifdef clutter
> For option (a) this is not needed; for option (b) the clutter will be
> limited: it will be around the structure which adds the new field and
> around the FULL block where new functions are added (not inside the
> functions).
>
> 2) 2 binary versions to test
> For option (a) this is not needed; for option (b) it is limited, as only
> for new features does one need to test another binary (rather than NOT
> adding a new feature).
>
> 3) CI and checks must handle RTE_NEXT_ABI version
>
> I think, it is cheap to add this, at least for compilation test.
>
> IMO, we need to change the API break release to a 3-year kind of time
> frame to have a very good end user experience,
> allow ABI related changes to get in every release, and force a
> _rebuild_ of shared objects in the major LTS release.
>
> I think, in this major LTS version (23.11), if we can decide (a) vs (b)
> then we can align the code accordingly, esp. for (a) we need to add
> _init() functions.
>
> Thoughts?
Not much input from the mailing list. Can we discuss this at the next TB
meeting? Especially how to align with the next LTS release on
- YAGNI vs reserved fields with init()
- What it takes to extend the API breaking release beyond a year as a
first step.
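For illustration, a minimal sketch of option (b): guarding a structure size change behind RTE_NEXT_ABI so it is compiled out by default. The structure and field names are hypothetical:

#include <stdint.h>

/* Hypothetical structure whose extension is gated behind RTE_NEXT_ABI.
 * With the macro disabled (the default), the layout and size are unchanged
 * and the stable ABI is preserved; users who rebuild with RTE_NEXT_ABI
 * enabled get the new field ahead of the next ABI-breaking release. */
struct example_dev_conf {
	uint32_t flags;
	uint32_t mtu;
#ifdef RTE_NEXT_ABI
	uint32_t new_hw_feature; /* present only in the opt-in build */
#endif
};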
^ permalink raw reply [relevance 0%]
* 回复: [EXT] [PATCH v3 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB
2023-05-26 7:15 4% ` [EXT] " Akhil Goyal
@ 2023-05-29 3:06 4% ` Sunyang Wu
0 siblings, 0 replies; 200+ results
From: Sunyang Wu @ 2023-05-29 3:06 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: kai.ji
Hi Akhil,
Thank you very much for your patient guidance, the patches have been resubmitted.
Best wishes
Sunyang
> Add SM3_HMAC/SM4_CFB/SM4_OFB support in DPDK.
>
> Signed-off-by: Sunyang Wu <sunyang.wu@jaguarmicro.com>
> ---
> doc/guides/cryptodevs/features/default.ini | 3 +++
> doc/guides/rel_notes/release_23_07.rst | 5 +++++
> lib/cryptodev/rte_crypto_sym.h | 8 +++++++-
> lib/cryptodev/rte_cryptodev.c | 5 ++++-
> 4 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 523da0cfa8..8f54d4a2a5 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -64,6 +64,8 @@ ZUC EEA3 =
> SM4 ECB =
> SM4 CBC =
> SM4 CTR =
> +SM4 CFB =
> +SM4 OFB =
>
> ;
> ; Supported authentication algorithms of a default crypto driver.
> @@ -99,6 +101,7 @@ SHA3_384 HMAC =
> SHA3_512 =
> SHA3_512 HMAC =
> SM3 =
> +SM3 HMAC =
> SHAKE_128 =
> SHAKE_256 =
>
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..405b34c6d2 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,11 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added new algorithms to cryptodev.**
> +
> + * Added symmetric hash algorithm SM3-HMAC.
> + * Added symmetric cipher algorithm ShangMi 4 (SM4) in CFB and OFB modes.
> +
>
> Removed Items
> -------------
> diff --git a/lib/cryptodev/rte_crypto_sym.h
> b/lib/cryptodev/rte_crypto_sym.h index b43174dbec..428603d06e 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -172,8 +172,12 @@ enum rte_crypto_cipher_algorithm {
> /**< ShangMi 4 (SM4) algorithm in ECB mode */
> RTE_CRYPTO_CIPHER_SM4_CBC,
> /**< ShangMi 4 (SM4) algorithm in CBC mode */
> - RTE_CRYPTO_CIPHER_SM4_CTR
> + RTE_CRYPTO_CIPHER_SM4_CTR,
> /**< ShangMi 4 (SM4) algorithm in CTR mode */
> + RTE_CRYPTO_CIPHER_SM4_OFB,
> + /**< ShangMi 4 (SM4) algorithm in OFB mode */
> + RTE_CRYPTO_CIPHER_SM4_CFB
> + /**< ShangMi 4 (SM4) algorithm in CFB mode */
> };
>
> /** Cipher algorithm name strings */
> @@ -376,6 +380,8 @@ enum rte_crypto_auth_algorithm {
> /**< HMAC using 512 bit SHA3 algorithm. */
> RTE_CRYPTO_AUTH_SM3,
> /**< ShangMi 3 (SM3) algorithm */
> + RTE_CRYPTO_AUTH_SM3_HMAC,
> + /** < HMAC using ShangMi 3 (SM3) algorithm */
You cannot insert in the middle of an enum.
This will result in an ABI break.
http://mails.dpdk.org/archives/test-report/2023-May/400475.html
Please move this change to the end of the enum for this release.
You can submit a patch for the next release (which is an ABI-breaking release) to move it back.
>
> RTE_CRYPTO_AUTH_SHAKE_128,
> /**< 128 bit SHAKE algorithm. */
> diff --git a/lib/cryptodev/rte_cryptodev.c
> b/lib/cryptodev/rte_cryptodev.c index a96114b2da..4ff7046e97 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -127,7 +127,9 @@ crypto_cipher_algorithm_strings[] = {
> [RTE_CRYPTO_CIPHER_ZUC_EEA3] = "zuc-eea3",
> [RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
> [RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
> - [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
> + [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
> + [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
> + [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
> };
>
> /**
> @@ -227,6 +229,7 @@ crypto_auth_algorithm_strings[] = {
> [RTE_CRYPTO_AUTH_SNOW3G_UIA2] = "snow3g-uia2",
> [RTE_CRYPTO_AUTH_ZUC_EIA3] = "zuc-eia3",
> [RTE_CRYPTO_AUTH_SM3] = "sm3",
> + [RTE_CRYPTO_AUTH_SM3_HMAC] = "sm3-hmac",
>
> [RTE_CRYPTO_AUTH_SHAKE_128] = "shake-128",
> [RTE_CRYPTO_AUTH_SHAKE_256] = "shake-256",
> --
> 2.19.0.rc0.windows.1
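The ABI concern raised inline above comes down to enumerator renumbering; here is a minimal illustration with hypothetical names and values, not taken from the patch:

/* Before: an application built against this header passes 21 for SHAKE_128. */
enum auth_alg_v1 { ALG_SM3 = 20, ALG_SHAKE_128 = 21, ALG_SHAKE_256 = 22 };

/* After inserting SM3_HMAC in the middle, every later value shifts: the
 * same numeric 21 now means SM3_HMAC to an updated shared library. */
enum auth_alg_v2 { ALG2_SM3 = 20, ALG2_SM3_HMAC = 21,
		   ALG2_SHAKE_128 = 22, ALG2_SHAKE_256 = 23 };

/* Appending the new enumerator after the existing ones keeps the old
 * values stable, which is why the change is requested to go at the end. */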
^ permalink raw reply [relevance 4%]
* [PATCH V6 2/5] ethdev: fix skip valid port in probing callback
2023-05-27 2:11 3% ` [PATCH V6 " Huisong Li
@ 2023-05-27 2:11 2% ` Huisong Li
2023-06-06 16:26 0% ` [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Huisong Li @ 2023-05-27 2:11 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, liudongdong3,
liuyonglong, fengchengwen, lihuisong
The event callback in an application may use the macro RTE_ETH_FOREACH_DEV to
iterate over all enabled ports to do something (like verifying the port id
validity) when receiving a probing event. If the ethdev state of a port is
not RTE_ETH_DEV_UNUSED, this port will be considered a valid port.
However, this state is set to RTE_ETH_DEV_ATTACHED after pushing the probing
event. It means that the probing callback will skip this port. But this
assignment cannot be moved before the probing notification. See
commit be8cd210379a ("ethdev: fix port probing notification")
So this patch has to add a new state, RTE_ETH_DEV_ALLOCATED. Set the ethdev
state to RTE_ETH_DEV_ALLOCATED before pushing the probing event and set it to
RTE_ETH_DEV_ATTACHED after it is definitely probed. A port is valid if its
device state is 'ALLOCATED' or 'ATTACHED'.
In addition, the new state has to be placed behind 'REMOVED' to avoid an ABI
break. Fortunately, this ethdev state is internal and applications cannot
access it directly. So this patch encapsulates an API, rte_eth_dev_is_used,
for ethdev or PMDs to call, eliminating concerns about comparing this state
enum value directly.
Fixes: be8cd210379a ("ethdev: fix port probing notification")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
drivers/net/mlx5/mlx5.c | 2 +-
lib/ethdev/ethdev_driver.c | 13 ++++++++++---
lib/ethdev/ethdev_driver.h | 12 ++++++++++++
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 4 ++--
lib/ethdev/rte_ethdev.h | 4 +++-
lib/ethdev/version.map | 1 +
9 files changed, 33 insertions(+), 10 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ef7b8859d9..74ec0c88fb 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -6053,7 +6053,8 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(DEBUG, "Calling Device uninit\n");
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+
+ if (rte_eth_dev_is_used(eth_dev->state))
bnxt_dev_close_op(eth_dev);
return 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index a75fa1b7f0..881425bf83 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -3145,7 +3145,7 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev)
while (port_id < RTE_MAX_ETHPORTS) {
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
- if (dev->state != RTE_ETH_DEV_UNUSED &&
+ if (rte_eth_dev_is_used(dev->state) &&
dev->device &&
(dev->device == odev ||
(dev->device->driver &&
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index 0be1e8ca04..29e9417bea 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -50,8 +50,8 @@ eth_dev_find_free_port(void)
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
/* Using shared name field to find a free port. */
if (eth_dev_shared_data->data[i].name[0] == '\0') {
- RTE_ASSERT(rte_eth_devices[i].state ==
- RTE_ETH_DEV_UNUSED);
+ RTE_ASSERT(!rte_eth_dev_is_used(
+ rte_eth_devices[i].state));
return i;
}
}
@@ -208,11 +208,18 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+ dev->state = RTE_ETH_DEV_ALLOCATED;
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
dev->state = RTE_ETH_DEV_ATTACHED;
}
+bool rte_eth_dev_is_used(uint16_t dev_state)
+{
+ return dev_state == RTE_ETH_DEV_ALLOCATED ||
+ dev_state == RTE_ETH_DEV_ATTACHED;
+}
+
int
rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
{
@@ -221,7 +228,7 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
eth_dev_shared_data_prepare();
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+ if (rte_eth_dev_is_used(eth_dev->state))
rte_eth_dev_callback_process(eth_dev,
RTE_ETH_EVENT_DESTROY, NULL);
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 367c0c4878..d5fd6e19ba 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1583,6 +1583,18 @@ int rte_eth_dev_callback_process(struct rte_eth_dev *dev,
__rte_internal
void rte_eth_dev_probing_finish(struct rte_eth_dev *dev);
+/**
+ * Check if a Ethernet device state is used or not
+ *
+ * @param dev_state
+ * The state of the Ethernet device
+ * @return
+ * - true if the state of the Ethernet device is allocated or attached
+ * - false if this state is neither allocated nor attached
+ */
+__rte_internal
+bool rte_eth_dev_is_used(uint16_t dev_state);
+
/**
* Create memzone for HW rings.
* malloc can't be used as the physical address is needed.
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 94b8fba5d7..23270ccd73 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -164,7 +164,7 @@ rte_eth_dev_pci_generic_remove(struct rte_pci_device *pci_dev,
* eth device has been released.
*/
if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
- eth_dev->state == RTE_ETH_DEV_UNUSED)
+ !rte_eth_dev_is_used(eth_dev->state))
return 0;
if (dev_uninit) {
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index b61dae849d..88e56dd9a4 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -118,7 +118,7 @@ eth_dev_match(const struct rte_eth_dev *edev,
const struct rte_kvargs *kvlist = arg->kvlist;
unsigned int pair;
- if (edev->state == RTE_ETH_DEV_UNUSED)
+ if (!rte_eth_dev_is_used(edev->state))
return -1;
if (arg->device != NULL && arg->device != edev->device)
return -1;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index d46e74504e..c8f800bb12 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -338,7 +338,7 @@ uint16_t
rte_eth_find_next(uint16_t port_id)
{
while (port_id < RTE_MAX_ETHPORTS &&
- rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED)
+ !rte_eth_dev_is_used(rte_eth_devices[port_id].state))
port_id++;
if (port_id >= RTE_MAX_ETHPORTS)
@@ -397,7 +397,7 @@ rte_eth_dev_is_valid_port(uint16_t port_id)
int is_valid;
if (port_id >= RTE_MAX_ETHPORTS ||
- (rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED))
+ !rte_eth_dev_is_used(rte_eth_devices[port_id].state))
is_valid = 0;
else
is_valid = 1;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fe8f7466c8..d4de7942d0 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2002,10 +2002,12 @@ typedef uint16_t (*rte_tx_callback_fn)(uint16_t port_id, uint16_t queue,
enum rte_eth_dev_state {
/** Device is unused before being probed. */
RTE_ETH_DEV_UNUSED = 0,
- /** Device is attached when allocated in probing. */
+ /** Device is attached when definitely probed. */
RTE_ETH_DEV_ATTACHED,
/** Device is in removed state when plug-out is detected. */
RTE_ETH_DEV_REMOVED,
+ /** Device is allocated and is set before reporting new event. */
+ RTE_ETH_DEV_ALLOCATED,
};
struct rte_eth_dev_sriov {
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 041f0da31f..673123dfb7 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -317,6 +317,7 @@ INTERNAL {
rte_eth_dev_get_by_name;
rte_eth_dev_is_rx_hairpin_queue;
rte_eth_dev_is_tx_hairpin_queue;
+ rte_eth_dev_is_used;
rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
rte_eth_dev_internal_reset;
--
2.22.0
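As an aside, here is a sketch of the application-side pattern the commit message above describes: an RTE_ETH_EVENT_NEW callback that walks the valid ports. Before this fix, the port being probed is still RTE_ETH_DEV_UNUSED from the iterator's point of view and gets skipped. The callback body is illustrative only, not code from the series:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Illustrative NEW-event callback: iterate over the ports considered
 * valid at the time the probing notification is delivered. */
static int
new_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	uint16_t pid;

	if (event != RTE_ETH_EVENT_NEW)
		return 0;

	RTE_ETH_FOREACH_DEV(pid) {
		/* Before the fix, 'port_id' itself is not listed here yet,
		 * because its state is still RTE_ETH_DEV_UNUSED when the
		 * event is pushed. */
		printf("valid port %u (event for port %u)\n", pid, port_id);
	}
	return 0;
}

/* Registration for all ports, e.g. at application init time. */
static void
register_new_event_cb(void)
{
	rte_eth_dev_callback_register(RTE_ETH_ALL, RTE_ETH_EVENT_NEW,
				      new_event_cb, NULL);
}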
^ permalink raw reply [relevance 2%]
* [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port
[not found] <20220825024425.10534-1-lihuisong@huawei.com>
@ 2023-05-27 2:11 3% ` Huisong Li
2023-05-27 2:11 2% ` [PATCH V6 2/5] ethdev: fix skip valid port in probing callback Huisong Li
2023-06-06 16:26 0% ` [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port Ferruh Yigit
1 sibling, 2 replies; 200+ results
From: Huisong Li @ 2023-05-27 2:11 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, liudongdong3,
liuyonglong, fengchengwen, lihuisong
This patchset fixes some bugs and supports attaching and detaching ports
in primary and secondary processes.
---
-v6: adjust rte_eth_dev_is_used position based on alphabetical order
in version.map
-v5: move 'ALLOCATED' state to behind 'REMOVED' to avoid an ABI break.
-v4: fix a misspelling.
-v3:
#1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
    for other bus types.
#2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
    the problem in patch 2/5.
-v2: resend due to CI unexplained failure.
Huisong Li (5):
drivers/bus: restore driver assignment at front of probing
ethdev: fix skip valid port in probing callback
app/testpmd: check the validity of the port
app/testpmd: add attach and detach port for multiple process
app/testpmd: stop forwarding in new or destroy event
app/test-pmd/testpmd.c | 47 +++++++++++++++---------
app/test-pmd/testpmd.h | 1 -
drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
drivers/bus/fslmc/fslmc_bus.c | 8 +++-
drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
drivers/bus/pci/pci_common.c | 9 ++++-
drivers/bus/vdev/vdev.c | 10 ++++-
drivers/bus/vmbus/vmbus_common.c | 9 ++++-
drivers/net/bnxt/bnxt_ethdev.c | 3 +-
drivers/net/bonding/bonding_testpmd.c | 1 -
drivers/net/mlx5/mlx5.c | 2 +-
lib/ethdev/ethdev_driver.c | 13 +++++--
lib/ethdev/ethdev_driver.h | 12 ++++++
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 4 +-
lib/ethdev/rte_ethdev.h | 4 +-
lib/ethdev/version.map | 1 +
19 files changed, 114 insertions(+), 44 deletions(-)
--
2.22.0
^ permalink raw reply [relevance 3%]
* RE: [PATCH v6 04/15] graph: add get/set graph worker model APIs
2023-05-24 6:08 3% ` Jerin Jacob
@ 2023-05-26 9:58 0% ` Yan, Zhirun
0 siblings, 0 replies; 200+ results
From: Yan, Zhirun @ 2023-05-26 9:58 UTC (permalink / raw)
To: Jerin Jacob
Cc: dev, jerinj, kirankumark, ndabilpuram, stephen, pbhagavatula,
Liang, Cunming, Wang, Haiyue
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, May 24, 2023 2:09 PM
> To: Yan, Zhirun <zhirun.yan@intel.com>
> Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> ndabilpuram@marvell.com; stephen@networkplumber.org;
> pbhagavatula@marvell.com; Liang, Cunming <cunming.liang@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>
> Subject: Re: [PATCH v6 04/15] graph: add get/set graph worker model APIs
>
> On Tue, May 9, 2023 at 11:34 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
> >
> > Add new get/set APIs to configure graph worker model which is used to
> > determine which model will be chosen.
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > ---
> > diff --git a/lib/graph/rte_graph_worker.c
> > b/lib/graph/rte_graph_worker.c new file mode 100644 index
> > 0000000000..cabc101262
> > --- /dev/null
> > +++ b/lib/graph/rte_graph_worker.c
> > @@ -0,0 +1,54 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2023 Intel Corporation */
> > +
> > +#include "rte_graph_worker_common.h"
> > +
> > +RTE_DEFINE_PER_LCORE(enum rte_graph_worker_model, worker_model) =
> > +RTE_GRAPH_MODEL_DEFAULT;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> > +notice
> > + * Set the graph worker model
>
> Just declaring this at the top of the header file is enough to avoid duplicating it in
> every function, as all functions in the header are experimental. See lib/graph/rte_graph.h
>
Got it, I will do in next version.
>
> > + *
> > + * @note This function does not perform any locking, and is only safe to call
> > + * before graph running.
> > + *
> > + * @param name
> > + * Name of the graph worker model.
> > + *
> > + * @return
> > + * 0 on success, -1 otherwise.
> > + */
> > +int
> > +rte_graph_worker_model_set(enum rte_graph_worker_model model) {
> > + if (model >= RTE_GRAPH_MODEL_LIST_END)
> > + goto fail;
> > +
> > + RTE_PER_LCORE(worker_model) = model;
>
> Application needs to set this per core . Right?
Yes. Each worker needs to know its model.
> Are we anticipating a case where one core runs one model and another core
> runs with another model?
> If not, OR it is not practically possible, then, to make the application programmer's
> life easy, we could loop through all lcores and set it on all of them instead of the
> application setting it on each one separately.
>
For current rtc and dispatch models, it is not necessary.
To some extent the models are mutually exclusive.
For this case:
Core 1: A->B->C (RTC)
Core 2: A' (DISPATCH)
Core 3: B' (DISPATCH)
Core 4: C' (DISPATCH)
It may change the graph topo, or need some prerequisites like RSS before input node A.
BTW, if there are requirements for more models in the future, we could add some attributes for graph, lcore and node.
Like taint/affinity for node and model. Then we could allow a node to appeal/repel a set of models.
I will change to put the model into struct rte_graph as you suggested in patch 12 for this release.
>
> > + return 0;
> > +
> > +fail:
> > + RTE_PER_LCORE(worker_model) = RTE_GRAPH_MODEL_DEFAULT;
> > + return -1;
> > +}
> > +
>
> > +/** Graph worker models */
> > +enum rte_graph_worker_model {
> > + RTE_GRAPH_MODEL_DEFAULT,
>
> Add Doxygen comment
> > + RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT,
>
>
> Add Doxygen comment to explain what this mode does.
>
>
> > + RTE_GRAPH_MODEL_MCORE_DISPATCH,
>
> Add Doxygen comment to explain what this mode does.
>
Ok, I will add Doxygen comments for these models.
> > + RTE_GRAPH_MODEL_LIST_END
>
> This can break the ABI if we add one in the middle. Please remove this.
> See lib/cryptodev for
> how to handle _END symbols.
Yes, I will remove this.
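To make the per-lcore point above concrete, here is a rough sketch of how an application might set the model on each worker core, using the setter and enumerator names as they appear in the quoted series under review (they may change before merge):

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_graph_worker.h>

/* Illustrative per-worker entry point: each worker sets its own model,
 * since the setter in the series under review stores it per lcore,
 * before entering its graph walk loop. */
static int
graph_worker_main(void *arg __rte_unused)
{
	rte_graph_worker_model_set(RTE_GRAPH_MODEL_RTC);
	/* ... the rte_graph_walk() loop would follow here ... */
	return 0;
}

static void
launch_graph_workers(void)
{
	unsigned int lcore_id;

	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(graph_worker_main, NULL, lcore_id);
}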
^ permalink raw reply [relevance 0%]
* Re: [PATCH] ethdev: validate reserved fields
2023-05-25 20:39 8% [PATCH] ethdev: validate reserved fields Stephen Hemminger
@ 2023-05-26 8:15 0% ` Bruce Richardson
2023-06-06 15:24 3% ` Ferruh Yigit
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-05-26 8:15 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, thomas, Ferruh Yigit, Andrew Rybchenko
On Thu, May 25, 2023 at 01:39:42PM -0700, Stephen Hemminger wrote:
> The various reserved fields added to ethdev could not be
> safely used for future extensions because they were never
> checked on input. Therefore ABI would be broken if these
> fields were put to use in a future DPDK release.
>
> Fixes: 436b3a6b6e62 ("ethdev: reserve space in main structs for extension")
> Cc: thomas@monjalon.net
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> lib/ethdev/rte_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 41 insertions(+)
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 0%]
* RE: [EXT] [PATCH v3 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB
@ 2023-05-26 7:15 4% ` Akhil Goyal
2023-05-29 3:06 4% ` 回复: " Sunyang Wu
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2023-05-26 7:15 UTC (permalink / raw)
To: Sunyang Wu, dev; +Cc: kai.ji
> Add SM3_HMAC/SM4_CFB/SM4_OFB support in DPDK.
>
> Signed-off-by: Sunyang Wu <sunyang.wu@jaguarmicro.com>
> ---
> doc/guides/cryptodevs/features/default.ini | 3 +++
> doc/guides/rel_notes/release_23_07.rst | 5 +++++
> lib/cryptodev/rte_crypto_sym.h | 8 +++++++-
> lib/cryptodev/rte_cryptodev.c | 5 ++++-
> 4 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 523da0cfa8..8f54d4a2a5 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -64,6 +64,8 @@ ZUC EEA3 =
> SM4 ECB =
> SM4 CBC =
> SM4 CTR =
> +SM4 CFB =
> +SM4 OFB =
>
> ;
> ; Supported authentication algorithms of a default crypto driver.
> @@ -99,6 +101,7 @@ SHA3_384 HMAC =
> SHA3_512 =
> SHA3_512 HMAC =
> SM3 =
> +SM3 HMAC =
> SHAKE_128 =
> SHAKE_256 =
>
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..405b34c6d2 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,11 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added new algorithms to cryptodev.**
> +
> + * Added symmetric hash algorithm SM3-HMAC.
> + * Added symmetric cipher algorithm ShangMi 4 (SM4) in CFB and OFB modes.
> +
>
> Removed Items
> -------------
> diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
> index b43174dbec..428603d06e 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -172,8 +172,12 @@ enum rte_crypto_cipher_algorithm {
> /**< ShangMi 4 (SM4) algorithm in ECB mode */
> RTE_CRYPTO_CIPHER_SM4_CBC,
> /**< ShangMi 4 (SM4) algorithm in CBC mode */
> - RTE_CRYPTO_CIPHER_SM4_CTR
> + RTE_CRYPTO_CIPHER_SM4_CTR,
> /**< ShangMi 4 (SM4) algorithm in CTR mode */
> + RTE_CRYPTO_CIPHER_SM4_OFB,
> + /**< ShangMi 4 (SM4) algorithm in OFB mode */
> + RTE_CRYPTO_CIPHER_SM4_CFB
> + /**< ShangMi 4 (SM4) algorithm in CFB mode */
> };
>
> /** Cipher algorithm name strings */
> @@ -376,6 +380,8 @@ enum rte_crypto_auth_algorithm {
> /**< HMAC using 512 bit SHA3 algorithm. */
> RTE_CRYPTO_AUTH_SM3,
> /**< ShangMi 3 (SM3) algorithm */
> + RTE_CRYPTO_AUTH_SM3_HMAC,
> + /** < HMAC using ShangMi 3 (SM3) algorithm */
You cannot insert in the middle of an enum.
This will result in an ABI break.
http://mails.dpdk.org/archives/test-report/2023-May/400475.html
Please move this change to the end of the enum for this release.
You can submit a patch for the next release (which is an ABI-breaking release) to move it back.
>
> RTE_CRYPTO_AUTH_SHAKE_128,
> /**< 128 bit SHAKE algorithm. */
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index a96114b2da..4ff7046e97 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -127,7 +127,9 @@ crypto_cipher_algorithm_strings[] = {
> [RTE_CRYPTO_CIPHER_ZUC_EEA3] = "zuc-eea3",
> [RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
> [RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
> - [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
> + [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
> + [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
> + [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
> };
>
> /**
> @@ -227,6 +229,7 @@ crypto_auth_algorithm_strings[] = {
> [RTE_CRYPTO_AUTH_SNOW3G_UIA2] = "snow3g-uia2",
> [RTE_CRYPTO_AUTH_ZUC_EIA3] = "zuc-eia3",
> [RTE_CRYPTO_AUTH_SM3] = "sm3",
> + [RTE_CRYPTO_AUTH_SM3_HMAC] = "sm3-hmac",
>
> [RTE_CRYPTO_AUTH_SHAKE_128] = "shake-128",
> [RTE_CRYPTO_AUTH_SHAKE_256] = "shake-256",
> --
> 2.19.0.rc0.windows.1
^ permalink raw reply [relevance 4%]
* [PATCH v1 0/4] bbdev: API extension for 23.11
@ 2023-05-25 23:23 4% Nicolas Chautru
0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2023-05-25 23:23 UTC (permalink / raw)
To: dev, maxime.coquelin
Cc: trix, hemant.agrawal, david.marchand, hernan.vargas, Nicolas Chautru
Hi,
Including v1 for extending the bbdev API for 23.11.
The new MLD-TS operation is expected to not be ABI compatible; the other
ones should not break ABI.
I will send a deprecation notice in parallel.
This introduces a new operation (on top of FEC and FFT) to
support equalization for MLD-TS. There are also more modular
API extensions for the existing FFT and FEC operations.
Thanks
Nic
Nicolas Chautru (4):
bbdev: add operation type for MLDTS procession
bbdev: add new capabilities for FFT processing
bbdev: add new capability for FEC 5G UL processing
bbdev: improving error handling for queue configuration
doc/guides/prog_guide/bbdev.rst | 83 ++++++++++++++++++
lib/bbdev/rte_bbdev.c | 26 +++---
lib/bbdev/rte_bbdev.h | 76 +++++++++++++++++
lib/bbdev/rte_bbdev_op.h | 143 +++++++++++++++++++++++++++++++-
lib/bbdev/version.map | 5 ++
5 files changed, 320 insertions(+), 13 deletions(-)
--
2.34.1
^ permalink raw reply [relevance 4%]
* [PATCH] ethdev: validate reserved fields
@ 2023-05-25 20:39 8% Stephen Hemminger
2023-05-26 8:15 0% ` Bruce Richardson
2023-06-06 15:24 3% ` Ferruh Yigit
0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2023-05-25 20:39 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, thomas, Ferruh Yigit, Andrew Rybchenko
The various reserved fields added to ethdev could not be
safely used for future extensions because they were never
checked on input. Therefore ABI would be broken if these
fields were put to use in a future DPDK release.
Fixes: 436b3a6b6e62 ("ethdev: reserve space in main structs for extension")
Cc: thomas@monjalon.net
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/ethdev/rte_ethdev.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d0325568322..4f937a1914c9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1228,6 +1228,25 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
/* Backup mtu for rollback */
old_mtu = dev->data->mtu;
+ /* fields must be zero to reserve them for future ABI changes */
+ if (dev_conf->rxmode.reserved_64s[0] != 0 ||
+ dev_conf->rxmode.reserved_64s[1] != 0 ||
+ dev_conf->rxmode.reserved_ptrs[0] != NULL ||
+ dev_conf->rxmode.reserved_ptrs[1] != NULL) {
+ RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n");
+ ret = -EINVAL;
+ goto rollback;
+ }
+
+ if (dev_conf->txmode.reserved_64s[0] != 0 ||
+ dev_conf->txmode.reserved_64s[1] != 0 ||
+ dev_conf->txmode.reserved_ptrs[0] != NULL ||
+ dev_conf->txmode.reserved_ptrs[1] != NULL) {
+ RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n");
+ ret = -EINVAL;
+ goto rollback;
+ }
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
goto rollback;
@@ -2003,6 +2022,14 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
if (*dev->dev_ops->rx_queue_setup == NULL)
return -ENOTSUP;
+ if (rx_conf->reserved_64s[0] != 0 ||
+ rx_conf->reserved_64s[1] != 0 ||
+ rx_conf->reserved_ptrs[0] != NULL ||
+ rx_conf->reserved_ptrs[1] != NULL) {
+ RTE_ETHDEV_LOG(ERR, "Rx conf reserved fields not zero\n");
+ return -EINVAL;
+ }
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -2206,6 +2233,12 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (conf->reserved != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx hairpin reserved field not zero\n");
+ return -EINVAL;
+ }
+
ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
if (ret != 0)
return ret;
@@ -2301,6 +2334,14 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
if (*dev->dev_ops->tx_queue_setup == NULL)
return -ENOTSUP;
+ if (tx_conf->reserved_64s[0] != 0 ||
+ tx_conf->reserved_64s[1] != 0 ||
+ tx_conf->reserved_ptrs[0] != NULL ||
+ tx_conf->reserved_ptrs[1] != NULL) {
+ RTE_ETHDEV_LOG(ERR, "Tx conf reserved fields not zero\n");
+ return -EINVAL;
+ }
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
--
2.39.2
^ permalink raw reply [relevance 8%]
* RE: [EXT] [PATCH v2 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB
@ 2023-05-25 14:48 3% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2023-05-25 14:48 UTC (permalink / raw)
To: Sunyang Wu, dev; +Cc: kai.ji
> Add SM3_HMAC/SM4_CFB/SM4_OFB support in DPDK.
>
> Signed-off-by: Sunyang Wu <sunyang.wu@jaguarmicro.com>
> ---
> doc/guides/cryptodevs/features/default.ini | 3 +++
> doc/guides/rel_notes/release_23_07.rst | 5 +++++
> lib/cryptodev/rte_crypto_sym.h | 8 +++++++-
> lib/cryptodev/rte_cryptodev.c | 12 +++++++++---
> 4 files changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 523da0cfa8..8f54d4a2a5 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -64,6 +64,8 @@ ZUC EEA3 =
> SM4 ECB =
> SM4 CBC =
> SM4 CTR =
> +SM4 CFB =
> +SM4 OFB =
>
> ;
> ; Supported authentication algorithms of a default crypto driver.
> @@ -99,6 +101,7 @@ SHA3_384 HMAC =
> SHA3_512 =
> SHA3_512 HMAC =
> SM3 =
> +SM3 HMAC =
> SHAKE_128 =
> SHAKE_256 =
>
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..405b34c6d2 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -55,6 +55,11 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added new algorithms to cryptodev.**
> +
> + * Added symmetric hash algorithm SM3-HMAC.
> + * Added symmetric cipher algorithm ShangMi 4 (SM4) in CFB and OFB modes.
> +
>
> Removed Items
> -------------
> diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
> index b43174dbec..428603d06e 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -172,8 +172,12 @@ enum rte_crypto_cipher_algorithm {
> /**< ShangMi 4 (SM4) algorithm in ECB mode */
> RTE_CRYPTO_CIPHER_SM4_CBC,
> /**< ShangMi 4 (SM4) algorithm in CBC mode */
> - RTE_CRYPTO_CIPHER_SM4_CTR
> + RTE_CRYPTO_CIPHER_SM4_CTR,
> /**< ShangMi 4 (SM4) algorithm in CTR mode */
> + RTE_CRYPTO_CIPHER_SM4_OFB,
> + /**< ShangMi 4 (SM4) algorithm in OFB mode */
> + RTE_CRYPTO_CIPHER_SM4_CFB
> + /**< ShangMi 4 (SM4) algorithm in CFB mode */
> };
>
> /** Cipher algorithm name strings */
> @@ -376,6 +380,8 @@ enum rte_crypto_auth_algorithm {
> /**< HMAC using 512 bit SHA3 algorithm. */
> RTE_CRYPTO_AUTH_SM3,
> /**< ShangMi 3 (SM3) algorithm */
> + RTE_CRYPTO_AUTH_SM3_HMAC,
> + /** < HMAC using ShangMi 3 (SM3) algorithm */
>
> RTE_CRYPTO_AUTH_SHAKE_128,
> /**< 128 bit SHAKE algorithm. */
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index a96114b2da..3e5e65dc58 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -94,7 +94,9 @@ rte_crypto_cipher_algorithm_strings[] = {
> [RTE_CRYPTO_CIPHER_ZUC_EEA3] = "zuc-eea3",
> [RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
> [RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
> - [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
> + [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
> + [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
> + [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
> };
>
> /**
> @@ -127,7 +129,9 @@ crypto_cipher_algorithm_strings[] = {
> [RTE_CRYPTO_CIPHER_ZUC_EEA3] = "zuc-eea3",
> [RTE_CRYPTO_CIPHER_SM4_ECB] = "sm4-ecb",
> [RTE_CRYPTO_CIPHER_SM4_CBC] = "sm4-cbc",
> - [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr"
> + [RTE_CRYPTO_CIPHER_SM4_CTR] = "sm4-ctr",
> + [RTE_CRYPTO_CIPHER_SM4_CFB] = "sm4-cfb",
> + [RTE_CRYPTO_CIPHER_SM4_OFB] = "sm4-ofb"
> };
>
> /**
> @@ -182,7 +186,8 @@ rte_crypto_auth_algorithm_strings[] = {
> [RTE_CRYPTO_AUTH_KASUMI_F9] = "kasumi-f9",
> [RTE_CRYPTO_AUTH_SNOW3G_UIA2] = "snow3g-uia2",
> [RTE_CRYPTO_AUTH_ZUC_EIA3] = "zuc-eia3",
> - [RTE_CRYPTO_AUTH_SM3] = "sm3"
> + [RTE_CRYPTO_AUTH_SM3] = "sm3",
> + [RTE_CRYPTO_AUTH_SM3_HMAC] = "sm3-hmac"
> };
>
> /**
> @@ -227,6 +232,7 @@ crypto_auth_algorithm_strings[] = {
> [RTE_CRYPTO_AUTH_SNOW3G_UIA2] = "snow3g-uia2",
> [RTE_CRYPTO_AUTH_ZUC_EIA3] = "zuc-eia3",
> [RTE_CRYPTO_AUTH_SM3] = "sm3",
> + [RTE_CRYPTO_AUTH_SM3_HMAC] = "sm3",
>
> [RTE_CRYPTO_AUTH_SHAKE_128] = "shake-128",
> [RTE_CRYPTO_AUTH_SHAKE_256] = "shake-256",
> --
I asked you to update only the crypto_auth_algorithm_strings
and crypto_cipher_algorithm_strings arrays, and not the rte_ ones.
Changing the rte_ ones will create an ABI break, and these are deprecated
arrays, so they should not be changed.
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
2023-05-24 22:56 0% ` Honnappa Nagarahalli
@ 2023-05-25 0:02 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-05-25 0:02 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: David Marchand, dev, Ruifeng Wang, thomas, stephen, mb, Ferruh Yigit, nd
Morten,
David and Honnappa are discussing the /* NOTE: */ comments that were
added. If the three of you could come to a conclusion about keeping or
removing them, it would be appreciated.
Thanks!
On Wed, May 24, 2023 at 10:56:01PM +0000, Honnappa Nagarahalli wrote:
>
>
> > -----Original Message-----
> > From: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Sent: Wednesday, May 24, 2023 5:51 PM
> > To: David Marchand <david.marchand@redhat.com>
> > Cc: dev@dpdk.org; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> > Ruifeng Wang <Ruifeng.Wang@arm.com>; thomas@monjalon.net;
> > stephen@networkplumber.org; mb@smartsharesystems.com; Ferruh Yigit
> > <ferruh.yigit@amd.com>
> > Subject: Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
> >
> > On Wed, May 24, 2023 at 10:06:24PM +0200, David Marchand wrote:
> > > On Wed, May 24, 2023 at 5:47 PM Tyler Retzlaff
> > > <roretzla@linux.microsoft.com> wrote:
> > > > On Wed, May 24, 2023 at 02:40:43PM +0200, David Marchand wrote:
> > > > > Hello Tyler,
> > > > >
> > > > > On Thu, Mar 23, 2023 at 11:54 PM Tyler Retzlaff
> > > > > <roretzla@linux.microsoft.com> wrote:
> > > > > >
> > > > > > Replace the use of rte_atomic.h types and functions, instead use
> > > > > > GCC supplied C++11 memory model builtins.
> > > > > >
> > > > > > This series covers the libraries and drivers that are built on Windows.
> > > > > >
> > > > > > The code has be converted to use the __atomic builtins but there
> > > > > > are additional during conversion i notice that there may be some
> > > > > > issues that need to be addressed.
> > > > > >
> > > > > > I'll comment in the patches where my concerns are so the
> > > > > > maintainers may comment.
> > > > > >
> > > > > > v3:
> > > > > > * style, don't use c99 comments
> > > > > >
> > > > > > v2:
> > > > > > * comment code where optimizations may be possible now that
> > memory
> > > > > > order can be specified.
> > > > > > * comment code where operations should potentially be atomic so that
> > > > > > maintainers can review.
> > > > > > * change a couple of variables labeled as counters to be unsigned.
> > > > > >
> > > > > > Tyler Retzlaff (7):
> > > > > > ring: replace rte atomics with GCC builtin atomics
> > > > > > stack: replace rte atomics with GCC builtin atomics
> > > > > > dma/idxd: replace rte atomics with GCC builtin atomics
> > > > > > net/ice: replace rte atomics with GCC builtin atomics
> > > > > > net/ixgbe: replace rte atomics with GCC builtin atomics
> > > > > > net/null: replace rte atomics with GCC builtin atomics
> > > > > > net/ring: replace rte atomics with GCC builtin atomics
> > > > > >
> > > > > > drivers/dma/idxd/idxd_internal.h | 3 +--
> > > > > > drivers/dma/idxd/idxd_pci.c | 8 +++++---
> > > > > > drivers/net/ice/ice_dcf.c | 1 -
> > > > > > drivers/net/ice/ice_dcf_ethdev.c | 1 -
> > > > > > drivers/net/ice/ice_ethdev.c | 12 ++++++++----
> > > > > > drivers/net/ixgbe/ixgbe_bypass.c | 1 -
> > > > > > drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++------
> > > > > > drivers/net/ixgbe/ixgbe_ethdev.h | 3 ++-
> > > > > > drivers/net/ixgbe/ixgbe_flow.c | 1 -
> > > > > > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > > > > > drivers/net/null/rte_eth_null.c | 28
> > > > > > ++++++++++++++++++---------- drivers/net/ring/rte_eth_ring.c | 26
> > ++++++++++++++++----------
> > > > > > lib/ring/rte_ring_core.h | 1 -
> > > > > > lib/ring/rte_ring_generic_pvt.h | 12 ++++++++----
> > > > > > lib/stack/rte_stack_lf_generic.h | 16 +++++++++-------
> > > > > > 15 files changed, 79 insertions(+), 53 deletions(-)
> > > > > >
> > > > >
> > > > > There is still some code using the DPDK "legacy" atomic API, but I
> > > > > guess this will be converted later.
> > > >
> > > > Yes, it will be converted later.
> > > >
> > > > If I did it correctly... the series was an attempt to move away from
> > > > the legacy API where there was a dependency on EAL that would change
> > > > when moving to stdatomic. I'm hoping that the remaining uses of the
> > > > legacy API are not sensitive to the theoretical ABI surface changing
> > > > when that move is complete.
> > >
> > > Ok.
> > >
> > >
> > > > > As you proposed, I dropped patch 1 on the ring library (waiting
> > > > > for ARM to provide an alternative) and applied this series, thanks.
> > > > >
> > > > > Note: Thomas, Ferruh, we will have to be careful when merging
> > > > > subtrees to make sure we are not reintroducing those again (like
> > > > > for example net/ice).
> > >
> > > Well, I have some second thought about this series so I did not push
> > > it to dpdk.org yet.
> >
> > Understood. It's very important to have these reviewed well so no objection just
> > hope we can get them reviewed properly soon.
> >
> > > Drivers maintainers were not copied so I would like another pair of
> > > eyes on the series: ideally no /* Note: */ should be left when merging
> > > those patches.
> >
> > The /* Note: */ was explicitly requested by other reviewers as they were
> > concerned we would lose track of opportunities to weaken ordering after
> > switching from __sync to __atomic.
> Note that some of the changes that I checked are in the control plane. While it is good to optimize those, the benefits might not be much. The presence of SEQ_CST can also act as a note.
>
> >
> > Is your request that the comments now be removed?
> >
> > Thanks!
> >
> > > I'll reply individually on the patches.
> > >
> > >
> > > --
> > > David Marchand
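For context on what those /* Note: */ markers flag, here is a generic before-and-after sketch of this kind of conversion, using an illustrative counter rather than a line taken from the series:

#include <stdint.h>

static uint64_t pkt_drops;

/* The legacy rte_atomic64_inc() helper maps to a __sync builtin and is
 * always sequentially consistent. The converted style below uses the
 * GCC C11 builtin, which makes the ordering explicit. */
static inline void
count_drop(void)
{
	__atomic_fetch_add(&pkt_drops, 1, __ATOMIC_SEQ_CST);
	/* Note: for a statistics counter, __ATOMIC_RELAXED would likely be
	 * sufficient; keeping SEQ_CST preserves the old behaviour, and the
	 * comment marks the possible relaxation for maintainers to review. */
}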
^ permalink raw reply [relevance 0%]
* RE: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
2023-05-24 22:50 0% ` Tyler Retzlaff
@ 2023-05-24 22:56 0% ` Honnappa Nagarahalli
2023-05-25 0:02 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2023-05-24 22:56 UTC (permalink / raw)
To: Tyler Retzlaff, David Marchand
Cc: dev, Ruifeng Wang, thomas, stephen, mb, Ferruh Yigit, nd, nd
> -----Original Message-----
> From: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Sent: Wednesday, May 24, 2023 5:51 PM
> To: David Marchand <david.marchand@redhat.com>
> Cc: dev@dpdk.org; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> Ruifeng Wang <Ruifeng.Wang@arm.com>; thomas@monjalon.net;
> stephen@networkplumber.org; mb@smartsharesystems.com; Ferruh Yigit
> <ferruh.yigit@amd.com>
> Subject: Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
>
> On Wed, May 24, 2023 at 10:06:24PM +0200, David Marchand wrote:
> > On Wed, May 24, 2023 at 5:47 PM Tyler Retzlaff
> > <roretzla@linux.microsoft.com> wrote:
> > > On Wed, May 24, 2023 at 02:40:43PM +0200, David Marchand wrote:
> > > > Hello Tyler,
> > > >
> > > > On Thu, Mar 23, 2023 at 11:54 PM Tyler Retzlaff
> > > > <roretzla@linux.microsoft.com> wrote:
> > > > >
> > > > > Replace the use of rte_atomic.h types and functions, instead use
> > > > > GCC supplied C++11 memory model builtins.
> > > > >
> > > > > This series covers the libraries and drivers that are built on Windows.
> > > > >
> > > > > The code has be converted to use the __atomic builtins but there
> > > > > are additional during conversion i notice that there may be some
> > > > > issues that need to be addressed.
> > > > >
> > > > > I'll comment in the patches where my concerns are so the
> > > > > maintainers may comment.
> > > > >
> > > > > v3:
> > > > > * style, don't use c99 comments
> > > > >
> > > > > v2:
> > > > > * comment code where optimizations may be possible now that
> memory
> > > > > order can be specified.
> > > > > * comment code where operations should potentially be atomic so that
> > > > > maintainers can review.
> > > > > * change a couple of variables labeled as counters to be unsigned.
> > > > >
> > > > > Tyler Retzlaff (7):
> > > > > ring: replace rte atomics with GCC builtin atomics
> > > > > stack: replace rte atomics with GCC builtin atomics
> > > > > dma/idxd: replace rte atomics with GCC builtin atomics
> > > > > net/ice: replace rte atomics with GCC builtin atomics
> > > > > net/ixgbe: replace rte atomics with GCC builtin atomics
> > > > > net/null: replace rte atomics with GCC builtin atomics
> > > > > net/ring: replace rte atomics with GCC builtin atomics
> > > > >
> > > > > drivers/dma/idxd/idxd_internal.h | 3 +--
> > > > > drivers/dma/idxd/idxd_pci.c | 8 +++++---
> > > > > drivers/net/ice/ice_dcf.c | 1 -
> > > > > drivers/net/ice/ice_dcf_ethdev.c | 1 -
> > > > > drivers/net/ice/ice_ethdev.c | 12 ++++++++----
> > > > > drivers/net/ixgbe/ixgbe_bypass.c | 1 -
> > > > > drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++------
> > > > > drivers/net/ixgbe/ixgbe_ethdev.h | 3 ++-
> > > > > drivers/net/ixgbe/ixgbe_flow.c | 1 -
> > > > > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > > > > drivers/net/null/rte_eth_null.c | 28
> > > > > ++++++++++++++++++---------- drivers/net/ring/rte_eth_ring.c | 26
> ++++++++++++++++----------
> > > > > lib/ring/rte_ring_core.h | 1 -
> > > > > lib/ring/rte_ring_generic_pvt.h | 12 ++++++++----
> > > > > lib/stack/rte_stack_lf_generic.h | 16 +++++++++-------
> > > > > 15 files changed, 79 insertions(+), 53 deletions(-)
> > > > >
> > > >
> > > > There is still some code using the DPDK "legacy" atomic API, but I
> > > > guess this will be converted later.
> > >
> > > Yes, it will be converted later.
> > >
> > > If I did it correctly... the series was an attempt to move away from
> > > the legacy API where there was a dependency on EAL that would change
> > > when moving to stdatomic. I'm hoping that the remaining uses of the
> > > legacy API are not sensitive to the theoretical ABI surface changing
> > > when that move is complete.
> >
> > Ok.
> >
> >
> > > > As you proposed, I dropped patch 1 on the ring library (waiting
> > > > for ARM to provide an alternative) and applied this series, thanks.
> > > >
> > > > Note: Thomas, Ferruh, we will have to be careful when merging
> > > > subtrees to make sure we are not reintroducing those again (like
> > > > for example net/ice).
> >
> > Well, I have some second thought about this series so I did not push
> > it to dpdk.org yet.
>
> Understood. It's very important to have these reviewed well so no objection just
> hope we can get them reviewed properly soon.
>
> > Drivers maintainers were not copied so I would like another pair of
> > eyes on the series: ideally no /* Note: */ should be left when merging
> > those patches.
>
> The /* Note: */ was explicitly requested by other reviewers as they were
> concerned we would lose track of opportunities to weaken ordering after
> switching from __sync to __atomic.
Note that some of the changes that I checked are in the control plane. While it is good to optimize those, the benefits might not be much. The presence of SEQ_CST can also act as a note.
>
> Is your request that the comments now be removed?
>
> Thanks!
>
> > I'll reply individually on the patches.
> >
> >
> > --
> > David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
2023-05-24 20:06 0% ` David Marchand
@ 2023-05-24 22:50 0% ` Tyler Retzlaff
2023-05-24 22:56 0% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-05-24 22:50 UTC (permalink / raw)
To: David Marchand
Cc: dev, Honnappa.Nagarahalli, Ruifeng.Wang, thomas, stephen, mb,
Ferruh Yigit
On Wed, May 24, 2023 at 10:06:24PM +0200, David Marchand wrote:
> On Wed, May 24, 2023 at 5:47 PM Tyler Retzlaff
> <roretzla@linux.microsoft.com> wrote:
> > On Wed, May 24, 2023 at 02:40:43PM +0200, David Marchand wrote:
> > > Hello Tyler,
> > >
> > > On Thu, Mar 23, 2023 at 11:54 PM Tyler Retzlaff
> > > <roretzla@linux.microsoft.com> wrote:
> > > >
> > > > Replace the use of rte_atomic.h types and functions, instead use GCC
> > > > supplied C++11 memory model builtins.
> > > >
> > > > This series covers the libraries and drivers that are built on Windows.
> > > >
> > > > The code has be converted to use the __atomic builtins but there are
> > > > additional during conversion i notice that there may be some issues
> > > > that need to be addressed.
> > > >
> > > > I'll comment in the patches where my concerns are so the maintainers
> > > > may comment.
> > > >
> > > > v3:
> > > > * style, don't use c99 comments
> > > >
> > > > v2:
> > > > * comment code where optimizations may be possible now that memory
> > > > order can be specified.
> > > > * comment code where operations should potentially be atomic so that
> > > > maintainers can review.
> > > > * change a couple of variables labeled as counters to be unsigned.
> > > >
> > > > Tyler Retzlaff (7):
> > > > ring: replace rte atomics with GCC builtin atomics
> > > > stack: replace rte atomics with GCC builtin atomics
> > > > dma/idxd: replace rte atomics with GCC builtin atomics
> > > > net/ice: replace rte atomics with GCC builtin atomics
> > > > net/ixgbe: replace rte atomics with GCC builtin atomics
> > > > net/null: replace rte atomics with GCC builtin atomics
> > > > net/ring: replace rte atomics with GCC builtin atomics
> > > >
> > > > drivers/dma/idxd/idxd_internal.h | 3 +--
> > > > drivers/dma/idxd/idxd_pci.c | 8 +++++---
> > > > drivers/net/ice/ice_dcf.c | 1 -
> > > > drivers/net/ice/ice_dcf_ethdev.c | 1 -
> > > > drivers/net/ice/ice_ethdev.c | 12 ++++++++----
> > > > drivers/net/ixgbe/ixgbe_bypass.c | 1 -
> > > > drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++------
> > > > drivers/net/ixgbe/ixgbe_ethdev.h | 3 ++-
> > > > drivers/net/ixgbe/ixgbe_flow.c | 1 -
> > > > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > > > drivers/net/null/rte_eth_null.c | 28 ++++++++++++++++++----------
> > > > drivers/net/ring/rte_eth_ring.c | 26 ++++++++++++++++----------
> > > > lib/ring/rte_ring_core.h | 1 -
> > > > lib/ring/rte_ring_generic_pvt.h | 12 ++++++++----
> > > > lib/stack/rte_stack_lf_generic.h | 16 +++++++++-------
> > > > 15 files changed, 79 insertions(+), 53 deletions(-)
> > > >
> > >
> > > There is still some code using the DPDK "legacy" atomic API, but I
> > > guess this will be converted later.
> >
> > Yes, it will be converted later.
> >
> > If I did it correctly... the series was an attempt to move away
> > from the legacy API where there was a dependency on EAL that would
> > change when moving to stdatomic. I'm hoping that the remaining uses of
> > the legacy API are not sensitive to the theoretical ABI surface
> > changing when that move is complete.
>
> Ok.
>
>
> > > As you proposed, I dropped patch 1 on the ring library (waiting for
> > > ARM to provide an alternative) and applied this series, thanks.
> > >
> > > Note: Thomas, Ferruh, we will have to be careful when merging subtrees
> > > to make sure we are not reintroducing those again (like for example
> > > net/ice).
>
> Well, I have some second thought about this series so I did not push
> it to dpdk.org yet.
Understood. It's very important to have these reviewed well, so no
objection; I just hope we can get them reviewed properly soon.
> Drivers maintainers were not copied so I would like another pair of
> eyes on the series: ideally no /* Note: */ should be left when merging
> those patches.
The /* Note: */ was explicitly requested by other reviewers as they were
concerned we would lose track of opportunities to weaken ordering after
switching from __sync to __atomic.
Is your request that the comments now be removed?
Thanks!
> I'll reply individually on the patches.
>
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
2023-05-24 15:47 3% ` Tyler Retzlaff
@ 2023-05-24 20:06 0% ` David Marchand
2023-05-24 22:50 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-05-24 20:06 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, Honnappa.Nagarahalli, Ruifeng.Wang, thomas, stephen, mb,
Ferruh Yigit
On Wed, May 24, 2023 at 5:47 PM Tyler Retzlaff
<roretzla@linux.microsoft.com> wrote:
> On Wed, May 24, 2023 at 02:40:43PM +0200, David Marchand wrote:
> > Hello Tyler,
> >
> > On Thu, Mar 23, 2023 at 11:54 PM Tyler Retzlaff
> > <roretzla@linux.microsoft.com> wrote:
> > >
> > > Replace the use of rte_atomic.h types and functions, instead use GCC
> > > supplied C++11 memory model builtins.
> > >
> > > This series covers the libraries and drivers that are built on Windows.
> > >
> > > The code has be converted to use the __atomic builtins but there are
> > > additional during conversion i notice that there may be some issues
> > > that need to be addressed.
> > >
> > > I'll comment in the patches where my concerns are so the maintainers
> > > may comment.
> > >
> > > v3:
> > > * style, don't use c99 comments
> > >
> > > v2:
> > > * comment code where optimizations may be possible now that memory
> > > order can be specified.
> > > * comment code where operations should potentially be atomic so that
> > > maintainers can review.
> > > * change a couple of variables labeled as counters to be unsigned.
> > >
> > > Tyler Retzlaff (7):
> > > ring: replace rte atomics with GCC builtin atomics
> > > stack: replace rte atomics with GCC builtin atomics
> > > dma/idxd: replace rte atomics with GCC builtin atomics
> > > net/ice: replace rte atomics with GCC builtin atomics
> > > net/ixgbe: replace rte atomics with GCC builtin atomics
> > > net/null: replace rte atomics with GCC builtin atomics
> > > net/ring: replace rte atomics with GCC builtin atomics
> > >
> > > drivers/dma/idxd/idxd_internal.h | 3 +--
> > > drivers/dma/idxd/idxd_pci.c | 8 +++++---
> > > drivers/net/ice/ice_dcf.c | 1 -
> > > drivers/net/ice/ice_dcf_ethdev.c | 1 -
> > > drivers/net/ice/ice_ethdev.c | 12 ++++++++----
> > > drivers/net/ixgbe/ixgbe_bypass.c | 1 -
> > > drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++------
> > > drivers/net/ixgbe/ixgbe_ethdev.h | 3 ++-
> > > drivers/net/ixgbe/ixgbe_flow.c | 1 -
> > > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > > drivers/net/null/rte_eth_null.c | 28 ++++++++++++++++++----------
> > > drivers/net/ring/rte_eth_ring.c | 26 ++++++++++++++++----------
> > > lib/ring/rte_ring_core.h | 1 -
> > > lib/ring/rte_ring_generic_pvt.h | 12 ++++++++----
> > > lib/stack/rte_stack_lf_generic.h | 16 +++++++++-------
> > > 15 files changed, 79 insertions(+), 53 deletions(-)
> > >
> >
> > There is still some code using the DPDK "legacy" atomic API, but I
> > guess this will be converted later.
>
> Yes, it will be converted later.
>
> If I did it correctly... the series was an attempt to move away
> from the legacy API where there was a dependency on EAL that would
> change when moving to stdatomic. I'm hoping that the remaining uses of
> the legacy API are not sensitive to the theoretical ABI surface
> changing when that move is complete.
Ok.
> > As you proposed, I dropped patch 1 on the ring library (waiting for
> > ARM to provide an alternative) and applied this series, thanks.
> >
> > Note: Thomas, Ferruh, we will have to be careful when merging subtrees
> > to make sure we are not reintroducing those again (like for example
> > net/ice).
Well, I have some second thoughts about this series so I did not push
it to dpdk.org yet.
Drivers maintainers were not copied so I would like another pair of
eyes on the series: ideally no /* Note: */ should be left when merging
those patches.
I'll reply individually on the patches.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 0/7] replace rte atomics with GCC builtin atomics
@ 2023-05-24 15:47 3% ` Tyler Retzlaff
2023-05-24 20:06 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-05-24 15:47 UTC (permalink / raw)
To: David Marchand
Cc: dev, Honnappa.Nagarahalli, Ruifeng.Wang, thomas, stephen, mb,
Ferruh Yigit
On Wed, May 24, 2023 at 02:40:43PM +0200, David Marchand wrote:
> Hello Tyler,
>
> On Thu, Mar 23, 2023 at 11:54 PM Tyler Retzlaff
> <roretzla@linux.microsoft.com> wrote:
> >
> > Replace the use of rte_atomic.h types and functions, instead use GCC
> > supplied C++11 memory model builtins.
> >
> > This series covers the libraries and drivers that are built on Windows.
> >
> > The code has been converted to use the __atomic builtins, but during
> > conversion I noticed that there may be some additional issues
> > that need to be addressed.
> >
> > I'll comment in the patches where my concerns are so the maintainers
> > may comment.
> >
> > v3:
> > * style, don't use c99 comments
> >
> > v2:
> > * comment code where optimizations may be possible now that memory
> > order can be specified.
> > * comment code where operations should potentially be atomic so that
> > maintainers can review.
> > * change a couple of variables labeled as counters to be unsigned.
> >
> > Tyler Retzlaff (7):
> > ring: replace rte atomics with GCC builtin atomics
> > stack: replace rte atomics with GCC builtin atomics
> > dma/idxd: replace rte atomics with GCC builtin atomics
> > net/ice: replace rte atomics with GCC builtin atomics
> > net/ixgbe: replace rte atomics with GCC builtin atomics
> > net/null: replace rte atomics with GCC builtin atomics
> > net/ring: replace rte atomics with GCC builtin atomics
> >
> > drivers/dma/idxd/idxd_internal.h | 3 +--
> > drivers/dma/idxd/idxd_pci.c | 8 +++++---
> > drivers/net/ice/ice_dcf.c | 1 -
> > drivers/net/ice/ice_dcf_ethdev.c | 1 -
> > drivers/net/ice/ice_ethdev.c | 12 ++++++++----
> > drivers/net/ixgbe/ixgbe_bypass.c | 1 -
> > drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++++++++++++------
> > drivers/net/ixgbe/ixgbe_ethdev.h | 3 ++-
> > drivers/net/ixgbe/ixgbe_flow.c | 1 -
> > drivers/net/ixgbe/ixgbe_rxtx.c | 1 -
> > drivers/net/null/rte_eth_null.c | 28 ++++++++++++++++++----------
> > drivers/net/ring/rte_eth_ring.c | 26 ++++++++++++++++----------
> > lib/ring/rte_ring_core.h | 1 -
> > lib/ring/rte_ring_generic_pvt.h | 12 ++++++++----
> > lib/stack/rte_stack_lf_generic.h | 16 +++++++++-------
> > 15 files changed, 79 insertions(+), 53 deletions(-)
> >
>
> There is still some code using the DPDK "legacy" atomic API, but I
> guess this will be converted later.
Yes, it will be converted later.
If I did it correctly... the series was an attempt to move away
from the legacy API where there was a dependency on EAL that would
change when moving to stdatomic. I'm hoping that the remaining uses of
the legacy API are not sensitive to the theoretical ABI surface
changing when that move is complete.
> As you proposed, I dropped patch 1 on the ring library (waiting for
> ARM to provide an alternative) and applied this series, thanks.
>
> Note: Thomas, Ferruh, we will have to be careful when merging subtrees
> to make sure we are not reintroducing those again (like for example
> net/ice).
>
> --
> David Marchand
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number threshold
2023-05-24 7:12 0% ` [EXT] " Akhil Goyal
@ 2023-05-24 8:09 3% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2023-05-24 8:09 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
Vamsi Krishna Attunuru, ferruh.yigit, Jerin Jacob Kollanukkaran,
Ankur Dwivedi
> Subject: RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number
> threshold
>
> > On Wed, 24 May 2023 01:19:07 +0530
> > Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > index c7a523b6d6..30bac4e25a 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
> > > struct rte_security_macsec_sc {
> > > /** Direction of SC */
> > > enum rte_security_macsec_direction dir;
> > > + /** Packet number threshold */
> > > + uint64_t pn_threshold;
> > > union {
> > > struct {
> > > /** SAs for each association number */
> > > @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
> > > uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
> > > /** Channel is active */
> > > uint8_t active : 1;
> > > + /** Extended packet number is enabled for SAs */
> > > + uint8_t is_xpn : 1;
> > > /** Reserved bitfields for future */
> > > - uint8_t reserved : 7;
> > > + uint8
> >
> > Is this an ABI change? If so needs to wait for 23.11 release
> rte_security_macsec_sc/sa_create are experimental APIs, so I believe it
> won't be an issue.
Looking at the ABI issues reported for this patchset:
even though these APIs are experimental, we cannot really change them,
as they are all part of rte_security_ctx, which is exposed.
However, the user is not required to know its contents, and it should not be exposed.
In the next release I would make it internal, like rte_security_session.
For now, I would defer this MACsec support to the next release.
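The underlying concern, sketched with trimmed-down hypothetical structures
(not the real definitions): inserting a member ahead of existing ones shifts
their offsets, which is exactly what makes this an ABI question.

#include <stdint.h>

/* Old layout, as an application may have been compiled against it. */
struct sc_old {
        uint8_t dir;
        void *sa[4];             /* starts at offset 8 on a 64-bit target */
};

/* New layout, with a member inserted ahead of the existing ones. */
struct sc_new {
        uint8_t dir;
        uint64_t pn_threshold;   /* new field */
        void *sa[4];             /* now starts at offset 16 */
};

/* A binary built against sc_old but running with a library that uses
 * sc_new reads and writes sa[] at the wrong offset; the experimental
 * marking does not help once the structure is reachable through exposed
 * objects such as rte_security_ctx. */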
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number threshold
2023-05-23 21:29 3% ` Stephen Hemminger
@ 2023-05-24 7:12 0% ` Akhil Goyal
2023-05-24 8:09 3% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2023-05-24 7:12 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
Vamsi Krishna Attunuru, ferruh.yigit, Jerin Jacob Kollanukkaran,
Ankur Dwivedi
> On Wed, 24 May 2023 01:19:07 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index c7a523b6d6..30bac4e25a 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
> > struct rte_security_macsec_sc {
> > /** Direction of SC */
> > enum rte_security_macsec_direction dir;
> > + /** Packet number threshold */
> > + uint64_t pn_threshold;
> > union {
> > struct {
> > /** SAs for each association number */
> > @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
> > uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
> > /** Channel is active */
> > uint8_t active : 1;
> > + /** Extended packet number is enabled for SAs */
> > + uint8_t is_xpn : 1;
> > /** Reserved bitfields for future */
> > - uint8_t reserved : 7;
> > + uint8
>
> Is this an ABI change? If so needs to wait for 23.11 release
rte_security_macsec_sc/sa_create are experimental APIs, so I believe it won't be an issue.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v6 04/15] graph: add get/set graph worker model APIs
@ 2023-05-24 6:08 3% ` Jerin Jacob
2023-05-26 9:58 0% ` Yan, Zhirun
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-05-24 6:08 UTC (permalink / raw)
To: Zhirun Yan
Cc: dev, jerinj, kirankumark, ndabilpuram, stephen, pbhagavatula,
cunming.liang, haiyue.wang
On Tue, May 9, 2023 at 11:34 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> Add new get/set APIs to configure graph worker model which is used to
> determine which model will be chosen.
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> ---
> diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c
> new file mode 100644
> index 0000000000..cabc101262
> --- /dev/null
> +++ b/lib/graph/rte_graph_worker.c
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2023 Intel Corporation
> + */
> +
> +#include "rte_graph_worker_common.h"
> +
> +RTE_DEFINE_PER_LCORE(enum rte_graph_worker_model, worker_model) = RTE_GRAPH_MODEL_DEFAULT;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + * Set the graph worker model
Just declaring this at the top of the header file is enough to avoid
duplicating it in every function, as all functions in the header are
experimental. See lib/graph/rte_graph.h
> + *
> + * @note This function does not perform any locking, and is only safe to call
> + * before graph running.
> + *
> + * @param name
> + * Name of the graph worker model.
> + *
> + * @return
> + * 0 on success, -1 otherwise.
> + */
> +int
> +rte_graph_worker_model_set(enum rte_graph_worker_model model)
> +{
> + if (model >= RTE_GRAPH_MODEL_LIST_END)
> + goto fail;
> +
> + RTE_PER_LCORE(worker_model) = model;
The application needs to set this per core, right?
Are we anticipating a case where one core runs one model and another
core runs another model?
If not, or if that is not practically possible, then, to make the application
programmer's life easy, we could loop through all lcores and set it on all
of them instead of the application setting it on each one separately.
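A rough sketch of that suggestion from the application side (hypothetical
usage of the API proposed in this patch; the loop could equally live inside
the library):

#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_graph_worker.h>

static int
set_model(void *arg)
{
	enum rte_graph_worker_model *model = arg;

	/* The setting is stored in a per-lcore (TLS) variable, so each
	 * worker lcore calls the setter for itself. */
	return rte_graph_worker_model_set(*model);
}

static void
graph_set_model_all_workers(void)
{
	enum rte_graph_worker_model model = RTE_GRAPH_MODEL_MCORE_DISPATCH;
	unsigned int lcore_id;

	/* Run on the main lcore, before the graphs start running. */
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(set_model, &model, lcore_id);
	rte_eal_mp_wait_lcore();
}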
> + return 0;
> +
> +fail:
> + RTE_PER_LCORE(worker_model) = RTE_GRAPH_MODEL_DEFAULT;
> + return -1;
> +}
> +
> +/** Graph worker models */
> +enum rte_graph_worker_model {
> + RTE_GRAPH_MODEL_DEFAULT,
Add Doxygen comment
> + RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT,
Add Doxygen comment to explain what this mode does.
> + RTE_GRAPH_MODEL_MCORE_DISPATCH,
Add Doxygen comment to explain what this mode does.
> + RTE_GRAPH_MODEL_LIST_END
This can break the ABI if we add one in the middle. Please remove this.
See lib/cryptodev for how to handle _END symbols.
^ permalink raw reply [relevance 3%]
* [PATCH v5 5/5] ethdev: add MPLS header modification support
2023-05-23 21:31 3% ` [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-23 21:31 2% ` Michael Baum
1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 21:31 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.
Since an MPLS header might appear more than once in inner/outer/tunnel,
a new field was added to the "rte_flow_action_modify_data" structure in
addition to the "level" field.
The "tag_index" field is the index of the header inside the encapsulation
level. It is used to modify multiple MPLS headers in the same
encapsulation level.
This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.
Since the "tag_index" field is added, the "RTE_FLOW_FIELD_TAG" type
moves to use it for the tag array instead of using the "level" field.
Using "level" is still supported for backwards compatibility when
the "tag_index" field is zero.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
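A minimal usage sketch (hypothetical values, not taken from the series) of
addressing a specific MPLS header with the new field; the action would then
be attached as RTE_FLOW_ACTION_TYPE_MODIFY_FIELD with conf = &mf:

#include <rte_flow.h>

struct rte_flow_action_modify_field mf = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_MPLS,
		.level = 2,       /* hypothetical: inner encapsulation level */
		.tag_index = 1,   /* hypothetical: second MPLS header in that level */
		.offset = 0,      /* start of the MPLS label */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x00, 0x12, 0x30 },  /* hypothetical immediate label bits */
	},
	.width = 20,              /* MPLS label is 20 bits wide */
};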
app/test-pmd/cmdline_flow.c | 24 +++++-
doc/guides/prog_guide/rte_flow.rst | 18 ++--
doc/guides/rel_notes/release_23_07.rst | 8 +-
drivers/net/mlx5/mlx5_flow.c | 34 ++++++++
drivers/net/mlx5/mlx5_flow.h | 23 ++++++
drivers/net/mlx5/mlx5_flow_dv.c | 110 +++++++++++--------------
drivers/net/mlx5/mlx5_flow_hw.c | 21 +++--
lib/ethdev/rte_flow.h | 51 ++++++++----
8 files changed, 199 insertions(+), 90 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
"ipv6_proto",
"flex_item",
"hash_result",
- "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
NULL
};
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+ .name = "dst_tag_index",
+ .help = "destination field tag array",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
.name = "dst_type_id",
.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+ .name = "stc_tag_index",
+ .help = "source field tag array",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
.name = "src_type_id",
.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
``width`` defines a number of bits to use from ``src`` field.
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
- ``0`` means the default behaviour. Depending on the packet type,
it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
- ``2`` and subsequent values requests access to the specified packet
encapsulation level, from outermost to innermost (lower to higher values).
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside encapsulation level.
+It is used to modify either ``VLAN`` or ``MPLS`` or ``TAG`` headers, where
+multiple of them might be supported in the same encapsulation level.
+
+.. note::
+
+ For ``RTE_FLOW_FIELD_TAG`` type, the tag array was provided in ``level``
+ field and it is still supported for backwards compatibility.
+ When ``tag_index`` is zero, the tag array is taken from ``level`` field.
``type`` is used to specify (along with ``class_id``) the Geneve option which
is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
+=================+==========================================================+
| ``field`` | ID: packet field, mark, meta, tag, immediate, pointer |
+-----------------+----------------------------------------------------------+
- | ``level`` | encapsulation level of a packet field or tag array index |
+ | ``level`` | encapsulation level of a packet field |
+ +-----------------+----------------------------------------------------------+
+ | ``tag_index`` | tag index inside encapsulation level |
+-----------------+----------------------------------------------------------+
| ``type`` | geneve option type |
+-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
-* The ``level`` field in experimental structure
- ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+ * ``level`` field was reduced to 8 bits.
+
+ * ``tag_index`` field replaced ``level`` field in representing tag array for
+ ``RTE_FLOW_FIELD_TAG`` type.
ABI Changes
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 19f7f92717..867b7b8ea2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2318,6 +2318,40 @@ mlx5_validate_action_ct(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate the level value for modify field action.
+ *
+ * @param[in] data
+ * Pointer to the rte_flow_action_modify_data structure either src or dst.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
+ struct rte_flow_error *error)
+{
+ if (data->level == 0)
+ return 0;
+ if (data->field != RTE_FLOW_FIELD_TAG)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "inner header fields modification is not supported");
+ if (data->tag_index != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "tag array can be provided using 'level' or 'tag_index' fields, not both");
+ /*
+ * The tag array for RTE_FLOW_FIELD_TAG type is provided using
+ * 'tag_index' field. In old API, it was provided using 'level' field
+ * and it is still supported for backwards compatibility.
+ */
+ DRV_LOG(WARNING, "tag array provided in 'level' field instead of 'tag_index' field.");
+ return 0;
+}
+
/**
* Validate ICMP6 item.
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..cba04b4f45 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1045,6 +1045,26 @@ flow_items_to_tunnel(const struct rte_flow_item items[])
return items[0].spec;
}
+/**
+ * Gets the tag array given for RTE_FLOW_FIELD_TAG type.
+ *
+ * In old API the value was provided in "level" field, but in new API
+ * it is provided in "tag_array" field. Since encapsulation level is not
+ * relevant for metadata, the tag array can be still provided in "level"
+ * for backwards compatibility.
+ *
+ * @param[in] data
+ * Pointer to tag modify data structure.
+ *
+ * @return
+ * Tag array index.
+ */
+static inline uint8_t
+flow_tag_index_get(const struct rte_flow_action_modify_data *data)
+{
+ return data->tag_index ? data->tag_index : data->level;
+}
+
/**
* Fetch 1, 2, 3 or 4 byte field from the byte array
* and return as unsigned integer in host-endian format.
@@ -2276,6 +2296,9 @@ int mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
int mlx5_flow_validate_action_default_miss(uint64_t action_flags,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
+int flow_validate_modify_field_level
+ (const struct rte_flow_action_modify_data *data,
+ struct rte_flow_error *error);
int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
const uint8_t *mask,
const uint8_t *nic_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f136f43b0a..3070f75ce8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1896,16 +1896,17 @@ mlx5_flow_field_id_to_modify_info
case RTE_FLOW_FIELD_TAG:
{
MLX5_ASSERT(data->offset + width <= 32);
+ uint8_t tag_index = flow_tag_index_get(data);
int reg;
- off_be = (data->level == MLX5_LINEAR_HASH_TAG_INDEX) ?
+ off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
16 - (data->offset + width) + 16 : data->offset;
if (priv->sh->config.dv_flow_en == 2)
reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
- data->level);
+ tag_index);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
- data->level, error);
+ tag_index, error);
if (reg < 0)
return;
MLX5_ASSERT(reg != REG_NON);
@@ -1985,7 +1986,7 @@ mlx5_flow_field_id_to_modify_info
{
uint32_t meta_mask = priv->sh->dv_meta_mask;
uint32_t meta_count = __builtin_popcount(meta_mask);
- uint32_t reg = data->level;
+ uint8_t reg = flow_tag_index_get(data);
RTE_SET_USED(meta_count);
MLX5_ASSERT(data->offset + width <= meta_count);
@@ -5245,115 +5246,105 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_sh_config *config = &priv->sh->config;
struct mlx5_hca_attr *hca_attr = &priv->sh->cdev->config.hca_attr;
- const struct rte_flow_action_modify_field *action_modify_field =
- action->conf;
- uint32_t dst_width, src_width;
+ const struct rte_flow_action_modify_field *conf = action->conf;
+ const struct rte_flow_action_modify_data *src_data = &conf->src;
+ const struct rte_flow_action_modify_data *dst_data = &conf->dst;
+ uint32_t dst_width, src_width, width = conf->width;
ret = flow_dv_validate_action_modify_hdr(action_flags, action, error);
if (ret)
return ret;
- if (action_modify_field->src.field == RTE_FLOW_FIELD_FLEX_ITEM ||
- action_modify_field->dst.field == RTE_FLOW_FIELD_FLEX_ITEM)
+ if (src_data->field == RTE_FLOW_FIELD_FLEX_ITEM ||
+ dst_data->field == RTE_FLOW_FIELD_FLEX_ITEM)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"flex item fields modification"
" is not supported");
- dst_width = mlx5_flow_item_field_width(dev, action_modify_field->dst.field,
+ dst_width = mlx5_flow_item_field_width(dev, dst_data->field,
-1, attr, error);
- src_width = mlx5_flow_item_field_width(dev, action_modify_field->src.field,
+ src_width = mlx5_flow_item_field_width(dev, src_data->field,
dst_width, attr, error);
- if (action_modify_field->width == 0)
+ if (width == 0)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"no bits are requested to be modified");
- else if (action_modify_field->width > dst_width ||
- action_modify_field->width > src_width)
+ else if (width > dst_width || width > src_width)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"cannot modify more bits than"
" the width of a field");
- if (action_modify_field->dst.field != RTE_FLOW_FIELD_VALUE &&
- action_modify_field->dst.field != RTE_FLOW_FIELD_POINTER) {
- if (action_modify_field->dst.offset +
- action_modify_field->width > dst_width)
+ if (dst_data->field != RTE_FLOW_FIELD_VALUE &&
+ dst_data->field != RTE_FLOW_FIELD_POINTER) {
+ if (dst_data->offset + width > dst_width)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination offset is too big");
- if (action_modify_field->dst.level &&
- action_modify_field->dst.field != RTE_FLOW_FIELD_TAG)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "inner header fields modification"
- " is not supported");
+ ret = flow_validate_modify_field_level(dst_data, error);
+ if (ret)
+ return ret;
}
- if (action_modify_field->src.field != RTE_FLOW_FIELD_VALUE &&
- action_modify_field->src.field != RTE_FLOW_FIELD_POINTER) {
+ if (src_data->field != RTE_FLOW_FIELD_VALUE &&
+ src_data->field != RTE_FLOW_FIELD_POINTER) {
if (root)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modify field action is not"
" supported for group 0");
- if (action_modify_field->src.offset +
- action_modify_field->width > src_width)
+ if (src_data->offset + width > src_width)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source offset is too big");
- if (action_modify_field->src.level &&
- action_modify_field->src.field != RTE_FLOW_FIELD_TAG)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "inner header fields modification"
- " is not supported");
+ ret = flow_validate_modify_field_level(src_data, error);
+ if (ret)
+ return ret;
}
- if ((action_modify_field->dst.field ==
- action_modify_field->src.field) &&
- (action_modify_field->dst.level ==
- action_modify_field->src.level))
+ if ((dst_data->field == src_data->field) &&
+ (dst_data->level == src_data->level))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source and destination fields"
" cannot be the same");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_VALUE ||
- action_modify_field->dst.field == RTE_FLOW_FIELD_POINTER ||
- action_modify_field->dst.field == RTE_FLOW_FIELD_MARK)
+ if (dst_data->field == RTE_FLOW_FIELD_VALUE ||
+ dst_data->field == RTE_FLOW_FIELD_POINTER ||
+ dst_data->field == RTE_FLOW_FIELD_MARK)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"mark, immediate value or a pointer to it"
" cannot be used as a destination");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_START ||
- action_modify_field->src.field == RTE_FLOW_FIELD_START)
+ if (dst_data->field == RTE_FLOW_FIELD_START ||
+ src_data->field == RTE_FLOW_FIELD_START)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modifications of an arbitrary"
" place in a packet is not supported");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_VLAN_TYPE ||
- action_modify_field->src.field == RTE_FLOW_FIELD_VLAN_TYPE)
+ if (dst_data->field == RTE_FLOW_FIELD_VLAN_TYPE ||
+ src_data->field == RTE_FLOW_FIELD_VLAN_TYPE)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modifications of the 802.1Q Tag"
" Identifier is not supported");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_VXLAN_VNI ||
- action_modify_field->src.field == RTE_FLOW_FIELD_VXLAN_VNI)
+ if (dst_data->field == RTE_FLOW_FIELD_VXLAN_VNI ||
+ src_data->field == RTE_FLOW_FIELD_VXLAN_VNI)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modifications of the VXLAN Network"
" Identifier is not supported");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_GENEVE_VNI ||
- action_modify_field->src.field == RTE_FLOW_FIELD_GENEVE_VNI)
+ if (dst_data->field == RTE_FLOW_FIELD_GENEVE_VNI ||
+ src_data->field == RTE_FLOW_FIELD_GENEVE_VNI)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modifications of the GENEVE Network"
" Identifier is not supported");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_MARK ||
- action_modify_field->src.field == RTE_FLOW_FIELD_MARK)
+ if (dst_data->field == RTE_FLOW_FIELD_MARK ||
+ src_data->field == RTE_FLOW_FIELD_MARK)
if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
!mlx5_flow_ext_mreg_supported(dev))
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"cannot modify mark in legacy mode"
" or without extensive registers");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_META ||
- action_modify_field->src.field == RTE_FLOW_FIELD_META) {
+ if (dst_data->field == RTE_FLOW_FIELD_META ||
+ src_data->field == RTE_FLOW_FIELD_META) {
if (config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
!mlx5_flow_ext_mreg_supported(dev))
return rte_flow_error_set(error, ENOTSUP,
@@ -5367,20 +5358,19 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
"cannot modify meta without"
" extensive registers available");
}
- if (action_modify_field->operation == RTE_FLOW_MODIFY_SUB)
+ if (conf->operation == RTE_FLOW_MODIFY_SUB)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"sub operations are not supported");
- if (action_modify_field->dst.field == RTE_FLOW_FIELD_IPV4_ECN ||
- action_modify_field->src.field == RTE_FLOW_FIELD_IPV4_ECN ||
- action_modify_field->dst.field == RTE_FLOW_FIELD_IPV6_ECN ||
- action_modify_field->src.field == RTE_FLOW_FIELD_IPV6_ECN)
+ if (dst_data->field == RTE_FLOW_FIELD_IPV4_ECN ||
+ src_data->field == RTE_FLOW_FIELD_IPV4_ECN ||
+ dst_data->field == RTE_FLOW_FIELD_IPV6_ECN ||
+ src_data->field == RTE_FLOW_FIELD_IPV6_ECN)
if (!hca_attr->modify_outer_ip_ecn && root)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"modifications of the ECN for current firmware is not supported");
- return (action_modify_field->width / 32) +
- !!(action_modify_field->width % 32);
+ return (width / 32) + !!(width % 32);
}
/**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1b68a19900..39ea76c0c0 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1022,9 +1022,11 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
conf->dst.field == RTE_FLOW_FIELD_TAG ||
conf->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+ uint8_t tag_index = flow_tag_index_get(&conf->dst);
+
value = *(const unaligned_uint32_t *)item.spec;
if (conf->dst.field == RTE_FLOW_FIELD_TAG &&
- conf->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+ tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
value = rte_cpu_to_be_32(value << 16);
else
value = rte_cpu_to_be_32(value);
@@ -2055,9 +2057,11 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+ uint8_t tag_index = flow_tag_index_get(&mhdr_action->dst);
+
value_p = (unaligned_uint32_t *)values;
if (mhdr_action->dst.field == RTE_FLOW_FIELD_TAG &&
- mhdr_action->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+ tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
*value_p = rte_cpu_to_be_32(*value_p << 16);
else
*value_p = rte_cpu_to_be_32(*value_p);
@@ -3546,10 +3550,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
const struct rte_flow_action *mask,
struct rte_flow_error *error)
{
- const struct rte_flow_action_modify_field *action_conf =
- action->conf;
- const struct rte_flow_action_modify_field *mask_conf =
- mask->conf;
+ const struct rte_flow_action_modify_field *action_conf = action->conf;
+ const struct rte_flow_action_modify_field *mask_conf = mask->conf;
+ int ret;
if (action_conf->operation != mask_conf->operation)
return rte_flow_error_set(error, EINVAL,
@@ -3565,6 +3568,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"immediate value, pointer and hash result cannot be used as destination");
+ ret = flow_validate_modify_field_level(&action_conf->dst, error);
+ if (ret)
+ return ret;
if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -3587,6 +3593,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source offset level must be fully masked");
+ ret = flow_validate_modify_field_level(&action_conf->src, error);
+ if (ret)
+ return ret;
}
if (mask_conf->width != UINT32_MAX)
return rte_flow_error_set(error, EINVAL,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_START = 0, /**< Start of a packet. */
RTE_FLOW_FIELD_MAC_DST, /**< Destination MAC Address. */
RTE_FLOW_FIELD_MAC_SRC, /**< Source MAC Address. */
- RTE_FLOW_FIELD_VLAN_TYPE, /**< 802.1Q Tag Identifier. */
- RTE_FLOW_FIELD_VLAN_ID, /**< 802.1Q VLAN Identifier. */
+ RTE_FLOW_FIELD_VLAN_TYPE, /**< VLAN Tag Identifier. */
+ RTE_FLOW_FIELD_VLAN_ID, /**< VLAN Identifier. */
RTE_FLOW_FIELD_MAC_TYPE, /**< EtherType. */
RTE_FLOW_FIELD_IPV4_DSCP, /**< IPv4 DSCP. */
RTE_FLOW_FIELD_IPV4_TTL, /**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
- RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA, /**< GENEVE option data */
+ RTE_FLOW_FIELD_MPLS /**< MPLS header. */
};
/**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
RTE_STD_C11
union {
struct {
- /** Encapsulation level or tag index or flex item handle. */
+ /** Encapsulation level and tag index or flex item handle. */
union {
struct {
/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
*
* Values other than @p 0 are not
* necessarily supported.
+ *
+ * @note that for MPLS field,
+ * encapsulation level also include
+ * tunnel since MPLS may appear in
+ * outer, inner or tunnel.
*/
uint8_t level;
- /**
- * Geneve option type. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- uint8_t type;
- /**
- * Geneve option class. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- rte_be16_t class_id;
+ union {
+ /**
+ * Tag index array inside
+ * encapsulation level.
+ * Used for VLAN, MPLS or TAG
+ * types.
+ */
+ uint8_t tag_index;
+ /**
+ * Geneve option identifier.
+ * relevant only for
+ * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ struct {
+ /**
+ * Geneve option type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class.
+ */
+ rte_be16_t class_id;
+ };
+ };
};
struct rte_flow_item_flex_handle *flex_handle;
};
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support
@ 2023-05-23 21:31 3% ` Michael Baum
2023-05-23 21:31 2% ` [PATCH v5 5/5] ethdev: add MPLS header " Michael Baum
1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 21:31 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"
Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.
To make room for those 2 new fields, the "level" field moves to
"uint8_t", which is more than enough for the encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warnings caused by this API change.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
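A minimal usage sketch (hypothetical option identifiers, mirroring the
documentation example below of replacing the third byte of the option's
second data DW with 0x85):

#include <rte_flow.h>

struct rte_flow_action_modify_field mf = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
		.type = 0x01,                  /* hypothetical: option type */
		.class_id = RTE_BE16(0x0107),  /* hypothetical: option class */
		.offset = 48,                  /* third byte of the second data DW */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		/* hypothetical: immediate data provided for the affected DW only */
		.value = { 0x00, 0x00, 0x85, 0x00 },
	},
	.width = 8,
};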
app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++++-
doc/guides/prog_guide/rte_flow.rst | 23 ++++++++++++
doc/guides/rel_notes/release_23_07.rst | 3 ++
drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++------
lib/ethdev/rte_flow.h | 48 +++++++++++++++++++++++++-
5 files changed, 131 insertions(+), 13 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
"ipv6_proto",
"flex_item",
- "hash_result", NULL
+ "hash_result",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ NULL
};
static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+ .name = "dst_type_id",
+ .help = "destination field type ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+ .name = "dst_class",
+ .help = "destination field class ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ dst.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_OFFSET] = {
.name = "dst_offset",
.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+ .name = "src_type_id",
+ .help = "source field type ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+ .name = "src_class",
+ .help = "source field class ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ src.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
.name = "src_offset",
.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
For the tag array (in case of multiple tags are supported and present)
``level`` translates directly into the array index.
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
``flex_handle`` is used to specify the flex item pointer which is being
modified. ``flex_handle`` and ``level`` are mutually exclusive.
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
specify destination width as 8, destination offset as 16, and provide immediate
value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in
+single action and align to 32 bits. For example, for modifying 16 bits start
+from offset 24, 2 different actions should be prepared. The first one includs
+``offset=24`` and ``width=8``, and the seconde one includs ``offset=32`` and
+``width=8``.
+Application should provide the data in immediate value memory only for the
+single DW even though the offset is related to start of first DW. For example,
+to replace the third byte of second DW in Geneve option data with value 0x85,
+application should specify destination width as 8, destination offset as 48,
+and provide immediate value 0xXXXX85XX.
+
.. _table_rte_flow_action_modify_field:
.. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+-----------------+----------------------------------------------------------+
| ``level`` | encapsulation level of a packet field or tag array index |
+-----------------+----------------------------------------------------------+
+ | ``type`` | geneve option type |
+ +-----------------+----------------------------------------------------------+
+ | ``class_id`` | geneve option class ID |
+ +-----------------+----------------------------------------------------------+
| ``flex_handle`` | flex item handle of a packet field |
+-----------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* The ``level`` field in experimental structure
+ ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
ABI Changes
-----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"immediate value, pointer and hash result cannot be used as destination");
- if (mask_conf->dst.level != UINT32_MAX)
+ if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
"destination field mask and template are not equal");
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
- if (mask_conf->src.level != UINT32_MAX)
+ if (mask_conf->src.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = RTE_FLOW_FIELD_VLAN_ID,
- .level = 0xffffffff, .offset = 0xffffffff,
+ .level = 0xff, .offset = 0xffffffff,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */
RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
+ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
+ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
};
/**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
struct {
/** Encapsulation level or tag index or flex item handle. */
union {
- uint32_t level;
+ struct {
+ /**
+ * Packet encapsulation level containing
+ * the field modify to.
+ *
+ * - @p 0 requests the default behavior.
+ * Depending on the packet type, it
+ * can mean outermost, innermost or
+ * anything in between.
+ *
+ * It basically stands for the
+ * innermost encapsulation level
+ * modification can be performed on
+ * according to PMD and device
+ * capabilities.
+ *
+ * - @p 1 requests modification to be
+ * performed on the outermost packet
+ * encapsulation level.
+ *
+ * - @p 2 and subsequent values request
+ * modification to be performed on
+ * the specified inner packet
+ * encapsulation level, from
+ * outermost to innermost (lower to
+ * higher values).
+ *
+ * Values other than @p 0 are not
+ * necessarily supported.
+ */
+ uint8_t level;
+ /**
+ * Geneve option type. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ rte_be16_t class_id;
+ };
struct rte_flow_item_flex_handle *flex_handle;
};
/** Number of bits to skip from a field. */
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH 02/13] security: add MACsec packet number threshold
@ 2023-05-23 21:29 3% ` Stephen Hemminger
2023-05-24 7:12 0% ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-05-23 21:29 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
vattunuru, ferruh.yigit, jerinj, adwivedi
On Wed, 24 May 2023 01:19:07 +0530
Akhil Goyal <gakhil@marvell.com> wrote:
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index c7a523b6d6..30bac4e25a 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
> struct rte_security_macsec_sc {
> /** Direction of SC */
> enum rte_security_macsec_direction dir;
> + /** Packet number threshold */
> + uint64_t pn_threshold;
> union {
> struct {
> /** SAs for each association number */
> @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
> uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
> /** Channel is active */
> uint8_t active : 1;
> + /** Extended packet number is enabled for SAs */
> + uint8_t is_xpn : 1;
> /** Reserved bitfields for future */
> - uint8_t reserved : 7;
> + uint8
Is this an ABI change? If so needs to wait for 23.11 release
^ permalink raw reply [relevance 3%]
* Re: [PATCH] eventdev: fix alignment padding
2023-05-17 13:35 3% ` Morten Brørup
@ 2023-05-23 15:15 3% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-23 15:15 UTC (permalink / raw)
To: Morten Brørup; +Cc: Mattias Rönnblom, Sivaprasad Tummala, jerinj, dev
On Wed, May 17, 2023 at 7:05 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > Sent: Wednesday, 17 May 2023 15.20
> >
> > On Tue, Apr 18, 2023 at 8:46 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> > >
> > > On 2023-04-18 16:07, Morten Brørup wrote:
> > > >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> > > >> Sent: Tuesday, 18 April 2023 14.31
> > > >>
> > > >> On 2023-04-18 12:45, Sivaprasad Tummala wrote:
> > > >>> fixed the padding required to align to cacheline size.
> > > >>>
> > > >>
> > > >> What's the point in having this structure cache-line aligned? False
> > > >> sharing is a non-issue, since this is more or less a read only struct.
> > > >>
> > > >> This is not so much a comment on your patch, but the __rte_cache_aligned
> > > >> attribute.
> > > >
> > > > When the structure is cache aligned, an individual entry in the array does
> > not unnecessarily cross a cache line border. With 16 pointers and aligned, it
> > uses exactly two cache lines. If unaligned, it may span three cache lines.
> > > >
> > > An *element* in the reserved uint64_t array won't span across two cache
> > > lines, regardless if __rte_cache_aligned is specified or not. You would
> > > need a packed struct for that to occur, plus the reserved array field
> > > being preceded by some appropriately-sized fields.
> > >
> > > The only effect __rte_cache_aligned has on this particular struct is
> > > that if you instantiate the struct on the stack, or as a static
> > > variable, it will be cache-line aligned. That effect you can get by
> > > specifying the attribute when you define the variable, and you will save
> > > some space (by having smaller elements). In this case it doesn't matter
> > > if the array is compact or not, since an application is likely to only
> > > use one of the members in the array.
> > >
> > > It also doesn't matter of the struct is two or three cache lines, as
> > > long as only the first two are used.
> >
> >
> > Discussions stalled at this point.
>
> Not stalled at this point. You seem to have missed my follow-up email clarifying why cache aligning is relevant:
> http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D87897@smartserver.smartshare.dk/
>
> But the patch still breaks the ABI, and thus should be postponed to 23.11.
Yes.
>
> >
> > Hi Shiva,
> >
> > Marking this patch as rejected. If you think the other way, Please
> > change patchwork status and let's discuss more here.
>
> I am not taking any action regarding the status of this patch. I will leave that decision to Jerin and Shiva.
It is good to merge.
Shiva,
Please send an ABI change notice for this for 23.11 NOW.
Once it is Acked and merged, I will merge the patch for the 23.11 release.
I am marking the patch as DEFERRED in patchwork, and in the next release
window it will come back as NEW in patchwork.
>
> >
> >
> >
> > >
> > > >>
> > > >>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> > > >>> Cc: mattias.ronnblom@ericsson.com
> > > >>>
> > > >>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > >>> ---
> > > >>> lib/eventdev/rte_eventdev_core.h | 2 +-
> > > >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >>>
> > > >>> diff --git a/lib/eventdev/rte_eventdev_core.h
> > > >> b/lib/eventdev/rte_eventdev_core.h
> > > >>> index c328bdbc82..c27a52ccc0 100644
> > > >>> --- a/lib/eventdev/rte_eventdev_core.h
> > > >>> +++ b/lib/eventdev/rte_eventdev_core.h
> > > >>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
> > > >>> /**< PMD Tx adapter enqueue same destination function. */
> > > >>> event_crypto_adapter_enqueue_t ca_enqueue;
> > > >>> /**< PMD Crypto adapter enqueue function. */
> > > >>> - uintptr_t reserved[6];
> > > >>> + uintptr_t reserved[5];
> > > >>> } __rte_cache_aligned;
> > > >>>
> > > >>> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> > > >
> > >
^ permalink raw reply [relevance 3%]
* [PATCH v4 5/5] ethdev: add MPLS header modification support
2023-05-23 12:48 3% ` [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-23 12:48 2% ` Michael Baum
2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 12:48 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.
Since an MPLS header might appear more than once in inner/outer/tunnel,
a new field was added to the "rte_flow_action_modify_data" structure in
addition to the "level" field.
The "tag_index" field is the index of the header inside the encapsulation
level. It is used to modify multiple MPLS headers in the same
encapsulation level.
This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.
Since the "tag_index" field is added, the "RTE_FLOW_FIELD_TAG" type
moves to use it for the tag array instead of using the "level" field.
Using "level" is still supported for backwards compatibility when
the "tag_index" field is zero.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 24 +++++++++++-
doc/guides/prog_guide/rte_flow.rst | 18 ++++++---
doc/guides/rel_notes/release_23_07.rst | 8 +++-
drivers/net/mlx5/mlx5_flow.c | 34 +++++++++++++++++
drivers/net/mlx5/mlx5_flow.h | 23 ++++++++++++
drivers/net/mlx5/mlx5_flow_dv.c | 29 +++++++--------
drivers/net/mlx5/mlx5_flow_hw.c | 21 ++++++++---
lib/ethdev/rte_flow.h | 51 ++++++++++++++++++--------
8 files changed, 162 insertions(+), 46 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
"ipv6_proto",
"flex_item",
"hash_result",
- "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
NULL
};
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+ .name = "dst_tag_index",
+ .help = "destination field tag array",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
.name = "dst_type_id",
.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+ .name = "src_tag_index",
+ .help = "source field tag array",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
.name = "src_type_id",
.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
``width`` defines a number of bits to use from ``src`` field.
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
- ``0`` means the default behaviour. Depending on the packet type,
it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
- ``2`` and subsequent values requests access to the specified packet
encapsulation level, from outermost to innermost (lower to higher values).
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside the encapsulation level.
+It is used to modify ``VLAN``, ``MPLS`` or ``TAG`` headers when multiple
+of them may be present in the same encapsulation level.
+
+.. note::
+
+ For ``RTE_FLOW_FIELD_TAG`` type, the tag array was provided in ``level``
+ field and it is still supported for backwards compatibility.
+ When ``tag_index`` is zero, the tag array is taken from ``level`` field.
``type`` is used to specify (along with ``class_id``) the Geneve option which
is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
+=================+==========================================================+
| ``field`` | ID: packet field, mark, meta, tag, immediate, pointer |
+-----------------+----------------------------------------------------------+
- | ``level`` | encapsulation level of a packet field or tag array index |
+ | ``level`` | encapsulation level of a packet field |
+ +-----------------+----------------------------------------------------------+
+ | ``tag_index`` | tag index inside encapsulation level |
+-----------------+----------------------------------------------------------+
| ``type`` | geneve option type |
+-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
-* The ``level`` field in experimental structure
- ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+ * ``level`` field was reduced to 8 bits.
+
+ * ``tag_index`` field replaced ``level`` field in representing tag array for
+ ``RTE_FLOW_FIELD_TAG`` type.
ABI Changes
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 19f7f92717..867b7b8ea2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2318,6 +2318,40 @@ mlx5_validate_action_ct(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate the level value for modify field action.
+ *
+ * @param[in] data
+ * Pointer to the rte_flow_action_modify_data structure either src or dst.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
+ struct rte_flow_error *error)
+{
+ if (data->level == 0)
+ return 0;
+ if (data->field != RTE_FLOW_FIELD_TAG)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "inner header fields modification is not supported");
+ if (data->tag_index != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "tag array can be provided using 'level' or 'tag_index' fields, not both");
+ /*
+ * The tag array for RTE_FLOW_FIELD_TAG type is provided using
+ * 'tag_index' field. In old API, it was provided using 'level' field
+ * and it is still supported for backwards compatibility.
+ */
+ DRV_LOG(WARNING, "tag array provided in 'level' field instead of 'tag_index' field.");
+ return 0;
+}
+
/**
* Validate ICMP6 item.
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..cba04b4f45 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1045,6 +1045,26 @@ flow_items_to_tunnel(const struct rte_flow_item items[])
return items[0].spec;
}
+/**
+ * Gets the tag array given for RTE_FLOW_FIELD_TAG type.
+ *
+ * In old API the value was provided in "level" field, but in new API
+ * it is provided in "tag_array" field. Since encapsulation level is not
+ * relevant for metadata, the tag array can be still provided in "level"
+ * for backwards compatibility.
+ *
+ * @param[in] data
+ * Pointer to tag modify data structure.
+ *
+ * @return
+ * Tag array index.
+ */
+static inline uint8_t
+flow_tag_index_get(const struct rte_flow_action_modify_data *data)
+{
+ return data->tag_index ? data->tag_index : data->level;
+}
+
/**
* Fetch 1, 2, 3 or 4 byte field from the byte array
* and return as unsigned integer in host-endian format.
@@ -2276,6 +2296,9 @@ int mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
int mlx5_flow_validate_action_default_miss(uint64_t action_flags,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
+int flow_validate_modify_field_level
+ (const struct rte_flow_action_modify_data *data,
+ struct rte_flow_error *error);
int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
const uint8_t *mask,
const uint8_t *nic_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f136f43b0a..729962a488 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1896,16 +1896,17 @@ mlx5_flow_field_id_to_modify_info
case RTE_FLOW_FIELD_TAG:
{
MLX5_ASSERT(data->offset + width <= 32);
+ uint8_t tag_index = flow_tag_index_get(data);
int reg;
- off_be = (data->level == MLX5_LINEAR_HASH_TAG_INDEX) ?
+ off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
16 - (data->offset + width) + 16 : data->offset;
if (priv->sh->config.dv_flow_en == 2)
reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
- data->level);
+ tag_index);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
- data->level, error);
+ tag_index, error);
if (reg < 0)
return;
MLX5_ASSERT(reg != REG_NON);
@@ -1985,7 +1986,7 @@ mlx5_flow_field_id_to_modify_info
{
uint32_t meta_mask = priv->sh->dv_meta_mask;
uint32_t meta_count = __builtin_popcount(meta_mask);
- uint32_t reg = data->level;
+ uint8_t reg = flow_tag_index_get(data);
RTE_SET_USED(meta_count);
MLX5_ASSERT(data->offset + width <= meta_count);
@@ -5250,6 +5251,14 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
uint32_t dst_width, src_width;
ret = flow_dv_validate_action_modify_hdr(action_flags, action, error);
+ if (ret)
+ return ret;
+ ret = flow_validate_modify_field_level(&action_modify_field->dst,
+ error);
+ if (ret)
+ return ret;
+ ret = flow_validate_modify_field_level(&action_modify_field->src,
+ error);
if (ret)
return ret;
if (action_modify_field->src.field == RTE_FLOW_FIELD_FLEX_ITEM ||
@@ -5279,12 +5288,6 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination offset is too big");
- if (action_modify_field->dst.level &&
- action_modify_field->dst.field != RTE_FLOW_FIELD_TAG)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "inner header fields modification"
- " is not supported");
}
if (action_modify_field->src.field != RTE_FLOW_FIELD_VALUE &&
action_modify_field->src.field != RTE_FLOW_FIELD_POINTER) {
@@ -5298,12 +5301,6 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source offset is too big");
- if (action_modify_field->src.level &&
- action_modify_field->src.field != RTE_FLOW_FIELD_TAG)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "inner header fields modification"
- " is not supported");
}
if ((action_modify_field->dst.field ==
action_modify_field->src.field) &&
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1b68a19900..e55e3d6c1a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1022,9 +1022,11 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
conf->dst.field == RTE_FLOW_FIELD_TAG ||
conf->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+ uint8_t tag_index = flow_tag_index_get(&conf->dst);
+
value = *(const unaligned_uint32_t *)item.spec;
if (conf->dst.field == RTE_FLOW_FIELD_TAG &&
- conf->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+ tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
value = rte_cpu_to_be_32(value << 16);
else
value = rte_cpu_to_be_32(value);
@@ -2055,9 +2057,11 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+ uint8_t tag_index = flow_tag_index_get(&mhdr_action->dst);
+
value_p = (unaligned_uint32_t *)values;
if (mhdr_action->dst.field == RTE_FLOW_FIELD_TAG &&
- mhdr_action->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+ tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
*value_p = rte_cpu_to_be_32(*value_p << 16);
else
*value_p = rte_cpu_to_be_32(*value_p);
@@ -3546,11 +3550,16 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
const struct rte_flow_action *mask,
struct rte_flow_error *error)
{
- const struct rte_flow_action_modify_field *action_conf =
- action->conf;
- const struct rte_flow_action_modify_field *mask_conf =
- mask->conf;
+ const struct rte_flow_action_modify_field *action_conf = action->conf;
+ const struct rte_flow_action_modify_field *mask_conf = mask->conf;
+ int ret;
+ ret = flow_validate_modify_field_level(&action_conf->dst, error);
+ if (ret)
+ return ret;
+ ret = flow_validate_modify_field_level(&action_conf->src, error);
+ if (ret)
+ return ret;
if (action_conf->operation != mask_conf->operation)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_START = 0, /**< Start of a packet. */
RTE_FLOW_FIELD_MAC_DST, /**< Destination MAC Address. */
RTE_FLOW_FIELD_MAC_SRC, /**< Source MAC Address. */
- RTE_FLOW_FIELD_VLAN_TYPE, /**< 802.1Q Tag Identifier. */
- RTE_FLOW_FIELD_VLAN_ID, /**< 802.1Q VLAN Identifier. */
+ RTE_FLOW_FIELD_VLAN_TYPE, /**< VLAN Tag Identifier. */
+ RTE_FLOW_FIELD_VLAN_ID, /**< VLAN Identifier. */
RTE_FLOW_FIELD_MAC_TYPE, /**< EtherType. */
RTE_FLOW_FIELD_IPV4_DSCP, /**< IPv4 DSCP. */
RTE_FLOW_FIELD_IPV4_TTL, /**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
- RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA, /**< GENEVE option data */
+ RTE_FLOW_FIELD_MPLS /**< MPLS header. */
};
/**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
RTE_STD_C11
union {
struct {
- /** Encapsulation level or tag index or flex item handle. */
+ /** Encapsulation level and tag index or flex item handle. */
union {
struct {
/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
*
* Values other than @p 0 are not
* necessarily supported.
+ *
+ * @note For the MPLS field, the
+ * encapsulation level also includes
+ * tunnel, since MPLS may appear in
+ * outer, inner or tunnel.
*/
uint8_t level;
- /**
- * Geneve option type. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- uint8_t type;
- /**
- * Geneve option class. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- rte_be16_t class_id;
+ union {
+ /**
+ * Tag index array inside
+ * encapsulation level.
+ * Used for VLAN, MPLS or TAG
+ * types.
+ */
+ uint8_t tag_index;
+ /**
+ * Geneve option identifier.
+ * relevant only for
+ * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ struct {
+ /**
+ * Geneve option type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class.
+ */
+ rte_be16_t class_id;
+ };
+ };
};
struct rte_flow_item_flex_handle *flex_handle;
};
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support
@ 2023-05-23 12:48 3% ` Michael Baum
2023-05-23 12:48 2% ` [PATCH v4 5/5] ethdev: add MPLS header " Michael Baum
2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 12:48 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"
Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.
To make room for those 2 new fields, the "level" field moves to use
"uint8_t", which is more than enough for the encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warnings caused by this API change.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++++-
doc/guides/prog_guide/rte_flow.rst | 23 ++++++++++++
doc/guides/rel_notes/release_23_07.rst | 3 ++
drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++------
lib/ethdev/rte_flow.h | 48 +++++++++++++++++++++++++-
5 files changed, 131 insertions(+), 13 deletions(-)
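Not part of the patch: an illustrative sketch of the new fields in use,
following the documentation example added above (replace the third byte of
the second option data DW, i.e. width 8 at destination offset 48). The option
class/type values and the immediate byte layout are assumptions for the
example only.

#include <rte_byteorder.h>
#include <rte_flow.h>

static const struct rte_flow_action_modify_field geneve_opt_set = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
		.level = 0,                    /* default encapsulation level */
		.type = 0x42,                  /* hypothetical option type */
		.class_id = RTE_BE16(0x0106),  /* hypothetical option class */
		.offset = 48,                  /* bit offset from start of option data */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		/* Only the addressed DW is taken from the immediate value;
		 * 0xXXXX85XX pattern, byte order assumed as in the option data.
		 */
		.value = { 0x00, 0x00, 0x85, 0x00 },
	},
	.width = 8,
};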
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
"ipv6_proto",
"flex_item",
- "hash_result", NULL
+ "hash_result",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ NULL
};
static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+ .name = "dst_type_id",
+ .help = "destination field type ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+ .name = "dst_class",
+ .help = "destination field class ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ dst.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_OFFSET] = {
.name = "dst_offset",
.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+ .name = "src_type_id",
+ .help = "source field type ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+ .name = "src_class",
+ .help = "source field class ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ src.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
.name = "src_offset",
.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
For the tag array (in case of multiple tags are supported and present)
``level`` translates directly into the array index.
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
``flex_handle`` is used to specify the flex item pointer which is being
modified. ``flex_handle`` and ``level`` are mutually exclusive.
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
specify destination width as 8, destination offset as 16, and provide immediate
value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in a
+single action, aligned to 32 bits. For example, to modify 16 bits starting
+from offset 24, 2 different actions should be prepared. The first one includes
+``offset=24`` and ``width=8``, and the second one includes ``offset=32`` and
+``width=8``.
+Application should provide the data in immediate value memory only for the
+single DW even though the offset is relative to the start of the first DW.
+For example, to replace the third byte of the second DW in Geneve option data
+with value 0x85, the application should specify destination width as 8,
+destination offset as 48, and provide immediate value 0xXXXX85XX.
+
.. _table_rte_flow_action_modify_field:
.. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+-----------------+----------------------------------------------------------+
| ``level`` | encapsulation level of a packet field or tag array index |
+-----------------+----------------------------------------------------------+
+ | ``type`` | geneve option type |
+ +-----------------+----------------------------------------------------------+
+ | ``class_id`` | geneve option class ID |
+ +-----------------+----------------------------------------------------------+
| ``flex_handle`` | flex item handle of a packet field |
+-----------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* The ``level`` field in experimental structure
+ ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
ABI Changes
-----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"immediate value, pointer and hash result cannot be used as destination");
- if (mask_conf->dst.level != UINT32_MAX)
+ if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
"destination field mask and template are not equal");
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
- if (mask_conf->src.level != UINT32_MAX)
+ if (mask_conf->src.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = RTE_FLOW_FIELD_VLAN_ID,
- .level = 0xffffffff, .offset = 0xffffffff,
+ .level = 0xff, .offset = 0xffffffff,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */
RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
+ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
+ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
};
/**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
struct {
/** Encapsulation level or tag index or flex item handle. */
union {
- uint32_t level;
+ struct {
+ /**
+ * Packet encapsulation level containing
+ * the field modify to.
+ *
+ * - @p 0 requests the default behavior.
+ * Depending on the packet type, it
+ * can mean outermost, innermost or
+ * anything in between.
+ *
+ * It basically stands for the
+ * innermost encapsulation level
+ * modification can be performed on
+ * according to PMD and device
+ * capabilities.
+ *
+ * - @p 1 requests modification to be
+ * performed on the outermost packet
+ * encapsulation level.
+ *
+ * - @p 2 and subsequent values request
+ * modification to be performed on
+ * the specified inner packet
+ * encapsulation level, from
+ * outermost to innermost (lower to
+ * higher values).
+ *
+ * Values other than @p 0 are not
+ * necessarily supported.
+ */
+ uint8_t level;
+ /**
+ * Geneve option type. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ rte_be16_t class_id;
+ };
struct rte_flow_item_flex_handle *flex_handle;
};
/** Number of bits to skip from a field. */
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port
2023-05-16 11:27 0% ` lihuisong (C)
@ 2023-05-23 0:46 0% ` fengchengwen
1 sibling, 0 replies; 200+ results
From: fengchengwen @ 2023-05-23 0:46 UTC (permalink / raw)
To: Huisong Li, dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, liudongdong3, huangdaode
with 2/5 fixed,
Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>
On 2023/1/31 11:33, Huisong Li wrote:
> This patchset fix some bugs and support attaching and detaching port
> in primary and secondary.
>
> ---
> -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
> -v4: fix a misspelling.
> -v3:
> #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
> for other bus type.
> #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
> the probelm in patch 2/5.
> -v2: resend due to CI unexplained failure.
>
> Huisong Li (5):
> drivers/bus: restore driver assignment at front of probing
> ethdev: fix skip valid port in probing callback
> app/testpmd: check the validity of the port
> app/testpmd: add attach and detach port for multiple process
> app/testpmd: stop forwarding in new or destroy event
>
> app/test-pmd/testpmd.c | 47 +++++++++++++++---------
> app/test-pmd/testpmd.h | 1 -
> drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
> drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
> drivers/bus/fslmc/fslmc_bus.c | 8 +++-
> drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
> drivers/bus/pci/pci_common.c | 9 ++++-
> drivers/bus/vdev/vdev.c | 10 ++++-
> drivers/bus/vmbus/vmbus_common.c | 9 ++++-
> drivers/net/bnxt/bnxt_ethdev.c | 3 +-
> drivers/net/bonding/bonding_testpmd.c | 1 -
> drivers/net/mlx5/mlx5.c | 2 +-
> lib/ethdev/ethdev_driver.c | 13 +++++--
> lib/ethdev/ethdev_driver.h | 12 ++++++
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 4 +-
> lib/ethdev/rte_ethdev.h | 4 +-
> lib/ethdev/version.map | 1 +
> 19 files changed, 114 insertions(+), 44 deletions(-)
>
^ permalink raw reply [relevance 0%]
* [PATCH v3 5/5] ethdev: add MPLS header modification support
2023-05-22 19:28 3% ` [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-22 19:28 3% ` Michael Baum
2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-22 19:28 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.
Since the MPLS header might appear more than once in inner/outer/tunnel,
a new field was added to the "rte_flow_action_modify_data" structure in
addition to the "level" field.
The "tag_index" field is the index of the header inside the encapsulation
level. It is used to modify multiple MPLS headers in the same encapsulation
level.
This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 24 +++++++++++-
doc/guides/prog_guide/rte_flow.rst | 18 ++++++---
doc/guides/rel_notes/release_23_07.rst | 8 +++-
lib/ethdev/rte_flow.h | 51 ++++++++++++++++++--------
4 files changed, 77 insertions(+), 24 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
"ipv6_proto",
"flex_item",
"hash_result",
- "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
NULL
};
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TAG_INDEX,
ACTION_MODIFY_FIELD_DST_TYPE_ID,
ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
ACTION_MODIFY_FIELD_SRC_TYPE_ID,
ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+ .name = "dst_tag_index",
+ .help = "destination field tag array",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
.name = "dst_type_id",
.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+ .name = "src_tag_index",
+ .help = "source field tag array",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.tag_index)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
.name = "src_type_id",
.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
``width`` defines a number of bits to use from ``src`` field.
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
- ``0`` means the default behaviour. Depending on the packet type,
it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
- ``2`` and subsequent values requests access to the specified packet
encapsulation level, from outermost to innermost (lower to higher values).
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside the encapsulation level.
+It is used to modify ``VLAN``, ``MPLS`` or ``TAG`` headers when multiple
+of them may be present in the same encapsulation level.
+
+.. note::
+
+ For ``RTE_FLOW_FIELD_TAG`` type, the tag array was provided in ``level``
+ field and it is still supported for backwards compatibility.
+ When ``tag_index`` is zero, the tag array is taken from ``level`` field.
``type`` is used to specify (along with ``class_id``) the Geneve option which
is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
+=================+==========================================================+
| ``field`` | ID: packet field, mark, meta, tag, immediate, pointer |
+-----------------+----------------------------------------------------------+
- | ``level`` | encapsulation level of a packet field or tag array index |
+ | ``level`` | encapsulation level of a packet field |
+ +-----------------+----------------------------------------------------------+
+ | ``tag_index`` | tag index inside encapsulation level |
+-----------------+----------------------------------------------------------+
| ``type`` | geneve option type |
+-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
-* The ``level`` field in experimental structure
- ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+ * ``level`` field was reduced to 8 bits.
+
+ * ``tag_index`` field replaced ``level`` field in representing tag array for
+ ``RTE_FLOW_FIELD_TAG`` type.
ABI Changes
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_START = 0, /**< Start of a packet. */
RTE_FLOW_FIELD_MAC_DST, /**< Destination MAC Address. */
RTE_FLOW_FIELD_MAC_SRC, /**< Source MAC Address. */
- RTE_FLOW_FIELD_VLAN_TYPE, /**< 802.1Q Tag Identifier. */
- RTE_FLOW_FIELD_VLAN_ID, /**< 802.1Q VLAN Identifier. */
+ RTE_FLOW_FIELD_VLAN_TYPE, /**< VLAN Tag Identifier. */
+ RTE_FLOW_FIELD_VLAN_ID, /**< VLAN Identifier. */
RTE_FLOW_FIELD_MAC_TYPE, /**< EtherType. */
RTE_FLOW_FIELD_IPV4_DSCP, /**< IPv4 DSCP. */
RTE_FLOW_FIELD_IPV4_TTL, /**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
- RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA, /**< GENEVE option data */
+ RTE_FLOW_FIELD_MPLS /**< MPLS header. */
};
/**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
RTE_STD_C11
union {
struct {
- /** Encapsulation level or tag index or flex item handle. */
+ /** Encapsulation level and tag index or flex item handle. */
union {
struct {
/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
*
* Values other than @p 0 are not
* necessarily supported.
+ *
+ * @note For the MPLS field, the
+ * encapsulation level also includes
+ * tunnel, since MPLS may appear in
+ * outer, inner or tunnel.
*/
uint8_t level;
- /**
- * Geneve option type. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- uint8_t type;
- /**
- * Geneve option class. relevant only
- * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
- * modification type.
- */
- rte_be16_t class_id;
+ union {
+ /**
+ * Tag index array inside
+ * encapsulation level.
+ * Used for VLAN, MPLS or TAG
+ * types.
+ */
+ uint8_t tag_index;
+ /**
+ * Geneve option identifier.
+ * relevant only for
+ * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ struct {
+ /**
+ * Geneve option type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class.
+ */
+ rte_be16_t class_id;
+ };
+ };
};
struct rte_flow_item_flex_handle *flex_handle;
};
--
2.25.1
^ permalink raw reply [relevance 3%]
* [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support
@ 2023-05-22 19:28 3% ` Michael Baum
2023-05-22 19:28 3% ` [PATCH v3 5/5] ethdev: add MPLS header " Michael Baum
2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-22 19:28 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"
Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.
To make room for those 2 new fields, the "level" field moves to use
"uint8_t", which is more than enough for the encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warnings caused by this API change.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++++-
doc/guides/prog_guide/rte_flow.rst | 23 ++++++++++++
doc/guides/rel_notes/release_23_07.rst | 3 ++
drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++------
lib/ethdev/rte_flow.h | 48 +++++++++++++++++++++++++-
5 files changed, 131 insertions(+), 13 deletions(-)
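Not part of the patch: a short illustration of why the "fully masked"
initializations change from UINT32_MAX to UINT8_MAX. With the template API, a
mask like the sketch below marks the relevant fields as fixed; once "level" is
a uint8_t, keeping UINT32_MAX there would overflow the field and trigger a
compiler warning. The destination field choice mirrors the VLAN VID mask used
elsewhere in this patch.

#include <stdint.h>
#include <rte_flow.h>

static const struct rte_flow_action_modify_field modify_field_mask = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_VLAN_ID,
		.level = UINT8_MAX,    /* was UINT32_MAX before this change */
		.offset = UINT32_MAX,
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
	},
	.width = UINT32_MAX,
};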
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
"ipv6_proto",
"flex_item",
- "hash_result", NULL
+ "hash_result",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ NULL
};
static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+ .name = "dst_type_id",
+ .help = "destination field type ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+ .name = "dst_class",
+ .help = "destination field class ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ dst.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_OFFSET] = {
.name = "dst_offset",
.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+ .name = "src_type_id",
+ .help = "source field type ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+ .name = "src_class",
+ .help = "source field class ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ src.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
.name = "src_offset",
.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
For the tag array (in case of multiple tags are supported and present)
``level`` translates directly into the array index.
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
``flex_handle`` is used to specify the flex item pointer which is being
modified. ``flex_handle`` and ``level`` are mutually exclusive.
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
specify destination width as 8, destination offset as 16, and provide immediate
value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in a
+single action, aligned to 32 bits. For example, to modify 16 bits starting
+from offset 24, 2 different actions should be prepared. The first one includes
+``offset=24`` and ``width=8``, and the second one includes ``offset=32`` and
+``width=8``.
+Application should provide the data in immediate value memory only for the
+single DW even though the offset is relative to the start of the first DW.
+For example, to replace the third byte of the second DW in Geneve option data
+with value 0x85, the application should specify destination width as 8,
+destination offset as 48, and provide immediate value 0xXXXX85XX.
+
.. _table_rte_flow_action_modify_field:
.. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+-----------------+----------------------------------------------------------+
| ``level`` | encapsulation level of a packet field or tag array index |
+-----------------+----------------------------------------------------------+
+ | ``type`` | geneve option type |
+ +-----------------+----------------------------------------------------------+
+ | ``class_id`` | geneve option class ID |
+ +-----------------+----------------------------------------------------------+
| ``flex_handle`` | flex item handle of a packet field |
+-----------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* The ``level`` field in experimental structure
+ ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
ABI Changes
-----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"immediate value, pointer and hash result cannot be used as destination");
- if (mask_conf->dst.level != UINT32_MAX)
+ if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
"destination field mask and template are not equal");
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
- if (mask_conf->src.level != UINT32_MAX)
+ if (mask_conf->src.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = RTE_FLOW_FIELD_VLAN_ID,
- .level = 0xffffffff, .offset = 0xffffffff,
+ .level = 0xff, .offset = 0xffffffff,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */
RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
+ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
+ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
};
/**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
struct {
/** Encapsulation level or tag index or flex item handle. */
union {
- uint32_t level;
+ struct {
+ /**
+ * Packet encapsulation level containing
+ * the field modify to.
+ *
+ * - @p 0 requests the default behavior.
+ * Depending on the packet type, it
+ * can mean outermost, innermost or
+ * anything in between.
+ *
+ * It basically stands for the
+ * innermost encapsulation level
+ * modification can be performed on
+ * according to PMD and device
+ * capabilities.
+ *
+ * - @p 1 requests modification to be
+ * performed on the outermost packet
+ * encapsulation level.
+ *
+ * - @p 2 and subsequent values request
+ * modification to be performed on
+ * the specified inner packet
+ * encapsulation level, from
+ * outermost to innermost (lower to
+ * higher values).
+ *
+ * Values other than @p 0 are not
+ * necessarily supported.
+ */
+ uint8_t level;
+ /**
+ * Geneve option type. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ rte_be16_t class_id;
+ };
struct rte_flow_item_flex_handle *flex_handle;
};
/** Number of bits to skip from a field. */
--
2.25.1
^ permalink raw reply [relevance 3%]
* [PATCH 1/2] net/nfp: align reading of version info with kernel driver
@ 2023-05-22 11:40 6% ` Chaoyong He
0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-05-22 11:40 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Align the method of reading the version information with the linux
driver. This is done to make it easier to share code between the
DPDK PMD and the kernel driver.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 4 ++--
drivers/net/nfp/nfp_common.c | 30 +++++++++++++++++++----------
drivers/net/nfp/nfp_common.h | 21 ++------------------
drivers/net/nfp/nfp_ctrl.h | 22 +++++++++++++--------
drivers/net/nfp/nfp_ethdev.c | 10 +++++-----
drivers/net/nfp/nfp_ethdev_vf.c | 10 +++++-----
drivers/net/nfp/nfp_rxtx.c | 6 +++---
7 files changed, 51 insertions(+), 52 deletions(-)
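Not part of the patch: a small standalone illustration of how the union read
in nfp_net_cfg_read_version() maps the 32-bit NFP_NET_CFG_VERSION word onto
the new struct, assuming a little-endian host (the config BAR word is read as
a little-endian value). The register value below is hypothetical.

#include <stdint.h>
#include <stdio.h>

struct nfp_net_fw_ver {
	uint8_t minor;
	uint8_t major;
	uint8_t class;
	uint8_t extend;   /* BIT0: NFD datapath type, 0 = NFD3, 1 = NFDK */
};

int main(void)
{
	union {
		uint32_t whole;
		struct nfp_net_fw_ver split;
	} version;

	version.whole = 0x01000205;   /* hypothetical register value */
	/* minor = 0x05, major = 0x02, class = 0x00, extend = 0x01 (NFDK) */
	printf("VER: %u.%u, dp type %u\n",
	       version.split.major, version.split.minor, version.split.extend);
	return 0;
}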
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 72933e55d0..778ea777dd 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -650,7 +650,7 @@ nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
hw->rx_bar = pf_dev->hw_queues + rx_bar_off;
/* Get some of the read-only fields from the config BAR */
- hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+ nfp_net_cfg_read_version(hw);
hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP);
hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU);
/* Set the current MTU to the maximum supported */
@@ -661,7 +661,7 @@ nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
return -ENODEV;
/* read the Rx offset configured from firmware */
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+ if (hw->ver.major < 2)
hw->rx_offset = NFP_NET_RX_OFFSET;
else
hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index c9fea765a4..a9af215626 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -356,8 +356,7 @@ void
nfp_net_log_device_information(const struct nfp_net_hw *hw)
{
PMD_INIT_LOG(INFO, "VER: %u.%u, Maximum supported MTU: %d",
- NFD_CFG_MAJOR_VERSION_of(hw->ver),
- NFD_CFG_MINOR_VERSION_of(hw->ver), hw->max_mtu);
+ hw->ver.major, hw->ver.minor, hw->max_mtu);
PMD_INIT_LOG(INFO, "CAP: %#x, %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s", hw->cap,
hw->cap & NFP_NET_CFG_CTRL_PROMISC ? "PROMISC " : "",
@@ -1114,14 +1113,14 @@ nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
{
uint16_t tx_dpp;
- switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+ switch (hw->ver.extend) {
case NFP_NET_CFG_VERSION_DP_NFD3:
tx_dpp = NFD3_TX_DESC_PER_PKT;
break;
case NFP_NET_CFG_VERSION_DP_NFDK:
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+ if (hw->ver.major < 5) {
PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- NFD_CFG_MAJOR_VERSION_of(hw->ver));
+ hw->ver.major);
return -EINVAL;
}
tx_dpp = NFDK_TX_DESC_PER_SIMPLE_PKT;
@@ -1911,11 +1910,10 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
int
nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
{
- if (NFD_CFG_CLASS_VER_of(hw->ver) == NFP_NET_CFG_VERSION_DP_NFD3 &&
+ if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
rte_mem_check_dma_mask(40) != 0) {
- PMD_DRV_LOG(ERR,
- "The device %s can't be used: restricted dma mask to 40 bits!",
- name);
+ PMD_DRV_LOG(ERR, "Device %s can't be used: restricted dma mask to 40 bits!",
+ name);
return -ENODEV;
}
@@ -1930,7 +1928,7 @@ nfp_net_init_metadata_format(struct nfp_net_hw *hw)
* single metadata if only RSS(v1) is supported by hw capability, and RSS(v2)
* also indicate that we are using chained metadata.
*/
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4) {
+ if (hw->ver.major == 4) {
hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
} else if ((hw->cap & NFP_NET_CFG_CTRL_CHAIN_META) != 0) {
hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
@@ -1944,3 +1942,15 @@ nfp_net_init_metadata_format(struct nfp_net_hw *hw)
hw->meta_format = NFP_NET_METAFORMAT_SINGLE;
}
}
+
+void
+nfp_net_cfg_read_version(struct nfp_net_hw *hw)
+{
+ union {
+ uint32_t whole;
+ struct nfp_net_fw_ver split;
+ } version;
+
+ version.whole = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+ hw->ver = version.split;
+}
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 47df0510c5..424b18b0ad 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -80,24 +80,6 @@ struct nfp_net_adapter;
#define NFP_NET_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
#define NFP_NET_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
-/* Version number helper defines */
-#define NFD_CFG_CLASS_VER_msk 0xff
-#define NFD_CFG_CLASS_VER_shf 24
-#define NFD_CFG_CLASS_VER(x) (((x) & 0xff) << 24)
-#define NFD_CFG_CLASS_VER_of(x) (((x) >> 24) & 0xff)
-#define NFD_CFG_CLASS_TYPE_msk 0xff
-#define NFD_CFG_CLASS_TYPE_shf 16
-#define NFD_CFG_CLASS_TYPE(x) (((x) & 0xff) << 16)
-#define NFD_CFG_CLASS_TYPE_of(x) (((x) >> 16) & 0xff)
-#define NFD_CFG_MAJOR_VERSION_msk 0xff
-#define NFD_CFG_MAJOR_VERSION_shf 8
-#define NFD_CFG_MAJOR_VERSION(x) (((x) & 0xff) << 8)
-#define NFD_CFG_MAJOR_VERSION_of(x) (((x) >> 8) & 0xff)
-#define NFD_CFG_MINOR_VERSION_msk 0xff
-#define NFD_CFG_MINOR_VERSION_shf 0
-#define NFD_CFG_MINOR_VERSION(x) (((x) & 0xff) << 0)
-#define NFD_CFG_MINOR_VERSION_of(x) (((x) >> 0) & 0xff)
-
/* Number of supported physical ports */
#define NFP_MAX_PHYPORTS 12
@@ -196,7 +178,7 @@ struct nfp_net_hw {
struct rte_eth_dev *eth_dev;
/* Info from the firmware */
- uint32_t ver;
+ struct nfp_net_fw_ver ver;
uint32_t cap;
uint32_t max_mtu;
uint32_t mtu;
@@ -490,6 +472,7 @@ int nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
uint16_t *max_tx_desc);
int nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name);
void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
+void nfp_net_cfg_read_version(struct nfp_net_hw *hw);
#define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index bca31ac311..ff2245dfff 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -130,6 +130,20 @@
#define NFP_NET_CFG_CTRL_CHAIN_META (NFP_NET_CFG_CTRL_RSS2 | \
NFP_NET_CFG_CTRL_CSUM_COMPLETE)
+
+/* Version number helper defines */
+struct nfp_net_fw_ver {
+ uint8_t minor;
+ uint8_t major;
+ uint8_t class;
+ /**
+ * This byte can be extended for future use.
+ * BIT0: NFD dp type, refer to NFP_NET_CFG_VERSION_DP_NFDx
+ * BIT[7:1]: reserved
+ */
+ uint8_t extend;
+};
+
/*
* Read-only words (0x0030 - 0x0050):
* @NFP_NET_CFG_VERSION: Firmware version number
@@ -147,14 +161,6 @@
#define NFP_NET_CFG_VERSION 0x0030
#define NFP_NET_CFG_VERSION_DP_NFD3 0
#define NFP_NET_CFG_VERSION_DP_NFDK 1
-#define NFP_NET_CFG_VERSION_RESERVED_MASK (0xff << 24)
-#define NFP_NET_CFG_VERSION_CLASS_MASK (0xff << 16)
-#define NFP_NET_CFG_VERSION_CLASS(x) (((x) & 0xff) << 16)
-#define NFP_NET_CFG_VERSION_CLASS_GENERIC 0
-#define NFP_NET_CFG_VERSION_MAJOR_MASK (0xff << 8)
-#define NFP_NET_CFG_VERSION_MAJOR(x) (((x) & 0xff) << 8)
-#define NFP_NET_CFG_VERSION_MINOR_MASK (0xff << 0)
-#define NFP_NET_CFG_VERSION_MINOR(x) (((x) & 0xff) << 0)
#define NFP_NET_CFG_STS 0x0034
#define NFP_NET_CFG_STS_LINK (0x1 << 0) /* Link up or down */
/* Link rate */
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 722ec17dce..0b2dd7801b 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -466,14 +466,14 @@ static const struct eth_dev_ops nfp_net_eth_dev_ops = {
static inline int
nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
{
- switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+ switch (hw->ver.extend) {
case NFP_NET_CFG_VERSION_DP_NFD3:
eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
break;
case NFP_NET_CFG_VERSION_DP_NFDK:
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+ if (hw->ver.major < 5) {
PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- NFD_CFG_MAJOR_VERSION_of(hw->ver));
+ hw->ver.major);
return -EINVAL;
}
eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
@@ -571,7 +571,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar);
PMD_INIT_LOG(DEBUG, "MAC stats: %p", hw->mac_stats);
- hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+ nfp_net_cfg_read_version(hw);
if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
return -ENODEV;
@@ -629,7 +629,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
nfp_net_init_metadata_format(hw);
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+ if (hw->ver.major < 2)
hw->rx_offset = NFP_NET_RX_OFFSET;
else
hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ce55e3b728..cf3548e63a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -246,14 +246,14 @@ static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
static inline int
nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
{
- switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+ switch (hw->ver.extend) {
case NFP_NET_CFG_VERSION_DP_NFD3:
eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
break;
case NFP_NET_CFG_VERSION_DP_NFDK:
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+ if (hw->ver.major < 5) {
PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- NFD_CFG_MAJOR_VERSION_of(hw->ver));
+ hw->ver.major);
return -EINVAL;
}
eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
@@ -298,7 +298,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar);
- hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+ nfp_net_cfg_read_version(hw);
if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
return -ENODEV;
@@ -380,7 +380,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
nfp_net_init_metadata_format(hw);
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+ if (hw->ver.major < 2)
hw->rx_offset = NFP_NET_RX_OFFSET;
else
hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 3c78557221..478752fa14 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -764,14 +764,14 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+ switch (hw->ver.extend) {
case NFP_NET_CFG_VERSION_DP_NFD3:
return nfp_net_nfd3_tx_queue_setup(dev, queue_idx,
nb_desc, socket_id, tx_conf);
case NFP_NET_CFG_VERSION_DP_NFDK:
- if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+ if (hw->ver.major < 5) {
PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
- NFD_CFG_MAJOR_VERSION_of(hw->ver));
+ hw->ver.major);
return -EINVAL;
}
return nfp_net_nfdk_tx_queue_setup(dev, queue_idx,
--
2.39.1
^ permalink raw reply [relevance 6%]
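For reference, a minimal sketch (not part of the patch above) of how the 32-bit
NFP_NET_CFG_VERSION word maps onto the new struct nfp_net_fw_ver through the
union used in nfp_net_cfg_read_version(), assuming a little-endian host so the
low byte lands in ``minor`` and the high byte in ``extend``; the register value
below is hypothetical:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the layout added in nfp_ctrl.h: minor, major, class, extend. */
    struct nfp_net_fw_ver {
            uint8_t minor;
            uint8_t major;
            uint8_t class;
            uint8_t extend; /* BIT0 selects the datapath: NFD3 (0) or NFDK (1) */
    };

    int main(void)
    {
            union {
                    uint32_t whole;
                    struct nfp_net_fw_ver split;
            } version;

            /* Hypothetical register value: extend=1 (NFDK), class=0, major=5, minor=2. */
            version.whole = 0x01000502;

            printf("dp=%u class=%u major=%u minor=%u\n",
                   version.split.extend, version.split.class,
                   version.split.major, version.split.minor);
            return 0;
    }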
* [PATCH v3 01/19] mbuf: replace term sanity check
@ 2023-05-19 17:45 2% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-19 17:45 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Olivier Matz, Steven Webster, Matt Peters,
Andrew Rybchenko
Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
to match the similar macro RTE_VERIFY() in rte_debug.h
The term sanity check is on the Tier 2 list of words
that should be replaced.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test/test_mbuf.c | 30 ++++++------
doc/guides/prog_guide/mbuf_lib.rst | 4 +-
doc/guides/rel_notes/deprecation.rst | 3 ++
drivers/net/avp/avp_ethdev.c | 18 +++----
drivers/net/sfc/sfc_ef100_rx.c | 6 +--
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 4 +-
drivers/net/sfc/sfc_rx.c | 2 +-
examples/ipv4_multicast/main.c | 2 +-
lib/mbuf/rte_mbuf.c | 23 +++++----
lib/mbuf/rte_mbuf.h | 71 +++++++++++++++-------------
lib/mbuf/version.map | 1 +
12 files changed, 91 insertions(+), 77 deletions(-)
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 8d8d3b9386ce..c2716dc4e5fe 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -261,8 +261,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("Buffer should be continuous");
memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
- rte_mbuf_sanity_check(m, 1);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 1);
+ rte_mbuf_verify(m, 0);
rte_pktmbuf_dump(stdout, m, 0);
/* this prepend should fail */
@@ -1161,7 +1161,7 @@ test_refcnt_mbuf(void)
#ifdef RTE_EXEC_ENV_WINDOWS
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
RTE_SET_USED(pktmbuf_pool);
return TEST_SKIPPED;
@@ -1188,7 +1188,7 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
/* No need to generate a coredump when panicking. */
rl.rlim_cur = rl.rlim_max = 0;
setrlimit(RLIMIT_CORE, &rl);
- rte_mbuf_sanity_check(buf, 1); /* should panic */
+ rte_mbuf_verify(buf, 1); /* should panic */
exit(0); /* return normally if it doesn't panic */
} else if (pid < 0) {
printf("Fork Failed\n");
@@ -1202,12 +1202,12 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
}
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
struct rte_mbuf *buf;
struct rte_mbuf badbuf;
- printf("Checking rte_mbuf_sanity_check for failure conditions\n");
+ printf("Checking rte_mbuf_verify for failure conditions\n");
/* get a good mbuf to use to make copies */
buf = rte_pktmbuf_alloc(pktmbuf_pool);
@@ -1729,7 +1729,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
m->ol_flags = ol_flags;
m->tso_segsz = segsize;
ret = rte_validate_tx_offload(m);
@@ -1936,7 +1936,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
if (data == NULL)
@@ -1985,7 +1985,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool *pktmbuf_pool)
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
/* prepend an ethernet header */
hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
@@ -2130,7 +2130,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(pkt_seg, 0);
+ rte_mbuf_verify(pkt_seg, 0);
/* Add header only for the first segment */
if (test_data->flags == MBUF_HEADER && seg == 0) {
hdr_len = sizeof(struct rte_ether_hdr);
@@ -2342,7 +2342,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
ext_buf_addr = rte_malloc("External buffer", buf_len,
RTE_CACHE_LINE_SIZE);
@@ -2506,8 +2506,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool *std_pool)
GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
__func__);
- if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
- GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
+ if (test_failing_mbuf_verify(pinned_pool) < 0)
+ GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
" failed\n", __func__);
if (test_mbuf_linearize_check(pinned_pool) < 0)
@@ -2881,8 +2881,8 @@ test_mbuf(void)
goto err;
}
- if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
- printf("test_failing_mbuf_sanity_check() failed\n");
+ if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
+ printf("test_failing_mbuf_verify() failed\n");
goto err;
}
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 049357c75563..0accb51a98c7 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -266,8 +266,8 @@ can be found in several of the sample applications, for example, the IPv4 Multic
Debug
-----
-In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption,
-bad type, and so on).
+In debug mode, the functions of the mbuf library perform consistency checks
+before any operation (such as, buffer corruption, bad type, and so on).
Use Cases
---------
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca169668..186cc13eea60 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* mbuf: The function ``rte_mbuf_sanity_check`` will be deprecated in DPDK 23.07
+ and removed in DPDK 23.11. The new function will be ``rte_mbuf_verify``.
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b2a08f563542..b402c7a2ad16 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1231,7 +1231,7 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
static inline void
-__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+__avp_dev_buffer_check(struct avp_dev *avp, struct rte_avp_desc *buf)
{
struct rte_avp_desc *first_buf;
struct rte_avp_desc *pkt_buf;
@@ -1272,12 +1272,12 @@ __avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
first_buf->pkt_len, pkt_len);
}
-#define avp_dev_buffer_sanity_check(a, b) \
- __avp_dev_buffer_sanity_check((a), (b))
+#define avp_dev_buffer_check(a, b) \
+ __avp_dev_buffer_check((a), (b))
#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
-#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+#define avp_dev_buffer_check(a, b) do {} while (0)
#endif
@@ -1302,7 +1302,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
void *pkt_data;
unsigned int i;
- avp_dev_buffer_sanity_check(avp, buf);
+ avp_dev_buffer_check(avp, buf);
/* setup the first source buffer */
pkt_buf = avp_dev_translate_buffer(avp, buf);
@@ -1370,7 +1370,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
rte_pktmbuf_pkt_len(m) = total_length;
m->vlan_tci = vlan_tci;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m;
}
@@ -1614,7 +1614,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
char *pkt_data;
unsigned int i;
- __rte_mbuf_sanity_check(mbuf, 1);
+ __rte_mbuf_verify(mbuf, 1);
m = mbuf;
src_offset = 0;
@@ -1680,7 +1680,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
first_buf->vlan_tci = mbuf->vlan_tci;
}
- avp_dev_buffer_sanity_check(avp, buffers[0]);
+ avp_dev_buffer_check(avp, buffers[0]);
return total_length;
}
@@ -1798,7 +1798,7 @@ avp_xmit_scattered_pkts(void *tx_queue,
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
for (i = 0; i < nb_pkts; i++)
- avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+ avp_dev_buffer_check(avp, tx_bufs[i]);
#endif
/* send the packets */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 16cd8524d32f..dcd3b3316752 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -178,7 +178,7 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
struct sfc_ef100_rx_sw_desc *rxd;
rte_iova_t dma_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
dma_addr = rte_mbuf_data_iova_default(m);
if (rxq->flags & SFC_EF100_RXQ_NIC_DMA_MAP) {
@@ -541,7 +541,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
rxq->ready_pkts--;
pkt = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(pkt);
+ __rte_mbuf_raw_verify(pkt);
RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
sizeof(rxq->rearm_data));
@@ -565,7 +565,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
struct rte_mbuf *seg;
seg = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(seg);
+ __rte_mbuf_raw_verify(seg);
seg->data_off = RTE_PKTMBUF_HEADROOM;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 78bd430363b1..74647e2792b1 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -125,7 +125,7 @@ sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
@@ -136,7 +136,7 @@ sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7be224c9c412..0fdd0d84c17c 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -148,7 +148,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
struct sfc_ef10_rx_sw_desc *rxd;
rte_iova_t phys_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
SFC_ASSERT((id & ~ptr_mask) == 0);
rxd = &rxq->sw_ring[id];
@@ -297,7 +297,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
rxd = &rxq->sw_ring[pending++ & ptr_mask];
m = rxd->mbuf;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
m->data_off = RTE_PKTMBUF_HEADROOM;
rte_pktmbuf_data_len(m) = seg_len;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 5ea98187c3b4..5d5df52b269a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -120,7 +120,7 @@ sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
++i, id = (id + 1) & rxq->ptr_mask) {
m = objs[i];
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
rxd = &rxq->sw_desc[id];
rxd->mbuf = m;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 6d0a8501eff5..f39658f4e249 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -258,7 +258,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
hdr->nb_segs = pkt->nb_segs + 1;
- __rte_mbuf_sanity_check(hdr, 1);
+ __rte_mbuf_verify(hdr, 1);
return hdr;
}
/* >8 End of mcast_out_kt. */
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 686e797c80c4..56fb6c846df6 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -363,9 +363,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
return mp;
}
-/* do some sanity checks on a mbuf: panic if it fails */
+/* do some checks on a mbuf: panic if it fails */
void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
{
const char *reason;
@@ -373,6 +373,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
rte_panic("%s\n", reason);
}
+/* For ABI compatibility, to be removed in the next release */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+ rte_mbuf_verify(m, is_header);
+}
+
int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason)
{
@@ -492,7 +499,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
if (unlikely(m == NULL))
continue;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
m_next = m->next;
@@ -542,7 +549,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
return NULL;
}
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -592,7 +599,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
struct rte_mbuf *mc, *m_last, **prev;
/* garbage in check */
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
/* check for request to copy at offset past end of mbuf */
if (unlikely(off >= m->pkt_len))
@@ -656,7 +663,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
}
/* garbage out check */
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -667,7 +674,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
unsigned int len;
unsigned int nb_segs;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m, rte_mbuf_iova_get(m),
m->buf_len);
@@ -685,7 +692,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
nb_segs = m->nb_segs;
while (m && nb_segs != 0) {
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
fprintf(f, " segment at %p, data=%p, len=%u, off=%u, refcnt=%u\n",
m, rte_pktmbuf_mtod(m, void *),
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459b1cc6..3bd50d7307b3 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -339,13 +339,13 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
#ifdef RTE_LIBRTE_MBUF_DEBUG
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+/** check mbuf type in debug mode */
+#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
#else /* RTE_LIBRTE_MBUF_DEBUG */
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+/** ignore mbuf checks if not in debug mode */
+#define __rte_mbuf_verify(m, is_h) do { } while (0)
#endif /* RTE_LIBRTE_MBUF_DEBUG */
@@ -514,10 +514,9 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
/**
- * Sanity checks on an mbuf.
+ * Check that the mbuf is valid and panic if corrupted.
*
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
+ * Acts as an assertion that the mbuf is consistent. If not, it calls rte_panic().
*
* @param m
* The mbuf to be checked.
@@ -526,13 +525,17 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
* of a packet (in this case, some fields like nb_segs are not checked)
*/
void
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
+
+/* Older deprecated name for rte_mbuf_verify() */
+void __rte_deprecated
rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
/**
- * Sanity checks on a mbuf.
+ * Do consistency checks on a mbuf.
*
- * Almost like rte_mbuf_sanity_check(), but this function gives the reason
- * if corruption is detected rather than panic.
+ * Check the consistency of the given mbuf and, if it is not valid,
+ * return the reason.
*
* @param m
* The mbuf to be checked.
@@ -551,7 +554,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason);
/**
- * Sanity checks on a reinitialized mbuf in debug mode.
+ * Do checks on a reinitialized mbuf in debug mode.
*
* Check the consistency of the given reinitialized mbuf.
* The function will cause a panic if corruption is detected.
@@ -563,16 +566,16 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
* The mbuf to be checked.
*/
static __rte_always_inline void
-__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
+__rte_mbuf_raw_verify(__rte_unused const struct rte_mbuf *m)
{
RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
RTE_ASSERT(m->next == NULL);
RTE_ASSERT(m->nb_segs == 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
}
/** For backwards compatibility. */
-#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
+#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_verify(m)
/**
* Allocate an uninitialized mbuf from mempool *mp*.
@@ -599,7 +602,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
if (rte_mempool_get(mp, (void **)&m) < 0)
return NULL;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
@@ -622,7 +625,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
{
RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
(!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
rte_mempool_put(m->pool, m);
}
@@ -886,7 +889,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
rte_pktmbuf_reset_headroom(m);
m->data_len = 0;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
}
/**
@@ -942,22 +945,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
switch (count % 4) {
case 0:
while (idx != count) {
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 3:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 2:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 1:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
@@ -1185,8 +1188,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->pkt_len = mi->data_len;
mi->nb_segs = 1;
- __rte_mbuf_sanity_check(mi, 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(mi, 1);
+ __rte_mbuf_verify(m, 0);
}
/**
@@ -1341,7 +1344,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
static __rte_always_inline struct rte_mbuf *
rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
if (likely(rte_mbuf_refcnt_read(m) == 1)) {
@@ -1412,7 +1415,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
struct rte_mbuf *m_next;
if (m != NULL)
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m != NULL) {
m_next = m->next;
@@ -1493,7 +1496,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
*/
static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
rte_mbuf_refcnt_update(m, v);
@@ -1510,7 +1513,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
*/
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return m->data_off;
}
@@ -1524,7 +1527,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
*/
static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
m->data_len);
}
@@ -1539,7 +1542,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
*/
static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m->next != NULL)
m = m->next;
return m;
@@ -1583,7 +1586,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
@@ -1618,7 +1621,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
void *tail;
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -1646,7 +1649,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
*/
static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > m->data_len))
return NULL;
@@ -1678,7 +1681,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
{
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > m_last->data_len))
@@ -1700,7 +1703,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
*/
static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m->nb_segs == 1;
}
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index ed486ed14ec7..f134946f3d8d 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -31,6 +31,7 @@ DPDK_23 {
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
+ rte_mbuf_verify;
rte_pktmbuf_clone;
rte_pktmbuf_copy;
rte_pktmbuf_dump;
--
2.39.2
^ permalink raw reply [relevance 2%]
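As a usage note on the renamed API (a sketch, not part of the patch above): the
non-panicking helper rte_mbuf_check() already returns a reason string instead of
calling rte_panic(), so callers that only want a diagnostic rather than the
asserting rte_mbuf_verify() could do something along these lines:

    #include <rte_mbuf.h>
    #include <rte_log.h>

    /* Sketch: report mbuf consistency problems without panicking. */
    static int
    check_mbuf_or_log(const struct rte_mbuf *m)
    {
            const char *reason = NULL;

            if (rte_mbuf_check(m, 1 /* is_header */, &reason) != 0) {
                    RTE_LOG(ERR, USER1, "bad mbuf %p: %s\n", (const void *)m, reason);
                    return -1;
            }

            /* rte_mbuf_verify(m, 1) would panic here instead of returning. */
            return 0;
    }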
* [PATCH V10] ethdev: fix one address occupies two entries in MAC addrs
2023-05-19 3:00 4% ` [PATCH V9] " Huisong Li
@ 2023-05-19 9:31 3% ` Huisong Li
2 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-05-19 9:31 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, bruce.richardson, andrew.rybchenko,
liudongdong3, liuyonglong, fengchengwen, lihuisong
The dev->data->mac_addrs[0] will be changed to a new MAC address when
applications modify the default MAC address by .mac_addr_set(). However,
if the new default one has been added as a non-default MAC address by
.mac_addr_add(), the .mac_addr_set() didn't check this address.
As a result, this MAC address occupies two entries in the list. Like:
add(MAC1)
add(MAC2)
add(MAC3)
add(MAC4)
set_default(MAC3)
default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.
But .mac_addr_set() cannot remove it implicitly, as that would shrink the
MAC address list.
So this patch adds a check on whether the new default address was already
in the list and, if so, requires the user to remove it first.
In addition, this patch documents the position of the default MAC address
and the uniqueness of addresses in the list.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v10: add '-EEXIST' error type case under @return.
v9: request user to remove the address instead of doing it implicitly in
.mac_addr_set() API.
v8: fix some comments.
v7: add announcement in the release notes and document this behavior.
v6: fix commit log and some code comments.
v5:
- merge the second patch into the first patch.
- add error log when rollback failed.
v4:
- fix broken in the patchwork
v3:
- first explicitly remove the non-default MAC, then set default one.
- document default and non-default MAC address
v2:
- fixed commit log.
---
doc/guides/rel_notes/release_23_07.rst | 5 +++++
lib/ethdev/ethdev_driver.h | 6 +++++-
lib/ethdev/rte_ethdev.c | 10 ++++++++++
lib/ethdev/rte_ethdev.h | 4 ++++
4 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 4ffef85d74..7c624d8315 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -96,6 +96,11 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+ * ethdev: ensured all entries in the MAC address list are unique.
+ When setting a default MAC address with the function
+ ``rte_eth_dev_default_mac_addr_set``,
+ the default one needs to be removed by the user if it was already in
+ the list.
ABI Changes
-----------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..367c0c4878 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -117,7 +117,11 @@ struct rte_eth_dev_data {
uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
- /** Device Ethernet link address. @see rte_eth_dev_release_port() */
+ /**
+ * Device Ethernet link addresses.
+ * All entries are unique.
+ * The first entry (index zero) is the default address.
+ */
struct rte_ether_addr *mac_addrs;
/** Bitmap associating MAC addresses to pools */
uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..d46e74504e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4898,6 +4898,7 @@ int
rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
{
struct rte_eth_dev *dev;
+ int index;
int ret;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -4916,6 +4917,15 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
if (*dev->dev_ops->mac_addr_set == NULL)
return -ENOTSUP;
+ /* Keep address unique in dev->data->mac_addrs[]. */
+ index = eth_dev_get_mac_addr_index(port_id, addr);
+ if (index > 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "New default address for port %u was already in the address list. Please remove it first.\n",
+ port_id);
+ return -EEXIST;
+ }
+
ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
if (ret < 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..fe8f7466c8 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4381,6 +4381,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
/**
* Set the default MAC address.
+ * It replaces the address at index 0 of the MAC address list.
+ * If the address was already in the MAC address list,
+ * please remove it first.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -4391,6 +4394,7 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
* - (-ENOTSUP) if hardware doesn't support.
* - (-ENODEV) if *port* invalid.
* - (-EINVAL) if MAC address is invalid.
+ * - (-EEXIST) if MAC address was already in the address list.
*/
int rte_eth_dev_default_mac_addr_set(uint16_t port_id,
struct rte_ether_addr *mac_addr);
--
2.33.0
^ permalink raw reply [relevance 3%]
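A minimal sketch (not part of the patch above) of how an application would follow
the new rule: if the address is still present as a non-default entry, remove it
before promoting it to the default slot. The port_id and mac arguments below are
placeholders:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Sketch: promote an already-added MAC address to the default entry. */
    static int
    promote_to_default(uint16_t port_id, struct rte_ether_addr *mac)
    {
            int ret;

            ret = rte_eth_dev_default_mac_addr_set(port_id, mac);
            if (ret == -EEXIST) {
                    /* Still in the list as a non-default entry: remove it first. */
                    ret = rte_eth_dev_mac_addr_remove(port_id, mac);
                    if (ret != 0)
                            return ret;
                    ret = rte_eth_dev_default_mac_addr_set(port_id, mac);
            }
            return ret;
    }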
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-04-24 22:41 3% ` Thomas Monjalon
@ 2023-05-19 8:07 4% ` Jerin Jacob
2023-05-30 9:23 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-05-19 8:07 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Stephen Hemminger, Nithin Dabilpuram, Akhil Goyal, jerinj, dev,
Morten Brørup, techboard
On Tue, Apr 25, 2023 at 4:11 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 18/04/2023 10:33, Jerin Jacob:
> > On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > >
> > > On Tue, 11 Apr 2023 15:34:07 +0530
> > > Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
> > >
> > > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > > index 4bacf9fcd9..866cd4e8ee 100644
> > > > --- a/lib/security/rte_security.h
> > > > +++ b/lib/security/rte_security.h
> > > > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> > > > */
> > > > uint32_t ip_reassembly_en : 1;
> > > >
> > > > + /** Enable out of place processing on inline inbound packets.
> > > > + *
> > > > + * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > > > + * inbound SA if supported by driver. PMD need to register mbuf
> > > > + * dynamic field using rte_security_oop_dynfield_register()
> > > > + * and security session creation would fail if dynfield is not
> > > > + * registered successfully.
> > > > + * * 0: Disable OOP processing for this session (default).
> > > > + */
> > > > + uint32_t ingress_oop : 1;
> > > > +
> > > > /** Reserved bit fields for future extension
> > > > *
> > > > * User should ensure reserved_opts is cleared as it may change in
> > > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > > *
> > > > * Note: Reduce number of bits in reserved_opts for every new option.
> > > > */
> > > > - uint32_t reserved_opts : 17;
> > > > + uint32_t reserved_opts : 16;
> > > > };
> > >
> > > NAK
> > > Let me repeat the reserved bit rant. YAGNI
> > >
> > > Reserved space is not usable without ABI breakage unless the existing
> > > code enforces that reserved space has to be zero.
> > >
> > > Just saying "User should ensure reserved_opts is cleared" is not enough.
> >
> > Yes. I think, we need to enforce to have _init functions for the
> > structures which is using reserved filed.
> >
> > On the same note on YAGNI, I am wondering why NOT introduce
> > RTE_NEXT_ABI marco kind of scheme to compile out ABI breaking changes.
> > By keeping RTE_NEXT_ABI disable by default, enable explicitly if user
> > wants it to avoid waiting for one year any ABI breaking changes.
> > There are a lot of "fixed appliance" customers (not OS distribution
> > driven customer) they are willing to recompile DPDK for new feature.
> > What we are loosing with this scheme?
>
> RTE_NEXT_ABI is described in the ABI policy.
> We are not doing it currently, but I think we could
> when it is not too much complicate in the code.
>
> The only problems I see are:
> - more #ifdef clutter
> - 2 binary versions to test
> - CI and checks must handle RTE_NEXT_ABI version
I think, we have two buckets of ABI breakages via RTE_NEXT_ABI
1) Changes that introduce compilation failures, like adding a new
argument to an API or changing an API name, etc.
2) Structure size change which won't affect the compilation but breaks
the ABI for shared library usage.
I think (1) is very disruptive, and I have not seen such changes
recently. I think we should avoid (1) for non XX.11 releases (or two-
or three-year cycles if we decide on that path).
The (2) cases are very common because HW features keep
evolving. I think, to address (2), we have two options
a) Have reserved fields and have _init() function to initialize the structures
b) Follow YAGNI style and introduce RTE_NEXT_ABI for structure size change.
The above concerns [1] are greatly reduced with either option (a) or option (b).
[1]
1) more #ifdef clutter
For option (a) this is not needed; for option (b) the clutter will be
limited: it will be around the structure which adds the new field and
around the FULL block where new functions are added (not inside the
functions).
2) 2 binary versions to test
For option (a) this is not needed; for option (b) it is limited, as only
new features require testing another binary (rather than when NOT
adding a new feature).
3) CI and checks must handle RTE_NEXT_ABI version
I think, it is cheap to add this, at least for compilation test.
IMO, we need to change the API break release to a 3-year kind of time
frame to give a very good end-user experience,
and allow ABI-related changes to get in every release and force a
_rebuild_ of shared objects in the major LTS release.
I think, in this major LTS version (23.11), if we can decide (a) vs (b)
then we can align the code accordingly, especially since for (a) we need
to add _init() functions.
Thoughts?
^ permalink raw reply [relevance 4%]
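A rough sketch of what option (a) above could look like: a structure that keeps
reserved space plus an _init() helper that zeroes it, so later releases can
repurpose the reserved bits without breaking the ABI. All names here are
illustrative, not an existing DPDK API; the RTE_NEXT_ABI guard shows where
option (b) additions would live instead:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: a config struct that keeps spare room for new options. */
    struct example_sa_options {
            uint32_t ip_reassembly_en : 1;
            uint32_t ingress_oop : 1;
            uint32_t reserved_opts : 30;    /* must stay zero until repurposed */
            uint8_t reserved[16];           /* spare bytes for future fields */
    };

    /* Option (a): callers always start from a zeroed, known-good state. */
    static inline void
    example_sa_options_init(struct example_sa_options *opts)
    {
            memset(opts, 0, sizeof(*opts));
    }

    #ifdef RTE_NEXT_ABI
    /* Option (b): ABI-breaking additions are compiled in only on request,
     * e.g. a new structure member or a widened field would be guarded here.
     */
    #endif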
* [PATCH V9] ethdev: fix one address occupies two entries in MAC addrs
@ 2023-05-19 3:00 4% ` Huisong Li
2023-05-19 9:31 3% ` [PATCH V10] " Huisong Li
2 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-05-19 3:00 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, bruce.richardson, andrew.rybchenko,
liudongdong3, huangdaode, fengchengwen, lihuisong
The dev->data->mac_addrs[0] will be changed to a new MAC address when
applications modify the default MAC address by .mac_addr_set(). However,
if the new default one has been added as a non-default MAC address by
.mac_addr_add(), the .mac_addr_set() didn't check this address.
As a result, this MAC address occupies two entries in the list. Like:
add(MAC1)
add(MAC2)
add(MAC3)
add(MAC4)
set_default(MAC3)
default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.
But .mac_addr_set() cannot remove it implicitly, as that would shrink the
MAC address list.
So this patch adds a check on whether the new default address was already
in the list and, if so, requires the user to remove it first.
In addition, this patch documents the position of the default MAC address
and the uniqueness of addresses in the list.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v9: request user to remove the address instead of doing it implicitly in
.mac_addr_set() API.
v8: fix some comments.
v7: add announcement in the release notes and document this behavior.
v6: fix commit log and some code comments.
v5:
- merge the second patch into the first patch.
- add error log when rollback failed.
v4:
- fix broken in the patchwork
v3:
- first explicitly remove the non-default MAC, then set default one.
- document default and non-default MAC address
v2:
- fixed commit log.
---
doc/guides/rel_notes/release_23_07.rst | 5 +++++
lib/ethdev/ethdev_driver.h | 6 +++++-
lib/ethdev/rte_ethdev.c | 10 ++++++++++
lib/ethdev/rte_ethdev.h | 3 +++
4 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 4ffef85d74..7c624d8315 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -96,6 +96,11 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+ * ethdev: ensured all entries in the MAC address list are unique.
+ When setting a default MAC address with the function
+ ``rte_eth_dev_default_mac_addr_set``,
+ the default one needs to be removed by the user if it was already in
+ the list.
ABI Changes
-----------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..367c0c4878 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -117,7 +117,11 @@ struct rte_eth_dev_data {
uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
- /** Device Ethernet link address. @see rte_eth_dev_release_port() */
+ /**
+ * Device Ethernet link addresses.
+ * All entries are unique.
+ * The first entry (index zero) is the default address.
+ */
struct rte_ether_addr *mac_addrs;
/** Bitmap associating MAC addresses to pools */
uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..d46e74504e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4898,6 +4898,7 @@ int
rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
{
struct rte_eth_dev *dev;
+ int index;
int ret;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -4916,6 +4917,15 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
if (*dev->dev_ops->mac_addr_set == NULL)
return -ENOTSUP;
+ /* Keep address unique in dev->data->mac_addrs[]. */
+ index = eth_dev_get_mac_addr_index(port_id, addr);
+ if (index > 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "New default address for port %u was already in the address list. Please remove it first.\n",
+ port_id);
+ return -EEXIST;
+ }
+
ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
if (ret < 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..09b2ff9e5e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4381,6 +4381,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
/**
* Set the default MAC address.
+ * It replaces the address at index 0 of the MAC address list.
+ * If the address was already in the MAC address list,
+ * please remove it first.
*
* @param port_id
* The port identifier of the Ethernet device.
--
2.33.0
^ permalink raw reply [relevance 4%]
* [PATCH v2 4/5] ethdev: add GENEVE TLV option modification support
@ 2023-05-18 17:40 3% ` Michael Baum
1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-18 17:40 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"
Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.
To get room for those 2 new fields, the "level" field is reduced to
"uint8_t", which is more than enough for an encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warnings caused by this API change.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++++-
doc/guides/prog_guide/rte_flow.rst | 23 ++++++++++++
doc/guides/rel_notes/release_23_07.rst | 3 ++
drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++------
lib/ethdev/rte_flow.h | 48 +++++++++++++++++++++++++-
5 files changed, 131 insertions(+), 13 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
"ipv6_proto",
"flex_item",
- "hash_result", NULL
+ "hash_result",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ NULL
};
static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+ .name = "dst_type_id",
+ .help = "destination field type ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+ .name = "dst_class",
+ .help = "destination field class ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ dst.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_OFFSET] = {
.name = "dst_offset",
.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+ .name = "src_type_id",
+ .help = "source field type ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+ .name = "src_class",
+ .help = "source field class ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ src.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
.name = "src_offset",
.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
For the tag array (in case of multiple tags are supported and present)
``level`` translates directly into the array index.
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
``flex_handle`` is used to specify the flex item pointer which is being
modified. ``flex_handle`` and ``level`` are mutually exclusive.
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
specify destination width as 8, destination offset as 16, and provide immediate
value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in
+a single action, aligned to 32 bits. For example, to modify 16 bits starting
+from offset 24, 2 different actions should be prepared. The first one includes
+``offset=24`` and ``width=8``, and the second one includes ``offset=32`` and
+``width=8``.
+The application should provide the data in immediate value memory only for the
+single DW, even though the offset is relative to the start of the first DW. For
+example, to replace the third byte of the second DW in GENEVE option data with
+value 0x85, the application should specify destination width as 8, destination
+offset as 48, and provide immediate value 0xXXXX85XX.
+
.. _table_rte_flow_action_modify_field:
.. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+-----------------+----------------------------------------------------------+
| ``level`` | encapsulation level of a packet field or tag array index |
+-----------------+----------------------------------------------------------+
+ | ``type`` | geneve option type |
+ +-----------------+----------------------------------------------------------+
+ | ``class_id`` | geneve option class ID |
+ +-----------------+----------------------------------------------------------+
| ``flex_handle`` | flex item handle of a packet field |
+-----------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* The ``level`` field in experimental structure
+ ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
ABI Changes
-----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"immediate value, pointer and hash result cannot be used as destination");
- if (mask_conf->dst.level != UINT32_MAX)
+ if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
"destination field mask and template are not equal");
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
- if (mask_conf->src.level != UINT32_MAX)
+ if (mask_conf->src.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = RTE_FLOW_FIELD_VLAN_ID,
- .level = 0xffffffff, .offset = 0xffffffff,
+ .level = 0xff, .offset = 0xffffffff,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
.operation = RTE_FLOW_MODIFY_SET,
.dst = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.src = {
.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
+ .level = UINT8_MAX,
.offset = UINT32_MAX,
},
.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */
RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
+ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
+ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
};
/**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
struct {
/** Encapsulation level or tag index or flex item handle. */
union {
- uint32_t level;
+ struct {
+ /**
+ * Packet encapsulation level containing
+ * the field to modify.
+ *
+ * - @p 0 requests the default behavior.
+ * Depending on the packet type, it
+ * can mean outermost, innermost or
+ * anything in between.
+ *
+ * It basically stands for the
+ * innermost encapsulation level
+ * modification can be performed on
+ * according to PMD and device
+ * capabilities.
+ *
+ * - @p 1 requests modification to be
+ * performed on the outermost packet
+ * encapsulation level.
+ *
+ * - @p 2 and subsequent values request
+ * modification to be performed on
+ * the specified inner packet
+ * encapsulation level, from
+ * outermost to innermost (lower to
+ * higher values).
+ *
+ * Values other than @p 0 are not
+ * necessarily supported.
+ */
+ uint8_t level;
+ /**
+ * Geneve option type. Relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class. Relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ rte_be16_t class_id;
+ };
struct rte_flow_item_flex_handle *flex_handle;
};
/** Number of bits to skip from a field. */
--
2.25.1
^ permalink raw reply [relevance 3%]
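A sketch (not part of the patch above) of how the extended
rte_flow_action_modify_data could be filled to overwrite the third byte of the
second data DW of a GENEVE option, matching the offset=48/width=8 example in the
documentation above. The option class and type values are hypothetical:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Sketch: set byte 2 of the second data DW of a GENEVE option
     * identified by (class 0x0102, type 5), both values hypothetical.
     */
    static const struct rte_flow_action_modify_field geneve_opt_mod = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
                    .level = 1,                     /* outermost encapsulation */
                    .type = 5,                      /* GENEVE option type */
                    .class_id = RTE_BE16(0x0102),   /* GENEVE option class */
                    .offset = 48,                   /* bits from start of option data */
            },
            .src = {
                    .field = RTE_FLOW_FIELD_VALUE,
                    /* Only the targeted DW is provided: 0xXXXX85XX. */
                    .value = { 0x00, 0x00, 0x85, 0x00 },
            },
            .width = 8,
    };

    static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &geneve_opt_mod },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };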
* [PATCH v2 01/19] mbuf: replace term sanity check
@ 2023-05-18 16:45 2% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-18 16:45 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Olivier Matz, Steven Webster, Matt Peters,
Andrew Rybchenko
Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
to match the similar macro RTE_VERIFY() in rte_debug.h
The term sanity check is on the Tier 2 list of words
that should be replaced.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test/test_mbuf.c | 30 ++++++-------
doc/guides/prog_guide/mbuf_lib.rst | 4 +-
drivers/net/avp/avp_ethdev.c | 18 ++++----
drivers/net/sfc/sfc_ef100_rx.c | 6 +--
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 4 +-
drivers/net/sfc/sfc_rx.c | 2 +-
examples/ipv4_multicast/main.c | 2 +-
lib/mbuf/rte_mbuf.c | 23 ++++++----
lib/mbuf/rte_mbuf.h | 71 ++++++++++++++++--------------
lib/mbuf/version.map | 1 +
11 files changed, 88 insertions(+), 77 deletions(-)
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 8d8d3b9386ce..c2716dc4e5fe 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -261,8 +261,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("Buffer should be continuous");
memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
- rte_mbuf_sanity_check(m, 1);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 1);
+ rte_mbuf_verify(m, 0);
rte_pktmbuf_dump(stdout, m, 0);
/* this prepend should fail */
@@ -1161,7 +1161,7 @@ test_refcnt_mbuf(void)
#ifdef RTE_EXEC_ENV_WINDOWS
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
RTE_SET_USED(pktmbuf_pool);
return TEST_SKIPPED;
@@ -1188,7 +1188,7 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
/* No need to generate a coredump when panicking. */
rl.rlim_cur = rl.rlim_max = 0;
setrlimit(RLIMIT_CORE, &rl);
- rte_mbuf_sanity_check(buf, 1); /* should panic */
+ rte_mbuf_verify(buf, 1); /* should panic */
exit(0); /* return normally if it doesn't panic */
} else if (pid < 0) {
printf("Fork Failed\n");
@@ -1202,12 +1202,12 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
}
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
struct rte_mbuf *buf;
struct rte_mbuf badbuf;
- printf("Checking rte_mbuf_sanity_check for failure conditions\n");
+ printf("Checking rte_mbuf_verify for failure conditions\n");
/* get a good mbuf to use to make copies */
buf = rte_pktmbuf_alloc(pktmbuf_pool);
@@ -1729,7 +1729,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
m->ol_flags = ol_flags;
m->tso_segsz = segsize;
ret = rte_validate_tx_offload(m);
@@ -1936,7 +1936,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
if (data == NULL)
@@ -1985,7 +1985,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool *pktmbuf_pool)
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
/* prepend an ethernet header */
hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
@@ -2130,7 +2130,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(pkt_seg, 0);
+ rte_mbuf_verify(pkt_seg, 0);
/* Add header only for the first segment */
if (test_data->flags == MBUF_HEADER && seg == 0) {
hdr_len = sizeof(struct rte_ether_hdr);
@@ -2342,7 +2342,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
ext_buf_addr = rte_malloc("External buffer", buf_len,
RTE_CACHE_LINE_SIZE);
@@ -2506,8 +2506,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool *std_pool)
GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
__func__);
- if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
- GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
+ if (test_failing_mbuf_verify(pinned_pool) < 0)
+ GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
" failed\n", __func__);
if (test_mbuf_linearize_check(pinned_pool) < 0)
@@ -2881,8 +2881,8 @@ test_mbuf(void)
goto err;
}
- if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
- printf("test_failing_mbuf_sanity_check() failed\n");
+ if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
+ printf("test_failing_mbuf_verify() failed\n");
goto err;
}
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 049357c75563..0accb51a98c7 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -266,8 +266,8 @@ can be found in several of the sample applications, for example, the IPv4 Multic
Debug
-----
-In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption,
-bad type, and so on).
+In debug mode, the functions of the mbuf library perform consistency checks
+before any operation (such as buffer corruption, bad type, and so on).
Use Cases
---------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b2a08f563542..b402c7a2ad16 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1231,7 +1231,7 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
static inline void
-__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+__avp_dev_buffer_check(struct avp_dev *avp, struct rte_avp_desc *buf)
{
struct rte_avp_desc *first_buf;
struct rte_avp_desc *pkt_buf;
@@ -1272,12 +1272,12 @@ __avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
first_buf->pkt_len, pkt_len);
}
-#define avp_dev_buffer_sanity_check(a, b) \
- __avp_dev_buffer_sanity_check((a), (b))
+#define avp_dev_buffer_check(a, b) \
+ __avp_dev_buffer_check((a), (b))
#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
-#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+#define avp_dev_buffer_check(a, b) do {} while (0)
#endif
@@ -1302,7 +1302,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
void *pkt_data;
unsigned int i;
- avp_dev_buffer_sanity_check(avp, buf);
+ avp_dev_buffer_check(avp, buf);
/* setup the first source buffer */
pkt_buf = avp_dev_translate_buffer(avp, buf);
@@ -1370,7 +1370,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
rte_pktmbuf_pkt_len(m) = total_length;
m->vlan_tci = vlan_tci;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m;
}
@@ -1614,7 +1614,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
char *pkt_data;
unsigned int i;
- __rte_mbuf_sanity_check(mbuf, 1);
+ __rte_mbuf_verify(mbuf, 1);
m = mbuf;
src_offset = 0;
@@ -1680,7 +1680,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
first_buf->vlan_tci = mbuf->vlan_tci;
}
- avp_dev_buffer_sanity_check(avp, buffers[0]);
+ avp_dev_buffer_check(avp, buffers[0]);
return total_length;
}
@@ -1798,7 +1798,7 @@ avp_xmit_scattered_pkts(void *tx_queue,
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
for (i = 0; i < nb_pkts; i++)
- avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+ avp_dev_buffer_check(avp, tx_bufs[i]);
#endif
/* send the packets */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 16cd8524d32f..fe8920b12590 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -178,7 +178,7 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
struct sfc_ef100_rx_sw_desc *rxd;
rte_iova_t dma_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
dma_addr = rte_mbuf_data_iova_default(m);
if (rxq->flags & SFC_EF100_RXQ_NIC_DMA_MAP) {
@@ -541,7 +541,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
rxq->ready_pkts--;
pkt = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(pkt);
+ __rte_mbuf_raw_validate(pkt);
RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
sizeof(rxq->rearm_data));
@@ -565,7 +565,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
struct rte_mbuf *seg;
seg = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(seg);
+ __rte_mbuf_raw_validate(seg);
seg->data_off = RTE_PKTMBUF_HEADROOM;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 78bd430363b1..de80be462a0f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -125,7 +125,7 @@ sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
return m;
}
@@ -136,7 +136,7 @@ sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
return m;
}
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7be224c9c412..f6c2345d2b74 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -148,7 +148,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
struct sfc_ef10_rx_sw_desc *rxd;
rte_iova_t phys_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
SFC_ASSERT((id & ~ptr_mask) == 0);
rxd = &rxq->sw_ring[id];
@@ -297,7 +297,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
rxd = &rxq->sw_ring[pending++ & ptr_mask];
m = rxd->mbuf;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
m->data_off = RTE_PKTMBUF_HEADROOM;
rte_pktmbuf_data_len(m) = seg_len;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 5ea98187c3b4..d9f99a9d583d 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -120,7 +120,7 @@ sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
++i, id = (id + 1) & rxq->ptr_mask) {
m = objs[i];
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
rxd = &rxq->sw_desc[id];
rxd->mbuf = m;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 6d0a8501eff5..f39658f4e249 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -258,7 +258,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
hdr->nb_segs = pkt->nb_segs + 1;
- __rte_mbuf_sanity_check(hdr, 1);
+ __rte_mbuf_verify(hdr, 1);
return hdr;
}
/* >8 End of mcast_out_kt. */
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 686e797c80c4..56fb6c846df6 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -363,9 +363,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
return mp;
}
-/* do some sanity checks on a mbuf: panic if it fails */
+/* do some checks on a mbuf: panic if it fails */
void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
{
const char *reason;
@@ -373,6 +373,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
rte_panic("%s\n", reason);
}
+/* For ABI compatibility, to be removed in the next release */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+ rte_mbuf_verify(m, is_header);
+}
+
int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason)
{
@@ -492,7 +499,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
if (unlikely(m == NULL))
continue;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
m_next = m->next;
@@ -542,7 +549,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
return NULL;
}
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -592,7 +599,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
struct rte_mbuf *mc, *m_last, **prev;
/* garbage in check */
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
/* check for request to copy at offset past end of mbuf */
if (unlikely(off >= m->pkt_len))
@@ -656,7 +663,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
}
/* garbage out check */
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -667,7 +674,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
unsigned int len;
unsigned int nb_segs;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m, rte_mbuf_iova_get(m),
m->buf_len);
@@ -685,7 +692,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
nb_segs = m->nb_segs;
while (m && nb_segs != 0) {
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
fprintf(f, " segment at %p, data=%p, len=%u, off=%u, refcnt=%u\n",
m, rte_pktmbuf_mtod(m, void *),
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459b1cc6..f3b62009accf 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -339,13 +339,13 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
#ifdef RTE_LIBRTE_MBUF_DEBUG
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+/** do mbuf consistency checks in debug mode */
+#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
#else /* RTE_LIBRTE_MBUF_DEBUG */
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+/** ignore mbuf checks if not in debug mode */
+#define __rte_mbuf_verify(m, is_h) do { } while (0)
#endif /* RTE_LIBRTE_MBUF_DEBUG */
@@ -514,10 +514,9 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
/**
- * Sanity checks on an mbuf.
+ * Check that the mbuf is valid and panic if corrupted.
*
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
+ * Acts as an assertion that the mbuf is consistent. If not, it calls rte_panic().
*
* @param m
* The mbuf to be checked.
@@ -526,13 +525,17 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
* of a packet (in this case, some fields like nb_segs are not checked)
*/
void
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
+
+/* Older deprecated name for rte_mbuf_verify() */
+void __rte_deprecated
rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
/**
- * Sanity checks on a mbuf.
+ * Do consistency checks on a mbuf.
*
- * Almost like rte_mbuf_sanity_check(), but this function gives the reason
- * if corruption is detected rather than panic.
+ * Check the consistency of the given mbuf and, if it is not valid,
+ * return the reason.
*
* @param m
* The mbuf to be checked.
@@ -551,7 +554,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason);
/**
- * Sanity checks on a reinitialized mbuf in debug mode.
+ * Do checks on a reinitialized mbuf in debug mode.
*
* Check the consistency of the given reinitialized mbuf.
* The function will cause a panic if corruption is detected.
@@ -563,16 +566,16 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
* The mbuf to be checked.
*/
static __rte_always_inline void
-__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
+__rte_mbuf_raw_validate(__rte_unused const struct rte_mbuf *m)
{
RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
RTE_ASSERT(m->next == NULL);
RTE_ASSERT(m->nb_segs == 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
}
/** For backwards compatibility. */
-#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
+#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_validate(m)
/**
* Allocate an uninitialized mbuf from mempool *mp*.
@@ -599,7 +602,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
if (rte_mempool_get(mp, (void **)&m) < 0)
return NULL;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
return m;
}
@@ -622,7 +625,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
{
RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
(!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_validate(m);
rte_mempool_put(m->pool, m);
}
@@ -886,7 +889,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
rte_pktmbuf_reset_headroom(m);
m->data_len = 0;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
}
/**
@@ -942,22 +945,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
switch (count % 4) {
case 0:
while (idx != count) {
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_validate(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 3:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_validate(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 2:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_validate(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 1:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_validate(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
@@ -1185,8 +1188,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->pkt_len = mi->data_len;
mi->nb_segs = 1;
- __rte_mbuf_sanity_check(mi, 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(mi, 1);
+ __rte_mbuf_verify(m, 0);
}
/**
@@ -1341,7 +1344,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
static __rte_always_inline struct rte_mbuf *
rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
if (likely(rte_mbuf_refcnt_read(m) == 1)) {
@@ -1412,7 +1415,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
struct rte_mbuf *m_next;
if (m != NULL)
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m != NULL) {
m_next = m->next;
@@ -1493,7 +1496,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
*/
static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
rte_mbuf_refcnt_update(m, v);
@@ -1510,7 +1513,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
*/
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return m->data_off;
}
@@ -1524,7 +1527,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
*/
static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
m->data_len);
}
@@ -1539,7 +1542,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
*/
static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m->next != NULL)
m = m->next;
return m;
@@ -1583,7 +1586,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
@@ -1618,7 +1621,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
void *tail;
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -1646,7 +1649,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
*/
static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > m->data_len))
return NULL;
@@ -1678,7 +1681,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
{
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > m_last->data_len))
@@ -1700,7 +1703,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
*/
static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m->nb_segs == 1;
}
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index ed486ed14ec7..f134946f3d8d 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -31,6 +31,7 @@ DPDK_23 {
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
+ rte_mbuf_verify;
rte_pktmbuf_clone;
rte_pktmbuf_copy;
rte_pktmbuf_dump;
--
2.39.2
^ permalink raw reply [relevance 2%]
* Re: [PATCH v4] net/bonding: replace master/slave to main/member
2023-05-18 8:44 1% ` [PATCH v4] " Chaoyong He
@ 2023-05-18 15:39 3% ` Stephen Hemminger
2023-06-02 15:05 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-05-18 15:39 UTC (permalink / raw)
To: Chaoyong He; +Cc: dev, oss-drivers, niklas.soderlund, Long Wu, James Hershaw
On Thu, 18 May 2023 16:44:58 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:
> From: Long Wu <long.wu@corigine.com>
>
> This patch replaces the usage of the word 'master/slave' with the more
> appropriate word 'main/member' in the bonding PMD as well as in its docs
> and examples. Also the test app and testpmd were modified to use the
> new wording.
>
> The bonding PMD's public API was modified according to the changes
> in wording:
> rte_eth_bond_8023ad_slave_info is now called
> rte_eth_bond_8023ad_member_info,
> rte_eth_bond_active_slaves_get is now called
> rte_eth_bond_active_members_get,
> rte_eth_bond_slave_add is now called
> rte_eth_bond_member_add,
> rte_eth_bond_slave_remove is now called
> rte_eth_bond_member_remove,
> rte_eth_bond_slaves_get is now called
> rte_eth_bond_members_get.
>
> Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
> RTE_ETH_DEV_BONDED_MEMBER.
>
> Mark the old visible APIs as deprecated and remove
> them from the ABI.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: James Hershaw <james.hershaw@corigine.com>
Since this will be an ABI change, it will have to wait for the 23.11 release.
Could you make a deprecation notice now, to foreshadow that change?
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [relevance 3%]
* [PATCH v4] net/bonding: replace master/slave to main/member
2023-05-18 7:01 1% ` [PATCH v3] " Chaoyong He
@ 2023-05-18 8:44 1% ` Chaoyong He
2023-05-18 15:39 3% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18 8:44 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw
From: Long Wu <long.wu@corigine.com>
This patch replaces the usage of the word 'master/slave' with the more
appropriate word 'main/member' in the bonding PMD as well as in its docs
and examples. Also the test app and testpmd were modified to use the
new wording.
The bonding PMD's public API was modified according to the changes
in wording:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.
Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.
Mark the old visible APIs as deprecated and remove
them from the ABI.
Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
v2:
* Modify related doc.
* Add 'RTE_DEPRECATED' to related APIs.
v3:
* Fix the check warning about 'CamelCase'.
v4:
* Fix the doc compile problem.
---
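[Not part of the patch; a minimal sketch of the renamed calls, assuming a
bonded device "bond_port" already created and a configured member port
"member_port". The old *_slave_* names remain available but are marked
deprecated by this patch.]

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical snippet, for illustration only. */
static int
example_attach(uint16_t bond_port, uint16_t member_port)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int n;

	/* Was rte_eth_bond_slave_add() before this patch. */
	if (rte_eth_bond_member_add(bond_port, member_port) != 0)
		return -1;

	/* Was rte_eth_bond_slaves_get() before this patch. */
	n = rte_eth_bond_members_get(bond_port, members, RTE_MAX_ETHPORTS);

	return n; /* number of member ports now attached */
}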
app/test-pmd/testpmd.c | 112 +-
app/test-pmd/testpmd.h | 8 +-
app/test/test_link_bonding.c | 2792 +++++++++--------
app/test/test_link_bonding_mode4.c | 588 ++--
| 166 +-
doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
doc/guides/nics/bnxt.rst | 4 +-
doc/guides/prog_guide/img/bond-mode-1.svg | 2 +-
.../link_bonding_poll_mode_drv_lib.rst | 230 +-
drivers/net/bonding/bonding_testpmd.c | 178 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
drivers/net/bonding/eth_bond_private.h | 108 +-
drivers/net/bonding/rte_eth_bond.h | 126 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 372 +--
drivers/net/bonding/rte_eth_bond_8023ad.h | 75 +-
drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
drivers/net/bonding/rte_eth_bond_api.c | 474 +--
drivers/net/bonding/rte_eth_bond_args.c | 32 +-
drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 1384 ++++----
drivers/net/bonding/version.map | 15 +-
examples/bond/main.c | 40 +-
lib/ethdev/rte_ethdev.h | 9 +-
24 files changed, 3509 insertions(+), 3388 deletions(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
{
#ifdef RTE_NET_BOND
- portid_t slave_pids[RTE_MAX_ETHPORTS];
+ portid_t member_pids[RTE_MAX_ETHPORTS];
struct rte_port *port;
- int num_slaves;
- portid_t slave_pid;
+ int num_members;
+ portid_t member_pid;
int i;
- num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+ num_members = rte_eth_bond_members_get(bond_pid, member_pids,
RTE_MAX_ETHPORTS);
- if (num_slaves < 0) {
- fprintf(stderr, "Failed to get slave list for port = %u\n",
+ if (num_members < 0) {
+ fprintf(stderr, "Failed to get member list for port = %u\n",
bond_pid);
- return num_slaves;
+ return num_members;
}
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- port = &ports[slave_pid];
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ port = &ports[member_pid];
port->port_status =
is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Starting a bonded port also starts all slaves under the bonded
+ * Starting a bonded port also starts all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, false);
+ return change_bonding_member_port_status(port_id, false);
}
return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Stopping a bonded port also stops all slaves under the bonded
+ * Stopping a bonded port also stops all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, true);
+ return change_bonding_member_port_status(port_id, true);
}
return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
port = &ports[pi];
/* Check if there is a port which is not started */
if ((port->port_status != RTE_PORT_STARTED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
}
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
struct rte_port *port = &ports[port_id];
if ((port->port_status != RTE_PORT_STOPPED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
return 1;
}
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
/*
* Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when a new member device is added. So adding a member device needs
* to update the port configurations of bonding device.
*/
static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
continue;
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
}
static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
{
struct rte_port *port;
- portid_t slave_pid;
+ portid_t member_pid;
uint16_t i;
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- if (port_is_started(slave_pid) == 1) {
- if (rte_eth_dev_stop(slave_pid) != 0)
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ if (port_is_started(member_pid) == 1) {
+ if (rte_eth_dev_stop(member_pid) != 0)
fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
- slave_pid);
+ member_pid);
- port = &ports[slave_pid];
+ port = &ports[member_pid];
port->port_status = RTE_PORT_STOPPED;
}
- clear_port_slave_flag(slave_pid);
+ clear_port_member_flag(member_pid);
- /* Close slave device when testpmd quit or is killed. */
+ /* Close member device when testpmd quit or is killed. */
if (cl_quit == 1 || f_quit == 1)
- rte_eth_dev_close(slave_pid);
+ rte_eth_dev_close(member_pid);
}
}
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
{
portid_t pi;
struct rte_port *port;
- portid_t slave_pids[RTE_MAX_ETHPORTS];
- int num_slaves = 0;
+ portid_t member_pids[RTE_MAX_ETHPORTS];
+ int num_members = 0;
if (port_id_is_invalid(pid, ENABLED_WARN))
return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
flush_port_owned_resources(pi);
#ifdef RTE_NET_BOND
if (port->bond_flag == 1)
- num_slaves = rte_eth_bond_slaves_get(pi,
- slave_pids, RTE_MAX_ETHPORTS);
+ num_members = rte_eth_bond_members_get(pi,
+ member_pids, RTE_MAX_ETHPORTS);
#endif
rte_eth_dev_close(pi);
/*
- * If this port is bonded device, all slaves under the
+ * If this port is bonded device, all members under the
* device need to be removed or closed.
*/
- if (port->bond_flag == 1 && num_slaves > 0)
- clear_bonding_slave_device(slave_pids,
- num_slaves);
+ if (port->bond_flag == 1 && num_members > 0)
+ clear_bonding_member_device(member_pids,
+ num_members);
}
free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
}
}
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 1;
+ port = &ports[member_pid];
+ port->member_flag = 1;
}
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 0;
+ port = &ports[member_pid];
+ port->member_flag = 0;
}
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
{
struct rte_port *port;
struct rte_eth_dev_info dev_info;
int ret;
- port = &ports[slave_pid];
- ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+ port = &ports[member_pid];
+ ret = eth_dev_info_get_print_err(member_pid, &dev_info);
if (ret != 0) {
TESTPMD_LOG(ERR,
"Failed to get device info for port id %d,"
- "cannot determine if the port is a bonded slave",
- slave_pid);
+ "cannot determine if the port is a bonded member",
+ member_pid);
return 0;
}
- if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+ if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
return 1;
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
queueid_t queue_nb; /**< nb. of queues for flow rules */
uint32_t queue_sz; /**< size of a queue for flow rules */
- uint8_t slave_flag : 1, /**< bonding slave port */
+ uint8_t member_flag : 1, /**< bonding member port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before forward */
update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
void dev_set_link_up(portid_t pid);
void dev_set_link_down(portid_t pid);
void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
#define INVALID_BONDING_MODE (-1)
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
struct link_bonding_unittest_params {
int16_t bonded_port_id;
- int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
- uint16_t bonded_slave_count;
+ int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+ uint16_t bonded_member_count;
uint8_t bonding_mode;
uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
struct rte_mempool *mbuf_pool;
- struct rte_ether_addr *default_slave_mac;
+ struct rte_ether_addr *default_member_mac;
struct rte_ether_addr *default_bonded_mac;
/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
static struct link_bonding_unittest_params default_params = {
.bonded_port_id = -1,
- .slave_port_ids = { -1 },
- .bonded_slave_count = 0,
+ .member_port_ids = { -1 },
+ .bonded_member_count = 0,
.bonding_mode = BONDING_MODE_ROUND_ROBIN,
.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
.mbuf_pool = NULL,
- .default_slave_mac = (struct rte_ether_addr *)slave_mac,
+ .default_member_mac = (struct rte_ether_addr *)member_mac,
.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
return 0;
}
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
test_setup(void)
{
int i, nb_mbuf_per_pool;
- struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+ struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
/* Allocate ethernet packet header with space for VLAN header */
if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
}
/* Create / Initialize virtual eth devs */
- if (!slaves_initialized) {
+ if (!members_initialized) {
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
@@ -243,16 +243,16 @@ test_setup(void)
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
- test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+ test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+ TEST_ASSERT(test_params->member_port_ids[i] >= 0,
"Failed to create virtual virtual ethdev %s", pmd_name);
TEST_ASSERT_SUCCESS(configure_ethdev(
- test_params->slave_port_ids[i], 1, 0),
+ test_params->member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s", pmd_name);
}
- slaves_initialized = 1;
+ members_initialized = 1;
}
return 0;
@@ -261,9 +261,9 @@ test_setup(void)
static int
test_create_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
/* Don't try to recreate bonded device if re-running test suite*/
if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
test_params->bonded_port_id, test_params->bonding_mode);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of members %d is great than expected %d.",
+ current_member_count, 0);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members %d is great than expected %d.",
+ current_member_count, 0);
return 0;
}
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
}
static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave (%d) to bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member (%d) to bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
- "Number of slaves (%d) is greater than expected (%d).",
- current_slave_count, test_params->bonded_slave_count + 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+ "Number of members (%d) is greater than expected (%d).",
+ current_member_count, test_params->bonded_member_count + 1);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d).\n",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not as expected (%d).\n",
+ current_member_count, 0);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
return 0;
}
static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
struct rte_ether_addr read_mac_addr, *mac_addr;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count-1]),
- "Failed to remove slave %d from bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count-1]),
+ "Failed to remove member %d from bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
- "Number of slaves (%d) is great than expected (%d).\n",
- current_slave_count, test_params->bonded_slave_count - 1);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+ "Number of members (%d) is great than expected (%d).\n",
+ current_member_count, test_params->bonded_member_count - 1);
- mac_addr = (struct rte_ether_addr *)slave_mac;
+ mac_addr = (struct rte_ether_addr *)member_mac;
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
- test_params->bonded_slave_count-1;
+ test_params->bonded_member_count-1;
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ test_params->member_port_ids[test_params->bonded_member_count-1],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
0);
- test_params->bonded_slave_count--;
+ test_params->bonded_member_count--;
return 0;
}
static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
- test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+ test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
static int bonded_id = 2;
static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
{
- int port_id, current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int port_id, current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- test_add_slave_to_bonded_device();
+ test_add_member_to_bonded_device();
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 1,
- "Number of slaves (%d) is not that expected (%d).",
- current_slave_count, 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 1,
+ "Number of members (%d) is not that expected (%d).",
+ current_member_count, 1);
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
rte_socket_id());
TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
- TEST_ASSERT(rte_eth_bond_slave_add(port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+ TEST_ASSERT(rte_eth_bond_member_add(port_id,
+ test_params->member_port_ids[test_params->bonded_member_count - 1])
< 0,
- "Added slave (%d) to bonded port (%d) unexpectedly.",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ "Added member (%d) to bonded port (%d) unexpectedly.",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
port_id);
- return test_remove_slave_from_bonded_device();
+ return test_remove_member_from_bonded_device();
}
static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
/* Invalid port id */
- current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+ current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- /* Invalid slaves pointer */
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+ /* Invalid members pointer */
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
+ current_member_count = rte_eth_bond_active_members_get(
test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
/* non bonded device*/
- current_slave_count = rte_eth_bond_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
{
int i;
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
{
int i;
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
}
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
{
struct rte_eth_link link_status;
- int current_slave_count, current_bonding_mode, primary_port;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count, current_bonding_mode, primary_port;
+ uint16_t members[RTE_MAX_ETHPORTS];
int retval;
- /* Add slave to bonded device*/
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ /* Add member to bonded device*/
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- /* Change link status of virtual pmd so it will be added to the active
- * slave list of the bonded device*/
+ /*
+ * Change link status of virtual pmd so it will be added to the active
+ * member list of the bonded device.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+ test_params->member_port_ids[test_params->bonded_member_count-1], 1);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
current_bonding_mode, test_params->bonding_mode);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port (%d) is not expected value (%d).",
- primary_port, test_params->slave_port_ids[0]);
+ primary_port, test_params->member_port_ids[0]);
retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
static int
test_stop_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_eth_link link_status;
int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
"Bonded port (%d) status (%d) is not expected value (%d).",
test_params->bonded_port_id, link_status.link_status, 0);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, 0);
return 0;
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- /* Clean up and remove slaves from bonded device */
+ /* Clean up and remove members from bonded device */
free_virtualpmd_tx_queue();
- while (test_params->bonded_slave_count > 0)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "test_remove_slave_from_bonded_device failed");
+ while (test_params->bonded_member_count > 0)
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "test_remove_member_from_bonded_device failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+ TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
bonding_modes[i]),
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+ bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
TEST_ASSERT(bonding_mode < 0,
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
{
int i, j, retval;
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr *expected_mac_addr;
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
/* Invalid port ID */
TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
- test_params->slave_port_ids[i]),
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
- /* Set slave as primary
- * Verify slave it is now primary slave
- * Verify that MAC address of bonded device is that of primary slave
- * Verify that MAC address of all bonded slaves are that of primary slave
+ /* Set member as primary
+ * Verify member it is now primary member
+ * Verify that MAC address of bonded device is that of primary member
+ * Verify that MAC address of all bonded members are that of primary member
*/
for (i = 0; i < 4; i++) {
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(retval >= 0,
"Failed to read primary port from bonded port (%d)\n",
test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+ TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
"Bonded port (%d) primary port (%d) not expected value (%d)\n",
test_params->bonded_port_id, retval,
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
/* stop/start bonded eth dev to apply new MAC */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
"Failed to start bonded port %d",
test_params->bonded_port_id);
- expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+ expected_mac_addr = (struct rte_ether_addr *)&member_mac;
expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Check primary slave MAC */
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Check primary member MAC */
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (j = 0; j < 4; j++) {
if (j != i) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+ test_params->member_port_ids[j],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[j]);
+ test_params->member_port_ids[j]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary "
+ "member port mac address not set to that of primary "
"port");
}
}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
"read primary port from expectedly");
- /* Test with slave port */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+ /* Test with member port */
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
"read primary port from expectedly\n");
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to stop and remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to stop and remove members from bonded device");
- /* No slaves */
+ /* No members */
TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
"read primary port from expectedly\n");
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
/* Non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
- test_params->slave_port_ids[0], mac_addr),
+ test_params->member_port_ids[0], mac_addr),
"Expected call to failed as invalid port specified.");
/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
"Failed to set MAC address on bonded port (%d)",
test_params->bonded_port_id);
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++) {
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.\n");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++) {
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.\n");
}
/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (i = 0; i < 4; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary port");
+ "member port mac address not set to that of primary port");
}
/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
test_params->bonded_port_id);
TEST_ASSERT_FAIL(
- rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+ rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
"Reset MAC address on bonded port (%d) unexpectedly",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[0]);
- /* test resetting mac address on bonded device with no slaves */
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to remove slaves and stop bonded device");
+ /* test resetting mac address on bonded device with no members */
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to remove members and stop bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
return 0;
}
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
static int
test_set_bonded_port_initialization_mac_assignment(void)
{
- int i, slave_count;
+ int i, member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
static int bonded_port_id = -1;
- static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+ static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
- struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+ struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
/* Initialize default values for MAC addresses */
- memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
- memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+ memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+ memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
/*
- * 1. a - Create / configure bonded / slave ethdevs
+ * 1. a - Create / configure bonded / member ethdevs
*/
if (bonded_port_id == -1) {
bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
"Failed to configure bonded ethdev");
}
- if (!mac_slaves_initialized) {
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ if (!mac_members_initialized) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
i + 100;
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
- "eth_slave_%d", i);
+ "eth_member_%d", i);
- slave_port_ids[i] = virtual_ethdev_create(pmd_name,
- &slave_mac_addr, rte_socket_id(), 1);
+ member_port_ids[i] = virtual_ethdev_create(pmd_name,
+ &member_mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(slave_port_ids[i] >= 0,
- "Failed to create slave ethdev %s",
+ TEST_ASSERT(member_port_ids[i] >= 0,
+ "Failed to create member ethdev %s",
pmd_name);
- TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+ TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s",
pmd_name);
}
- mac_slaves_initialized = 1;
+ mac_members_initialized = 1;
}
/*
- * 2. Add slave ethdevs to bonded device
+ * 2. Add member ethdevs to bonded device
*/
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
- slave_port_ids[i]),
- "Failed to add slave (%d) to bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to add member (%d) to bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
- "Number of slaves (%d) is not as expected (%d)",
- slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+ "Number of members (%d) is not as expected (%d)",
+ member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
/* 4. a - Start bonded ethdev
- * b - Enable slave devices
- * c - Verify bonded/slaves ethdev MAC addresses
+ * b - Enable member devices
+ * c - Verify bonded/members ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
"Failed to start bonded pmd eth device %d.",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- slave_port_ids[i], 1);
+ member_port_ids[i], 1);
}
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
+ member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 7. a - Change primary port
* b - Stop / Start bonded port
- * d - Verify slave ethdev MAC addresses
+ * d - Verify member ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
- slave_port_ids[2]),
+ member_port_ids[2]),
"failed to set primary port on bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
+ member_port_ids[2]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 6. a - Stop bonded ethdev
- * b - remove slave ethdevs
- * c - Verify slave ethdevs MACs are restored
+ * b - remove member ethdevs
+ * c - Verify member ethdevs MACs are restored
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
"Failed to stop bonded port %u",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
- slave_port_ids[i]),
- "Failed to remove slave %d from bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to remove member %d from bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of slaves (%d) is great than expected (%d).",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of members (%d) is great than expected (%d).",
+ member_count, 0);
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
return 0;
}
static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
- uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+ uint16_t number_of_members, uint8_t enable_member)
{
/* Configure bonded device */
TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
- "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
- number_of_slaves);
-
- /* Add slaves to bonded device */
- while (number_of_slaves > test_params->bonded_slave_count)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave (%d to bonding port (%d).",
- test_params->bonded_slave_count - 1,
+ "with (%d) members.", test_params->bonded_port_id, bonding_mode,
+ number_of_members);
+
+ /* Add members to bonded device */
+ while (number_of_members > test_params->bonded_member_count)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member (%d to bonding port (%d).",
+ test_params->bonded_member_count - 1,
test_params->bonded_port_id);
/* Set link bonding mode */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- if (enable_slave)
- enable_bonded_slaves();
+ if (enable_member)
+ enable_bonded_members();
return 0;
}
static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
{
int i;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
- "Failed to add slaves to bonded device");
+ "Failed to add members to bonded device");
- /* Enabled slave devices */
- for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+ /* Enable member devices */
+ for (i = 0; i < test_params->bonded_member_count + 1; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave to bonded port.\n");
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member to bonded port.\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count]);
+ test_params->member_port_ids[test_params->bonded_member_count]);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT 4
#define TEST_LSC_WAIT_TIMEOUT_US 500000
int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
static int
test_status_interrupt(void)
{
- int slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- /* initialized bonding device with T slaves */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonding device with TEST_STATUS_INTERRUPT_MEMBER_COUNT members */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 1,
- TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+ TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
- /* Bring all 4 slaves link status to down and test that we have received a
+ /* Bring all 4 members' link status down and test that we have received
* lsc interrupts */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
"Received a link status change interrupt unexpectedly");
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, 0);
- /* bring one slave port up so link status will change */
+ /* bring one member port up so link status will change */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- /* Verify that calling the same slave lsc interrupt doesn't cause another
+ /* Verify that calling the same member lsc interrupt doesn't cause another
* lsc interrupt from bonded device */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size <= MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size / test_params->bonded_slave_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ (uint64_t)burst_size / test_params->bonded_member_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
pkt_burst, burst_size), 0,
"tx burst return unexpected value");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
rte_pktmbuf_free(mbufs[i]);
}
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE (64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT (22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (1)
static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
{
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
int i, first_fail_idx, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0,
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
/* Copy references to packets which we expect not to be transmitted */
- first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
- TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+ first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ (TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+ TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
- (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+ (i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
}
- /* Set virtual slave to only fail transmission of
- * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+ /*
+ * Set virtual member to only fail transmission of
+ * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ (uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- int slave_expected_tx_count;
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ int member_expected_tx_count;
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
- slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
- test_params->bonded_slave_count;
+ member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+ test_params->bonded_member_count;
- if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
- slave_expected_tx_count = slave_expected_tx_count -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+ if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+ member_expected_tx_count = member_expected_tx_count -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)slave_expected_tx_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[i],
- (unsigned int)port_stats.opackets, slave_expected_tx_count);
+ (uint64_t)member_expected_tx_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.opackets, member_expected_tx_count);
}
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
- free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
{
struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
int i, j, burst_size = 25;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
- /* Verify bonded slave devices rx count */
- /* Verify slave ports tx stats */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ /* Verify member ports rx stats */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
- /* Reset bonded slaves stats */
- rte_eth_stats_reset(test_params->slave_port_ids[j]);
+ /* Reset bonded members stats */
+ rte_eth_stats_reset(test_params->member_port_ids[j]);
}
/* reset bonded device stats */
rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+ int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
int i, nb_rx;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
burst_size[i], "burst generation failed");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2],
(unsigned int)port_stats.ipackets, burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3],
(unsigned int)port_stats.ipackets, 0);
/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_2),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that all MACs are the same as first slave added to bonded dev */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Verify that all MACs are the same as first member added to bonded dev */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary"
+ "member port (%d) mac address has changed to that of primary"
" port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagate to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(
memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary"
- " port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary"
+ " port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
- sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
- " that of new primary port\n", test_params->slave_port_ids[i]);
+ sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+ " that of new primary port\n", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
int i, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
"Port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
{
struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
- struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
/* NULL all pointers in array to simplify cleanup */
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+ /* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
* in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves eth_devs link status to down */
+ /* Set 2 members eth_devs link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count,
- TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).\n",
- slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count,
+ TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).\n",
+ member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
burst_size = 20;
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not sent on members with link status down:
*
* 1. Generate test burst of traffic
* 2. Transmit burst on bonded eth_dev
* 3. Verify stats for bonded eth_dev (opackets = burst_size)
- * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
TEST_ASSERT_EQUAL(
generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[0], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[1], (int)port_stats.opackets, 0);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[2], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[3], (int)port_stats.opackets, 0);
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not received from members with link status down:
*
* 1. Generate test bursts of traffic
* 2. Add bursts on to virtual eth_devs
* 3. Rx burst on bonded eth_dev, expected (burst_ size *
- * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+ * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
* 4. Verify stats for bonded eth_dev
- * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 6. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
- for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size);
}
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
- (struct rte_ether_addr *)polling_slave_mac;
- char slave_name[RTE_ETH_NAME_MAX_LEN];
+ (struct rte_ether_addr *)polling_member_mac;
+ char member_name[RTE_ETH_NAME_MAX_LEN];
int i;
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
- /* Generate slave name / MAC address */
- snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+ /* Generate member name / MAC address */
+ snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Create slave devices with no ISR Support */
- if (polling_test_slaves[i] == -1) {
- polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+ /* Create member devices with no ISR Support */
+ if (polling_test_members[i] == -1) {
+ polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
rte_socket_id(), 0);
- TEST_ASSERT(polling_test_slaves[i] >= 0,
- "Failed to create virtual virtual ethdev %s\n", slave_name);
+ TEST_ASSERT(polling_test_members[i] >= 0,
+ "Failed to create virtual ethdev %s\n", member_name);
- /* Configure slave */
- TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
- "Failed to configure virtual ethdev %s(%d)", slave_name,
- polling_test_slaves[i]);
+ /* Configure member */
+ TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+ "Failed to configure virtual ethdev %s(%d)", member_name,
+ polling_test_members[i]);
}
- /* Add slave to bonded device */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to add slave %s(%d) to bonded device %d",
- slave_name, polling_test_slaves[i],
+ /* Add member to bonded device */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to add member %s(%d) to bonded device %d",
+ member_name, polling_test_members[i],
test_params->bonded_port_id);
}
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* link status change callback for first slave link up */
+ /* link status change callback for first member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+ virtual_ethdev_set_link_status(polling_test_members[0], 1);
TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
- /* no link status change callback for second slave link up */
+ /* no link status change callback for second member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+ virtual_ethdev_set_link_status(polling_test_members[1], 1);
TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
- /* link status change callback for both slave links down */
+ /* link status change callback for both member links down */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
- virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+ virtual_ethdev_set_link_status(polling_test_members[0], 0);
+ virtual_ethdev_set_link_status(polling_test_members[1], 0);
TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+ /* Clean up and remove members from bonded device */
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_SUCCESS(
- rte_eth_bond_slave_remove(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to remove slave %d from bonded port (%d)",
- polling_test_slaves[i], test_params->bonded_port_id);
+ rte_eth_bond_member_remove(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to remove member %d from bonded port (%d)",
+ polling_test_members[i], test_params->bonded_port_id);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
- if (test_params->slave_port_ids[i] == primary_port) {
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+ if (test_params->member_port_ids[i] == primary_port) {
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
} else {
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, 0);
}
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
pkts_burst, burst_size), 0, "Sending empty burst failed");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
static int
test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
int i, j, burst_size = 17;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
burst_size, "burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
"rte_eth_rx_burst failed");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)", test_params->slave_port_ids[i],
- (unsigned int)port_stats.ipackets, burst_size);
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.ipackets,
+ burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)\n", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected "
- "(%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected "
+ "(%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
} else {
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode enabled",
+ test_params->member_port_ids[i]);
}
}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that the bonded MAC is that of the first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify that the current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active member count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(
test_params->bonded_port_id, 0, &pkt_burst[0][0],
burst_size), burst_size, "rte_eth_tx_burst failed");
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected",
test_params->bonded_port_id);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
/** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
static int
test_balance_xmit_policy_configuration(void)
{
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
/* Invalid port id */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
/* Set xmit policy on non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
- test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
+ test_params->member_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
"Expected call to failed as invalid port specified.");
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
"Expected call to failed as invalid port specified.");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
static int
test_balance_l2_tx_burst(void)
{
- struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
- int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+ struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+ int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
uint16_t pktlen;
int i;
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
"failed to generate packet burst");
/* Send burst 1 on bonded port */
- for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
&pkts_burst[i][0], burst_size[i]),
burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
burst_size[0] + burst_size[1]);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
burst_size[1]);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, 0, pkts_burst_1,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
return balance_l34_tx_burst(0, 0, 0, 0, 1);
}
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 (40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2 (20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT (25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (0)
static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
- struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+ struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+ struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
- struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+ struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, first_tx_fail_idx, tx_count_1, tx_count_2;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0,
- TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
"Failed to generate test packet burst 1");
- first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+ first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
/* copy mbuf references for expected transmission failures */
- for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Failed to generate test packet burst 2");
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Transmit burst 1 */
tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
- TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Transmit burst 2 */
tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+ (uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ (TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1],
+ (uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
static int
test_balance_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+ int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
0, 0), burst_size[i],
"failed to generate packet burst");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
"Failed to initialise bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that the bonded MAC is that of the first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]),
+ test_params->member_port_ids[1]),
"Failed to set bonded port (%d) primary port to (%d)\n",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected\n",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected\n",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
"Failed to set balance xmit policy.");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count / active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- /* Send to sets of packet burst and verify that they are balanced across
- * slaves */
+ /*
+ * Send two sets of packet bursts and verify that they are balanced across
+ * members.
+ */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[2], (int)port_stats.opackets,
+ test_params->member_port_ids[2], (int)port_stats.opackets,
burst_size);
- /* verify that all packets get send on primary slave when no other slaves
+ /* verify that all packets get sent on the primary member when no other members
* are available */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 1);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 1);
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size + burst_size);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 1);
+ test_params->member_port_ids[2], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"Failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.ipackets,
burst_size * 3);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 2, 1),
"Failed to initialise bonded device");
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size * test_params->bonded_slave_count,
+ (uint64_t)burst_size * test_params->bonded_member_count,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, burst_size);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
"transmitted an unexpected number of packets");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT (3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE (40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT (15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT (10)
static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
- struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+ struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+ struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0,
- TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
- expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
}
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[0],
+ test_params->member_port_ids[0],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[1],
+ test_params->member_port_ids[1],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[2],
+ test_params->member_port_ids[2],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[0],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[0],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[1],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ test_params->member_port_ids[1],
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[2],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[2],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Transmit burst */
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
}
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Verify that all mbufs who transmission failed have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
static int
test_broadcast_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+ int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
burst_size[i], "failed to generate packet burst");
}
- /* Add rx data to slave 0 */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to each member */
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs allocate for rx testing */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
- /* Verify that all MACs are the same as first slave added to bonded
+ /* Verify that all MACs are the same as first member added to bonded
* device */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary "
+ "member port (%d) mac address has changed to that of primary "
"port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
1), "Failed to initialise bonded device");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count / active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- for (i = 0; i < test_params->bonded_slave_count; i++)
- rte_eth_stats_reset(test_params->slave_port_ids[i]);
+ for (i = 0; i < test_params->bonded_member_count; i++)
+ rte_eth_stats_reset(test_params->member_port_ids[i]);
- /* Verify that pkts are not sent on slaves with link status down */
+ /* Verify that pkts are not sent on members with link status down */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"rte_eth_tx_burst failed\n");
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
- TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+ TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
"(%d) port_stats.opackets (%d) not as expected (%d)\n",
test_params->bonded_port_id, (int)port_stats.opackets,
- burst_size * slave_count);
+ burst_size * member_count);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
free(test_params->pkt_eth_hdr);
test_params->pkt_eth_hdr = NULL;
- /* Clean up and remove slaves from bonded device */
- remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ remove_members_and_stop_bonded_device();
}
static void
free_virtualpmd_tx_queue(void)
{
- int i, slave_port, to_free_cnt;
+ int i, member_port, to_free_cnt;
struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
/* Free tx queue of virtual pmd */
- for (slave_port = 0; slave_port < test_params->bonded_slave_count;
- slave_port++) {
+ for (member_port = 0; member_port < test_params->bonded_member_count;
+ member_port++) {
to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_port],
+ test_params->member_port_ids[member_port],
pkts_to_free, MAX_PKT_BURST);
for (i = 0; i < to_free_cnt; i++)
rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
uint16_t pktlen;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
(BONDING_MODE_TLB, 1, 3, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
RTE_ETHER_TYPE_IPV4, 0, 0);
} else {
initialize_eth_header(test_params->pkt_eth_hdr,
- (struct rte_ether_addr *)test_params->default_slave_mac,
+ (struct rte_ether_addr *)test_params->default_member_mac,
(struct rte_ether_addr *)dst_mac_0,
RTE_ETHER_TYPE_IPV4, 0, 0);
}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
sum_ports_opackets += port_stats[i].opackets;
}
TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
- "Total packets sent by slaves is not equal to packets sent by bond interface");
+ "Total packets sent by members is not equal to packets sent by bond interface");
- /* checking if distribution of packets is balanced over slaves */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* checking if distribution of packets is balanced over members */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT(port_stats[i].obytes > 0 &&
port_stats[i].obytes < all_bond_obytes,
- "Packets are not balanced over slaves");
+ "Packets are not balanced over members");
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
burst_size);
TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
- /* Clean ugit checkout masterp and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
static int
test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
uint16_t i, j, nb_rx, burst_size = 17;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 4, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in TLB mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 2, 1),
"Failed to initialize bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
- * MAC hasn't been changed */
+ /*
+ * Verify that the bonded MAC is that of the first member and that the other member
+ * MAC hasn't been changed.
+ */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in TLB mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count / active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, (int)4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
- * has changed */
+ /*
+ * Bring primary port down, verify that active member count is 3 and primary
+ * has changed.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
rte_delay_us(500000);
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
for (i = 0; i < 4; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
rte_delay_us(11000);
}
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
burst_size)
return -1;
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ALB_SLAVE_COUNT 2
+#define TEST_ALB_MEMBER_COUNT 2
static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
struct rte_ether_hdr *eth_pkt;
struct rte_arp_hdr *arp_pkt;
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
- slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count;
+ member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
RTE_ARP_OP_REPLY);
rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
- slave_mac1 =
- rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 =
- rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 =
+ rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 =
+ rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
* Checking if packets are properly distributed on bonding ports. Packets
* 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+ int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
- * Checking if update ARP packets were properly send on slave ports.
+ * Checking if ARP update packets were properly sent on member ports.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+ test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
nb_pkts_sum += nb_pkts;
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
/*
* Checking if VLAN headers in generated ARP Update packet are correct.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
retval = 0;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
burst_size = 32;
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite = {
.unit_test_cases = {
TEST_CASE(test_create_bonded_device),
TEST_CASE(test_create_bonded_device_with_invalid_params),
- TEST_CASE(test_add_slave_to_bonded_device),
- TEST_CASE(test_add_slave_to_invalid_bonded_device),
- TEST_CASE(test_remove_slave_from_bonded_device),
- TEST_CASE(test_remove_slave_from_invalid_bonded_device),
- TEST_CASE(test_get_slaves_from_bonded_device),
- TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
- TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+ TEST_CASE(test_add_member_to_bonded_device),
+ TEST_CASE(test_add_member_to_invalid_bonded_device),
+ TEST_CASE(test_remove_member_from_bonded_device),
+ TEST_CASE(test_remove_member_from_invalid_bonded_device),
+ TEST_CASE(test_get_members_from_bonded_device),
+ TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+ TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
TEST_CASE(test_start_bonded_device),
TEST_CASE(test_stop_bonded_device),
TEST_CASE(test_set_bonding_mode),
- TEST_CASE(test_set_primary_slave),
+ TEST_CASE(test_set_primary_member),
TEST_CASE(test_set_explicit_bonded_mac),
TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
TEST_CASE(test_status_interrupt),
- TEST_CASE(test_adding_slave_after_bonded_device_started),
+ TEST_CASE(test_adding_member_after_bonded_device_started),
TEST_CASE(test_roundrobin_tx_burst),
- TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
- TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
- TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+ TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+ TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+ TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
- TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+ TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
TEST_CASE(test_activebackup_verify_mac_assignment),
- TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+ TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
TEST_CASE(test_balance_xmit_policy_configuration),
TEST_CASE(test_balance_l2_tx_burst),
TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
- TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+ TEST_CASE(test_balance_tx_burst_member_tx_fail),
TEST_CASE(test_balance_rx_burst),
TEST_CASE(test_balance_verify_promiscuous_enable_disable),
TEST_CASE(test_balance_verify_mac_assignment),
- TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
TEST_CASE(test_tlb_tx_burst),
TEST_CASE(test_tlb_rx_burst),
TEST_CASE(test_tlb_verify_mac_assignment),
TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
- TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+ TEST_CASE(test_tlb_verify_member_link_status_change_failover),
TEST_CASE(test_alb_change_mac_in_reply_sent),
TEST_CASE(test_alb_reply_from_client),
TEST_CASE(test_alb_receive_vlan_reply),
TEST_CASE(test_alb_ipv4_tx),
TEST_CASE(test_broadcast_tx_burst),
- TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+ TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
TEST_CASE(test_broadcast_rx_burst),
TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
TEST_CASE(test_broadcast_verify_mac_assignment),
- TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
TEST_CASE(test_reconfigure_bonded_device),
TEST_CASE(test_close_bonded_device),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
#define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
-#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT ("net_virt_%d_tx")
#define INVALID_SOCKET_ID (-1)
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
};
-struct slave_conf {
+struct member_conf {
struct rte_ring *rx_queue;
struct rte_ring *tx_queue;
uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
struct link_bonding_unittest_params {
uint8_t bonded_port_id;
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
-#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
static struct link_bonding_unittest_params test_params = {
.bonded_port_id = INVALID_PORT_ID,
- .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+ .member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
.mbuf_pool = NULL,
};
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test and satisfy given condition.
*
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* _condition condition that need to be checked
*/
#define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
if (!!(_condition))
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
* device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* */
-#define FOR_EACH_SLAVE(_i, _slave) \
- FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+ FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
/*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from a member's TX queue.
+ * member port
* buffer for packets
* size size of buffer
* return number of packets or negative error number
*/
static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+ return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
size, NULL);
}
/*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into a member's RX queue.
+ * member port
* buffer for packets
* size number of packets to be injected
* return number of queued packets or negative error number
*/
static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+ return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
size, NULL);
}
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
}
static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
{
struct rte_ether_addr addr, addr_check;
int retval;
/* Some sanity check */
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
- RTE_VERIFY(slave->bonded == 0);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+ RTE_VERIFY(member->bonded == 0);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- rte_ether_addr_copy(&slave_mac_default, &addr);
- addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+ rte_ether_addr_copy(&member_mac_default, &addr);
+ addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
- rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+ rte_eth_dev_mac_addr_remove(member->port_id, &addr);
- TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
- "Failed to set slave MAC address");
+ TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+ "Failed to set member MAC address");
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
- slave->port_id),
- "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
- (uint8_t)(slave - test_params.slave_ports), slave->port_id,
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+ member->port_id),
+ "Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+ (uint8_t)(member - test_params.member_ports), member->port_id,
test_params.bonded_port_id);
- slave->bonded = 1;
+ member->bonded = 1;
if (start) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
- "Failed to start slave %u", slave->port_id);
+ TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+ "Failed to start member %u", member->port_id);
}
- retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
- TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+ retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+ TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
strerror(-retval));
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
- "Slave MAC address is not as expected");
+ "Member MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(member->lacp_parnter_state == 0);
return 0;
}
static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
{
- ptrdiff_t slave_idx = slave - test_params.slave_ports;
+ ptrdiff_t member_idx = member - test_params.member_ports;
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
- RTE_VERIFY(slave->bonded == 1);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(member->bonded == 1);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
- slave->port_id), 0,
- "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
- (uint8_t)slave_idx, slave->port_id,
+ TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+ member->port_id), 0,
+ "Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+ (uint8_t)member_idx, member->port_id,
test_params.bonded_port_id);
- slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ member->bonded = 0;
+ member->lacp_parnter_state = 0;
return 0;
}
static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
- lacpdu_rx_count[slave_id]++;
+ lacpdu_rx_count[member_id]++;
rte_pktmbuf_free(lacp_pkt);
}
static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
{
uint8_t i;
int ret;
RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
- for (i = 0; i < slave_count; i++) {
- TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+ for (i = 0; i < member_count; i++) {
+ TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
"Failed to add port %u to bonded device.\n",
- test_params.slave_ports[i].port_id);
+ test_params.member_ports[i].port_id);
}
/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
int retval;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint16_t i;
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
"Failed to stop bonded port %u",
test_params.bonded_port_id);
- FOR_EACH_SLAVE(i, slave)
- remove_slave(slave);
+ FOR_EACH_MEMBER(i, member)
+ remove_member(member);
- retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
- RTE_DIM(slaves));
+ retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+ RTE_DIM(members));
TEST_ASSERT_EQUAL(retval, 0,
- "Expected bonded device %u have 0 slaves but returned %d.",
+ "Expected bonded device %u have 0 members but returned %d.",
test_params.bonded_port_id, retval);
- FOR_EACH_PORT(i, slave) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+ FOR_EACH_PORT(i, member) {
+ TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
"Failed to stop bonded port %u",
- slave->port_id);
+ member->port_id);
- TEST_ASSERT(slave->bonded == 0,
- "Port id=%u is still marked as enslaved.", slave->port_id);
+ TEST_ASSERT(member->bonded == 0,
+ "Port id=%u is still marked as enmemberd.", member->port_id);
}
return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
{
int retval, nb_mbuf_per_pool;
char name[RTE_ETH_NAME_MAX_LEN];
- struct slave_conf *port;
+ struct member_conf *port;
const uint8_t socket_id = rte_socket_id();
uint16_t i;
@@ -400,10 +400,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(i, port) {
- port = &test_params.slave_ports[i];
+ port = &test_params.member_ports[i];
if (port->rx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
}
if (port->tx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
}
if (port->port_id == INVALID_PORT_ID) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
retval = rte_eth_from_rings(name, &port->rx_queue, 1,
&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
* frame but not LACP
*/
static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
/* Change source address to partner address */
rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ member->lacp_parnter_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
}
/*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given member, searches for LACP packets and replies to them.
*
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives a burst of packets from the member. Looks for LACP packets. Drops
* all other packets. Prepares response LACP and sends it back.
*
* return number of LACP received and replied, -1 on error.
*/
static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
{
int retval;
struct rte_mbuf *rx_buf[MAX_PKT_BURST];
struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
uint16_t lacp_tx_buf_cnt = 0, i;
- retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
- TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
- slave->port_id);
+ retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+ TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+ member->port_id);
for (i = 0; i < (uint16_t)retval; i++) {
- if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+ if (make_lacp_reply(member, rx_buf[i]) == 0) {
/* reply with actor's LACP */
lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
if (lacp_tx_buf_cnt == 0)
return 0;
- retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+ retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
if (retval <= lacp_tx_buf_cnt) {
/* retval might be negative */
for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
}
TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
- "Failed to equeue lacp packets into slave %u tx queue.",
- slave->port_id);
+ "Failed to equeue lacp packets into member %u tx queue.",
+ member->port_id);
return lacp_tx_buf_cnt;
}
/*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given member tx queue contains packets that make the mode 4
+ * handshake complete. It will drain the member queue.
* return 0 if handshake not completed, 1 if handshake was complete,
*/
static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
{
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return member->lacp_parnter_state == expected_state;
}
static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
static int
bond_handshake(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *buf[MAX_PKT_BURST];
uint16_t nb_pkts;
- uint8_t all_slaves_done, i, j;
- uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+ uint8_t all_members_done, i, j;
+ uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
const unsigned delay = bond_get_update_timeout_ms();
/* Exchange LACP frames */
- all_slaves_done = 0;
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ all_members_done = 0;
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
rte_delay_ms(delay);
- all_slaves_done = 1;
- FOR_EACH_SLAVE(j, slave) {
- /* If response already send, skip slave */
+ all_members_done = 1;
+ FOR_EACH_MEMBER(j, member) {
+ /* If response was already sent, skip this member */
if (status[j] != 0)
continue;
- if (bond_handshake_reply(slave) < 0) {
- all_slaves_done = 0;
+ if (bond_handshake_reply(member) < 0) {
+ all_members_done = 0;
break;
}
- status[j] = bond_handshake_done(slave);
+ status[j] = bond_handshake_done(member);
if (status[j] == 0)
- all_slaves_done = 0;
+ all_members_done = 0;
}
nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
}
/* If response didn't send - report failure */
- TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+ TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
/* If flags doesn't match - report failure */
- return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+ return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
}
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
static int
test_mode4_lacp(void)
{
int retval;
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
/* Test LACP handshake function */
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
{
int retval;
/* Test and verify for Stable mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_STABLE,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify for Bandwidth mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify selection for count mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_COUNT,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
}
static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
struct rte_ether_addr *src_mac,
struct rte_ether_addr *dst_mac, uint16_t count)
{
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
if (retval != (int)count)
return retval;
- retval = slave_put_pkts(slave, pkts, count);
+ retval = member_put_pkts(member, pkts, count);
if (retval > 0 && retval != count)
free_pkts(&pkts[retval], count - retval);
TEST_ASSERT_EQUAL(retval, count,
- "Failed to enqueue packets into slave %u RX queue", slave->port_id);
+ "Failed to enqueue packets into member %u RX queue", member->port_id);
return TEST_SUCCESS;
}
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
static int
test_mode4_rx(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
struct rte_ether_addr dst_mac;
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -838,7 +838,7 @@ test_mode4_rx(void)
dst_mac.addr_bytes[0] += 2;
/* First try with promiscuous mode enabled.
- * Add 2 packets to each slave. First with bonding MAC address, second with
+ * Add 2 packets to each member. First with bonding MAC address, second with
* different. Check if we received all of them. */
retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect 2 packets per slave */
+ /* Expect 2 packets per member */
expected_pkts_cnt += 2;
}
@@ -894,16 +894,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect only one packet per slave */
+ /* Expect only one packet per member */
expected_pkts_cnt += 1;
}
@@ -927,19 +927,19 @@ test_mode4_rx(void)
TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
"Expected %u packets but received only %d", expected_pkts_cnt, retval);
- /* Link down test: simulate link down for first slave. */
+ /* Link down test: simulate link down for first member. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- /* Find first slave and make link down on it*/
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ /* Find first member and make link down on it */
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding */
for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
- /* Put packet to each slave */
- FOR_EACH_SLAVE(i, slave) {
+ /* Put packet to each member */
+ FOR_EACH_MEMBER(i, member) {
void *pkt = NULL;
- dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+ dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
- src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+ src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
if (retval > 0)
free_pkts(pkts, retval);
- while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+ while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
rte_pktmbuf_free(pkt);
- if (slave_down_id == slave->port_id)
+ if (member_down_id == member->port_id)
TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
else
TEST_ASSERT_NOT_EQUAL(retval, 0,
- "Expected to receive some packets on slave %u.",
- slave->port_id);
- rte_eth_dev_start(slave->port_id);
+ "Expected to receive some packets on member %u.",
+ member->port_id);
+ rte_eth_dev_start(member->port_id);
for (j = 0; j < 5; j++) {
- TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+ TEST_ASSERT(bond_handshake_reply(member) >= 0,
"Handshake after link up");
- if (bond_handshake_done(slave) == 1)
+ if (bond_handshake_done(member) == 1)
break;
}
- TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
}
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
static int
test_mode4_tx_burst(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets were transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+ "member %u unexpectedly transmitted %d SLOW packets", member->port_id,
slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
pkts_cnt += normal_cnt;
}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- /* Link down test:
- * simulate link down for first slave. */
+ /*
+ * Link down test:
+ * simulate link down for first member.
+ */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding. */
for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets was transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
- if (slave_down_id == slave->port_id) {
+ if (member_down_id == member->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
- normal_cnt + slow_cnt, slave->port_id);
+ "member %u enexpectedly transmitted %u packets",
+ normal_cnt + slow_cnt, member->port_id);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets",
- slave->port_id, slow_cnt);
+ "member %u unexpectedly transmitted %d SLOW packets",
+ member->port_id, slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
}
pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
{
struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
rte_ether_addr_copy(&parnter_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
offsetof(struct marker, reserved_90) -
offsetof(struct marker, requester_port);
RTE_VERIFY(marker_hdr->marker.info_length == 16);
- marker_hdr->marker.requester_port = slave->port_id + 1;
+ marker_hdr->marker.requester_port = member->port_id + 1;
marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
marker_hdr->marker.terminator_length = 0;
}
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
static int
test_mode4_marker(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *pkts[MAX_PKT_BURST];
struct rte_mbuf *marker_pkt;
struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
uint8_t i, j;
const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
- retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+ retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
delay = bond_get_update_timeout_ms();
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
- init_marker(marker_pkt, slave);
+ init_marker(marker_pkt, member);
- retval = slave_put_pkts(slave, &marker_pkt, 1);
+ retval = member_put_pkts(member, &marker_pkt, 1);
if (retval != 1)
rte_pktmbuf_free(marker_pkt);
TEST_ASSERT_EQUAL(retval, 1,
- "Failed to send marker packet to slave %u", slave->port_id);
+ "Failed to send marker packet to member %u", member->port_id);
for (j = 0; j < 20; ++j) {
rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
/* Check if LACP packet was send by state machines
First and only packet must be a maker response */
- retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+ retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
if (retval == 0)
continue;
if (retval > 1)
free_pkts(pkts, retval);
- TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+ TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
nb_pkts = retval;
marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
TEST_ASSERT(j < 20, "Marker response not found");
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
static int
test_mode4_expired(void)
{
- struct slave_conf *slave, *exp_slave = NULL;
+ struct member_conf *member, *exp_member = NULL;
struct rte_mbuf *pkts[MAX_PKT_BURST];
int retval;
uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
struct rte_eth_bond_8023ad_conf conf;
- retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
0);
/* Set custom timeouts to make test last shorter. */
rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
/* Wait for new settings to be applied. */
for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
- FOR_EACH_SLAVE(j, slave)
- bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(j, member)
+ bond_handshake_reply(member);
rte_delay_ms(conf.update_timeout_ms);
}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- /* Find first slave */
- FOR_EACH_SLAVE(i, slave) {
- exp_slave = slave;
+ /* Find first member */
+ FOR_EACH_MEMBER(i, member) {
+ exp_member = member;
break;
}
- RTE_VERIFY(exp_slave != NULL);
+ RTE_VERIFY(exp_member != NULL);
/* When one of partners do not send or respond to LACP frame in
* conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
retval);
- FOR_EACH_SLAVE(i, slave) {
- retval = bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(i, member) {
+ retval = bond_handshake_reply(member);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that suppose to be expired. */
- if (slave == exp_slave) {
- while (rte_ring_count(slave->rx_queue) > 0) {
+ /* Remove the reply for the member that is supposed to expire. */
+ if (member == exp_member) {
+ while (rte_ring_count(member->rx_queue) > 0) {
void *pkt = NULL;
- rte_ring_dequeue(slave->rx_queue, &pkt);
+ rte_ring_dequeue(member->rx_queue, &pkt);
rte_pktmbuf_free(pkt);
}
}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
retval);
}
- /* After test only expected slave should be in EXPIRED state */
- FOR_EACH_SLAVE(i, slave) {
- if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
- "Slave %u should be in expired.", slave->port_id);
+ /* After test only expected member should be in EXPIRED state */
+ FOR_EACH_MEMBER(i, member) {
+ if (member == exp_member)
+ TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+ "Member %u should be in expired.", member->port_id);
else
- TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
- "Slave %u should be operational.", slave->port_id);
+ TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+ "Member %u should be operational.", member->port_id);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
* . try to transmit lacpdu (should fail)
* . try to set collecting and distributing flags (should fail)
* reconfigure w/external sm
- * . transmit one lacpdu on each slave using new api
- * . make sure each slave receives one lacpdu using the callback api
- * . transmit one data pdu on each slave (should fail)
+ * . transmit one lacpdu on each member using new api
+ * . make sure each member receives one lacpdu using the callback api
+ * . transmit one data pdu on each member (should fail)
* . enable distribution and collection, send one data pdu each again
*/
int retval;
- struct slave_conf *slave = NULL;
+ struct member_conf *member = NULL;
uint8_t i;
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]),
- "Slave should not allow manual LACP xmit");
+ member->port_id, lacp_tx_buf[i]),
+ "Member should not allow manual LACP xmit");
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
test_params.bonded_port_id,
- slave->port_id, 1),
- "Slave should not allow external state controls");
+ member->port_id, 1),
+ "Member should not allow external state controls");
}
free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
test_mode4_ext_lacp(void)
{
int retval;
- struct slave_conf *slave = NULL;
- uint8_t all_slaves_done = 0, i;
+ struct member_conf *member = NULL;
+ uint8_t all_members_done = 0, i;
uint16_t nb_pkts;
const unsigned int delay = bond_get_update_timeout_ms();
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
- struct rte_mbuf *buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+ struct rte_mbuf *buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
for (i = 0; i < 30; ++i)
rte_delay_ms(delay);
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
retval = rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]);
+ member->port_id, lacp_tx_buf[i]);
TEST_ASSERT_SUCCESS(retval,
- "Slave should allow manual LACP xmit");
+ "Member should allow manual LACP xmit");
}
nb_pkts = bond_tx(NULL, 0);
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
- FOR_EACH_SLAVE(i, slave) {
- nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
- TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+ FOR_EACH_MEMBER(i, member) {
+ nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+ TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
nb_pkts, i);
- slave_put_pkts(slave, buf, nb_pkts);
+ member_put_pkts(member, buf, nb_pkts);
}
nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
/* wait for the periodic callback to run */
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
uint8_t s, total = 0;
rte_delay_ms(delay);
- FOR_EACH_SLAVE(s, slave) {
- total += lacpdu_rx_count[slave->port_id];
+ FOR_EACH_MEMBER(s, member) {
+ total += lacpdu_rx_count[member->port_id];
}
- if (total >= SLAVE_COUNT)
- all_slaves_done = 1;
+ if (total >= MEMBER_COUNT)
+ all_members_done = 1;
}
- FOR_EACH_SLAVE(i, slave) {
- TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
- "Slave port %u should have received 1 lacpdu (count=%u)",
- slave->port_id,
- lacpdu_rx_count[slave->port_id]);
+ FOR_EACH_MEMBER(i, member) {
+ TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+ "Member port %u should have received 1 lacpdu (count=%u)",
+ member->port_id,
+ lacpdu_rx_count[member->port_id]);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
static int
check_environment(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i, env_state;
- uint16_t slaves[RTE_DIM(test_params.slave_ports)];
- int slaves_count;
+ uint16_t members[RTE_DIM(test_params.member_ports)];
+ int members_count;
env_state = 0;
FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
break;
}
- slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
- slaves, RTE_DIM(slaves));
+ members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+ members, RTE_DIM(members));
- if (slaves_count != 0)
+ if (members_count != 0)
env_state |= 0x10;
TEST_ASSERT_EQUAL(env_state, 0,
"Environment not clean (port %u):%s%s%s%s%s",
port->port_id,
- env_state & 0x01 ? " slave rx queue not clean" : "",
- env_state & 0x02 ? " slave tx queue not clean" : "",
- env_state & 0x04 ? " port marked as enslaved" : "",
- env_state & 0x80 ? " slave state is not reset" : "",
- env_state & 0x10 ? " slave count not equal 0" : ".");
+ env_state & 0x01 ? " member rx queue not clean" : "",
+ env_state & 0x02 ? " member tx queue not clean" : "",
+ env_state & 0x04 ? " port marked as a member" : "",
+ env_state & 0x80 ? " member state is not reset" : "",
+ env_state & 0x10 ? " member count not equal 0" : ".");
return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
static int
test_mode4_executor(int (*test_func)(void))
{
- struct slave_conf *port;
+ struct member_conf *port;
int test_result;
uint8_t i;
void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
FOR_EACH_PORT(i, port) {
--git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RXTX_RING_SIZE 1024
#define RXTX_QUEUE_COUNT 4
#define BONDED_DEV_NAME ("net_bonding_rss")
-#define SLAVE_DEV_NAME_FMT ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT ("rssconf_member%d_q%d")
#define NUM_MBUFS 8191
#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-struct slave_conf {
+struct member_conf {
uint16_t port_id;
struct rte_eth_dev_info dev_info;
@@ -54,7 +54,7 @@ struct slave_conf {
uint8_t rss_key[40];
struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- uint8_t is_slave;
+ uint8_t is_member;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
};
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
static struct link_bonding_rssconf_unittest_params test_params = {
.bond_port_id = INVALID_PORT_ID,
- .slave_ports = {
- [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+ .member_ports = {
+ [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
},
.mbuf_pool = NULL,
};
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_i]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
static int
configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
}
/**
- * Remove all slaves from bonding
+ * Remove all members from bonding
*/
static int
-remove_slaves(void)
+remove_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+ port = &test_params.member_ports[n];
+ if (port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
test_params.bond_port_id, port->port_id),
- "Cannot remove slave %d from bonding", port->port_id);
- port->is_slave = 0;
+ "Cannot remove member %d from bonding", port->port_id);
+ port->is_member = 0;
}
}
@@ -173,30 +173,30 @@ remove_slaves(void)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+ TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
"Failed to stop port %u", test_params.bond_port_id);
return TEST_SUCCESS;
}
/**
- * Add all slaves to bonding
+ * Add all members to bonding
*/
static int
-bond_slaves(void)
+bond_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (!port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot attach slave %d to the bonding",
+ port = &test_params.member_ports[n];
+ if (!port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot attach member %d to the bonding",
port->port_id);
- port->is_slave = 1;
+ port->is_member = 1;
}
}
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
}
/**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if members RETA is synchronized with bonding port. Returns 1 if member
* port is synced with bonding port.
*/
static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
{
unsigned i;
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
}
/**
- * Fetch slaves RETA
+ * Fetch members RETA
*/
static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
unsigned j;
for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
}
/**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add member to check if members configuration is synced with
+ * the bonding ports values after adding new member.
*/
static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
{
- struct slave_conf *port = &(test_params.slave_ports[0]);
+ struct member_conf *port = &(test_params.member_ports[0]);
- /* 1. Remove first slave from bonding */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
- port->port_id), "Cannot remove slave #d from bonding");
+ /* 1. Remove first member from bonding */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+ port->port_id), "Cannot remove member #d from bonding");
- /* 2. Change removed (ex-)slave and bonding configuration to different
+ /* 2. Change removed (ex-)member and bonding configuration to different
* values
*/
reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
bond_reta_fetch();
reta_set(port->port_id, 2, port->dev_info.reta_size);
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 0,
- "Removed slave didn't should be synchronized with bonding port");
+ "Removed member didn't should be synchronized with bonding port");
- /* 3. Add (ex-)slave and check if configuration changed*/
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot add slave");
+ /* 3. Add (ex-)member and check if configuration changed*/
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot add member");
bond_reta_fetch();
- slave_reta_fetch(port);
+ member_reta_fetch(port);
return reta_check_synced(port);
}
/**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
*/
static int
test_propagate(void)
{
unsigned i;
uint8_t n;
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t bond_rss_key[40];
struct rte_eth_rss_conf bond_rss_conf;
@@ -349,18 +349,18 @@ test_propagate(void)
retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
&bond_rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
- "Hash function not propagated for slave %d",
+ "Hash function not propagated for member %d",
port->port_id);
}
@@ -376,11 +376,11 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
memset(port->rss_conf.rss_key, 0, 40);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
}
memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&(port->rss_conf));
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
/* compare keys */
retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
sizeof(bond_rss_key));
- TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+ TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
port->port_id);
}
}
@@ -416,10 +416,10 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
}
TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
bond_reta_fetch();
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
}
}
@@ -459,29 +459,29 @@ test_rss(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
- TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+ TEST_ASSERT(member_remove_and_add() == 1, "remove and add members failed.");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
/**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and members.
*/
static int
test_rss_config_lazy(void)
{
struct rte_eth_rss_conf bond_rss_conf = {0};
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t rss_key[40];
uint64_t rss_hf;
int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
}
- /* Set all keys to zero for all slaves */
+ /* Set all keys to zero for all members */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+ TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
memset(port->rss_key, 0, sizeof(port->rss_key));
port->rss_conf.rss_key = port->rss_key;
port->rss_conf.rss_key_len = sizeof(port->rss_key);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
}
/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
/* Test RETA propagation */
for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
}
retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
@@ -579,13 +579,13 @@ test_setup(void)
int retval;
int port_id;
char name[256];
- struct slave_conf *port;
+ struct member_conf *port;
struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
if (test_params.mbuf_pool == NULL) {
test_params.mbuf_pool = rte_pktmbuf_pool_create(
- "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+ "RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
port_id = rte_eth_dev_count_avail();
- snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+ snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
retval = rte_vdev_init(name, "size=64,copy=0");
TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
}
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
----------
A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMD's are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
testpmd> create bonded device 1 0
Created new bonded device net_bond_testpmd_0 on (port 2).
- testpmd> add bonding slave 0 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 0 2
+ testpmd> add bonding member 1 2
testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
Set primary to P1 before starting bonding port.
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
Use P2 only for forwarding.
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
testpmd> start
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
.. code-block:: console
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
testpmd> clear port stats all
testpmd> set bonding primary 0 2
- testpmd> remove bonding slave 1 2
+ testpmd> remove bonding member 1 2
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
testpmd> show port stats all.
testpmd> show config fwd
testpmd> show bonding config 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console
- testpmd> remove bonding slave 0 2
+ testpmd> remove bonding member 0 2
testpmd> show bonding config 2
testpmd> port stop 0
testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
.. code-block:: console
- dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
- (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+ dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -- -i --port-topology=chained
+ (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -- -i --port-topology=chained
Vector Processing
-----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
v:langID="1033"
v:metric="true"
v:viewMarkup="false"><v:userDefs><v:ud
- v:nameU="msvSubprocessMaster"
+ v:nameU="msvSubprocessMain"
v:prompt=""
v:val="VT4(Rectangle)" /><v:ud
v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..58e5ef41da 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
``rte_eth_dev`` ports of the same speed and duplex to provide similar
capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
and a switch. The new bonded PMD will then process these interfaces based on
the mode of operation specified to provide support for features such as
redundant links, fault tolerance and/or load balancing.
The librte_net_bond library exports a C API which provides an API for the
creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
.. note::
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides load balancing and fault tolerance by transmission of
- packets in sequential order from the first available slave device through
+ packets in sequential order from the first available member device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
Active Backup (Mode 1)
- In this mode only one slave in the bond is active at any time, a different
- slave becomes active if, and only if, the primary active slave fails,
- thereby providing fault tolerance to slave failure. The single logical
+ In this mode only one member in the bond is active at any time, a different
+ member becomes active if, and only if, the primary active member fails,
+ thereby providing fault tolerance to member failure. The single logical
bonded interface's MAC address is externally visible on only one NIC (port)
to avoid confusing the network switch.
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides transmit load balancing (based on the selected
transmission policy) and fault tolerance. The default policy (layer2) uses
a simple calculation based on the packet flow source and destination MAC
- addresses as well as the number of active slaves available to the bonded
- device to classify the packet to a specific slave to transmit on. Alternate
+ addresses as well as the number of active members available to the bonded
+ device to classify the packet to a specific member to transmit on. Alternate
transmission policies supported are layer 2+3, this takes the IP source and
- destination addresses into the calculation of the transmit slave port and
+ destination addresses into the calculation of the transmit member port and
the final supported policy is layer 3+4, this uses IP source and
destination addresses as well as the TCP/UDP source and destination port.
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
Broadcast (Mode 3)
- This mode provides fault tolerance by transmission of packets on all slave
+ This mode provides fault tolerance by transmission of packets on all member
ports.
* **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
intervals period of less than 100ms.
#. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
- where N is the number of slaves. This is a space required for LACP
+ where N is the number of members. This is space required for LACP
frames. Additionally LACP packets are included in the statistics, but
they are not returned to the application.
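For illustration, a minimal polling sketch under stated assumptions (a single queue, ``bonding_port_id`` already configured and started, and ``BURST_SIZE`` chosen by the application); the 2xN headroom and the tight loop are one reading of the two requirements above, not something mandated by this patch:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_ethdev.h>
   #include <rte_mbuf.h>
   #include <rte_eth_bond.h>

   #define BURST_SIZE 32

   static void
   mode4_poll(uint16_t bonding_port_id)
   {
       uint16_t members[RTE_MAX_ETHPORTS];
       /* Burst array keeps 2 x N spare entries for LACP frames. */
       struct rte_mbuf *bufs[BURST_SIZE + 2 * RTE_MAX_ETHPORTS];
       uint16_t nb_rx, nb_tx;
       /* N = current member count (renamed API from this series). */
       int nb_members = rte_eth_bond_members_get(bonding_port_id, members,
                                                 RTE_DIM(members));

       if (nb_members <= 0)
           return;

       for (;;) {
           /* Keep calling the burst functions well inside the 100 ms window. */
           nb_rx = rte_eth_rx_burst(bonding_port_id, 0, bufs, BURST_SIZE);
           nb_tx = rte_eth_tx_burst(bonding_port_id, 0, bufs, nb_rx);
           while (nb_tx < nb_rx)
               rte_pktmbuf_free(bufs[nb_tx++]);
       }
   }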
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides an adaptive transmit load balancing. It dynamically
- changes the transmitting slave, according to the computed load. Statistics
+ changes the transmitting member, according to the computed load. Statistics
are collected in 100ms intervals and scheduled every 10ms.
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
programmatically via the C API ``rte_eth_bond_create`` function.
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
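A minimal sketch of this programmatic path, assuming two member port ids already exist (the device name and the active-backup mode are illustrative; ``rte_eth_bond_member_add`` is the renamed API from this series):

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_eth_bond.h>

   static int
   setup_bonding(uint16_t member0, uint16_t member1)
   {
       /* Returns the port id of the new bonded device, or a negative value. */
       int bonding_port_id = rte_eth_bond_create("net_bonding0",
                                                 BONDING_MODE_ACTIVE_BACKUP, 0);

       if (bonding_port_id < 0)
           return -1;

       if (rte_eth_bond_member_add(bonding_port_id, member0) != 0 ||
           rte_eth_bond_member_add(bonding_port_id, member1) != 0)
           return -1;

       return bonding_port_id;
   }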
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to provide RSS configuration on members transparent to the client
application implementation.
Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. This lets us define the meaning
of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without pointing at any member inside. It is required to ensure
consistency and made it more error-proof.
RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and default key for device is used.
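For instance, a hedged sketch of pushing one RSS setting through the bonding port only (the hash mask is illustrative; per the paragraphs above the bonding PMD propagates it to every member):

.. code-block:: c

   #include <rte_ethdev.h>

   static int
   set_bonding_rss(uint16_t bonding_port_id)
   {
       struct rte_eth_rss_conf rss_conf = {
           .rss_key = NULL,      /* NULL: leave the RSS key unchanged */
           .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
       };

       /* Configure the bonding port; members are updated by the bonding PMD. */
       return rte_eth_dev_rss_hash_update(bonding_port_id, &rss_conf);
   }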
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with the RSS configuration, there is flow consistency in the bonded members for the
next rte flow operations:
Validate:
- - Validate flow for each slave, failure at least for one slave causes to
+ - Validate the flow for each member; failure for at least one member causes
bond validation failure.
Create:
- - Create the flow in all slaves.
- - Save all the slaves created flows objects in bonding internal flow
+ - Create the flow in all members.
+ - Save all the members' created flow objects in the bonding internal flow
structure.
- - Failure in flow creation for existed slave rejects the flow.
- - Failure in flow creation for new slaves in slave adding time rejects
- the slave.
+ - Failure in flow creation for an existing member rejects the flow.
+ - Failure in flow creation for a new member at member add time rejects
+ the member.
Destroy:
- - Destroy the flow in all slaves and release the bond internal flow
+ - Destroy the flow in all members and release the bond internal flow
memory.
Flush:
- - Destroy all the bonding PMD flows in all the slaves.
+ - Destroy all the bonding PMD flows in all the members.
.. note::
- Don't call slaves flush directly, It destroys all the slave flows which
+ Do not call flow flush on members directly; it destroys all the member flows, which
may include external flows or the bond internal LACP flow.
Query:
- - Summarize flow counters from all the slaves, relevant only for
+ - Summarize flow counters from all the members, relevant only for
``RTE_FLOW_ACTION_TYPE_COUNT``.
Isolate:
- - Call to flow isolate for all slaves.
- - Failure in flow isolation for existed slave rejects the isolate mode.
- - Failure in flow isolation for new slaves in slave adding time rejects
- the slave.
+ - Call flow isolate for all members.
+ - Failure in flow isolation for an existing member rejects the isolate mode.
+ - Failure in flow isolation for a new member at member add time rejects
+ the member.
All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
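A minimal sketch of that single entry point, assuming ``bonding_port_id`` is a started bonding port (the drop rule itself is purely illustrative):

.. code-block:: c

   #include <rte_flow.h>

   static int
   install_drop_flow(uint16_t bonding_port_id)
   {
       struct rte_flow_attr attr = { .ingress = 1 };
       struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };
       struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_DROP },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };
       struct rte_flow_error error;
       struct rte_flow *flow;

       /* Validation fails if any member rejects the flow. */
       if (rte_flow_validate(bonding_port_id, &attr, pattern, actions, &error) != 0)
           return -1;

       /* Creation replicates the flow to every member internally. */
       flow = rte_flow_create(bonding_port_id, &attr, pattern, actions, &error);
       return flow == NULL ? -1 : 0;
   }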
Link Status Change Interrupts / Polling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
Link bonding devices support the registration of a link status change callback,
using the ``rte_eth_dev_callback_register`` API, this will be called when the
status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
The link bonding library also supports devices which do not implement link
status change interrupts, this is achieved by polling the devices link status at
a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
whether the device supports interrupts or whether the link status should be
monitored by polling it.
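A small sketch of both mechanisms, assuming ``bonding_port_id`` exists (the 100 ms polling interval is illustrative):

.. code-block:: c

   #include <stdio.h>
   #include <rte_ethdev.h>
   #include <rte_eth_bond.h>

   static int
   lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
   {
       struct rte_eth_link link;

       (void)event;
       (void)cb_arg;
       (void)ret_param;
       rte_eth_link_get_nowait(port_id, &link);
       printf("bonding port %u link is now %s\n", (unsigned int)port_id,
              link.link_status ? "up" : "down");
       return 0;
   }

   static int
   watch_bonding_link(uint16_t bonding_port_id)
   {
       if (rte_eth_dev_callback_register(bonding_port_id,
               RTE_ETH_EVENT_INTR_LSC, lsc_event_cb, NULL) != 0)
           return -1;
       /* Poll members without LSC interrupts every 100 ms. */
       return rte_eth_bond_link_monitoring_set(bonding_port_id, 100);
   }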
@@ -233,30 +233,30 @@ Requirements / Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
these parameters.
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
itself can be started.
To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
Like all other PMD, all functions exported by a PMD are lock-free functions
that are assumed not to be invoked in parallel on different logical cores to
work on the same target object.
It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
bonded device to read.
Configuration
@@ -265,25 +265,25 @@ Configuration
Link bonding devices are created using the ``rte_eth_bond_create`` API
which requires a unique device name, the bonding mode,
and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
the device is in balance XOR mode.
-Slave Devices
-^^^^^^^^^^^^^
+Member Devices
+^^^^^^^^^^^^^^
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
configuration of the bonded device on being added to a bonded device.
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device to its
+original value on removal of the member from it.
-Primary Slave
-^^^^^^^^^^^^^
+Primary Member
+^^^^^^^^^^^^^^
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
device is in active backup mode. A different port will only be used if, and
only if, the current primary port goes down. If the user does not specify a
primary port it will default to being the first port added to the bonded device.
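A minimal sketch, assuming the bonding and member port ids are already known (both calls are the existing, unrenamed bonding API):

.. code-block:: c

   #include <rte_eth_bond.h>

   static int
   pick_primary(uint16_t bonding_port_id, uint16_t member_port_id)
   {
       if (rte_eth_bond_primary_set(bonding_port_id, member_port_id) != 0)
           return -1;

       /* Returns the current primary port id, or a negative value on error. */
       return rte_eth_bond_primary_get(bonding_port_id);
   }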
@@ -292,14 +292,14 @@ MAC Address
^^^^^^^^^^^
The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all member devices depending on the
operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
the bonded devices MAC address.
If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
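A minimal sketch, reusing the MAC address from the usage examples later in this guide (both calls are the existing, unrenamed API; the reset call falls back to the primary member's address):

.. code-block:: c

   #include <rte_ether.h>
   #include <rte_eth_bond.h>

   static int
   set_bonding_mac(uint16_t bonding_port_id)
   {
       struct rte_ether_addr addr = {
           .addr_bytes = { 0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d },
       };

       if (rte_eth_bond_mac_address_set(bonding_port_id, &addr) != 0)
           return -1;

       /* Optional: drop the user defined address again. */
       return rte_eth_bond_mac_address_reset(bonding_port_id);
   }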
Balance XOR Transmit Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
* **Layer 2:** Ethernet MAC address based balancing is the default
transmission policy for Balance XOR bonding mode. It uses a simple XOR
calculation on the source MAC address and destination MAC address of the
- packet and then calculate the modulus of this value to calculate the slave
+ packet and then calculates the modulus of this value to select the member
device to transmit the packet on.
* **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
combination of source/destination MAC addresses and the source/destination
- IP addresses of the data packet to decide which slave port the packet will
+ IP addresses of the data packet to decide which member port the packet will
be transmitted on.
* **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
of source/destination IP Address and the source/destination UDP ports of
- the packet of the data packet to decide which slave port the packet will be
+ the data packet to decide which member port the packet will be
transmitted on.
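A minimal sketch of selecting one of the policies above at run time (``bonding_port_id`` is assumed to be a bonded port in balance XOR mode):

.. code-block:: c

   #include <rte_eth_bond.h>

   static int
   use_l34_policy(uint16_t bonding_port_id)
   {
       /* Hash on IP addresses plus TCP/UDP ports, as described above. */
       return rte_eth_bond_xmit_policy_set(bonding_port_id,
                                           BALANCE_XMIT_POLICY_LAYER34);
   }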
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
which will be used must be setup using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup``.
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
before it can be started using ``rte_eth_dev_start``.
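A condensed sketch of that start-up order, assuming a mempool and the relevant port ids already exist (queue counts and descriptor numbers are illustrative):

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_eth_bond.h>

   static int
   start_bonding_port(uint16_t bonding_port_id, uint16_t member_port_id,
                      struct rte_mempool *mb_pool)
   {
       struct rte_eth_conf port_conf = { 0 };

       if (rte_eth_dev_configure(bonding_port_id, 1, 1, &port_conf) != 0)
           return -1;
       if (rte_eth_rx_queue_setup(bonding_port_id, 0, 128,
                                  rte_eth_dev_socket_id(bonding_port_id),
                                  NULL, mb_pool) != 0)
           return -1;
       if (rte_eth_tx_queue_setup(bonding_port_id, 0, 512,
                                  rte_eth_dev_socket_id(bonding_port_id),
                                  NULL) != 0)
           return -1;
       /* At least one member must be attached before starting. */
       if (rte_eth_bond_member_add(bonding_port_id, member_port_id) != 0)
           return -1;

       return rte_eth_dev_start(bonding_port_id);
   }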
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member devices' link status is down or if all members are removed from the link
bonding device then the link status of the bonding device will go down.
It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
-* A least one slave device is provided with for each bonded device definition.
+* At least one member device is provided for each bonded device definition.
* The operation mode of the bonded device being created is provided.
@@ -404,20 +404,20 @@ The different options are:
mode=2
-* slave: Defines the PMD device which will be added as slave to the bonded
+* member: Defines the PMD device which will be added as member to the bonded
device. This option can be selected multiple times, for each device to be
- added as a slave. Physical devices should be specified using their PCI
+ added as a member. Physical devices should be specified using their PCI
address, in the format domain:bus:devid.function
.. code-block:: console
- slave=0000:0a:00.0,slave=0000:0a:00.1
+ member=0000:0a:00.0,member=0000:0a:00.1
-* primary: Optional parameter which defines the primary slave port,
- is used in active backup mode to select the primary slave for data TX/RX if
+* primary: Optional parameter which defines the primary member port,
+ is used in active backup mode to select the primary member for data TX/RX if
it is available. The primary port also is used to select the MAC address to
- use when it is not defined by the user. This defaults to the first slave
- added to the device if it is specified. The primary device must be a slave
+ use when it is not defined by the user. If no primary is specified, this defaults
+ to the first member added to the device. The primary device must be a member
of the bonded device.
.. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
socket_id=0
* mac: Optional parameter to select a MAC address for link bonding device,
- this overrides the value of the primary slave device.
+ this overrides the value of the primary member device.
.. code-block:: console
@@ -474,29 +474,29 @@ The different options are:
Examples of Usage
^^^^^^^^^^^^^^^^^
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
.. _bonding_testpmd_commands:
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
testpmd> create bonded device 1 0
created new bonded device (port X)
-add bonding slave
-~~~~~~~~~~~~~~~~~
+add bonding member
+~~~~~~~~~~~~~~~~~~
Adds Ethernet device to a Link Bonding device::
- testpmd> add bonding slave (slave id) (port id)
+ testpmd> add bonding member (member id) (port id)
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
- testpmd> add bonding slave 6 10
+ testpmd> add bonding member 6 10
-remove bonding slave
-~~~~~~~~~~~~~~~~~~~~
+remove bonding member
+~~~~~~~~~~~~~~~~~~~~~
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
- testpmd> remove bonding slave (slave id) (port id)
+ testpmd> remove bonding member (member id) (port id)
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
- testpmd> remove bonding slave 6 10
+ testpmd> remove bonding member 6 10
set bonding mode
~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
set bonding primary
~~~~~~~~~~~~~~~~~~~
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
- testpmd> set bonding primary (slave id) (port id)
+ testpmd> set bonding primary (member id) (port id)
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
testpmd> set bonding primary 6 10
@@ -590,7 +590,7 @@ set bonding mon_period
Set the link status monitoring polling period in milliseconds for a bonding device.
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
When the mon_period is set to a value greater than 0 then all PMD's which do not support
link status ISR will be queried every polling interval to check if their link status has changed::
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
set bonding lacp dedicated_queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
when in mode 4 (link-aggregation-802.3ad)::
testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
testpmd> show bonding config (port id)
For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
in balance mode with a transmission policy of layer 2+3::
testpmd> show bonding config 9
- Dev basic:
Bonding mode: BALANCE(2)
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
- Slaves (3): [1 3 4]
- Active Slaves (3): [1 3 4]
+ Members (3): [1 3 4]
+ Active Members (3): [1 3 4]
Primary: [3]
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
cmdline_fixed_string_t set;
cmdline_fixed_string_t bonding;
cmdline_fixed_string_t primary;
- portid_t slave_id;
+ portid_t member_id;
portid_t port_id;
};
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
struct cmd_set_bonding_primary_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* Set the primary slave for a bonded device. */
- if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
- fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
- master_port_id);
+ /* Set the primary member for a bonded device. */
+ if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+ fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+ main_port_id);
return;
}
init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, RTE_UINT16);
+ member_id, RTE_UINT16);
static cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
port_id, RTE_UINT16);
static cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
- .help_str = "set bonding primary <slave_id> <port_id>: "
- "Set the primary slave for port_id",
+ .help_str = "set bonding primary <member_id> <port_id>: "
+ "Set the primary member for port_id",
.data = NULL,
.tokens = {
(void *)&cmd_setbonding_primary_set,
(void *)&cmd_setbonding_primary_bonding,
(void *)&cmd_setbonding_primary_primary,
- (void *)&cmd_setbonding_primary_slave,
+ (void *)&cmd_setbonding_primary_member,
(void *)&cmd_setbonding_primary_port,
NULL
}
};
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
cmdline_fixed_string_t add;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_add_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_add_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* add the slave for a bonded device. */
- if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+ /* add the member for a bonded device. */
+ if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to add slave %d to master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to add member %d to main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
- ports[master_port_id].update_conf = 1;
+ ports[main_port_id].update_conf = 1;
init_port_config();
- set_port_slave_flag(slave_port_id);
+ set_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
- .f = cmd_add_bonding_slave_parsed,
- .help_str = "add bonding slave <slave_id> <port_id>: "
- "Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+ .f = cmd_add_bonding_member_parsed,
+ .help_str = "add bonding member <member_id> <port_id>: "
+ "Add a member device to a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_addbonding_slave_add,
- (void *)&cmd_addbonding_slave_bonding,
- (void *)&cmd_addbonding_slave_slave,
- (void *)&cmd_addbonding_slave_slaveid,
- (void *)&cmd_addbonding_slave_port,
+ (void *)&cmd_addbonding_member_add,
+ (void *)&cmd_addbonding_member_bonding,
+ (void *)&cmd_addbonding_member_member,
+ (void *)&cmd_addbonding_member_memberid,
+ (void *)&cmd_addbonding_member_port,
NULL
}
};
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
cmdline_fixed_string_t remove;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_remove_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_remove_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* remove the slave from a bonded device. */
- if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+ /* remove the member from a bonded device. */
+ if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to remove slave %d from master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to remove member %d from main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
init_port_config();
- clear_port_slave_flag(slave_port_id);
+ clear_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
- .f = cmd_remove_bonding_slave_parsed,
- .help_str = "remove bonding slave <slave_id> <port_id>: "
- "Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+ .f = cmd_remove_bonding_member_parsed,
+ .help_str = "remove bonding member <member_id> <port_id>: "
+ "Remove a member device from a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_removebonding_slave_remove,
- (void *)&cmd_removebonding_slave_bonding,
- (void *)&cmd_removebonding_slave_slave,
- (void *)&cmd_removebonding_slave_slaveid,
- (void *)&cmd_removebonding_slave_port,
+ (void *)&cmd_removebonding_member_remove,
+ (void *)&cmd_removebonding_member_bonding,
+ (void *)&cmd_removebonding_member_member,
+ (void *)&cmd_removebonding_member_memberid,
+ (void *)&cmd_removebonding_member_port,
NULL
}
};
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
},
{
&cmd_set_bonding_primary,
- "set bonding primary (slave_id) (port_id)\n"
- " Set the primary slave for a bonded device.\n",
+ "set bonding primary (member_id) (port_id)\n"
+ " Set the primary member for a bonded device.\n",
},
{
- &cmd_add_bonding_slave,
- "add bonding slave (slave_id) (port_id)\n"
- " Add a slave device to a bonded device.\n",
+ &cmd_add_bonding_member,
+ "add bonding member (member_id) (port_id)\n"
+ " Add a member device to a bonded device.\n",
},
{
- &cmd_remove_bonding_slave,
- "remove bonding slave (slave_id) (port_id)\n"
- " Remove a slave device from a bonded device.\n",
+ &cmd_remove_bonding_member,
+ "remove bonding member (member_id) (port_id)\n"
+ " Remove a member device from a bonded device.\n",
},
{
&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..77892c0601 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
#include "rte_eth_bond_8023ad.h"
#define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
+/** Maximum number of packets to one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS 3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS 1
/**
* Timeouts definitions (5.4.4 in 802.1AX documentation).
*/
@@ -113,7 +113,7 @@ struct port {
enum rte_bond_8023ad_selection selected;
/** Indicates if either allmulti or promisc has been enforced on the
- * slave so that we can receive lacp packets
+ * member so that we can receive lacp packets
*/
#define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
#define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
uint8_t external_sm;
struct rte_ether_addr mac_addr;
- struct rte_eth_link slave_link;
- /***< slave link properties */
+ struct rte_eth_link member_link;
+	/**< member link properties */
/**
* Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
/**
* @internal
*
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
*
* @param dev Bonded interface
* @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
/**
* @internal
*
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
*
* @param dev Bonded interface
* @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
*
* Passes given slow packet to state machines management logic.
* @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
* @param slot_pkt Slow packet.
*/
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt);
+ uint16_t member_id, struct rte_mbuf *pkt);
/**
* @internal
*
- * Appends given slave used slave
+ * Appends given member to the list of used members.
*
* @param dev Bonded interface.
- * @param port_id Slave port ID to be added
+ * @param port_id Member port ID to be added
*
* @return
* 0 on success, negative value otherwise.
*/
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
/**
* @internal
*
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
*
* @param dev Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
*
* @return
* 0 on success, negative value otherwise.
*/
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
/**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
* @param bond_dev Bonded device
*/
void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port);
+ uint16_t member_port);
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
int
bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
#include "eth_bond_8023ad_private.h"
#include "rte_eth_bond_alb.h"
-#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG ("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG ("primary")
#define PMD_BOND_MODE_KVARG ("mode")
#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
/** Port Queue Mapping Structure */
struct bond_rx_queue {
uint16_t queue_id;
- /**< Next active_slave to poll */
- uint16_t active_slave;
+ /**< Next active_member to poll */
+ uint16_t active_member;
/**< Queue Id */
struct bond_dev_private *dev_private;
/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
/**< Copy of TX configuration structure for queue */
};
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
- uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
- uint16_t slave_count; /**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+ uint16_t members[RTE_MAX_ETHPORTS]; /**< Member port id array */
+ uint16_t member_count; /**< Number of members */
};
-struct bond_slave_details {
+struct bond_member_details {
uint16_t port_id;
uint8_t link_status_poll_enabled;
uint8_t link_status_wait_to_complete;
uint8_t last_link_status;
- /**< Port Id of slave eth_dev */
+ /**< Port Id of member eth_dev */
struct rte_ether_addr persisted_mac_addr;
uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next;
- /* Slaves flows */
+ /* Members flows */
struct rte_flow *flows[RTE_MAX_ETHPORTS];
/* Flow description for synchronization */
struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
};
typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
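As a point of reference for the renamed signature, a callback of this type is expected to write one member slot index per packet into the output array. The snippet below is illustrative only (a trivial modulo spread rather than the driver's real l2/l23/l34 policies) and assumes member_count is non-zero:

    static void
    burst_xmit_modulo_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
            uint16_t member_count, uint16_t *members)
    {
        uint16_t i;

        RTE_SET_USED(buf);
        /* Assign each packet to a member slot; real policies hash headers. */
        for (i = 0; i < nb_pkts; i++)
            members[i] = i % member_count;
    }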
/** Link Bonding PMD device private configuration Structure */
struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
rte_spinlock_t lock;
rte_spinlock_t lsc_lock;
- uint16_t primary_port; /**< Primary Slave Port */
- uint16_t current_primary_port; /**< Primary Slave Port */
+ uint16_t primary_port; /**< Primary Member Port */
+ uint16_t current_primary_port; /**< Primary Member Port */
uint16_t user_defined_primary_port;
/**< Flag for whether primary port is user defined or not */
@@ -137,16 +137,16 @@ struct bond_dev_private {
uint16_t nb_rx_queues; /**< Total number of rx queues */
uint16_t nb_tx_queues; /**< Total number of tx queues*/
- uint16_t active_slave_count; /**< Number of active slaves */
- uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
+ uint16_t active_member_count; /**< Number of active members */
+ uint16_t active_members[RTE_MAX_ETHPORTS]; /**< Active member list */
- uint16_t slave_count; /**< Number of bonded slaves */
- struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
- /**< Array of bonded slaves details */
+ uint16_t member_count; /**< Number of bonded members */
+ struct bond_member_details members[RTE_MAX_ETHPORTS];
+ /**< Array of bonded members details */
struct mode8023ad_private mode4;
- uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
- /**< TLB active slaves send order */
+ uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+ /**< TLB active members send order */
struct mode_alb_private mode6;
uint64_t rx_offload_capa; /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
uint8_t rss_key_len; /**< hash key length in bytes. */
struct rte_kvargs *kvlist;
- uint8_t slave_update_idx;
+ uint8_t member_update_idx;
bool kvargs_processing_is_done;
@@ -191,19 +191,21 @@ struct bond_dev_private {
extern const struct eth_dev_ops default_dev_ops;
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
uint16_t pos;
- for (pos = 0; pos < slaves_count; pos++) {
- if (slave_id == slaves[pos])
+ for (pos = 0; pos < members_count; pos++) {
+ if (member_id == members[pos])
break;
}
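Usage note (illustrative only): because the helper returns members_count rather than a negative value when the id is absent, callers compare the result against the array length to detect a miss:

    uint16_t pos = find_member_by_id(internals->active_members,
            internals->active_member_count, member_id);
    if (pos == internals->active_member_count)
        return -EINVAL; /* member_id is not an active member */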
@@ -217,13 +219,13 @@ int
valid_bonded_port_id(uint16_t port_id);
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
int
mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *dst_mac_addr);
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id);
+ uint16_t member_port_id);
int
bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
void *param, void *ret_param);
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args);
int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
const char *value, void *extra_args);
int
@@ -323,7 +325,7 @@ void
bond_tlb_enable(struct bond_dev_private *internals);
void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
int
bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
*
* RTE Link Bonding Ethernet Device
* Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
* these interfaces based on the mode of operation specified and supported.
* This implementation supports 4 modes of operation round robin, active backup
* balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
#define BONDING_MODE_ROUND_ROBIN (0)
/**< Round Robin (Mode 0).
* In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
#define BONDING_MODE_ACTIVE_BACKUP (1)
/**< Active Backup (Mode 1).
* In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available members. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
#define BONDING_MODE_BALANCE (2)
/**< Balance (Mode 2).
* In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
* See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
#define BONDING_MODE_BROADCAST (3)
/**< Broadcast (Mode 3).
* In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
#define BONDING_MODE_8023AD (4)
/**< 802.3AD (Mode 4).
*
@@ -62,22 +66,22 @@ extern "C" {
* be handled with the expected latency and this may cause the link status to be
* incorrectly marked as down or failure to correctly negotiate with peers.
* - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least twice the member count.
*/
#define BONDING_MODE_TLB (5)
/**< Adaptive TLB (Mode 5)
* This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
#define BONDING_MODE_ALB (6)
/**< Adaptive Load Balancing (Mode 6)
* This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
* bonding driver intercepts ARP replies send by local system and overwrites its
* source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When local system sends ARP request, it saves IP
* information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the member MACs is assigned and an ARP reply is sent to that peer.
*/
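For orientation, a minimal setup sequence with the renamed API might look as follows; the device name, socket and member port ids are placeholders and error handling is reduced to rte_exit():

    /* Illustrative only: create an 802.3AD bond and attach two members. */
    int bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_8023AD,
            rte_socket_id());
    if (bond_port < 0)
        rte_exit(EXIT_FAILURE, "Failed to create bonded device\n");

    if (rte_eth_bond_member_add(bond_port, 0) != 0 ||
            rte_eth_bond_member_add(bond_port, 1) != 0)
        rte_exit(EXIT_FAILURE, "Failed to add member ports\n");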
/* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
rte_eth_bond_free(const char *name);
/**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
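Migration for applications is a straight rename; the deprecated static inline above keeps existing callers building (with a compile-time warning) while they move over, for example:

    /* old call, now flagged by __rte_deprecated at build time */
    ret = rte_eth_bond_slave_add(bond_port_id, member_port_id);
    /* preferred replacement */
    ret = rte_eth_bond_member_add(bond_port_id, member_port_id);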
/**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
/**
* Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
rte_eth_bond_mode_get(uint16_t bonded_port_id);
/**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
/**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
*
* @return
- * Port Id of primary slave on success, -1 on failure
+ * Port Id of primary member on success, -1 on failure
*/
int
rte_eth_bond_primary_get(uint16_t bonded_port_id);
/**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of slaves associated with bonded device on success,
+ * Number of members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
/**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
* device.
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of active slaves associated with bonded device on success,
+ * Number of active members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
/**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
*
* @param bonded_port_id Port ID of bonded device.
* @param mac_addr MAC Address to use on bonded device overriding
- * slaves MAC addresses
+ * members MAC addresses
*
* @return
* 0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
struct rte_ether_addr *mac_addr);
/**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
*
* @param bonded_port_id Port ID of bonded device.
*
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
/**
* Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
*
* @param bonded_port_id Port ID of bonded device.
* @param internal_ms Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
/**
* Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
*
* @param bonded_port_id Port ID of bonded device.
*
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..ac9f414e74 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
#define MODE4_DEBUG(fmt, ...) \
rte_log(RTE_LOG_DEBUG, bond_logtype, \
"%6u [Port %u: %s] " fmt, \
- bond_dbg_get_time_diff_ms(), slave_id, \
+ bond_dbg_get_time_diff_ms(), member_id, \
__func__, ##__VA_ARGS__)
static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
}
static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
uint8_t warnings;
do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
if (warnings & WRN_RX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+ "Member %u: failed to enqueue LACP packet into RX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will notwork correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_TX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+ "Member %u: failed to enqueue LACP packet into TX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_RX_MARKER_TO_FAST)
- RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: marker to early - ignoring.",
+ member_id);
if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
RTE_BOND_LOG(INFO,
- "Slave %u: ignoring unknown slow protocol frame type",
- slave_id);
+ "Member %u: ignoring unknown slow protocol frame type",
+ member_id);
}
if (warnings & WRN_UNKNOWN_MARKER_TYPE)
- RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+ member_id);
if (warnings & WRN_NOT_LACP_CAPABLE)
- MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+ MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
}
static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
* @param port Port on which LACPDU was received.
*/
static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
struct lacpdu *lacp)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
uint64_t timeout;
if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
* @param port Port to handle state machine.
*/
static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Calculate if either site is LACP enabled */
uint64_t timeout;
uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port Port to handle state machine.
*/
static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Save current state for later use */
const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing started.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing started.",
+ internals->port_id, member_id);
}
} else {
if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing stopped.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing stopped.",
+ internals->port_id, member_id);
}
}
}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port
*/
static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
struct rte_mbuf *lacp_pkt = NULL;
struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
/* Source and destination MAC */
rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
return;
}
} else {
- uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+ uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, 1);
- pkts_sent = rte_eth_tx_burst(slave_id,
+ pkts_sent = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, pkts_sent);
if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
* @param port_pos Port to assign.
*/
static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
{
struct port *agg, *port;
- uint16_t slaves_count, new_agg_id, i, j = 0;
- uint16_t *slaves;
+ uint16_t members_count, new_agg_id, i, j = 0;
+ uint16_t *members;
uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
- uint16_t default_slave = 0;
+ uint16_t default_member = 0;
struct rte_eth_link link_info;
uint16_t agg_new_idx = 0;
int ret;
- slaves = internals->active_slaves;
- slaves_count = internals->active_slave_count;
- port = &bond_mode_8023ad_ports[slave_id];
+ members = internals->active_members;
+ members_count = internals->active_member_count;
+ port = &bond_mode_8023ad_ports[member_id];
/* Search for aggregator suitable for this port */
- for (i = 0; i < slaves_count; ++i) {
- agg = &bond_mode_8023ad_ports[slaves[i]];
+ for (i = 0; i < members_count; ++i) {
+ agg = &bond_mode_8023ad_ports[members[i]];
/* Skip ports that are not aggregators */
- if (agg->aggregator_port_id != slaves[i])
+ if (agg->aggregator_port_id != members[i])
continue;
- ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+ ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slaves[i], rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ members[i], rte_strerror(-ret));
continue;
}
agg_count[i] += 1;
agg_bandwidth[i] += link_info.link_speed;
- /* Actors system ID is not checked since all slave device have the same
+ /* Actor's system ID is not checked since all member devices have the same
* ID (MAC address). */
if ((agg->actor.key == port->actor.key &&
agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
if (j == 0)
- default_slave = i;
+ default_member = i;
j++;
}
}
switch (internals->mode4.agg_selection) {
case AGG_COUNT:
- agg_new_idx = max_index(agg_count, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_count, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_BANDWIDTH:
- agg_new_idx = max_index(agg_bandwidth, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_bandwidth, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_STABLE:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
default:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
}
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
MODE4_DEBUG("-> SELECTED: ID=%3u\n"
"\t%s aggregator ID=%3u\n",
port->aggregator_port_id,
- port->aggregator_port_id == slave_id ?
+ port->aggregator_port_id == member_id ?
"aggregator not found, using default" : "aggregator found",
port->aggregator_port_id);
}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
}
static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
struct rte_mbuf *lacp_pkt) {
struct lacpdu_header *lacp;
struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
partner = &lacp->lacpdu.partner;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
/* This LACP frame is sending to the bonding port
* so pass it to rx_machine.
*/
- rx_machine(internals, slave_id, &lacp->lacpdu);
+ rx_machine(internals, member_id, &lacp->lacpdu);
} else {
char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
}
rte_pktmbuf_free(lacp_pkt);
} else
- rx_machine(internals, slave_id, NULL);
+ rx_machine(internals, member_id, NULL);
}
static void
bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
- uint16_t slave_id)
+ uint16_t member_id)
{
#define DEDICATED_QUEUE_BURST_SIZE 32
struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ uint16_t rx_count = rte_eth_rx_burst(member_id,
internals->mode4.dedicated_queues.rx_qid,
lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
uint16_t i;
for (i = 0; i < rx_count; i++)
- bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ bond_mode_8023ad_handle_slow_pkt(internals, member_id,
lacp_pkt[i]);
} else {
- rx_machine_update(internals, slave_id, NULL);
+ rx_machine_update(internals, member_id, NULL);
}
}
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
struct bond_dev_private *internals = bond_dev->data->dev_private;
struct port *port;
struct rte_eth_link link_info;
- struct rte_ether_addr slave_addr;
+ struct rte_ether_addr member_addr;
struct rte_mbuf *lacp_pkt = NULL;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
/* Update link status on each port */
- for (i = 0; i < internals->active_slave_count; i++) {
+ for (i = 0; i < internals->active_member_count; i++) {
uint16_t key;
int ret;
- slave_id = internals->active_slaves[i];
- ret = rte_eth_link_get_nowait(slave_id, &link_info);
+ member_id = internals->active_members[i];
+ ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_id, rte_strerror(-ret));
}
if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
key = 0;
}
- rte_eth_macaddr_get(slave_id, &slave_addr);
- port = &bond_mode_8023ad_ports[slave_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
+ port = &bond_mode_8023ad_ports[member_id];
key = rte_cpu_to_be_16(key);
if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, NTT);
}
- if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
- rte_ether_addr_copy(&slave_addr, &port->actor.system);
- if (port->aggregator_port_id == slave_id)
+ if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+ rte_ether_addr_copy(&member_addr, &port->actor.system);
+ if (port->aggregator_port_id == member_id)
SM_FLAG_SET(port, NTT);
}
}
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if ((port->actor.key &
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (retval != 0)
lacp_pkt = NULL;
- rx_machine_update(internals, slave_id, lacp_pkt);
+ rx_machine_update(internals, member_id, lacp_pkt);
} else {
bond_mode_8023ad_dedicated_rxq_process(internals,
- slave_id);
+ member_id);
}
- periodic_machine(internals, slave_id);
- mux_machine(internals, slave_id);
- tx_machine(internals, slave_id);
- selection_logic(internals, slave_id);
+ periodic_machine(internals, member_id);
+ mux_machine(internals, member_id);
+ tx_machine(internals, member_id);
+ selection_logic(internals, member_id);
SM_FLAG_CLR(port, BEGIN);
- show_warnings(slave_id);
+ show_warnings(member_id);
}
rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
}
static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
{
int ret;
- ret = rte_eth_allmulticast_enable(slave_id);
+ ret = rte_eth_allmulticast_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_allmulticast_get(slave_id)) {
+ if (rte_eth_allmulticast_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_ALLMULTI;
return 0;
}
- ret = rte_eth_promiscuous_enable(slave_id);
+ ret = rte_eth_promiscuous_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_promiscuous_get(slave_id)) {
+ if (rte_eth_promiscuous_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_PROMISC;
return 0;
}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
}
static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
{
int ret;
- switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+ switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
case BOND_8023AD_FORCED_ALLMULTI:
- RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
- ret = rte_eth_allmulticast_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+ ret = rte_eth_allmulticast_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
case BOND_8023AD_FORCED_PROMISC:
- RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
- ret = rte_eth_promiscuous_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+ ret = rte_eth_promiscuous_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
}
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
- uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+ uint16_t member_id)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct port_params initial = {
.system = { { 0 } },
.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
struct bond_tx_queue *bd_tx_q;
uint16_t q_id;
- /* Given slave mus not be in active list */
- RTE_ASSERT(find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) == internals->active_slave_count);
+ /* Given member must not be in the active list */
+ RTE_ASSERT(find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) == internals->active_member_count);
RTE_SET_USED(internals); /* used only for assert when enabled */
memcpy(&port->actor, &initial, sizeof(struct port_params));
/* Standard requires that port ID must be grater than 0.
* Add 1 do get corresponding port_number */
- port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+ port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
memcpy(&port->partner, &initial, sizeof(struct port_params));
memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
port->sm_flags = SM_FLAGS_BEGIN;
/* use this port as aggregator */
- port->aggregator_port_id = slave_id;
+ port->aggregator_port_id = member_id;
- if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
- RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
- slave_id);
+ if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+ RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+ member_id);
}
timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_ASSERT(port->rx_ring == NULL);
RTE_ASSERT(port->tx_ring == NULL);
- socket_id = rte_eth_dev_socket_id(slave_id);
+ socket_id = rte_eth_dev_socket_id(member_id);
if (socket_id == -1)
socket_id = rte_socket_id();
element_size = sizeof(struct slow_protocol_frame) +
RTE_PKTMBUF_HEADROOM;
- /* The size of the mempool should be at least:
- * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
- total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+ /*
+ * The size of the mempool should be at least:
+ * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+ */
+ total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
total_tx_desc += bd_tx_q->nb_tx_desc;
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
/* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
port->rx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
if (port->rx_ring == NULL) {
- rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
/* TX ring is at least one pkt longer to make room for marker packet. */
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
port->tx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
if (port->tx_ring == NULL) {
- rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
}
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
- uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+ uint16_t member_id)
{
void *pkt = NULL;
struct port *port = NULL;
uint8_t old_partner_state;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
ACTOR_STATE_CLR(port, AGGREGATION);
port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
old_partner_state = port->partner_state;
record_default(port);
- bond_mode_8023ad_unregister_lacp_mac(slave_id);
+ bond_mode_8023ad_unregister_lacp_mac(member_id);
/* If partner timeout state changes then disable timer */
if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct rte_ether_addr slave_addr;
- struct port *slave, *agg_slave;
- uint16_t slave_id, i, j;
+ struct rte_ether_addr member_addr;
+ struct port *member, *agg_member;
+ uint16_t member_id, i, j;
bond_mode_8023ad_stop(bond_dev);
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- slave = &bond_mode_8023ad_ports[slave_id];
- rte_eth_macaddr_get(slave_id, &slave_addr);
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ member = &bond_mode_8023ad_ports[member_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
- if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+ if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
continue;
- rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+ rte_ether_addr_copy(&member_addr, &member->actor.system);
/* Do nothing if this port is not an aggregator. In other case
* Set NTT flag on every port that use this aggregator. */
- if (slave->aggregator_port_id != slave_id)
+ if (member->aggregator_port_id != member_id)
continue;
- for (j = 0; j < internals->active_slave_count; j++) {
- agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
- if (agg_slave->aggregator_port_id == slave_id)
- SM_FLAG_SET(agg_slave, NTT);
+ for (j = 0; j < internals->active_member_count; j++) {
+ agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+ if (agg_member->aggregator_port_id == member_id)
+ SM_FLAG_SET(agg_member, NTT);
}
}
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
struct bond_dev_private *internals = bond_dev->data->dev_private;
uint16_t i;
- for (i = 0; i < internals->active_slave_count; i++)
- bond_mode_8023ad_activate_slave(bond_dev,
- internals->active_slaves[i]);
+ for (i = 0; i < internals->active_member_count; i++)
+ bond_mode_8023ad_activate_member(bond_dev,
+ internals->active_members[i]);
return 0;
}
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt)
+ uint16_t member_id, struct rte_mbuf *pkt)
{
struct mode8023ad_private *mode4 = &internals->mode4;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct marker_header *m_hdr;
uint64_t marker_timer, old_marker_timer;
int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
}
} else {
/* Send packet directly to the slow queue */
- uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+ uint16_t tx_count = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, 1);
- tx_count = rte_eth_tx_burst(slave_id,
+ tx_count = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, tx_count);
if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
goto free_out;
}
} else
- rx_machine_update(internals, slave_id, pkt);
+ rx_machine_update(internals, member_id, pkt);
} else {
wrn = WRN_UNKNOWN_SLOW_TYPE;
goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *info)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
bond_dev = &rte_eth_devices[port_id];
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
info->selected = port->selected;
info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
}
static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
return -EINVAL;
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
}
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, DISTRIBUTING);
}
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, COLLECTING);
}
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
struct mode8023ad_private *mode4 = &internals->mode4;
struct port *port;
void *pkt = NULL;
- uint16_t i, slave_id;
+ uint16_t i, member_id;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
/* This is LACP frame so pass it to rx callback.
* Callback is responsible for freeing mbuf.
*/
- mode4->slowrx_cb(slave_id, lacp_pkt);
+ mode4->slowrx_cb(member_id, lacp_pkt);
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
struct rte_mbuf *lacp_pkt);
enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
uint16_t system_priority;
/**< System priority (unused in current implementation) */
struct rte_ether_addr system;
- /**< System ID - Slave MAC address, same as bonding MAC address */
+ /**< System ID - Member MAC address, same as bonding MAC address */
uint16_t key;
/**< Speed information (implementation dependent) and duplex. */
uint16_t port_priority;
/**< Priority of this (unused in current implementation) */
uint16_t port_number;
- /**< Port number. It corresponds to slave port id. */
+ /**< Port number. It corresponds to member port id. */
} __rte_packed __rte_aligned(2);
struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
enum rte_bond_8023ad_agg_selection agg_selection;
};
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
enum rte_bond_8023ad_selection selected;
uint8_t actor_state;
struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
/**
* @internal
*
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
*
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param conf buffer for configuration
* @return
* 0 - if ok
- * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ * -EINVAL if conf is NULL or member id is invalid (not a member of given
* bonded device or is not inactive).
*/
+__rte_experimental
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf)
+{
+ return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
#ifdef __cplusplus
}
#endif
/**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when collection enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
/**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when distribution enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
/**
* LACPDU transmit path for external 802.3ad state machine. Caller retains
* ownership of the packet on failure.
*
* @param port_id Bonding device id
- * @param slave_id Port ID of valid slave device.
+ * @param member_id Port ID of valid member device.
* @param lacp_pkt mbuf containing LACPDU.
*
* @return
* 0 on success, negative value otherwise.
*/
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt);
/**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
*
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
* dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
* for processing in the LACP state machine, this removes the need to filter
* these packets in the bonded devices data path. The additional tx queue is
* used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded devices data path.
*
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
* filter rule required for rx and have enough queues that one rx and tx queue
* can be reserved for the LACP state machines control packets.
*
@@ -296,7 +305,7 @@ int
rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
/**
- * Disable slow queue on slaves
+ * Disable slow queue on members
*
* This function disables hardware slow packet filter.
*
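For context on the renamed external 802.3ad control API above, here is a minimal usage sketch. It is illustrative only: bond_port, member_port and lacpdu are assumed to be a valid bonded port, an attached member and a prepared LACPDU mbuf, and rte_eth_bond_8023ad_member_info() is experimental, so the application must build with ALLOW_EXPERIMENTAL_API.

#include <rte_mbuf.h>
#include <rte_eth_bond_8023ad.h>

static void
ext_mode4_example(uint16_t bond_port, uint16_t member_port,
		struct rte_mbuf *lacpdu)
{
	struct rte_eth_bond_8023ad_member_info info;

	/* Query the member's current actor/partner state. */
	if (rte_eth_bond_8023ad_member_info(bond_port, member_port, &info) != 0)
		return;

	/* Mark the member as collecting and distributing. */
	rte_eth_bond_8023ad_ext_collect(bond_port, member_port, 1);
	rte_eth_bond_8023ad_ext_distrib(bond_port, member_port, 1);

	/* Hand the LACPDU to the member's slow-TX path; the caller keeps
	 * ownership of the mbuf on failure, so free it here. */
	if (rte_eth_bond_8023ad_ext_slowtx(bond_port, member_port, lacpdu) != 0)
		rte_pktmbuf_free(lacpdu);
}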
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
}
static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
{
uint16_t idx;
- idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
- internals->mode6.last_slave = idx;
- return internals->active_slaves[idx];
+ idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+ internals->mode6.last_member = idx;
+ return internals->active_members[idx];
}
int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
/* Fill hash table with initial values */
memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
rte_spinlock_init(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
internals->mode6.ntt = 0;
/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
/*
* We got reply for ARP Request send by the application. We need to
* update client table when received data differ from what is stored
- * in ALB table and issue sending update packet to that slave.
+ * in ALB table and issue sending update packet to that member.
*/
rte_spinlock_lock(&internals->mode6.lock);
if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
client_info->cli_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_sha,
&client_info->cli_mac);
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
&arp->arp_data.arp_tha,
&client_info->cli_mac);
}
- rte_eth_macaddr_get(client_info->slave_idx,
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
}
- /* Assign new slave to this client and update src mac in ARP */
+ /* Assign new member to this client and update src mac in ARP */
client_info->in_use = 1;
client_info->ntt = 0;
client_info->app_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_tha,
&client_info->cli_mac);
client_info->cli_ip = arp->arp_data.arp_tip;
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
{
struct rte_ether_hdr *eth_h;
struct rte_arp_hdr *arp_h;
- uint16_t slave_idx;
+ uint16_t member_idx;
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
arp_h->arp_plen = sizeof(uint32_t);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
- slave_idx = client_info->slave_idx;
+ member_idx = client_info->member_idx;
rte_spinlock_unlock(&internals->mode6.lock);
- return slave_idx;
+ return member_idx;
}
void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
int i;
- /* If active slave count is 0, it's pointless to refresh alb table */
- if (internals->active_slave_count <= 0)
+ /* If active member count is 0, it's pointless to refresh alb table */
+ if (internals->active_member_count <= 0)
return;
rte_spinlock_lock(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
internals->mode6.ntt = 1;
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
uint32_t cli_ip;
/**< Client IP address */
- uint16_t slave_idx;
- /**< Index of slave on which we connect with that client */
+ uint16_t member_idx;
+ /**< Index of member on which we connect with that client */
uint8_t in_use;
/**< Flag indicating if entry in client table is currently used */
uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
/**< Mempool for creating ARP update packets */
uint8_t ntt;
/**< Flag indicating if we need to send update to any client on next tx */
- uint32_t last_slave;
- /**< Index of last used slave in client table */
+ uint32_t last_member;
+ /**< Index of last used member in client table */
rte_spinlock_t lock;
};
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
struct bond_dev_private *internals);
/**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. If packet is ARP Request, it is sent on primary member.
+ * If it is ARP Reply, it is sent on member stored in client table for that
* connection. On Reply function also updates data in client table.
*
* @param eth_h ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_upd(struct client_data *client_info,
struct rte_mbuf *pkt, struct bond_dev_private *internals);
/**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
*
* @param bond_dev Pointer to bonded device struct.
*/
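The member selection referenced above is a plain round robin over the active member list; the following standalone sketch mirrors calculate_member() from rte_eth_bond_alb.c for illustration only (the helper name is made up and not part of the driver):

#include <stdint.h>

/* Illustrative only: pick the next active member in round-robin order,
 * as mode 6 does when it assigns a member to a new ARP client. */
static uint16_t
pick_next_member(uint32_t *last_idx, const uint16_t *active_members,
		uint16_t active_member_count)
{
	uint16_t idx = (*last_idx + 1) % active_member_count;

	*last_idx = idx;
	return active_members[idx];
}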
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
}
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
{
int i;
struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- /* Check if any of slave devices is a bonded device */
- for (i = 0; i < internals->slave_count; i++)
- if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+ /* Check if any of member devices is a bonded device */
+ for (i = 0; i < internals->member_count; i++)
+ if (valid_bonded_port_id(internals->members[i].port_id) == 0)
return 1;
return 0;
}
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
{
- RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
- /* Verify that slave_port_id refers to a non bonded port */
- if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+ /* Verify that member_port_id refers to a non bonded port */
+ if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
internals->mode == BONDING_MODE_8023AD) {
- RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
- " mode as slave is also a bonded device, only "
+ RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+ " mode as member is also a bonded device, only "
"physical devices can be support in this mode.");
return -1;
}
- if (internals->port_id == slave_port_id) {
+ if (internals->port_id == member_port_id) {
RTE_BOND_LOG(ERR,
- "Cannot add the bonded device itself as its slave.");
+ "Cannot add the bonded device itself as its member.");
return -1;
}
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
}
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD)
- bond_mode_8023ad_activate_slave(eth_dev, port_id);
+ bond_mode_8023ad_activate_member(eth_dev, port_id);
if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB) {
- internals->tlb_slaves_order[active_count] = port_id;
+ internals->tlb_members_order[active_count] = port_id;
}
- RTE_ASSERT(internals->active_slave_count <
- (RTE_DIM(internals->active_slaves) - 1));
+ RTE_ASSERT(internals->active_member_count <
+ (RTE_DIM(internals->active_members) - 1));
- internals->active_slaves[internals->active_slave_count] = port_id;
- internals->active_slave_count++;
+ internals->active_members[internals->active_member_count] = port_id;
+ internals->active_member_count++;
if (internals->mode == BONDING_MODE_TLB)
- bond_tlb_activate_slave(internals);
+ bond_tlb_activate_member(internals);
if (internals->mode == BONDING_MODE_ALB)
bond_mode_alb_client_list_upd(eth_dev);
}
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
- uint16_t slave_pos;
+ uint16_t member_pos;
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD) {
bond_mode_8023ad_stop(eth_dev);
- bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+ bond_mode_8023ad_deactivate_member(eth_dev, port_id);
} else if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB)
bond_tlb_disable(internals);
- slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+ member_pos = find_member_by_id(internals->active_members, active_count,
port_id);
- /* If slave was not at the end of the list
- * shift active slaves up active array list */
- if (slave_pos < active_count) {
+ /*
+ * If member was not at the end of the list
+ * shift active members up active array list.
+ */
+ if (member_pos < active_count) {
active_count--;
- memmove(internals->active_slaves + slave_pos,
- internals->active_slaves + slave_pos + 1,
- (active_count - slave_pos) *
- sizeof(internals->active_slaves[0]));
+ memmove(internals->active_members + member_pos,
+ internals->active_members + member_pos + 1,
+ (active_count - member_pos) *
+ sizeof(internals->active_members[0]));
}
- RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
- internals->active_slave_count = active_count;
+ RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+ internals->active_member_count = active_count;
if (eth_dev->data->dev_started) {
if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
}
static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
if (unlikely(slab & mask)) {
uint16_t vlan_id = pos + i;
- res = rte_eth_dev_vlan_filter(slave_port_id,
+ res = rte_eth_dev_vlan_filter(member_port_id,
vlan_id, 1);
}
}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
{
struct rte_flow *flow;
struct rte_flow_error ferror;
- uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+ uint16_t member_port_id = internals->members[member_id].port_id;
if (internals->flow_isolated_valid != 0) {
- if (rte_eth_dev_stop(slave_port_id) != 0) {
+ if (rte_eth_dev_stop(member_port_id) != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_port_id);
+ member_port_id);
return -1;
}
- if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+ if (rte_flow_isolate(member_port_id, internals->flow_isolated,
&ferror)) {
- RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
- " %d: %s", slave_id, ferror.message ?
+ RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+ " %d: %s", member_id, ferror.message ?
ferror.message : "(no stated reason)");
return -1;
}
}
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- flow->flows[slave_id] = rte_flow_create(slave_port_id,
+ flow->flows[member_id] = rte_flow_create(member_port_id,
flow->rule.attr,
flow->rule.pattern,
flow->rule.actions,
&ferror);
- if (flow->flows[slave_id] == NULL) {
- RTE_BOND_LOG(ERR, "Cannot create flow for slave"
- " %d: %s", slave_id,
+ if (flow->flows[member_id] == NULL) {
+ RTE_BOND_LOG(ERR, "Cannot create flow for member"
+ " %d: %s", member_id,
ferror.message ? ferror.message :
"(no stated reason)");
- /* Destroy successful bond flows from the slave */
+ /* Destroy successful bond flows from the member */
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- flow->flows[slave_id],
+ if (flow->flows[member_id] != NULL) {
+ rte_flow_destroy(member_port_id,
+ flow->flows[member_id],
&ferror);
- flow->flows[slave_id] = NULL;
+ flow->flows[member_id] = NULL;
}
}
return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
}
static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
internals->reta_size = di->reta_size;
internals->rss_key_len = di->hash_key_size;
- /* Inherit Rx offload capabilities from the first slave device */
+ /* Inherit Rx offload capabilities from the first member device */
internals->rx_offload_capa = di->rx_offload_capa;
internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
- /* Inherit maximum Rx packet size from the first slave device */
+ /* Inherit maximum Rx packet size from the first member device */
internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
- /* Inherit default Rx queue settings from the first slave device */
+ /* Inherit default Rx queue settings from the first member device */
memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
rxconf_i->rx_thresh.pthresh = 0;
rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
/* Setting this to zero should effectively enable default values */
rxconf_i->rx_free_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
rxconf_i->rx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
- /* Inherit Tx offload capabilities from the first slave device */
+ /* Inherit Tx offload capabilities from the first member device */
internals->tx_offload_capa = di->tx_offload_capa;
internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
- /* Inherit default Tx queue settings from the first slave device */
+ /* Inherit default Tx queue settings from the first member device */
memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
txconf_i->tx_thresh.pthresh = 0;
txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
/*
* Setting these parameters to zero assumes that default
- * values will be configured implicitly by slave devices.
+ * values will be configured implicitly by member devices.
*/
txconf_i->tx_free_thresh = 0;
txconf_i->tx_rs_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
txconf_i->tx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
/*
- * If at least one slave device suggests enabling this
- * setting by default, enable it for all slave devices
+ * If at least one member device suggests enabling this
+ * setting by default, enable it for all member devices
* since disabling it may not be necessarily supported.
*/
if (rxconf->rx_drop_en == 1)
rxconf_i->rx_drop_en = 1;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal rx_queue_offload_capa
* value. Thus, the new internal value of default Rx queue offloads
* has to be masked by rx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
internals->rx_queue_offload_capa;
/*
- * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+ * RETA size is GCD of all members RETA sizes, so, if all sizes will be
* the power of 2, the lower one is GCD
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
if (internals->rss_key_len > di->hash_key_size) {
- RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ RTE_BOND_LOG(WARNING, "member has different rss key size, "
"configuring rss may fail");
internals->rss_key_len = di->hash_key_size;
}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
}
static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal tx_queue_offload_capa
* value. Thus, the new internal value of default Tx queue offloads
* has to be masked by tx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
internals->tx_queue_offload_capa;
}
static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
- memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+ memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
}
static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
- slave_desc_lim->nb_max);
+ member_desc_lim->nb_max);
bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
- slave_desc_lim->nb_min);
+ member_desc_lim->nb_min);
bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
- slave_desc_lim->nb_align);
+ member_desc_lim->nb_align);
if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
}
/* Treat maximum number of segments equal to 0 as unspecified */
- if (slave_desc_lim->nb_seg_max != 0 &&
+ if (member_desc_lim->nb_seg_max != 0 &&
(bond_desc_lim->nb_seg_max == 0 ||
- slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
- bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
- if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+ member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+ bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+ if (member_desc_lim->nb_mtu_seg_max != 0 &&
(bond_desc_lim->nb_mtu_seg_max == 0 ||
- slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
- bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+ member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+ bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
return 0;
}
static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
{
- struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+ struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
struct bond_dev_private *internals;
struct rte_eth_link link_props;
struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
- RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+ RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
return -1;
}
- ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+ ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port_id, strerror(-ret));
+ __func__, member_port_id, strerror(-ret));
return ret;
}
if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
- RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
- slave_port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+ member_port_id);
return -1;
}
- slave_add(internals, slave_eth_dev);
+ member_add(internals, member_eth_dev);
- /* We need to store slaves reta_size to be able to synchronize RETA for all
- * slave devices even if its sizes are different.
+ /* We need to store members reta_size to be able to synchronize RETA for all
+ * member devices even if their sizes are different.
*/
- internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+ internals->members[internals->member_count].reta_size = dev_info.reta_size;
- if (internals->slave_count < 1) {
- /* if MAC is not user defined then use MAC of first slave add to
+ if (internals->member_count < 1) {
+ /* if MAC is not user defined then use MAC of first member add to
* bonded device */
if (!internals->user_defined_mac) {
if (mac_address_set(bonded_eth_dev,
- slave_eth_dev->data->mac_addrs)) {
+ member_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to set MAC address");
return -1;
}
}
- /* Make primary slave */
- internals->primary_port = slave_port_id;
- internals->current_primary_port = slave_port_id;
+ /* Make primary member */
+ internals->primary_port = member_port_id;
+ internals->current_primary_port = member_port_id;
internals->speed_capa = dev_info.speed_capa;
- /* Inherit queues settings from first slave */
- internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
- internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+ /* Inherit queues settings from first member */
+ internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+ internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
- eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
- eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
&dev_info.rx_desc_lim);
- eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
&dev_info.tx_desc_lim);
} else {
int ret;
internals->speed_capa &= dev_info.speed_capa;
- eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->rx_desc_lim, &dev_info.rx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+ &dev_info.rx_desc_lim);
if (ret != 0)
return ret;
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->tx_desc_lim, &dev_info.tx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+ &dev_info.tx_desc_lim);
if (ret != 0)
return ret;
}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
- if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
- RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
- slave_port_id);
+ if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+ member_port_id);
return -1;
}
- /* Add additional MAC addresses to the slave */
- if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
- RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
- slave_port_id);
+ /* Add additional MAC addresses to the member */
+ if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+ member_port_id);
return -1;
}
- internals->slave_count++;
+ internals->member_count++;
if (bonded_eth_dev->data->dev_started) {
- if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
- slave_port_id);
+ if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+ member_port_id);
return -1;
}
- if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
- slave_port_id);
+ if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+ member_port_id);
return -1;
}
}
- /* Update all slave devices MACs */
- mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MACs */
+ mac_address_members_update(bonded_eth_dev);
/* Register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
- /* If bonded device is started then we can add the slave to our active
- * slave array */
+ /*
+ * If bonded device is started then we can add the member to our active
+ * member array.
+ */
if (bonded_eth_dev->data->dev_started) {
- ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+ ret = rte_eth_link_get_nowait(member_port_id, &link_props);
if (ret < 0) {
- rte_eth_dev_callback_unregister(slave_port_id,
+ rte_eth_dev_callback_unregister(member_port_id,
RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&bonded_eth_dev->data->port_id);
- internals->slave_count--;
+ internals->member_count--;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_port_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_port_id, rte_strerror(-ret));
return -1;
}
if (link_props.link_status == RTE_ETH_LINK_UP) {
- if (internals->active_slave_count == 0 &&
+ if (internals->active_member_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
- slave_port_id);
+ member_port_id);
}
}
- /* Add slave details to bonded device */
- slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+ /* Add member details to bonded device */
+ member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
- slave_vlan_filter_set(bonded_port_id, slave_port_id);
+ member_vlan_filter_set(bonded_port_id, member_port_id);
return 0;
}
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
- uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+ uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct rte_flow_error flow_error;
struct rte_flow *flow;
- int i, slave_idx;
+ int i, member_idx;
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) < 0)
+ if (valid_member_port_id(internals, member_port_id) < 0)
return -1;
- /* first remove from active slave list */
- slave_idx = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_port_id);
+ /* first remove from active member list */
+ member_idx = find_member_by_id(internals->active_members,
+ internals->active_member_count, member_port_id);
- if (slave_idx < internals->active_slave_count)
- deactivate_slave(bonded_eth_dev, slave_port_id);
+ if (member_idx < internals->active_member_count)
+ deactivate_member(bonded_eth_dev, member_port_id);
- slave_idx = -1;
- /* now find in slave list */
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == slave_port_id) {
- slave_idx = i;
+ member_idx = -1;
+ /* now find in member list */
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == member_port_id) {
+ member_idx = i;
break;
}
- if (slave_idx < 0) {
- RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
- internals->slave_count);
+ if (member_idx < 0) {
+ RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+ internals->member_count);
return -1;
}
/* Un-register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&rte_eth_devices[bonded_port_id].data->port_id);
- /* Restore original MAC address of slave device */
- rte_eth_dev_default_mac_addr_set(slave_port_id,
- &(internals->slaves[slave_idx].persisted_mac_addr));
+ /* Restore original MAC address of member device */
+ rte_eth_dev_default_mac_addr_set(member_port_id,
+ &internals->members[member_idx].persisted_mac_addr);
- /* remove additional MAC addresses from the slave */
- slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+ /* remove additional MAC addresses from the member */
+ member_remove_mac_addresses(bonded_eth_dev, member_port_id);
/*
- * Remove bond device flows from slave device.
+ * Remove bond device flows from member device.
* Note: don't restore flow isolate mode.
*/
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_idx] != NULL) {
- rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+ if (flow->flows[member_idx] != NULL) {
+ rte_flow_destroy(member_port_id, flow->flows[member_idx],
&flow_error);
- flow->flows[slave_idx] = NULL;
+ flow->flows[member_idx] = NULL;
}
}
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- slave_remove(internals, slave_eth_dev);
- slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ member_remove(internals, member_eth_dev);
+ member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
- /* first slave in the active list will be the primary by default,
+ /* first member in the active list will be the primary by default,
* otherwise use first device in list */
- if (internals->current_primary_port == slave_port_id) {
- if (internals->active_slave_count > 0)
- internals->current_primary_port = internals->active_slaves[0];
- else if (internals->slave_count > 0)
- internals->current_primary_port = internals->slaves[0].port_id;
+ if (internals->current_primary_port == member_port_id) {
+ if (internals->active_member_count > 0)
+ internals->current_primary_port = internals->active_members[0];
+ else if (internals->member_count > 0)
+ internals->current_primary_port = internals->members[0].port_id;
else
internals->primary_port = 0;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
}
- if (internals->active_slave_count < 1) {
- /* if no slaves are any longer attached to bonded device and MAC is not
+ if (internals->active_member_count < 1) {
+ /*
+ * if no members remain attached to the bonded device and MAC is not
* user defined then clear MAC of bonded device as it will be reset
- * when a new slave is added */
- if (internals->slave_count < 1 && !internals->user_defined_mac)
+ * when a new member is added.
+ */
+ if (internals->member_count < 1 && !internals->user_defined_mac)
memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
}
- if (internals->slave_count == 0) {
+ if (internals->member_count == 0) {
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
}
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
- if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+ if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
mode == BONDING_MODE_8023AD)
return -1;
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
}
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct bond_dev_private *internals;
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
internals->user_defined_primary_port = 1;
- internals->primary_port = slave_port_id;
+ internals->primary_port = member_port_id;
- bond_ethdev_primary_set(internals, slave_port_id);
+ bond_ethdev_primary_set(internals, member_port_id);
return 0;
}
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count < 1)
+ if (internals->member_count < 1)
return -1;
return internals->current_primary_port;
}
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count > len)
+ if (internals->member_count > len)
return -1;
- for (i = 0; i < internals->slave_count; i++)
- slaves[i] = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++)
+ members[i] = internals->members[i].port_id;
- return internals->slave_count;
+ return internals->member_count;
}
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->active_slave_count > len)
+ if (internals->active_member_count > len)
return -1;
- memcpy(slaves, internals->active_slaves,
- internals->active_slave_count * sizeof(internals->active_slaves[0]));
+ memcpy(members, internals->active_members,
+ internals->active_member_count * sizeof(internals->active_members[0]));
- return internals->active_slave_count;
+ return internals->active_member_count;
}
int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
internals->user_defined_mac = 1;
- /* Update all slave devices MACs*/
- if (internals->slave_count > 0)
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MACs*/
+ if (internals->member_count > 0)
+ return mac_address_members_update(bonded_eth_dev);
return 0;
}
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
internals->user_defined_mac = 0;
- if (internals->slave_count > 0) {
- int slave_port;
- /* Get the primary slave location based on the primary port
- * number as, while slave_add(), we will keep the primary
- * slave based on slave_count,but not based on the primary port.
+ if (internals->member_count > 0) {
+ int member_port;
+ /* Get the primary member location based on the primary port
+ * number because, during member_add(), we keep the primary
+ * member based on member_count, but not based on the primary port.
*/
- for (slave_port = 0; slave_port < internals->slave_count;
- slave_port++) {
- if (internals->slaves[slave_port].port_id ==
+ for (member_port = 0; member_port < internals->member_count;
+ member_port++) {
+ if (internals->members[member_port].port_id ==
internals->primary_port)
break;
}
/* Set MAC Address of Bonded Device */
if (mac_address_set(bonded_eth_dev,
- &internals->slaves[slave_port].persisted_mac_addr)
+ &internals->members[member_port].persisted_mac_addr)
!= 0) {
RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
return -1;
}
- /* Update all slave devices MAC addresses */
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MAC addresses */
+ return mac_address_members_update(bonded_eth_dev);
}
- /* No need to update anything as no slaves present */
+ /* No need to update anything as no members present */
return 0;
}
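The renamed control-path functions above keep their original semantics; a minimal sketch of how an application would use them, assuming bond_port is an already created bonded device and new_port an unused ethdev port, is:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static int
attach_and_list(uint16_t bond_port, uint16_t new_port)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int n;

	/* Attach the port; the first member also becomes the primary. */
	if (rte_eth_bond_member_add(bond_port, new_port) != 0)
		return -1;

	/* All members attached to the bonded device. */
	n = rte_eth_bond_members_get(bond_port, members, RTE_DIM(members));
	if (n < 0)
		return -1;

	/* Active members are the subset currently usable for traffic. */
	return rte_eth_bond_active_members_get(bond_port, members,
			RTE_DIM(members));
}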
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
#include "eth_bond_private.h"
const char *pmd_bond_init_valid_arguments[] = {
- PMD_BOND_SLAVE_PORT_KVARG,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
+ PMD_BOND_MEMBER_PORT_KVARG,
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
PMD_BOND_MODE_KVARG,
PMD_BOND_XMIT_POLICY_KVARG,
PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
}
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args)
{
- struct bond_ethdev_slave_ports *slave_ports;
+ struct bond_ethdev_member_ports *member_ports;
if (value == NULL || extra_args == NULL)
return -1;
- slave_ports = extra_args;
+ member_ports = extra_args;
- if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+ if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
int port_id = parse_port_id(value);
if (port_id < 0) {
- RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+ RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
value);
return -1;
} else
- slave_ports->slaves[slave_ports->slave_count++] =
+ member_ports->members[member_ports->member_count++] =
port_id;
}
return 0;
}
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
case BONDING_MODE_ALB:
return 0;
default:
- RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+ RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
return -1;
}
}
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
}
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
- int primary_slave_port_id;
+ int primary_member_port_id;
if (value == NULL || extra_args == NULL)
return -1;
- primary_slave_port_id = parse_port_id(value);
- if (primary_slave_port_id < 0)
+ primary_member_port_id = parse_port_id(value);
+ if (primary_member_port_id < 0)
return -1;
- *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+ *(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
return 0;
}
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_validate(internals->members[i].port_id, attr,
patterns, actions, err);
if (ret) {
RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
return ret;
}
}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
NULL, rte_strerror(ENOMEM));
return NULL;
}
- for (i = 0; i < internals->slave_count; i++) {
- flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ flow->flows[i] = rte_flow_create(internals->members[i].port_id,
attr, patterns, actions, err);
if (unlikely(flow->flows[i] == NULL)) {
- RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+ RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
i);
goto err;
}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
return flow;
err:
- /* Destroy all slaves flows. */
- for (i = 0; i < internals->slave_count; i++) {
+ /* Destroy all members flows. */
+ for (i = 0; i < internals->member_count; i++) {
if (flow->flows[i] != NULL)
- rte_flow_destroy(internals->slaves[i].port_id,
+ rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
}
bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
int i;
int ret = 0;
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
int lret;
if (unlikely(flow->flows[i] == NULL))
continue;
- lret = rte_flow_destroy(internals->slaves[i].port_id,
+ lret = rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
if (unlikely(lret != 0)) {
- RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+ RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
" %d", i, lret);
ret = lret;
}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
int ret = 0;
int lret;
- /* Destroy all bond flows from its slaves instead of flushing them to
+ /* Destroy all bond flows from its members instead of flushing them to
* keep the LACP flow or any other external flows.
*/
RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
ret = lret;
}
if (unlikely(ret != 0))
- RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+ RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
return ret;
}
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *err)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_flow_query_count slave_count;
+ struct rte_flow_query_count member_count;
int i;
int ret;
count->bytes = 0;
count->hits = 0;
- rte_memcpy(&slave_count, count, sizeof(slave_count));
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_query(internals->slaves[i].port_id,
+ rte_memcpy(&member_count, count, sizeof(member_count));
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_query(internals->members[i].port_id,
flow->flows[i], action,
- &slave_count, err);
+ &member_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
- " slave %d: %d", i, ret);
+ " member %d: %d", i, ret);
return ret;
}
- count->bytes += slave_count.bytes;
- count->hits += slave_count.hits;
- slave_count.bytes = 0;
- slave_count.hits = 0;
+ count->bytes += member_count.bytes;
+ count->hits += member_count.hits;
+ member_count.bytes = 0;
+ member_count.hits = 0;
}
return 0;
}
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_isolate(internals->members[i].port_id, set, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
internals->flow_isolated_valid = 0;
return ret;
}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct bond_dev_private *internals;
uint16_t num_rx_total = 0;
- uint16_t slave_count;
- uint16_t active_slave;
+ uint16_t member_count;
+ uint16_t active_member;
int i;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- slave_count = internals->active_slave_count;
- active_slave = bd_rx_q->active_slave;
+ member_count = internals->active_member_count;
+ active_member = bd_rx_q->active_member;
- for (i = 0; i < slave_count && nb_pkts; i++) {
- uint16_t num_rx_slave;
+ for (i = 0; i < member_count && nb_pkts; i++) {
+ uint16_t num_rx_member;
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave =
- rte_eth_rx_burst(internals->active_slaves[active_slave],
+ /*
+ * Offset of pointer to *bufs increases as packets are received
+ * from other members.
+ */
+ num_rx_member =
+ rte_eth_rx_burst(internals->active_members[active_member],
bd_rx_q->queue_id,
bufs + num_rx_total, nb_pkts);
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
- if (++active_slave >= slave_count)
- active_slave = 0;
+ num_rx_total += num_rx_member;
+ nb_pkts -= num_rx_member;
+ if (++active_member >= member_count)
+ active_member = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port) {
- struct rte_eth_dev_info slave_info;
+ uint16_t member_port) {
+ struct rte_eth_dev_info member_info;
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
}
};
- int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+ int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
flow_item_8023ad, actions, &error);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
- __func__, error.message, slave_port,
+ RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+ __func__, error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
- ret = rte_eth_dev_info_get(slave_port, &slave_info);
+ ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port, strerror(-ret));
+ __func__, member_port, strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
- slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+ if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+ member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
RTE_BOND_LOG(ERR,
- "%s: Slave %d capabilities doesn't allow allocating additional queues",
- __func__, slave_port);
+ "%s: Member %d capabilities doesn't allow allocating additional queues",
+ __func__, member_port);
return -1;
}
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
uint16_t idx;
int ret;
- /* Verify if all slaves in bonding supports flow director and */
- if (internals->slave_count > 0) {
+ /* Verify if all members in bonding supports flow director and */
+ if (internals->member_count > 0) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
- for (idx = 0; idx < internals->slave_count; idx++) {
+ for (idx = 0; idx < internals->member_count; idx++) {
if (bond_ethdev_8023ad_flow_verify(bond_dev,
- internals->slaves[idx].port_id) != 0)
+ internals->members[idx].port_id) != 0)
return -1;
}
}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
}
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
}
};
- internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+ internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
&flow_attr_8023ad, flow_item_8023ad, actions, &error);
- if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+ if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
- "(slave_port=%d queue_id=%d)",
- error.message, slave_port,
+ "(member_port=%d queue_id=%d)",
+ error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
const uint16_t ether_type_slow_be =
rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint16_t slaves[RTE_MAX_ETHPORTS];
- uint16_t slave_count, idx;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ uint16_t member_count, idx;
- uint8_t collecting; /* current slave collecting status */
+ uint8_t collecting; /* current member collecting status */
const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
uint16_t j;
uint16_t k;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
+ member_count = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * member_count);
- idx = bd_rx_q->active_slave;
- if (idx >= slave_count) {
- bd_rx_q->active_slave = 0;
+ idx = bd_rx_q->active_member;
+ if (idx >= member_count) {
+ bd_rx_q->active_member = 0;
idx = 0;
}
- for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+ for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+ collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
COLLECTING);
- /* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+ /* Read packets from this member */
+ num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is slow packet but no dedicated rxq is present,
- * - slave is not in collecting state,
+ * - member is not in collecting state,
* - bonding interface is not in promiscuous mode and
* packet address isn't in mac_addrs array:
* - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
!allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
- internals, slaves[idx], bufs[j]);
+ internals, members[idx], bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
} else
j++;
}
- if (unlikely(++idx == slave_count))
+ if (unlikely(++idx == member_count))
idx = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
#ifdef RTE_LIBRTE_BOND_DEBUG_ALB
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
- uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+ uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
- uint16_t num_of_slaves;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members;
+ uint16_t members[RTE_MAX_ETHPORTS];
- uint16_t num_tx_total = 0, num_tx_slave;
+ uint16_t num_tx_total = 0, num_tx_member;
- static int slave_idx = 0;
- int i, cslave_idx = 0, tx_fail_total = 0;
+ static int member_idx;
+ int i, cmember_idx = 0, tx_fail_total = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- /* Populate slaves mbuf with which packets are to be sent on it */
+ /* Populate members mbuf with which packets are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
- cslave_idx = (slave_idx + i) % num_of_slaves;
- slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+ cmember_idx = (member_idx + i) % num_of_members;
+ member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
}
- /* increment current slave index so the next call to tx burst starts on the
- * next slave */
- slave_idx = ++cslave_idx;
+ /*
+ * increment current member index so the next call to tx burst starts on the
+ * next member.
+ */
+ member_idx = ++cmember_idx;
- /* Send packet burst on each slave device */
- for (i = 0; i < num_of_slaves; i++) {
- if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_prepare(slaves[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_pkts[i]);
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- slave_bufs[i], num_tx_slave);
+ /* Send packet burst on each member device */
+ for (i = 0; i < num_of_members; i++) {
+ if (member_nb_pkts[i] > 0) {
+ num_tx_member = rte_eth_tx_prepare(members[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_pkts[i]);
+ num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+ member_bufs[i], num_tx_member);
/* if tx burst fails move packets to end of bufs */
- if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
- int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+ if (unlikely(num_tx_member < member_nb_pkts[i])) {
+ int tx_fail_member = member_nb_pkts[i] - num_tx_member;
- tx_fail_total += tx_fail_slave;
+ tx_fail_total += tx_fail_member;
memcpy(&bufs[nb_pkts - tx_fail_total],
- &slave_bufs[i][num_tx_slave],
- tx_fail_slave * sizeof(bufs[0]));
+ &member_bufs[i][num_tx_member],
+ tx_fail_member * sizeof(bufs[0]));
}
- num_tx_total += num_tx_slave;
+ num_tx_total += num_tx_member;
}
}
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
return 0;
nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i] = (hash ^= hash >> 8) % slave_count;
+ members[i] = (hash ^= hash >> 8) % member_count;
}
}
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
uint16_t i;
struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
-struct bwg_slave {
+struct bwg_member {
uint64_t bwg_left_int;
uint64_t bwg_left_remainder;
- uint16_t slave;
+ uint16_t member;
};
void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
int i;
- for (i = 0; i < internals->active_slave_count; i++) {
- tlb_last_obytets[internals->active_slaves[i]] = 0;
- }
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
static int
bandwidth_cmp(const void *a, const void *b)
{
- const struct bwg_slave *bwg_a = a;
- const struct bwg_slave *bwg_b = b;
+ const struct bwg_member *bwg_a = a;
+ const struct bwg_member *bwg_b = b;
int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
static void
bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
- struct bwg_slave *bwg_slave)
+ struct bwg_member *bwg_member)
{
struct rte_eth_link link_status;
int ret;
ret = rte_eth_link_get_nowait(port_id, &link_status);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
port_id, rte_strerror(-ret));
return;
}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
if (link_bwg == 0)
return;
link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
- bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
- bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+ bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+ bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
}
static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
{
struct bond_dev_private *internals = arg;
- struct rte_eth_stats slave_stats;
- struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ struct rte_eth_stats member_stats;
+ struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
uint64_t tx_bytes;
uint8_t update_stats = 0;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
- internals->slave_update_idx++;
+ internals->member_update_idx++;
- if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+ if (internals->member_update_idx >= REORDER_PERIOD_MS)
update_stats = 1;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- rte_eth_stats_get(slave_id, &slave_stats);
- tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
- bandwidth_left(slave_id, tx_bytes,
- internals->slave_update_idx, &bwg_array[i]);
- bwg_array[i].slave = slave_id;
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ rte_eth_stats_get(member_id, &member_stats);
+ tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+ bandwidth_left(member_id, tx_bytes,
+ internals->member_update_idx, &bwg_array[i]);
+ bwg_array[i].member = member_id;
if (update_stats) {
- tlb_last_obytets[slave_id] = slave_stats.obytes;
+ tlb_last_obytets[member_id] = member_stats.obytes;
}
}
if (update_stats == 1)
- internals->slave_update_idx = 0;
+ internals->member_update_idx = 0;
- slave_count = i;
- qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
- for (i = 0; i < slave_count; i++)
- internals->tlb_slaves_order[i] = bwg_array[i].slave;
+ member_count = i;
+ qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+ for (i = 0; i < member_count; i++)
+ internals->tlb_members_order[i] = bwg_array[i].member;
- rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+ rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
(struct bond_dev_private *)internals);
}
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0, num_tx_prep;
uint16_t i, j;
- uint16_t num_of_slaves = internals->active_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members = internals->active_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_ether_hdr *ether_hdr;
- struct rte_ether_addr primary_slave_addr;
- struct rte_ether_addr active_slave_addr;
+ struct rte_ether_addr primary_member_addr;
+ struct rte_ether_addr active_member_addr;
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+ memcpy(members, internals->tlb_members_order,
+ sizeof(internals->tlb_members_order[0]) * num_of_members);
- rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+ rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
if (nb_pkts > 3) {
for (i = 0; i < 3; i++)
rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
}
- for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ for (i = 0; i < num_of_members; i++) {
+ rte_eth_macaddr_get(members[i], &active_member_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
if (rte_is_same_ether_addr(&ether_hdr->src_addr,
- &primary_slave_addr))
- rte_ether_addr_copy(&active_slave_addr,
+ &primary_member_addr))
+ rte_ether_addr_copy(&active_member_addr,
&ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
- mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+ mode6_debug("TX IPv4:", ether_hdr, members[i],
+ &burst_number_TX);
#endif
}
- num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+ num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, nb_pkts - num_tx_total);
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, num_tx_prep);
if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
void
bond_tlb_disable(struct bond_dev_private *internals)
{
- rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+ rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
}
void
bond_tlb_enable(struct bond_dev_private *internals)
{
- bond_ethdev_update_tlb_slave_cb(internals);
+ bond_ethdev_update_tlb_member_cb(internals);
}
static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct client_data *client_info;
/*
- * We create transmit buffers for every slave and one additional to send
+ * We create transmit buffers for every member and one additional to send
* through tlb. In worst case every packet will be send on one port.
*/
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
- uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+ uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
* We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_send, num_not_send = 0;
uint16_t num_tx_total = 0;
- uint16_t slave_idx;
+ uint16_t member_idx;
int i, j;
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
offset = get_vlan_offset(eth_h, &ether_type);
if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
- slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+ member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+ rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
- /* Add packet to slave tx buffer */
- slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
- slave_bufs_pkts[slave_idx]++;
+ /* Add packet to member tx buffer */
+ member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+ member_bufs_pkts[member_idx]++;
} else {
/* If packet is not ARP, send it with TLB policy */
- slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+ member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
bufs[i];
- slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+ member_bufs_pkts[RTE_MAX_ETHPORTS]++;
}
}
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- /* Allocate new packet to send ARP update on current slave */
+ /* Allocate new packet to send ARP update on current member */
upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
if (upd_pkt == NULL) {
RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
upd_pkt->data_len = pkt_size;
upd_pkt->pkt_len = pkt_size;
- slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+ member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
internals);
/* Add packet to update tx buffer */
- update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
- update_bufs_pkts[slave_idx]++;
+ update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+ update_bufs_pkts[member_idx]++;
}
}
internals->mode6.ntt = 0;
}
- /* Send ARP packets on proper slaves */
+ /* Send ARP packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (slave_bufs_pkts[i] > 0) {
+ if (member_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
- slave_bufs[i], slave_bufs_pkts[i]);
+ member_bufs[i], member_bufs_pkts[i]);
num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
- slave_bufs[i], num_send);
- for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+ member_bufs[i], num_send);
+ for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[i][nb_pkts - 1 - j];
+ member_bufs[i][nb_pkts - 1 - j];
}
num_tx_total += num_send;
- num_not_send += slave_bufs_pkts[i] - num_send;
+ num_not_send += member_bufs_pkts[i] - num_send;
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
/* Print TX stats including update packets */
- for (j = 0; j < slave_bufs_pkts[i]; j++) {
- eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+ for (j = 0; j < member_bufs_pkts[i]; j++) {
+ eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
}
#endif
}
}
- /* Send update packets on proper slaves */
+ /* Send update packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
if (update_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
for (j = 0; j < update_bufs_pkts[i]; j++) {
eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
}
#endif
}
}
/* Send non-ARP packets using tlb policy */
- if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+ if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
num_send = bond_ethdev_tx_burst_tlb(queue,
- slave_bufs[RTE_MAX_ETHPORTS],
- slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+ member_bufs[RTE_MAX_ETHPORTS],
+ member_bufs_pkts[RTE_MAX_ETHPORTS]);
- for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+ for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+ member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
}
num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static inline uint16_t
tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
- uint16_t *slave_port_ids, uint16_t slave_count)
+ uint16_t *member_port_ids, uint16_t member_count)
{
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- /* Array to sort mbufs for transmission on each slave into */
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
- /* Number of mbufs for transmission on each slave */
- uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
- /* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[nb_bufs];
+ /* Array to sort mbufs for transmission on each member into */
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+ /* Number of mbufs for transmission on each member */
+ uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+ /* Mapping array generated by hash function to map mbufs to members */
+ uint16_t bufs_member_port_idxs[nb_bufs];
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
uint16_t i;
/*
- * Populate slaves mbuf with the packets which are to be sent on it
- * selecting output slave using hash based on xmit policy
+ * Populate members mbuf with the packets which are to be sent on it
+ * selecting output member using hash based on xmit policy
*/
- internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
- bufs_slave_port_idxs);
+ internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+ bufs_member_port_idxs);
for (i = 0; i < nb_bufs; i++) {
- /* Populate slave mbuf arrays with mbufs for that slave. */
- uint16_t slave_idx = bufs_slave_port_idxs[i];
+ /* Populate member mbuf arrays with mbufs for that member. */
+ uint16_t member_idx = bufs_member_port_idxs[i];
- slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+ member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
}
- /* Send packet burst on each slave device */
- for (i = 0; i < slave_count; i++) {
- if (slave_nb_bufs[i] == 0)
+ /* Send packet burst on each member device */
+ for (i = 0; i < member_count; i++) {
+ if (member_nb_bufs[i] == 0)
continue;
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_bufs[i]);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_tx_count);
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_bufs[i]);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_tx_count);
- total_tx_count += slave_tx_count;
+ total_tx_count += member_tx_count;
/* If tx burst fails move packets to end of bufs */
- if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
- int slave_tx_fail_count = slave_nb_bufs[i] -
- slave_tx_count;
- total_tx_fail_count += slave_tx_fail_count;
+ if (unlikely(member_tx_count < member_nb_bufs[i])) {
+ int member_tx_fail_count = member_nb_bufs[i] -
+ member_tx_count;
+ total_tx_fail_count += member_tx_fail_count;
memcpy(&bufs[nb_bufs - total_tx_fail_count],
- &slave_bufs[i][slave_tx_count],
- slave_tx_fail_count * sizeof(bufs[0]));
+ &member_bufs[i][member_tx_count],
+ member_tx_fail_count * sizeof(bufs[0]));
}
}
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
if (unlikely(nb_bufs == 0))
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting
*/
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
- return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
- slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+ member_count);
}
static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
- uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t dist_slave_count;
+ uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t dist_member_count;
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t i;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
if (dedicated_txq)
goto skip_tx_ring;
/* Check for LACP control packets and send if available */
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
struct rte_mbuf *ctrl_pkt = NULL;
if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (rte_ring_dequeue(port->tx_ring,
(void **)&ctrl_pkt) != -ENOENT) {
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
bd_tx_q->queue_id, &ctrl_pkt, 1);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
/*
* re-enqueue LAG control plane packets to buffering
* ring if transmission fails so the packet isn't lost.
*/
- if (slave_tx_count != 1)
+ if (member_tx_count != 1)
rte_ring_enqueue(port->tx_ring, ctrl_pkt);
}
}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (unlikely(nb_bufs == 0))
return 0;
- dist_slave_count = 0;
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ dist_member_count = 0;
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
if (ACTOR_STATE(port, DISTRIBUTING))
- dist_slave_port_ids[dist_slave_count++] =
- slave_port_ids[i];
+ dist_member_port_ids[dist_member_count++] =
+ member_port_ids[i];
}
- if (unlikely(dist_slave_count < 1))
+ if (unlikely(dist_member_count < 1))
return 0;
- return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
- dist_slave_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+ dist_member_count);
}
static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint8_t tx_failed_flag = 0;
- uint16_t num_of_slaves;
+ uint16_t num_of_members;
uint16_t max_nb_of_tx_pkts = 0;
- int slave_tx_total[RTE_MAX_ETHPORTS];
- int i, most_successful_tx_slave = -1;
+ int member_tx_total[RTE_MAX_ETHPORTS];
+ int i, most_successful_tx_member = -1;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return 0;
/* It is rare that bond different PMDs together, so just call tx-prepare once */
- nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+ nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
- rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+ rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
- /* Transmit burst on each active slave */
- for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ /* Transmit burst on each active member */
+ for (i = 0; i < num_of_members; i++) {
+ member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs, nb_pkts);
- if (unlikely(slave_tx_total[i] < nb_pkts))
+ if (unlikely(member_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
- /* record the value and slave index for the slave which transmits the
+ /* record the value and member index for the member which transmits the
* maximum number of packets */
- if (slave_tx_total[i] > max_nb_of_tx_pkts) {
- max_nb_of_tx_pkts = slave_tx_total[i];
- most_successful_tx_slave = i;
+ if (member_tx_total[i] > max_nb_of_tx_pkts) {
+ max_nb_of_tx_pkts = member_tx_total[i];
+ most_successful_tx_member = i;
}
}
- /* if slaves fail to transmit packets from burst, the calling application
+ /* if members fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
- * handle failures of all packets except those of the most successful slave
+ * handle failures of all packets except those of the most successful member
*/
if (unlikely(tx_failed_flag))
- for (i = 0; i < num_of_slaves; i++)
- if (i != most_successful_tx_slave)
- while (slave_tx_total[i] < nb_pkts)
- rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+ for (i = 0; i < num_of_members; i++)
+ if (i != most_successful_tx_member)
+ while (member_tx_total[i] < nb_pkts)
+ rte_pktmbuf_free(bufs[member_tx_total[i]++]);
return max_nb_of_tx_pkts;
}
static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
/**
* If in mode 4 then save the link properties of the first
- * slave, all subsequent slaves must match these properties
+ * member, all subsequent members must match these properties
*/
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- bond_link->link_autoneg = slave_link->link_autoneg;
- bond_link->link_duplex = slave_link->link_duplex;
- bond_link->link_speed = slave_link->link_speed;
+ bond_link->link_autoneg = member_link->link_autoneg;
+ bond_link->link_duplex = member_link->link_duplex;
+ bond_link->link_speed = member_link->link_speed;
} else {
/**
* In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
static int
link_properties_valid(struct rte_eth_dev *ethdev,
- struct rte_eth_link *slave_link)
+ struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- if (bond_link->link_duplex != slave_link->link_duplex ||
- bond_link->link_autoneg != slave_link->link_autoneg ||
- bond_link->link_speed != slave_link->link_speed)
+ if (bond_link->link_duplex != member_link->link_duplex ||
+ bond_link->link_autoneg != member_link->link_autoneg ||
+ bond_link->link_speed != member_link->link_speed)
return -1;
}
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
static const struct rte_ether_addr null_mac_addr;
/*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
*/
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, ret;
struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+ ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
if (ret < 0) {
/* rollback */
for (i--; i > 0; i--)
- rte_eth_dev_mac_addr_remove(slave_port_id,
+ rte_eth_dev_mac_addr_remove(member_port_id,
&bonded_eth_dev->data->mac_addrs[i]);
return ret;
}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
/*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
*/
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, rc, ret;
struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+ ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
/* save only the first error */
if (ret < 0 && rc == 0)
rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
bool set;
int i;
- /* Update slave devices MAC addresses */
- if (internals->slave_count < 1)
+ /* Update member devices MAC addresses */
+ if (internals->member_count < 1)
return -1;
switch (internals->mode) {
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
+ internals->members[i].port_id,
bonded_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
case BONDING_MODE_ALB:
default:
set = true;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id ==
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id ==
internals->current_primary_port) {
if (rte_eth_dev_default_mac_addr_set(
internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
}
} else {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
- &internals->slaves[i].persisted_mac_addr)) {
+ internals->members[i].port_id,
+ &internals->members[i].persisted_mac_addr)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
}
}
}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+ struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
if (port->slow_pool == NULL) {
char mem_name[256];
- int slave_id = slave_eth_dev->data->port_id;
+ int member_id = member_eth_dev->data->port_id;
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
- slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+ member_id);
port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
- slave_eth_dev->data->numa_node);
+ member_eth_dev->data->numa_node);
/* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->slow_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
}
if (internals->mode4.dedicated_queues.enabled == 1) {
/* Configure slow Rx queue */
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid, 128,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid,
errval);
return errval;
}
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid, 512,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid,
errval);
return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
}
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- /* Stop slave */
- errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+ /* Stop member */
+ errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
if (errval != 0)
RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
- /* Enable interrupts on slave device if supported */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+ /* Enable interrupts on member device if supported */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
- /* If RSS is enabled for bonding, try to enable it for slaves */
+ /* If RSS is enabled for bonding, try to enable it for members */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
} else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
}
- slave_eth_dev->data->dev_conf.rxmode.mtu =
+ member_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- slave_eth_dev->data->dev_conf.link_speeds =
+ member_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads =
+ member_eth_dev->data->dev_conf.txmode.offloads =
bonded_eth_dev->data->dev_conf.txmode.offloads;
- slave_eth_dev->data->dev_conf.rxmode.offloads =
+ member_eth_dev->data->dev_conf.rxmode.offloads =
bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* Configure device */
- errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
nb_rx_queues, nb_tx_queues,
- &(slave_eth_dev->data->dev_conf));
+ &member_eth_dev->data->dev_conf);
if (errval != 0) {
- RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+ member_eth_dev->data->port_id, errval);
return errval;
}
- errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
bonded_eth_dev->data->mtu);
if (errval != 0 && errval != -ENOTSUP) {
RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
return errval;
}
return 0;
}
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
uint16_t q_id;
struct rte_flow_error flow_error;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+ uint16_t member_port_id = member_eth_dev->data->port_id;
/* Setup Rx Queues */
for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_rx_queue_setup(member_port_id, q_id,
bd_rx_q->nb_rx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_tx_queue_setup(member_port_id, q_id,
bd_tx_q->nb_tx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&bd_tx_q->tx_conf);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
- if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+ if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
!= 0)
return errval;
errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
- if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
- errval = rte_flow_destroy(slave_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+ if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+ errval = rte_flow_destroy(member_port_id,
+ internals->mode4.dedicated_queues.flow[member_port_id],
&flow_error);
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
}
/* Start device */
- errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+ errval = rte_eth_dev_start(member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return -1;
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
internals = bonded_eth_dev->data->dev_private;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == member_port_id) {
errval = rte_eth_dev_rss_reta_update(
- slave_eth_dev->data->port_id,
+ member_port_id,
&internals->reta_conf[0],
- internals->slaves[i].reta_size);
+ internals->members[i].reta_size);
if (errval != 0) {
RTE_BOND_LOG(WARNING,
- "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+ "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
" RSS Configuration for bonding may be inconsistent.",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
break;
}
}
}
- /* If lsc interrupt is set, check initial slave's link status */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
- slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
- bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+ /* If lsc interrupt is set, check initial member's link status */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+ bond_ethdev_lsc_event_callback(member_port_id,
RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
NULL);
}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
}
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t i;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id ==
- slave_eth_dev->data->port_id)
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id ==
+ member_eth_dev->data->port_id)
break;
- if (i < (internals->slave_count - 1)) {
+ if (i < (internals->member_count - 1)) {
struct rte_flow *flow;
- memmove(&internals->slaves[i], &internals->slaves[i + 1],
- sizeof(internals->slaves[0]) *
- (internals->slave_count - i - 1));
+ memmove(&internals->members[i], &internals->members[i + 1],
+ sizeof(internals->members[0]) *
+ (internals->member_count - i - 1));
TAILQ_FOREACH(flow, &internals->flow_list, next) {
memmove(&flow->flows[i], &flow->flows[i + 1],
sizeof(flow->flows[0]) *
- (internals->slave_count - i - 1));
- flow->flows[internals->slave_count - 1] = NULL;
+ (internals->member_count - i - 1));
+ flow->flows[internals->member_count - 1] = NULL;
}
}
- internals->slave_count--;
+ internals->member_count--;
- /* force reconfiguration of slave interfaces */
- rte_eth_dev_internal_reset(slave_eth_dev);
+ /* force reconfiguration of member interfaces */
+ rte_eth_dev_internal_reset(member_eth_dev);
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
- struct bond_slave_details *slave_details =
- &internals->slaves[internals->slave_count];
+ struct bond_member_details *member_details =
+ &internals->members[internals->member_count];
- slave_details->port_id = slave_eth_dev->data->port_id;
- slave_details->last_link_status = 0;
+ member_details->port_id = member_eth_dev->data->port_id;
+ member_details->last_link_status = 0;
- /* Mark slave devices that don't support interrupts so we can
+ /* Mark member devices that don't support interrupts so we can
* compensate when we start the bond
*/
- if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
- slave_details->link_status_poll_enabled = 1;
- }
+ if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+ member_details->link_status_poll_enabled = 1;
- slave_details->link_status_wait_to_complete = 0;
+ member_details->link_status_wait_to_complete = 0;
/* clean tlb_last_obytes when adding port for bonding device */
- memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+ memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
sizeof(struct rte_ether_addr));
}
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id)
+ uint16_t member_port_id)
{
int i;
- if (internals->active_slave_count < 1)
- internals->current_primary_port = slave_port_id;
+ if (internals->active_member_count < 1)
+ internals->current_primary_port = member_port_id;
else
- /* Search bonded device slave ports for new proposed primary port */
- for (i = 0; i < internals->active_slave_count; i++) {
- if (internals->active_slaves[i] == slave_port_id)
- internals->current_primary_port = slave_port_id;
+ /* Search bonded device member ports for new proposed primary port */
+ for (i = 0; i < internals->active_member_count; i++) {
+ if (internals->active_members[i] == member_port_id)
+ internals->current_primary_port = member_port_id;
}
}
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals;
int i;
- /* slave eth dev will be started by bonded device */
+ /* member eth dev will be started by bonded device */
if (check_for_bonded_ethdev(eth_dev)) {
- RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+ RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
eth_dev->data->port_id);
return -1;
}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- if (internals->slave_count == 0) {
- RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+ if (internals->member_count == 0) {
+ RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
goto out_err;
}
if (internals->user_defined_mac == 0) {
struct rte_ether_addr *new_mac_addr = NULL;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == internals->primary_port)
- new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == internals->primary_port)
+ new_mac_addr = &internals->members[i].persisted_mac_addr;
if (new_mac_addr == NULL)
goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
}
- /* Reconfigure each slave device if starting bonded device */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(eth_dev, slave_ethdev) != 0) {
+ /* Reconfigure each member device if starting bonded device */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to reconfigure slave device (%d)",
+ "bonded port (%d) failed to reconfigure member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- if (slave_start(eth_dev, slave_ethdev) != 0) {
+ if (member_start(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to start slave device (%d)",
+ "bonded port (%d) failed to start member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- /* We will need to poll for link status if any slave doesn't
+ /* We will need to poll for link status if any member doesn't
* support interrupts
*/
- if (internals->slaves[i].link_status_poll_enabled)
+ if (internals->members[i].link_status_poll_enabled)
internals->link_status_polling_enabled = 1;
}
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
if (internals->link_status_polling_enabled) {
rte_eal_alarm_set(
internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor,
+ bond_ethdev_member_link_status_change_monitor,
(void *)&rte_eth_devices[internals->port_id]);
}
- /* Update all slave devices MACs*/
- if (mac_address_slaves_update(eth_dev) != 0)
+ /* Update all member devices MACs*/
+ if (mac_address_members_update(eth_dev) != 0)
goto out_err;
if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
bond_mode_8023ad_stop(eth_dev);
/* Discard all messages to/from mode 4 state machines */
- for (i = 0; i < internals->active_slave_count; i++) {
- port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+ for (i = 0; i < internals->active_member_count; i++) {
+ port = &bond_mode_8023ad_ports[internals->active_members[i]];
RTE_ASSERT(port->rx_ring != NULL);
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
if (internals->mode == BONDING_MODE_TLB ||
internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
- for (i = 0; i < internals->active_slave_count; i++)
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t slave_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t member_id = internals->members[i].port_id;
- internals->slaves[i].last_link_status = 0;
- ret = rte_eth_dev_stop(slave_id);
+ internals->members[i].last_link_status = 0;
+ ret = rte_eth_dev_stop(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_id);
+ member_id);
return ret;
}
- /* active slaves need to be deactivated. */
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) !=
- internals->active_slave_count)
- deactivate_slave(eth_dev, slave_id);
+ /* active members need to be deactivated. */
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) !=
+ internals->active_member_count)
+ deactivate_member(eth_dev, member_id);
}
return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
/* Flush flows in all back-end devices before removing them */
bond_flow_ops.flush(dev, &ferror);
- while (internals->slave_count != skipped) {
- uint16_t port_id = internals->slaves[skipped].port_id;
+ while (internals->member_count != skipped) {
+ uint16_t port_id = internals->members[skipped].port_id;
int ret;
ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
continue;
}
- if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+ if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
RTE_BOND_LOG(ERR,
"Failed to remove port %d from bonded device %s",
port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct bond_slave_details slave;
+ struct bond_member_details member;
int ret;
uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETHER_MAX_JUMBO_FRAME_LEN;
/* Max number of tx/rx queues that the bonded device can support is the
- * minimum values of the bonded slaves, as all slaves must be capable
+ * minimum values of the bonded members, as all members must be capable
* of supporting the same number of tx/rx queues.
*/
- if (internals->slave_count > 0) {
- struct rte_eth_dev_info slave_info;
+ if (internals->member_count > 0) {
+ struct rte_eth_dev_info member_info;
uint16_t idx;
- for (idx = 0; idx < internals->slave_count; idx++) {
- slave = internals->slaves[idx];
- ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+ for (idx = 0; idx < internals->member_count; idx++) {
+ member = internals->members[idx];
+ ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
__func__,
- slave.port_id,
+ member.port_id,
strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < max_nb_rx_queues)
- max_nb_rx_queues = slave_info.max_rx_queues;
+ if (member_info.max_rx_queues < max_nb_rx_queues)
+ max_nb_rx_queues = member_info.max_rx_queues;
- if (slave_info.max_tx_queues < max_nb_tx_queues)
- max_nb_tx_queues = slave_info.max_tx_queues;
+ if (member_info.max_tx_queues < max_nb_tx_queues)
+ max_nb_tx_queues = member_info.max_tx_queues;
}
}
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
uint16_t i;
struct bond_dev_private *internals = dev->data->dev_private;
- /* don't do this while a slave is being added */
+ /* don't do this while a member is being added */
rte_spinlock_lock(&internals->lock);
if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
else
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
if (res == ENOTSUP)
RTE_BOND_LOG(WARNING,
- "Setting VLAN filter on slave port %u not supported.",
+ "Setting VLAN filter on member port %u not supported.",
port_id);
}
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
{
- struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+ struct rte_eth_dev *bonded_ethdev, *member_ethdev;
struct bond_dev_private *internals;
- /* Default value for polling slave found is true as we don't want to
+ /* Default value for polling member found is true as we don't want to
* disable the polling thread if we cannot get the lock */
- int i, polling_slave_found = 1;
+ int i, polling_member_found = 1;
if (cb_arg == NULL)
return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
!internals->link_status_polling_enabled)
return;
- /* If device is currently being configured then don't check slaves link
+ /* If device is currently being configured then don't check members link
* status, wait until next period */
if (rte_spinlock_trylock(&internals->lock)) {
- if (internals->slave_count > 0)
- polling_slave_found = 0;
+ if (internals->member_count > 0)
+ polling_member_found = 0;
- for (i = 0; i < internals->slave_count; i++) {
- if (!internals->slaves[i].link_status_poll_enabled)
+ for (i = 0; i < internals->member_count; i++) {
+ if (!internals->members[i].link_status_poll_enabled)
continue;
- slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
- polling_slave_found = 1;
+ member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+ polling_member_found = 1;
- /* Update slave link status */
- (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
- internals->slaves[i].link_status_wait_to_complete);
+ /* Update member link status */
+ (*member_ethdev->dev_ops->link_update)(member_ethdev,
+ internals->members[i].link_status_wait_to_complete);
/* if link status has changed since last checked then call lsc
* event callback */
- if (slave_ethdev->data->dev_link.link_status !=
- internals->slaves[i].last_link_status) {
- bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+ if (member_ethdev->data->dev_link.link_status !=
+ internals->members[i].last_link_status) {
+ bond_ethdev_lsc_event_callback(internals->members[i].port_id,
RTE_ETH_EVENT_INTR_LSC,
&bonded_ethdev->data->port_id,
NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
rte_spinlock_unlock(&internals->lock);
}
- if (polling_slave_found)
- /* Set alarm to continue monitoring link status of slave ethdev's */
+ if (polling_member_found)
+ /* Set alarm to continue monitoring link status of member ethdev's */
rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor, cb_arg);
+ bond_ethdev_member_link_status_change_monitor, cb_arg);
}
static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
struct bond_dev_private *bond_ctx;
- struct rte_eth_link slave_link;
+ struct rte_eth_link member_link;
bool one_link_update_succeeded;
uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
- bond_ctx->active_slave_count == 0) {
+ bond_ctx->active_member_count == 0) {
ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
case BONDING_MODE_BROADCAST:
/**
* Setting link speed to UINT32_MAX to ensure we pick up the
- * value of the first active slave
+ * value of the first active member
*/
ethdev->data->dev_link.link_speed = UINT32_MAX;
/**
- * link speed is minimum value of all the slaves link speed as
- * packet loss will occur on this slave if transmission at rates
+ * link speed is minimum value of all the members link speed as
+ * packet loss will occur on this member if transmission at rates
* greater than this are attempted
*/
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
return 0;
}
- if (slave_link.link_speed <
+ if (member_link.link_speed <
ethdev->data->dev_link.link_speed)
ethdev->data->dev_link.link_speed =
- slave_link.link_speed;
+ member_link.link_speed;
}
break;
case BONDING_MODE_ACTIVE_BACKUP:
- /* Current primary slave */
- ret = link_update(bond_ctx->current_primary_port, &slave_link);
+ /* Current primary member */
+ ret = link_update(bond_ctx->current_primary_port, &member_link);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
bond_ctx->current_primary_port,
rte_strerror(-ret));
return 0;
}
- ethdev->data->dev_link.link_speed = slave_link.link_speed;
+ ethdev->data->dev_link.link_speed = member_link.link_speed;
break;
case BONDING_MODE_8023AD:
ethdev->data->dev_link.link_autoneg =
- bond_ctx->mode4.slave_link.link_autoneg;
+ bond_ctx->mode4.member_link.link_autoneg;
ethdev->data->dev_link.link_duplex =
- bond_ctx->mode4.slave_link.link_duplex;
+ bond_ctx->mode4.member_link.link_duplex;
/* fall through */
/* to update link speed */
case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
default:
/**
* In theses mode the maximum theoretical link speed is the sum
- * of all the slaves
+ * of all the members
*/
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
continue;
}
one_link_update_succeeded = true;
ethdev->data->dev_link.link_speed +=
- slave_link.link_speed;
+ member_link.link_speed;
}
if (!one_link_update_succeeded) {
- RTE_BOND_LOG(ERR, "All slaves link get failed");
+ RTE_BOND_LOG(ERR, "All members link get failed");
return 0;
}
}
@@ -2602,27 +2606,27 @@ static int
bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_eth_stats slave_stats;
+ struct rte_eth_stats member_stats;
int i, j;
- for (i = 0; i < internals->slave_count; i++) {
- rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+ for (i = 0; i < internals->member_count; i++) {
+ rte_eth_stats_get(internals->members[i].port_id, &member_stats);
- stats->ipackets += slave_stats.ipackets;
- stats->opackets += slave_stats.opackets;
- stats->ibytes += slave_stats.ibytes;
- stats->obytes += slave_stats.obytes;
- stats->imissed += slave_stats.imissed;
- stats->ierrors += slave_stats.ierrors;
- stats->oerrors += slave_stats.oerrors;
- stats->rx_nombuf += slave_stats.rx_nombuf;
+ stats->ipackets += member_stats.ipackets;
+ stats->opackets += member_stats.opackets;
+ stats->ibytes += member_stats.ibytes;
+ stats->obytes += member_stats.obytes;
+ stats->imissed += member_stats.imissed;
+ stats->ierrors += member_stats.ierrors;
+ stats->oerrors += member_stats.oerrors;
+ stats->rx_nombuf += member_stats.rx_nombuf;
for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
- stats->q_ipackets[j] += slave_stats.q_ipackets[j];
- stats->q_opackets[j] += slave_stats.q_opackets[j];
- stats->q_ibytes[j] += slave_stats.q_ibytes[j];
- stats->q_obytes[j] += slave_stats.q_obytes[j];
- stats->q_errors[j] += slave_stats.q_errors[j];
+ stats->q_ipackets[j] += member_stats.q_ipackets[j];
+ stats->q_opackets[j] += member_stats.q_opackets[j];
+ stats->q_ibytes[j] += member_stats.q_ibytes[j];
+ stats->q_obytes[j] += member_stats.q_obytes[j];
+ stats->q_errors[j] += member_stats.q_errors[j];
}
}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
int err;
int ret;
- for (i = 0, err = 0; i < internals->slave_count; i++) {
- ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+ for (i = 0, err = 0; i < internals->member_count; i++) {
+ ret = rte_eth_stats_reset(internals->members[i].port_id);
if (ret != 0)
err = ret;
}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_promiscuous_enable(port_id);
if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
"Failed to enable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
BOND_8023AD_FORCED_PROMISC) {
- slave_ok++;
+ member_ok++;
continue;
}
ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
"Failed to disable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As promiscuous mode is propagated to all slaves for these
+ /* As promiscuous mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As promiscuous mode is propagated only to primary slave
+ /* As promiscuous mode is propagated only to primary member
* for these mode. When active/standby switchover, promiscuous
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_allmulticast_enable(port_id);
if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
"Failed to enable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
"Failed to disable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As allmulticast mode is propagated to all slaves for these
+ /* As allmulticast mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As allmulticast mode is propagated only to primary slave
+ /* As allmulticast mode is propagated only to primary member
* for these mode. When active/standby switchover, allmulticast
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
int ret;
uint8_t lsc_flag = 0;
- int valid_slave = 0;
- uint16_t active_pos, slave_idx;
+ int valid_member = 0;
+ uint16_t active_pos, member_idx;
uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (!bonded_eth_dev->data->dev_started)
return rc;
- /* verify that port_id is a valid slave of bonded port */
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == port_id) {
- valid_slave = 1;
- slave_idx = i;
+ /* verify that port_id is a valid member of bonded port */
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == port_id) {
+ valid_member = 1;
+ member_idx = i;
break;
}
}
- if (!valid_slave)
+ if (!valid_member)
return rc;
/* Synchronize lsc callback parallel calls either by real link event
- * from the slaves PMDs or by the bonding PMD itself.
+ * from the members PMDs or by the bonding PMD itself.
*/
rte_spinlock_lock(&internals->lsc_lock);
/* Search for port in active port list */
- active_pos = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, port_id);
+ active_pos = find_member_by_id(internals->active_members,
+ internals->active_member_count, port_id);
ret = rte_eth_link_get_nowait(port_id, &link);
if (ret < 0)
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
if (ret == 0 && link.link_status) {
- if (active_pos < internals->active_slave_count)
+ if (active_pos < internals->active_member_count)
goto link_update;
/* check link state properties if bonded link is up*/
if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
- "for slave %d in bonding mode %d",
+ "for member %d in bonding mode %d",
port_id, internals->mode);
} else {
- /* inherit slave link properties */
+ /* inherit member link properties */
link_properties_set(bonded_eth_dev, &link);
}
- /* If no active slave ports then set this port to be
+ /* If no active member ports then set this port to be
* the primary port.
*/
- if (internals->active_slave_count < 1) {
- /* If first active slave, then change link status */
+ if (internals->active_member_count < 1) {
+ /* If first active member, then change link status */
bonded_eth_dev->data->dev_link.link_status =
RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
- activate_slave(bonded_eth_dev, port_id);
+ activate_member(bonded_eth_dev, port_id);
/* If the user has defined the primary port then default to
* using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
internals->primary_port == port_id)
bond_ethdev_primary_set(internals, port_id);
} else {
- if (active_pos == internals->active_slave_count)
+ if (active_pos == internals->active_member_count)
goto link_update;
- /* Remove from active slave list */
- deactivate_slave(bonded_eth_dev, port_id);
+ /* Remove from active member list */
+ deactivate_member(bonded_eth_dev, port_id);
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
lsc_flag = 1;
- /* Update primary id, take first active slave from list or if none
+ /* Update primary id, take first active member from list or if none
* available set to -1 */
if (port_id == internals->current_primary_port) {
- if (internals->active_slave_count > 0)
+ if (internals->active_member_count > 0)
bond_ethdev_primary_set(internals,
- internals->active_slaves[0]);
+ internals->active_members[0]);
else
internals->current_primary_port = internals->primary_port;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
link_update:
/**
* Update bonded device link properties after any change to active
- * slaves
+ * members
*/
bond_ethdev_link_update(bonded_eth_dev, 0);
- internals->slaves[slave_idx].last_link_status = link.link_status;
+ internals->members[member_idx].last_link_status = link.link_status;
if (lsc_flag) {
/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
{
unsigned i, j;
int result = 0;
- int slave_reta_size;
+ int member_reta_size;
unsigned reta_count;
struct bond_dev_private *internals = dev->data->dev_private;
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
sizeof(internals->reta_conf[0]) * reta_count);
- /* Propagate RETA over slaves */
- for (i = 0; i < internals->slave_count; i++) {
- slave_reta_size = internals->slaves[i].reta_size;
- result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
- &internals->reta_conf[0], slave_reta_size);
+ /* Propagate RETA over members */
+ for (i = 0; i < internals->member_count; i++) {
+ member_reta_size = internals->members[i].reta_size;
+ result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+ &internals->reta_conf[0], member_reta_size);
if (result < 0)
return result;
}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
bond_rss_conf.rss_key_len = internals->rss_key_len;
}
- for (i = 0; i < internals->slave_count; i++) {
- result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
&bond_rss_conf);
if (result < 0)
return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
static int
bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mtu_set == NULL) {
rte_spinlock_unlock(&internals->lock);
return -ENOTSUP;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
if (ret < 0) {
rte_spinlock_unlock(&internals->lock);
return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
struct rte_ether_addr *mac_addr,
__rte_unused uint32_t index, uint32_t vmdq)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
- *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+ *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
ret = -ENOTSUP;
goto end;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
mac_addr, vmdq);
if (ret < 0) {
/* rollback */
for (i--; i >= 0; i--)
rte_eth_dev_mac_addr_remove(
- internals->slaves[i].port_id, mac_addr);
+ internals->members[i].port_id, mac_addr);
goto end;
}
}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
static void
bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
goto end;
}
struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
- for (i = 0; i < internals->slave_count; i++)
- rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++)
+ rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
mac_addr);
end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
fprintf(f, "\n");
}
- if (internals->slave_count > 0) {
- fprintf(f, "\tSlaves (%u): [", internals->slave_count);
- for (i = 0; i < internals->slave_count - 1; i++)
- fprintf(f, "%u ", internals->slaves[i].port_id);
+ if (internals->member_count > 0) {
+ fprintf(f, "\tMembers (%u): [", internals->member_count);
+ for (i = 0; i < internals->member_count - 1; i++)
+ fprintf(f, "%u ", internals->members[i].port_id);
- fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+ fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
} else {
- fprintf(f, "\tSlaves: []\n");
+ fprintf(f, "\tMembers: []\n");
}
- if (internals->active_slave_count > 0) {
- fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
- for (i = 0; i < internals->active_slave_count - 1; i++)
- fprintf(f, "%u ", internals->active_slaves[i]);
+ if (internals->active_member_count > 0) {
+ fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+ for (i = 0; i < internals->active_member_count - 1; i++)
+ fprintf(f, "%u ", internals->active_members[i]);
- fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+ fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
} else {
- fprintf(f, "\tActive Slaves: []\n");
+ fprintf(f, "\tActive Members: []\n");
}
if (internals->user_defined_primary_port)
fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
- if (internals->slave_count > 0)
+ if (internals->member_count > 0)
fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
}
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
}
static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
{
char a_state[256] = { 0 };
char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
static void
dump_lacp(uint16_t port_id, FILE *f)
{
- struct rte_eth_bond_8023ad_slave_info slave_info;
+ struct rte_eth_bond_8023ad_member_info member_info;
struct rte_eth_bond_8023ad_conf port_conf;
- uint16_t slaves[RTE_MAX_ETHPORTS];
- int num_active_slaves;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ int num_active_members;
int i, ret;
fprintf(f, " - Lacp info:\n");
- num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+ num_active_members = rte_eth_bond_active_members_get(port_id, members,
RTE_MAX_ETHPORTS);
- if (num_active_slaves < 0) {
- fprintf(f, "\tFailed to get active slave list for port %u\n",
+ if (num_active_members < 0) {
+ fprintf(f, "\tFailed to get active member list for port %u\n",
port_id);
return;
}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
}
dump_lacp_conf(&port_conf, f);
- for (i = 0; i < num_active_slaves; i++) {
- ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
- &slave_info);
+ for (i = 0; i < num_active_members; i++) {
+ ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+ &member_info);
if (ret) {
- fprintf(f, "\tGet slave device %u 8023ad info failed\n",
- slaves[i]);
+ fprintf(f, "\tGet member device %u 8023ad info failed\n",
+ members[i]);
return;
}
- fprintf(f, "\tSlave Port: %u\n", slaves[i]);
- dump_lacp_slave(&slave_info, f);
+ fprintf(f, "\tMember Port: %u\n", members[i]);
+ dump_lacp_member(&member_info, f);
}
}
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->link_down_delay_ms = 0;
internals->link_up_delay_ms = 0;
- internals->slave_count = 0;
- internals->active_slave_count = 0;
+ internals->member_count = 0;
+ internals->active_member_count = 0;
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->rx_desc_lim.nb_align = 1;
internals->tx_desc_lim.nb_align = 1;
- memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
- memset(internals->slaves, 0, sizeof(internals->slaves));
+ memset(internals->active_members, 0, sizeof(internals->active_members));
+ memset(internals->members, 0, sizeof(internals->members));
TAILQ_INIT(&internals->flow_list);
internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
/* Parse link bonding mode */
if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
- &bond_ethdev_parse_slave_mode_kvarg,
+ &bond_ethdev_parse_member_mode_kvarg,
&bonding_mode) != 0) {
RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
RTE_ASSERT(eth_dev->device == &dev->device);
internals = eth_dev->data->dev_private;
- if (internals->slave_count != 0)
+ if (internals->member_count != 0)
return -EBUSY;
if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
return ret;
}
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
* have been allocated */
static int
bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
if ((link_speeds &
(internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
- RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+ RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
return -EINVAL;
}
/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- /* Parse/add slave ports to bonded device */
- if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
- struct bond_ethdev_slave_ports slave_ports;
+ /* Parse/add member ports to bonded device */
+ if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+ struct bond_ethdev_member_ports member_ports;
unsigned i;
- memset(&slave_ports, 0, sizeof(slave_ports));
+ memset(&member_ports, 0, sizeof(member_ports));
- if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
- &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+ if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+ &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to parse slave ports for bonded device %s",
+ "Failed to parse member ports for bonded device %s",
name);
return -1;
}
- for (i = 0; i < slave_ports.slave_count; i++) {
- if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+ for (i = 0; i < member_ports.member_count; i++) {
+ if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to add port %d as slave to bonded device %s",
- slave_ports.slaves[i], name);
+ "Failed to add port %d as member to bonded device %s",
+ member_ports.members[i], name);
}
}
} else {
- RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+ RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
return -1;
}
- /* Parse/set primary slave port id*/
- arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+ /* Parse/set primary member port id*/
+ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
if (arg_count == 1) {
- uint16_t primary_slave_port_id;
+ uint16_t primary_member_port_id;
if (rte_kvargs_process(kvlist,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
- &bond_ethdev_parse_primary_slave_port_id_kvarg,
- &primary_slave_port_id) < 0) {
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
+ &bond_ethdev_parse_primary_member_port_id_kvarg,
+ &primary_member_port_id) < 0) {
RTE_BOND_LOG(INFO,
- "Invalid primary slave port id specified for bonded device %s",
+ "Invalid primary member port id specified for bonded device %s",
name);
return -1;
}
/* Set balance mode transmit policy*/
- if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+ if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
!= 0) {
RTE_BOND_LOG(ERR,
- "Failed to set primary slave port %d on bonded device %s",
- primary_slave_port_id, name);
+ "Failed to set primary member port %d on bonded device %s",
+ primary_member_port_id, name);
return -1;
}
} else if (arg_count > 1) {
RTE_BOND_LOG(INFO,
- "Primary slave can be specified only once for bonded device %s",
+ "Primary member can be specified only once for bonded device %s",
name);
return -1;
}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
return -1;
}
- /* configure slaves so we can pass mtu setting */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(dev, slave_ethdev) != 0) {
+ /* configure members so we can pass mtu setting */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to configure slave device (%d)",
+ "bonded port (%d) failed to configure member device (%d)",
dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
- "slave=<ifc> "
+ "member=<ifc> "
"primary=<ifc> "
"mode=[0-6] "
"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
rte_eth_bond_8023ad_ext_distrib_get;
rte_eth_bond_8023ad_ext_slowtx;
rte_eth_bond_8023ad_setup;
- rte_eth_bond_8023ad_slave_info;
- rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
rte_eth_bond_mode_set;
rte_eth_bond_primary_get;
rte_eth_bond_primary_set;
- rte_eth_bond_slave_add;
- rte_eth_bond_slave_remove;
- rte_eth_bond_slaves_get;
rte_eth_bond_xmit_policy_get;
rte_eth_bond_xmit_policy_set;
local: *;
};
+
+EXPERIMENTAL {
+ # added in 23.07
+ global:
+ rte_eth_bond_8023ad_member_info;
+ rte_eth_bond_active_members_get;
+ rte_eth_bond_member_add;
+ rte_eth_bond_member_remove;
+ rte_eth_bond_members_get;
+};
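The member-based symbols above land in the EXPERIMENTAL section, so an application normally has to opt in before using them. A minimal sketch, assuming the new prototypes carry the usual __rte_experimental tag (implied by the version.map change but not shown in this hunk); the define can also be passed as -DALLOW_EXPERIMENTAL_API at build time.

    #define ALLOW_EXPERIMENTAL_API 1
    #include <rte_eth_bond.h>

    /* Attach a member port to a bonded port via the renamed API. */
    static int
    attach_member(uint16_t bond_port, uint16_t member_port)
    {
            return rte_eth_bond_member_add(bond_port, member_port);
    }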
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
RTE_ETHER_ADDR_BYTES(&addr))
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
static uint16_t BOND_PORT = 0xffff;
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
};
static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
{
int retval;
uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
"failed (res=%d)\n", BOND_PORT, retval);
- for (i = 0; i < slaves_count; i++) {
- if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
- rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
- slaves[i], BOND_PORT);
+ for (i = 0; i < members_count; i++) {
+ if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+ rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+ members[i], BOND_PORT);
}
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
if (retval < 0)
rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
- printf("Waiting for slaves to become active...");
+ printf("Waiting for members to become active...");
while (wait_counter) {
- uint16_t act_slaves[16] = {0};
- if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
- slaves_count) {
+ uint16_t act_members[16] = {0};
+ if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+ members_count) {
printf("\n");
break;
}
sleep(1);
printf("...");
if (--wait_counter == 0)
- rte_exit(-1, "\nFailed to activate slaves\n");
+ rte_exit(-1, "\nFailed to activate members\n");
}
retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
"send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
- "show - shows some bond info: ex. active slaves etc.\n"
+ "show - shows some bond info: ex. active members etc.\n"
"help - prints help.\n"
"quit - terminate all threads and quit.\n"
);
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
struct cmdline *cl,
__rte_unused void *data)
{
- uint16_t slaves[16] = {0};
+ uint16_t members[16] = {0};
uint8_t len = 16;
struct rte_ether_addr addr;
uint16_t i;
int ret;
- for (i = 0; i < slaves_count; i++) {
+ for (i = 0; i < members_count; i++) {
ret = rte_eth_macaddr_get(i, &addr);
if (ret != 0) {
cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
rte_spinlock_lock(&global_flag_stru_p->lock);
cmdline_printf(cl,
- "Active_slaves:%d "
+ "Active_members:%d "
"packets received:Tot:%d Arp:%d IPv4:%d\n",
- rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+ rte_eth_bond_active_members_get(BOND_PORT, members, len),
global_flag_stru_p->port_packets[0],
global_flag_stru_p->port_packets[1],
global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
/* initialize all ports */
- slaves_count = nb_ports;
+ members_count = nb_ports;
RTE_ETH_FOREACH_DEV(i) {
- slave_port_init(i, mbuf_pool);
- slaves[i] = i;
+ member_port_init(i, mbuf_pool);
+ members[i] = i;
}
bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0)
/** Device supports link state interrupt */
#define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE \
+ do { \
+ RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) \
+ RTE_ETH_DEV_BONDED_MEMBER \
+ } while (0)
/** Device supports device removal interrupt */
#define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3)
/** Device is port representor */
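From an application, the renamed flag is read the same way as before. A rough sketch mirroring the dev_flags checks used inside the PMD earlier in this patch, assuming the usual rte_eth_dev_info_get() path (error handling trimmed):

    #include <rte_ethdev.h>

    /* Return non-zero if the port is flagged as a bonding member. */
    static int
    port_is_bonding_member_flagged(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                    return 0;
            return (*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) != 0;
    }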
--
2.39.1
^ permalink raw reply [relevance 1%]
* [PATCH v3] net/bonding: replace master/slave to main/member
2023-05-18 6:32 1% ` [PATCH v2] " Chaoyong He
@ 2023-05-18 7:01 1% ` Chaoyong He
2023-05-18 8:44 1% ` [PATCH v4] " Chaoyong He
0 siblings, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18 7:01 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw
From: Long Wu <long.wu@corigine.com>
This patch replaces the usage of the words 'master/slave' with the more
appropriate words 'main/member' in the bonding PMD as well as in its docs
and examples. The test app and testpmd were also modified to use the
new wording.
The bonding PMD's public API was modified to follow the new wording:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.
Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.
Mark the old visible APIs as deprecated and remove them
from the ABI.
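For applications tracking the rename, a hedged migration sketch (the
signatures appear unchanged in this series, only the symbol names move;
port numbers below are placeholders):

    #include <rte_eth_bond.h>
    #include <rte_eth_bond_8023ad.h>

    static void
    migrate_example(uint16_t bond_port, uint16_t member_port)
    {
            uint16_t members[RTE_MAX_ETHPORTS];
            struct rte_eth_bond_8023ad_member_info info;

            /* was: rte_eth_bond_slave_add() */
            rte_eth_bond_member_add(bond_port, member_port);
            /* was: rte_eth_bond_active_slaves_get() */
            rte_eth_bond_active_members_get(bond_port, members,
                            RTE_MAX_ETHPORTS);
            /* was: rte_eth_bond_8023ad_slave_info() */
            rte_eth_bond_8023ad_member_info(bond_port, member_port, &info);
            /* was: rte_eth_bond_slave_remove() */
            rte_eth_bond_member_remove(bond_port, member_port);
    }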
Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
v2:
* Modify related doc.
* Add 'RTE_DEPRECATED' to related APIs.
v3:
* Fix the check warning about 'CamelCase'.
---
app/test-pmd/testpmd.c | 112 +-
app/test-pmd/testpmd.h | 8 +-
app/test/test_link_bonding.c | 2792 +++++++++--------
app/test/test_link_bonding_mode4.c | 588 ++--
| 166 +-
doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
doc/guides/nics/bnxt.rst | 4 +-
doc/guides/prog_guide/img/bond-mode-1.svg | 2 +-
.../link_bonding_poll_mode_drv_lib.rst | 222 +-
drivers/net/bonding/bonding_testpmd.c | 178 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
drivers/net/bonding/eth_bond_private.h | 108 +-
drivers/net/bonding/rte_eth_bond.h | 126 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 372 +--
drivers/net/bonding/rte_eth_bond_8023ad.h | 75 +-
drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
drivers/net/bonding/rte_eth_bond_api.c | 474 +--
drivers/net/bonding/rte_eth_bond_args.c | 32 +-
drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 1384 ++++----
drivers/net/bonding/version.map | 15 +-
examples/bond/main.c | 40 +-
lib/ethdev/rte_ethdev.h | 9 +-
24 files changed, 3505 insertions(+), 3384 deletions(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
{
#ifdef RTE_NET_BOND
- portid_t slave_pids[RTE_MAX_ETHPORTS];
+ portid_t member_pids[RTE_MAX_ETHPORTS];
struct rte_port *port;
- int num_slaves;
- portid_t slave_pid;
+ int num_members;
+ portid_t member_pid;
int i;
- num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+ num_members = rte_eth_bond_members_get(bond_pid, member_pids,
RTE_MAX_ETHPORTS);
- if (num_slaves < 0) {
- fprintf(stderr, "Failed to get slave list for port = %u\n",
+ if (num_members < 0) {
+ fprintf(stderr, "Failed to get member list for port = %u\n",
bond_pid);
- return num_slaves;
+ return num_members;
}
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- port = &ports[slave_pid];
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ port = &ports[member_pid];
port->port_status =
is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Starting a bonded port also starts all slaves under the bonded
+ * Starting a bonded port also starts all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, false);
+ return change_bonding_member_port_status(port_id, false);
}
return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Stopping a bonded port also stops all slaves under the bonded
+ * Stopping a bonded port also stops all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, true);
+ return change_bonding_member_port_status(port_id, true);
}
return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
port = &ports[pi];
/* Check if there is a port which is not started */
if ((port->port_status != RTE_PORT_STARTED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
}
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
struct rte_port *port = &ports[port_id];
if ((port->port_status != RTE_PORT_STOPPED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
return 1;
}
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
/*
* Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when a new member device is added. So adding a member device needs
* to update the port configurations of bonding device.
*/
static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
continue;
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
}
static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
{
struct rte_port *port;
- portid_t slave_pid;
+ portid_t member_pid;
uint16_t i;
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- if (port_is_started(slave_pid) == 1) {
- if (rte_eth_dev_stop(slave_pid) != 0)
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ if (port_is_started(member_pid) == 1) {
+ if (rte_eth_dev_stop(member_pid) != 0)
fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
- slave_pid);
+ member_pid);
- port = &ports[slave_pid];
+ port = &ports[member_pid];
port->port_status = RTE_PORT_STOPPED;
}
- clear_port_slave_flag(slave_pid);
+ clear_port_member_flag(member_pid);
- /* Close slave device when testpmd quit or is killed. */
+ /* Close member device when testpmd quit or is killed. */
if (cl_quit == 1 || f_quit == 1)
- rte_eth_dev_close(slave_pid);
+ rte_eth_dev_close(member_pid);
}
}
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
{
portid_t pi;
struct rte_port *port;
- portid_t slave_pids[RTE_MAX_ETHPORTS];
- int num_slaves = 0;
+ portid_t member_pids[RTE_MAX_ETHPORTS];
+ int num_members = 0;
if (port_id_is_invalid(pid, ENABLED_WARN))
return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
flush_port_owned_resources(pi);
#ifdef RTE_NET_BOND
if (port->bond_flag == 1)
- num_slaves = rte_eth_bond_slaves_get(pi,
- slave_pids, RTE_MAX_ETHPORTS);
+ num_members = rte_eth_bond_members_get(pi,
+ member_pids, RTE_MAX_ETHPORTS);
#endif
rte_eth_dev_close(pi);
/*
- * If this port is bonded device, all slaves under the
+ * If this port is bonded device, all members under the
* device need to be removed or closed.
*/
- if (port->bond_flag == 1 && num_slaves > 0)
- clear_bonding_slave_device(slave_pids,
- num_slaves);
+ if (port->bond_flag == 1 && num_members > 0)
+ clear_bonding_member_device(member_pids,
+ num_members);
}
free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
}
}
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 1;
+ port = &ports[member_pid];
+ port->member_flag = 1;
}
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 0;
+ port = &ports[member_pid];
+ port->member_flag = 0;
}
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
{
struct rte_port *port;
struct rte_eth_dev_info dev_info;
int ret;
- port = &ports[slave_pid];
- ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+ port = &ports[member_pid];
+ ret = eth_dev_info_get_print_err(member_pid, &dev_info);
if (ret != 0) {
TESTPMD_LOG(ERR,
"Failed to get device info for port id %d,"
- "cannot determine if the port is a bonded slave",
- slave_pid);
+ "cannot determine if the port is a bonded member",
+ member_pid);
return 0;
}
- if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+ if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
return 1;
return 0;
}
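Side note on the port_is_bonding_member() change just above: outside of testpmd, an application can make the same "is this port a bonding member" check directly from dev_info, using the RTE_ETH_DEV_BONDED_MEMBER flag this series renames. A rough sketch, with error handling kept to a minimum:

#include <rte_ethdev.h>

/* Return 1 if port_id is currently a member of a bonding device, else 0. */
static int
port_is_member(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) != 0;
}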
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
queueid_t queue_nb; /**< nb. of queues for flow rules */
uint32_t queue_sz; /**< size of a queue for flow rules */
- uint8_t slave_flag : 1, /**< bonding slave port */
+ uint8_t member_flag : 1, /**< bonding member port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before forward */
update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
void dev_set_link_up(portid_t pid);
void dev_set_link_down(portid_t pid);
void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
enum rte_eth_nb_tcs num_tcs,
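The unit tests below keep exercising the usual create / add member / set mode / start sequence, only with the renamed calls. As a quick reference while reading them, here is a condensed and purely illustrative sketch of that sequence (ethdev configuration, queue setup and most error handling omitted, so it is not runnable as-is against a real port):

#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_lcore.h>

/* Create a round-robin bonding port and attach one member to it (sketch). */
static int
create_bond_with_member(uint16_t member_port_id)
{
	int bond_id;

	bond_id = rte_eth_bond_create("net_bonding_example",
			BONDING_MODE_ROUND_ROBIN, rte_socket_id());
	if (bond_id < 0)
		return bond_id;

	if (rte_eth_bond_member_add((uint16_t)bond_id, member_port_id) != 0)
		return -1;

	/* rte_eth_dev_configure() and queue setup on bond_id would go here. */

	return rte_eth_dev_start((uint16_t)bond_id);
}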
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
#define INVALID_BONDING_MODE (-1)
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
struct link_bonding_unittest_params {
int16_t bonded_port_id;
- int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
- uint16_t bonded_slave_count;
+ int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+ uint16_t bonded_member_count;
uint8_t bonding_mode;
uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
struct rte_mempool *mbuf_pool;
- struct rte_ether_addr *default_slave_mac;
+ struct rte_ether_addr *default_member_mac;
struct rte_ether_addr *default_bonded_mac;
/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
static struct link_bonding_unittest_params default_params = {
.bonded_port_id = -1,
- .slave_port_ids = { -1 },
- .bonded_slave_count = 0,
+ .member_port_ids = { -1 },
+ .bonded_member_count = 0,
.bonding_mode = BONDING_MODE_ROUND_ROBIN,
.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
.mbuf_pool = NULL,
- .default_slave_mac = (struct rte_ether_addr *)slave_mac,
+ .default_member_mac = (struct rte_ether_addr *)member_mac,
.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
return 0;
}
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
test_setup(void)
{
int i, nb_mbuf_per_pool;
- struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+ struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
/* Allocate ethernet packet header with space for VLAN header */
if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
}
/* Create / Initialize virtual eth devs */
- if (!slaves_initialized) {
+ if (!members_initialized) {
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
@@ -243,16 +243,16 @@ test_setup(void)
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
- test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+ test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+ TEST_ASSERT(test_params->member_port_ids[i] >= 0,
"Failed to create virtual virtual ethdev %s", pmd_name);
TEST_ASSERT_SUCCESS(configure_ethdev(
- test_params->slave_port_ids[i], 1, 0),
+ test_params->member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s", pmd_name);
}
- slaves_initialized = 1;
+ members_initialized = 1;
}
return 0;
@@ -261,9 +261,9 @@ test_setup(void)
static int
test_create_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
/* Don't try to recreate bonded device if re-running test suite*/
if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
test_params->bonded_port_id, test_params->bonding_mode);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of members %d is great than expected %d.",
+ current_member_count, 0);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members %d is great than expected %d.",
+ current_member_count, 0);
return 0;
}
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
}
static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave (%d) to bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member (%d) to bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
- "Number of slaves (%d) is greater than expected (%d).",
- current_slave_count, test_params->bonded_slave_count + 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+ "Number of members (%d) is greater than expected (%d).",
+ current_member_count, test_params->bonded_member_count + 1);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d).\n",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not as expected (%d).\n",
+ current_member_count, 0);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
return 0;
}
static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
struct rte_ether_addr read_mac_addr, *mac_addr;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count-1]),
- "Failed to remove slave %d from bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count-1]),
+ "Failed to remove member %d from bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
- "Number of slaves (%d) is great than expected (%d).\n",
- current_slave_count, test_params->bonded_slave_count - 1);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+ "Number of members (%d) is great than expected (%d).\n",
+ current_member_count, test_params->bonded_member_count - 1);
- mac_addr = (struct rte_ether_addr *)slave_mac;
+ mac_addr = (struct rte_ether_addr *)member_mac;
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
- test_params->bonded_slave_count-1;
+ test_params->bonded_member_count-1;
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ test_params->member_port_ids[test_params->bonded_member_count-1],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
0);
- test_params->bonded_slave_count--;
+ test_params->bonded_member_count--;
return 0;
}
static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
- test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+ test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
static int bonded_id = 2;
static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
{
- int port_id, current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int port_id, current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- test_add_slave_to_bonded_device();
+ test_add_member_to_bonded_device();
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 1,
- "Number of slaves (%d) is not that expected (%d).",
- current_slave_count, 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 1,
+ "Number of members (%d) is not that expected (%d).",
+ current_member_count, 1);
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
rte_socket_id());
TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
- TEST_ASSERT(rte_eth_bond_slave_add(port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+ TEST_ASSERT(rte_eth_bond_member_add(port_id,
+ test_params->member_port_ids[test_params->bonded_member_count - 1])
< 0,
- "Added slave (%d) to bonded port (%d) unexpectedly.",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ "Added member (%d) to bonded port (%d) unexpectedly.",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
port_id);
- return test_remove_slave_from_bonded_device();
+ return test_remove_member_from_bonded_device();
}
static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
/* Invalid port id */
- current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+ current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- /* Invalid slaves pointer */
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+ /* Invalid members pointer */
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
+ current_member_count = rte_eth_bond_active_members_get(
test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
/* non bonded device*/
- current_slave_count = rte_eth_bond_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
{
int i;
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
{
int i;
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
}
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
{
struct rte_eth_link link_status;
- int current_slave_count, current_bonding_mode, primary_port;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count, current_bonding_mode, primary_port;
+ uint16_t members[RTE_MAX_ETHPORTS];
int retval;
- /* Add slave to bonded device*/
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ /* Add member to bonded device*/
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- /* Change link status of virtual pmd so it will be added to the active
- * slave list of the bonded device*/
+ /*
+ * Change link status of virtual pmd so it will be added to the active
+ * member list of the bonded device.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+ test_params->member_port_ids[test_params->bonded_member_count-1], 1);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
current_bonding_mode, test_params->bonding_mode);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port (%d) is not expected value (%d).",
- primary_port, test_params->slave_port_ids[0]);
+ primary_port, test_params->member_port_ids[0]);
retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
static int
test_stop_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_eth_link link_status;
int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
"Bonded port (%d) status (%d) is not expected value (%d).",
test_params->bonded_port_id, link_status.link_status, 0);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, 0);
return 0;
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- /* Clean up and remove slaves from bonded device */
+ /* Clean up and remove members from bonded device */
free_virtualpmd_tx_queue();
- while (test_params->bonded_slave_count > 0)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "test_remove_slave_from_bonded_device failed");
+ while (test_params->bonded_member_count > 0)
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "test_remove_member_from_bonded_device failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+ TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
bonding_modes[i]),
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+ bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
TEST_ASSERT(bonding_mode < 0,
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
{
int i, j, retval;
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr *expected_mac_addr;
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
/* Invalid port ID */
TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
- test_params->slave_port_ids[i]),
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
- /* Set slave as primary
- * Verify slave it is now primary slave
- * Verify that MAC address of bonded device is that of primary slave
- * Verify that MAC address of all bonded slaves are that of primary slave
+ /* Set member as primary
+ * Verify the member is now the primary member
+ * Verify that MAC address of bonded device is that of primary member
+ * Verify that MAC address of all bonded members are that of primary member
*/
for (i = 0; i < 4; i++) {
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(retval >= 0,
"Failed to read primary port from bonded port (%d)\n",
test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+ TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
"Bonded port (%d) primary port (%d) not expected value (%d)\n",
test_params->bonded_port_id, retval,
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
/* stop/start bonded eth dev to apply new MAC */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
"Failed to start bonded port %d",
test_params->bonded_port_id);
- expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+ expected_mac_addr = (struct rte_ether_addr *)&member_mac;
expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Check primary slave MAC */
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Check primary member MAC */
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (j = 0; j < 4; j++) {
if (j != i) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+ test_params->member_port_ids[j],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[j]);
+ test_params->member_port_ids[j]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary "
+ "member port mac address not set to that of primary "
"port");
}
}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
"read primary port from expectedly");
- /* Test with slave port */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+ /* Test with member port */
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
"read primary port from expectedly\n");
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to stop and remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to stop and remove members from bonded device");
- /* No slaves */
+ /* No members */
TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
"read primary port from expectedly\n");
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
/* Non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
- test_params->slave_port_ids[0], mac_addr),
+ test_params->member_port_ids[0], mac_addr),
"Expected call to failed as invalid port specified.");
/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
"Failed to set MAC address on bonded port (%d)",
test_params->bonded_port_id);
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++) {
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.\n");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++) {
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.\n");
}
/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (i = 0; i < 4; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary port");
+ "member port mac address not set to that of primary port");
}
/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
test_params->bonded_port_id);
TEST_ASSERT_FAIL(
- rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+ rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
"Reset MAC address on bonded port (%d) unexpectedly",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* test resetting mac address on bonded device with no slaves */
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to remove slaves and stop bonded device");
+ /* test resetting mac address on bonded device with no members */
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to remove members and stop bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
return 0;
}
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
static int
test_set_bonded_port_initialization_mac_assignment(void)
{
- int i, slave_count;
+ int i, member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
static int bonded_port_id = -1;
- static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+ static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
- struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+ struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
/* Initialize default values for MAC addresses */
- memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
- memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+ memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+ memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
/*
- * 1. a - Create / configure bonded / slave ethdevs
+ * 1. a - Create / configure bonded / member ethdevs
*/
if (bonded_port_id == -1) {
bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
"Failed to configure bonded ethdev");
}
- if (!mac_slaves_initialized) {
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ if (!mac_members_initialized) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
i + 100;
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
- "eth_slave_%d", i);
+ "eth_member_%d", i);
- slave_port_ids[i] = virtual_ethdev_create(pmd_name,
- &slave_mac_addr, rte_socket_id(), 1);
+ member_port_ids[i] = virtual_ethdev_create(pmd_name,
+ &member_mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(slave_port_ids[i] >= 0,
- "Failed to create slave ethdev %s",
+ TEST_ASSERT(member_port_ids[i] >= 0,
+ "Failed to create member ethdev %s",
pmd_name);
- TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+ TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s",
pmd_name);
}
- mac_slaves_initialized = 1;
+ mac_members_initialized = 1;
}
/*
- * 2. Add slave ethdevs to bonded device
+ * 2. Add member ethdevs to bonded device
*/
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
- slave_port_ids[i]),
- "Failed to add slave (%d) to bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to add member (%d) to bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
- "Number of slaves (%d) is not as expected (%d)",
- slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+ "Number of members (%d) is not as expected (%d)",
+ member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
/* 4. a - Start bonded ethdev
- * b - Enable slave devices
- * c - Verify bonded/slaves ethdev MAC addresses
+ * b - Enable member devices
+ * c - Verify bonded/members ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
"Failed to start bonded pmd eth device %d.",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- slave_port_ids[i], 1);
+ member_port_ids[i], 1);
}
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
+ member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 7. a - Change primary port
* b - Stop / Start bonded port
- * d - Verify slave ethdev MAC addresses
+ * d - Verify member ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
- slave_port_ids[2]),
+ member_port_ids[2]),
"failed to set primary port on bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
+ member_port_ids[2]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 6. a - Stop bonded ethdev
- * b - remove slave ethdevs
- * c - Verify slave ethdevs MACs are restored
+ * b - remove member ethdevs
+ * c - Verify member ethdevs MACs are restored
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
"Failed to stop bonded port %u",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
- slave_port_ids[i]),
- "Failed to remove slave %d from bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to remove member %d from bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of slaves (%d) is great than expected (%d).",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of members (%d) is great than expected (%d).",
+ member_count, 0);
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
return 0;
}
static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
- uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+ uint16_t number_of_members, uint8_t enable_member)
{
/* Configure bonded device */
TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
- "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
- number_of_slaves);
-
- /* Add slaves to bonded device */
- while (number_of_slaves > test_params->bonded_slave_count)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave (%d to bonding port (%d).",
- test_params->bonded_slave_count - 1,
+ "with (%d) members.", test_params->bonded_port_id, bonding_mode,
+ number_of_members);
+
+ /* Add members to bonded device */
+ while (number_of_members > test_params->bonded_member_count)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member (%d to bonding port (%d).",
+ test_params->bonded_member_count - 1,
test_params->bonded_port_id);
/* Set link bonding mode */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- if (enable_slave)
- enable_bonded_slaves();
+ if (enable_member)
+ enable_bonded_members();
return 0;
}
static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
{
int i;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
- "Failed to add slaves to bonded device");
+ "Failed to add members to bonded device");
- /* Enabled slave devices */
- for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+ /* Enabled member devices */
+ for (i = 0; i < test_params->bonded_member_count + 1; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave to bonded port.\n");
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member to bonded port.\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count]);
+ test_params->member_port_ids[test_params->bonded_member_count]);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT 4
#define TEST_LSC_WAIT_TIMEOUT_US 500000
int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
static int
test_status_interrupt(void)
{
- int slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- /* initialized bonding device with T slaves */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* initialized bonding device with T members */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 1,
- TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+ TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
- /* Bring all 4 slaves link status to down and test that we have received a
+ /* Bring all 4 members link status to down and test that we have received a
* lsc interrupts */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
"Received a link status change interrupt unexpectedly");
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, 0);
- /* bring one slave port up so link status will change */
+ /* bring one member port up so link status will change */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- /* Verify that calling the same slave lsc interrupt doesn't cause another
+ /* Verify that calling the same member lsc interrupt doesn't cause another
* lsc interrupt from bonded device */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size <= MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size / test_params->bonded_slave_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ (uint64_t)burst_size / test_params->bonded_member_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
pkt_burst, burst_size), 0,
"tx burst return unexpected value");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
rte_pktmbuf_free(mbufs[i]);
}
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE (64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT (22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (1)
static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
{
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
int i, first_fail_idx, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0,
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
/* Copy references to packets which we expect not to be transmitted */
- first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
- TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+ first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ (TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+ TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
- (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+ (i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
}
- /* Set virtual slave to only fail transmission of
- * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+ /*
+ * Set virtual member to only fail transmission of
+ * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ (uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- int slave_expected_tx_count;
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ int member_expected_tx_count;
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
- slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
- test_params->bonded_slave_count;
+ member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+ test_params->bonded_member_count;
- if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
- slave_expected_tx_count = slave_expected_tx_count -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+ if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+ member_expected_tx_count = member_expected_tx_count -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)slave_expected_tx_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[i],
- (unsigned int)port_stats.opackets, slave_expected_tx_count);
+ (uint64_t)member_expected_tx_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.opackets, member_expected_tx_count);
}
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
- free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
{
struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
int i, j, burst_size = 25;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
- /* Verify bonded slave devices rx count */
- /* Verify slave ports tx stats */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ /* Verify member ports rx stats */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
- /* Reset bonded slaves stats */
- rte_eth_stats_reset(test_params->slave_port_ids[j]);
+ /* Reset bonded members stats */
+ rte_eth_stats_reset(test_params->member_port_ids[j]);
}
/* reset bonded device stats */
rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+ int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
int i, nb_rx;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
burst_size[i], "burst generation failed");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2],
(unsigned int)port_stats.ipackets, burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3],
(unsigned int)port_stats.ipackets, 0);
/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_2),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that all MACs are the same as first slave added to bonded dev */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Verify that all MACs are the same as first member added to bonded dev */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary"
+ "member port (%d) mac address has changed to that of primary"
" port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagate to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(
memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary"
- " port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary"
+ " port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
- sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
- " that of new primary port\n", test_params->slave_port_ids[i]);
+ sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+ " that of new primary port\n", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
int i, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
"Port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
{
struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
- struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
/* NULL all pointers in array to simplify cleanup */
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+ /* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
* in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected. */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves eth_devs link status to down */
+ /* Set 2 members eth_devs link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count,
- TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).\n",
- slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count,
+ TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).\n",
+ member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
burst_size = 20;
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not sent on members with link status down:
*
* 1. Generate test burst of traffic
* 2. Transmit burst on bonded eth_dev
* 3. Verify stats for bonded eth_dev (opackets = burst_size)
- * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
TEST_ASSERT_EQUAL(
generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[0], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[1], (int)port_stats.opackets, 0);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[2], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[3], (int)port_stats.opackets, 0);
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not received on members with link status down:
*
* 1. Generate test bursts of traffic
* 2. Add bursts on to virtual eth_devs
* 3. Rx burst on bonded eth_dev, expected (burst_ size *
- * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+ * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
* 4. Verify stats for bonded eth_dev
- * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
- for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size);
}
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
- (struct rte_ether_addr *)polling_slave_mac;
- char slave_name[RTE_ETH_NAME_MAX_LEN];
+ (struct rte_ether_addr *)polling_member_mac;
+ char member_name[RTE_ETH_NAME_MAX_LEN];
int i;
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
- /* Generate slave name / MAC address */
- snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+ /* Generate member name / MAC address */
+ snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Create slave devices with no ISR Support */
- if (polling_test_slaves[i] == -1) {
- polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+ /* Create member devices with no ISR Support */
+ if (polling_test_members[i] == -1) {
+ polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
rte_socket_id(), 0);
- TEST_ASSERT(polling_test_slaves[i] >= 0,
- "Failed to create virtual virtual ethdev %s\n", slave_name);
+ TEST_ASSERT(polling_test_members[i] >= 0,
+ "Failed to create virtual ethdev %s\n", member_name);
- /* Configure slave */
- TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
- "Failed to configure virtual ethdev %s(%d)", slave_name,
- polling_test_slaves[i]);
+ /* Configure member */
+ TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+ "Failed to configure virtual ethdev %s(%d)", member_name,
+ polling_test_members[i]);
}
- /* Add slave to bonded device */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to add slave %s(%d) to bonded device %d",
- slave_name, polling_test_slaves[i],
+ /* Add member to bonded device */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to add member %s(%d) to bonded device %d",
+ member_name, polling_test_members[i],
test_params->bonded_port_id);
}
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* link status change callback for first slave link up */
+ /* link status change callback for first member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+ virtual_ethdev_set_link_status(polling_test_members[0], 1);
TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
- /* no link status change callback for second slave link up */
+ /* no link status change callback for second member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+ virtual_ethdev_set_link_status(polling_test_members[1], 1);
TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
- /* link status change callback for both slave links down */
+ /* link status change callback for both member links down */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
- virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+ virtual_ethdev_set_link_status(polling_test_members[0], 0);
+ virtual_ethdev_set_link_status(polling_test_members[1], 0);
TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+ /* Clean up and remove members from bonded device */
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_SUCCESS(
- rte_eth_bond_slave_remove(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to remove slave %d from bonded port (%d)",
- polling_test_slaves[i], test_params->bonded_port_id);
+ rte_eth_bond_member_remove(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to remove member %d from bonded port (%d)",
+ polling_test_members[i], test_params->bonded_port_id);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
- if (test_params->slave_port_ids[i] == primary_port) {
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+ if (test_params->member_port_ids[i] == primary_port) {
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
} else {
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, 0);
}
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
pkts_burst, burst_size), 0, "Sending empty burst failed");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
static int
test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
int i, j, burst_size = 17;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
burst_size, "burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
"rte_eth_rx_burst failed");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)", test_params->slave_port_ids[i],
- (unsigned int)port_stats.ipackets, burst_size);
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.ipackets,
+ burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)\n", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected "
- "(%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected "
+ "(%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
} else {
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode enabled",
+ test_params->member_port_ids[i]);
}
}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected. */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active member count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(
test_params->bonded_port_id, 0, &pkt_burst[0][0],
burst_size), burst_size, "rte_eth_tx_burst failed");
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected",
test_params->bonded_port_id);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
/** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
static int
test_balance_xmit_policy_configuration(void)
{
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
/* Invalid port id */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
/* Set xmit policy on non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
- test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
+ test_params->member_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
"Expected call to failed as invalid port specified.");
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
"Expected call to failed as invalid port specified.");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
static int
test_balance_l2_tx_burst(void)
{
- struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
- int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+ struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+ int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
uint16_t pktlen;
int i;
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
"failed to generate packet burst");
/* Send burst 1 on bonded port */
- for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
&pkts_burst[i][0], burst_size[i]),
burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
burst_size[0] + burst_size[1]);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
burst_size[1]);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, 0, pkts_burst_1,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
return balance_l34_tx_burst(0, 0, 0, 0, 1);
}
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 (40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2 (20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT (25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (0)
static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
- struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+ struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+ struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
- struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+ struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, first_tx_fail_idx, tx_count_1, tx_count_2;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0,
- TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
"Failed to generate test packet burst 1");
- first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+ first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
/* copy mbuf references for expected transmission failures */
- for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Failed to generate test packet burst 2");
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Transmit burst 1 */
tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
- TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Transmit burst 2 */
tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+ (uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ (TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1],
+ (uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
static int
test_balance_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+ int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
0, 0), burst_size[i],
"failed to generate packet burst");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
"Failed to initialise bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]),
+ test_params->member_port_ids[1]),
"Failed to set bonded port (%d) primary port to (%d)\n",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected\n",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected\n",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
"Failed to set balance xmit policy.");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify that current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- /* Send to sets of packet burst and verify that they are balanced across
- * slaves */
+ /*
+ * Send two sets of packet bursts and verify that they are balanced across
+ * members.
+ */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[2], (int)port_stats.opackets,
+ test_params->member_port_ids[2], (int)port_stats.opackets,
burst_size);
- /* verify that all packets get send on primary slave when no other slaves
+ /* verify that all packets get sent on primary member when no other members
* are available */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 1);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 1);
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size + burst_size);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 1);
+ test_params->member_port_ids[2], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"Failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.ipackets,
burst_size * 3);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 2, 1),
"Failed to initialise bonded device");
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size * test_params->bonded_slave_count,
+ (uint64_t)burst_size * test_params->bonded_member_count,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, burst_size);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
"transmitted an unexpected number of packets");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT (3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE (40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT (15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT (10)
static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
- struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+ struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+ struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0,
- TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
- expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
}
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[0],
+ test_params->member_port_ids[0],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[1],
+ test_params->member_port_ids[1],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[2],
+ test_params->member_port_ids[2],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[0],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[0],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[1],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ test_params->member_port_ids[1],
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[2],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[2],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Transmit burst */
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
}
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Verify that all mbufs who transmission failed have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
static int
test_broadcast_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+ int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
burst_size[i], "failed to generate packet burst");
}
- /* Add rx data to slave 0 */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs allocate for rx testing */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
- /* Verify that all MACs are the same as first slave added to bonded
+ /* Verify that all MACs are the same as first member added to bonded
* device */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary "
+ "member port (%d) mac address has changed to that of primary "
"port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
1), "Failed to initialise bonded device");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- for (i = 0; i < test_params->bonded_slave_count; i++)
- rte_eth_stats_reset(test_params->slave_port_ids[i]);
+ for (i = 0; i < test_params->bonded_member_count; i++)
+ rte_eth_stats_reset(test_params->member_port_ids[i]);
- /* Verify that pkts are not sent on slaves with link status down */
+ /* Verify that pkts are not sent on members with link status down */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"rte_eth_tx_burst failed\n");
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
- TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+ TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
"(%d) port_stats.opackets (%d) not as expected (%d)\n",
test_params->bonded_port_id, (int)port_stats.opackets,
- burst_size * slave_count);
+ burst_size * member_count);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
free(test_params->pkt_eth_hdr);
test_params->pkt_eth_hdr = NULL;
- /* Clean up and remove slaves from bonded device */
- remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ remove_members_and_stop_bonded_device();
}
static void
free_virtualpmd_tx_queue(void)
{
- int i, slave_port, to_free_cnt;
+ int i, member_port, to_free_cnt;
struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
/* Free tx queue of virtual pmd */
- for (slave_port = 0; slave_port < test_params->bonded_slave_count;
- slave_port++) {
+ for (member_port = 0; member_port < test_params->bonded_member_count;
+ member_port++) {
to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_port],
+ test_params->member_port_ids[member_port],
pkts_to_free, MAX_PKT_BURST);
for (i = 0; i < to_free_cnt; i++)
rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
uint16_t pktlen;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
(BONDING_MODE_TLB, 1, 3, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
RTE_ETHER_TYPE_IPV4, 0, 0);
} else {
initialize_eth_header(test_params->pkt_eth_hdr,
- (struct rte_ether_addr *)test_params->default_slave_mac,
+ (struct rte_ether_addr *)test_params->default_member_mac,
(struct rte_ether_addr *)dst_mac_0,
RTE_ETHER_TYPE_IPV4, 0, 0);
}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
sum_ports_opackets += port_stats[i].opackets;
}
TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
- "Total packets sent by slaves is not equal to packets sent by bond interface");
+ "Total packets sent by members is not equal to packets sent by bond interface");
- /* checking if distribution of packets is balanced over slaves */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* checking if distribution of packets is balanced over members */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT(port_stats[i].obytes > 0 &&
port_stats[i].obytes < all_bond_obytes,
- "Packets are not balanced over slaves");
+ "Packets are not balanced over members");
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
burst_size);
TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
- /* Clean ugit checkout masterp and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
static int
test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
uint16_t i, j, nb_rx, burst_size = 17;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 4, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 2, 1),
"Failed to initialize bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
- * MAC hasn't been changed */
+ /*
+ * Verify that the bonded MAC is that of the first member and that the other member
+ * MAC hasn't been changed.
+ */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, (int)4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
- * has changed */
+ /*
+ * Bring primary port down, verify that active member count is 3 and primary
+ * has changed.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
rte_delay_us(500000);
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
for (i = 0; i < 4; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
rte_delay_us(11000);
}
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
burst_size)
return -1;
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ALB_SLAVE_COUNT 2
+#define TEST_ALB_MEMBER_COUNT 2
static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
struct rte_ether_hdr *eth_pkt;
struct rte_arp_hdr *arp_pkt;
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
- slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count;
+ member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
RTE_ARP_OP_REPLY);
rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
- slave_mac1 =
- rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 =
- rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 =
+ rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 =
+ rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
* Checking if packets are properly distributed on bonding ports. Packets
* 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+ int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
- * Checking if update ARP packets were properly send on slave ports.
+ * Checking if update ARP packets were properly sent on member ports.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+ test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
nb_pkts_sum += nb_pkts;
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
/*
* Checking if VLAN headers in generated ARP Update packet are correct.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
retval = 0;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
burst_size = 32;
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
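The MAC checks in the ALB cases above reduce to one per-member comparison: the ARP sender hardware address of a frame drained from a member's TX queue must match that member's own MAC. A minimal sketch of that comparison, illustrative only and reusing names from the loops above:

    struct rte_ether_hdr *eth_pkt = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
    struct rte_arp_hdr *arp_pkt = (struct rte_arp_hdr *)(eth_pkt + 1);
    struct rte_ether_addr *member_mac =
            rte_eth_devices[test_params->member_port_ids[member_idx]].data->mac_addrs;

    /* ALB rewrites the ARP source MAC to that of the transmitting member */
    if (!rte_is_same_ether_addr(member_mac, &arp_pkt->arp_data.arp_sha))
        return -1;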
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite = {
.unit_test_cases = {
TEST_CASE(test_create_bonded_device),
TEST_CASE(test_create_bonded_device_with_invalid_params),
- TEST_CASE(test_add_slave_to_bonded_device),
- TEST_CASE(test_add_slave_to_invalid_bonded_device),
- TEST_CASE(test_remove_slave_from_bonded_device),
- TEST_CASE(test_remove_slave_from_invalid_bonded_device),
- TEST_CASE(test_get_slaves_from_bonded_device),
- TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
- TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+ TEST_CASE(test_add_member_to_bonded_device),
+ TEST_CASE(test_add_member_to_invalid_bonded_device),
+ TEST_CASE(test_remove_member_from_bonded_device),
+ TEST_CASE(test_remove_member_from_invalid_bonded_device),
+ TEST_CASE(test_get_members_from_bonded_device),
+ TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+ TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
TEST_CASE(test_start_bonded_device),
TEST_CASE(test_stop_bonded_device),
TEST_CASE(test_set_bonding_mode),
- TEST_CASE(test_set_primary_slave),
+ TEST_CASE(test_set_primary_member),
TEST_CASE(test_set_explicit_bonded_mac),
TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
TEST_CASE(test_status_interrupt),
- TEST_CASE(test_adding_slave_after_bonded_device_started),
+ TEST_CASE(test_adding_member_after_bonded_device_started),
TEST_CASE(test_roundrobin_tx_burst),
- TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
- TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
- TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+ TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+ TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+ TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
- TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+ TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
TEST_CASE(test_activebackup_verify_mac_assignment),
- TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+ TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
TEST_CASE(test_balance_xmit_policy_configuration),
TEST_CASE(test_balance_l2_tx_burst),
TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
- TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+ TEST_CASE(test_balance_tx_burst_member_tx_fail),
TEST_CASE(test_balance_rx_burst),
TEST_CASE(test_balance_verify_promiscuous_enable_disable),
TEST_CASE(test_balance_verify_mac_assignment),
- TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
TEST_CASE(test_tlb_tx_burst),
TEST_CASE(test_tlb_rx_burst),
TEST_CASE(test_tlb_verify_mac_assignment),
TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
- TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+ TEST_CASE(test_tlb_verify_member_link_status_change_failover),
TEST_CASE(test_alb_change_mac_in_reply_sent),
TEST_CASE(test_alb_reply_from_client),
TEST_CASE(test_alb_receive_vlan_reply),
TEST_CASE(test_alb_ipv4_tx),
TEST_CASE(test_broadcast_tx_burst),
- TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+ TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
TEST_CASE(test_broadcast_rx_burst),
TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
TEST_CASE(test_broadcast_verify_mac_assignment),
- TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
TEST_CASE(test_reconfigure_bonded_device),
TEST_CASE(test_close_bonded_device),
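For readers following the rename, the suite above leans on the renamed member-query helpers; a minimal usage sketch, assuming their prototypes match the calls shown in the hunks above, with bonded_port_id standing in for test_params->bonded_port_id:

    uint16_t members[RTE_MAX_ETHPORTS];
    int member_count, active_count;

    /* All members attached to the bonded port */
    member_count = rte_eth_bond_members_get(bonded_port_id, members,
            RTE_MAX_ETHPORTS);
    /* Only members currently active (link up and selected) */
    active_count = rte_eth_bond_active_members_get(bonded_port_id, members,
            RTE_MAX_ETHPORTS);
    if (member_count < 0 || active_count < 0)
        return -1;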
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
#define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
-#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT ("net_virt_%d_tx")
#define INVALID_SOCKET_ID (-1)
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
};
-struct slave_conf {
+struct member_conf {
struct rte_ring *rx_queue;
struct rte_ring *tx_queue;
uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
struct link_bonding_unittest_params {
uint8_t bonded_port_id;
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
-#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
static struct link_bonding_unittest_params test_params = {
.bonded_port_id = INVALID_PORT_ID,
- .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+ .member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
.mbuf_pool = NULL,
};
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test and satisfy given condition.
*
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* _condition condition that need to be checked
*/
#define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
if (!!(_condition))
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
* device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* */
-#define FOR_EACH_SLAVE(_i, _slave) \
- FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+ FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
/*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from a member's TX queue.
+ * member member port
* buffer for packets
* size size of buffer
* return number of packets or negative error number
*/
static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+ return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
size, NULL);
}
/*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into a member's RX queue.
+ * member member port
* buffer for packets
* size number of packets to be injected
* return number of queued packets or negative error number
*/
static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+ return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
size, NULL);
}
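The two ring helpers above, combined with the FOR_EACH_MEMBER macro, are how the mode 4 tests reach each member's virtual queues directly. A minimal draining sketch, illustrative only and using just the names defined in this hunk:

    struct member_conf *member;
    struct rte_mbuf *bufs[32];
    uint16_t i;
    int nb;

    FOR_EACH_MEMBER(i, member) {
        /* Pull whatever the bonded port transmitted through this member */
        nb = member_get_pkts(member, bufs, RTE_DIM(bufs));
        while (nb-- > 0)
            rte_pktmbuf_free(bufs[nb]);
    }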
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
}
static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
{
struct rte_ether_addr addr, addr_check;
int retval;
/* Some sanity check */
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
- RTE_VERIFY(slave->bonded == 0);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+ RTE_VERIFY(member->bonded == 0);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- rte_ether_addr_copy(&slave_mac_default, &addr);
- addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+ rte_ether_addr_copy(&member_mac_default, &addr);
+ addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
- rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+ rte_eth_dev_mac_addr_remove(member->port_id, &addr);
- TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
- "Failed to set slave MAC address");
+ TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+ "Failed to set member MAC address");
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
- slave->port_id),
- "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
- (uint8_t)(slave - test_params.slave_ports), slave->port_id,
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+ member->port_id),
+ "Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+ (uint8_t)(member - test_params.member_ports), member->port_id,
test_params.bonded_port_id);
- slave->bonded = 1;
+ member->bonded = 1;
if (start) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
- "Failed to start slave %u", slave->port_id);
+ TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+ "Failed to start member %u", member->port_id);
}
- retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
- TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+ retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+ TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
strerror(-retval));
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
- "Slave MAC address is not as expected");
+ "Member MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(member->lacp_parnter_state == 0);
return 0;
}
static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
{
- ptrdiff_t slave_idx = slave - test_params.slave_ports;
+ ptrdiff_t member_idx = member - test_params.member_ports;
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
- RTE_VERIFY(slave->bonded == 1);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(member->bonded == 1);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u rx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->tx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
- slave->port_id), 0,
- "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
- (uint8_t)slave_idx, slave->port_id,
+ TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+ member->port_id), 0,
+ "Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+ (uint8_t)member_idx, member->port_id,
test_params.bonded_port_id);
- slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ member->bonded = 0;
+ member->lacp_parnter_state = 0;
return 0;
}
static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
- lacpdu_rx_count[slave_id]++;
+ lacpdu_rx_count[member_id]++;
rte_pktmbuf_free(lacp_pkt);
}
static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
{
uint8_t i;
int ret;
RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
- for (i = 0; i < slave_count; i++) {
- TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+ for (i = 0; i < member_count; i++) {
+ TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
"Failed to add port %u to bonded device.\n",
- test_params.slave_ports[i].port_id);
+ test_params.member_ports[i].port_id);
}
/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
int retval;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint16_t i;
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
"Failed to stop bonded port %u",
test_params.bonded_port_id);
- FOR_EACH_SLAVE(i, slave)
- remove_slave(slave);
+ FOR_EACH_MEMBER(i, member)
+ remove_member(member);
- retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
- RTE_DIM(slaves));
+ retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+ RTE_DIM(members));
TEST_ASSERT_EQUAL(retval, 0,
- "Expected bonded device %u have 0 slaves but returned %d.",
+ "Expected bonded device %u have 0 members but returned %d.",
test_params.bonded_port_id, retval);
- FOR_EACH_PORT(i, slave) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+ FOR_EACH_PORT(i, member) {
+ TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
"Failed to stop bonded port %u",
- slave->port_id);
+ member->port_id);
- TEST_ASSERT(slave->bonded == 0,
- "Port id=%u is still marked as enslaved.", slave->port_id);
+ TEST_ASSERT(member->bonded == 0,
+ "Port id=%u is still marked as enmemberd.", member->port_id);
}
return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
{
int retval, nb_mbuf_per_pool;
char name[RTE_ETH_NAME_MAX_LEN];
- struct slave_conf *port;
+ struct member_conf *port;
const uint8_t socket_id = rte_socket_id();
uint16_t i;
@@ -400,10 +400,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(i, port) {
- port = &test_params.slave_ports[i];
+ port = &test_params.member_ports[i];
if (port->rx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
}
if (port->tx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
}
if (port->port_id == INVALID_PORT_ID) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
retval = rte_eth_from_rings(name, &port->rx_queue, 1,
&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
* frame but not LACP
*/
static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
/* Change source address to partner address */
rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ member->lacp_parnter_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
}
/*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from given member, searches for LACP packets and replies to them.
*
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from member. Looks for LACP packet. Drops
* all other packets. Prepares response LACP and sends it back.
*
* return number of LACP received and replied, -1 on error.
*/
static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
{
int retval;
struct rte_mbuf *rx_buf[MAX_PKT_BURST];
struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
uint16_t lacp_tx_buf_cnt = 0, i;
- retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
- TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
- slave->port_id);
+ retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+ TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+ member->port_id);
for (i = 0; i < (uint16_t)retval; i++) {
- if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+ if (make_lacp_reply(member, rx_buf[i]) == 0) {
/* reply with actor's LACP */
lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
if (lacp_tx_buf_cnt == 0)
return 0;
- retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+ retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
if (retval <= lacp_tx_buf_cnt) {
/* retval might be negative */
for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
}
TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
- "Failed to equeue lacp packets into slave %u tx queue.",
- slave->port_id);
+ "Failed to equeue lacp packets into member %u tx queue.",
+ member->port_id);
return lacp_tx_buf_cnt;
}
/*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given member tx queue contains packets that make the mode 4
+ * handshake complete. It will drain the member queue.
* return 0 if handshake not completed, 1 if handshake was complete,
*/
static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
{
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return member->lacp_parnter_state == expected_state;
}
static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
static int
bond_handshake(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *buf[MAX_PKT_BURST];
uint16_t nb_pkts;
- uint8_t all_slaves_done, i, j;
- uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+ uint8_t all_members_done, i, j;
+ uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
const unsigned delay = bond_get_update_timeout_ms();
/* Exchange LACP frames */
- all_slaves_done = 0;
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ all_members_done = 0;
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
rte_delay_ms(delay);
- all_slaves_done = 1;
- FOR_EACH_SLAVE(j, slave) {
- /* If response already send, skip slave */
+ all_members_done = 1;
+ FOR_EACH_MEMBER(j, member) {
+ /* If response already sent, skip member */
if (status[j] != 0)
continue;
- if (bond_handshake_reply(slave) < 0) {
- all_slaves_done = 0;
+ if (bond_handshake_reply(member) < 0) {
+ all_members_done = 0;
break;
}
- status[j] = bond_handshake_done(slave);
+ status[j] = bond_handshake_done(member);
if (status[j] == 0)
- all_slaves_done = 0;
+ all_members_done = 0;
}
nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
}
/* If response didn't send - report failure */
- TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+ TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
/* If flags doesn't match - report failure */
- return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+ return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
}
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
static int
test_mode4_lacp(void)
{
int retval;
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
/* Test LACP handshake function */
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
{
int retval;
/* Test and verify for Stable mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_STABLE,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify for Bandwidth mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify selection for count mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_COUNT,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
}
static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
struct rte_ether_addr *src_mac,
struct rte_ether_addr *dst_mac, uint16_t count)
{
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
if (retval != (int)count)
return retval;
- retval = slave_put_pkts(slave, pkts, count);
+ retval = member_put_pkts(member, pkts, count);
if (retval > 0 && retval != count)
free_pkts(&pkts[retval], count - retval);
TEST_ASSERT_EQUAL(retval, count,
- "Failed to enqueue packets into slave %u RX queue", slave->port_id);
+ "Failed to enqueue packets into member %u RX queue", member->port_id);
return TEST_SUCCESS;
}
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
static int
test_mode4_rx(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
struct rte_ether_addr dst_mac;
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -838,7 +838,7 @@ test_mode4_rx(void)
dst_mac.addr_bytes[0] += 2;
/* First try with promiscuous mode enabled.
- * Add 2 packets to each slave. First with bonding MAC address, second with
+ * Add 2 packets to each member. First with bonding MAC address, second with
* different. Check if we received all of them. */
retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect 2 packets per slave */
+ /* Expect 2 packets per member */
expected_pkts_cnt += 2;
}
@@ -894,16 +894,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect only one packet per slave */
+ /* Expect only one packet per member */
expected_pkts_cnt += 1;
}
@@ -927,19 +927,19 @@ test_mode4_rx(void)
TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
"Expected %u packets but received only %d", expected_pkts_cnt, retval);
- /* Link down test: simulate link down for first slave. */
+ /* Link down test: simulate link down for first member. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- /* Find first slave and make link down on it*/
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ /* Find first member and make link down on it */
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding */
for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
- /* Put packet to each slave */
- FOR_EACH_SLAVE(i, slave) {
+ /* Put packet to each member */
+ FOR_EACH_MEMBER(i, member) {
void *pkt = NULL;
- dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+ dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
- src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+ src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
if (retval > 0)
free_pkts(pkts, retval);
- while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+ while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
rte_pktmbuf_free(pkt);
- if (slave_down_id == slave->port_id)
+ if (member_down_id == member->port_id)
TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
else
TEST_ASSERT_NOT_EQUAL(retval, 0,
- "Expected to receive some packets on slave %u.",
- slave->port_id);
- rte_eth_dev_start(slave->port_id);
+ "Expected to receive some packets on member %u.",
+ member->port_id);
+ rte_eth_dev_start(member->port_id);
for (j = 0; j < 5; j++) {
- TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+ TEST_ASSERT(bond_handshake_reply(member) >= 0,
"Handshake after link up");
- if (bond_handshake_done(slave) == 1)
+ if (bond_handshake_done(member) == 1)
break;
}
- TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
}
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
static int
test_mode4_tx_burst(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets were transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+ "member %u unexpectedly transmitted %d SLOW packets", member->port_id,
slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
pkts_cnt += normal_cnt;
}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- /* Link down test:
- * simulate link down for first slave. */
+ /*
+ * Link down test:
+ * simulate link down for first member.
+ */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding. */
for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets was transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
- if (slave_down_id == slave->port_id) {
+ if (member_down_id == member->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
- normal_cnt + slow_cnt, slave->port_id);
+ "member %u enexpectedly transmitted %u packets",
+ normal_cnt + slow_cnt, member->port_id);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets",
- slave->port_id, slow_cnt);
+ "member %u unexpectedly transmitted %d SLOW packets",
+ member->port_id, slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
}
pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
{
struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
rte_ether_addr_copy(&parnter_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
offsetof(struct marker, reserved_90) -
offsetof(struct marker, requester_port);
RTE_VERIFY(marker_hdr->marker.info_length == 16);
- marker_hdr->marker.requester_port = slave->port_id + 1;
+ marker_hdr->marker.requester_port = member->port_id + 1;
marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
marker_hdr->marker.terminator_length = 0;
}
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
static int
test_mode4_marker(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *pkts[MAX_PKT_BURST];
struct rte_mbuf *marker_pkt;
struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
uint8_t i, j;
const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
- retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+ retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
delay = bond_get_update_timeout_ms();
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
- init_marker(marker_pkt, slave);
+ init_marker(marker_pkt, member);
- retval = slave_put_pkts(slave, &marker_pkt, 1);
+ retval = member_put_pkts(member, &marker_pkt, 1);
if (retval != 1)
rte_pktmbuf_free(marker_pkt);
TEST_ASSERT_EQUAL(retval, 1,
- "Failed to send marker packet to slave %u", slave->port_id);
+ "Failed to send marker packet to member %u", member->port_id);
for (j = 0; j < 20; ++j) {
rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
/* Check if LACP packet was send by state machines
First and only packet must be a maker response */
- retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+ retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
if (retval == 0)
continue;
if (retval > 1)
free_pkts(pkts, retval);
- TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+ TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
nb_pkts = retval;
marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
TEST_ASSERT(j < 20, "Marker response not found");
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
static int
test_mode4_expired(void)
{
- struct slave_conf *slave, *exp_slave = NULL;
+ struct member_conf *member, *exp_member = NULL;
struct rte_mbuf *pkts[MAX_PKT_BURST];
int retval;
uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
struct rte_eth_bond_8023ad_conf conf;
- retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
0);
/* Set custom timeouts to make test last shorter. */
rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
/* Wait for new settings to be applied. */
for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
- FOR_EACH_SLAVE(j, slave)
- bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(j, member)
+ bond_handshake_reply(member);
rte_delay_ms(conf.update_timeout_ms);
}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- /* Find first slave */
- FOR_EACH_SLAVE(i, slave) {
- exp_slave = slave;
+ /* Find first member */
+ FOR_EACH_MEMBER(i, member) {
+ exp_member = member;
break;
}
- RTE_VERIFY(exp_slave != NULL);
+ RTE_VERIFY(exp_member != NULL);
/* When one of partners do not send or respond to LACP frame in
* conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
retval);
- FOR_EACH_SLAVE(i, slave) {
- retval = bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(i, member) {
+ retval = bond_handshake_reply(member);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that suppose to be expired. */
- if (slave == exp_slave) {
- while (rte_ring_count(slave->rx_queue) > 0) {
+ /* Remove reply for member that is supposed to be expired. */
+ if (member == exp_member) {
+ while (rte_ring_count(member->rx_queue) > 0) {
void *pkt = NULL;
- rte_ring_dequeue(slave->rx_queue, &pkt);
+ rte_ring_dequeue(member->rx_queue, &pkt);
rte_pktmbuf_free(pkt);
}
}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
retval);
}
- /* After test only expected slave should be in EXPIRED state */
- FOR_EACH_SLAVE(i, slave) {
- if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
- "Slave %u should be in expired.", slave->port_id);
+ /* After test only expected member should be in EXPIRED state */
+ FOR_EACH_MEMBER(i, member) {
+ if (member == exp_member)
+ TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+ "Member %u should be in expired.", member->port_id);
else
- TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
- "Slave %u should be operational.", slave->port_id);
+ TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+ "Member %u should be operational.", member->port_id);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
* . try to transmit lacpdu (should fail)
* . try to set collecting and distributing flags (should fail)
* reconfigure w/external sm
- * . transmit one lacpdu on each slave using new api
- * . make sure each slave receives one lacpdu using the callback api
- * . transmit one data pdu on each slave (should fail)
+ * . transmit one lacpdu on each member using new api
+ * . make sure each member receives one lacpdu using the callback api
+ * . transmit one data pdu on each member (should fail)
* . enable distribution and collection, send one data pdu each again
*/
int retval;
- struct slave_conf *slave = NULL;
+ struct member_conf *member = NULL;
uint8_t i;
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]),
- "Slave should not allow manual LACP xmit");
+ member->port_id, lacp_tx_buf[i]),
+ "Member should not allow manual LACP xmit");
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
test_params.bonded_port_id,
- slave->port_id, 1),
- "Slave should not allow external state controls");
+ member->port_id, 1),
+ "Member should not allow external state controls");
}
free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
test_mode4_ext_lacp(void)
{
int retval;
- struct slave_conf *slave = NULL;
- uint8_t all_slaves_done = 0, i;
+ struct member_conf *member = NULL;
+ uint8_t all_members_done = 0, i;
uint16_t nb_pkts;
const unsigned int delay = bond_get_update_timeout_ms();
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
- struct rte_mbuf *buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+ struct rte_mbuf *buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
for (i = 0; i < 30; ++i)
rte_delay_ms(delay);
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
retval = rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]);
+ member->port_id, lacp_tx_buf[i]);
TEST_ASSERT_SUCCESS(retval,
- "Slave should allow manual LACP xmit");
+ "Member should allow manual LACP xmit");
}
nb_pkts = bond_tx(NULL, 0);
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
- FOR_EACH_SLAVE(i, slave) {
- nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
- TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+ FOR_EACH_MEMBER(i, member) {
+ nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+ TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
nb_pkts, i);
- slave_put_pkts(slave, buf, nb_pkts);
+ member_put_pkts(member, buf, nb_pkts);
}
nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
/* wait for the periodic callback to run */
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
uint8_t s, total = 0;
rte_delay_ms(delay);
- FOR_EACH_SLAVE(s, slave) {
- total += lacpdu_rx_count[slave->port_id];
+ FOR_EACH_MEMBER(s, member) {
+ total += lacpdu_rx_count[member->port_id];
}
- if (total >= SLAVE_COUNT)
- all_slaves_done = 1;
+ if (total >= MEMBER_COUNT)
+ all_members_done = 1;
}
- FOR_EACH_SLAVE(i, slave) {
- TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
- "Slave port %u should have received 1 lacpdu (count=%u)",
- slave->port_id,
- lacpdu_rx_count[slave->port_id]);
+ FOR_EACH_MEMBER(i, member) {
+ TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+ "Member port %u should have received 1 lacpdu (count=%u)",
+ member->port_id,
+ lacpdu_rx_count[member->port_id]);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
static int
check_environment(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i, env_state;
- uint16_t slaves[RTE_DIM(test_params.slave_ports)];
- int slaves_count;
+ uint16_t members[RTE_DIM(test_params.member_ports)];
+ int members_count;
env_state = 0;
FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
break;
}
- slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
- slaves, RTE_DIM(slaves));
+ members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+ members, RTE_DIM(members));
- if (slaves_count != 0)
+ if (members_count != 0)
env_state |= 0x10;
TEST_ASSERT_EQUAL(env_state, 0,
"Environment not clean (port %u):%s%s%s%s%s",
port->port_id,
- env_state & 0x01 ? " slave rx queue not clean" : "",
- env_state & 0x02 ? " slave tx queue not clean" : "",
- env_state & 0x04 ? " port marked as enslaved" : "",
- env_state & 0x80 ? " slave state is not reset" : "",
- env_state & 0x10 ? " slave count not equal 0" : ".");
+ env_state & 0x01 ? " member rx queue not clean" : "",
+ env_state & 0x02 ? " member tx queue not clean" : "",
+ env_state & 0x04 ? " port marked as enmemberd" : "",
+ env_state & 0x80 ? " member state is not reset" : "",
+ env_state & 0x10 ? " member count not equal 0" : ".");
return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
static int
test_mode4_executor(int (*test_func)(void))
{
- struct slave_conf *port;
+ struct member_conf *port;
int test_result;
uint8_t i;
void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
FOR_EACH_PORT(i, port) {
--git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RXTX_RING_SIZE 1024
#define RXTX_QUEUE_COUNT 4
#define BONDED_DEV_NAME ("net_bonding_rss")
-#define SLAVE_DEV_NAME_FMT ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT ("rssconf_member%d_q%d")
#define NUM_MBUFS 8191
#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-struct slave_conf {
+struct member_conf {
uint16_t port_id;
struct rte_eth_dev_info dev_info;
@@ -54,7 +54,7 @@ struct slave_conf {
uint8_t rss_key[40];
struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- uint8_t is_slave;
+ uint8_t is_member;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
};
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
static struct link_bonding_rssconf_unittest_params test_params = {
.bond_port_id = INVALID_PORT_ID,
- .slave_ports = {
- [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+ .member_ports = {
+ [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
},
.mbuf_pool = NULL,
};
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_i]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
static int
configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
}
/**
- * Remove all slaves from bonding
+ * Remove all members from bonding
*/
static int
-remove_slaves(void)
+remove_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+ port = &test_params.member_ports[n];
+ if (port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
test_params.bond_port_id, port->port_id),
- "Cannot remove slave %d from bonding", port->port_id);
- port->is_slave = 0;
+ "Cannot remove member %d from bonding", port->port_id);
+ port->is_member = 0;
}
}
@@ -173,30 +173,30 @@ remove_slaves(void)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+ TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
"Failed to stop port %u", test_params.bond_port_id);
return TEST_SUCCESS;
}
/**
- * Add all slaves to bonding
+ * Add all members to bonding
*/
static int
-bond_slaves(void)
+bond_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (!port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot attach slave %d to the bonding",
+ port = &test_params.member_ports[n];
+ if (!port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot attach member %d to the bonding",
port->port_id);
- port->is_slave = 1;
+ port->is_member = 1;
}
}
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
}
/**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if members RETA is synchronized with bonding port. Returns 1 if member
* port is synced with bonding port.
*/
static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
{
unsigned i;
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
}
/**
- * Fetch slaves RETA
+ * Fetch members RETA
*/
static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
unsigned j;
for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
}
/**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add member to check if members configuration is synced with
+ * the bonding ports values after adding new member.
*/
static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
{
- struct slave_conf *port = &(test_params.slave_ports[0]);
+ struct member_conf *port = &(test_params.member_ports[0]);
- /* 1. Remove first slave from bonding */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
- port->port_id), "Cannot remove slave #d from bonding");
+ /* 1. Remove first member from bonding */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+ port->port_id), "Cannot remove member #d from bonding");
- /* 2. Change removed (ex-)slave and bonding configuration to different
+ /* 2. Change removed (ex-)member and bonding configuration to different
* values
*/
reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
bond_reta_fetch();
reta_set(port->port_id, 2, port->dev_info.reta_size);
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 0,
- "Removed slave didn't should be synchronized with bonding port");
+ "Removed member didn't should be synchronized with bonding port");
- /* 3. Add (ex-)slave and check if configuration changed*/
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot add slave");
+ /* 3. Add (ex-)member and check if configuration changed*/
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot add member");
bond_reta_fetch();
- slave_reta_fetch(port);
+ member_reta_fetch(port);
return reta_check_synced(port);
}
/**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
*/
static int
test_propagate(void)
{
unsigned i;
uint8_t n;
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t bond_rss_key[40];
struct rte_eth_rss_conf bond_rss_conf;
@@ -349,18 +349,18 @@ test_propagate(void)
retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
&bond_rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
- "Hash function not propagated for slave %d",
+ "Hash function not propagated for member %d",
port->port_id);
}
@@ -376,11 +376,11 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
memset(port->rss_conf.rss_key, 0, 40);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
}
memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&(port->rss_conf));
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
/* compare keys */
retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
sizeof(bond_rss_key));
- TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+ TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
port->port_id);
}
}
@@ -416,10 +416,10 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
}
TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
bond_reta_fetch();
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
}
}
@@ -459,29 +459,29 @@ test_rss(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
- TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+ TEST_ASSERT(member_remove_and_add() == 1, "Failed to remove and re-add member.");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
/**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and members.
*/
static int
test_rss_config_lazy(void)
{
struct rte_eth_rss_conf bond_rss_conf = {0};
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t rss_key[40];
uint64_t rss_hf;
int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
}
- /* Set all keys to zero for all slaves */
+ /* Set all keys to zero for all members */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+ TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
memset(port->rss_key, 0, sizeof(port->rss_key));
port->rss_conf.rss_key = port->rss_key;
port->rss_conf.rss_key_len = sizeof(port->rss_key);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
}
/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
/* Test RETA propagation */
for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
}
retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
@@ -579,13 +579,13 @@ test_setup(void)
int retval;
int port_id;
char name[256];
- struct slave_conf *port;
+ struct member_conf *port;
struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
if (test_params.mbuf_pool == NULL) {
test_params.mbuf_pool = rte_pktmbuf_pool_create(
- "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+ "RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
port_id = rte_eth_dev_count_avail();
- snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+ snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
retval = rte_vdev_init(name, "size=64,copy=0");
TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
}
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
----------
A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMD's are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
testpmd> create bonded device 1 0
Created new bonded device net_bond_testpmd_0 on (port 2).
- testpmd> add bonding slave 0 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 0 2
+ testpmd> add bonding member 1 2
testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
Set primary to P1 before starting bonding port.
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
Use P2 only for forwarding.
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
testpmd> start
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
.. code-block:: console
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
testpmd> clear port stats all
testpmd> set bonding primary 0 2
- testpmd> remove bonding slave 1 2
+ testpmd> remove bonding member 1 2
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
testpmd> show port stats all.
testpmd> show config fwd
testpmd> show bonding config 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console
- testpmd> remove bonding slave 0 2
+ testpmd> remove bonding member 0 2
testpmd> show bonding config 2
testpmd> port stop 0
testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
.. code-block:: console
- dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
- (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+ dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
+ (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
Vector Processing
-----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
v:langID="1033"
v:metric="true"
v:viewMarkup="false"><v:userDefs><v:ud
- v:nameU="msvSubprocessMaster"
+ v:nameU="msvSubprocessMain"
v:prompt=""
v:val="VT4(Rectangle)" /><v:ud
v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..519a364105 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
``rte_eth_dev`` ports of the same speed and duplex to provide similar
capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
and a switch. The new bonded PMD will then process these interfaces based on
the mode of operation specified to provide support for features such as
redundant links, fault tolerance and/or load balancing.
The librte_net_bond library exports a C API which provides an API for the
creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
.. note::
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides load balancing and fault tolerance by transmission of
- packets in sequential order from the first available slave device through
+ packets in sequential order from the first available member device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
Active Backup (Mode 1)
- In this mode only one slave in the bond is active at any time, a different
- slave becomes active if, and only if, the primary active slave fails,
- thereby providing fault tolerance to slave failure. The single logical
+ In this mode only one member in the bond is active at any time, a different
+ member becomes active if, and only if, the primary active member fails,
+ thereby providing fault tolerance to member failure. The single logical
bonded interface's MAC address is externally visible on only one NIC (port)
to avoid confusing the network switch.
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides transmit load balancing (based on the selected
transmission policy) and fault tolerance. The default policy (layer2) uses
a simple calculation based on the packet flow source and destination MAC
- addresses as well as the number of active slaves available to the bonded
- device to classify the packet to a specific slave to transmit on. Alternate
+ addresses as well as the number of active members available to the bonded
+ device to classify the packet to a specific member to transmit on. Alternate
transmission policies supported are layer 2+3, this takes the IP source and
- destination addresses into the calculation of the transmit slave port and
+ destination addresses into the calculation of the transmit member port and
the final supported policy is layer 3+4, this uses IP source and
destination addresses as well as the TCP/UDP source and destination port.
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
Broadcast (Mode 3)
- This mode provides fault tolerance by transmission of packets on all slave
+ This mode provides fault tolerance by transmission of packets on all member
ports.
* **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
intervals period of less than 100ms.
#. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
- where N is the number of slaves. This is a space required for LACP
+ where N is the number of members. This is a space required for LACP
frames. Additionally LACP packets are included in the statistics, but
they are not returned to the application.
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides an adaptive transmit load balancing. It dynamically
- changes the transmitting slave, according to the computed load. Statistics
+ changes the transmitting member, according to the computed load. Statistics
are collected in 100ms intervals and scheduled every 10ms.
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
programmatically via the C API ``rte_eth_bond_create`` function.
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamical addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
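To make the call flow concrete, here is a minimal sketch (illustrative only, not part of this patch; the port ids and error handling are assumptions) of creating a bonding device and attaching members:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_eth_bond.h>

   /* Illustrative only: member ports 0 and 1 are assumed to have been
    * probed by EAL already. */
   static int
   setup_bonding_port(void)
   {
       int bond_port;

       /* Create a round robin (mode 0) bonding device on socket 0. */
       bond_port = rte_eth_bond_create("net_bonding0",
                                       BONDING_MODE_ROUND_ROBIN, 0);
       if (bond_port < 0)
           return bond_port;

       /* Members can be added (and later removed) dynamically. */
       if (rte_eth_bond_member_add(bond_port, 0) != 0 ||
           rte_eth_bond_member_add(bond_port, 1) != 0)
           return -1;

       return bond_port;
   }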
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to provide RSS configuration on members transparent for client
application implementation.
Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. This lets us define the meaning
of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without referring to any individual member. This ensures
consistency and makes the configuration more error-proof.
RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and default key for device is used.
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with the RSS configuration, there is flow consistency in the bonded members for the
next rte flow operations:
Validate:
- - Validate flow for each slave, failure at least for one slave causes to
+ - Validate the flow on each member; a failure on at least one member causes
bond validation failure.
Create:
- - Create the flow in all slaves.
- - Save all the slaves created flows objects in bonding internal flow
+ - Create the flow in all members.
+ - Save all the members created flows objects in bonding internal flow
structure.
- - Failure in flow creation for existed slave rejects the flow.
- - Failure in flow creation for new slaves in slave adding time rejects
- the slave.
+ - Failure in flow creation for an existing member rejects the flow.
+ - Failure in flow creation for new members at member-add time rejects
+ the member.
Destroy:
- - Destroy the flow in all slaves and release the bond internal flow
+ - Destroy the flow in all members and release the bond internal flow
memory.
Flush:
- - Destroy all the bonding PMD flows in all the slaves.
+ - Destroy all the bonding PMD flows in all the members.
.. note::
- Don't call slaves flush directly, It destroys all the slave flows which
+ Don't call the member flush directly; it destroys all the member flows, which
may include external flows or the bond internal LACP flow.
Query:
- - Summarize flow counters from all the slaves, relevant only for
+ - Summarize flow counters from all the members, relevant only for
``RTE_FLOW_ACTION_TYPE_COUNT``.
Isolate:
- - Call to flow isolate for all slaves.
- - Failure in flow isolation for existed slave rejects the isolate mode.
- - Failure in flow isolation for new slaves in slave adding time rejects
- the slave.
+ - Call to flow isolate for all members.
+ - Failure in flow isolation for an existing member rejects the isolate mode.
+ - Failure in flow isolation for new members at member-add time rejects
+ the member.
All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
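As an illustration of this one-way propagation, a rough sketch follows (illustrative only, not part of this patch; the port id and rule are assumptions): a flow installed through the bonding port is validated and created on every member behind the scenes.

.. code-block:: c

   #include <rte_flow.h>

   /* Illustrative only: bond_port_id is assumed to be the bonding device. */
   static struct rte_flow *
   bond_count_udp(uint16_t bond_port_id, struct rte_flow_error *err)
   {
       struct rte_flow_attr attr = { .ingress = 1 };
       struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
           { .type = RTE_FLOW_ITEM_TYPE_UDP },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };
       struct rte_flow_action_count cnt = { 0 };
       struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &cnt },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };

       /* Validation fails if any member rejects the rule. */
       if (rte_flow_validate(bond_port_id, &attr, pattern, actions, err) != 0)
           return NULL;
       /* Creation installs the rule on all members. */
       return rte_flow_create(bond_port_id, &attr, pattern, actions, err);
   }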
Link Status Change Interrupts / Polling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
Link bonding devices support the registration of a link status change callback,
using the ``rte_eth_dev_callback_register`` API, this will be called when the
status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
The link bonding library also supports devices which do not implement link
status change interrupts, this is achieved by polling the devices link status at
a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
whether the device supports interrupts or whether the link status should be
monitored by polling it.
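For reference, a small sketch (illustrative only, not part of this patch; the printing and the 100 ms interval are assumptions) of registering such a callback and lengthening the polling interval:

.. code-block:: c

   #include <stdio.h>
   #include <rte_ethdev.h>
   #include <rte_eth_bond.h>

   /* Invoked when the aggregate link state of the bonding port changes. */
   static int
   bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type type,
               void *cb_arg, void *ret_param)
   {
       struct rte_eth_link link;

       (void)type; (void)cb_arg; (void)ret_param;
       if (rte_eth_link_get_nowait(port_id, &link) == 0)
           printf("bonding port %u link is %s\n", port_id,
                  link.link_status ? "up" : "down");
       return 0;
   }

   /* bond_port is assumed to be the bonding device's port id. */
   static void
   bond_watch_link(uint16_t bond_port)
   {
       rte_eth_dev_callback_register(bond_port, RTE_ETH_EVENT_INTR_LSC,
                                     bond_lsc_cb, NULL);
       /* Poll members without LSC interrupt support every 100 ms
        * instead of the default 10 ms. */
       rte_eth_bond_link_monitoring_set(bond_port, 100);
   }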
@@ -233,30 +233,30 @@ Requirements / Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
these parameters.
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
itself can be started.
To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
Like all other PMD, all functions exported by a PMD are lock-free functions
that are assumed not to be invoked in parallel on different logical cores to
work on the same target object.
It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
bonded device to read.
Configuration
@@ -265,25 +265,25 @@ Configuration
Link bonding devices are created using the ``rte_eth_bond_create`` API
which requires a unique device name, the bonding mode,
and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
the device is in balance XOR mode.
-Slave Devices
+Member Devices
^^^^^^^^^^^^^^
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
configuration of the bonded device on being added to a bonded device.
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device to its
+original value upon removal of a member from it.
-Primary Slave
+Primary Member
^^^^^^^^^^^^^^
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
device is in active backup mode. A different port will only be used if, and
only if, the current primary port goes down. If the user does not specify a
primary port it will default to being the first port added to the bonded device.
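A short sketch (illustrative only, not part of this patch; the port ids are assumptions) of overriding that default primary selection:

.. code-block:: c

   #include <stdio.h>
   #include <rte_eth_bond.h>

   /* Illustrative only: port 10 is the bonding device, port 6 a member. */
   static void
   pick_primary(void)
   {
       if (rte_eth_bond_primary_set(10, 6) != 0)
           fprintf(stderr, "failed to set primary member\n");
       else
           printf("primary member is now %d\n", rte_eth_bond_primary_get(10));
   }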
@@ -292,14 +292,14 @@ MAC Address
^^^^^^^^^^^
The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all member devices depending on the
operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
the bonded devices MAC address.
If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
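A minimal sketch of setting such a MAC follows (illustrative only, not part of this patch; the address value and port id are assumptions), with ``rte_eth_bond_mac_address_reset`` available to fall back to the primary member's address:

.. code-block:: c

   #include <rte_ether.h>
   #include <rte_eth_bond.h>

   /* Illustrative only: bond_port is assumed to be the bonding port id. */
   static int
   bond_override_mac(uint16_t bond_port)
   {
       struct rte_ether_addr addr = {
           .addr_bytes = { 0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d },
       };

       /* Propagated to members according to the operating mode, as above. */
       return rte_eth_bond_mac_address_set(bond_port, &addr);
   }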
Balance XOR Transmit Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
* **Layer 2:** Ethernet MAC address based balancing is the default
transmission policy for Balance XOR bonding mode. It uses a simple XOR
calculation on the source MAC address and destination MAC address of the
- packet and then calculate the modulus of this value to calculate the slave
+ packet and then calculates the modulus of this value to determine the member
device to transmit the packet on.
* **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
combination of source/destination MAC addresses and the source/destination
- IP addresses of the data packet to decide which slave port the packet will
+ IP addresses of the data packet to decide which member port the packet will
be transmitted on.
* **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
of source/destination IP Address and the source/destination UDP ports of
- the packet of the data packet to decide which slave port the packet will be
+ the data packet to decide which member port the packet will be
transmitted on.
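A one-call sketch (illustrative only, not part of this patch; the port id is an assumption) of selecting the layer 3+4 policy at run time:

.. code-block:: c

   #include <rte_eth_bond.h>

   /* Illustrative only: switch a Balance XOR bonding port to layer 3+4. */
   static int
   bond_use_l34_policy(uint16_t bond_port)
   {
       return rte_eth_bond_xmit_policy_set(bond_port,
                                           BALANCE_XMIT_POLICY_LAYER34);
   }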
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
which will be used must be setup using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup``.
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
before it can be started using ``rte_eth_dev_start``.
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member device links are down or if all members are removed from the link
bonding device then the link status of the bonding device will go down.
It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
-* A least one slave device is provided with for each bonded device definition.
+* At least one member device is provided for each bonded device definition.
* The operation mode of the bonded device being created is provided.
@@ -404,20 +404,20 @@ The different options are:
mode=2
-* slave: Defines the PMD device which will be added as slave to the bonded
+* member: Defines the PMD device which will be added as member to the bonded
device. This option can be selected multiple times, for each device to be
- added as a slave. Physical devices should be specified using their PCI
+ added as a member. Physical devices should be specified using their PCI
address, in the format domain:bus:devid.function
.. code-block:: console
- slave=0000:0a:00.0,slave=0000:0a:00.1
+ member=0000:0a:00.0,member=0000:0a:00.1
-* primary: Optional parameter which defines the primary slave port,
- is used in active backup mode to select the primary slave for data TX/RX if
+* primary: Optional parameter which defines the primary member port,
+ is used in active backup mode to select the primary member for data TX/RX if
it is available. The primary port also is used to select the MAC address to
- use when it is not defined by the user. This defaults to the first slave
- added to the device if it is specified. The primary device must be a slave
+ use when it is not defined by the user. This defaults to the first member
+ added to the device if none is specified. The primary device must be a member
of the bonded device.
.. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
socket_id=0
* mac: Optional parameter to select a MAC address for link bonding device,
- this overrides the value of the primary slave device.
+ this overrides the value of the primary member device.
.. code-block:: console
@@ -474,29 +474,29 @@ The different options are:
Examples of Usage
^^^^^^^^^^^^^^^^^
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
.. _bonding_testpmd_commands:
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
testpmd> create bonded device 1 0
created new bonded device (port X)
-add bonding slave
+add bonding member
~~~~~~~~~~~~~~~~~~
Adds Ethernet device to a Link Bonding device::
- testpmd> add bonding slave (slave id) (port id)
+ testpmd> add bonding member (member id) (port id)
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
- testpmd> add bonding slave 6 10
+ testpmd> add bonding member 6 10
-remove bonding slave
+remove bonding member
~~~~~~~~~~~~~~~~~~~~~
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
- testpmd> remove bonding slave (slave id) (port id)
+ testpmd> remove bonding member (member id) (port id)
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
- testpmd> remove bonding slave 6 10
+ testpmd> remove bonding member 6 10
set bonding mode
~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
set bonding primary
~~~~~~~~~~~~~~~~~~~
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
- testpmd> set bonding primary (slave id) (port id)
+ testpmd> set bonding primary (member id) (port id)
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
testpmd> set bonding primary 6 10
@@ -590,7 +590,7 @@ set bonding mon_period
Set the link status monitoring polling period in milliseconds for a bonding device.
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
When the mon_period is set to a value greater than 0 then all PMD's which do not support
link status ISR will be queried every polling interval to check if their link status has changed::
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
set bonding lacp dedicated_queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
when in mode 4 (link-aggregation-802.3ad)::
testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
testpmd> show bonding config (port id)
For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
in balance mode with a transmission policy of layer 2+3::
testpmd> show bonding config 9
- Dev basic:
Bonding mode: BALANCE(2)
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
- Slaves (3): [1 3 4]
- Active Slaves (3): [1 3 4]
+ Members (3): [1 3 4]
+ Active Members (3): [1 3 4]
Primary: [3]
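Note that the header changes below keep the old ``slave``-named calls compiling through ``__rte_deprecated`` static inline wrappers, so applications can migrate incrementally. A sketch (illustrative only; the function and port ids are assumptions):

.. code-block:: c

   #include <rte_eth_bond.h>

   static int
   attach_member(uint16_t bond_port, uint16_t member_port)
   {
       /* The old name, rte_eth_bond_slave_add(), still builds via the
        * deprecated wrapper but emits a compile-time warning; new code
        * calls the member-based API directly. */
       return rte_eth_bond_member_add(bond_port, member_port);
   }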
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
cmdline_fixed_string_t set;
cmdline_fixed_string_t bonding;
cmdline_fixed_string_t primary;
- portid_t slave_id;
+ portid_t member_id;
portid_t port_id;
};
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
struct cmd_set_bonding_primary_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* Set the primary slave for a bonded device. */
- if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
- fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
- master_port_id);
+ /* Set the primary member for a bonded device. */
+ if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+ fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+ main_port_id);
return;
}
init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, RTE_UINT16);
+ member_id, RTE_UINT16);
static cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
port_id, RTE_UINT16);
static cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
- .help_str = "set bonding primary <slave_id> <port_id>: "
- "Set the primary slave for port_id",
+ .help_str = "set bonding primary <member_id> <port_id>: "
+ "Set the primary member for port_id",
.data = NULL,
.tokens = {
(void *)&cmd_setbonding_primary_set,
(void *)&cmd_setbonding_primary_bonding,
(void *)&cmd_setbonding_primary_primary,
- (void *)&cmd_setbonding_primary_slave,
+ (void *)&cmd_setbonding_primary_member,
(void *)&cmd_setbonding_primary_port,
NULL
}
};
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
cmdline_fixed_string_t add;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_add_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_add_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* add the slave for a bonded device. */
- if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+ /* add the member for a bonded device. */
+ if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to add slave %d to master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to add member %d to main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
- ports[master_port_id].update_conf = 1;
+ ports[main_port_id].update_conf = 1;
init_port_config();
- set_port_slave_flag(slave_port_id);
+ set_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
- .f = cmd_add_bonding_slave_parsed,
- .help_str = "add bonding slave <slave_id> <port_id>: "
- "Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+ .f = cmd_add_bonding_member_parsed,
+ .help_str = "add bonding member <member_id> <port_id>: "
+ "Add a member device to a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_addbonding_slave_add,
- (void *)&cmd_addbonding_slave_bonding,
- (void *)&cmd_addbonding_slave_slave,
- (void *)&cmd_addbonding_slave_slaveid,
- (void *)&cmd_addbonding_slave_port,
+ (void *)&cmd_addbonding_member_add,
+ (void *)&cmd_addbonding_member_bonding,
+ (void *)&cmd_addbonding_member_member,
+ (void *)&cmd_addbonding_member_memberid,
+ (void *)&cmd_addbonding_member_port,
NULL
}
};
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
cmdline_fixed_string_t remove;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_remove_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_remove_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* remove the slave from a bonded device. */
- if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+ /* remove the member from a bonded device. */
+ if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to remove slave %d from master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to remove member %d from main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
init_port_config();
- clear_port_slave_flag(slave_port_id);
+ clear_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
- .f = cmd_remove_bonding_slave_parsed,
- .help_str = "remove bonding slave <slave_id> <port_id>: "
- "Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+ .f = cmd_remove_bonding_member_parsed,
+ .help_str = "remove bonding member <member_id> <port_id>: "
+ "Remove a member device from a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_removebonding_slave_remove,
- (void *)&cmd_removebonding_slave_bonding,
- (void *)&cmd_removebonding_slave_slave,
- (void *)&cmd_removebonding_slave_slaveid,
- (void *)&cmd_removebonding_slave_port,
+ (void *)&cmd_removebonding_member_remove,
+ (void *)&cmd_removebonding_member_bonding,
+ (void *)&cmd_removebonding_member_member,
+ (void *)&cmd_removebonding_member_memberid,
+ (void *)&cmd_removebonding_member_port,
NULL
}
};
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
},
{
&cmd_set_bonding_primary,
- "set bonding primary (slave_id) (port_id)\n"
- " Set the primary slave for a bonded device.\n",
+ "set bonding primary (member_id) (port_id)\n"
+ " Set the primary member for a bonded device.\n",
},
{
- &cmd_add_bonding_slave,
- "add bonding slave (slave_id) (port_id)\n"
- " Add a slave device to a bonded device.\n",
+ &cmd_add_bonding_member,
+ "add bonding member (member_id) (port_id)\n"
+ " Add a member device to a bonded device.\n",
},
{
- &cmd_remove_bonding_slave,
- "remove bonding slave (slave_id) (port_id)\n"
- " Remove a slave device from a bonded device.\n",
+ &cmd_remove_bonding_member,
+ "remove bonding member (member_id) (port_id)\n"
+ " Remove a member device from a bonded device.\n",
},
{
&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..77892c0601 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
#include "rte_eth_bond_8023ad.h"
#define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
+/** Maximum number of packets to one member queued in RX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS 3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS 1
/**
* Timeouts definitions (5.4.4 in 802.1AX documentation).
*/
@@ -113,7 +113,7 @@ struct port {
enum rte_bond_8023ad_selection selected;
/** Indicates if either allmulti or promisc has been enforced on the
- * slave so that we can receive lacp packets
+ * member so that we can receive lacp packets
*/
#define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
#define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
uint8_t external_sm;
struct rte_ether_addr mac_addr;
- struct rte_eth_link slave_link;
- /***< slave link properties */
+ struct rte_eth_link member_link;
+ /**< member link properties */
/**
* Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
/**
* @internal
*
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
*
* @param dev Bonded interface
* @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
/**
* @internal
*
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
*
* @param dev Bonded interface
* @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
*
* Passes given slow packet to state machines management logic.
* @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
* @param slot_pkt Slow packet.
*/
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt);
+ uint16_t member_id, struct rte_mbuf *pkt);
/**
* @internal
*
- * Appends given slave used slave
+ * Initializes and adds given member to 802.1AX mode.
*
* @param dev Bonded interface.
- * @param port_id Slave port ID to be added
+ * @param port_id Member port ID to be added
*
* @return
* 0 on success, negative value otherwise.
*/
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
/**
* @internal
*
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
*
* @param dev Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
*
* @return
* 0 on success, negative value otherwise.
*/
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
/**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
* @param bond_dev Bonded device
*/
void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port);
+ uint16_t member_port);
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
int
bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
#include "eth_bond_8023ad_private.h"
#include "rte_eth_bond_alb.h"
-#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG ("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG ("primary")
#define PMD_BOND_MODE_KVARG ("mode")
#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
/** Port Queue Mapping Structure */
struct bond_rx_queue {
uint16_t queue_id;
- /**< Next active_slave to poll */
- uint16_t active_slave;
+ /**< Next active_member to poll */
+ uint16_t active_member;
/**< Queue Id */
struct bond_dev_private *dev_private;
/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
/**< Copy of TX configuration structure for queue */
};
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
- uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
- uint16_t slave_count; /**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+ uint16_t members[RTE_MAX_ETHPORTS]; /**< Member port id array */
+ uint16_t member_count; /**< Number of members */
};
-struct bond_slave_details {
+struct bond_member_details {
uint16_t port_id;
uint8_t link_status_poll_enabled;
uint8_t link_status_wait_to_complete;
uint8_t last_link_status;
- /**< Port Id of slave eth_dev */
+ /**< Port Id of member eth_dev */
struct rte_ether_addr persisted_mac_addr;
uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next;
- /* Slaves flows */
+ /* Members flows */
struct rte_flow *flows[RTE_MAX_ETHPORTS];
/* Flow description for synchronization */
struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
};
typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
/** Link Bonding PMD device private configuration Structure */
struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
rte_spinlock_t lock;
rte_spinlock_t lsc_lock;
- uint16_t primary_port; /**< Primary Slave Port */
- uint16_t current_primary_port; /**< Primary Slave Port */
+ uint16_t primary_port; /**< Primary Member Port */
+ uint16_t current_primary_port; /**< Primary Member Port */
uint16_t user_defined_primary_port;
/**< Flag for whether primary port is user defined or not */
@@ -137,16 +137,16 @@ struct bond_dev_private {
uint16_t nb_rx_queues; /**< Total number of rx queues */
uint16_t nb_tx_queues; /**< Total number of tx queues*/
- uint16_t active_slave_count; /**< Number of active slaves */
- uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
+ uint16_t active_member_count; /**< Number of active members */
+ uint16_t active_members[RTE_MAX_ETHPORTS]; /**< Active member list */
- uint16_t slave_count; /**< Number of bonded slaves */
- struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
- /**< Array of bonded slaves details */
+ uint16_t member_count; /**< Number of bonded members */
+ struct bond_member_details members[RTE_MAX_ETHPORTS];
+ /**< Array of bonded members details */
struct mode8023ad_private mode4;
- uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
- /**< TLB active slaves send order */
+ uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+ /**< TLB active members send order */
struct mode_alb_private mode6;
uint64_t rx_offload_capa; /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
uint8_t rss_key_len; /**< hash key length in bytes. */
struct rte_kvargs *kvlist;
- uint8_t slave_update_idx;
+ uint8_t member_update_idx;
bool kvargs_processing_is_done;
@@ -191,19 +191,21 @@ struct bond_dev_private {
extern const struct eth_dev_ops default_dev_ops;
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
uint16_t pos;
- for (pos = 0; pos < slaves_count; pos++) {
- if (slave_id == slaves[pos])
+ for (pos = 0; pos < members_count; pos++) {
+ if (member_id == members[pos])
break;
}
@@ -217,13 +219,13 @@ int
valid_bonded_port_id(uint16_t port_id);
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
int
mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *dst_mac_addr);
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id);
+ uint16_t member_port_id);
int
bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
void *param, void *ret_param);
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args);
int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
const char *value, void *extra_args);
int
@@ -323,7 +325,7 @@ void
bond_tlb_enable(struct bond_dev_private *internals);
void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
int
bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
*
* RTE Link Bonding Ethernet Device
* Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
* these interfaces based on the mode of operation specified and supported.
* This implementation supports 4 modes of operation round robin, active backup
* balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
#define BONDING_MODE_ROUND_ROBIN (0)
/**< Round Robin (Mode 0).
* In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
#define BONDING_MODE_ACTIVE_BACKUP (1)
/**< Active Backup (Mode 1).
* In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available members. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
#define BONDING_MODE_BALANCE (2)
/**< Balance (Mode 2).
* In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
* See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
#define BONDING_MODE_BROADCAST (3)
/**< Broadcast (Mode 3).
* In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
#define BONDING_MODE_8023AD (4)
/**< 802.3AD (Mode 4).
*
@@ -62,22 +66,22 @@ extern "C" {
* be handled with the expected latency and this may cause the link status to be
* incorrectly marked as down or failure to correctly negotiate with peers.
* - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least 2 times the member count.
*/
#define BONDING_MODE_TLB (5)
/**< Adaptive TLB (Mode 5)
* This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
#define BONDING_MODE_ALB (6)
/**< Adaptive Load Balancing (Mode 6)
* This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
* bonding driver intercepts ARP replies send by local system and overwrites its
* source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When local system sends ARP request, it saves IP
* information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the member MACs is assigned and the ARP reply is sent to that peer.
*/
/* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
rte_eth_bond_free(const char *name);
/**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
/**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
/**
* Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
rte_eth_bond_mode_get(uint16_t bonded_port_id);
/**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
/**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
*
* @return
- * Port Id of primary slave on success, -1 on failure
+ * Port Id of primary member on success, -1 on failure
*/
int
rte_eth_bond_primary_get(uint16_t bonded_port_id);
/**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of slaves associated with bonded device on success,
+ * Number of members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
/**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
* device.
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of active slaves associated with bonded device on success,
+ * Number of active members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
/**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
*
* @param bonded_port_id Port ID of bonded device.
* @param mac_addr MAC Address to use on bonded device overriding
- * slaves MAC addresses
+ * members MAC addresses
*
* @return
* 0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
struct rte_ether_addr *mac_addr);
/**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
*
* @param bonded_port_id Port ID of bonded device.
*
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
/**
* Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
*
* @param bonded_port_id Port ID of bonded device.
* @param internal_ms Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
/**
* Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
*
* @param bonded_port_id Port ID of bonded device.
*
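
For reference, here is a minimal sketch (illustrative only, not part of the patch) of how an application using the old slave-based getters could move to the renamed member API. It assumes the application is built with ALLOW_EXPERIMENTAL_API, since rte_eth_bond_members_get() and rte_eth_bond_active_members_get() are tagged __rte_experimental above; the deprecated static inline wrappers keep old callers building, at the cost of a deprecation warning.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static void
print_bond_members(uint16_t bonded_port_id)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int n, i;

	/* Previously: rte_eth_bond_slaves_get(bonded_port_id, members, RTE_DIM(members)) */
	n = rte_eth_bond_members_get(bonded_port_id, members, RTE_DIM(members));
	if (n < 0) {
		printf("query failed; port %u may not be a bonded device\n",
			bonded_port_id);
		return;
	}
	for (i = 0; i < n; i++)
		printf("member port %u\n", members[i]);

	/* Primary member port id, or -1 on failure. */
	printf("primary member: %d\n", rte_eth_bond_primary_get(bonded_port_id));
}
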
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..ac9f414e74 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
#define MODE4_DEBUG(fmt, ...) \
rte_log(RTE_LOG_DEBUG, bond_logtype, \
"%6u [Port %u: %s] " fmt, \
- bond_dbg_get_time_diff_ms(), slave_id, \
+ bond_dbg_get_time_diff_ms(), member_id, \
__func__, ##__VA_ARGS__)
static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
}
static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
uint8_t warnings;
do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
if (warnings & WRN_RX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+ "Member %u: failed to enqueue LACP packet into RX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will notwork correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_TX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+ "Member %u: failed to enqueue LACP packet into TX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_RX_MARKER_TO_FAST)
- RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: marker to early - ignoring.",
+ member_id);
if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
RTE_BOND_LOG(INFO,
- "Slave %u: ignoring unknown slow protocol frame type",
- slave_id);
+ "Member %u: ignoring unknown slow protocol frame type",
+ member_id);
}
if (warnings & WRN_UNKNOWN_MARKER_TYPE)
- RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+ member_id);
if (warnings & WRN_NOT_LACP_CAPABLE)
- MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+ MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
}
static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
* @param port Port on which LACPDU was received.
*/
static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
struct lacpdu *lacp)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
uint64_t timeout;
if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
* @param port Port to handle state machine.
*/
static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Calculate if either site is LACP enabled */
uint64_t timeout;
uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port Port to handle state machine.
*/
static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Save current state for later use */
const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing started.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing started.",
+ internals->port_id, member_id);
}
} else {
if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing stopped.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing stopped.",
+ internals->port_id, member_id);
}
}
}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port
*/
static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
struct rte_mbuf *lacp_pkt = NULL;
struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
/* Source and destination MAC */
rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
return;
}
} else {
- uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+ uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, 1);
- pkts_sent = rte_eth_tx_burst(slave_id,
+ pkts_sent = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, pkts_sent);
if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
* @param port_pos Port to assign.
*/
static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
{
struct port *agg, *port;
- uint16_t slaves_count, new_agg_id, i, j = 0;
- uint16_t *slaves;
+ uint16_t members_count, new_agg_id, i, j = 0;
+ uint16_t *members;
uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
- uint16_t default_slave = 0;
+ uint16_t default_member = 0;
struct rte_eth_link link_info;
uint16_t agg_new_idx = 0;
int ret;
- slaves = internals->active_slaves;
- slaves_count = internals->active_slave_count;
- port = &bond_mode_8023ad_ports[slave_id];
+ members = internals->active_members;
+ members_count = internals->active_member_count;
+ port = &bond_mode_8023ad_ports[member_id];
/* Search for aggregator suitable for this port */
- for (i = 0; i < slaves_count; ++i) {
- agg = &bond_mode_8023ad_ports[slaves[i]];
+ for (i = 0; i < members_count; ++i) {
+ agg = &bond_mode_8023ad_ports[members[i]];
/* Skip ports that are not aggregators */
- if (agg->aggregator_port_id != slaves[i])
+ if (agg->aggregator_port_id != members[i])
continue;
- ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+ ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slaves[i], rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ members[i], rte_strerror(-ret));
continue;
}
agg_count[i] += 1;
agg_bandwidth[i] += link_info.link_speed;
- /* Actors system ID is not checked since all slave device have the same
+ /* Actor's system ID is not checked since all member devices have the same
* ID (MAC address). */
if ((agg->actor.key == port->actor.key &&
agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
if (j == 0)
- default_slave = i;
+ default_member = i;
j++;
}
}
switch (internals->mode4.agg_selection) {
case AGG_COUNT:
- agg_new_idx = max_index(agg_count, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_count, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_BANDWIDTH:
- agg_new_idx = max_index(agg_bandwidth, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_bandwidth, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_STABLE:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
default:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
}
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
MODE4_DEBUG("-> SELECTED: ID=%3u\n"
"\t%s aggregator ID=%3u\n",
port->aggregator_port_id,
- port->aggregator_port_id == slave_id ?
+ port->aggregator_port_id == member_id ?
"aggregator not found, using default" : "aggregator found",
port->aggregator_port_id);
}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
}
static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
struct rte_mbuf *lacp_pkt) {
struct lacpdu_header *lacp;
struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
partner = &lacp->lacpdu.partner;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
/* This LACP frame is sending to the bonding port
* so pass it to rx_machine.
*/
- rx_machine(internals, slave_id, &lacp->lacpdu);
+ rx_machine(internals, member_id, &lacp->lacpdu);
} else {
char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
}
rte_pktmbuf_free(lacp_pkt);
} else
- rx_machine(internals, slave_id, NULL);
+ rx_machine(internals, member_id, NULL);
}
static void
bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
- uint16_t slave_id)
+ uint16_t member_id)
{
#define DEDICATED_QUEUE_BURST_SIZE 32
struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ uint16_t rx_count = rte_eth_rx_burst(member_id,
internals->mode4.dedicated_queues.rx_qid,
lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
uint16_t i;
for (i = 0; i < rx_count; i++)
- bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ bond_mode_8023ad_handle_slow_pkt(internals, member_id,
lacp_pkt[i]);
} else {
- rx_machine_update(internals, slave_id, NULL);
+ rx_machine_update(internals, member_id, NULL);
}
}
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
struct bond_dev_private *internals = bond_dev->data->dev_private;
struct port *port;
struct rte_eth_link link_info;
- struct rte_ether_addr slave_addr;
+ struct rte_ether_addr member_addr;
struct rte_mbuf *lacp_pkt = NULL;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
/* Update link status on each port */
- for (i = 0; i < internals->active_slave_count; i++) {
+ for (i = 0; i < internals->active_member_count; i++) {
uint16_t key;
int ret;
- slave_id = internals->active_slaves[i];
- ret = rte_eth_link_get_nowait(slave_id, &link_info);
+ member_id = internals->active_members[i];
+ ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_id, rte_strerror(-ret));
}
if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
key = 0;
}
- rte_eth_macaddr_get(slave_id, &slave_addr);
- port = &bond_mode_8023ad_ports[slave_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
+ port = &bond_mode_8023ad_ports[member_id];
key = rte_cpu_to_be_16(key);
if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, NTT);
}
- if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
- rte_ether_addr_copy(&slave_addr, &port->actor.system);
- if (port->aggregator_port_id == slave_id)
+ if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+ rte_ether_addr_copy(&member_addr, &port->actor.system);
+ if (port->aggregator_port_id == member_id)
SM_FLAG_SET(port, NTT);
}
}
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if ((port->actor.key &
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (retval != 0)
lacp_pkt = NULL;
- rx_machine_update(internals, slave_id, lacp_pkt);
+ rx_machine_update(internals, member_id, lacp_pkt);
} else {
bond_mode_8023ad_dedicated_rxq_process(internals,
- slave_id);
+ member_id);
}
- periodic_machine(internals, slave_id);
- mux_machine(internals, slave_id);
- tx_machine(internals, slave_id);
- selection_logic(internals, slave_id);
+ periodic_machine(internals, member_id);
+ mux_machine(internals, member_id);
+ tx_machine(internals, member_id);
+ selection_logic(internals, member_id);
SM_FLAG_CLR(port, BEGIN);
- show_warnings(slave_id);
+ show_warnings(member_id);
}
rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
}
static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
{
int ret;
- ret = rte_eth_allmulticast_enable(slave_id);
+ ret = rte_eth_allmulticast_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_allmulticast_get(slave_id)) {
+ if (rte_eth_allmulticast_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_ALLMULTI;
return 0;
}
- ret = rte_eth_promiscuous_enable(slave_id);
+ ret = rte_eth_promiscuous_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_promiscuous_get(slave_id)) {
+ if (rte_eth_promiscuous_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_PROMISC;
return 0;
}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
}
static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
{
int ret;
- switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+ switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
case BOND_8023AD_FORCED_ALLMULTI:
- RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
- ret = rte_eth_allmulticast_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+ ret = rte_eth_allmulticast_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
case BOND_8023AD_FORCED_PROMISC:
- RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
- ret = rte_eth_promiscuous_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+ ret = rte_eth_promiscuous_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
}
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
- uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+ uint16_t member_id)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct port_params initial = {
.system = { { 0 } },
.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
struct bond_tx_queue *bd_tx_q;
uint16_t q_id;
- /* Given slave mus not be in active list */
- RTE_ASSERT(find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) == internals->active_slave_count);
+ /* Given member must not be in the active list */
+ RTE_ASSERT(find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) == internals->active_member_count);
RTE_SET_USED(internals); /* used only for assert when enabled */
memcpy(&port->actor, &initial, sizeof(struct port_params));
/* Standard requires that port ID must be greater than 0.
* Add 1 to get corresponding port_number */
- port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+ port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
memcpy(&port->partner, &initial, sizeof(struct port_params));
memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
port->sm_flags = SM_FLAGS_BEGIN;
/* use this port as aggregator */
- port->aggregator_port_id = slave_id;
+ port->aggregator_port_id = member_id;
- if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
- RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
- slave_id);
+ if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+ RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+ member_id);
}
timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_ASSERT(port->rx_ring == NULL);
RTE_ASSERT(port->tx_ring == NULL);
- socket_id = rte_eth_dev_socket_id(slave_id);
+ socket_id = rte_eth_dev_socket_id(member_id);
if (socket_id == -1)
socket_id = rte_socket_id();
element_size = sizeof(struct slow_protocol_frame) +
RTE_PKTMBUF_HEADROOM;
- /* The size of the mempool should be at least:
- * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
- total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+ /*
+ * The size of the mempool should be at least:
+ * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+ */
+ total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
total_tx_desc += bd_tx_q->nb_tx_desc;
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
/* Any memory allocation failure in initialization is critical because
* resources can't be freed, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
port->rx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
if (port->rx_ring == NULL) {
- rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
/* TX ring is at least one pkt longer to make room for marker packet. */
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
port->tx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
if (port->tx_ring == NULL) {
- rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
}
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
- uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+ uint16_t member_id)
{
void *pkt = NULL;
struct port *port = NULL;
uint8_t old_partner_state;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
ACTOR_STATE_CLR(port, AGGREGATION);
port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
old_partner_state = port->partner_state;
record_default(port);
- bond_mode_8023ad_unregister_lacp_mac(slave_id);
+ bond_mode_8023ad_unregister_lacp_mac(member_id);
/* If partner timeout state changes then disable timer */
if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct rte_ether_addr slave_addr;
- struct port *slave, *agg_slave;
- uint16_t slave_id, i, j;
+ struct rte_ether_addr member_addr;
+ struct port *member, *agg_member;
+ uint16_t member_id, i, j;
bond_mode_8023ad_stop(bond_dev);
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- slave = &bond_mode_8023ad_ports[slave_id];
- rte_eth_macaddr_get(slave_id, &slave_addr);
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ member = &bond_mode_8023ad_ports[member_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
- if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+ if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
continue;
- rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+ rte_ether_addr_copy(&member_addr, &member->actor.system);
/* Do nothing if this port is not an aggregator. In other case
* Set NTT flag on every port that use this aggregator. */
- if (slave->aggregator_port_id != slave_id)
+ if (member->aggregator_port_id != member_id)
continue;
- for (j = 0; j < internals->active_slave_count; j++) {
- agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
- if (agg_slave->aggregator_port_id == slave_id)
- SM_FLAG_SET(agg_slave, NTT);
+ for (j = 0; j < internals->active_member_count; j++) {
+ agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+ if (agg_member->aggregator_port_id == member_id)
+ SM_FLAG_SET(agg_member, NTT);
}
}
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
struct bond_dev_private *internals = bond_dev->data->dev_private;
uint16_t i;
- for (i = 0; i < internals->active_slave_count; i++)
- bond_mode_8023ad_activate_slave(bond_dev,
- internals->active_slaves[i]);
+ for (i = 0; i < internals->active_member_count; i++)
+ bond_mode_8023ad_activate_member(bond_dev,
+ internals->active_members[i]);
return 0;
}
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt)
+ uint16_t member_id, struct rte_mbuf *pkt)
{
struct mode8023ad_private *mode4 = &internals->mode4;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct marker_header *m_hdr;
uint64_t marker_timer, old_marker_timer;
int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
}
} else {
/* Send packet directly to the slow queue */
- uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+ uint16_t tx_count = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, 1);
- tx_count = rte_eth_tx_burst(slave_id,
+ tx_count = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, tx_count);
if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
goto free_out;
}
} else
- rx_machine_update(internals, slave_id, pkt);
+ rx_machine_update(internals, member_id, pkt);
} else {
wrn = WRN_UNKNOWN_SLOW_TYPE;
goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *info)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
bond_dev = &rte_eth_devices[port_id];
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
info->selected = port->selected;
info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
}
static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
return -EINVAL;
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
}
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, DISTRIBUTING);
}
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, COLLECTING);
}
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
struct mode8023ad_private *mode4 = &internals->mode4;
struct port *port;
void *pkt = NULL;
- uint16_t i, slave_id;
+ uint16_t i, member_id;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
/* This is LACP frame so pass it to rx callback.
* Callback is responsible for freeing mbuf.
*/
- mode4->slowrx_cb(slave_id, lacp_pkt);
+ mode4->slowrx_cb(member_id, lacp_pkt);
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
struct rte_mbuf *lacp_pkt);
enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
uint16_t system_priority;
/**< System priority (unused in current implementation) */
struct rte_ether_addr system;
- /**< System ID - Slave MAC address, same as bonding MAC address */
+ /**< System ID - Member MAC address, same as bonding MAC address */
uint16_t key;
/**< Speed information (implementation dependent) and duplex. */
uint16_t port_priority;
/**< Priority of this (unused in current implementation) */
uint16_t port_number;
- /**< Port number. It corresponds to slave port id. */
+ /**< Port number. It corresponds to member port id. */
} __rte_packed __rte_aligned(2);
struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
enum rte_bond_8023ad_agg_selection agg_selection;
};
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
enum rte_bond_8023ad_selection selected;
uint8_t actor_state;
struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
/**
* @internal
*
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
*
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param conf buffer for configuration
* @return
* 0 - if ok
- * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ * -EINVAL if conf is NULL or member id is invalid (not a member of given
* bonded device or is not inactive).
*/
+__rte_experimental
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf)
+{
+ return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
#ifdef __cplusplus
}
#endif
/**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when collection enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
/**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when distribution enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
/**
* LACPDU transmit path for external 802.3ad state machine. Caller retains
* ownership of the packet on failure.
*
* @param port_id Bonding device id
- * @param slave_id Port ID of valid slave device.
+ * @param member_id Port ID of valid member device.
* @param lacp_pkt mbuf containing LACPDU.
*
* @return
* 0 on success, negative value otherwise.
*/
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt);
/**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
*
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
* dedicated 802.3ad control plane traffic. A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
* for processing in the LACP state machine, this removes the need to filter
* these packets in the bonded devices data path. The additional tx queue is
* used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded devices data path.
*
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
* filter rule required for rx and have enough queues that one rx and tx queue
* can be reserved for the LACP state machines control packets.
*
@@ -296,7 +305,7 @@ int
rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
/**
- * Disable slow queue on slaves
+ * Disable slow queue on members
*
* This function disables hardware slow packet filter.
*
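
For reference, a minimal sketch (illustrative only, not part of the patch) of querying per-member 802.3ad state through the renamed API. It assumes member_id was obtained with rte_eth_bond_active_members_get() as in the earlier sketch, and that STATE_COLLECTING/STATE_DISTRIBUTING are the actor state bits already exposed by this header; the deprecated rte_eth_bond_8023ad_slave_info() wrapper above remains available for old callers.

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

static void
dump_8023ad_member_state(uint16_t bond_port_id, uint16_t member_id)
{
	struct rte_eth_bond_8023ad_member_info info;

	/* Returns -EINVAL when member_id is not an active member of the bond. */
	if (rte_eth_bond_8023ad_member_info(bond_port_id, member_id, &info) != 0) {
		printf("port %u is not an active member of bond %u\n",
			member_id, bond_port_id);
		return;
	}

	printf("member %u: collecting=%d distributing=%d\n", member_id,
		!!(info.actor_state & STATE_COLLECTING),
		!!(info.actor_state & STATE_DISTRIBUTING));
}
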
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
}
static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
{
uint16_t idx;
- idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
- internals->mode6.last_slave = idx;
- return internals->active_slaves[idx];
+ idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+ internals->mode6.last_member = idx;
+ return internals->active_members[idx];
}
int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
/* Fill hash table with initial values */
memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
rte_spinlock_init(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
internals->mode6.ntt = 0;
/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
/*
* We got reply for ARP Request send by the application. We need to
* update client table when received data differ from what is stored
+ * in the ALB table and issue an update packet to that member.
+ * in ALB table and issue sending update packet to that member.
*/
rte_spinlock_lock(&internals->mode6.lock);
if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
client_info->cli_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_sha,
&client_info->cli_mac);
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
&arp->arp_data.arp_tha,
&client_info->cli_mac);
}
- rte_eth_macaddr_get(client_info->slave_idx,
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
}
- /* Assign new slave to this client and update src mac in ARP */
+ /* Assign new member to this client and update src mac in ARP */
client_info->in_use = 1;
client_info->ntt = 0;
client_info->app_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_tha,
&client_info->cli_mac);
client_info->cli_ip = arp->arp_data.arp_tip;
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
{
struct rte_ether_hdr *eth_h;
struct rte_arp_hdr *arp_h;
- uint16_t slave_idx;
+ uint16_t member_idx;
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
arp_h->arp_plen = sizeof(uint32_t);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
- slave_idx = client_info->slave_idx;
+ member_idx = client_info->member_idx;
rte_spinlock_unlock(&internals->mode6.lock);
- return slave_idx;
+ return member_idx;
}
void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
int i;
- /* If active slave count is 0, it's pointless to refresh alb table */
- if (internals->active_slave_count <= 0)
+ /* If active member count is 0, it's pointless to refresh alb table */
+ if (internals->active_member_count <= 0)
return;
rte_spinlock_lock(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
internals->mode6.ntt = 1;
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
uint32_t cli_ip;
/**< Client IP address */
- uint16_t slave_idx;
- /**< Index of slave on which we connect with that client */
+ uint16_t member_idx;
+ /**< Index of member on which we connect with that client */
uint8_t in_use;
/**< Flag indicating if entry in client table is currently used */
uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
/**< Mempool for creating ARP update packets */
uint8_t ntt;
/**< Flag indicating if we need to send update to any client on next tx */
- uint32_t last_slave;
- /**< Index of last used slave in client table */
+ uint32_t last_member;
+ /**< Index of last used member in client table */
rte_spinlock_t lock;
};
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
struct bond_dev_private *internals);
/**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. If the packet is an ARP Request, it is sent on the primary
+ * member. If it is an ARP Reply, it is sent on the member stored in the client table for that
* connection. On Reply function also updates data in client table.
*
* @param eth_h ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_upd(struct client_data *client_info,
struct rte_mbuf *pkt, struct bond_dev_private *internals);
/**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
*
* @param bond_dev Pointer to bonded device struct.
*/
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
}
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
{
int i;
struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- /* Check if any of slave devices is a bonded device */
- for (i = 0; i < internals->slave_count; i++)
- if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+ /* Check if any of member devices is a bonded device */
+ for (i = 0; i < internals->member_count; i++)
+ if (valid_bonded_port_id(internals->members[i].port_id) == 0)
return 1;
return 0;
}
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
{
- RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
- /* Verify that slave_port_id refers to a non bonded port */
- if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+ /* Verify that member_port_id refers to a non bonded port */
+ if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
internals->mode == BONDING_MODE_8023AD) {
- RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
- " mode as slave is also a bonded device, only "
+ RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+ " mode as member is also a bonded device, only "
"physical devices can be support in this mode.");
return -1;
}
- if (internals->port_id == slave_port_id) {
+ if (internals->port_id == member_port_id) {
RTE_BOND_LOG(ERR,
- "Cannot add the bonded device itself as its slave.");
+ "Cannot add the bonded device itself as its member.");
return -1;
}
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
}
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD)
- bond_mode_8023ad_activate_slave(eth_dev, port_id);
+ bond_mode_8023ad_activate_member(eth_dev, port_id);
if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB) {
- internals->tlb_slaves_order[active_count] = port_id;
+ internals->tlb_members_order[active_count] = port_id;
}
- RTE_ASSERT(internals->active_slave_count <
- (RTE_DIM(internals->active_slaves) - 1));
+ RTE_ASSERT(internals->active_member_count <
+ (RTE_DIM(internals->active_members) - 1));
- internals->active_slaves[internals->active_slave_count] = port_id;
- internals->active_slave_count++;
+ internals->active_members[internals->active_member_count] = port_id;
+ internals->active_member_count++;
if (internals->mode == BONDING_MODE_TLB)
- bond_tlb_activate_slave(internals);
+ bond_tlb_activate_member(internals);
if (internals->mode == BONDING_MODE_ALB)
bond_mode_alb_client_list_upd(eth_dev);
}
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
- uint16_t slave_pos;
+ uint16_t member_pos;
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD) {
bond_mode_8023ad_stop(eth_dev);
- bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+ bond_mode_8023ad_deactivate_member(eth_dev, port_id);
} else if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB)
bond_tlb_disable(internals);
- slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+ member_pos = find_member_by_id(internals->active_members, active_count,
port_id);
- /* If slave was not at the end of the list
- * shift active slaves up active array list */
- if (slave_pos < active_count) {
+ /*
+ * If the member was not at the end of the list,
+ * shift the remaining active members up the active array.
+ */
+ if (member_pos < active_count) {
active_count--;
- memmove(internals->active_slaves + slave_pos,
- internals->active_slaves + slave_pos + 1,
- (active_count - slave_pos) *
- sizeof(internals->active_slaves[0]));
+ memmove(internals->active_members + member_pos,
+ internals->active_members + member_pos + 1,
+ (active_count - member_pos) *
+ sizeof(internals->active_members[0]));
}
- RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
- internals->active_slave_count = active_count;
+ RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+ internals->active_member_count = active_count;
if (eth_dev->data->dev_started) {
if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
}
static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
if (unlikely(slab & mask)) {
uint16_t vlan_id = pos + i;
- res = rte_eth_dev_vlan_filter(slave_port_id,
+ res = rte_eth_dev_vlan_filter(member_port_id,
vlan_id, 1);
}
}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
{
struct rte_flow *flow;
struct rte_flow_error ferror;
- uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+ uint16_t member_port_id = internals->members[member_id].port_id;
if (internals->flow_isolated_valid != 0) {
- if (rte_eth_dev_stop(slave_port_id) != 0) {
+ if (rte_eth_dev_stop(member_port_id) != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_port_id);
+ member_port_id);
return -1;
}
- if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+ if (rte_flow_isolate(member_port_id, internals->flow_isolated,
&ferror)) {
- RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
- " %d: %s", slave_id, ferror.message ?
+ RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+ " %d: %s", member_id, ferror.message ?
ferror.message : "(no stated reason)");
return -1;
}
}
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- flow->flows[slave_id] = rte_flow_create(slave_port_id,
+ flow->flows[member_id] = rte_flow_create(member_port_id,
flow->rule.attr,
flow->rule.pattern,
flow->rule.actions,
&ferror);
- if (flow->flows[slave_id] == NULL) {
- RTE_BOND_LOG(ERR, "Cannot create flow for slave"
- " %d: %s", slave_id,
+ if (flow->flows[member_id] == NULL) {
+ RTE_BOND_LOG(ERR, "Cannot create flow for member"
+ " %d: %s", member_id,
ferror.message ? ferror.message :
"(no stated reason)");
- /* Destroy successful bond flows from the slave */
+ /* Destroy successful bond flows from the member */
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- flow->flows[slave_id],
+ if (flow->flows[member_id] != NULL) {
+ rte_flow_destroy(member_port_id,
+ flow->flows[member_id],
&ferror);
- flow->flows[slave_id] = NULL;
+ flow->flows[member_id] = NULL;
}
}
return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
}
static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
internals->reta_size = di->reta_size;
internals->rss_key_len = di->hash_key_size;
- /* Inherit Rx offload capabilities from the first slave device */
+ /* Inherit Rx offload capabilities from the first member device */
internals->rx_offload_capa = di->rx_offload_capa;
internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
- /* Inherit maximum Rx packet size from the first slave device */
+ /* Inherit maximum Rx packet size from the first member device */
internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
- /* Inherit default Rx queue settings from the first slave device */
+ /* Inherit default Rx queue settings from the first member device */
memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
rxconf_i->rx_thresh.pthresh = 0;
rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
/* Setting this to zero should effectively enable default values */
rxconf_i->rx_free_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
rxconf_i->rx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
- /* Inherit Tx offload capabilities from the first slave device */
+ /* Inherit Tx offload capabilities from the first member device */
internals->tx_offload_capa = di->tx_offload_capa;
internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
- /* Inherit default Tx queue settings from the first slave device */
+ /* Inherit default Tx queue settings from the first member device */
memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
txconf_i->tx_thresh.pthresh = 0;
txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
/*
* Setting these parameters to zero assumes that default
- * values will be configured implicitly by slave devices.
+ * values will be configured implicitly by member devices.
*/
txconf_i->tx_free_thresh = 0;
txconf_i->tx_rs_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
txconf_i->tx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
/*
- * If at least one slave device suggests enabling this
- * setting by default, enable it for all slave devices
+ * If at least one member device suggests enabling this
+ * setting by default, enable it for all member devices
* since disabling it may not be necessarily supported.
*/
if (rxconf->rx_drop_en == 1)
rxconf_i->rx_drop_en = 1;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal rx_queue_offload_capa
* value. Thus, the new internal value of default Rx queue offloads
* has to be masked by rx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
internals->rx_queue_offload_capa;
/*
- * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+ * RETA size is GCD of all members RETA sizes, so, if all sizes will be
* the power of 2, the lower one is GCD
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
if (internals->rss_key_len > di->hash_key_size) {
- RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ RTE_BOND_LOG(WARNING, "member has different rss key size, "
"configuring rss may fail");
internals->rss_key_len = di->hash_key_size;
}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
}
static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal tx_queue_offload_capa
* value. Thus, the new internal value of default Tx queue offloads
* has to be masked by tx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
internals->tx_queue_offload_capa;
}
static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
- memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+ memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
}
static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
- slave_desc_lim->nb_max);
+ member_desc_lim->nb_max);
bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
- slave_desc_lim->nb_min);
+ member_desc_lim->nb_min);
bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
- slave_desc_lim->nb_align);
+ member_desc_lim->nb_align);
if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
}
/* Treat maximum number of segments equal to 0 as unspecified */
- if (slave_desc_lim->nb_seg_max != 0 &&
+ if (member_desc_lim->nb_seg_max != 0 &&
(bond_desc_lim->nb_seg_max == 0 ||
- slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
- bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
- if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+ member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+ bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+ if (member_desc_lim->nb_mtu_seg_max != 0 &&
(bond_desc_lim->nb_mtu_seg_max == 0 ||
- slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
- bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+ member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+ bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
return 0;
}
static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
{
- struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+ struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
struct bond_dev_private *internals;
struct rte_eth_link link_props;
struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
- RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+ RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
return -1;
}
- ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+ ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port_id, strerror(-ret));
+ __func__, member_port_id, strerror(-ret));
return ret;
}
if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
- RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
- slave_port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+ member_port_id);
return -1;
}
- slave_add(internals, slave_eth_dev);
+ member_add(internals, member_eth_dev);
- /* We need to store slaves reta_size to be able to synchronize RETA for all
- * slave devices even if its sizes are different.
+ /* We need to store members reta_size to be able to synchronize RETA for all
+ * member devices even if their sizes are different.
*/
- internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+ internals->members[internals->member_count].reta_size = dev_info.reta_size;
- if (internals->slave_count < 1) {
- /* if MAC is not user defined then use MAC of first slave add to
+ if (internals->member_count < 1) {
+ /* if MAC is not user defined then use MAC of first member add to
* bonded device */
if (!internals->user_defined_mac) {
if (mac_address_set(bonded_eth_dev,
- slave_eth_dev->data->mac_addrs)) {
+ member_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to set MAC address");
return -1;
}
}
- /* Make primary slave */
- internals->primary_port = slave_port_id;
- internals->current_primary_port = slave_port_id;
+ /* Make primary member */
+ internals->primary_port = member_port_id;
+ internals->current_primary_port = member_port_id;
internals->speed_capa = dev_info.speed_capa;
- /* Inherit queues settings from first slave */
- internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
- internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+ /* Inherit queues settings from first member */
+ internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+ internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
- eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
- eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
&dev_info.rx_desc_lim);
- eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
&dev_info.tx_desc_lim);
} else {
int ret;
internals->speed_capa &= dev_info.speed_capa;
- eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->rx_desc_lim, &dev_info.rx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+ &dev_info.rx_desc_lim);
if (ret != 0)
return ret;
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->tx_desc_lim, &dev_info.tx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+ &dev_info.tx_desc_lim);
if (ret != 0)
return ret;
}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
- if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
- RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
- slave_port_id);
+ if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+ member_port_id);
return -1;
}
- /* Add additional MAC addresses to the slave */
- if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
- RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
- slave_port_id);
+ /* Add additional MAC addresses to the member */
+ if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+ member_port_id);
return -1;
}
- internals->slave_count++;
+ internals->member_count++;
if (bonded_eth_dev->data->dev_started) {
- if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
- slave_port_id);
+ if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+ member_port_id);
return -1;
}
- if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
- slave_port_id);
+ if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+ member_port_id);
return -1;
}
}
- /* Update all slave devices MACs */
- mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MACs */
+ mac_address_members_update(bonded_eth_dev);
/* Register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
- /* If bonded device is started then we can add the slave to our active
- * slave array */
+ /*
+ * If bonded device is started then we can add the member to our active
+ * member array.
+ */
if (bonded_eth_dev->data->dev_started) {
- ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+ ret = rte_eth_link_get_nowait(member_port_id, &link_props);
if (ret < 0) {
- rte_eth_dev_callback_unregister(slave_port_id,
+ rte_eth_dev_callback_unregister(member_port_id,
RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&bonded_eth_dev->data->port_id);
- internals->slave_count--;
+ internals->member_count--;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_port_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_port_id, rte_strerror(-ret));
return -1;
}
if (link_props.link_status == RTE_ETH_LINK_UP) {
- if (internals->active_slave_count == 0 &&
+ if (internals->active_member_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
- slave_port_id);
+ member_port_id);
}
}
- /* Add slave details to bonded device */
- slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+ /* Add member details to bonded device */
+ member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
- slave_vlan_filter_set(bonded_port_id, slave_port_id);
+ member_vlan_filter_set(bonded_port_id, member_port_id);
return 0;
}
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
- uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+ uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct rte_flow_error flow_error;
struct rte_flow *flow;
- int i, slave_idx;
+ int i, member_idx;
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) < 0)
+ if (valid_member_port_id(internals, member_port_id) < 0)
return -1;
- /* first remove from active slave list */
- slave_idx = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_port_id);
+ /* first remove from active member list */
+ member_idx = find_member_by_id(internals->active_members,
+ internals->active_member_count, member_port_id);
- if (slave_idx < internals->active_slave_count)
- deactivate_slave(bonded_eth_dev, slave_port_id);
+ if (member_idx < internals->active_member_count)
+ deactivate_member(bonded_eth_dev, member_port_id);
- slave_idx = -1;
- /* now find in slave list */
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == slave_port_id) {
- slave_idx = i;
+ member_idx = -1;
+ /* now find in member list */
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == member_port_id) {
+ member_idx = i;
break;
}
- if (slave_idx < 0) {
- RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
- internals->slave_count);
+ if (member_idx < 0) {
+ RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+ internals->member_count);
return -1;
}
/* Un-register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&rte_eth_devices[bonded_port_id].data->port_id);
- /* Restore original MAC address of slave device */
- rte_eth_dev_default_mac_addr_set(slave_port_id,
- &(internals->slaves[slave_idx].persisted_mac_addr));
+ /* Restore original MAC address of member device */
+ rte_eth_dev_default_mac_addr_set(member_port_id,
+ &internals->members[member_idx].persisted_mac_addr);
- /* remove additional MAC addresses from the slave */
- slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+ /* remove additional MAC addresses from the member */
+ member_remove_mac_addresses(bonded_eth_dev, member_port_id);
/*
- * Remove bond device flows from slave device.
+ * Remove bond device flows from member device.
* Note: don't restore flow isolate mode.
*/
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_idx] != NULL) {
- rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+ if (flow->flows[member_idx] != NULL) {
+ rte_flow_destroy(member_port_id, flow->flows[member_idx],
&flow_error);
- flow->flows[slave_idx] = NULL;
+ flow->flows[member_idx] = NULL;
}
}
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- slave_remove(internals, slave_eth_dev);
- slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ member_remove(internals, member_eth_dev);
+ member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
- /* first slave in the active list will be the primary by default,
+ /* first member in the active list will be the primary by default,
* otherwise use first device in list */
- if (internals->current_primary_port == slave_port_id) {
- if (internals->active_slave_count > 0)
- internals->current_primary_port = internals->active_slaves[0];
- else if (internals->slave_count > 0)
- internals->current_primary_port = internals->slaves[0].port_id;
+ if (internals->current_primary_port == member_port_id) {
+ if (internals->active_member_count > 0)
+ internals->current_primary_port = internals->active_members[0];
+ else if (internals->member_count > 0)
+ internals->current_primary_port = internals->members[0].port_id;
else
internals->primary_port = 0;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
}
- if (internals->active_slave_count < 1) {
- /* if no slaves are any longer attached to bonded device and MAC is not
+ if (internals->active_member_count < 1) {
+ /*
+ * if no members remain attached to the bonded device and MAC is not
* user defined then clear MAC of bonded device as it will be reset
- * when a new slave is added */
- if (internals->slave_count < 1 && !internals->user_defined_mac)
+ * when a new member is added.
+ */
+ if (internals->member_count < 1 && !internals->user_defined_mac)
memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
}
- if (internals->slave_count == 0) {
+ if (internals->member_count == 0) {
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
}
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
- if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+ if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
mode == BONDING_MODE_8023AD)
return -1;
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
}
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct bond_dev_private *internals;
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
internals->user_defined_primary_port = 1;
- internals->primary_port = slave_port_id;
+ internals->primary_port = member_port_id;
- bond_ethdev_primary_set(internals, slave_port_id);
+ bond_ethdev_primary_set(internals, member_port_id);
return 0;
}
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count < 1)
+ if (internals->member_count < 1)
return -1;
return internals->current_primary_port;
}
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count > len)
+ if (internals->member_count > len)
return -1;
- for (i = 0; i < internals->slave_count; i++)
- slaves[i] = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++)
+ members[i] = internals->members[i].port_id;
- return internals->slave_count;
+ return internals->member_count;
}
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->active_slave_count > len)
+ if (internals->active_member_count > len)
return -1;
- memcpy(slaves, internals->active_slaves,
- internals->active_slave_count * sizeof(internals->active_slaves[0]));
+ memcpy(members, internals->active_members,
+ internals->active_member_count * sizeof(internals->active_members[0]));
- return internals->active_slave_count;
+ return internals->active_member_count;
}
int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
internals->user_defined_mac = 1;
- /* Update all slave devices MACs*/
- if (internals->slave_count > 0)
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MACs*/
+ if (internals->member_count > 0)
+ return mac_address_members_update(bonded_eth_dev);
return 0;
}
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
internals->user_defined_mac = 0;
- if (internals->slave_count > 0) {
- int slave_port;
- /* Get the primary slave location based on the primary port
- * number as, while slave_add(), we will keep the primary
- * slave based on slave_count,but not based on the primary port.
+ if (internals->member_count > 0) {
+ int member_port;
+ /* Get the primary member location based on the primary port
+ * number as, while member_add(), we will keep the primary
+ * member based on member_count, but not based on the primary port.
*/
- for (slave_port = 0; slave_port < internals->slave_count;
- slave_port++) {
- if (internals->slaves[slave_port].port_id ==
+ for (member_port = 0; member_port < internals->member_count;
+ member_port++) {
+ if (internals->members[member_port].port_id ==
internals->primary_port)
break;
}
/* Set MAC Address of Bonded Device */
if (mac_address_set(bonded_eth_dev,
- &internals->slaves[slave_port].persisted_mac_addr)
+ &internals->members[member_port].persisted_mac_addr)
!= 0) {
RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
return -1;
}
- /* Update all slave devices MAC addresses */
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MAC addresses */
+ return mac_address_members_update(bonded_eth_dev);
}
- /* No need to update anything as no slaves present */
+ /* No need to update anything as no members present */
return 0;
}
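
For readers tracking the rename at the public API level, here is a minimal, illustrative sketch (not part of the patch) of how an application would enumerate members through the renamed accessor. The bonded port id, the printing logic and the assumption that the header stays rte_eth_bond.h are mine; the function name and signature are taken from the hunk above, where rte_eth_bond_members_get() replaces rte_eth_bond_slaves_get().

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: list the member ports of a bonded port with the
 * renamed API. Returns silently on query failure for brevity.
 */
static void
show_bond_members(uint16_t bonded_port_id)
{
        uint16_t members[RTE_MAX_ETHPORTS];
        int count, i;

        count = rte_eth_bond_members_get(bonded_port_id, members,
                        RTE_MAX_ETHPORTS);
        if (count < 0) {
                printf("port %u: not a bonded port or query failed\n",
                                bonded_port_id);
                return;
        }
        for (i = 0; i < count; i++)
                printf("bonded port %u: member port %u\n",
                                bonded_port_id, members[i]);
}
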
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
#include "eth_bond_private.h"
const char *pmd_bond_init_valid_arguments[] = {
- PMD_BOND_SLAVE_PORT_KVARG,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
+ PMD_BOND_MEMBER_PORT_KVARG,
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
PMD_BOND_MODE_KVARG,
PMD_BOND_XMIT_POLICY_KVARG,
PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
}
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args)
{
- struct bond_ethdev_slave_ports *slave_ports;
+ struct bond_ethdev_member_ports *member_ports;
if (value == NULL || extra_args == NULL)
return -1;
- slave_ports = extra_args;
+ member_ports = extra_args;
- if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+ if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
int port_id = parse_port_id(value);
if (port_id < 0) {
- RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+ RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
value);
return -1;
} else
- slave_ports->slaves[slave_ports->slave_count++] =
+ member_ports->members[member_ports->member_count++] =
port_id;
}
return 0;
}
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
case BONDING_MODE_ALB:
return 0;
default:
- RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+ RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
return -1;
}
}
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
}
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
- int primary_slave_port_id;
+ int primary_member_port_id;
if (value == NULL || extra_args == NULL)
return -1;
- primary_slave_port_id = parse_port_id(value);
- if (primary_slave_port_id < 0)
+ primary_member_port_id = parse_port_id(value);
+ if (primary_member_port_id < 0)
return -1;
- *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+ *(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
return 0;
}
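
As a side note on how these renamed kvarg handlers are consumed, the sketch below shows the usual rte_kvargs flow for a member-port key. The literal "member" key string and the helper itself are assumptions for illustration only; the driver refers to the key through the PMD_BOND_MEMBER_PORT_KVARG macro and hands each occurrence to bond_ethdev_parse_member_port_kvarg() via rte_kvargs_process().

#include <rte_kvargs.h>

/* Illustrative only: count "member=<port>" occurrences in a devargs string. */
static int
count_member_kvargs(const char *devargs)
{
        static const char * const valid_keys[] = { "member", NULL };
        struct rte_kvargs *kvlist;
        int count;

        kvlist = rte_kvargs_parse(devargs, valid_keys);
        if (kvlist == NULL)
                return -1;

        count = (int)rte_kvargs_count(kvlist, "member");
        rte_kvargs_free(kvlist);
        return count;
}
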
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_validate(internals->members[i].port_id, attr,
patterns, actions, err);
if (ret) {
RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
return ret;
}
}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
NULL, rte_strerror(ENOMEM));
return NULL;
}
- for (i = 0; i < internals->slave_count; i++) {
- flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ flow->flows[i] = rte_flow_create(internals->members[i].port_id,
attr, patterns, actions, err);
if (unlikely(flow->flows[i] == NULL)) {
- RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+ RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
i);
goto err;
}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
return flow;
err:
- /* Destroy all slaves flows. */
- for (i = 0; i < internals->slave_count; i++) {
+ /* Destroy all members flows. */
+ for (i = 0; i < internals->member_count; i++) {
if (flow->flows[i] != NULL)
- rte_flow_destroy(internals->slaves[i].port_id,
+ rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
}
bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
int i;
int ret = 0;
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
int lret;
if (unlikely(flow->flows[i] == NULL))
continue;
- lret = rte_flow_destroy(internals->slaves[i].port_id,
+ lret = rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
if (unlikely(lret != 0)) {
- RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+ RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
" %d", i, lret);
ret = lret;
}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
int ret = 0;
int lret;
- /* Destroy all bond flows from its slaves instead of flushing them to
+ /* Destroy all bond flows from its members instead of flushing them to
* keep the LACP flow or any other external flows.
*/
RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
ret = lret;
}
if (unlikely(ret != 0))
- RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+ RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
return ret;
}
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *err)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_flow_query_count slave_count;
+ struct rte_flow_query_count member_count;
int i;
int ret;
count->bytes = 0;
count->hits = 0;
- rte_memcpy(&slave_count, count, sizeof(slave_count));
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_query(internals->slaves[i].port_id,
+ rte_memcpy(&member_count, count, sizeof(member_count));
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_query(internals->members[i].port_id,
flow->flows[i], action,
- &slave_count, err);
+ &member_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
- " slave %d: %d", i, ret);
+ " member %d: %d", i, ret);
return ret;
}
- count->bytes += slave_count.bytes;
- count->hits += slave_count.hits;
- slave_count.bytes = 0;
- slave_count.hits = 0;
+ count->bytes += member_count.bytes;
+ count->hits += member_count.hits;
+ member_count.bytes = 0;
+ member_count.hits = 0;
}
return 0;
}
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_isolate(internals->members[i].port_id, set, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
internals->flow_isolated_valid = 0;
return ret;
}
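
The per-member counter aggregation done by bond_flow_query_count() above generalises to a small helper; the sketch below mirrors that pattern under the assumption that the caller already holds one flow handle per member port and a COUNT action. It sums hits and bytes across members and returns the first member error encountered, like the bonding code.

#include <string.h>
#include <rte_flow.h>

/* Hypothetical helper: query a COUNT action on every member flow and
 * accumulate the results into *total. member_ports[], flows[] and
 * count_action are assumed to be provided by the caller.
 */
static int
sum_member_flow_counters(const uint16_t *member_ports,
                struct rte_flow * const *flows, uint16_t member_count,
                const struct rte_flow_action *count_action,
                struct rte_flow_query_count *total, struct rte_flow_error *err)
{
        struct rte_flow_query_count one;
        uint16_t i;
        int ret;

        total->hits = 0;
        total->bytes = 0;
        for (i = 0; i < member_count; i++) {
                memset(&one, 0, sizeof(one));
                ret = rte_flow_query(member_ports[i], flows[i], count_action,
                                &one, err);
                if (ret != 0)
                        return ret;
                total->hits += one.hits;
                total->bytes += one.bytes;
        }
        return 0;
}
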
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct bond_dev_private *internals;
uint16_t num_rx_total = 0;
- uint16_t slave_count;
- uint16_t active_slave;
+ uint16_t member_count;
+ uint16_t active_member;
int i;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- slave_count = internals->active_slave_count;
- active_slave = bd_rx_q->active_slave;
+ member_count = internals->active_member_count;
+ active_member = bd_rx_q->active_member;
- for (i = 0; i < slave_count && nb_pkts; i++) {
- uint16_t num_rx_slave;
+ for (i = 0; i < member_count && nb_pkts; i++) {
+ uint16_t num_rx_member;
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave =
- rte_eth_rx_burst(internals->active_slaves[active_slave],
+ /*
+ * Offset of pointer to *bufs increases as packets are received
+ * from other members.
+ */
+ num_rx_member =
+ rte_eth_rx_burst(internals->active_members[active_member],
bd_rx_q->queue_id,
bufs + num_rx_total, nb_pkts);
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
- if (++active_slave >= slave_count)
- active_slave = 0;
+ num_rx_total += num_rx_member;
+ nb_pkts -= num_rx_member;
+ if (++active_member >= member_count)
+ active_member = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port) {
- struct rte_eth_dev_info slave_info;
+ uint16_t member_port) {
+ struct rte_eth_dev_info member_info;
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
}
};
- int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+ int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
flow_item_8023ad, actions, &error);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
- __func__, error.message, slave_port,
+ RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+ __func__, error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
- ret = rte_eth_dev_info_get(slave_port, &slave_info);
+ ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port, strerror(-ret));
+ __func__, member_port, strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
- slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+ if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+ member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
RTE_BOND_LOG(ERR,
- "%s: Slave %d capabilities doesn't allow allocating additional queues",
- __func__, slave_port);
+ "%s: Member %d capabilities doesn't allow allocating additional queues",
+ __func__, member_port);
return -1;
}
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
uint16_t idx;
int ret;
- /* Verify if all slaves in bonding supports flow director and */
- if (internals->slave_count > 0) {
+ /* Verify if all members in bonding support flow director and */
+ if (internals->member_count > 0) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
- for (idx = 0; idx < internals->slave_count; idx++) {
+ for (idx = 0; idx < internals->member_count; idx++) {
if (bond_ethdev_8023ad_flow_verify(bond_dev,
- internals->slaves[idx].port_id) != 0)
+ internals->members[idx].port_id) != 0)
return -1;
}
}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
}
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
}
};
- internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+ internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
&flow_attr_8023ad, flow_item_8023ad, actions, &error);
- if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+ if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
- "(slave_port=%d queue_id=%d)",
- error.message, slave_port,
+ "(member_port=%d queue_id=%d)",
+ error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
const uint16_t ether_type_slow_be =
rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint16_t slaves[RTE_MAX_ETHPORTS];
- uint16_t slave_count, idx;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ uint16_t member_count, idx;
- uint8_t collecting; /* current slave collecting status */
+ uint8_t collecting; /* current member collecting status */
const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
uint16_t j;
uint16_t k;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
+ member_count = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * member_count);
- idx = bd_rx_q->active_slave;
- if (idx >= slave_count) {
- bd_rx_q->active_slave = 0;
+ idx = bd_rx_q->active_member;
+ if (idx >= member_count) {
+ bd_rx_q->active_member = 0;
idx = 0;
}
- for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+ for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+ collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
COLLECTING);
- /* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+ /* Read packets from this member */
+ num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is slow packet but no dedicated rxq is present,
- * - slave is not in collecting state,
+ * - member is not in collecting state,
* - bonding interface is not in promiscuous mode and
* packet address isn't in mac_addrs array:
* - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
!allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
- internals, slaves[idx], bufs[j]);
+ internals, members[idx], bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
} else
j++;
}
- if (unlikely(++idx == slave_count))
+ if (unlikely(++idx == member_count))
idx = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
#ifdef RTE_LIBRTE_BOND_DEBUG_ALB
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
- uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+ uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
- uint16_t num_of_slaves;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members;
+ uint16_t members[RTE_MAX_ETHPORTS];
- uint16_t num_tx_total = 0, num_tx_slave;
+ uint16_t num_tx_total = 0, num_tx_member;
- static int slave_idx = 0;
- int i, cslave_idx = 0, tx_fail_total = 0;
+ static int member_idx;
+ int i, cmember_idx = 0, tx_fail_total = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- /* Populate slaves mbuf with which packets are to be sent on it */
+ /* Populate members mbuf with which packets are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
- cslave_idx = (slave_idx + i) % num_of_slaves;
- slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+ cmember_idx = (member_idx + i) % num_of_members;
+ member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
}
- /* increment current slave index so the next call to tx burst starts on the
- * next slave */
- slave_idx = ++cslave_idx;
+ /*
+ * increment current member index so the next call to tx burst starts on the
+ * next member.
+ */
+ member_idx = ++cmember_idx;
- /* Send packet burst on each slave device */
- for (i = 0; i < num_of_slaves; i++) {
- if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_prepare(slaves[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_pkts[i]);
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- slave_bufs[i], num_tx_slave);
+ /* Send packet burst on each member device */
+ for (i = 0; i < num_of_members; i++) {
+ if (member_nb_pkts[i] > 0) {
+ num_tx_member = rte_eth_tx_prepare(members[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_pkts[i]);
+ num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+ member_bufs[i], num_tx_member);
/* if tx burst fails move packets to end of bufs */
- if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
- int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+ if (unlikely(num_tx_member < member_nb_pkts[i])) {
+ int tx_fail_member = member_nb_pkts[i] - num_tx_member;
- tx_fail_total += tx_fail_slave;
+ tx_fail_total += tx_fail_member;
memcpy(&bufs[nb_pkts - tx_fail_total],
- &slave_bufs[i][num_tx_slave],
- tx_fail_slave * sizeof(bufs[0]));
+ &member_bufs[i][num_tx_member],
+ tx_fail_member * sizeof(bufs[0]));
}
- num_tx_total += num_tx_slave;
+ num_tx_total += num_tx_member;
}
}
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
return 0;
nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i] = (hash ^= hash >> 8) % slave_count;
+ members[i] = (hash ^= hash >> 8) % member_count;
}
}
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
uint16_t i;
struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
-struct bwg_slave {
+struct bwg_member {
uint64_t bwg_left_int;
uint64_t bwg_left_remainder;
- uint16_t slave;
+ uint16_t member;
};
void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
int i;
- for (i = 0; i < internals->active_slave_count; i++) {
- tlb_last_obytets[internals->active_slaves[i]] = 0;
- }
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
static int
bandwidth_cmp(const void *a, const void *b)
{
- const struct bwg_slave *bwg_a = a;
- const struct bwg_slave *bwg_b = b;
+ const struct bwg_member *bwg_a = a;
+ const struct bwg_member *bwg_b = b;
int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
static void
bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
- struct bwg_slave *bwg_slave)
+ struct bwg_member *bwg_member)
{
struct rte_eth_link link_status;
int ret;
ret = rte_eth_link_get_nowait(port_id, &link_status);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
port_id, rte_strerror(-ret));
return;
}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
if (link_bwg == 0)
return;
link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
- bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
- bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+ bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+ bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
}
static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
{
struct bond_dev_private *internals = arg;
- struct rte_eth_stats slave_stats;
- struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ struct rte_eth_stats member_stats;
+ struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
uint64_t tx_bytes;
uint8_t update_stats = 0;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
- internals->slave_update_idx++;
+ internals->member_update_idx++;
- if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+ if (internals->member_update_idx >= REORDER_PERIOD_MS)
update_stats = 1;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- rte_eth_stats_get(slave_id, &slave_stats);
- tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
- bandwidth_left(slave_id, tx_bytes,
- internals->slave_update_idx, &bwg_array[i]);
- bwg_array[i].slave = slave_id;
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ rte_eth_stats_get(member_id, &member_stats);
+ tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+ bandwidth_left(member_id, tx_bytes,
+ internals->member_update_idx, &bwg_array[i]);
+ bwg_array[i].member = member_id;
if (update_stats) {
- tlb_last_obytets[slave_id] = slave_stats.obytes;
+ tlb_last_obytets[member_id] = member_stats.obytes;
}
}
if (update_stats == 1)
- internals->slave_update_idx = 0;
+ internals->member_update_idx = 0;
- slave_count = i;
- qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
- for (i = 0; i < slave_count; i++)
- internals->tlb_slaves_order[i] = bwg_array[i].slave;
+ member_count = i;
+ qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+ for (i = 0; i < member_count; i++)
+ internals->tlb_members_order[i] = bwg_array[i].member;
- rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+ rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
(struct bond_dev_private *)internals);
}
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0, num_tx_prep;
uint16_t i, j;
- uint16_t num_of_slaves = internals->active_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members = internals->active_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_ether_hdr *ether_hdr;
- struct rte_ether_addr primary_slave_addr;
- struct rte_ether_addr active_slave_addr;
+ struct rte_ether_addr primary_member_addr;
+ struct rte_ether_addr active_member_addr;
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+ memcpy(members, internals->tlb_members_order,
+ sizeof(internals->tlb_members_order[0]) * num_of_members);
- rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+ rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
if (nb_pkts > 3) {
for (i = 0; i < 3; i++)
rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
}
- for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ for (i = 0; i < num_of_members; i++) {
+ rte_eth_macaddr_get(members[i], &active_member_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
if (rte_is_same_ether_addr(&ether_hdr->src_addr,
- &primary_slave_addr))
- rte_ether_addr_copy(&active_slave_addr,
+ &primary_member_addr))
+ rte_ether_addr_copy(&active_member_addr,
&ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
- mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+ mode6_debug("TX IPv4:", ether_hdr, members[i],
+ &burst_number_TX);
#endif
}
- num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+ num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, nb_pkts - num_tx_total);
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, num_tx_prep);
if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
void
bond_tlb_disable(struct bond_dev_private *internals)
{
- rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+ rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
}
void
bond_tlb_enable(struct bond_dev_private *internals)
{
- bond_ethdev_update_tlb_slave_cb(internals);
+ bond_ethdev_update_tlb_member_cb(internals);
}
static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct client_data *client_info;
/*
- * We create transmit buffers for every slave and one additional to send
+ * We create transmit buffers for every member and one additional to send
* through tlb. In worst case every packet will be send on one port.
*/
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
- uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+ uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
* We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_send, num_not_send = 0;
uint16_t num_tx_total = 0;
- uint16_t slave_idx;
+ uint16_t member_idx;
int i, j;
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
offset = get_vlan_offset(eth_h, &ether_type);
if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
- slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+ member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+ rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
- /* Add packet to slave tx buffer */
- slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
- slave_bufs_pkts[slave_idx]++;
+ /* Add packet to member tx buffer */
+ member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+ member_bufs_pkts[member_idx]++;
} else {
/* If packet is not ARP, send it with TLB policy */
- slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+ member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
bufs[i];
- slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+ member_bufs_pkts[RTE_MAX_ETHPORTS]++;
}
}
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- /* Allocate new packet to send ARP update on current slave */
+ /* Allocate new packet to send ARP update on current member */
upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
if (upd_pkt == NULL) {
RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
upd_pkt->data_len = pkt_size;
upd_pkt->pkt_len = pkt_size;
- slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+ member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
internals);
/* Add packet to update tx buffer */
- update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
- update_bufs_pkts[slave_idx]++;
+ update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+ update_bufs_pkts[member_idx]++;
}
}
internals->mode6.ntt = 0;
}
- /* Send ARP packets on proper slaves */
+ /* Send ARP packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (slave_bufs_pkts[i] > 0) {
+ if (member_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
- slave_bufs[i], slave_bufs_pkts[i]);
+ member_bufs[i], member_bufs_pkts[i]);
num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
- slave_bufs[i], num_send);
- for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+ member_bufs[i], num_send);
+ for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[i][nb_pkts - 1 - j];
+ member_bufs[i][nb_pkts - 1 - j];
}
num_tx_total += num_send;
- num_not_send += slave_bufs_pkts[i] - num_send;
+ num_not_send += member_bufs_pkts[i] - num_send;
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
/* Print TX stats including update packets */
- for (j = 0; j < slave_bufs_pkts[i]; j++) {
- eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+ for (j = 0; j < member_bufs_pkts[i]; j++) {
+ eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
}
#endif
}
}
- /* Send update packets on proper slaves */
+ /* Send update packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
if (update_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
for (j = 0; j < update_bufs_pkts[i]; j++) {
eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
}
#endif
}
}
/* Send non-ARP packets using tlb policy */
- if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+ if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
num_send = bond_ethdev_tx_burst_tlb(queue,
- slave_bufs[RTE_MAX_ETHPORTS],
- slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+ member_bufs[RTE_MAX_ETHPORTS],
+ member_bufs_pkts[RTE_MAX_ETHPORTS]);
- for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+ for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+ member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
}
num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static inline uint16_t
tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
- uint16_t *slave_port_ids, uint16_t slave_count)
+ uint16_t *member_port_ids, uint16_t member_count)
{
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- /* Array to sort mbufs for transmission on each slave into */
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
- /* Number of mbufs for transmission on each slave */
- uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
- /* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[nb_bufs];
+ /* Array to sort mbufs for transmission on each member into */
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+ /* Number of mbufs for transmission on each member */
+ uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+ /* Mapping array generated by hash function to map mbufs to members */
+ uint16_t bufs_member_port_idxs[nb_bufs];
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
uint16_t i;
/*
- * Populate slaves mbuf with the packets which are to be sent on it
- * selecting output slave using hash based on xmit policy
+ * Populate members mbuf with the packets which are to be sent on it
+ * selecting output member using hash based on xmit policy
*/
- internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
- bufs_slave_port_idxs);
+ internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+ bufs_member_port_idxs);
for (i = 0; i < nb_bufs; i++) {
- /* Populate slave mbuf arrays with mbufs for that slave. */
- uint16_t slave_idx = bufs_slave_port_idxs[i];
+ /* Populate member mbuf arrays with mbufs for that member. */
+ uint16_t member_idx = bufs_member_port_idxs[i];
- slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+ member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
}
- /* Send packet burst on each slave device */
- for (i = 0; i < slave_count; i++) {
- if (slave_nb_bufs[i] == 0)
+ /* Send packet burst on each member device */
+ for (i = 0; i < member_count; i++) {
+ if (member_nb_bufs[i] == 0)
continue;
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_bufs[i]);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_tx_count);
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_bufs[i]);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_tx_count);
- total_tx_count += slave_tx_count;
+ total_tx_count += member_tx_count;
/* If tx burst fails move packets to end of bufs */
- if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
- int slave_tx_fail_count = slave_nb_bufs[i] -
- slave_tx_count;
- total_tx_fail_count += slave_tx_fail_count;
+ if (unlikely(member_tx_count < member_nb_bufs[i])) {
+ int member_tx_fail_count = member_nb_bufs[i] -
+ member_tx_count;
+ total_tx_fail_count += member_tx_fail_count;
memcpy(&bufs[nb_bufs - total_tx_fail_count],
- &slave_bufs[i][slave_tx_count],
- slave_tx_fail_count * sizeof(bufs[0]));
+ &member_bufs[i][member_tx_count],
+ member_tx_fail_count * sizeof(bufs[0]));
}
}
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
if (unlikely(nb_bufs == 0))
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting
*/
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
- return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
- slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+ member_count);
}
static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
- uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t dist_slave_count;
+ uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t dist_member_count;
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t i;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
if (dedicated_txq)
goto skip_tx_ring;
/* Check for LACP control packets and send if available */
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
struct rte_mbuf *ctrl_pkt = NULL;
if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (rte_ring_dequeue(port->tx_ring,
(void **)&ctrl_pkt) != -ENOENT) {
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
bd_tx_q->queue_id, &ctrl_pkt, 1);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
/*
* re-enqueue LAG control plane packets to buffering
* ring if transmission fails so the packet isn't lost.
*/
- if (slave_tx_count != 1)
+ if (member_tx_count != 1)
rte_ring_enqueue(port->tx_ring, ctrl_pkt);
}
}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (unlikely(nb_bufs == 0))
return 0;
- dist_slave_count = 0;
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ dist_member_count = 0;
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
if (ACTOR_STATE(port, DISTRIBUTING))
- dist_slave_port_ids[dist_slave_count++] =
- slave_port_ids[i];
+ dist_member_port_ids[dist_member_count++] =
+ member_port_ids[i];
}
- if (unlikely(dist_slave_count < 1))
+ if (unlikely(dist_member_count < 1))
return 0;
- return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
- dist_slave_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+ dist_member_count);
}
static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint8_t tx_failed_flag = 0;
- uint16_t num_of_slaves;
+ uint16_t num_of_members;
uint16_t max_nb_of_tx_pkts = 0;
- int slave_tx_total[RTE_MAX_ETHPORTS];
- int i, most_successful_tx_slave = -1;
+ int member_tx_total[RTE_MAX_ETHPORTS];
+ int i, most_successful_tx_member = -1;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return 0;
/* It is rare that bond different PMDs together, so just call tx-prepare once */
- nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+ nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
- rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+ rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
- /* Transmit burst on each active slave */
- for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ /* Transmit burst on each active member */
+ for (i = 0; i < num_of_members; i++) {
+ member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs, nb_pkts);
- if (unlikely(slave_tx_total[i] < nb_pkts))
+ if (unlikely(member_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
- /* record the value and slave index for the slave which transmits the
+ /* record the value and member index for the member which transmits the
* maximum number of packets */
- if (slave_tx_total[i] > max_nb_of_tx_pkts) {
- max_nb_of_tx_pkts = slave_tx_total[i];
- most_successful_tx_slave = i;
+ if (member_tx_total[i] > max_nb_of_tx_pkts) {
+ max_nb_of_tx_pkts = member_tx_total[i];
+ most_successful_tx_member = i;
}
}
- /* if slaves fail to transmit packets from burst, the calling application
+ /* if members fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
- * handle failures of all packets except those of the most successful slave
+ * handle failures of all packets except those of the most successful member
*/
if (unlikely(tx_failed_flag))
- for (i = 0; i < num_of_slaves; i++)
- if (i != most_successful_tx_slave)
- while (slave_tx_total[i] < nb_pkts)
- rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+ for (i = 0; i < num_of_members; i++)
+ if (i != most_successful_tx_member)
+ while (member_tx_total[i] < nb_pkts)
+ rte_pktmbuf_free(bufs[member_tx_total[i]++]);
return max_nb_of_tx_pkts;
}
static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
/**
* If in mode 4 then save the link properties of the first
- * slave, all subsequent slaves must match these properties
+ * member, all subsequent members must match these properties
*/
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- bond_link->link_autoneg = slave_link->link_autoneg;
- bond_link->link_duplex = slave_link->link_duplex;
- bond_link->link_speed = slave_link->link_speed;
+ bond_link->link_autoneg = member_link->link_autoneg;
+ bond_link->link_duplex = member_link->link_duplex;
+ bond_link->link_speed = member_link->link_speed;
} else {
/**
* In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
static int
link_properties_valid(struct rte_eth_dev *ethdev,
- struct rte_eth_link *slave_link)
+ struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- if (bond_link->link_duplex != slave_link->link_duplex ||
- bond_link->link_autoneg != slave_link->link_autoneg ||
- bond_link->link_speed != slave_link->link_speed)
+ if (bond_link->link_duplex != member_link->link_duplex ||
+ bond_link->link_autoneg != member_link->link_autoneg ||
+ bond_link->link_speed != member_link->link_speed)
return -1;
}
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
static const struct rte_ether_addr null_mac_addr;
/*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
*/
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, ret;
struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+ ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
if (ret < 0) {
/* rollback */
for (i--; i > 0; i--)
- rte_eth_dev_mac_addr_remove(slave_port_id,
+ rte_eth_dev_mac_addr_remove(member_port_id,
&bonded_eth_dev->data->mac_addrs[i]);
return ret;
}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
/*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
*/
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, rc, ret;
struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+ ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
/* save only the first error */
if (ret < 0 && rc == 0)
rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
bool set;
int i;
- /* Update slave devices MAC addresses */
- if (internals->slave_count < 1)
+ /* Update member devices MAC addresses */
+ if (internals->member_count < 1)
return -1;
switch (internals->mode) {
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
+ internals->members[i].port_id,
bonded_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
case BONDING_MODE_ALB:
default:
set = true;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id ==
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id ==
internals->current_primary_port) {
if (rte_eth_dev_default_mac_addr_set(
internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
}
} else {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
- &internals->slaves[i].persisted_mac_addr)) {
+ internals->members[i].port_id,
+ &internals->members[i].persisted_mac_addr)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
}
}
}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+ struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
if (port->slow_pool == NULL) {
char mem_name[256];
- int slave_id = slave_eth_dev->data->port_id;
+ int member_id = member_eth_dev->data->port_id;
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
- slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+ member_id);
port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
- slave_eth_dev->data->numa_node);
+ member_eth_dev->data->numa_node);
/* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->slow_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
}
if (internals->mode4.dedicated_queues.enabled == 1) {
/* Configure slow Rx queue */
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid, 128,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid,
errval);
return errval;
}
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid, 512,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid,
errval);
return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
}
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- /* Stop slave */
- errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+ /* Stop member */
+ errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
if (errval != 0)
RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
- /* Enable interrupts on slave device if supported */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+ /* Enable interrupts on member device if supported */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
- /* If RSS is enabled for bonding, try to enable it for slaves */
+ /* If RSS is enabled for bonding, try to enable it for members */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
} else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
}
- slave_eth_dev->data->dev_conf.rxmode.mtu =
+ member_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- slave_eth_dev->data->dev_conf.link_speeds =
+ member_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads =
+ member_eth_dev->data->dev_conf.txmode.offloads =
bonded_eth_dev->data->dev_conf.txmode.offloads;
- slave_eth_dev->data->dev_conf.rxmode.offloads =
+ member_eth_dev->data->dev_conf.rxmode.offloads =
bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* Configure device */
- errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
nb_rx_queues, nb_tx_queues,
- &(slave_eth_dev->data->dev_conf));
+ &member_eth_dev->data->dev_conf);
if (errval != 0) {
- RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+ member_eth_dev->data->port_id, errval);
return errval;
}
- errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
bonded_eth_dev->data->mtu);
if (errval != 0 && errval != -ENOTSUP) {
RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
return errval;
}
return 0;
}
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
uint16_t q_id;
struct rte_flow_error flow_error;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+ uint16_t member_port_id = member_eth_dev->data->port_id;
/* Setup Rx Queues */
for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_rx_queue_setup(member_port_id, q_id,
bd_rx_q->nb_rx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_tx_queue_setup(member_port_id, q_id,
bd_tx_q->nb_tx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&bd_tx_q->tx_conf);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
- if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+ if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
!= 0)
return errval;
errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
- if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
- errval = rte_flow_destroy(slave_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+ if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+ errval = rte_flow_destroy(member_port_id,
+ internals->mode4.dedicated_queues.flow[member_port_id],
&flow_error);
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
}
/* Start device */
- errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+ errval = rte_eth_dev_start(member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return -1;
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
internals = bonded_eth_dev->data->dev_private;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == member_port_id) {
errval = rte_eth_dev_rss_reta_update(
- slave_eth_dev->data->port_id,
+ member_port_id,
&internals->reta_conf[0],
- internals->slaves[i].reta_size);
+ internals->members[i].reta_size);
if (errval != 0) {
RTE_BOND_LOG(WARNING,
- "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+ "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
" RSS Configuration for bonding may be inconsistent.",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
break;
}
}
}
- /* If lsc interrupt is set, check initial slave's link status */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
- slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
- bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+ /* If lsc interrupt is set, check initial member's link status */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+ bond_ethdev_lsc_event_callback(member_port_id,
RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
NULL);
}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
}
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t i;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id ==
- slave_eth_dev->data->port_id)
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id ==
+ member_eth_dev->data->port_id)
break;
- if (i < (internals->slave_count - 1)) {
+ if (i < (internals->member_count - 1)) {
struct rte_flow *flow;
- memmove(&internals->slaves[i], &internals->slaves[i + 1],
- sizeof(internals->slaves[0]) *
- (internals->slave_count - i - 1));
+ memmove(&internals->members[i], &internals->members[i + 1],
+ sizeof(internals->members[0]) *
+ (internals->member_count - i - 1));
TAILQ_FOREACH(flow, &internals->flow_list, next) {
memmove(&flow->flows[i], &flow->flows[i + 1],
sizeof(flow->flows[0]) *
- (internals->slave_count - i - 1));
- flow->flows[internals->slave_count - 1] = NULL;
+ (internals->member_count - i - 1));
+ flow->flows[internals->member_count - 1] = NULL;
}
}
- internals->slave_count--;
+ internals->member_count--;
- /* force reconfiguration of slave interfaces */
- rte_eth_dev_internal_reset(slave_eth_dev);
+ /* force reconfiguration of member interfaces */
+ rte_eth_dev_internal_reset(member_eth_dev);
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
- struct bond_slave_details *slave_details =
- &internals->slaves[internals->slave_count];
+ struct bond_member_details *member_details =
+ &internals->members[internals->member_count];
- slave_details->port_id = slave_eth_dev->data->port_id;
- slave_details->last_link_status = 0;
+ member_details->port_id = member_eth_dev->data->port_id;
+ member_details->last_link_status = 0;
- /* Mark slave devices that don't support interrupts so we can
+ /* Mark member devices that don't support interrupts so we can
* compensate when we start the bond
*/
- if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
- slave_details->link_status_poll_enabled = 1;
- }
+ if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+ member_details->link_status_poll_enabled = 1;
- slave_details->link_status_wait_to_complete = 0;
+ member_details->link_status_wait_to_complete = 0;
/* clean tlb_last_obytes when adding port for bonding device */
- memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+ memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
sizeof(struct rte_ether_addr));
}
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id)
+ uint16_t member_port_id)
{
int i;
- if (internals->active_slave_count < 1)
- internals->current_primary_port = slave_port_id;
+ if (internals->active_member_count < 1)
+ internals->current_primary_port = member_port_id;
else
- /* Search bonded device slave ports for new proposed primary port */
- for (i = 0; i < internals->active_slave_count; i++) {
- if (internals->active_slaves[i] == slave_port_id)
- internals->current_primary_port = slave_port_id;
+ /* Search bonded device member ports for new proposed primary port */
+ for (i = 0; i < internals->active_member_count; i++) {
+ if (internals->active_members[i] == member_port_id)
+ internals->current_primary_port = member_port_id;
}
}
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals;
int i;
- /* slave eth dev will be started by bonded device */
+ /* member eth dev will be started by bonded device */
if (check_for_bonded_ethdev(eth_dev)) {
- RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+ RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
eth_dev->data->port_id);
return -1;
}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- if (internals->slave_count == 0) {
- RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+ if (internals->member_count == 0) {
+ RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
goto out_err;
}
if (internals->user_defined_mac == 0) {
struct rte_ether_addr *new_mac_addr = NULL;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == internals->primary_port)
- new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == internals->primary_port)
+ new_mac_addr = &internals->members[i].persisted_mac_addr;
if (new_mac_addr == NULL)
goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
}
- /* Reconfigure each slave device if starting bonded device */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(eth_dev, slave_ethdev) != 0) {
+ /* Reconfigure each member device if starting bonded device */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to reconfigure slave device (%d)",
+ "bonded port (%d) failed to reconfigure member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- if (slave_start(eth_dev, slave_ethdev) != 0) {
+ if (member_start(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to start slave device (%d)",
+ "bonded port (%d) failed to start member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- /* We will need to poll for link status if any slave doesn't
+ /* We will need to poll for link status if any member doesn't
* support interrupts
*/
- if (internals->slaves[i].link_status_poll_enabled)
+ if (internals->members[i].link_status_poll_enabled)
internals->link_status_polling_enabled = 1;
}
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
if (internals->link_status_polling_enabled) {
rte_eal_alarm_set(
internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor,
+ bond_ethdev_member_link_status_change_monitor,
(void *)&rte_eth_devices[internals->port_id]);
}
- /* Update all slave devices MACs*/
- if (mac_address_slaves_update(eth_dev) != 0)
+ /* Update all member devices MACs*/
+ if (mac_address_members_update(eth_dev) != 0)
goto out_err;
if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
bond_mode_8023ad_stop(eth_dev);
/* Discard all messages to/from mode 4 state machines */
- for (i = 0; i < internals->active_slave_count; i++) {
- port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+ for (i = 0; i < internals->active_member_count; i++) {
+ port = &bond_mode_8023ad_ports[internals->active_members[i]];
RTE_ASSERT(port->rx_ring != NULL);
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
if (internals->mode == BONDING_MODE_TLB ||
internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
- for (i = 0; i < internals->active_slave_count; i++)
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t slave_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t member_id = internals->members[i].port_id;
- internals->slaves[i].last_link_status = 0;
- ret = rte_eth_dev_stop(slave_id);
+ internals->members[i].last_link_status = 0;
+ ret = rte_eth_dev_stop(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_id);
+ member_id);
return ret;
}
- /* active slaves need to be deactivated. */
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) !=
- internals->active_slave_count)
- deactivate_slave(eth_dev, slave_id);
+ /* active members need to be deactivated. */
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) !=
+ internals->active_member_count)
+ deactivate_member(eth_dev, member_id);
}
return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
/* Flush flows in all back-end devices before removing them */
bond_flow_ops.flush(dev, &ferror);
- while (internals->slave_count != skipped) {
- uint16_t port_id = internals->slaves[skipped].port_id;
+ while (internals->member_count != skipped) {
+ uint16_t port_id = internals->members[skipped].port_id;
int ret;
ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
continue;
}
- if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+ if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
RTE_BOND_LOG(ERR,
"Failed to remove port %d from bonded device %s",
port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct bond_slave_details slave;
+ struct bond_member_details member;
int ret;
uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETHER_MAX_JUMBO_FRAME_LEN;
/* Max number of tx/rx queues that the bonded device can support is the
- * minimum values of the bonded slaves, as all slaves must be capable
+ * minimum values of the bonded members, as all members must be capable
* of supporting the same number of tx/rx queues.
*/
- if (internals->slave_count > 0) {
- struct rte_eth_dev_info slave_info;
+ if (internals->member_count > 0) {
+ struct rte_eth_dev_info member_info;
uint16_t idx;
- for (idx = 0; idx < internals->slave_count; idx++) {
- slave = internals->slaves[idx];
- ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+ for (idx = 0; idx < internals->member_count; idx++) {
+ member = internals->members[idx];
+ ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
__func__,
- slave.port_id,
+ member.port_id,
strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < max_nb_rx_queues)
- max_nb_rx_queues = slave_info.max_rx_queues;
+ if (member_info.max_rx_queues < max_nb_rx_queues)
+ max_nb_rx_queues = member_info.max_rx_queues;
- if (slave_info.max_tx_queues < max_nb_tx_queues)
- max_nb_tx_queues = slave_info.max_tx_queues;
+ if (member_info.max_tx_queues < max_nb_tx_queues)
+ max_nb_tx_queues = member_info.max_tx_queues;
}
}
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
uint16_t i;
struct bond_dev_private *internals = dev->data->dev_private;
- /* don't do this while a slave is being added */
+ /* don't do this while a member is being added */
rte_spinlock_lock(&internals->lock);
if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
else
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
if (res == ENOTSUP)
RTE_BOND_LOG(WARNING,
- "Setting VLAN filter on slave port %u not supported.",
+ "Setting VLAN filter on member port %u not supported.",
port_id);
}
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
{
- struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+ struct rte_eth_dev *bonded_ethdev, *member_ethdev;
struct bond_dev_private *internals;
- /* Default value for polling slave found is true as we don't want to
+ /* Default value for polling member found is true as we don't want to
* disable the polling thread if we cannot get the lock */
- int i, polling_slave_found = 1;
+ int i, polling_member_found = 1;
if (cb_arg == NULL)
return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
!internals->link_status_polling_enabled)
return;
- /* If device is currently being configured then don't check slaves link
+ /* If device is currently being configured then don't check members link
* status, wait until next period */
if (rte_spinlock_trylock(&internals->lock)) {
- if (internals->slave_count > 0)
- polling_slave_found = 0;
+ if (internals->member_count > 0)
+ polling_member_found = 0;
- for (i = 0; i < internals->slave_count; i++) {
- if (!internals->slaves[i].link_status_poll_enabled)
+ for (i = 0; i < internals->member_count; i++) {
+ if (!internals->members[i].link_status_poll_enabled)
continue;
- slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
- polling_slave_found = 1;
+ member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+ polling_member_found = 1;
- /* Update slave link status */
- (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
- internals->slaves[i].link_status_wait_to_complete);
+ /* Update member link status */
+ (*member_ethdev->dev_ops->link_update)(member_ethdev,
+ internals->members[i].link_status_wait_to_complete);
/* if link status has changed since last checked then call lsc
* event callback */
- if (slave_ethdev->data->dev_link.link_status !=
- internals->slaves[i].last_link_status) {
- bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+ if (member_ethdev->data->dev_link.link_status !=
+ internals->members[i].last_link_status) {
+ bond_ethdev_lsc_event_callback(internals->members[i].port_id,
RTE_ETH_EVENT_INTR_LSC,
&bonded_ethdev->data->port_id,
NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
rte_spinlock_unlock(&internals->lock);
}
- if (polling_slave_found)
- /* Set alarm to continue monitoring link status of slave ethdev's */
+ if (polling_member_found)
+ /* Set alarm to continue monitoring link status of member ethdev's */
rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor, cb_arg);
+ bond_ethdev_member_link_status_change_monitor, cb_arg);
}
static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
struct bond_dev_private *bond_ctx;
- struct rte_eth_link slave_link;
+ struct rte_eth_link member_link;
bool one_link_update_succeeded;
uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
- bond_ctx->active_slave_count == 0) {
+ bond_ctx->active_member_count == 0) {
ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
case BONDING_MODE_BROADCAST:
/**
* Setting link speed to UINT32_MAX to ensure we pick up the
- * value of the first active slave
+ * value of the first active member
*/
ethdev->data->dev_link.link_speed = UINT32_MAX;
/**
- * link speed is minimum value of all the slaves link speed as
- * packet loss will occur on this slave if transmission at rates
+ * link speed is minimum value of all the members link speed as
+ * packet loss will occur on this member if transmission at rates
* greater than this are attempted
*/
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
return 0;
}
- if (slave_link.link_speed <
+ if (member_link.link_speed <
ethdev->data->dev_link.link_speed)
ethdev->data->dev_link.link_speed =
- slave_link.link_speed;
+ member_link.link_speed;
}
break;
case BONDING_MODE_ACTIVE_BACKUP:
- /* Current primary slave */
- ret = link_update(bond_ctx->current_primary_port, &slave_link);
+ /* Current primary member */
+ ret = link_update(bond_ctx->current_primary_port, &member_link);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
bond_ctx->current_primary_port,
rte_strerror(-ret));
return 0;
}
- ethdev->data->dev_link.link_speed = slave_link.link_speed;
+ ethdev->data->dev_link.link_speed = member_link.link_speed;
break;
case BONDING_MODE_8023AD:
ethdev->data->dev_link.link_autoneg =
- bond_ctx->mode4.slave_link.link_autoneg;
+ bond_ctx->mode4.member_link.link_autoneg;
ethdev->data->dev_link.link_duplex =
- bond_ctx->mode4.slave_link.link_duplex;
+ bond_ctx->mode4.member_link.link_duplex;
/* fall through */
/* to update link speed */
case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
default:
/**
* In theses mode the maximum theoretical link speed is the sum
- * of all the slaves
+ * of all the members
*/
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
continue;
}
one_link_update_succeeded = true;
ethdev->data->dev_link.link_speed +=
- slave_link.link_speed;
+ member_link.link_speed;
}
if (!one_link_update_succeeded) {
- RTE_BOND_LOG(ERR, "All slaves link get failed");
+ RTE_BOND_LOG(ERR, "All members link get failed");
return 0;
}
}
@@ -2602,27 +2606,27 @@ static int
bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_eth_stats slave_stats;
+ struct rte_eth_stats member_stats;
int i, j;
- for (i = 0; i < internals->slave_count; i++) {
- rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+ for (i = 0; i < internals->member_count; i++) {
+ rte_eth_stats_get(internals->members[i].port_id, &member_stats);
- stats->ipackets += slave_stats.ipackets;
- stats->opackets += slave_stats.opackets;
- stats->ibytes += slave_stats.ibytes;
- stats->obytes += slave_stats.obytes;
- stats->imissed += slave_stats.imissed;
- stats->ierrors += slave_stats.ierrors;
- stats->oerrors += slave_stats.oerrors;
- stats->rx_nombuf += slave_stats.rx_nombuf;
+ stats->ipackets += member_stats.ipackets;
+ stats->opackets += member_stats.opackets;
+ stats->ibytes += member_stats.ibytes;
+ stats->obytes += member_stats.obytes;
+ stats->imissed += member_stats.imissed;
+ stats->ierrors += member_stats.ierrors;
+ stats->oerrors += member_stats.oerrors;
+ stats->rx_nombuf += member_stats.rx_nombuf;
for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
- stats->q_ipackets[j] += slave_stats.q_ipackets[j];
- stats->q_opackets[j] += slave_stats.q_opackets[j];
- stats->q_ibytes[j] += slave_stats.q_ibytes[j];
- stats->q_obytes[j] += slave_stats.q_obytes[j];
- stats->q_errors[j] += slave_stats.q_errors[j];
+ stats->q_ipackets[j] += member_stats.q_ipackets[j];
+ stats->q_opackets[j] += member_stats.q_opackets[j];
+ stats->q_ibytes[j] += member_stats.q_ibytes[j];
+ stats->q_obytes[j] += member_stats.q_obytes[j];
+ stats->q_errors[j] += member_stats.q_errors[j];
}
}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
int err;
int ret;
- for (i = 0, err = 0; i < internals->slave_count; i++) {
- ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+ for (i = 0, err = 0; i < internals->member_count; i++) {
+ ret = rte_eth_stats_reset(internals->members[i].port_id);
if (ret != 0)
err = ret;
}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_promiscuous_enable(port_id);
if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
"Failed to enable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
BOND_8023AD_FORCED_PROMISC) {
- slave_ok++;
+ member_ok++;
continue;
}
ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
"Failed to disable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As promiscuous mode is propagated to all slaves for these
+ /* As promiscuous mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As promiscuous mode is propagated only to primary slave
+ /* As promiscuous mode is propagated only to primary member
* for these mode. When active/standby switchover, promiscuous
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_allmulticast_enable(port_id);
if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
"Failed to enable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
"Failed to disable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As allmulticast mode is propagated to all slaves for these
+ /* As allmulticast mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As allmulticast mode is propagated only to primary slave
+ /* As allmulticast mode is propagated only to primary member
* for these mode. When active/standby switchover, allmulticast
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
int ret;
uint8_t lsc_flag = 0;
- int valid_slave = 0;
- uint16_t active_pos, slave_idx;
+ int valid_member = 0;
+ uint16_t active_pos, member_idx;
uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (!bonded_eth_dev->data->dev_started)
return rc;
- /* verify that port_id is a valid slave of bonded port */
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == port_id) {
- valid_slave = 1;
- slave_idx = i;
+ /* verify that port_id is a valid member of bonded port */
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == port_id) {
+ valid_member = 1;
+ member_idx = i;
break;
}
}
- if (!valid_slave)
+ if (!valid_member)
return rc;
/* Synchronize lsc callback parallel calls either by real link event
- * from the slaves PMDs or by the bonding PMD itself.
+ * from the members PMDs or by the bonding PMD itself.
*/
rte_spinlock_lock(&internals->lsc_lock);
/* Search for port in active port list */
- active_pos = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, port_id);
+ active_pos = find_member_by_id(internals->active_members,
+ internals->active_member_count, port_id);
ret = rte_eth_link_get_nowait(port_id, &link);
if (ret < 0)
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
if (ret == 0 && link.link_status) {
- if (active_pos < internals->active_slave_count)
+ if (active_pos < internals->active_member_count)
goto link_update;
/* check link state properties if bonded link is up*/
if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
- "for slave %d in bonding mode %d",
+ "for member %d in bonding mode %d",
port_id, internals->mode);
} else {
- /* inherit slave link properties */
+ /* inherit member link properties */
link_properties_set(bonded_eth_dev, &link);
}
- /* If no active slave ports then set this port to be
+ /* If no active member ports then set this port to be
* the primary port.
*/
- if (internals->active_slave_count < 1) {
- /* If first active slave, then change link status */
+ if (internals->active_member_count < 1) {
+ /* If first active member, then change link status */
bonded_eth_dev->data->dev_link.link_status =
RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
- activate_slave(bonded_eth_dev, port_id);
+ activate_member(bonded_eth_dev, port_id);
/* If the user has defined the primary port then default to
* using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
internals->primary_port == port_id)
bond_ethdev_primary_set(internals, port_id);
} else {
- if (active_pos == internals->active_slave_count)
+ if (active_pos == internals->active_member_count)
goto link_update;
- /* Remove from active slave list */
- deactivate_slave(bonded_eth_dev, port_id);
+ /* Remove from active member list */
+ deactivate_member(bonded_eth_dev, port_id);
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
lsc_flag = 1;
- /* Update primary id, take first active slave from list or if none
+ /* Update primary id, take first active member from list or if none
* available set to -1 */
if (port_id == internals->current_primary_port) {
- if (internals->active_slave_count > 0)
+ if (internals->active_member_count > 0)
bond_ethdev_primary_set(internals,
- internals->active_slaves[0]);
+ internals->active_members[0]);
else
internals->current_primary_port = internals->primary_port;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
link_update:
/**
* Update bonded device link properties after any change to active
- * slaves
+ * members
*/
bond_ethdev_link_update(bonded_eth_dev, 0);
- internals->slaves[slave_idx].last_link_status = link.link_status;
+ internals->members[member_idx].last_link_status = link.link_status;
if (lsc_flag) {
/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
{
unsigned i, j;
int result = 0;
- int slave_reta_size;
+ int member_reta_size;
unsigned reta_count;
struct bond_dev_private *internals = dev->data->dev_private;
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
sizeof(internals->reta_conf[0]) * reta_count);
- /* Propagate RETA over slaves */
- for (i = 0; i < internals->slave_count; i++) {
- slave_reta_size = internals->slaves[i].reta_size;
- result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
- &internals->reta_conf[0], slave_reta_size);
+ /* Propagate RETA over members */
+ for (i = 0; i < internals->member_count; i++) {
+ member_reta_size = internals->members[i].reta_size;
+ result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+ &internals->reta_conf[0], member_reta_size);
if (result < 0)
return result;
}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
bond_rss_conf.rss_key_len = internals->rss_key_len;
}
- for (i = 0; i < internals->slave_count; i++) {
- result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
&bond_rss_conf);
if (result < 0)
return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
static int
bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mtu_set == NULL) {
rte_spinlock_unlock(&internals->lock);
return -ENOTSUP;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
if (ret < 0) {
rte_spinlock_unlock(&internals->lock);
return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
struct rte_ether_addr *mac_addr,
__rte_unused uint32_t index, uint32_t vmdq)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
- *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+ *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
ret = -ENOTSUP;
goto end;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
mac_addr, vmdq);
if (ret < 0) {
/* rollback */
for (i--; i >= 0; i--)
rte_eth_dev_mac_addr_remove(
- internals->slaves[i].port_id, mac_addr);
+ internals->members[i].port_id, mac_addr);
goto end;
}
}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
static void
bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
goto end;
}
struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
- for (i = 0; i < internals->slave_count; i++)
- rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++)
+ rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
mac_addr);
end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
fprintf(f, "\n");
}
- if (internals->slave_count > 0) {
- fprintf(f, "\tSlaves (%u): [", internals->slave_count);
- for (i = 0; i < internals->slave_count - 1; i++)
- fprintf(f, "%u ", internals->slaves[i].port_id);
+ if (internals->member_count > 0) {
+ fprintf(f, "\tMembers (%u): [", internals->member_count);
+ for (i = 0; i < internals->member_count - 1; i++)
+ fprintf(f, "%u ", internals->members[i].port_id);
- fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+ fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
} else {
- fprintf(f, "\tSlaves: []\n");
+ fprintf(f, "\tMembers: []\n");
}
- if (internals->active_slave_count > 0) {
- fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
- for (i = 0; i < internals->active_slave_count - 1; i++)
- fprintf(f, "%u ", internals->active_slaves[i]);
+ if (internals->active_member_count > 0) {
+ fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+ for (i = 0; i < internals->active_member_count - 1; i++)
+ fprintf(f, "%u ", internals->active_members[i]);
- fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+ fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
} else {
- fprintf(f, "\tActive Slaves: []\n");
+ fprintf(f, "\tActive Members: []\n");
}
if (internals->user_defined_primary_port)
fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
- if (internals->slave_count > 0)
+ if (internals->member_count > 0)
fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
}
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
}
static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
{
char a_state[256] = { 0 };
char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
static void
dump_lacp(uint16_t port_id, FILE *f)
{
- struct rte_eth_bond_8023ad_slave_info slave_info;
+ struct rte_eth_bond_8023ad_member_info member_info;
struct rte_eth_bond_8023ad_conf port_conf;
- uint16_t slaves[RTE_MAX_ETHPORTS];
- int num_active_slaves;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ int num_active_members;
int i, ret;
fprintf(f, " - Lacp info:\n");
- num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+ num_active_members = rte_eth_bond_active_members_get(port_id, members,
RTE_MAX_ETHPORTS);
- if (num_active_slaves < 0) {
- fprintf(f, "\tFailed to get active slave list for port %u\n",
+ if (num_active_members < 0) {
+ fprintf(f, "\tFailed to get active member list for port %u\n",
port_id);
return;
}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
}
dump_lacp_conf(&port_conf, f);
- for (i = 0; i < num_active_slaves; i++) {
- ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
- &slave_info);
+ for (i = 0; i < num_active_members; i++) {
+ ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+ &member_info);
if (ret) {
- fprintf(f, "\tGet slave device %u 8023ad info failed\n",
- slaves[i]);
+ fprintf(f, "\tGet member device %u 8023ad info failed\n",
+ members[i]);
return;
}
- fprintf(f, "\tSlave Port: %u\n", slaves[i]);
- dump_lacp_slave(&slave_info, f);
+ fprintf(f, "\tMember Port: %u\n", members[i]);
+ dump_lacp_member(&member_info, f);
}
}
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->link_down_delay_ms = 0;
internals->link_up_delay_ms = 0;
- internals->slave_count = 0;
- internals->active_slave_count = 0;
+ internals->member_count = 0;
+ internals->active_member_count = 0;
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->rx_desc_lim.nb_align = 1;
internals->tx_desc_lim.nb_align = 1;
- memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
- memset(internals->slaves, 0, sizeof(internals->slaves));
+ memset(internals->active_members, 0, sizeof(internals->active_members));
+ memset(internals->members, 0, sizeof(internals->members));
TAILQ_INIT(&internals->flow_list);
internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
/* Parse link bonding mode */
if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
- &bond_ethdev_parse_slave_mode_kvarg,
+ &bond_ethdev_parse_member_mode_kvarg,
&bonding_mode) != 0) {
RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
RTE_ASSERT(eth_dev->device == &dev->device);
internals = eth_dev->data->dev_private;
- if (internals->slave_count != 0)
+ if (internals->member_count != 0)
return -EBUSY;
if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
return ret;
}
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
* have been allocated */
static int
bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
if ((link_speeds &
(internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
- RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+ RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
return -EINVAL;
}
/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- /* Parse/add slave ports to bonded device */
- if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
- struct bond_ethdev_slave_ports slave_ports;
+ /* Parse/add member ports to bonded device */
+ if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+ struct bond_ethdev_member_ports member_ports;
unsigned i;
- memset(&slave_ports, 0, sizeof(slave_ports));
+ memset(&member_ports, 0, sizeof(member_ports));
- if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
- &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+ if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+ &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to parse slave ports for bonded device %s",
+ "Failed to parse member ports for bonded device %s",
name);
return -1;
}
- for (i = 0; i < slave_ports.slave_count; i++) {
- if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+ for (i = 0; i < member_ports.member_count; i++) {
+ if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to add port %d as slave to bonded device %s",
- slave_ports.slaves[i], name);
+ "Failed to add port %d as member to bonded device %s",
+ member_ports.members[i], name);
}
}
} else {
- RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+ RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
return -1;
}
- /* Parse/set primary slave port id*/
- arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+ /* Parse/set primary member port id*/
+ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
if (arg_count == 1) {
- uint16_t primary_slave_port_id;
+ uint16_t primary_member_port_id;
if (rte_kvargs_process(kvlist,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
- &bond_ethdev_parse_primary_slave_port_id_kvarg,
- &primary_slave_port_id) < 0) {
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
+ &bond_ethdev_parse_primary_member_port_id_kvarg,
+ &primary_member_port_id) < 0) {
RTE_BOND_LOG(INFO,
- "Invalid primary slave port id specified for bonded device %s",
+ "Invalid primary member port id specified for bonded device %s",
name);
return -1;
}
/* Set balance mode transmit policy*/
- if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+ if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
!= 0) {
RTE_BOND_LOG(ERR,
- "Failed to set primary slave port %d on bonded device %s",
- primary_slave_port_id, name);
+ "Failed to set primary member port %d on bonded device %s",
+ primary_member_port_id, name);
return -1;
}
} else if (arg_count > 1) {
RTE_BOND_LOG(INFO,
- "Primary slave can be specified only once for bonded device %s",
+ "Primary member can be specified only once for bonded device %s",
name);
return -1;
}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
return -1;
}
- /* configure slaves so we can pass mtu setting */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(dev, slave_ethdev) != 0) {
+ /* configure members so we can pass mtu setting */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to configure slave device (%d)",
+ "bonded port (%d) failed to configure member device (%d)",
dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
- "slave=<ifc> "
+ "member=<ifc> "
"primary=<ifc> "
"mode=[0-6] "
"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
rte_eth_bond_8023ad_ext_distrib_get;
rte_eth_bond_8023ad_ext_slowtx;
rte_eth_bond_8023ad_setup;
- rte_eth_bond_8023ad_slave_info;
- rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
rte_eth_bond_mode_set;
rte_eth_bond_primary_get;
rte_eth_bond_primary_set;
- rte_eth_bond_slave_add;
- rte_eth_bond_slave_remove;
- rte_eth_bond_slaves_get;
rte_eth_bond_xmit_policy_get;
rte_eth_bond_xmit_policy_set;
local: *;
};
+
+EXPERIMENTAL {
+ # added in 23.07
+ global:
+ rte_eth_bond_8023ad_member_info;
+ rte_eth_bond_active_members_get;
+ rte_eth_bond_member_add;
+ rte_eth_bond_member_remove;
+ rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
RTE_ETHER_ADDR_BYTES(&addr))
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
static uint16_t BOND_PORT = 0xffff;
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
};
static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
{
int retval;
uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
"failed (res=%d)\n", BOND_PORT, retval);
- for (i = 0; i < slaves_count; i++) {
- if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
- rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
- slaves[i], BOND_PORT);
+ for (i = 0; i < members_count; i++) {
+ if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+ rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+ members[i], BOND_PORT);
}
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
if (retval < 0)
rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
- printf("Waiting for slaves to become active...");
+ printf("Waiting for members to become active...");
while (wait_counter) {
- uint16_t act_slaves[16] = {0};
- if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
- slaves_count) {
+ uint16_t act_members[16] = {0};
+ if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+ members_count) {
printf("\n");
break;
}
sleep(1);
printf("...");
if (--wait_counter == 0)
- rte_exit(-1, "\nFailed to activate slaves\n");
+ rte_exit(-1, "\nFailed to activate members\n");
}
retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
"send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
- "show - shows some bond info: ex. active slaves etc.\n"
+ "show - shows some bond info: ex. active members etc.\n"
"help - prints help.\n"
"quit - terminate all threads and quit.\n"
);
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
struct cmdline *cl,
__rte_unused void *data)
{
- uint16_t slaves[16] = {0};
+ uint16_t members[16] = {0};
uint8_t len = 16;
struct rte_ether_addr addr;
uint16_t i;
int ret;
- for (i = 0; i < slaves_count; i++) {
+ for (i = 0; i < members_count; i++) {
ret = rte_eth_macaddr_get(i, &addr);
if (ret != 0) {
cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
rte_spinlock_lock(&global_flag_stru_p->lock);
cmdline_printf(cl,
- "Active_slaves:%d "
+ "Active_members:%d "
"packets received:Tot:%d Arp:%d IPv4:%d\n",
- rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+ rte_eth_bond_active_members_get(BOND_PORT, members, len),
global_flag_stru_p->port_packets[0],
global_flag_stru_p->port_packets[1],
global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
/* initialize all ports */
- slaves_count = nb_ports;
+ members_count = nb_ports;
RTE_ETH_FOREACH_DEV(i) {
- slave_port_init(i, mbuf_pool);
- slaves[i] = i;
+ member_port_init(i, mbuf_pool);
+ members[i] = i;
}
bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0)
/** Device supports link state interrupt */
#define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE \
+ do { \
+ RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) \
+ RTE_ETH_DEV_BONDED_MEMBER \
+ } while (0)
/** Device supports device removal interrupt */
#define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3)
/** Device is port representor */
--
2.39.1
^ permalink raw reply [relevance 1%]
* [PATCH v2] net/bonding: replace master/slave to main/member
2023-05-17 14:52 1% ` Stephen Hemminger
@ 2023-05-18 6:32 1% ` Chaoyong He
2023-05-18 7:01 1% ` [PATCH v3] " Chaoyong He
1 sibling, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18 6:32 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw
From: Long Wu <long.wu@corigine.com>
This patch replaces the terms 'master/slave' with the more appropriate
terms 'main/member' in the bonding PMD as well as in its docs and
examples. The test app and testpmd were also modified to use the new
wording.
The bonding PMD's public API was renamed to match the new wording:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.
Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.
Mark the old public APIs as deprecated and remove them from the ABI.
Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
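For applications following this rename, a minimal migration sketch (not part
of the patch) is shown below. It assumes the member-named functions keep the
signatures of the slave-named ones they replace, as listed above; the device
name, bonding mode and member port ids are placeholders.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: create a bonded device and attach two member ports
 * with the renamed API; assumes the function signatures are unchanged. */
static int
bond_member_setup_sketch(uint16_t member0, uint16_t member1)
{
	uint16_t active[RTE_MAX_ETHPORTS];
	int bond_port, n;

	bond_port = rte_eth_bond_create("net_bonding0",
			BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
	if (bond_port < 0)
		return bond_port;

	/* formerly rte_eth_bond_slave_add() */
	if (rte_eth_bond_member_add(bond_port, member0) != 0 ||
	    rte_eth_bond_member_add(bond_port, member1) != 0)
		return -1;

	/* formerly rte_eth_bond_active_slaves_get() */
	n = rte_eth_bond_active_members_get(bond_port, active,
			RTE_MAX_ETHPORTS);
	return n < 0 ? n : 0;
}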
app/test-pmd/testpmd.c | 112 +-
app/test-pmd/testpmd.h | 8 +-
app/test/test_link_bonding.c | 2792 +++++++++--------
app/test/test_link_bonding_mode4.c | 588 ++--
| 166 +-
doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
doc/guides/nics/bnxt.rst | 4 +-
doc/guides/prog_guide/img/bond-mode-1.svg | 2 +-
.../link_bonding_poll_mode_drv_lib.rst | 222 +-
drivers/net/bonding/bonding_testpmd.c | 178 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
drivers/net/bonding/eth_bond_private.h | 108 +-
drivers/net/bonding/rte_eth_bond.h | 126 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 372 +--
drivers/net/bonding/rte_eth_bond_8023ad.h | 75 +-
drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
drivers/net/bonding/rte_eth_bond_api.c | 474 +--
drivers/net/bonding/rte_eth_bond_args.c | 32 +-
drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 1384 ++++----
drivers/net/bonding/version.map | 15 +-
examples/bond/main.c | 40 +-
lib/ethdev/rte_ethdev.h | 9 +-
24 files changed, 3505 insertions(+), 3384 deletions(-)
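The testpmd change below switches port_is_bonding_member() to the renamed
device flag. A rough application-side sketch of the same check, assuming only
the flag rename, could look like:

#include <rte_ethdev.h>

/* Return non-zero when the port reports the (renamed) bonding member flag;
 * mirrors the testpmd hunk further down. */
static int
port_reports_bonding_member(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) != 0;
}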
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
{
#ifdef RTE_NET_BOND
- portid_t slave_pids[RTE_MAX_ETHPORTS];
+ portid_t member_pids[RTE_MAX_ETHPORTS];
struct rte_port *port;
- int num_slaves;
- portid_t slave_pid;
+ int num_members;
+ portid_t member_pid;
int i;
- num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+ num_members = rte_eth_bond_members_get(bond_pid, member_pids,
RTE_MAX_ETHPORTS);
- if (num_slaves < 0) {
- fprintf(stderr, "Failed to get slave list for port = %u\n",
+ if (num_members < 0) {
+ fprintf(stderr, "Failed to get member list for port = %u\n",
bond_pid);
- return num_slaves;
+ return num_members;
}
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- port = &ports[slave_pid];
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ port = &ports[member_pid];
port->port_status =
is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Starting a bonded port also starts all slaves under the bonded
+ * Starting a bonded port also starts all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, false);
+ return change_bonding_member_port_status(port_id, false);
}
return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Stopping a bonded port also stops all slaves under the bonded
+ * Stopping a bonded port also stops all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, true);
+ return change_bonding_member_port_status(port_id, true);
}
return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
port = &ports[pi];
/* Check if there is a port which is not started */
if ((port->port_status != RTE_PORT_STARTED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
}
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
struct rte_port *port = &ports[port_id];
if ((port->port_status != RTE_PORT_STOPPED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
return 1;
}
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
/*
* Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when add a new member device. So adding a member device need
* to update the port configurations of bonding device.
*/
static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
continue;
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
}
static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
{
struct rte_port *port;
- portid_t slave_pid;
+ portid_t member_pid;
uint16_t i;
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- if (port_is_started(slave_pid) == 1) {
- if (rte_eth_dev_stop(slave_pid) != 0)
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ if (port_is_started(member_pid) == 1) {
+ if (rte_eth_dev_stop(member_pid) != 0)
fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
- slave_pid);
+ member_pid);
- port = &ports[slave_pid];
+ port = &ports[member_pid];
port->port_status = RTE_PORT_STOPPED;
}
- clear_port_slave_flag(slave_pid);
+ clear_port_member_flag(member_pid);
- /* Close slave device when testpmd quit or is killed. */
+ /* Close member device when testpmd quit or is killed. */
if (cl_quit == 1 || f_quit == 1)
- rte_eth_dev_close(slave_pid);
+ rte_eth_dev_close(member_pid);
}
}
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
{
portid_t pi;
struct rte_port *port;
- portid_t slave_pids[RTE_MAX_ETHPORTS];
- int num_slaves = 0;
+ portid_t member_pids[RTE_MAX_ETHPORTS];
+ int num_members = 0;
if (port_id_is_invalid(pid, ENABLED_WARN))
return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
flush_port_owned_resources(pi);
#ifdef RTE_NET_BOND
if (port->bond_flag == 1)
- num_slaves = rte_eth_bond_slaves_get(pi,
- slave_pids, RTE_MAX_ETHPORTS);
+ num_members = rte_eth_bond_members_get(pi,
+ member_pids, RTE_MAX_ETHPORTS);
#endif
rte_eth_dev_close(pi);
/*
- * If this port is bonded device, all slaves under the
+ * If this port is bonded device, all members under the
* device need to be removed or closed.
*/
- if (port->bond_flag == 1 && num_slaves > 0)
- clear_bonding_slave_device(slave_pids,
- num_slaves);
+ if (port->bond_flag == 1 && num_members > 0)
+ clear_bonding_member_device(member_pids,
+ num_members);
}
free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
}
}
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 1;
+ port = &ports[member_pid];
+ port->member_flag = 1;
}
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 0;
+ port = &ports[member_pid];
+ port->member_flag = 0;
}
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
{
struct rte_port *port;
struct rte_eth_dev_info dev_info;
int ret;
- port = &ports[slave_pid];
- ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+ port = &ports[member_pid];
+ ret = eth_dev_info_get_print_err(member_pid, &dev_info);
if (ret != 0) {
TESTPMD_LOG(ERR,
"Failed to get device info for port id %d,"
- "cannot determine if the port is a bonded slave",
- slave_pid);
+ "cannot determine if the port is a bonded member",
+ member_pid);
return 0;
}
- if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+ if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
return 1;
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
queueid_t queue_nb; /**< nb. of queues for flow rules */
uint32_t queue_sz; /**< size of a queue for flow rules */
- uint8_t slave_flag : 1, /**< bonding slave port */
+ uint8_t member_flag : 1, /**< bonding member port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before forward */
update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
void dev_set_link_up(portid_t pid);
void dev_set_link_down(portid_t pid);
void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
#define INVALID_BONDING_MODE (-1)
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
struct link_bonding_unittest_params {
int16_t bonded_port_id;
- int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
- uint16_t bonded_slave_count;
+ int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+ uint16_t bonded_member_count;
uint8_t bonding_mode;
uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
struct rte_mempool *mbuf_pool;
- struct rte_ether_addr *default_slave_mac;
+ struct rte_ether_addr *default_member_mac;
struct rte_ether_addr *default_bonded_mac;
/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
static struct link_bonding_unittest_params default_params = {
.bonded_port_id = -1,
- .slave_port_ids = { -1 },
- .bonded_slave_count = 0,
+ .member_port_ids = { -1 },
+ .bonded_member_count = 0,
.bonding_mode = BONDING_MODE_ROUND_ROBIN,
.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
.mbuf_pool = NULL,
- .default_slave_mac = (struct rte_ether_addr *)slave_mac,
+ .default_member_mac = (struct rte_ether_addr *)member_mac,
.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
return 0;
}
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
test_setup(void)
{
int i, nb_mbuf_per_pool;
- struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+ struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
/* Allocate ethernet packet header with space for VLAN header */
if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
}
/* Create / Initialize virtual eth devs */
- if (!slaves_initialized) {
+ if (!members_initialized) {
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
@@ -243,16 +243,16 @@ test_setup(void)
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
- test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+ test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+ TEST_ASSERT(test_params->member_port_ids[i] >= 0,
"Failed to create virtual virtual ethdev %s", pmd_name);
TEST_ASSERT_SUCCESS(configure_ethdev(
- test_params->slave_port_ids[i], 1, 0),
+ test_params->member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s", pmd_name);
}
- slaves_initialized = 1;
+ members_initialized = 1;
}
return 0;
@@ -261,9 +261,9 @@ test_setup(void)
static int
test_create_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
/* Don't try to recreate bonded device if re-running test suite*/
if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
test_params->bonded_port_id, test_params->bonding_mode);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of members %d is great than expected %d.",
+ current_member_count, 0);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members %d is great than expected %d.",
+ current_member_count, 0);
return 0;
}
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
}
static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave (%d) to bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member (%d) to bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
- "Number of slaves (%d) is greater than expected (%d).",
- current_slave_count, test_params->bonded_slave_count + 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+ "Number of members (%d) is greater than expected (%d).",
+ current_member_count, test_params->bonded_member_count + 1);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d).\n",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not as expected (%d).\n",
+ current_member_count, 0);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
return 0;
}
static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
struct rte_ether_addr read_mac_addr, *mac_addr;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count-1]),
- "Failed to remove slave %d from bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count-1]),
+ "Failed to remove member %d from bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
- "Number of slaves (%d) is great than expected (%d).\n",
- current_slave_count, test_params->bonded_slave_count - 1);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+ "Number of members (%d) is great than expected (%d).\n",
+ current_member_count, test_params->bonded_member_count - 1);
- mac_addr = (struct rte_ether_addr *)slave_mac;
+ mac_addr = (struct rte_ether_addr *)member_mac;
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
- test_params->bonded_slave_count-1;
+ test_params->bonded_member_count-1;
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ test_params->member_port_ids[test_params->bonded_member_count-1],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
0);
- test_params->bonded_slave_count--;
+ test_params->bonded_member_count--;
return 0;
}
static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
- test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+ test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
static int bonded_id = 2;
static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
{
- int port_id, current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int port_id, current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- test_add_slave_to_bonded_device();
+ test_add_member_to_bonded_device();
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 1,
- "Number of slaves (%d) is not that expected (%d).",
- current_slave_count, 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 1,
+ "Number of members (%d) is not that expected (%d).",
+ current_member_count, 1);
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
rte_socket_id());
TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
- TEST_ASSERT(rte_eth_bond_slave_add(port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+ TEST_ASSERT(rte_eth_bond_member_add(port_id,
+ test_params->member_port_ids[test_params->bonded_member_count - 1])
< 0,
- "Added slave (%d) to bonded port (%d) unexpectedly.",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ "Added member (%d) to bonded port (%d) unexpectedly.",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
port_id);
- return test_remove_slave_from_bonded_device();
+ return test_remove_member_from_bonded_device();
}
static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
/* Invalid port id */
- current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+ current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- /* Invalid slaves pointer */
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+ /* Invalid members pointer */
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
+ current_member_count = rte_eth_bond_active_members_get(
test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
/* non bonded device*/
- current_slave_count = rte_eth_bond_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
{
int i;
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
{
int i;
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
}
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
{
struct rte_eth_link link_status;
- int current_slave_count, current_bonding_mode, primary_port;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count, current_bonding_mode, primary_port;
+ uint16_t members[RTE_MAX_ETHPORTS];
int retval;
- /* Add slave to bonded device*/
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ /* Add member to bonded device*/
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- /* Change link status of virtual pmd so it will be added to the active
- * slave list of the bonded device*/
+ /*
+ * Change link status of virtual pmd so it will be added to the active
+ * member list of the bonded device.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+ test_params->member_port_ids[test_params->bonded_member_count-1], 1);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
current_bonding_mode, test_params->bonding_mode);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port (%d) is not expected value (%d).",
- primary_port, test_params->slave_port_ids[0]);
+ primary_port, test_params->member_port_ids[0]);
retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
static int
test_stop_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_eth_link link_status;
int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
"Bonded port (%d) status (%d) is not expected value (%d).",
test_params->bonded_port_id, link_status.link_status, 0);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, 0);
return 0;
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- /* Clean up and remove slaves from bonded device */
+ /* Clean up and remove members from bonded device */
free_virtualpmd_tx_queue();
- while (test_params->bonded_slave_count > 0)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "test_remove_slave_from_bonded_device failed");
+ while (test_params->bonded_member_count > 0)
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "test_remove_member_from_bonded_device failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+ TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
bonding_modes[i]),
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+ bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
TEST_ASSERT(bonding_mode < 0,
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
{
int i, j, retval;
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr *expected_mac_addr;
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
/* Invalid port ID */
TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
- test_params->slave_port_ids[i]),
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
- /* Set slave as primary
- * Verify slave it is now primary slave
- * Verify that MAC address of bonded device is that of primary slave
- * Verify that MAC address of all bonded slaves are that of primary slave
+ /* Set member as primary
+ * Verify member is now the primary member
+ * Verify that MAC address of bonded device is that of primary member
+ * Verify that MAC addresses of all bonded members match that of primary member
*/
for (i = 0; i < 4; i++) {
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(retval >= 0,
"Failed to read primary port from bonded port (%d)\n",
test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+ TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
"Bonded port (%d) primary port (%d) not expected value (%d)\n",
test_params->bonded_port_id, retval,
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
/* stop/start bonded eth dev to apply new MAC */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
"Failed to start bonded port %d",
test_params->bonded_port_id);
- expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+ expected_mac_addr = (struct rte_ether_addr *)&member_mac;
expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Check primary slave MAC */
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Check primary member MAC */
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (j = 0; j < 4; j++) {
if (j != i) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+ test_params->member_port_ids[j],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[j]);
+ test_params->member_port_ids[j]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary "
+ "member port mac address not set to that of primary "
"port");
}
}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
"read primary port from expectedly");
- /* Test with slave port */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+ /* Test with member port */
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
"read primary port from expectedly\n");
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to stop and remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to stop and remove members from bonded device");
- /* No slaves */
+ /* No members */
TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
"read primary port from expectedly\n");
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
/* Non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
- test_params->slave_port_ids[0], mac_addr),
+ test_params->member_port_ids[0], mac_addr),
"Expected call to failed as invalid port specified.");
/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
"Failed to set MAC address on bonded port (%d)",
test_params->bonded_port_id);
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++) {
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.\n");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++) {
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.\n");
}
/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (i = 0; i < 4; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary port");
+ "member port mac address not set to that of primary port");
}
/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
test_params->bonded_port_id);
TEST_ASSERT_FAIL(
- rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+ rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
"Reset MAC address on bonded port (%d) unexpectedly",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[0]);
- /* test resetting mac address on bonded device with no slaves */
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to remove slaves and stop bonded device");
+ /* test resetting mac address on bonded device with no members */
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to remove members and stop bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
return 0;
}
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
static int
test_set_bonded_port_initialization_mac_assignment(void)
{
- int i, slave_count;
+ int i, member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
static int bonded_port_id = -1;
- static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+ static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
- struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+ struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
/* Initialize default values for MAC addresses */
- memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
- memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+ memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+ memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
/*
- * 1. a - Create / configure bonded / slave ethdevs
+ * 1. a - Create / configure bonded / member ethdevs
*/
if (bonded_port_id == -1) {
bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
"Failed to configure bonded ethdev");
}
- if (!mac_slaves_initialized) {
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ if (!mac_members_initialized) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
i + 100;
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
- "eth_slave_%d", i);
+ "eth_member_%d", i);
- slave_port_ids[i] = virtual_ethdev_create(pmd_name,
- &slave_mac_addr, rte_socket_id(), 1);
+ member_port_ids[i] = virtual_ethdev_create(pmd_name,
+ &member_mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(slave_port_ids[i] >= 0,
- "Failed to create slave ethdev %s",
+ TEST_ASSERT(member_port_ids[i] >= 0,
+ "Failed to create member ethdev %s",
pmd_name);
- TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+ TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s",
pmd_name);
}
- mac_slaves_initialized = 1;
+ mac_members_initialized = 1;
}
/*
- * 2. Add slave ethdevs to bonded device
+ * 2. Add member ethdevs to bonded device
*/
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
- slave_port_ids[i]),
- "Failed to add slave (%d) to bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to add member (%d) to bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
- "Number of slaves (%d) is not as expected (%d)",
- slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+ "Number of members (%d) is not as expected (%d)",
+ member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
/* 4. a - Start bonded ethdev
- * b - Enable slave devices
- * c - Verify bonded/slaves ethdev MAC addresses
+ * b - Enable member devices
+ * c - Verify bonded/members ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
"Failed to start bonded pmd eth device %d.",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- slave_port_ids[i], 1);
+ member_port_ids[i], 1);
}
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
+ member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 7. a - Change primary port
* b - Stop / Start bonded port
- * d - Verify slave ethdev MAC addresses
+ * d - Verify member ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
- slave_port_ids[2]),
+ member_port_ids[2]),
"failed to set primary port on bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
+ member_port_ids[2]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 6. a - Stop bonded ethdev
- * b - remove slave ethdevs
- * c - Verify slave ethdevs MACs are restored
+ * b - remove member ethdevs
+ * c - Verify member ethdevs MACs are restored
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
"Failed to stop bonded port %u",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
- slave_port_ids[i]),
- "Failed to remove slave %d from bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to remove member %d from bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of slaves (%d) is great than expected (%d).",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of members (%d) is great than expected (%d).",
+ member_count, 0);
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
return 0;
}
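
For reference, the member-management calls exercised above boil down to the following sketch under the renamed API from this series; the device name, bonding mode and port ids are illustrative only:

#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Sketch only: create a bonding port, attach the given members and return
 * the bonding port id, or a negative value on any failure. */
static int
create_bonding_port_sketch(const uint16_t *member_ports, uint16_t n_members)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int bonding_port;
	uint16_t i;

	bonding_port = rte_eth_bond_create("net_bonding_sketch",
			BONDING_MODE_ROUND_ROBIN, rte_socket_id());
	if (bonding_port < 0)
		return bonding_port;

	for (i = 0; i < n_members; i++)
		if (rte_eth_bond_member_add(bonding_port, member_ports[i]) != 0)
			return -1;

	/* rte_eth_bond_members_get() fills members[] and returns the count. */
	if (rte_eth_bond_members_get(bonding_port, members,
			RTE_MAX_ETHPORTS) != (int)n_members)
		return -1;

	return bonding_port;
}
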
static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
- uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+ uint16_t number_of_members, uint8_t enable_member)
{
/* Configure bonded device */
TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
- "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
- number_of_slaves);
-
- /* Add slaves to bonded device */
- while (number_of_slaves > test_params->bonded_slave_count)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave (%d to bonding port (%d).",
- test_params->bonded_slave_count - 1,
+ "with (%d) members.", test_params->bonded_port_id, bonding_mode,
+ number_of_members);
+
+ /* Add members to bonded device */
+ while (number_of_members > test_params->bonded_member_count)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member (%d to bonding port (%d).",
+ test_params->bonded_member_count - 1,
test_params->bonded_port_id);
/* Set link bonding mode */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- if (enable_slave)
- enable_bonded_slaves();
+ if (enable_member)
+ enable_bonded_members();
return 0;
}
static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
{
int i;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
- "Failed to add slaves to bonded device");
+ "Failed to add members to bonded device");
- /* Enabled slave devices */
- for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+ /* Enabled member devices */
+ for (i = 0; i < test_params->bonded_member_count + 1; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave to bonded port.\n");
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member to bonded port.\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count]);
+ test_params->member_port_ids[test_params->bonded_member_count]);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT 4
#define TEST_LSC_WAIT_TIMEOUT_US 500000
int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
static int
test_status_interrupt(void)
{
- int slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- /* initialized bonding device with T slaves */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* initialized bonding device with T members */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 1,
- TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+ TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
- /* Bring all 4 slaves link status to down and test that we have received a
+ /* Bring all 4 members' link status down and test that we have received the
* lsc interrupts */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
"Received a link status change interrupt unexpectedly");
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, 0);
- /* bring one slave port up so link status will change */
+ /* bring one member port up so link status will change */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- /* Verify that calling the same slave lsc interrupt doesn't cause another
+ /* Verify that calling the same member lsc interrupt doesn't cause another
* lsc interrupt from bonded device */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
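
The LSC handling this test relies on can be sketched as below, assuming the renamed active-member getter from this series; the callback follows the rte_eth_dev_cb_fn signature and everything else is illustrative:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static volatile int lsc_event_count;

/* Sketch of an LSC callback registered on the bonding port. */
static int
bonding_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
		void *cb_arg, void *ret_param)
{
	RTE_SET_USED(port_id);
	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	lsc_event_count++;
	return 0;
}

/* Register the callback and report how many members are currently active. */
static int
watch_bonding_link_events(uint16_t bonding_port)
{
	uint16_t members[RTE_MAX_ETHPORTS];

	if (rte_eth_dev_callback_register(bonding_port, RTE_ETH_EVENT_INTR_LSC,
			bonding_lsc_cb, NULL) != 0)
		return -1;

	return rte_eth_bond_active_members_get(bonding_port, members,
			RTE_MAX_ETHPORTS);
}
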
static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size <= MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size / test_params->bonded_slave_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ (uint64_t)burst_size / test_params->bonded_member_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
pkt_burst, burst_size), 0,
"tx burst return unexpected value");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
rte_pktmbuf_free(mbufs[i]);
}
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE (64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT (22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (1)
static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
{
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
int i, first_fail_idx, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0,
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
/* Copy references to packets which we expect not to be transmitted */
- first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
- TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+ first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ (TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+ TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
- (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+ (i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
}
- /* Set virtual slave to only fail transmission of
- * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+ /*
+ * Set virtual member to only fail transmission of
+ * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ (uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- int slave_expected_tx_count;
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ int member_expected_tx_count;
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
- slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
- test_params->bonded_slave_count;
+ member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+ test_params->bonded_member_count;
- if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
- slave_expected_tx_count = slave_expected_tx_count -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+ if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+ member_expected_tx_count = member_expected_tx_count -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)slave_expected_tx_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[i],
- (unsigned int)port_stats.opackets, slave_expected_tx_count);
+ (uint64_t)member_expected_tx_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.opackets, member_expected_tx_count);
}
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
- free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
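
To make the expected counters above concrete: with TEST_RR_MEMBER_TX_FAIL_BURST_SIZE = 64 spread round robin over TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT = 2 members, each member is handed 32 packets; the failing member drops TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT = 22 of its share and so reports 32 - 22 = 10 opackets, the other member reports 32, and the bonding port reports 64 - 22 = 42.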
static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
{
struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
int i, j, burst_size = 25;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
- /* Verify bonded slave devices rx count */
- /* Verify slave ports tx stats */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ /* Verify member ports tx stats */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
- /* Reset bonded slaves stats */
- rte_eth_stats_reset(test_params->slave_port_ids[j]);
+ /* Reset bonded members stats */
+ rte_eth_stats_reset(test_params->member_port_ids[j]);
}
/* reset bonded device stats */
rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+ int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
int i, nb_rx;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
burst_size[i], "burst generation failed");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2],
(unsigned int)port_stats.ipackets, burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3],
(unsigned int)port_stats.ipackets, 0);
/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
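
The rx accounting above follows directly from the burst sizes: members 0-2 are loaded with 15, 13 and 36 packets respectively, so the bonding port should report 15 + 13 + 36 = 64 ipackets, while member 3, which is given no traffic, stays at 0.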
static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_2),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that all MACs are the same as first slave added to bonded dev */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Verify that all MACs are the same as first member added to bonded dev */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary"
+ "member port (%d) mac address has changed to that of primary"
" port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagate to bonded device and slaves */
+ /*
+ * Stop / start bonded device and verify that the primary MAC address is
+ * propagated to the bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(
memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary"
- " port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary"
+ " port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
- sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
- " that of new primary port\n", test_params->slave_port_ids[i]);
+ sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+ " that of new primary port\n", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
int i, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
"Port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
{
struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
- struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
/* NULL all pointers in array to simplify cleanup */
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+ /* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
* in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count / active member count is as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves eth_devs link status to down */
+ /* Set 2 members eth_devs link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count,
- TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).\n",
- slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count,
+ TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).\n",
+ member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
burst_size = 20;
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not sent on members with link status down:
*
* 1. Generate test burst of traffic
* 2. Transmit burst on bonded eth_dev
* 3. Verify stats for bonded eth_dev (opackets = burst_size)
- * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
TEST_ASSERT_EQUAL(
generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[0], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[1], (int)port_stats.opackets, 0);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[2], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[3], (int)port_stats.opackets, 0);
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not received from members with link status down:
*
* 1. Generate test bursts of traffic
* 2. Add bursts on to virtual eth_devs
* 3. Rx burst on bonded eth_dev, expected (burst_ size *
- * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+ * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
* 4. Verify stats for bonded eth_dev
- * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
- for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size);
}
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
- (struct rte_ether_addr *)polling_slave_mac;
- char slave_name[RTE_ETH_NAME_MAX_LEN];
+ (struct rte_ether_addr *)polling_member_mac;
+ char member_name[RTE_ETH_NAME_MAX_LEN];
int i;
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
- /* Generate slave name / MAC address */
- snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+ /* Generate member name / MAC address */
+ snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Create slave devices with no ISR Support */
- if (polling_test_slaves[i] == -1) {
- polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+ /* Create member devices with no ISR Support */
+ if (polling_test_members[i] == -1) {
+ polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
rte_socket_id(), 0);
- TEST_ASSERT(polling_test_slaves[i] >= 0,
- "Failed to create virtual virtual ethdev %s\n", slave_name);
+ TEST_ASSERT(polling_test_members[i] >= 0,
+ "Failed to create virtual ethdev %s\n", member_name);
- /* Configure slave */
- TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
- "Failed to configure virtual ethdev %s(%d)", slave_name,
- polling_test_slaves[i]);
+ /* Configure member */
+ TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+ "Failed to configure virtual ethdev %s(%d)", member_name,
+ polling_test_members[i]);
}
- /* Add slave to bonded device */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to add slave %s(%d) to bonded device %d",
- slave_name, polling_test_slaves[i],
+ /* Add member to bonded device */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to add member %s(%d) to bonded device %d",
+ member_name, polling_test_members[i],
test_params->bonded_port_id);
}
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* link status change callback for first slave link up */
+ /* link status change callback for first member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+ virtual_ethdev_set_link_status(polling_test_members[0], 1);
TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
- /* no link status change callback for second slave link up */
+ /* no link status change callback for second member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+ virtual_ethdev_set_link_status(polling_test_members[1], 1);
TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
- /* link status change callback for both slave links down */
+ /* link status change callback for both member links down */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
- virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+ virtual_ethdev_set_link_status(polling_test_members[0], 0);
+ virtual_ethdev_set_link_status(polling_test_members[1], 0);
TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+ /* Clean up and remove members from bonded device */
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_SUCCESS(
- rte_eth_bond_slave_remove(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to remove slave %d from bonded port (%d)",
- polling_test_slaves[i], test_params->bonded_port_id);
+ rte_eth_bond_member_remove(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to remove member %d from bonded port (%d)",
+ polling_test_members[i], test_params->bonded_port_id);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
- if (test_params->slave_port_ids[i] == primary_port) {
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+ if (test_params->member_port_ids[i] == primary_port) {
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
} else {
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, 0);
}
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
pkts_burst, burst_size), 0, "Sending empty burst failed");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
static int
test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
int i, j, burst_size = 17;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
burst_size, "burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
"rte_eth_rx_burst failed");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)", test_params->slave_port_ids[i],
- (unsigned int)port_stats.ipackets, burst_size);
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.ipackets,
+ burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)\n", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected "
- "(%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected "
+ "(%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
} else {
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode enabled",
+ test_params->member_port_ids[i]);
}
}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify that current member count and active member count are both 4 */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active member count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(
test_params->bonded_port_id, 0, &pkt_burst[0][0],
burst_size), burst_size, "rte_eth_tx_burst failed");
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected",
test_params->bonded_port_id);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
/** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
static int
test_balance_xmit_policy_configuration(void)
{
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
/* Invalid port id */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
/* Set xmit policy on non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
- test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
+ test_params->member_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
"Expected call to failed as invalid port specified.");
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
"Expected call to failed as invalid port specified.");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
static int
test_balance_l2_tx_burst(void)
{
- struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
- int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+ struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+ int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
uint16_t pktlen;
int i;
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
"failed to generate packet burst");
/* Send burst 1 on bonded port */
- for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
&pkts_burst[i][0], burst_size[i]),
burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
burst_size[0] + burst_size[1]);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
burst_size[1]);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, 0, pkts_burst_1,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
return balance_l34_tx_burst(0, 0, 0, 0, 1);
}
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 (40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2 (20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT (25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (0)
static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
- struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+ struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+ struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
- struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+ struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, first_tx_fail_idx, tx_count_1, tx_count_2;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0,
- TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
"Failed to generate test packet burst 1");
- first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+ first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
/* copy mbuf references for expected transmission failures */
- for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Failed to generate test packet burst 2");
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Transmit burst 1 */
tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
- TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Transmit burst 2 */
tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+ (uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ (TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1],
+ (uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
static int
test_balance_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+ int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
0, 0), burst_size[i],
"failed to generate packet burst");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
"Failed to initialise bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]),
+ test_params->member_port_ids[1]),
"Failed to set bonded port (%d) primary port to (%d)\n",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected\n",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected\n",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
"Failed to set balance xmit policy.");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected. */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- /* Send to sets of packet burst and verify that they are balanced across
- * slaves */
+ /*
+ * Send two sets of packet bursts and verify that they are balanced across
+ * members.
+ */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[2], (int)port_stats.opackets,
+ test_params->member_port_ids[2], (int)port_stats.opackets,
burst_size);
- /* verify that all packets get send on primary slave when no other slaves
+ /* verify that all packets get sent on the primary member when no other members
* are available */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 1);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 1);
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size + burst_size);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 1);
+ test_params->member_port_ids[2], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"Failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.ipackets,
burst_size * 3);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 2, 1),
"Failed to initialise bonded device");
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size * test_params->bonded_slave_count,
+ (uint64_t)burst_size * test_params->bonded_member_count,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, burst_size);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
"transmitted an unexpected number of packets");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT (3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE (40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT (15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT (10)
static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
- struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+ struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+ struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0,
- TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
- expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
}
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set the virtual members to fail transmission of the last
+ * TEST_BCAST_MEMBER_TX_FAIL_MIN/MAX_PACKETS_COUNT packets of the burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[0],
+ test_params->member_port_ids[0],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[1],
+ test_params->member_port_ids[1],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[2],
+ test_params->member_port_ids[2],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[0],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[0],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[1],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ test_params->member_port_ids[1],
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[2],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[2],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Transmit burst */
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
}
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Verify that all mbufs who transmission failed have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
static int
test_broadcast_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+ int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
burst_size[i], "failed to generate packet burst");
}
- /* Add rx data to slave 0 */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to each member */
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs allocate for rx testing */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
- /* Verify that all MACs are the same as first slave added to bonded
+ /* Verify that all MACs are the same as first member added to bonded
* device */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary "
+ "member port (%d) mac address has changed to that of primary "
"port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
1), "Failed to initialise bonded device");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected. */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- for (i = 0; i < test_params->bonded_slave_count; i++)
- rte_eth_stats_reset(test_params->slave_port_ids[i]);
+ for (i = 0; i < test_params->bonded_member_count; i++)
+ rte_eth_stats_reset(test_params->member_port_ids[i]);
- /* Verify that pkts are not sent on slaves with link status down */
+ /* Verify that pkts are not sent on members with link status down */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"rte_eth_tx_burst failed\n");
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
- TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+ TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
"(%d) port_stats.opackets (%d) not as expected (%d)\n",
test_params->bonded_port_id, (int)port_stats.opackets,
- burst_size * slave_count);
+ burst_size * member_count);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
free(test_params->pkt_eth_hdr);
test_params->pkt_eth_hdr = NULL;
- /* Clean up and remove slaves from bonded device */
- remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ remove_members_and_stop_bonded_device();
}
static void
free_virtualpmd_tx_queue(void)
{
- int i, slave_port, to_free_cnt;
+ int i, member_port, to_free_cnt;
struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
/* Free tx queue of virtual pmd */
- for (slave_port = 0; slave_port < test_params->bonded_slave_count;
- slave_port++) {
+ for (member_port = 0; member_port < test_params->bonded_member_count;
+ member_port++) {
to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_port],
+ test_params->member_port_ids[member_port],
pkts_to_free, MAX_PKT_BURST);
for (i = 0; i < to_free_cnt; i++)
rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
uint16_t pktlen;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
(BONDING_MODE_TLB, 1, 3, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
RTE_ETHER_TYPE_IPV4, 0, 0);
} else {
initialize_eth_header(test_params->pkt_eth_hdr,
- (struct rte_ether_addr *)test_params->default_slave_mac,
+ (struct rte_ether_addr *)test_params->default_member_mac,
(struct rte_ether_addr *)dst_mac_0,
RTE_ETHER_TYPE_IPV4, 0, 0);
}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
sum_ports_opackets += port_stats[i].opackets;
}
TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
- "Total packets sent by slaves is not equal to packets sent by bond interface");
+ "Total packets sent by members is not equal to packets sent by bond interface");
- /* checking if distribution of packets is balanced over slaves */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* checking if distribution of packets is balanced over members */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT(port_stats[i].obytes > 0 &&
port_stats[i].obytes < all_bond_obytes,
- "Packets are not balanced over slaves");
+ "Packets are not balanced over members");
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
burst_size);
TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
- /* Clean ugit checkout masterp and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
static int
test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
uint16_t i, j, nb_rx, burst_size = 17;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 4, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 2, 1),
"Failed to initialize bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
- * MAC hasn't been changed */
+ /*
+ * Verify that the bonded MAC is that of the first member and that the other member
+ * MAC hasn't been changed.
+ */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in TLB mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count. */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, (int)4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
- * has changed */
+ /*
+ * Bring primary port down, verify that active member count is 3 and primary
+ * has changed.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
rte_delay_us(500000);
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
for (i = 0; i < 4; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
rte_delay_us(11000);
}
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
burst_size)
return -1;
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ALB_SLAVE_COUNT 2
+#define TEST_ALB_MEMBER_COUNT 2
static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
struct rte_ether_hdr *eth_pkt;
struct rte_arp_hdr *arp_pkt;
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
- slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count;
+ member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
RTE_ARP_OP_REPLY);
rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
- slave_mac1 =
- rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 =
- rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 =
+ rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 =
+ rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
* Checking if packets are properly distributed on bonding ports. Packets
* 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+ int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
- * Checking if update ARP packets were properly send on slave ports.
+ * Checking if update ARP packets were properly sent on member ports.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+ test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
nb_pkts_sum += nb_pkts;
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
/*
* Checking if VLAN headers in generated ARP Update packet are correct.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
retval = 0;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
burst_size = 32;
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite = {
.unit_test_cases = {
TEST_CASE(test_create_bonded_device),
TEST_CASE(test_create_bonded_device_with_invalid_params),
- TEST_CASE(test_add_slave_to_bonded_device),
- TEST_CASE(test_add_slave_to_invalid_bonded_device),
- TEST_CASE(test_remove_slave_from_bonded_device),
- TEST_CASE(test_remove_slave_from_invalid_bonded_device),
- TEST_CASE(test_get_slaves_from_bonded_device),
- TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
- TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+ TEST_CASE(test_add_member_to_bonded_device),
+ TEST_CASE(test_add_member_to_invalid_bonded_device),
+ TEST_CASE(test_remove_member_from_bonded_device),
+ TEST_CASE(test_remove_member_from_invalid_bonded_device),
+ TEST_CASE(test_get_members_from_bonded_device),
+ TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+ TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
TEST_CASE(test_start_bonded_device),
TEST_CASE(test_stop_bonded_device),
TEST_CASE(test_set_bonding_mode),
- TEST_CASE(test_set_primary_slave),
+ TEST_CASE(test_set_primary_member),
TEST_CASE(test_set_explicit_bonded_mac),
TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
TEST_CASE(test_status_interrupt),
- TEST_CASE(test_adding_slave_after_bonded_device_started),
+ TEST_CASE(test_adding_member_after_bonded_device_started),
TEST_CASE(test_roundrobin_tx_burst),
- TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
- TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
- TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+ TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+ TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+ TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
- TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+ TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
TEST_CASE(test_activebackup_verify_mac_assignment),
- TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+ TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
TEST_CASE(test_balance_xmit_policy_configuration),
TEST_CASE(test_balance_l2_tx_burst),
TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
- TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+ TEST_CASE(test_balance_tx_burst_member_tx_fail),
TEST_CASE(test_balance_rx_burst),
TEST_CASE(test_balance_verify_promiscuous_enable_disable),
TEST_CASE(test_balance_verify_mac_assignment),
- TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
TEST_CASE(test_tlb_tx_burst),
TEST_CASE(test_tlb_rx_burst),
TEST_CASE(test_tlb_verify_mac_assignment),
TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
- TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+ TEST_CASE(test_tlb_verify_member_link_status_change_failover),
TEST_CASE(test_alb_change_mac_in_reply_sent),
TEST_CASE(test_alb_reply_from_client),
TEST_CASE(test_alb_receive_vlan_reply),
TEST_CASE(test_alb_ipv4_tx),
TEST_CASE(test_broadcast_tx_burst),
- TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+ TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
TEST_CASE(test_broadcast_rx_burst),
TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
TEST_CASE(test_broadcast_verify_mac_assignment),
- TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
TEST_CASE(test_reconfigure_bonded_device),
TEST_CASE(test_close_bonded_device),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
#define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
-#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT ("net_virt_%d_tx")
#define INVALID_SOCKET_ID (-1)
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
};
-struct slave_conf {
+struct member_conf {
struct rte_ring *rx_queue;
struct rte_ring *tx_queue;
uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
struct link_bonding_unittest_params {
uint8_t bonded_port_id;
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
-#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
static struct link_bonding_unittest_params test_params = {
.bonded_port_id = INVALID_PORT_ID,
- .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+ .member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
.mbuf_pool = NULL,
};
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test and satisfy given condition.
*
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* _condition condition that need to be checked
*/
#define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
if (!!(_condition))
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
* device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* */
-#define FOR_EACH_SLAVE(_i, _slave) \
- FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+ FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
/*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from a member's TX queue.
+ * member member port
* buffer for packets
* size size of buffer
* return number of packets or negative error number
*/
static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+ return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
size, NULL);
}
/*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into a member's RX queue.
+ * member member port
* buffer for packets
* size number of packets to be injected
* return number of queued packets or negative error number
*/
static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+ return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
size, NULL);
}
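+/*
+ * Illustrative usage sketch for the two queue helpers above (hypothetical
+ * snippet; 'member' and 'pkt' are placeholders): drain whatever the member
+ * transmitted, then inject a single frame into its RX path.
+ *
+ *     struct rte_mbuf *bufs[MAX_PKT_BURST];
+ *     int n = member_get_pkts(member, bufs, RTE_DIM(bufs));
+ *
+ *     if (n > 0)
+ *         free_pkts(bufs, n);
+ *     member_put_pkts(member, &pkt, 1);
+ */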
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
}
static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
{
struct rte_ether_addr addr, addr_check;
int retval;
/* Some sanity check */
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
- RTE_VERIFY(slave->bonded == 0);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+ RTE_VERIFY(member->bonded == 0);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- rte_ether_addr_copy(&slave_mac_default, &addr);
- addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+ rte_ether_addr_copy(&member_mac_default, &addr);
+ addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
- rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+ rte_eth_dev_mac_addr_remove(member->port_id, &addr);
- TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
- "Failed to set slave MAC address");
+ TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+ "Failed to set member MAC address");
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
- slave->port_id),
- "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
- (uint8_t)(slave - test_params.slave_ports), slave->port_id,
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+ member->port_id),
+ "Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+ (uint8_t)(member - test_params.member_ports), member->port_id,
test_params.bonded_port_id);
- slave->bonded = 1;
+ member->bonded = 1;
if (start) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
- "Failed to start slave %u", slave->port_id);
+ TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+ "Failed to start member %u", member->port_id);
}
- retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
- TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+ retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+ TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
strerror(-retval));
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
- "Slave MAC address is not as expected");
+ "Member MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(member->lacp_parnter_state == 0);
return 0;
}
static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
{
- ptrdiff_t slave_idx = slave - test_params.slave_ports;
+ ptrdiff_t member_idx = member - test_params.member_ports;
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
- RTE_VERIFY(slave->bonded == 1);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(member->bonded == 1);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
- slave->port_id), 0,
- "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
- (uint8_t)slave_idx, slave->port_id,
+ TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+ member->port_id), 0,
+ "Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+ (uint8_t)member_idx, member->port_id,
test_params.bonded_port_id);
- slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ member->bonded = 0;
+ member->lacp_parnter_state = 0;
return 0;
}
static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
- lacpdu_rx_count[slave_id]++;
+ lacpdu_rx_count[member_id]++;
rte_pktmbuf_free(lacp_pkt);
}
static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
{
uint8_t i;
int ret;
RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
- for (i = 0; i < slave_count; i++) {
- TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+ for (i = 0; i < member_count; i++) {
+ TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
"Failed to add port %u to bonded device.\n",
- test_params.slave_ports[i].port_id);
+ test_params.member_ports[i].port_id);
}
/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
int retval;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint16_t i;
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
"Failed to stop bonded port %u",
test_params.bonded_port_id);
- FOR_EACH_SLAVE(i, slave)
- remove_slave(slave);
+ FOR_EACH_MEMBER(i, member)
+ remove_member(member);
- retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
- RTE_DIM(slaves));
+ retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+ RTE_DIM(members));
TEST_ASSERT_EQUAL(retval, 0,
- "Expected bonded device %u have 0 slaves but returned %d.",
+ "Expected bonded device %u have 0 members but returned %d.",
test_params.bonded_port_id, retval);
- FOR_EACH_PORT(i, slave) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+ FOR_EACH_PORT(i, member) {
+ TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
"Failed to stop bonded port %u",
- slave->port_id);
+ member->port_id);
- TEST_ASSERT(slave->bonded == 0,
- "Port id=%u is still marked as enslaved.", slave->port_id);
+ TEST_ASSERT(member->bonded == 0,
+ "Port id=%u is still marked as enmemberd.", member->port_id);
}
return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
{
int retval, nb_mbuf_per_pool;
char name[RTE_ETH_NAME_MAX_LEN];
- struct slave_conf *port;
+ struct member_conf *port;
const uint8_t socket_id = rte_socket_id();
uint16_t i;
@@ -400,10 +400,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(i, port) {
- port = &test_params.slave_ports[i];
+ port = &test_params.member_ports[i];
if (port->rx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
}
if (port->tx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
}
if (port->port_id == INVALID_PORT_ID) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
retval = rte_eth_from_rings(name, &port->rx_queue, 1,
&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
* frame but not LACP
*/
static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
/* Change source address to partner address */
rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ member->lacp_parnter_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
}
/*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given member, searches for LACP packets and replies to them.
*
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from member. Looks for LACP packet. Drops
* all other packets. Prepares response LACP and sends it back.
*
* return number of LACP received and replied, -1 on error.
*/
static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
{
int retval;
struct rte_mbuf *rx_buf[MAX_PKT_BURST];
struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
uint16_t lacp_tx_buf_cnt = 0, i;
- retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
- TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
- slave->port_id);
+ retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+ TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+ member->port_id);
for (i = 0; i < (uint16_t)retval; i++) {
- if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+ if (make_lacp_reply(member, rx_buf[i]) == 0) {
/* reply with actor's LACP */
lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
if (lacp_tx_buf_cnt == 0)
return 0;
- retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+ retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
if (retval <= lacp_tx_buf_cnt) {
/* retval might be negative */
for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
}
TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
- "Failed to equeue lacp packets into slave %u tx queue.",
- slave->port_id);
+ "Failed to equeue lacp packets into member %u tx queue.",
+ member->port_id);
return lacp_tx_buf_cnt;
}
/*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks whether the given member's tx queue contains packets that make
+ * the mode 4 handshake complete. It will drain the member queue.
* return 0 if handshake not completed, 1 if handshake was complete,
*/
static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
{
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return member->lacp_parnter_state == expected_state;
}
static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
static int
bond_handshake(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *buf[MAX_PKT_BURST];
uint16_t nb_pkts;
- uint8_t all_slaves_done, i, j;
- uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+ uint8_t all_members_done, i, j;
+ uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
const unsigned delay = bond_get_update_timeout_ms();
/* Exchange LACP frames */
- all_slaves_done = 0;
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ all_members_done = 0;
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
rte_delay_ms(delay);
- all_slaves_done = 1;
- FOR_EACH_SLAVE(j, slave) {
- /* If response already send, skip slave */
+ all_members_done = 1;
+ FOR_EACH_MEMBER(j, member) {
+ /* If response already sent, skip member */
if (status[j] != 0)
continue;
- if (bond_handshake_reply(slave) < 0) {
- all_slaves_done = 0;
+ if (bond_handshake_reply(member) < 0) {
+ all_members_done = 0;
break;
}
- status[j] = bond_handshake_done(slave);
+ status[j] = bond_handshake_done(member);
if (status[j] == 0)
- all_slaves_done = 0;
+ all_members_done = 0;
}
nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
}
/* If response didn't send - report failure */
- TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+ TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
/* If flags doesn't match - report failure */
- return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+ return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
}
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
static int
test_mode4_lacp(void)
{
int retval;
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
/* Test LACP handshake function */
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
{
int retval;
/* Test and verify for Stable mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_STABLE,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify for Bandwidth mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify selection for count mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_COUNT,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
}
static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
struct rte_ether_addr *src_mac,
struct rte_ether_addr *dst_mac, uint16_t count)
{
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
if (retval != (int)count)
return retval;
- retval = slave_put_pkts(slave, pkts, count);
+ retval = member_put_pkts(member, pkts, count);
if (retval > 0 && retval != count)
free_pkts(&pkts[retval], count - retval);
TEST_ASSERT_EQUAL(retval, count,
- "Failed to enqueue packets into slave %u RX queue", slave->port_id);
+ "Failed to enqueue packets into member %u RX queue", member->port_id);
return TEST_SUCCESS;
}
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
static int
test_mode4_rx(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
struct rte_ether_addr dst_mac;
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -838,7 +838,7 @@ test_mode4_rx(void)
dst_mac.addr_bytes[0] += 2;
/* First try with promiscuous mode enabled.
- * Add 2 packets to each slave. First with bonding MAC address, second with
+ * Add 2 packets to each member. First with bonding MAC address, second with
* different. Check if we received all of them. */
retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect 2 packets per slave */
+ /* Expect 2 packets per member */
expected_pkts_cnt += 2;
}
@@ -894,16 +894,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect only one packet per slave */
+ /* Expect only one packet per member */
expected_pkts_cnt += 1;
}
@@ -927,19 +927,19 @@ test_mode4_rx(void)
TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
"Expected %u packets but received only %d", expected_pkts_cnt, retval);
- /* Link down test: simulate link down for first slave. */
+ /* Link down test: simulate link down for first member. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- /* Find first slave and make link down on it*/
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ /* Find first member and make link down on it */
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding */
for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
- /* Put packet to each slave */
- FOR_EACH_SLAVE(i, slave) {
+ /* Put packet to each member */
+ FOR_EACH_MEMBER(i, member) {
void *pkt = NULL;
- dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+ dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
- src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+ src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
if (retval > 0)
free_pkts(pkts, retval);
- while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+ while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
rte_pktmbuf_free(pkt);
- if (slave_down_id == slave->port_id)
+ if (member_down_id == member->port_id)
TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
else
TEST_ASSERT_NOT_EQUAL(retval, 0,
- "Expected to receive some packets on slave %u.",
- slave->port_id);
- rte_eth_dev_start(slave->port_id);
+ "Expected to receive some packets on member %u.",
+ member->port_id);
+ rte_eth_dev_start(member->port_id);
for (j = 0; j < 5; j++) {
- TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+ TEST_ASSERT(bond_handshake_reply(member) >= 0,
"Handshake after link up");
- if (bond_handshake_done(slave) == 1)
+ if (bond_handshake_done(member) == 1)
break;
}
- TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
}
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
static int
test_mode4_tx_burst(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets were transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+ "member %u unexpectedly transmitted %d SLOW packets", member->port_id,
slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
pkts_cnt += normal_cnt;
}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- /* Link down test:
- * simulate link down for first slave. */
+ /*
+ * Link down test:
+ * simulate link down for first member.
+ */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding. */
for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets was transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
- if (slave_down_id == slave->port_id) {
+ if (member_down_id == member->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
- normal_cnt + slow_cnt, slave->port_id);
+ "member %u enexpectedly transmitted %u packets",
+ normal_cnt + slow_cnt, member->port_id);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets",
- slave->port_id, slow_cnt);
+ "member %u unexpectedly transmitted %d SLOW packets",
+ member->port_id, slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmitted any packets", member->port_id);
}
pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
{
struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
rte_ether_addr_copy(&parnter_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
offsetof(struct marker, reserved_90) -
offsetof(struct marker, requester_port);
RTE_VERIFY(marker_hdr->marker.info_length == 16);
- marker_hdr->marker.requester_port = slave->port_id + 1;
+ marker_hdr->marker.requester_port = member->port_id + 1;
marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
marker_hdr->marker.terminator_length = 0;
}
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
static int
test_mode4_marker(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *pkts[MAX_PKT_BURST];
struct rte_mbuf *marker_pkt;
struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
uint8_t i, j;
const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
- retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+ retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
delay = bond_get_update_timeout_ms();
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
- init_marker(marker_pkt, slave);
+ init_marker(marker_pkt, member);
- retval = slave_put_pkts(slave, &marker_pkt, 1);
+ retval = member_put_pkts(member, &marker_pkt, 1);
if (retval != 1)
rte_pktmbuf_free(marker_pkt);
TEST_ASSERT_EQUAL(retval, 1,
- "Failed to send marker packet to slave %u", slave->port_id);
+ "Failed to send marker packet to member %u", member->port_id);
for (j = 0; j < 20; ++j) {
rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
/* Check if LACP packet was send by state machines
First and only packet must be a maker response */
- retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+ retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
if (retval == 0)
continue;
if (retval > 1)
free_pkts(pkts, retval);
- TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+ TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
nb_pkts = retval;
marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
TEST_ASSERT(j < 20, "Marker response not found");
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
static int
test_mode4_expired(void)
{
- struct slave_conf *slave, *exp_slave = NULL;
+ struct member_conf *member, *exp_member = NULL;
struct rte_mbuf *pkts[MAX_PKT_BURST];
int retval;
uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
struct rte_eth_bond_8023ad_conf conf;
- retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
0);
/* Set custom timeouts to make test last shorter. */
rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
/* Wait for new settings to be applied. */
for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
- FOR_EACH_SLAVE(j, slave)
- bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(j, member)
+ bond_handshake_reply(member);
rte_delay_ms(conf.update_timeout_ms);
}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- /* Find first slave */
- FOR_EACH_SLAVE(i, slave) {
- exp_slave = slave;
+ /* Find first member */
+ FOR_EACH_MEMBER(i, member) {
+ exp_member = member;
break;
}
- RTE_VERIFY(exp_slave != NULL);
+ RTE_VERIFY(exp_member != NULL);
/* When one of partners do not send or respond to LACP frame in
* conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
retval);
- FOR_EACH_SLAVE(i, slave) {
- retval = bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(i, member) {
+ retval = bond_handshake_reply(member);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that suppose to be expired. */
- if (slave == exp_slave) {
- while (rte_ring_count(slave->rx_queue) > 0) {
+ /* Remove the reply for the member that is supposed to expire. */
+ if (member == exp_member) {
+ while (rte_ring_count(member->rx_queue) > 0) {
void *pkt = NULL;
- rte_ring_dequeue(slave->rx_queue, &pkt);
+ rte_ring_dequeue(member->rx_queue, &pkt);
rte_pktmbuf_free(pkt);
}
}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
retval);
}
- /* After test only expected slave should be in EXPIRED state */
- FOR_EACH_SLAVE(i, slave) {
- if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
- "Slave %u should be in expired.", slave->port_id);
+ /* After test only expected member should be in EXPIRED state */
+ FOR_EACH_MEMBER(i, member) {
+ if (member == exp_member)
+ TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+ "Member %u should be in expired.", member->port_id);
else
- TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
- "Slave %u should be operational.", slave->port_id);
+ TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+ "Member %u should be operational.", member->port_id);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
* . try to transmit lacpdu (should fail)
* . try to set collecting and distributing flags (should fail)
* reconfigure w/external sm
- * . transmit one lacpdu on each slave using new api
- * . make sure each slave receives one lacpdu using the callback api
- * . transmit one data pdu on each slave (should fail)
+ * . transmit one lacpdu on each member using new api
+ * . make sure each member receives one lacpdu using the callback api
+ * . transmit one data pdu on each member (should fail)
* . enable distribution and collection, send one data pdu each again
*/
int retval;
- struct slave_conf *slave = NULL;
+ struct member_conf *member = NULL;
uint8_t i;
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]),
- "Slave should not allow manual LACP xmit");
+ member->port_id, lacp_tx_buf[i]),
+ "Member should not allow manual LACP xmit");
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
test_params.bonded_port_id,
- slave->port_id, 1),
- "Slave should not allow external state controls");
+ member->port_id, 1),
+ "Member should not allow external state controls");
}
free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
test_mode4_ext_lacp(void)
{
int retval;
- struct slave_conf *slave = NULL;
- uint8_t all_slaves_done = 0, i;
+ struct member_conf *member = NULL;
+ uint8_t all_members_done = 0, i;
uint16_t nb_pkts;
const unsigned int delay = bond_get_update_timeout_ms();
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
- struct rte_mbuf *buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+ struct rte_mbuf *buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
for (i = 0; i < 30; ++i)
rte_delay_ms(delay);
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
retval = rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]);
+ member->port_id, lacp_tx_buf[i]);
TEST_ASSERT_SUCCESS(retval,
- "Slave should allow manual LACP xmit");
+ "Member should allow manual LACP xmit");
}
nb_pkts = bond_tx(NULL, 0);
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
- FOR_EACH_SLAVE(i, slave) {
- nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
- TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+ FOR_EACH_MEMBER(i, member) {
+ nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+ TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
nb_pkts, i);
- slave_put_pkts(slave, buf, nb_pkts);
+ member_put_pkts(member, buf, nb_pkts);
}
nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
/* wait for the periodic callback to run */
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
uint8_t s, total = 0;
rte_delay_ms(delay);
- FOR_EACH_SLAVE(s, slave) {
- total += lacpdu_rx_count[slave->port_id];
+ FOR_EACH_MEMBER(s, member) {
+ total += lacpdu_rx_count[member->port_id];
}
- if (total >= SLAVE_COUNT)
- all_slaves_done = 1;
+ if (total >= MEMBER_COUNT)
+ all_members_done = 1;
}
- FOR_EACH_SLAVE(i, slave) {
- TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
- "Slave port %u should have received 1 lacpdu (count=%u)",
- slave->port_id,
- lacpdu_rx_count[slave->port_id]);
+ FOR_EACH_MEMBER(i, member) {
+ TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+ "Member port %u should have received 1 lacpdu (count=%u)",
+ member->port_id,
+ lacpdu_rx_count[member->port_id]);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
static int
check_environment(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i, env_state;
- uint16_t slaves[RTE_DIM(test_params.slave_ports)];
- int slaves_count;
+ uint16_t members[RTE_DIM(test_params.member_ports)];
+ int members_count;
env_state = 0;
FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
break;
}
- slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
- slaves, RTE_DIM(slaves));
+ members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+ members, RTE_DIM(members));
- if (slaves_count != 0)
+ if (members_count != 0)
env_state |= 0x10;
TEST_ASSERT_EQUAL(env_state, 0,
"Environment not clean (port %u):%s%s%s%s%s",
port->port_id,
- env_state & 0x01 ? " slave rx queue not clean" : "",
- env_state & 0x02 ? " slave tx queue not clean" : "",
- env_state & 0x04 ? " port marked as enslaved" : "",
- env_state & 0x80 ? " slave state is not reset" : "",
- env_state & 0x10 ? " slave count not equal 0" : ".");
+ env_state & 0x01 ? " member rx queue not clean" : "",
+ env_state & 0x02 ? " member tx queue not clean" : "",
+ env_state & 0x04 ? " port marked as enmemberd" : "",
+ env_state & 0x80 ? " member state is not reset" : "",
+ env_state & 0x10 ? " member count not equal 0" : ".");
return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
static int
test_mode4_executor(int (*test_func)(void))
{
- struct slave_conf *port;
+ struct member_conf *port;
int test_result;
uint8_t i;
void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
FOR_EACH_PORT(i, port) {
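For context, a minimal sketch (not part of this patch) of how the external 802.3ad state machine exercised above can be driven from application code; the mbuf pool, port ids and LACPDU template are illustrative assumptions:

    #include <errno.h>
    #include <string.h>
    #include <rte_mbuf.h>
    #include <rte_eth_bond_8023ad.h>

    /* Queue one LACPDU on a member port; only meaningful when the bond
     * was configured with the external state machine enabled. */
    static int
    send_manual_lacpdu(uint16_t bond_port, uint16_t member_port,
                       struct rte_mempool *mp, const struct lacpdu_header *tmpl)
    {
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        int ret;

        if (m == NULL)
            return -ENOMEM;
        memcpy(rte_pktmbuf_mtod(m, void *), tmpl, sizeof(*tmpl));
        rte_pktmbuf_data_len(m) = sizeof(*tmpl);
        rte_pktmbuf_pkt_len(m) = sizeof(*tmpl);

        ret = rte_eth_bond_8023ad_ext_slowtx(bond_port, member_port, m);
        if (ret != 0)
            rte_pktmbuf_free(m); /* not consumed on failure, as in the test above */
        return ret;
    }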
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RXTX_RING_SIZE 1024
#define RXTX_QUEUE_COUNT 4
#define BONDED_DEV_NAME ("net_bonding_rss")
-#define SLAVE_DEV_NAME_FMT ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT ("rssconf_member%d_q%d")
#define NUM_MBUFS 8191
#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-struct slave_conf {
+struct member_conf {
uint16_t port_id;
struct rte_eth_dev_info dev_info;
@@ -54,7 +54,7 @@ struct slave_conf {
uint8_t rss_key[40];
struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- uint8_t is_slave;
+ uint8_t is_member;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
};
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
static struct link_bonding_rssconf_unittest_params test_params = {
.bond_port_id = INVALID_PORT_ID,
- .slave_ports = {
- [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+ .member_ports = {
+ [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
},
.mbuf_pool = NULL,
};
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
static int
configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
}
/**
- * Remove all slaves from bonding
+ * Remove all members from bonding
*/
static int
-remove_slaves(void)
+remove_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+ port = &test_params.member_ports[n];
+ if (port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
test_params.bond_port_id, port->port_id),
- "Cannot remove slave %d from bonding", port->port_id);
- port->is_slave = 0;
+ "Cannot remove member %d from bonding", port->port_id);
+ port->is_member = 0;
}
}
@@ -173,30 +173,30 @@ remove_slaves(void)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+ TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
"Failed to stop port %u", test_params.bond_port_id);
return TEST_SUCCESS;
}
/**
- * Add all slaves to bonding
+ * Add all members to bonding
*/
static int
-bond_slaves(void)
+bond_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (!port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot attach slave %d to the bonding",
+ port = &test_params.member_ports[n];
+ if (!port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot attach member %d to the bonding",
port->port_id);
- port->is_slave = 1;
+ port->is_member = 1;
}
}
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
}
/**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if a member's RETA is synchronized with the bonding port. Returns 1 if member
* port is synced with bonding port.
*/
static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
{
unsigned i;
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
}
/**
- * Fetch slaves RETA
+ * Fetch members RETA
*/
static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
unsigned j;
for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
}
/**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add a member to check if the member's configuration is synced with
+ * the bonding port's values after adding a new member.
*/
static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
{
- struct slave_conf *port = &(test_params.slave_ports[0]);
+ struct member_conf *port = &(test_params.member_ports[0]);
- /* 1. Remove first slave from bonding */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
- port->port_id), "Cannot remove slave #d from bonding");
+ /* 1. Remove first member from bonding */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+ port->port_id), "Cannot remove member #d from bonding");
- /* 2. Change removed (ex-)slave and bonding configuration to different
+ /* 2. Change removed (ex-)member and bonding configuration to different
* values
*/
reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
bond_reta_fetch();
reta_set(port->port_id, 2, port->dev_info.reta_size);
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 0,
- "Removed slave didn't should be synchronized with bonding port");
+ "Removed member didn't should be synchronized with bonding port");
- /* 3. Add (ex-)slave and check if configuration changed*/
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot add slave");
+ /* 3. Add (ex-)member and check if configuration changed*/
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot add member");
bond_reta_fetch();
- slave_reta_fetch(port);
+ member_reta_fetch(port);
return reta_check_synced(port);
}
/**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
*/
static int
test_propagate(void)
{
unsigned i;
uint8_t n;
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t bond_rss_key[40];
struct rte_eth_rss_conf bond_rss_conf;
@@ -349,18 +349,18 @@ test_propagate(void)
retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
&bond_rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
- "Hash function not propagated for slave %d",
+ "Hash function not propagated for member %d",
port->port_id);
}
@@ -376,11 +376,11 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
memset(port->rss_conf.rss_key, 0, 40);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
}
memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&(port->rss_conf));
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
/* compare keys */
retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
sizeof(bond_rss_key));
- TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+ TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
port->port_id);
}
}
@@ -416,10 +416,10 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
}
TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
bond_reta_fetch();
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
}
}
@@ -459,29 +459,29 @@ test_rss(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
- TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+ TEST_ASSERT(member_remove_and_add() == 1, "Remove and re-add of a member failed.");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
/**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and member ports.
*/
static int
test_rss_config_lazy(void)
{
struct rte_eth_rss_conf bond_rss_conf = {0};
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t rss_key[40];
uint64_t rss_hf;
int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
}
- /* Set all keys to zero for all slaves */
+ /* Set all keys to zero for all members */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+ TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
memset(port->rss_key, 0, sizeof(port->rss_key));
port->rss_conf.rss_key = port->rss_key;
port->rss_conf.rss_key_len = sizeof(port->rss_key);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
}
/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
/* Test RETA propagation */
for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
}
retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
@@ -579,13 +579,13 @@ test_setup(void)
int retval;
int port_id;
char name[256];
- struct slave_conf *port;
+ struct member_conf *port;
struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
if (test_params.mbuf_pool == NULL) {
test_params.mbuf_pool = rte_pktmbuf_pool_create(
- "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+ "RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
port_id = rte_eth_dev_count_avail();
- snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+ snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
retval = rte_vdev_init(name, "size=64,copy=0");
TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
}
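As an aside, a minimal sketch (not part of this patch) of the propagation property this test suite checks, namely that RSS settings read back from a member match the bonding port; port ids and the 40-byte key length are illustrative assumptions:

    #include <string.h>
    #include <rte_ethdev.h>

    /* Return 1 when the member's RSS hash functions and key match the
     * bonding port, 0 when they differ, negative on error. */
    static int
    rss_conf_propagated(uint16_t bond_port, uint16_t member_port)
    {
        uint8_t bond_key[40], member_key[40];
        struct rte_eth_rss_conf bond_conf = {
            .rss_key = bond_key, .rss_key_len = sizeof(bond_key) };
        struct rte_eth_rss_conf member_conf = {
            .rss_key = member_key, .rss_key_len = sizeof(member_key) };

        if (rte_eth_dev_rss_hash_conf_get(bond_port, &bond_conf) != 0 ||
            rte_eth_dev_rss_hash_conf_get(member_port, &member_conf) != 0)
            return -1;

        return bond_conf.rss_hf == member_conf.rss_hf &&
               memcmp(bond_key, member_key, sizeof(bond_key)) == 0;
    }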
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
----------
A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
testpmd> create bonded device 1 0
Created new bonded device net_bond_testpmd_0 on (port 2).
- testpmd> add bonding slave 0 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 0 2
+ testpmd> add bonding member 1 2
testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
Set primary to P1 before starting bonding port.
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
Use P2 only for forwarding.
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
testpmd> start
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
.. code-block:: console
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
testpmd> clear port stats all
testpmd> set bonding primary 0 2
- testpmd> remove bonding slave 1 2
+ testpmd> remove bonding member 1 2
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
testpmd> show port stats all.
testpmd> show config fwd
testpmd> show bonding config 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console
- testpmd> remove bonding slave 0 2
+ testpmd> remove bonding member 0 2
testpmd> show bonding config 2
testpmd> port stop 0
testpmd> port close 0
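For reference, a minimal sketch (not part of this patch) of the programmatic equivalent of the ``set bonding primary`` failover step above; port ids are illustrative assumptions:

    #include <stdio.h>
    #include <rte_eth_bond.h>

    static void
    switch_primary_member(uint16_t bond_port, uint16_t member_port)
    {
        /* Select which member carries traffic in active-backup mode. */
        if (rte_eth_bond_primary_set(bond_port, member_port) != 0) {
            fprintf(stderr, "cannot set member %u as primary of port %u\n",
                    member_port, bond_port);
            return;
        }
        printf("primary member of port %u is now %d\n", bond_port,
               rte_eth_bond_primary_get(bond_port));
    }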
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
.. code-block:: console
- dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
- (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+ dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
+ (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
Vector Processing
-----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
v:langID="1033"
v:metric="true"
v:viewMarkup="false"><v:userDefs><v:ud
- v:nameU="msvSubprocessMaster"
+ v:nameU="msvSubprocessMain"
v:prompt=""
v:val="VT4(Rectangle)" /><v:ud
v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..519a364105 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
``rte_eth_dev`` ports of the same speed and duplex to provide similar
capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
and a switch. The new bonded PMD will then process these interfaces based on
the mode of operation specified to provide support for features such as
redundant links, fault tolerance and/or load balancing.
The librte_net_bond library exports a C API which provides an API for the
creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
.. note::
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides load balancing and fault tolerance by transmission of
- packets in sequential order from the first available slave device through
+ packets in sequential order from the first available member device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
Active Backup (Mode 1)
- In this mode only one slave in the bond is active at any time, a different
- slave becomes active if, and only if, the primary active slave fails,
- thereby providing fault tolerance to slave failure. The single logical
+ In this mode only one member in the bond is active at any time, a different
+ member becomes active if, and only if, the primary active member fails,
+ thereby providing fault tolerance to member failure. The single logical
bonded interface's MAC address is externally visible on only one NIC (port)
to avoid confusing the network switch.
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides transmit load balancing (based on the selected
transmission policy) and fault tolerance. The default policy (layer2) uses
a simple calculation based on the packet flow source and destination MAC
- addresses as well as the number of active slaves available to the bonded
- device to classify the packet to a specific slave to transmit on. Alternate
+ addresses as well as the number of active members available to the bonded
+ device to classify the packet to a specific member to transmit on. Alternate
transmission policies supported are layer 2+3, this takes the IP source and
- destination addresses into the calculation of the transmit slave port and
+ destination addresses into the calculation of the transmit member port and
the final supported policy is layer 3+4, this uses IP source and
destination addresses as well as the TCP/UDP source and destination port.
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
Broadcast (Mode 3)
- This mode provides fault tolerance by transmission of packets on all slave
+ This mode provides fault tolerance by transmission of packets on all member
ports.
* **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
intervals period of less than 100ms.
#. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
- where N is the number of slaves. This is a space required for LACP
+ where N is the number of members. This is the space required for LACP
frames. Additionally LACP packets are included in the statistics, but
they are not returned to the application.
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides an adaptive transmit load balancing. It dynamically
- changes the transmitting slave, according to the computed load. Statistics
+ changes the transmitting member, according to the computed load. Statistics
are collected in 100ms intervals and scheduled every 10ms.
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
programmatically via the C API ``rte_eth_bond_create`` function.
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to make the RSS configuration of members transparent to the client
application implementation.
Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. This defines the meaning
of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without referring to any member inside. It is required to ensure
consistency and made it more error-proof.
RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and default key for device is used.
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with RSS configuration, there is flow consistency in the bonded members for the
next rte flow operations:
Validate:
- - Validate flow for each slave, failure at least for one slave causes to
+ - Validate flow for each member; failure for at least one member causes
bond validation failure.
Create:
- - Create the flow in all slaves.
- - Save all the slaves created flows objects in bonding internal flow
+ - Create the flow in all members.
+ - Save all the flow objects created on the members in the bonding internal flow
structure.
- - Failure in flow creation for existed slave rejects the flow.
- - Failure in flow creation for new slaves in slave adding time rejects
- the slave.
+ - Failure in flow creation for an existing member rejects the flow.
+ - Failure in flow creation for a new member at the time it is added rejects
+ the member.
Destroy:
- - Destroy the flow in all slaves and release the bond internal flow
+ - Destroy the flow in all members and release the bond internal flow
memory.
Flush:
- - Destroy all the bonding PMD flows in all the slaves.
+ - Destroy all the bonding PMD flows in all the members.
.. note::
- Don't call slaves flush directly, It destroys all the slave flows which
+ Don't call member flush directly; it destroys all the member flows, which
may include external flows or the bond internal LACP flow.
Query:
- - Summarize flow counters from all the slaves, relevant only for
+ - Summarize flow counters from all the members, relevant only for
``RTE_FLOW_ACTION_TYPE_COUNT``.
Isolate:
- - Call to flow isolate for all slaves.
- - Failure in flow isolation for existed slave rejects the isolate mode.
- - Failure in flow isolation for new slaves in slave adding time rejects
- the slave.
+ - Call to flow isolate for all members.
+ - Failure in flow isolation for an existing member rejects the isolate mode.
+ - Failure in flow isolation for a new member at the time it is added rejects
+ the member.
All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
Link Status Change Interrupts / Polling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
Link bonding devices support the registration of a link status change callback,
using the ``rte_eth_dev_callback_register`` API, this will be called when the
status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
The link bonding library also supports devices which do not implement link
status change interrupts, this is achieved by polling the devices link status at
a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
whether the device supports interrupts or whether the link status should be
monitored by polling it.
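To illustrate the callback mechanism described above, a minimal sketch (not part of this patch); the callback body and the bonded port id used at registration are illustrative assumptions:

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Invoked when the link status of the bonded port changes. */
    static int
    bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
    {
        struct rte_eth_link link;

        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        if (rte_eth_link_get_nowait(port_id, &link) == 0)
            printf("bonded port %u link is %s\n", port_id,
                   link.link_status ? "up" : "down");
        return 0;
    }

    /* Registration on the bonded port (bond_port is illustrative):
     * rte_eth_dev_callback_register(bond_port, RTE_ETH_EVENT_INTR_LSC,
     *                               bond_lsc_cb, NULL);
     */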
@@ -233,30 +233,30 @@ Requirements / Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
these parameters.
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
itself can be started.
To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
Like all other PMD, all functions exported by a PMD are lock-free functions
that are assumed not to be invoked in parallel on different logical cores to
work on the same target object.
It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
bonded device to read.
Configuration
@@ -265,25 +265,25 @@ Configuration
Link bonding devices are created using the ``rte_eth_bond_create`` API
which requires a unique device name, the bonding mode,
and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
the device is in balance XOR mode.
-Slave Devices
+Member Devices
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
configuration of the bonded device on being added to a bonded device.
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device to
+its original value on removal of the member from it.
-Primary Slave
+Primary Member
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
device is in active backup mode. A different port will only be used if, and
only if, the current primary port goes down. If the user does not specify a
primary port it will default to being the first port added to the bonded device.
@@ -292,14 +292,14 @@ MAC Address
^^^^^^^^^^^
The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all member devices depending on the
operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
the bonded devices MAC address.
If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
Balance XOR Transmit Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
* **Layer 2:** Ethernet MAC address based balancing is the default
transmission policy for Balance XOR bonding mode. It uses a simple XOR
calculation on the source MAC address and destination MAC address of the
- packet and then calculate the modulus of this value to calculate the slave
+ packet and then calculates the modulus of this value to select the member
device to transmit the packet on.
* **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
combination of source/destination MAC addresses and the source/destination
- IP addresses of the data packet to decide which slave port the packet will
+ IP addresses of the data packet to decide which member port the packet will
be transmitted on.
* **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
of source/destination IP Address and the source/destination UDP ports of
- the packet of the data packet to decide which slave port the packet will be
+ the data packet to decide which member port the packet will be
transmitted on.
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
which will be used must be setup using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup``.
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
before it can be started using ``rte_eth_dev_start``.
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members: if all
+member links are down, or if all members are removed from the link
bonding device then the link status of the bonding device will go down.
It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
-* A least one slave device is provided with for each bonded device definition.
+* At least one member device is provided for each bonded device definition.
* The operation mode of the bonded device being created is provided.
@@ -404,20 +404,20 @@ The different options are:
mode=2
-* slave: Defines the PMD device which will be added as slave to the bonded
+* member: Defines the PMD device which will be added as a member to the bonded
device. This option can be selected multiple times, for each device to be
- added as a slave. Physical devices should be specified using their PCI
+ added as a member. Physical devices should be specified using their PCI
address, in the format domain:bus:devid.function
.. code-block:: console
- slave=0000:0a:00.0,slave=0000:0a:00.1
+ member=0000:0a:00.0,member=0000:0a:00.1
-* primary: Optional parameter which defines the primary slave port,
- is used in active backup mode to select the primary slave for data TX/RX if
+* primary: Optional parameter which defines the primary member port,
+ is used in active backup mode to select the primary member for data TX/RX if
it is available. The primary port also is used to select the MAC address to
- use when it is not defined by the user. This defaults to the first slave
- added to the device if it is specified. The primary device must be a slave
+ use when it is not defined by the user. This defaults to the first member
+ added to the device if it is specified. The primary device must be a member
of the bonded device.
.. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
socket_id=0
* mac: Optional parameter to select a MAC address for link bonding device,
- this overrides the value of the primary slave device.
+ this overrides the value of the primary member device.
.. code-block:: console
@@ -474,29 +474,29 @@ The different options are:
Examples of Usage
^^^^^^^^^^^^^^^^^
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
.. _bonding_testpmd_commands:
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
testpmd> create bonded device 1 0
created new bonded device (port X)
-add bonding slave
+add bonding member
-~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~
Adds Ethernet device to a Link Bonding device::
- testpmd> add bonding slave (slave id) (port id)
+ testpmd> add bonding member (member id) (port id)
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
- testpmd> add bonding slave 6 10
+ testpmd> add bonding member 6 10
-remove bonding slave
+remove bonding member
-~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
- testpmd> remove bonding slave (slave id) (port id)
+ testpmd> remove bonding member (member id) (port id)
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
- testpmd> remove bonding slave 6 10
+ testpmd> remove bonding member 6 10
set bonding mode
~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
set bonding primary
~~~~~~~~~~~~~~~~~~~
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
- testpmd> set bonding primary (slave id) (port id)
+ testpmd> set bonding primary (member id) (port id)
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
testpmd> set bonding primary 6 10
@@ -590,7 +590,7 @@ set bonding mon_period
Set the link status monitoring polling period in milliseconds for a bonding device.
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
When the mon_period is set to a value greater than 0 then all PMD's which do not support
link status ISR will be queried every polling interval to check if their link status has changed::
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
set bonding lacp dedicated_queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
when in mode 4 (link-aggregation-802.3ad)::
testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
testpmd> show bonding config (port id)
For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
in balance mode with a transmission policy of layer 2+3::
testpmd> show bonding config 9
- Dev basic:
Bonding mode: BALANCE(2)
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
- Slaves (3): [1 3 4]
- Active Slaves (3): [1 3 4]
+ Members (3): [1 3 4]
+ Active Members (3): [1 3 4]
Primary: [3]
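As a usage note, a minimal sketch (not part of this patch) of the C API flow with the member-based names introduced by this series; the device name, mode and port ids are illustrative assumptions, and ``rte_eth_dev_configure``/queue setup for the bonded port are omitted for brevity:

    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>
    #include <rte_lcore.h>

    static int
    create_active_backup_bond(uint16_t member0, uint16_t member1)
    {
        int bond_port;

        /* Create the bonded device, then attach both members. */
        bond_port = rte_eth_bond_create("net_bonding0",
                                        BONDING_MODE_ACTIVE_BACKUP,
                                        rte_socket_id());
        if (bond_port < 0)
            return bond_port;

        if (rte_eth_bond_member_add(bond_port, member0) != 0 ||
            rte_eth_bond_member_add(bond_port, member1) != 0)
            return -1;

        /* member0 carries traffic until its link fails. */
        if (rte_eth_bond_primary_set(bond_port, member0) != 0)
            return -1;

        /* rte_eth_dev_configure() and queue setup omitted for brevity. */
        return rte_eth_dev_start(bond_port) == 0 ? bond_port : -1;
    }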
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
cmdline_fixed_string_t set;
cmdline_fixed_string_t bonding;
cmdline_fixed_string_t primary;
- portid_t slave_id;
+ portid_t member_id;
portid_t port_id;
};
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
struct cmd_set_bonding_primary_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* Set the primary slave for a bonded device. */
- if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
- fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
- master_port_id);
+ /* Set the primary member for a bonded device. */
+ if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+ fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+ main_port_id);
return;
}
init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, RTE_UINT16);
+ member_id, RTE_UINT16);
static cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
port_id, RTE_UINT16);
static cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
- .help_str = "set bonding primary <slave_id> <port_id>: "
- "Set the primary slave for port_id",
+ .help_str = "set bonding primary <member_id> <port_id>: "
+ "Set the primary member for port_id",
.data = NULL,
.tokens = {
(void *)&cmd_setbonding_primary_set,
(void *)&cmd_setbonding_primary_bonding,
(void *)&cmd_setbonding_primary_primary,
- (void *)&cmd_setbonding_primary_slave,
+ (void *)&cmd_setbonding_primary_member,
(void *)&cmd_setbonding_primary_port,
NULL
}
};
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
cmdline_fixed_string_t add;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_add_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_add_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* add the slave for a bonded device. */
- if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+ /* add the member for a bonded device. */
+ if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to add slave %d to master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to add member %d to main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
- ports[master_port_id].update_conf = 1;
+ ports[main_port_id].update_conf = 1;
init_port_config();
- set_port_slave_flag(slave_port_id);
+ set_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
- .f = cmd_add_bonding_slave_parsed,
- .help_str = "add bonding slave <slave_id> <port_id>: "
- "Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+ .f = cmd_add_bonding_member_parsed,
+ .help_str = "add bonding member <member_id> <port_id>: "
+ "Add a member device to a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_addbonding_slave_add,
- (void *)&cmd_addbonding_slave_bonding,
- (void *)&cmd_addbonding_slave_slave,
- (void *)&cmd_addbonding_slave_slaveid,
- (void *)&cmd_addbonding_slave_port,
+ (void *)&cmd_addbonding_member_add,
+ (void *)&cmd_addbonding_member_bonding,
+ (void *)&cmd_addbonding_member_member,
+ (void *)&cmd_addbonding_member_memberid,
+ (void *)&cmd_addbonding_member_port,
NULL
}
};
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
cmdline_fixed_string_t remove;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_remove_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_remove_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* remove the slave from a bonded device. */
- if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+ /* remove the member from a bonded device. */
+ if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to remove slave %d from master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to remove member %d from main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
init_port_config();
- clear_port_slave_flag(slave_port_id);
+ clear_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
- .f = cmd_remove_bonding_slave_parsed,
- .help_str = "remove bonding slave <slave_id> <port_id>: "
- "Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+ .f = cmd_remove_bonding_member_parsed,
+ .help_str = "remove bonding member <member_id> <port_id>: "
+ "Remove a member device from a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_removebonding_slave_remove,
- (void *)&cmd_removebonding_slave_bonding,
- (void *)&cmd_removebonding_slave_slave,
- (void *)&cmd_removebonding_slave_slaveid,
- (void *)&cmd_removebonding_slave_port,
+ (void *)&cmd_removebonding_member_remove,
+ (void *)&cmd_removebonding_member_bonding,
+ (void *)&cmd_removebonding_member_member,
+ (void *)&cmd_removebonding_member_memberid,
+ (void *)&cmd_removebonding_member_port,
NULL
}
};
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
},
{
&cmd_set_bonding_primary,
- "set bonding primary (slave_id) (port_id)\n"
- " Set the primary slave for a bonded device.\n",
+ "set bonding primary (member_id) (port_id)\n"
+ " Set the primary member for a bonded device.\n",
},
{
- &cmd_add_bonding_slave,
- "add bonding slave (slave_id) (port_id)\n"
- " Add a slave device to a bonded device.\n",
+ &cmd_add_bonding_member,
+ "add bonding member (member_id) (port_id)\n"
+ " Add a member device to a bonded device.\n",
},
{
- &cmd_remove_bonding_slave,
- "remove bonding slave (slave_id) (port_id)\n"
- " Remove a slave device from a bonded device.\n",
+ &cmd_remove_bonding_member,
+ "remove bonding member (member_id) (port_id)\n"
+ " Remove a member device from a bonded device.\n",
},
{
&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..9d35d8aa47 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
#include "rte_eth_bond_8023ad.h"
#define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
+/** Maximum number of packets to one member queued in RX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS 3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS 1
/**
* Timeouts definitions (5.4.4 in 802.1AX documentation).
*/
@@ -113,7 +113,7 @@ struct port {
enum rte_bond_8023ad_selection selected;
/** Indicates if either allmulti or promisc has been enforced on the
- * slave so that we can receive lacp packets
+ * member so that we can receive lacp packets
*/
#define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
#define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
uint8_t external_sm;
struct rte_ether_addr mac_addr;
- struct rte_eth_link slave_link;
- /***< slave link properties */
+ struct rte_eth_link member_link;
+ /**< member link properties */
/**
* Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
/**
* @internal
*
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
*
* @param dev Bonded interface
* @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
/**
* @internal
*
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
*
* @param dev Bonded interface
* @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
*
* Passes given slow packet to state machines management logic.
* @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
* @param slot_pkt Slow packet.
*/
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt);
+ uint16_t member_id, struct rte_mbuf *pkt);
/**
* @internal
*
- * Appends given slave used slave
+ * Activates the given member in 802.1AX mode.
*
* @param dev Bonded interface.
- * @param port_id Slave port ID to be added
+ * @param port_id Member port ID to be added
*
* @return
* 0 on success, negative value otherwise.
*/
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
/**
* @internal
*
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
*
* @param dev Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
*
* @return
* 0 on success, negative value otherwise.
*/
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
/**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
* @param bond_dev Bonded device
*/
void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port);
+ uint16_t member_port);
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
int
bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
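For context, not part of this patch: a minimal C sketch of how an application could read per-member LACP state once the public API is renamed later in this series. The port ids are placeholders and error handling is reduced to the bare minimum.

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

/* Illustrative only: dump selection and actor state for one member,
 * assuming the renamed rte_eth_bond_8023ad_member_info() from this series. */
static void
dump_member_lacp_state(uint16_t bonded_port_id, uint16_t member_id)
{
	struct rte_eth_bond_8023ad_member_info info;

	if (rte_eth_bond_8023ad_member_info(bonded_port_id, member_id, &info) != 0)
		return;

	printf("member %u: selected=%d actor_state=0x%x\n",
		member_id, (int)info.selected, (unsigned int)info.actor_state);
}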
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
#include "eth_bond_8023ad_private.h"
#include "rte_eth_bond_alb.h"
-#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG ("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG ("primary")
#define PMD_BOND_MODE_KVARG ("mode")
#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
/** Port Queue Mapping Structure */
struct bond_rx_queue {
uint16_t queue_id;
- /**< Next active_slave to poll */
- uint16_t active_slave;
+ /**< Next active_member to poll */
+ uint16_t active_member;
/**< Queue Id */
struct bond_dev_private *dev_private;
/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
/**< Copy of TX configuration structure for queue */
};
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
- uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
- uint16_t slave_count; /**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+ uint16_t members[RTE_MAX_ETHPORTS]; /**< Member port id array */
+ uint16_t member_count; /**< Number of members */
};
-struct bond_slave_details {
+struct bond_member_details {
uint16_t port_id;
uint8_t link_status_poll_enabled;
uint8_t link_status_wait_to_complete;
uint8_t last_link_status;
- /**< Port Id of slave eth_dev */
+ /**< Port Id of member eth_dev */
struct rte_ether_addr persisted_mac_addr;
uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next;
- /* Slaves flows */
+ /* Members flows */
struct rte_flow *flows[RTE_MAX_ETHPORTS];
/* Flow description for synchronization */
struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
};
typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
/** Link Bonding PMD device private configuration Structure */
struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
rte_spinlock_t lock;
rte_spinlock_t lsc_lock;
- uint16_t primary_port; /**< Primary Slave Port */
- uint16_t current_primary_port; /**< Primary Slave Port */
+ uint16_t primary_port; /**< Primary Member Port */
+ uint16_t current_primary_port; /**< Primary Member Port */
uint16_t user_defined_primary_port;
/**< Flag for whether primary port is user defined or not */
@@ -137,16 +137,16 @@ struct bond_dev_private {
uint16_t nb_rx_queues; /**< Total number of rx queues */
uint16_t nb_tx_queues; /**< Total number of tx queues*/
- uint16_t active_slave_count; /**< Number of active slaves */
- uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
+ uint16_t active_member_count; /**< Number of active members */
+ uint16_t active_members[RTE_MAX_ETHPORTS]; /**< Active member list */
- uint16_t slave_count; /**< Number of bonded slaves */
- struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
- /**< Array of bonded slaves details */
+ uint16_t member_count; /**< Number of bonded members */
+ struct bond_member_details members[RTE_MAX_ETHPORTS];
+ /**< Array of bonded members details */
struct mode8023ad_private mode4;
- uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
- /**< TLB active slaves send order */
+ uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+ /**< TLB active members send order */
struct mode_alb_private mode6;
uint64_t rx_offload_capa; /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
uint8_t rss_key_len; /**< hash key length in bytes. */
struct rte_kvargs *kvlist;
- uint8_t slave_update_idx;
+ uint8_t member_update_idx;
bool kvargs_processing_is_done;
@@ -191,19 +191,21 @@ struct bond_dev_private {
extern const struct eth_dev_ops default_dev_ops;
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
uint16_t pos;
- for (pos = 0; pos < slaves_count; pos++) {
- if (slave_id == slaves[pos])
+ for (pos = 0; pos < members_count; pos++) {
+ if (member_id == members[pos])
break;
}
@@ -217,13 +219,13 @@ int
valid_bonded_port_id(uint16_t port_id);
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
int
mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *dst_mac_addr);
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id);
+ uint16_t member_port_id);
int
bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
void *param, void *ret_param);
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args);
int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
const char *value, void *extra_args);
int
@@ -323,7 +325,7 @@ void
bond_tlb_enable(struct bond_dev_private *internals);
void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
int
bond_ethdev_stop(struct rte_eth_dev *eth_dev);
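A small sketch, not part of the patch and with a hypothetical wrapper name, of how the renamed find_member_by_id() helper above is typically used to test whether a port is already on the active list:

/* Illustrative only: returns 1 when member_id is already in the active list.
 * find_member_by_id() returns members_count when the id is not found. */
static inline int
bond_member_is_active(struct bond_dev_private *internals, uint16_t member_id)
{
	return find_member_by_id(internals->active_members,
			internals->active_member_count, member_id) !=
			internals->active_member_count;
}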
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
*
* RTE Link Bonding Ethernet Device
* Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
* these interfaces based on the mode of operation specified and supported.
* This implementation supports 4 modes of operation round robin, active backup
* balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
#define BONDING_MODE_ROUND_ROBIN (0)
/**< Round Robin (Mode 0).
* In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
#define BONDING_MODE_ACTIVE_BACKUP (1)
/**< Active Backup (Mode 1).
* In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available members. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
#define BONDING_MODE_BALANCE (2)
/**< Balance (Mode 2).
* In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
* See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
#define BONDING_MODE_BROADCAST (3)
/**< Broadcast (Mode 3).
* In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
#define BONDING_MODE_8023AD (4)
/**< 802.3AD (Mode 4).
*
@@ -62,22 +66,22 @@ extern "C" {
* be handled with the expected latency and this may cause the link status to be
* incorrectly marked as down or failure to correctly negotiate with peers.
* - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least 2 times the member count.
*/
#define BONDING_MODE_TLB (5)
/**< Adaptive TLB (Mode 5)
* This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
#define BONDING_MODE_ALB (6)
/**< Adaptive Load Balancing (Mode 6)
* This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
* bonding driver intercepts ARP replies send by local system and overwrites its
* source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When local system sends ARP request, it saves IP
* information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of member MACs assigned and ARP reply sent to that peer.
*/
/* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
rte_eth_bond_free(const char *name);
/**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
/**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+ return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
/**
* Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
rte_eth_bond_mode_get(uint16_t bonded_port_id);
/**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
/**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
*
* @return
- * Port Id of primary slave on success, -1 on failure
+ * Port Id of primary member on success, -1 on failure
*/
int
rte_eth_bond_primary_get(uint16_t bonded_port_id);
/**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with a list of the member port IDs of the bonded device
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current members
+ * @param len Length of members array
*
* @return
- * Number of slaves associated with bonded device on success,
+ * Number of members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
/**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with a list of the active member port IDs of the bonded
* device.
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of active slaves associated with bonded device on success,
+ * Number of active members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len)
+{
+ return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
/**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
*
* @param bonded_port_id Port ID of bonded device.
* @param mac_addr MAC Address to use on bonded device overriding
- * slaves MAC addresses
+ * members MAC addresses
*
* @return
* 0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
struct rte_ether_addr *mac_addr);
/**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
*
* @param bonded_port_id Port ID of bonded device.
*
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
/**
* Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
*
* @param bonded_port_id Port ID of bonded device.
* @param internal_ms Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
/**
* Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
*
* @param bonded_port_id Port ID of bonded device.
*
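To show how the renamed public API reads from an application, a sketch only, not part of the patch; the helper name and the choice of error handling are illustrative:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: attach one member port and return how many members
 * the bonded device then reports (negative value on failure). */
static int
attach_member(uint16_t bonded_port_id, uint16_t member_port_id)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int ret;

	ret = rte_eth_bond_member_add(bonded_port_id, member_port_id);
	if (ret != 0)
		return ret;

	/* The old rte_eth_bond_slaves_get() would still compile here, but as a
	 * deprecated static inline wrapper it now emits a build warning. */
	return rte_eth_bond_members_get(bonded_port_id, members, RTE_MAX_ETHPORTS);
}

Keeping the old symbols as __rte_deprecated static inline wrappers means existing applications keep building against the new header and only see a warning until they move to the member_* names.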
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..7cf44d0595 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
#define MODE4_DEBUG(fmt, ...) \
rte_log(RTE_LOG_DEBUG, bond_logtype, \
"%6u [Port %u: %s] " fmt, \
- bond_dbg_get_time_diff_ms(), slave_id, \
+ bond_dbg_get_time_diff_ms(), member_id, \
__func__, ##__VA_ARGS__)
static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
}
static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
uint8_t warnings;
do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
if (warnings & WRN_RX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+ "Member %u: failed to enqueue LACP packet into RX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will notwork correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_TX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+ "Member %u: failed to enqueue LACP packet into TX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_RX_MARKER_TO_FAST)
- RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: marker to early - ignoring.",
+ member_id);
if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
RTE_BOND_LOG(INFO,
- "Slave %u: ignoring unknown slow protocol frame type",
- slave_id);
+ "Member %u: ignoring unknown slow protocol frame type",
+ member_id);
}
if (warnings & WRN_UNKNOWN_MARKER_TYPE)
- RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+ member_id);
if (warnings & WRN_NOT_LACP_CAPABLE)
- MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+ MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
}
static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
* @param port Port on which LACPDU was received.
*/
static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
struct lacpdu *lacp)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
uint64_t timeout;
if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
* @param port Port to handle state machine.
*/
static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Calculate if either site is LACP enabled */
uint64_t timeout;
uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port Port to handle state machine.
*/
static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Save current state for later use */
const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing started.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing started.",
+ internals->port_id, member_id);
}
} else {
if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing stopped.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing stopped.",
+ internals->port_id, member_id);
}
}
}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port
*/
static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
struct rte_mbuf *lacp_pkt = NULL;
struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
/* Source and destination MAC */
rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
return;
}
} else {
- uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+ uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, 1);
- pkts_sent = rte_eth_tx_burst(slave_id,
+ pkts_sent = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, pkts_sent);
if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
* @param port_pos Port to assign.
*/
static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
{
struct port *agg, *port;
- uint16_t slaves_count, new_agg_id, i, j = 0;
- uint16_t *slaves;
+ uint16_t members_count, new_agg_id, i, j = 0;
+ uint16_t *members;
uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
- uint16_t default_slave = 0;
+ uint16_t default_member = 0;
struct rte_eth_link link_info;
uint16_t agg_new_idx = 0;
int ret;
- slaves = internals->active_slaves;
- slaves_count = internals->active_slave_count;
- port = &bond_mode_8023ad_ports[slave_id];
+ members = internals->active_members;
+ members_count = internals->active_member_count;
+ port = &bond_mode_8023ad_ports[member_id];
/* Search for aggregator suitable for this port */
- for (i = 0; i < slaves_count; ++i) {
- agg = &bond_mode_8023ad_ports[slaves[i]];
+ for (i = 0; i < members_count; ++i) {
+ agg = &bond_mode_8023ad_ports[members[i]];
/* Skip ports that are not aggregators */
- if (agg->aggregator_port_id != slaves[i])
+ if (agg->aggregator_port_id != members[i])
continue;
- ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+ ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slaves[i], rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ members[i], rte_strerror(-ret));
continue;
}
agg_count[i] += 1;
agg_bandwidth[i] += link_info.link_speed;
- /* Actors system ID is not checked since all slave device have the same
+ /* Actors system ID is not checked since all member devices have the same
* ID (MAC address). */
if ((agg->actor.key == port->actor.key &&
agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
if (j == 0)
- default_slave = i;
+ default_member = i;
j++;
}
}
switch (internals->mode4.agg_selection) {
case AGG_COUNT:
- agg_new_idx = max_index(agg_count, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_count, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_BANDWIDTH:
- agg_new_idx = max_index(agg_bandwidth, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_bandwidth, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_STABLE:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
default:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
}
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
MODE4_DEBUG("-> SELECTED: ID=%3u\n"
"\t%s aggregator ID=%3u\n",
port->aggregator_port_id,
- port->aggregator_port_id == slave_id ?
+ port->aggregator_port_id == member_id ?
"aggregator not found, using default" : "aggregator found",
port->aggregator_port_id);
}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
}
static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
struct rte_mbuf *lacp_pkt) {
struct lacpdu_header *lacp;
struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
partner = &lacp->lacpdu.partner;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
/* This LACP frame is sending to the bonding port
* so pass it to rx_machine.
*/
- rx_machine(internals, slave_id, &lacp->lacpdu);
+ rx_machine(internals, member_id, &lacp->lacpdu);
} else {
char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
}
rte_pktmbuf_free(lacp_pkt);
} else
- rx_machine(internals, slave_id, NULL);
+ rx_machine(internals, member_id, NULL);
}
static void
bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
- uint16_t slave_id)
+ uint16_t member_id)
{
#define DEDICATED_QUEUE_BURST_SIZE 32
struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ uint16_t rx_count = rte_eth_rx_burst(member_id,
internals->mode4.dedicated_queues.rx_qid,
lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
uint16_t i;
for (i = 0; i < rx_count; i++)
- bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ bond_mode_8023ad_handle_slow_pkt(internals, member_id,
lacp_pkt[i]);
} else {
- rx_machine_update(internals, slave_id, NULL);
+ rx_machine_update(internals, member_id, NULL);
}
}
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
struct bond_dev_private *internals = bond_dev->data->dev_private;
struct port *port;
struct rte_eth_link link_info;
- struct rte_ether_addr slave_addr;
+ struct rte_ether_addr member_addr;
struct rte_mbuf *lacp_pkt = NULL;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
/* Update link status on each port */
- for (i = 0; i < internals->active_slave_count; i++) {
+ for (i = 0; i < internals->active_member_count; i++) {
uint16_t key;
int ret;
- slave_id = internals->active_slaves[i];
- ret = rte_eth_link_get_nowait(slave_id, &link_info);
+ member_id = internals->active_members[i];
+ ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_id, rte_strerror(-ret));
}
if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
key = 0;
}
- rte_eth_macaddr_get(slave_id, &slave_addr);
- port = &bond_mode_8023ad_ports[slave_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
+ port = &bond_mode_8023ad_ports[member_id];
key = rte_cpu_to_be_16(key);
if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, NTT);
}
- if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
- rte_ether_addr_copy(&slave_addr, &port->actor.system);
- if (port->aggregator_port_id == slave_id)
+ if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+ rte_ether_addr_copy(&member_addr, &port->actor.system);
+ if (port->aggregator_port_id == member_id)
SM_FLAG_SET(port, NTT);
}
}
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if ((port->actor.key &
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (retval != 0)
lacp_pkt = NULL;
- rx_machine_update(internals, slave_id, lacp_pkt);
+ rx_machine_update(internals, member_id, lacp_pkt);
} else {
bond_mode_8023ad_dedicated_rxq_process(internals,
- slave_id);
+ member_id);
}
- periodic_machine(internals, slave_id);
- mux_machine(internals, slave_id);
- tx_machine(internals, slave_id);
- selection_logic(internals, slave_id);
+ periodic_machine(internals, member_id);
+ mux_machine(internals, member_id);
+ tx_machine(internals, member_id);
+ selection_logic(internals, member_id);
SM_FLAG_CLR(port, BEGIN);
- show_warnings(slave_id);
+ show_warnings(member_id);
}
rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
}
static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
{
int ret;
- ret = rte_eth_allmulticast_enable(slave_id);
+ ret = rte_eth_allmulticast_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_allmulticast_get(slave_id)) {
+ if (rte_eth_allmulticast_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_ALLMULTI;
return 0;
}
- ret = rte_eth_promiscuous_enable(slave_id);
+ ret = rte_eth_promiscuous_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_promiscuous_get(slave_id)) {
+ if (rte_eth_promiscuous_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_PROMISC;
return 0;
}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
}
static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
{
int ret;
- switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+ switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
case BOND_8023AD_FORCED_ALLMULTI:
- RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
- ret = rte_eth_allmulticast_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+ ret = rte_eth_allmulticast_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
case BOND_8023AD_FORCED_PROMISC:
- RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
- ret = rte_eth_promiscuous_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+ ret = rte_eth_promiscuous_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
}
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
- uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+ uint16_t member_id)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct port_params initial = {
.system = { { 0 } },
.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
struct bond_tx_queue *bd_tx_q;
uint16_t q_id;
- /* Given slave mus not be in active list */
- RTE_ASSERT(find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) == internals->active_slave_count);
+ /* Given member must not be in active list */
+ RTE_ASSERT(find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) == internals->active_member_count);
RTE_SET_USED(internals); /* used only for assert when enabled */
memcpy(&port->actor, &initial, sizeof(struct port_params));
/* Standard requires that port ID must be grater than 0.
* Add 1 do get corresponding port_number */
- port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+ port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
memcpy(&port->partner, &initial, sizeof(struct port_params));
memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
port->sm_flags = SM_FLAGS_BEGIN;
/* use this port as aggregator */
- port->aggregator_port_id = slave_id;
+ port->aggregator_port_id = member_id;
- if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
- RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
- slave_id);
+ if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+ RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+ member_id);
}
timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_ASSERT(port->rx_ring == NULL);
RTE_ASSERT(port->tx_ring == NULL);
- socket_id = rte_eth_dev_socket_id(slave_id);
+ socket_id = rte_eth_dev_socket_id(member_id);
if (socket_id == -1)
socket_id = rte_socket_id();
element_size = sizeof(struct slow_protocol_frame) +
RTE_PKTMBUF_HEADROOM;
- /* The size of the mempool should be at least:
- * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
- total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+ /*
+ * The size of the mempool should be at least:
+ * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+ */
+ total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
total_tx_desc += bd_tx_q->nb_tx_desc;
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
/* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
port->rx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_Member_RX_PKTS), socket_id, 0);
if (port->rx_ring == NULL) {
- rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
/* TX ring is at least one pkt longer to make room for marker packet. */
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
port->tx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_Member_TX_PKTS + 1), socket_id, 0);
if (port->tx_ring == NULL) {
- rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
}
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
- uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+ uint16_t member_id)
{
void *pkt = NULL;
struct port *port = NULL;
uint8_t old_partner_state;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
ACTOR_STATE_CLR(port, AGGREGATION);
port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
old_partner_state = port->partner_state;
record_default(port);
- bond_mode_8023ad_unregister_lacp_mac(slave_id);
+ bond_mode_8023ad_unregister_lacp_mac(member_id);
/* If partner timeout state changes then disable timer */
if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct rte_ether_addr slave_addr;
- struct port *slave, *agg_slave;
- uint16_t slave_id, i, j;
+ struct rte_ether_addr member_addr;
+ struct port *member, *agg_member;
+ uint16_t member_id, i, j;
bond_mode_8023ad_stop(bond_dev);
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- slave = &bond_mode_8023ad_ports[slave_id];
- rte_eth_macaddr_get(slave_id, &slave_addr);
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ member = &bond_mode_8023ad_ports[member_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
- if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+ if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
continue;
- rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+ rte_ether_addr_copy(&member_addr, &member->actor.system);
/* Do nothing if this port is not an aggregator. In other case
* Set NTT flag on every port that use this aggregator. */
- if (slave->aggregator_port_id != slave_id)
+ if (member->aggregator_port_id != member_id)
continue;
- for (j = 0; j < internals->active_slave_count; j++) {
- agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
- if (agg_slave->aggregator_port_id == slave_id)
- SM_FLAG_SET(agg_slave, NTT);
+ for (j = 0; j < internals->active_member_count; j++) {
+ agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+ if (agg_member->aggregator_port_id == member_id)
+ SM_FLAG_SET(agg_member, NTT);
}
}
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
struct bond_dev_private *internals = bond_dev->data->dev_private;
uint16_t i;
- for (i = 0; i < internals->active_slave_count; i++)
- bond_mode_8023ad_activate_slave(bond_dev,
- internals->active_slaves[i]);
+ for (i = 0; i < internals->active_member_count; i++)
+ bond_mode_8023ad_activate_member(bond_dev,
+ internals->active_members[i]);
return 0;
}
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt)
+ uint16_t member_id, struct rte_mbuf *pkt)
{
struct mode8023ad_private *mode4 = &internals->mode4;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct marker_header *m_hdr;
uint64_t marker_timer, old_marker_timer;
int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
}
} else {
/* Send packet directly to the slow queue */
- uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+ uint16_t tx_count = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, 1);
- tx_count = rte_eth_tx_burst(slave_id,
+ tx_count = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, tx_count);
if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
goto free_out;
}
} else
- rx_machine_update(internals, slave_id, pkt);
+ rx_machine_update(internals, member_id, pkt);
} else {
wrn = WRN_UNKNOWN_SLOW_TYPE;
goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *info)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
bond_dev = &rte_eth_devices[port_id];
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
info->selected = port->selected;
info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
}
static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
return -EINVAL;
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
}
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, DISTRIBUTING);
}
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, COLLECTING);
}
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
struct mode8023ad_private *mode4 = &internals->mode4;
struct port *port;
void *pkt = NULL;
- uint16_t i, slave_id;
+ uint16_t i, member_id;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
/* This is LACP frame so pass it to rx callback.
* Callback is responsible for freeing mbuf.
*/
- mode4->slowrx_cb(slave_id, lacp_pkt);
+ mode4->slowrx_cb(member_id, lacp_pkt);
}
}
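For illustration only, a minimal sketch (handler name and body are hypothetical, headers omitted) of an external slow-Rx callback matching the renamed rte_eth_bond_8023ad_ext_slowrx_fn signature declared below; as the comment above notes, the callback owns and must free the mbuf:

	static void
	lacp_rx_handler(uint16_t member_id, struct rte_mbuf *lacp_pkt)
	{
		/* inspect or parse the LACPDU received on this member port here */
		printf("LACPDU received on member %u\n", member_id);
		rte_pktmbuf_free(lacp_pkt);
	}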
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
struct rte_mbuf *lacp_pkt);
enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
uint16_t system_priority;
/**< System priority (unused in current implementation) */
struct rte_ether_addr system;
- /**< System ID - Slave MAC address, same as bonding MAC address */
+ /**< System ID - Member MAC address, same as bonding MAC address */
uint16_t key;
/**< Speed information (implementation dependent) and duplex. */
uint16_t port_priority;
/**< Priority of this port (unused in current implementation) */
uint16_t port_number;
- /**< Port number. It corresponds to slave port id. */
+ /**< Port number. It corresponds to member port id. */
} __rte_packed __rte_aligned(2);
struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
enum rte_bond_8023ad_agg_selection agg_selection;
};
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
enum rte_bond_8023ad_selection selected;
uint8_t actor_state;
struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
/**
* @internal
*
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
*
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param conf buffer for configuration
* @return
* 0 - if ok
- * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ * -EINVAL if conf is NULL or member id is invalid (not a member of given
* bonded device or is not active).
*/
+__rte_experimental
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf)
+{
+ return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
#ifdef __cplusplus
}
#endif
/**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when collection enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
/**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when distribution enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
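Taken together, a sketch of driving the COLLECTING/DISTRIBUTING flags from an external 802.3ad state machine (placeholder port ids, return codes mostly unchecked):

	rte_eth_bond_8023ad_ext_collect(bond_port_id, member_port_id, 1);
	rte_eth_bond_8023ad_ext_distrib(bond_port_id, member_port_id, 1);

	/* read the flags back: 1 if set, 0 if not, -EINVAL on an invalid member */
	int collecting = rte_eth_bond_8023ad_ext_collect_get(bond_port_id, member_port_id);
	int distributing = rte_eth_bond_8023ad_ext_distrib_get(bond_port_id, member_port_id);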
/**
* LACPDU transmit path for external 802.3ad state machine. Caller retains
* ownership of the packet on failure.
*
* @param port_id Bonding device id
- * @param slave_id Port ID of valid slave device.
+ * @param member_id Port ID of valid member device.
* @param lacp_pkt mbuf containing LACPDU.
*
* @return
* 0 on success, negative value otherwise.
*/
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt);
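And a sketch of the transmit side (lacp_mbuf is assumed to already hold a complete LACPDU; per the comment above, the caller keeps ownership of the mbuf on failure):

	if (rte_eth_bond_8023ad_ext_slowtx(bond_port_id, member_port_id, lacp_mbuf) < 0)
		rte_pktmbuf_free(lacp_mbuf);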
/**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
*
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
* dedicated 802.3ad control plane traffic. A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
* for processing in the LACP state machine. This removes the need to filter
* these packets in the bonded device's data path. The additional tx queue is
* used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded device's data path.
*
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
* filter rule required for rx and have enough queues that one rx and tx queue
* can be reserved for the LACP state machines control packets.
*
@@ -296,7 +305,7 @@ int
rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
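A minimal sketch of enabling the dedicated control-plane queues on a bonded port before it is started (placeholder port id; on failure the application would simply keep the default software slow path):

	if (rte_eth_bond_8023ad_dedicated_queues_enable(bond_port_id) != 0)
		printf("dedicated 802.3ad queues unavailable, using default slow path\n");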
/**
- * Disable slow queue on slaves
+ * Disable slow queue on members
*
* This function disables hardware slow packet filter.
*
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
}
static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
{
uint16_t idx;
- idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
- internals->mode6.last_slave = idx;
- return internals->active_slaves[idx];
+ idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+ internals->mode6.last_member = idx;
+ return internals->active_members[idx];
}
int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
/* Fill hash table with initial values */
memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
rte_spinlock_init(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
internals->mode6.ntt = 0;
/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
/*
* We got a reply for an ARP Request sent by the application. We need to
* update the client table when the received data differs from what is stored
- * in ALB table and issue sending update packet to that slave.
+ * in the ALB table and issue an update packet to that member.
*/
rte_spinlock_lock(&internals->mode6.lock);
if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
client_info->cli_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_sha,
&client_info->cli_mac);
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
&arp->arp_data.arp_tha,
&client_info->cli_mac);
}
- rte_eth_macaddr_get(client_info->slave_idx,
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
}
- /* Assign new slave to this client and update src mac in ARP */
+ /* Assign new member to this client and update src mac in ARP */
client_info->in_use = 1;
client_info->ntt = 0;
client_info->app_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_tha,
&client_info->cli_mac);
client_info->cli_ip = arp->arp_data.arp_tip;
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
{
struct rte_ether_hdr *eth_h;
struct rte_arp_hdr *arp_h;
- uint16_t slave_idx;
+ uint16_t member_idx;
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
arp_h->arp_plen = sizeof(uint32_t);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
- slave_idx = client_info->slave_idx;
+ member_idx = client_info->member_idx;
rte_spinlock_unlock(&internals->mode6.lock);
- return slave_idx;
+ return member_idx;
}
void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
int i;
- /* If active slave count is 0, it's pointless to refresh alb table */
- if (internals->active_slave_count <= 0)
+ /* If active member count is 0, it's pointless to refresh alb table */
+ if (internals->active_member_count <= 0)
return;
rte_spinlock_lock(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
internals->mode6.ntt = 1;
}
}
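As a concrete illustration of calculate_member() above: mode6.last_member advances by one (modulo active_member_count) per call, so with three active members new ARP clients are pinned to active member slots in the cycle 0, 1, 2, 0, 1, ..., and bond_mode_alb_client_list_upd() re-runs the same assignment after membership changes.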
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
uint32_t cli_ip;
/**< Client IP address */
- uint16_t slave_idx;
- /**< Index of slave on which we connect with that client */
+ uint16_t member_idx;
+ /**< Index of member on which we connect with that client */
uint8_t in_use;
/**< Flag indicating if entry in client table is currently used */
uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
/**< Mempool for creating ARP update packets */
uint8_t ntt;
/**< Flag indicating if we need to send update to any client on next tx */
- uint32_t last_slave;
- /**< Index of last used slave in client table */
+ uint32_t last_member;
+ /**< Index of last used member in client table */
rte_spinlock_t lock;
};
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
struct bond_dev_private *internals);
/**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. If the packet is an ARP Request, it is sent on the primary member.
+ * If it is an ARP Reply, it is sent on the member stored in the client table for that
* connection. On Reply, the function also updates data in the client table.
*
* @param eth_h ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_upd(struct client_data *client_info,
struct rte_mbuf *pkt, struct bond_dev_private *internals);
/**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
*
* @param bond_dev Pointer to bonded device struct.
*/
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
}
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
{
int i;
struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- /* Check if any of slave devices is a bonded device */
- for (i = 0; i < internals->slave_count; i++)
- if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+ /* Check if any of member devices is a bonded device */
+ for (i = 0; i < internals->member_count; i++)
+ if (valid_bonded_port_id(internals->members[i].port_id) == 0)
return 1;
return 0;
}
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
{
- RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
- /* Verify that slave_port_id refers to a non bonded port */
- if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+ /* Verify that member_port_id refers to a non bonded port */
+ if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
internals->mode == BONDING_MODE_8023AD) {
- RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
- " mode as slave is also a bonded device, only "
+ RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+ " mode as member is also a bonded device, only "
"physical devices can be support in this mode.");
return -1;
}
- if (internals->port_id == slave_port_id) {
+ if (internals->port_id == member_port_id) {
RTE_BOND_LOG(ERR,
- "Cannot add the bonded device itself as its slave.");
+ "Cannot add the bonded device itself as its member.");
return -1;
}
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
}
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD)
- bond_mode_8023ad_activate_slave(eth_dev, port_id);
+ bond_mode_8023ad_activate_member(eth_dev, port_id);
if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB) {
- internals->tlb_slaves_order[active_count] = port_id;
+ internals->tlb_members_order[active_count] = port_id;
}
- RTE_ASSERT(internals->active_slave_count <
- (RTE_DIM(internals->active_slaves) - 1));
+ RTE_ASSERT(internals->active_member_count <
+ (RTE_DIM(internals->active_members) - 1));
- internals->active_slaves[internals->active_slave_count] = port_id;
- internals->active_slave_count++;
+ internals->active_members[internals->active_member_count] = port_id;
+ internals->active_member_count++;
if (internals->mode == BONDING_MODE_TLB)
- bond_tlb_activate_slave(internals);
+ bond_tlb_activate_member(internals);
if (internals->mode == BONDING_MODE_ALB)
bond_mode_alb_client_list_upd(eth_dev);
}
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
- uint16_t slave_pos;
+ uint16_t member_pos;
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD) {
bond_mode_8023ad_stop(eth_dev);
- bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+ bond_mode_8023ad_deactivate_member(eth_dev, port_id);
} else if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB)
bond_tlb_disable(internals);
- slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+ member_pos = find_member_by_id(internals->active_members, active_count,
port_id);
- /* If slave was not at the end of the list
- * shift active slaves up active array list */
- if (slave_pos < active_count) {
+ /*
+ * If member was not at the end of the list
+ * shift active members up active array list.
+ */
+ if (member_pos < active_count) {
active_count--;
- memmove(internals->active_slaves + slave_pos,
- internals->active_slaves + slave_pos + 1,
- (active_count - slave_pos) *
- sizeof(internals->active_slaves[0]));
+ memmove(internals->active_members + member_pos,
+ internals->active_members + member_pos + 1,
+ (active_count - member_pos) *
+ sizeof(internals->active_members[0]));
}
- RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
- internals->active_slave_count = active_count;
+ RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+ internals->active_member_count = active_count;
if (eth_dev->data->dev_started) {
if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
}
static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
if (unlikely(slab & mask)) {
uint16_t vlan_id = pos + i;
- res = rte_eth_dev_vlan_filter(slave_port_id,
+ res = rte_eth_dev_vlan_filter(member_port_id,
vlan_id, 1);
}
}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
{
struct rte_flow *flow;
struct rte_flow_error ferror;
- uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+ uint16_t member_port_id = internals->members[member_id].port_id;
if (internals->flow_isolated_valid != 0) {
- if (rte_eth_dev_stop(slave_port_id) != 0) {
+ if (rte_eth_dev_stop(member_port_id) != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_port_id);
+ member_port_id);
return -1;
}
- if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+ if (rte_flow_isolate(member_port_id, internals->flow_isolated,
&ferror)) {
- RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
- " %d: %s", slave_id, ferror.message ?
+ RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+ " %d: %s", member_id, ferror.message ?
ferror.message : "(no stated reason)");
return -1;
}
}
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- flow->flows[slave_id] = rte_flow_create(slave_port_id,
+ flow->flows[member_id] = rte_flow_create(member_port_id,
flow->rule.attr,
flow->rule.pattern,
flow->rule.actions,
&ferror);
- if (flow->flows[slave_id] == NULL) {
- RTE_BOND_LOG(ERR, "Cannot create flow for slave"
- " %d: %s", slave_id,
+ if (flow->flows[member_id] == NULL) {
+ RTE_BOND_LOG(ERR, "Cannot create flow for member"
+ " %d: %s", member_id,
ferror.message ? ferror.message :
"(no stated reason)");
- /* Destroy successful bond flows from the slave */
+ /* Destroy successful bond flows from the member */
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- flow->flows[slave_id],
+ if (flow->flows[member_id] != NULL) {
+ rte_flow_destroy(member_port_id,
+ flow->flows[member_id],
&ferror);
- flow->flows[slave_id] = NULL;
+ flow->flows[member_id] = NULL;
}
}
return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
}
static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
internals->reta_size = di->reta_size;
internals->rss_key_len = di->hash_key_size;
- /* Inherit Rx offload capabilities from the first slave device */
+ /* Inherit Rx offload capabilities from the first member device */
internals->rx_offload_capa = di->rx_offload_capa;
internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
- /* Inherit maximum Rx packet size from the first slave device */
+ /* Inherit maximum Rx packet size from the first member device */
internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
- /* Inherit default Rx queue settings from the first slave device */
+ /* Inherit default Rx queue settings from the first member device */
memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
rxconf_i->rx_thresh.pthresh = 0;
rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
/* Setting this to zero should effectively enable default values */
rxconf_i->rx_free_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
rxconf_i->rx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
- /* Inherit Tx offload capabilities from the first slave device */
+ /* Inherit Tx offload capabilities from the first member device */
internals->tx_offload_capa = di->tx_offload_capa;
internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
- /* Inherit default Tx queue settings from the first slave device */
+ /* Inherit default Tx queue settings from the first member device */
memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
txconf_i->tx_thresh.pthresh = 0;
txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
/*
* Setting these parameters to zero assumes that default
- * values will be configured implicitly by slave devices.
+ * values will be configured implicitly by member devices.
*/
txconf_i->tx_free_thresh = 0;
txconf_i->tx_rs_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
txconf_i->tx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
/*
- * If at least one slave device suggests enabling this
- * setting by default, enable it for all slave devices
+ * If at least one member device suggests enabling this
+ * setting by default, enable it for all member devices
* since disabling it may not be necessarily supported.
*/
if (rxconf->rx_drop_en == 1)
rxconf_i->rx_drop_en = 1;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal rx_queue_offload_capa
* value. Thus, the new internal value of default Rx queue offloads
* has to be masked by rx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
internals->rx_queue_offload_capa;
/*
- * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+ * RETA size is the GCD of all members' RETA sizes, so, if all sizes are
* powers of 2, the smallest one is the GCD.
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
if (internals->rss_key_len > di->hash_key_size) {
- RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ RTE_BOND_LOG(WARNING, "member has different rss key size, "
"configuring rss may fail");
internals->rss_key_len = di->hash_key_size;
}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
}
static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal tx_queue_offload_capa
* value. Thus, the new internal value of default Tx queue offloads
* has to be masked by tx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
internals->tx_queue_offload_capa;
}
static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
- memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+ memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
}
static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
- slave_desc_lim->nb_max);
+ member_desc_lim->nb_max);
bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
- slave_desc_lim->nb_min);
+ member_desc_lim->nb_min);
bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
- slave_desc_lim->nb_align);
+ member_desc_lim->nb_align);
if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
}
/* Treat maximum number of segments equal to 0 as unspecified */
- if (slave_desc_lim->nb_seg_max != 0 &&
+ if (member_desc_lim->nb_seg_max != 0 &&
(bond_desc_lim->nb_seg_max == 0 ||
- slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
- bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
- if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+ member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+ bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+ if (member_desc_lim->nb_mtu_seg_max != 0 &&
(bond_desc_lim->nb_mtu_seg_max == 0 ||
- slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
- bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+ member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+ bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
return 0;
}
static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
{
- struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+ struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
struct bond_dev_private *internals;
struct rte_eth_link link_props;
struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
- RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+ RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
return -1;
}
- ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+ ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port_id, strerror(-ret));
+ __func__, member_port_id, strerror(-ret));
return ret;
}
if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
- RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
- slave_port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+ member_port_id);
return -1;
}
- slave_add(internals, slave_eth_dev);
+ member_add(internals, member_eth_dev);
- /* We need to store slaves reta_size to be able to synchronize RETA for all
- * slave devices even if its sizes are different.
+ /* We need to store members' reta_size to be able to synchronize RETA for all
+ * member devices even if their sizes are different.
*/
- internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+ internals->members[internals->member_count].reta_size = dev_info.reta_size;
- if (internals->slave_count < 1) {
- /* if MAC is not user defined then use MAC of first slave add to
+ if (internals->member_count < 1) {
+ /* if MAC is not user defined then use MAC of first member added to
* bonded device */
if (!internals->user_defined_mac) {
if (mac_address_set(bonded_eth_dev,
- slave_eth_dev->data->mac_addrs)) {
+ member_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to set MAC address");
return -1;
}
}
- /* Make primary slave */
- internals->primary_port = slave_port_id;
- internals->current_primary_port = slave_port_id;
+ /* Make primary member */
+ internals->primary_port = member_port_id;
+ internals->current_primary_port = member_port_id;
internals->speed_capa = dev_info.speed_capa;
- /* Inherit queues settings from first slave */
- internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
- internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+ /* Inherit queues settings from first member */
+ internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+ internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
- eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
- eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
&dev_info.rx_desc_lim);
- eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
&dev_info.tx_desc_lim);
} else {
int ret;
internals->speed_capa &= dev_info.speed_capa;
- eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->rx_desc_lim, &dev_info.rx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+ &dev_info.rx_desc_lim);
if (ret != 0)
return ret;
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->tx_desc_lim, &dev_info.tx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+ &dev_info.tx_desc_lim);
if (ret != 0)
return ret;
}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
- if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
- RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
- slave_port_id);
+ if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+ member_port_id);
return -1;
}
- /* Add additional MAC addresses to the slave */
- if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
- RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
- slave_port_id);
+ /* Add additional MAC addresses to the member */
+ if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+ member_port_id);
return -1;
}
- internals->slave_count++;
+ internals->member_count++;
if (bonded_eth_dev->data->dev_started) {
- if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
- slave_port_id);
+ if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+ member_port_id);
return -1;
}
- if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
- slave_port_id);
+ if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+ member_port_id);
return -1;
}
}
- /* Update all slave devices MACs */
- mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices' MACs */
+ mac_address_members_update(bonded_eth_dev);
/* Register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
- /* If bonded device is started then we can add the slave to our active
- * slave array */
+ /*
+ * If bonded device is started then we can add the member to our active
+ * member array.
+ */
if (bonded_eth_dev->data->dev_started) {
- ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+ ret = rte_eth_link_get_nowait(member_port_id, &link_props);
if (ret < 0) {
- rte_eth_dev_callback_unregister(slave_port_id,
+ rte_eth_dev_callback_unregister(member_port_id,
RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&bonded_eth_dev->data->port_id);
- internals->slave_count--;
+ internals->member_count--;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_port_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_port_id, rte_strerror(-ret));
return -1;
}
if (link_props.link_status == RTE_ETH_LINK_UP) {
- if (internals->active_slave_count == 0 &&
+ if (internals->active_member_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
- slave_port_id);
+ member_port_id);
}
}
- /* Add slave details to bonded device */
- slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+ /* Add member details to bonded device */
+ member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
- slave_vlan_filter_set(bonded_port_id, slave_port_id);
+ member_vlan_filter_set(bonded_port_id, member_port_id);
return 0;
}
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
- uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+ uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct rte_flow_error flow_error;
struct rte_flow *flow;
- int i, slave_idx;
+ int i, member_idx;
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) < 0)
+ if (valid_member_port_id(internals, member_port_id) < 0)
return -1;
- /* first remove from active slave list */
- slave_idx = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_port_id);
+ /* first remove from active member list */
+ member_idx = find_member_by_id(internals->active_members,
+ internals->active_member_count, member_port_id);
- if (slave_idx < internals->active_slave_count)
- deactivate_slave(bonded_eth_dev, slave_port_id);
+ if (member_idx < internals->active_member_count)
+ deactivate_member(bonded_eth_dev, member_port_id);
- slave_idx = -1;
- /* now find in slave list */
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == slave_port_id) {
- slave_idx = i;
+ member_idx = -1;
+ /* now find in member list */
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == member_port_id) {
+ member_idx = i;
break;
}
- if (slave_idx < 0) {
- RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
- internals->slave_count);
+ if (member_idx < 0) {
+ RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+ internals->member_count);
return -1;
}
/* Un-register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&rte_eth_devices[bonded_port_id].data->port_id);
- /* Restore original MAC address of slave device */
- rte_eth_dev_default_mac_addr_set(slave_port_id,
- &(internals->slaves[slave_idx].persisted_mac_addr));
+ /* Restore original MAC address of member device */
+ rte_eth_dev_default_mac_addr_set(member_port_id,
+ &internals->members[member_idx].persisted_mac_addr);
- /* remove additional MAC addresses from the slave */
- slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+ /* remove additional MAC addresses from the member */
+ member_remove_mac_addresses(bonded_eth_dev, member_port_id);
/*
- * Remove bond device flows from slave device.
+ * Remove bond device flows from member device.
* Note: don't restore flow isolate mode.
*/
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_idx] != NULL) {
- rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+ if (flow->flows[member_idx] != NULL) {
+ rte_flow_destroy(member_port_id, flow->flows[member_idx],
&flow_error);
- flow->flows[slave_idx] = NULL;
+ flow->flows[member_idx] = NULL;
}
}
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- slave_remove(internals, slave_eth_dev);
- slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ member_remove(internals, member_eth_dev);
+ member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
- /* first slave in the active list will be the primary by default,
+ /* first member in the active list will be the primary by default,
* otherwise use first device in list */
- if (internals->current_primary_port == slave_port_id) {
- if (internals->active_slave_count > 0)
- internals->current_primary_port = internals->active_slaves[0];
- else if (internals->slave_count > 0)
- internals->current_primary_port = internals->slaves[0].port_id;
+ if (internals->current_primary_port == member_port_id) {
+ if (internals->active_member_count > 0)
+ internals->current_primary_port = internals->active_members[0];
+ else if (internals->member_count > 0)
+ internals->current_primary_port = internals->members[0].port_id;
else
internals->primary_port = 0;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
}
- if (internals->active_slave_count < 1) {
- /* if no slaves are any longer attached to bonded device and MAC is not
+ if (internals->active_member_count < 1) {
+ /*
+ * if no members are attached to the bonded device any longer and MAC is not
* user defined then clear MAC of bonded device as it will be reset
- * when a new slave is added */
- if (internals->slave_count < 1 && !internals->user_defined_mac)
+ * when a new member is added.
+ */
+ if (internals->member_count < 1 && !internals->user_defined_mac)
memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
}
- if (internals->slave_count == 0) {
+ if (internals->member_count == 0) {
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
}
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
- if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+ if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
mode == BONDING_MODE_8023AD)
return -1;
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
}
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct bond_dev_private *internals;
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
internals->user_defined_primary_port = 1;
- internals->primary_port = slave_port_id;
+ internals->primary_port = member_port_id;
- bond_ethdev_primary_set(internals, slave_port_id);
+ bond_ethdev_primary_set(internals, member_port_id);
return 0;
}
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count < 1)
+ if (internals->member_count < 1)
return -1;
return internals->current_primary_port;
}
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count > len)
+ if (internals->member_count > len)
return -1;
- for (i = 0; i < internals->slave_count; i++)
- slaves[i] = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++)
+ members[i] = internals->members[i].port_id;
- return internals->slave_count;
+ return internals->member_count;
}
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->active_slave_count > len)
+ if (internals->active_member_count > len)
return -1;
- memcpy(slaves, internals->active_slaves,
- internals->active_slave_count * sizeof(internals->active_slaves[0]));
+ memcpy(members, internals->active_members,
+ internals->active_member_count * sizeof(internals->active_members[0]));
- return internals->active_slave_count;
+ return internals->active_member_count;
}
int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
internals->user_defined_mac = 1;
- /* Update all slave devices MACs*/
- if (internals->slave_count > 0)
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices' MACs */
+ if (internals->member_count > 0)
+ return mac_address_members_update(bonded_eth_dev);
return 0;
}
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
internals->user_defined_mac = 0;
- if (internals->slave_count > 0) {
- int slave_port;
- /* Get the primary slave location based on the primary port
- * number as, while slave_add(), we will keep the primary
- * slave based on slave_count,but not based on the primary port.
+ if (internals->member_count > 0) {
+ int member_port;
+ /* Get the primary member location based on the primary port
+ * number because, during member_add(), we keep the primary
+ * member based on member_count, but not based on the primary port.
*/
- for (slave_port = 0; slave_port < internals->slave_count;
- slave_port++) {
- if (internals->slaves[slave_port].port_id ==
+ for (member_port = 0; member_port < internals->member_count;
+ member_port++) {
+ if (internals->members[member_port].port_id ==
internals->primary_port)
break;
}
/* Set MAC Address of Bonded Device */
if (mac_address_set(bonded_eth_dev,
- &internals->slaves[slave_port].persisted_mac_addr)
+ &internals->members[member_port].persisted_mac_addr)
!= 0) {
RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
return -1;
}
- /* Update all slave devices MAC addresses */
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices' MAC addresses */
+ return mac_address_members_update(bonded_eth_dev);
}
- /* No need to update anything as no slaves present */
+ /* No need to update anything as no members present */
return 0;
}
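For reference, a sketch of the renamed control-path calls (port ids are placeholders, return values only loosely checked for brevity):

	uint16_t members[RTE_MAX_ETHPORTS];
	int n;

	rte_eth_bond_member_add(bond_port_id, member_port_id);
	n = rte_eth_bond_members_get(bond_port_id, members, RTE_DIM(members));
	printf("%d member(s) attached\n", n);
	n = rte_eth_bond_active_members_get(bond_port_id, members, RTE_DIM(members));
	printf("%d active member(s)\n", n);
	rte_eth_bond_member_remove(bond_port_id, member_port_id);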
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
#include "eth_bond_private.h"
const char *pmd_bond_init_valid_arguments[] = {
- PMD_BOND_SLAVE_PORT_KVARG,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
+ PMD_BOND_MEMBER_PORT_KVARG,
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
PMD_BOND_MODE_KVARG,
PMD_BOND_XMIT_POLICY_KVARG,
PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
}
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args)
{
- struct bond_ethdev_slave_ports *slave_ports;
+ struct bond_ethdev_member_ports *member_ports;
if (value == NULL || extra_args == NULL)
return -1;
- slave_ports = extra_args;
+ member_ports = extra_args;
- if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+ if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
int port_id = parse_port_id(value);
if (port_id < 0) {
- RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+ RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
value);
return -1;
} else
- slave_ports->slaves[slave_ports->slave_count++] =
+ member_ports->members[member_ports->member_count++] =
port_id;
}
return 0;
}
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
case BONDING_MODE_ALB:
return 0;
default:
- RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+ RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
return -1;
}
}
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
}
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
- int primary_slave_port_id;
+ int primary_member_port_id;
if (value == NULL || extra_args == NULL)
return -1;
- primary_slave_port_id = parse_port_id(value);
- if (primary_slave_port_id < 0)
+ primary_member_port_id = parse_port_id(value);
+ if (primary_member_port_id < 0)
return -1;
- *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+ *(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
return 0;
}
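The kvargs handlers above are invoked once per matching key=value pair in the device arguments, so bond_ethdev_parse_member_port_kvarg() simply appends one parsed port id per call. A standalone sketch of that accumulate-per-call pattern (illustrative names, with a plain loop standing in for the kvargs machinery):

#include <stdio.h>
#include <stdlib.h>

#define MAX_MEMBERS 8

struct member_ports {
	unsigned int ports[MAX_MEMBERS];
	unsigned int count;
};

/* Called once per "member=<id>" pair; appends the parsed id. */
static int
member_port_handler(const char *value, void *extra_args)
{
	struct member_ports *mp = extra_args;

	if (mp->count >= MAX_MEMBERS)
		return -1;
	mp->ports[mp->count++] = (unsigned int)strtoul(value, NULL, 10);
	return 0;
}

int
main(void)
{
	const char *values[] = { "0", "1", "3" };	/* three member= pairs */
	struct member_ports mp = { .count = 0 };
	unsigned int i;

	for (i = 0; i < 3; i++)
		member_port_handler(values[i], &mp);
	for (i = 0; i < mp.count; i++)
		printf("member[%u] = %u\n", i, mp.ports[i]);
	return 0;
}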
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_validate(internals->members[i].port_id, attr,
patterns, actions, err);
if (ret) {
RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
return ret;
}
}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
NULL, rte_strerror(ENOMEM));
return NULL;
}
- for (i = 0; i < internals->slave_count; i++) {
- flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ flow->flows[i] = rte_flow_create(internals->members[i].port_id,
attr, patterns, actions, err);
if (unlikely(flow->flows[i] == NULL)) {
- RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+ RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
i);
goto err;
}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
return flow;
err:
- /* Destroy all slaves flows. */
- for (i = 0; i < internals->slave_count; i++) {
+ /* Destroy all member flows. */
+ for (i = 0; i < internals->member_count; i++) {
if (flow->flows[i] != NULL)
- rte_flow_destroy(internals->slaves[i].port_id,
+ rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
}
bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
int i;
int ret = 0;
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
int lret;
if (unlikely(flow->flows[i] == NULL))
continue;
- lret = rte_flow_destroy(internals->slaves[i].port_id,
+ lret = rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
if (unlikely(lret != 0)) {
- RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+ RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
" %d", i, lret);
ret = lret;
}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
int ret = 0;
int lret;
- /* Destroy all bond flows from its slaves instead of flushing them to
+ /* Destroy all bond flows from its members instead of flushing them to
* keep the LACP flow or any other external flows.
*/
RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
ret = lret;
}
if (unlikely(ret != 0))
- RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+ RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
return ret;
}
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *err)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_flow_query_count slave_count;
+ struct rte_flow_query_count member_count;
int i;
int ret;
count->bytes = 0;
count->hits = 0;
- rte_memcpy(&slave_count, count, sizeof(slave_count));
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_query(internals->slaves[i].port_id,
+ rte_memcpy(&member_count, count, sizeof(member_count));
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_query(internals->members[i].port_id,
flow->flows[i], action,
- &slave_count, err);
+ &member_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
- " slave %d: %d", i, ret);
+ " member %d: %d", i, ret);
return ret;
}
- count->bytes += slave_count.bytes;
- count->hits += slave_count.hits;
- slave_count.bytes = 0;
- slave_count.hits = 0;
+ count->bytes += member_count.bytes;
+ count->hits += member_count.hits;
+ member_count.bytes = 0;
+ member_count.hits = 0;
}
return 0;
}
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_isolate(internals->members[i].port_id, set, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
internals->flow_isolated_valid = 0;
return ret;
}
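All of the flow ops in this file share the same fan-out shape: apply the operation on every member port, and on a failed create destroy whatever was already programmed. A minimal standalone sketch of that create-all-or-roll-back pattern, using stand-in functions rather than real rte_flow calls:

#include <stdio.h>

#define MEMBER_COUNT 3

/* Stand-ins for per-member create/destroy of a replicated resource. */
static int create_on_member(int port) { return port == 2 ? -1 : 0; /* fail on port 2 */ }
static void destroy_on_member(int port) { printf("rollback on member %d\n", port); }

static int
replicate_create(void)
{
	int created[MEMBER_COUNT] = { 0 };
	int i;

	for (i = 0; i < MEMBER_COUNT; i++) {
		if (create_on_member(i) != 0)
			goto err;
		created[i] = 1;
	}
	return 0;
err:
	/* Undo the members that already succeeded. */
	for (i = 0; i < MEMBER_COUNT; i++)
		if (created[i])
			destroy_on_member(i);
	return -1;
}

int
main(void)
{
	printf("replicate_create -> %d\n", replicate_create());
	return 0;
}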
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct bond_dev_private *internals;
uint16_t num_rx_total = 0;
- uint16_t slave_count;
- uint16_t active_slave;
+ uint16_t member_count;
+ uint16_t active_member;
int i;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- slave_count = internals->active_slave_count;
- active_slave = bd_rx_q->active_slave;
+ member_count = internals->active_member_count;
+ active_member = bd_rx_q->active_member;
- for (i = 0; i < slave_count && nb_pkts; i++) {
- uint16_t num_rx_slave;
+ for (i = 0; i < member_count && nb_pkts; i++) {
+ uint16_t num_rx_member;
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave =
- rte_eth_rx_burst(internals->active_slaves[active_slave],
+ /*
+ * Offset of pointer to *bufs increases as packets are received
+ * from other members.
+ */
+ num_rx_member =
+ rte_eth_rx_burst(internals->active_members[active_member],
bd_rx_q->queue_id,
bufs + num_rx_total, nb_pkts);
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
- if (++active_slave >= slave_count)
- active_slave = 0;
+ num_rx_total += num_rx_member;
+ nb_pkts -= num_rx_member;
+ if (++active_member >= member_count)
+ active_member = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
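The receive path keeps a per-queue active_member index that persists between bursts, so successive calls start polling on a different member and no single member is favoured. A standalone sketch of that rotation, with a toy rx function and made-up packet counts:

#include <stdio.h>

#define MEMBER_COUNT 3

/* Pretend each member has this many packets ready. */
static unsigned int pending[MEMBER_COUNT] = { 2, 0, 5 };

static unsigned int
fake_rx_burst(unsigned int member, unsigned int max)
{
	unsigned int n = pending[member] < max ? pending[member] : max;
	pending[member] -= n;
	return n;
}

int
main(void)
{
	static unsigned int start;	/* persists across bursts, like bd_rx_q->active_member */
	unsigned int idx = start, want = 4, got = 0, i;

	for (i = 0; i < MEMBER_COUNT && got < want; i++) {
		got += fake_rx_burst(idx, want - got);
		if (++idx >= MEMBER_COUNT)
			idx = 0;
	}
	if (++start >= MEMBER_COUNT)
		start = 0;
	printf("received %u packets, next burst starts on member %u\n", got, start);
	return 0;
}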
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port) {
- struct rte_eth_dev_info slave_info;
+ uint16_t member_port) {
+ struct rte_eth_dev_info member_info;
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
}
};
- int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+ int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
flow_item_8023ad, actions, &error);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
- __func__, error.message, slave_port,
+ RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+ __func__, error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
- ret = rte_eth_dev_info_get(slave_port, &slave_info);
+ ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port, strerror(-ret));
+ __func__, member_port, strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
- slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+ if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+ member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
RTE_BOND_LOG(ERR,
- "%s: Slave %d capabilities doesn't allow allocating additional queues",
- __func__, slave_port);
+ "%s: Member %d capabilities doesn't allow allocating additional queues",
+ __func__, member_port);
return -1;
}
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
uint16_t idx;
int ret;
- /* Verify if all slaves in bonding supports flow director and */
- if (internals->slave_count > 0) {
+ /* Verify if all members in bonding support flow director and */
+ if (internals->member_count > 0) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
- for (idx = 0; idx < internals->slave_count; idx++) {
+ for (idx = 0; idx < internals->member_count; idx++) {
if (bond_ethdev_8023ad_flow_verify(bond_dev,
- internals->slaves[idx].port_id) != 0)
+ internals->members[idx].port_id) != 0)
return -1;
}
}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
}
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
}
};
- internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+ internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
&flow_attr_8023ad, flow_item_8023ad, actions, &error);
- if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+ if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
- "(slave_port=%d queue_id=%d)",
- error.message, slave_port,
+ "(member_port=%d queue_id=%d)",
+ error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
const uint16_t ether_type_slow_be =
rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint16_t slaves[RTE_MAX_ETHPORTS];
- uint16_t slave_count, idx;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ uint16_t member_count, idx;
- uint8_t collecting; /* current slave collecting status */
+ uint8_t collecting; /* current member collecting status */
const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
uint16_t j;
uint16_t k;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
+ member_count = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * member_count);
- idx = bd_rx_q->active_slave;
- if (idx >= slave_count) {
- bd_rx_q->active_slave = 0;
+ idx = bd_rx_q->active_member;
+ if (idx >= member_count) {
+ bd_rx_q->active_member = 0;
idx = 0;
}
- for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+ for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+ collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
COLLECTING);
- /* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+ /* Read packets from this member */
+ num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is a slow packet but no dedicated rxq is present,
- * - slave is not in collecting state,
+ * - member is not in collecting state,
* - bonding interface is not in promiscuous mode and
* packet address isn't in mac_addrs array:
* - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
!allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
- internals, slaves[idx], bufs[j]);
+ internals, members[idx], bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
} else
j++;
}
- if (unlikely(++idx == slave_count))
+ if (unlikely(++idx == member_count))
idx = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
#ifdef RTE_LIBRTE_BOND_DEBUG_ALB
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
- uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+ uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
- uint16_t num_of_slaves;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members;
+ uint16_t members[RTE_MAX_ETHPORTS];
- uint16_t num_tx_total = 0, num_tx_slave;
+ uint16_t num_tx_total = 0, num_tx_member;
- static int slave_idx = 0;
- int i, cslave_idx = 0, tx_fail_total = 0;
+ static int member_idx;
+ int i, cmember_idx = 0, tx_fail_total = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- /* Populate slaves mbuf with which packets are to be sent on it */
+ /* Populate each member's mbuf array with the packets to be sent on it */
for (i = 0; i < nb_pkts; i++) {
- cslave_idx = (slave_idx + i) % num_of_slaves;
- slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+ cmember_idx = (member_idx + i) % num_of_members;
+ member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
}
- /* increment current slave index so the next call to tx burst starts on the
- * next slave */
- slave_idx = ++cslave_idx;
+ /*
+ * increment current member index so the next call to tx burst starts on the
+ * next member.
+ */
+ member_idx = ++cmember_idx;
- /* Send packet burst on each slave device */
- for (i = 0; i < num_of_slaves; i++) {
- if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_prepare(slaves[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_pkts[i]);
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- slave_bufs[i], num_tx_slave);
+ /* Send packet burst on each member device */
+ for (i = 0; i < num_of_members; i++) {
+ if (member_nb_pkts[i] > 0) {
+ num_tx_member = rte_eth_tx_prepare(members[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_pkts[i]);
+ num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+ member_bufs[i], num_tx_member);
/* if tx burst fails move packets to end of bufs */
- if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
- int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+ if (unlikely(num_tx_member < member_nb_pkts[i])) {
+ int tx_fail_member = member_nb_pkts[i] - num_tx_member;
- tx_fail_total += tx_fail_slave;
+ tx_fail_total += tx_fail_member;
memcpy(&bufs[nb_pkts - tx_fail_total],
- &slave_bufs[i][num_tx_slave],
- tx_fail_slave * sizeof(bufs[0]));
+ &member_bufs[i][num_tx_member],
+ tx_fail_member * sizeof(bufs[0]));
}
- num_tx_total += num_tx_slave;
+ num_tx_total += num_tx_member;
}
}
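When a member transmits fewer packets than requested, the leftovers are copied back to the tail of the caller's bufs[] array, so after the call the caller still owns exactly the unsent mbufs. A standalone sketch of that scatter-then-compact bookkeeping, using plain ints in place of mbufs and assumed per-member send counts:

#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* Pretend bufs[] holds 6 packets split across two members. */
	int pkts[6] = { 10, 11, 12, 13, 14, 15 };
	int *bufs[6], *member_bufs[2][6];
	unsigned int member_nb[2] = { 3, 3 }, sent[2] = { 3, 1 };
	unsigned int nb_pkts = 6, tx_fail_total = 0, num_tx_total = 0, i, j;

	for (i = 0; i < 6; i++)
		bufs[i] = &pkts[i];
	/* Round-robin style scatter: packet i goes to member i % 2. */
	for (i = 0; i < nb_pkts; i++)
		member_bufs[i % 2][i / 2] = bufs[i];

	for (i = 0; i < 2; i++) {
		if (sent[i] < member_nb[i]) {	/* move unsent packets to the tail */
			unsigned int fail = member_nb[i] - sent[i];

			tx_fail_total += fail;
			memcpy(&bufs[nb_pkts - tx_fail_total],
					&member_bufs[i][sent[i]],
					fail * sizeof(bufs[0]));
		}
		num_tx_total += sent[i];
	}
	printf("sent %u, caller re-owns %u unsent packets at the tail:",
			num_tx_total, tx_fail_total);
	for (j = nb_pkts - tx_fail_total; j < nb_pkts; j++)
		printf(" %d", *bufs[j]);
	printf("\n");
	return 0;
}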
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
return 0;
nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i] = (hash ^= hash >> 8) % slave_count;
+ members[i] = (hash ^= hash >> 8) % member_count;
}
}
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
uint16_t i;
struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
-struct bwg_slave {
+struct bwg_member {
uint64_t bwg_left_int;
uint64_t bwg_left_remainder;
- uint16_t slave;
+ uint16_t member;
};
void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
int i;
- for (i = 0; i < internals->active_slave_count; i++) {
- tlb_last_obytets[internals->active_slaves[i]] = 0;
- }
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
static int
bandwidth_cmp(const void *a, const void *b)
{
- const struct bwg_slave *bwg_a = a;
- const struct bwg_slave *bwg_b = b;
+ const struct bwg_member *bwg_a = a;
+ const struct bwg_member *bwg_b = b;
int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
static void
bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
- struct bwg_slave *bwg_slave)
+ struct bwg_member *bwg_member)
{
struct rte_eth_link link_status;
int ret;
ret = rte_eth_link_get_nowait(port_id, &link_status);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
port_id, rte_strerror(-ret));
return;
}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
if (link_bwg == 0)
return;
link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
- bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
- bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+ bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+ bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
}
static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
{
struct bond_dev_private *internals = arg;
- struct rte_eth_stats slave_stats;
- struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ struct rte_eth_stats member_stats;
+ struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
uint64_t tx_bytes;
uint8_t update_stats = 0;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
- internals->slave_update_idx++;
+ internals->member_update_idx++;
- if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+ if (internals->member_update_idx >= REORDER_PERIOD_MS)
update_stats = 1;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- rte_eth_stats_get(slave_id, &slave_stats);
- tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
- bandwidth_left(slave_id, tx_bytes,
- internals->slave_update_idx, &bwg_array[i]);
- bwg_array[i].slave = slave_id;
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ rte_eth_stats_get(member_id, &member_stats);
+ tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+ bandwidth_left(member_id, tx_bytes,
+ internals->member_update_idx, &bwg_array[i]);
+ bwg_array[i].member = member_id;
if (update_stats) {
- tlb_last_obytets[slave_id] = slave_stats.obytes;
+ tlb_last_obytets[member_id] = member_stats.obytes;
}
}
if (update_stats == 1)
- internals->slave_update_idx = 0;
+ internals->member_update_idx = 0;
- slave_count = i;
- qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
- for (i = 0; i < slave_count; i++)
- internals->tlb_slaves_order[i] = bwg_array[i].slave;
+ member_count = i;
+ qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+ for (i = 0; i < member_count; i++)
+ internals->tlb_members_order[i] = bwg_array[i].member;
- rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+ rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
(struct bond_dev_private *)internals);
}
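The TLB callback above estimates, every REORDER_PERIOD_MS, how much transmit headroom each member has left and re-sorts the transmit order so the least loaded member is tried first. A standalone sketch of that sort step with a simplified comparator and made-up bandwidth figures:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct bwg_member {
	uint64_t bwg_left_int;		/* whole part of remaining bandwidth */
	uint64_t bwg_left_remainder;	/* fractional part, used as tie-breaker */
	uint16_t member;
};

static int
bandwidth_cmp(const void *a, const void *b)
{
	const struct bwg_member *ma = a, *mb = b;
	int64_t diff = (int64_t)mb->bwg_left_int - (int64_t)ma->bwg_left_int;

	if (diff)
		return diff > 0 ? 1 : -1;
	diff = (int64_t)mb->bwg_left_remainder - (int64_t)ma->bwg_left_remainder;
	return diff > 0 ? 1 : (diff < 0 ? -1 : 0);
}

int
main(void)
{
	struct bwg_member m[3] = {
		{ .bwg_left_int = 4, .bwg_left_remainder = 0, .member = 0 },
		{ .bwg_left_int = 9, .bwg_left_remainder = 2, .member = 1 },
		{ .bwg_left_int = 9, .bwg_left_remainder = 7, .member = 2 },
	};
	unsigned int i;

	qsort(m, 3, sizeof(m[0]), bandwidth_cmp);	/* most headroom first */
	for (i = 0; i < 3; i++)
		printf("tx order %u -> member %u\n", i, (unsigned int)m[i].member);
	return 0;
}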
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0, num_tx_prep;
uint16_t i, j;
- uint16_t num_of_slaves = internals->active_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members = internals->active_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_ether_hdr *ether_hdr;
- struct rte_ether_addr primary_slave_addr;
- struct rte_ether_addr active_slave_addr;
+ struct rte_ether_addr primary_member_addr;
+ struct rte_ether_addr active_member_addr;
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+ memcpy(members, internals->tlb_members_order,
+ sizeof(internals->tlb_members_order[0]) * num_of_members);
- rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+ rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
if (nb_pkts > 3) {
for (i = 0; i < 3; i++)
rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
}
- for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ for (i = 0; i < num_of_members; i++) {
+ rte_eth_macaddr_get(members[i], &active_member_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
if (rte_is_same_ether_addr(&ether_hdr->src_addr,
- &primary_slave_addr))
- rte_ether_addr_copy(&active_slave_addr,
+ &primary_member_addr))
+ rte_ether_addr_copy(&active_member_addr,
&ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
- mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+ mode6_debug("TX IPv4:", ether_hdr, members[i],
+ &burst_number_TX);
#endif
}
- num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+ num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, nb_pkts - num_tx_total);
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, num_tx_prep);
if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
void
bond_tlb_disable(struct bond_dev_private *internals)
{
- rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+ rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
}
void
bond_tlb_enable(struct bond_dev_private *internals)
{
- bond_ethdev_update_tlb_slave_cb(internals);
+ bond_ethdev_update_tlb_member_cb(internals);
}
static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct client_data *client_info;
/*
- * We create transmit buffers for every slave and one additional to send
+ * We create transmit buffers for every member and one additional to send
* through tlb. In the worst case every packet will be sent on one port.
*/
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
- uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+ uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
* We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_send, num_not_send = 0;
uint16_t num_tx_total = 0;
- uint16_t slave_idx;
+ uint16_t member_idx;
int i, j;
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
offset = get_vlan_offset(eth_h, ðer_type);
if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
- slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+ member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+ rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
- /* Add packet to slave tx buffer */
- slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
- slave_bufs_pkts[slave_idx]++;
+ /* Add packet to member tx buffer */
+ member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+ member_bufs_pkts[member_idx]++;
} else {
/* If packet is not ARP, send it with TLB policy */
- slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+ member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
bufs[i];
- slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+ member_bufs_pkts[RTE_MAX_ETHPORTS]++;
}
}
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- /* Allocate new packet to send ARP update on current slave */
+ /* Allocate new packet to send ARP update on current member */
upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
if (upd_pkt == NULL) {
RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
upd_pkt->data_len = pkt_size;
upd_pkt->pkt_len = pkt_size;
- slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+ member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
internals);
/* Add packet to update tx buffer */
- update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
- update_bufs_pkts[slave_idx]++;
+ update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+ update_bufs_pkts[member_idx]++;
}
}
internals->mode6.ntt = 0;
}
- /* Send ARP packets on proper slaves */
+ /* Send ARP packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (slave_bufs_pkts[i] > 0) {
+ if (member_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
- slave_bufs[i], slave_bufs_pkts[i]);
+ member_bufs[i], member_bufs_pkts[i]);
num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
- slave_bufs[i], num_send);
- for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+ member_bufs[i], num_send);
+ for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[i][nb_pkts - 1 - j];
+ member_bufs[i][nb_pkts - 1 - j];
}
num_tx_total += num_send;
- num_not_send += slave_bufs_pkts[i] - num_send;
+ num_not_send += member_bufs_pkts[i] - num_send;
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
/* Print TX stats including update packets */
- for (j = 0; j < slave_bufs_pkts[i]; j++) {
- eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+ for (j = 0; j < member_bufs_pkts[i]; j++) {
+ eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
}
#endif
}
}
- /* Send update packets on proper slaves */
+ /* Send update packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
if (update_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
for (j = 0; j < update_bufs_pkts[i]; j++) {
eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
}
#endif
}
}
/* Send non-ARP packets using tlb policy */
- if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+ if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
num_send = bond_ethdev_tx_burst_tlb(queue,
- slave_bufs[RTE_MAX_ETHPORTS],
- slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+ member_bufs[RTE_MAX_ETHPORTS],
+ member_bufs_pkts[RTE_MAX_ETHPORTS]);
- for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+ for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+ member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
}
num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static inline uint16_t
tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
- uint16_t *slave_port_ids, uint16_t slave_count)
+ uint16_t *member_port_ids, uint16_t member_count)
{
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- /* Array to sort mbufs for transmission on each slave into */
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
- /* Number of mbufs for transmission on each slave */
- uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
- /* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[nb_bufs];
+ /* Array to sort mbufs for transmission on each member into */
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+ /* Number of mbufs for transmission on each member */
+ uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+ /* Mapping array generated by hash function to map mbufs to members */
+ uint16_t bufs_member_port_idxs[nb_bufs];
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
uint16_t i;
/*
- * Populate slaves mbuf with the packets which are to be sent on it
- * selecting output slave using hash based on xmit policy
+ * Populate each member's mbuf array with the packets to be sent on it,
+ * selecting the output member using a hash based on the xmit policy
*/
- internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
- bufs_slave_port_idxs);
+ internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+ bufs_member_port_idxs);
for (i = 0; i < nb_bufs; i++) {
- /* Populate slave mbuf arrays with mbufs for that slave. */
- uint16_t slave_idx = bufs_slave_port_idxs[i];
+ /* Populate member mbuf arrays with mbufs for that member. */
+ uint16_t member_idx = bufs_member_port_idxs[i];
- slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+ member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
}
- /* Send packet burst on each slave device */
- for (i = 0; i < slave_count; i++) {
- if (slave_nb_bufs[i] == 0)
+ /* Send packet burst on each member device */
+ for (i = 0; i < member_count; i++) {
+ if (member_nb_bufs[i] == 0)
continue;
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_bufs[i]);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_tx_count);
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_bufs[i]);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_tx_count);
- total_tx_count += slave_tx_count;
+ total_tx_count += member_tx_count;
/* If tx burst fails move packets to end of bufs */
- if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
- int slave_tx_fail_count = slave_nb_bufs[i] -
- slave_tx_count;
- total_tx_fail_count += slave_tx_fail_count;
+ if (unlikely(member_tx_count < member_nb_bufs[i])) {
+ int member_tx_fail_count = member_nb_bufs[i] -
+ member_tx_count;
+ total_tx_fail_count += member_tx_fail_count;
memcpy(&bufs[nb_bufs - total_tx_fail_count],
- &slave_bufs[i][slave_tx_count],
- slave_tx_fail_count * sizeof(bufs[0]));
+ &member_bufs[i][member_tx_count],
+ member_tx_fail_count * sizeof(bufs[0]));
}
}
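tx_burst_balance() is the generic shape shared by the BALANCE and 802.3AD paths: hash every packet once, map the hash to a member index, bucket the mbufs per member, then burst each bucket. A standalone sketch of the hash-then-bucket step, with a toy hash and plain ints standing in for mbufs:

#include <stdio.h>
#include <stdint.h>

#define NB_BUFS 8
#define MEMBER_COUNT 3

/* Toy stand-in for burst_xmit_l2_hash(): one bucket index per packet. */
static void
xmit_hash(const uint32_t *pkts, uint16_t nb, uint16_t member_count,
		uint16_t *bufs_member_port_idxs)
{
	uint16_t i;

	for (i = 0; i < nb; i++) {
		uint32_t hash = pkts[i] * 2654435761u;	/* multiplicative hash */
		hash ^= hash >> 16;
		bufs_member_port_idxs[i] = hash % member_count;
	}
}

int
main(void)
{
	uint32_t pkts[NB_BUFS] = { 101, 102, 103, 104, 105, 106, 107, 108 };
	uint32_t member_bufs[MEMBER_COUNT][NB_BUFS];
	uint16_t member_nb_bufs[MEMBER_COUNT] = { 0 };
	uint16_t idxs[NB_BUFS];
	uint16_t i, m;

	xmit_hash(pkts, NB_BUFS, MEMBER_COUNT, idxs);
	for (i = 0; i < NB_BUFS; i++)
		member_bufs[idxs[i]][member_nb_bufs[idxs[i]]++] = pkts[i];

	for (m = 0; m < MEMBER_COUNT; m++) {
		printf("member %u bursts %u packets:", m, member_nb_bufs[m]);
		for (i = 0; i < member_nb_bufs[m]; i++)
			printf(" %u", member_bufs[m][i]);
		printf("\n");
	}
	return 0;
}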
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
if (unlikely(nb_bufs == 0))
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting
*/
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
- return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
- slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+ member_count);
}
static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
- uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t dist_slave_count;
+ uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t dist_member_count;
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t i;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
if (dedicated_txq)
goto skip_tx_ring;
/* Check for LACP control packets and send if available */
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
struct rte_mbuf *ctrl_pkt = NULL;
if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (rte_ring_dequeue(port->tx_ring,
(void **)&ctrl_pkt) != -ENOENT) {
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
bd_tx_q->queue_id, &ctrl_pkt, 1);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
/*
* re-enqueue LAG control plane packets to buffering
* ring if transmission fails so the packet isn't lost.
*/
- if (slave_tx_count != 1)
+ if (member_tx_count != 1)
rte_ring_enqueue(port->tx_ring, ctrl_pkt);
}
}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (unlikely(nb_bufs == 0))
return 0;
- dist_slave_count = 0;
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ dist_member_count = 0;
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
if (ACTOR_STATE(port, DISTRIBUTING))
- dist_slave_port_ids[dist_slave_count++] =
- slave_port_ids[i];
+ dist_member_port_ids[dist_member_count++] =
+ member_port_ids[i];
}
- if (unlikely(dist_slave_count < 1))
+ if (unlikely(dist_member_count < 1))
return 0;
- return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
- dist_slave_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+ dist_member_count);
}
static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint8_t tx_failed_flag = 0;
- uint16_t num_of_slaves;
+ uint16_t num_of_members;
uint16_t max_nb_of_tx_pkts = 0;
- int slave_tx_total[RTE_MAX_ETHPORTS];
- int i, most_successful_tx_slave = -1;
+ int member_tx_total[RTE_MAX_ETHPORTS];
+ int i, most_successful_tx_member = -1;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return 0;
/* It is rare to bond different PMDs together, so just call tx-prepare once */
- nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+ nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
- rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+ rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
- /* Transmit burst on each active slave */
- for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ /* Transmit burst on each active member */
+ for (i = 0; i < num_of_members; i++) {
+ member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs, nb_pkts);
- if (unlikely(slave_tx_total[i] < nb_pkts))
+ if (unlikely(member_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
- /* record the value and slave index for the slave which transmits the
+ /* record the value and member index for the member which transmits the
* maximum number of packets */
- if (slave_tx_total[i] > max_nb_of_tx_pkts) {
- max_nb_of_tx_pkts = slave_tx_total[i];
- most_successful_tx_slave = i;
+ if (member_tx_total[i] > max_nb_of_tx_pkts) {
+ max_nb_of_tx_pkts = member_tx_total[i];
+ most_successful_tx_member = i;
}
}
- /* if slaves fail to transmit packets from burst, the calling application
+ /* if members fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
- * handle failures of all packets except those of the most successful slave
+ * handle failures of all packets except those of the most successful member
*/
if (unlikely(tx_failed_flag))
- for (i = 0; i < num_of_slaves; i++)
- if (i != most_successful_tx_slave)
- while (slave_tx_total[i] < nb_pkts)
- rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+ for (i = 0; i < num_of_members; i++)
+ if (i != most_successful_tx_member)
+ while (member_tx_total[i] < nb_pkts)
+ rte_pktmbuf_free(bufs[member_tx_total[i]++]);
return max_nb_of_tx_pkts;
}
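Broadcast mode is the one path where mbuf reference counting matters: each packet's refcnt is bumped by num_of_members - 1 before being handed to every member, and afterwards the references that the less successful members never transmitted are freed so nothing leaks. A standalone sketch of that bookkeeping with a toy refcnt field instead of rte_mbuf:

#include <stdio.h>

#define MEMBER_COUNT 3
#define NB_PKTS 4

struct toy_mbuf { int refcnt; };

static void toy_free(struct toy_mbuf *m) { m->refcnt--; }

int
main(void)
{
	struct toy_mbuf pkts[NB_PKTS] = { {1}, {1}, {1}, {1} };
	/* Assumed per-member tx results: member 1 sends the most. */
	int member_tx_total[MEMBER_COUNT] = { 2, 4, 3 };
	int most_successful = 1, i, j;

	/* One extra reference per additional member. */
	for (i = 0; i < NB_PKTS; i++)
		pkts[i].refcnt += MEMBER_COUNT - 1;

	/* Model each member's driver eventually freeing what it transmitted. */
	for (i = 0; i < MEMBER_COUNT; i++)
		for (j = 0; j < member_tx_total[i]; j++)
			toy_free(&pkts[j]);

	/* Free the references the less successful members never transmitted. */
	for (i = 0; i < MEMBER_COUNT; i++)
		if (i != most_successful)
			while (member_tx_total[i] < NB_PKTS)
				toy_free(&pkts[member_tx_total[i]++]);

	for (i = 0; i < NB_PKTS; i++)
		printf("pkt %d refcnt %d (0 = no reference leaked)\n",
				i, pkts[i].refcnt);
	return 0;
}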
static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
/**
* If in mode 4 then save the link properties of the first
- * slave, all subsequent slaves must match these properties
+ * member, all subsequent members must match these properties
*/
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- bond_link->link_autoneg = slave_link->link_autoneg;
- bond_link->link_duplex = slave_link->link_duplex;
- bond_link->link_speed = slave_link->link_speed;
+ bond_link->link_autoneg = member_link->link_autoneg;
+ bond_link->link_duplex = member_link->link_duplex;
+ bond_link->link_speed = member_link->link_speed;
} else {
/**
* In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
static int
link_properties_valid(struct rte_eth_dev *ethdev,
- struct rte_eth_link *slave_link)
+ struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- if (bond_link->link_duplex != slave_link->link_duplex ||
- bond_link->link_autoneg != slave_link->link_autoneg ||
- bond_link->link_speed != slave_link->link_speed)
+ if (bond_link->link_duplex != member_link->link_duplex ||
+ bond_link->link_autoneg != member_link->link_autoneg ||
+ bond_link->link_speed != member_link->link_speed)
return -1;
}
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
static const struct rte_ether_addr null_mac_addr;
/*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
*/
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, ret;
struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+ ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
if (ret < 0) {
/* rollback */
for (i--; i > 0; i--)
- rte_eth_dev_mac_addr_remove(slave_port_id,
+ rte_eth_dev_mac_addr_remove(member_port_id,
&bonded_eth_dev->data->mac_addrs[i]);
return ret;
}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
/*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
*/
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, rc, ret;
struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+ ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
/* save only the first error */
if (ret < 0 && rc == 0)
rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
bool set;
int i;
- /* Update slave devices MAC addresses */
- if (internals->slave_count < 1)
+ /* Update member devices MAC addresses */
+ if (internals->member_count < 1)
return -1;
switch (internals->mode) {
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
+ internals->members[i].port_id,
bonded_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
case BONDING_MODE_ALB:
default:
set = true;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id ==
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id ==
internals->current_primary_port) {
if (rte_eth_dev_default_mac_addr_set(
internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
}
} else {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
- &internals->slaves[i].persisted_mac_addr)) {
+ internals->members[i].port_id,
+ &internals->members[i].persisted_mac_addr)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
}
}
}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+ struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
if (port->slow_pool == NULL) {
char mem_name[256];
- int slave_id = slave_eth_dev->data->port_id;
+ int member_id = member_eth_dev->data->port_id;
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
- slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+ member_id);
port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
- slave_eth_dev->data->numa_node);
+ member_eth_dev->data->numa_node);
/* Any memory allocation failure in initialization is critical because
* resources can't be freed, so reinitialization is impossible. */
if (port->slow_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
}
if (internals->mode4.dedicated_queues.enabled == 1) {
/* Configure slow Rx queue */
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid, 128,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid,
errval);
return errval;
}
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid, 512,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid,
errval);
return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
}
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- /* Stop slave */
- errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+ /* Stop member */
+ errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
if (errval != 0)
RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
- /* Enable interrupts on slave device if supported */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+ /* Enable interrupts on member device if supported */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
- /* If RSS is enabled for bonding, try to enable it for slaves */
+ /* If RSS is enabled for bonding, try to enable it for members */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
} else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
}
- slave_eth_dev->data->dev_conf.rxmode.mtu =
+ member_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- slave_eth_dev->data->dev_conf.link_speeds =
+ member_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads =
+ member_eth_dev->data->dev_conf.txmode.offloads =
bonded_eth_dev->data->dev_conf.txmode.offloads;
- slave_eth_dev->data->dev_conf.rxmode.offloads =
+ member_eth_dev->data->dev_conf.rxmode.offloads =
bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* Configure device */
- errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
nb_rx_queues, nb_tx_queues,
- &(slave_eth_dev->data->dev_conf));
+ &member_eth_dev->data->dev_conf);
if (errval != 0) {
- RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+ member_eth_dev->data->port_id, errval);
return errval;
}
- errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
bonded_eth_dev->data->mtu);
if (errval != 0 && errval != -ENOTSUP) {
RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
return errval;
}
return 0;
}
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
uint16_t q_id;
struct rte_flow_error flow_error;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+ uint16_t member_port_id = member_eth_dev->data->port_id;
/* Setup Rx Queues */
for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_rx_queue_setup(member_port_id, q_id,
bd_rx_q->nb_rx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_tx_queue_setup(member_port_id, q_id,
bd_tx_q->nb_tx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&bd_tx_q->tx_conf);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
- if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+ if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
!= 0)
return errval;
errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
- if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
- errval = rte_flow_destroy(slave_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+ if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+ errval = rte_flow_destroy(member_port_id,
+ internals->mode4.dedicated_queues.flow[member_port_id],
&flow_error);
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
}
/* Start device */
- errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+ errval = rte_eth_dev_start(member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return -1;
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
internals = bonded_eth_dev->data->dev_private;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == member_port_id) {
errval = rte_eth_dev_rss_reta_update(
- slave_eth_dev->data->port_id,
+ member_port_id,
&internals->reta_conf[0],
- internals->slaves[i].reta_size);
+ internals->members[i].reta_size);
if (errval != 0) {
RTE_BOND_LOG(WARNING,
- "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+ "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
" RSS Configuration for bonding may be inconsistent.",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
break;
}
}
}
- /* If lsc interrupt is set, check initial slave's link status */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
- slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
- bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+ /* If lsc interrupt is set, check initial member's link status */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+ bond_ethdev_lsc_event_callback(member_port_id,
RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
NULL);
}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
}
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t i;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id ==
- slave_eth_dev->data->port_id)
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id ==
+ member_eth_dev->data->port_id)
break;
- if (i < (internals->slave_count - 1)) {
+ if (i < (internals->member_count - 1)) {
struct rte_flow *flow;
- memmove(&internals->slaves[i], &internals->slaves[i + 1],
- sizeof(internals->slaves[0]) *
- (internals->slave_count - i - 1));
+ memmove(&internals->members[i], &internals->members[i + 1],
+ sizeof(internals->members[0]) *
+ (internals->member_count - i - 1));
TAILQ_FOREACH(flow, &internals->flow_list, next) {
memmove(&flow->flows[i], &flow->flows[i + 1],
sizeof(flow->flows[0]) *
- (internals->slave_count - i - 1));
- flow->flows[internals->slave_count - 1] = NULL;
+ (internals->member_count - i - 1));
+ flow->flows[internals->member_count - 1] = NULL;
}
}
- internals->slave_count--;
+ internals->member_count--;
- /* force reconfiguration of slave interfaces */
- rte_eth_dev_internal_reset(slave_eth_dev);
+ /* force reconfiguration of member interfaces */
+ rte_eth_dev_internal_reset(member_eth_dev);
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
- struct bond_slave_details *slave_details =
- &internals->slaves[internals->slave_count];
+ struct bond_member_details *member_details =
+ &internals->members[internals->member_count];
- slave_details->port_id = slave_eth_dev->data->port_id;
- slave_details->last_link_status = 0;
+ member_details->port_id = member_eth_dev->data->port_id;
+ member_details->last_link_status = 0;
- /* Mark slave devices that don't support interrupts so we can
+ /* Mark member devices that don't support interrupts so we can
* compensate when we start the bond
*/
- if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
- slave_details->link_status_poll_enabled = 1;
- }
+ if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+ member_details->link_status_poll_enabled = 1;
- slave_details->link_status_wait_to_complete = 0;
+ member_details->link_status_wait_to_complete = 0;
/* clean tlb_last_obytes when adding port for bonding device */
- memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+ memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
sizeof(struct rte_ether_addr));
}
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id)
+ uint16_t member_port_id)
{
int i;
- if (internals->active_slave_count < 1)
- internals->current_primary_port = slave_port_id;
+ if (internals->active_member_count < 1)
+ internals->current_primary_port = member_port_id;
else
- /* Search bonded device slave ports for new proposed primary port */
- for (i = 0; i < internals->active_slave_count; i++) {
- if (internals->active_slaves[i] == slave_port_id)
- internals->current_primary_port = slave_port_id;
+ /* Search bonded device member ports for new proposed primary port */
+ for (i = 0; i < internals->active_member_count; i++) {
+ if (internals->active_members[i] == member_port_id)
+ internals->current_primary_port = member_port_id;
}
}
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals;
int i;
- /* slave eth dev will be started by bonded device */
+ /* member eth dev will be started by bonded device */
if (check_for_bonded_ethdev(eth_dev)) {
- RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+ RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
eth_dev->data->port_id);
return -1;
}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- if (internals->slave_count == 0) {
- RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+ if (internals->member_count == 0) {
+ RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
goto out_err;
}
if (internals->user_defined_mac == 0) {
struct rte_ether_addr *new_mac_addr = NULL;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == internals->primary_port)
- new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == internals->primary_port)
+ new_mac_addr = &internals->members[i].persisted_mac_addr;
if (new_mac_addr == NULL)
goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
}
- /* Reconfigure each slave device if starting bonded device */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(eth_dev, slave_ethdev) != 0) {
+ /* Reconfigure each member device if starting bonded device */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to reconfigure slave device (%d)",
+ "bonded port (%d) failed to reconfigure member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- if (slave_start(eth_dev, slave_ethdev) != 0) {
+ if (member_start(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to start slave device (%d)",
+ "bonded port (%d) failed to start member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- /* We will need to poll for link status if any slave doesn't
+ /* We will need to poll for link status if any member doesn't
* support interrupts
*/
- if (internals->slaves[i].link_status_poll_enabled)
+ if (internals->members[i].link_status_poll_enabled)
internals->link_status_polling_enabled = 1;
}
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
if (internals->link_status_polling_enabled) {
rte_eal_alarm_set(
internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor,
+ bond_ethdev_member_link_status_change_monitor,
(void *)&rte_eth_devices[internals->port_id]);
}
- /* Update all slave devices MACs*/
- if (mac_address_slaves_update(eth_dev) != 0)
+ /* Update all member devices MACs*/
+ if (mac_address_members_update(eth_dev) != 0)
goto out_err;
if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
bond_mode_8023ad_stop(eth_dev);
/* Discard all messages to/from mode 4 state machines */
- for (i = 0; i < internals->active_slave_count; i++) {
- port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+ for (i = 0; i < internals->active_member_count; i++) {
+ port = &bond_mode_8023ad_ports[internals->active_members[i]];
RTE_ASSERT(port->rx_ring != NULL);
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
if (internals->mode == BONDING_MODE_TLB ||
internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
- for (i = 0; i < internals->active_slave_count; i++)
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t slave_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t member_id = internals->members[i].port_id;
- internals->slaves[i].last_link_status = 0;
- ret = rte_eth_dev_stop(slave_id);
+ internals->members[i].last_link_status = 0;
+ ret = rte_eth_dev_stop(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_id);
+ member_id);
return ret;
}
- /* active slaves need to be deactivated. */
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) !=
- internals->active_slave_count)
- deactivate_slave(eth_dev, slave_id);
+ /* active members need to be deactivated. */
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) !=
+ internals->active_member_count)
+ deactivate_member(eth_dev, member_id);
}
return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
/* Flush flows in all back-end devices before removing them */
bond_flow_ops.flush(dev, &ferror);
- while (internals->slave_count != skipped) {
- uint16_t port_id = internals->slaves[skipped].port_id;
+ while (internals->member_count != skipped) {
+ uint16_t port_id = internals->members[skipped].port_id;
int ret;
ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
continue;
}
- if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+ if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
RTE_BOND_LOG(ERR,
"Failed to remove port %d from bonded device %s",
port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct bond_slave_details slave;
+ struct bond_member_details member;
int ret;
uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETHER_MAX_JUMBO_FRAME_LEN;
/* Max number of tx/rx queues that the bonded device can support is the
- * minimum values of the bonded slaves, as all slaves must be capable
+ * minimum values of the bonded members, as all members must be capable
* of supporting the same number of tx/rx queues.
*/
- if (internals->slave_count > 0) {
- struct rte_eth_dev_info slave_info;
+ if (internals->member_count > 0) {
+ struct rte_eth_dev_info member_info;
uint16_t idx;
- for (idx = 0; idx < internals->slave_count; idx++) {
- slave = internals->slaves[idx];
- ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+ for (idx = 0; idx < internals->member_count; idx++) {
+ member = internals->members[idx];
+ ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
__func__,
- slave.port_id,
+ member.port_id,
strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < max_nb_rx_queues)
- max_nb_rx_queues = slave_info.max_rx_queues;
+ if (member_info.max_rx_queues < max_nb_rx_queues)
+ max_nb_rx_queues = member_info.max_rx_queues;
- if (slave_info.max_tx_queues < max_nb_tx_queues)
- max_nb_tx_queues = slave_info.max_tx_queues;
+ if (member_info.max_tx_queues < max_nb_tx_queues)
+ max_nb_tx_queues = member_info.max_tx_queues;
}
}
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
uint16_t i;
struct bond_dev_private *internals = dev->data->dev_private;
- /* don't do this while a slave is being added */
+ /* don't do this while a member is being added */
rte_spinlock_lock(&internals->lock);
if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
else
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
if (res == ENOTSUP)
RTE_BOND_LOG(WARNING,
- "Setting VLAN filter on slave port %u not supported.",
+ "Setting VLAN filter on member port %u not supported.",
port_id);
}
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
{
- struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+ struct rte_eth_dev *bonded_ethdev, *member_ethdev;
struct bond_dev_private *internals;
- /* Default value for polling slave found is true as we don't want to
+ /* Default value for polling member found is true as we don't want to
* disable the polling thread if we cannot get the lock */
- int i, polling_slave_found = 1;
+ int i, polling_member_found = 1;
if (cb_arg == NULL)
return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
!internals->link_status_polling_enabled)
return;
- /* If device is currently being configured then don't check slaves link
+ /* If device is currently being configured then don't check members link
* status, wait until next period */
if (rte_spinlock_trylock(&internals->lock)) {
- if (internals->slave_count > 0)
- polling_slave_found = 0;
+ if (internals->member_count > 0)
+ polling_member_found = 0;
- for (i = 0; i < internals->slave_count; i++) {
- if (!internals->slaves[i].link_status_poll_enabled)
+ for (i = 0; i < internals->member_count; i++) {
+ if (!internals->members[i].link_status_poll_enabled)
continue;
- slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
- polling_slave_found = 1;
+ member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+ polling_member_found = 1;
- /* Update slave link status */
- (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
- internals->slaves[i].link_status_wait_to_complete);
+ /* Update member link status */
+ (*member_ethdev->dev_ops->link_update)(member_ethdev,
+ internals->members[i].link_status_wait_to_complete);
/* if link status has changed since last checked then call lsc
* event callback */
- if (slave_ethdev->data->dev_link.link_status !=
- internals->slaves[i].last_link_status) {
- bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+ if (member_ethdev->data->dev_link.link_status !=
+ internals->members[i].last_link_status) {
+ bond_ethdev_lsc_event_callback(internals->members[i].port_id,
RTE_ETH_EVENT_INTR_LSC,
&bonded_ethdev->data->port_id,
NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
rte_spinlock_unlock(&internals->lock);
}
- if (polling_slave_found)
- /* Set alarm to continue monitoring link status of slave ethdev's */
+ if (polling_member_found)
+ /* Set alarm to continue monitoring link status of member ethdev's */
rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor, cb_arg);
+ bond_ethdev_member_link_status_change_monitor, cb_arg);
}
static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
struct bond_dev_private *bond_ctx;
- struct rte_eth_link slave_link;
+ struct rte_eth_link member_link;
bool one_link_update_succeeded;
uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
- bond_ctx->active_slave_count == 0) {
+ bond_ctx->active_member_count == 0) {
ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
case BONDING_MODE_BROADCAST:
/**
* Setting link speed to UINT32_MAX to ensure we pick up the
- * value of the first active slave
+ * value of the first active member
*/
ethdev->data->dev_link.link_speed = UINT32_MAX;
/**
- * link speed is minimum value of all the slaves link speed as
- * packet loss will occur on this slave if transmission at rates
+ * link speed is minimum value of all the members link speed as
+ * packet loss will occur on this member if transmission at rates
* greater than this are attempted
*/
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
return 0;
}
- if (slave_link.link_speed <
+ if (member_link.link_speed <
ethdev->data->dev_link.link_speed)
ethdev->data->dev_link.link_speed =
- slave_link.link_speed;
+ member_link.link_speed;
}
break;
case BONDING_MODE_ACTIVE_BACKUP:
- /* Current primary slave */
- ret = link_update(bond_ctx->current_primary_port, &slave_link);
+ /* Current primary member */
+ ret = link_update(bond_ctx->current_primary_port, &member_link);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
bond_ctx->current_primary_port,
rte_strerror(-ret));
return 0;
}
- ethdev->data->dev_link.link_speed = slave_link.link_speed;
+ ethdev->data->dev_link.link_speed = member_link.link_speed;
break;
case BONDING_MODE_8023AD:
ethdev->data->dev_link.link_autoneg =
- bond_ctx->mode4.slave_link.link_autoneg;
+ bond_ctx->mode4.member_link.link_autoneg;
ethdev->data->dev_link.link_duplex =
- bond_ctx->mode4.slave_link.link_duplex;
+ bond_ctx->mode4.member_link.link_duplex;
/* fall through */
/* to update link speed */
case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
default:
/**
* In theses mode the maximum theoretical link speed is the sum
- * of all the slaves
+ * of all the members
*/
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
continue;
}
one_link_update_succeeded = true;
ethdev->data->dev_link.link_speed +=
- slave_link.link_speed;
+ member_link.link_speed;
}
if (!one_link_update_succeeded) {
- RTE_BOND_LOG(ERR, "All slaves link get failed");
+ RTE_BOND_LOG(ERR, "All members link get failed");
return 0;
}
}
@@ -2602,27 +2606,27 @@ static int
bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_eth_stats slave_stats;
+ struct rte_eth_stats member_stats;
int i, j;
- for (i = 0; i < internals->slave_count; i++) {
- rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+ for (i = 0; i < internals->member_count; i++) {
+ rte_eth_stats_get(internals->members[i].port_id, &member_stats);
- stats->ipackets += slave_stats.ipackets;
- stats->opackets += slave_stats.opackets;
- stats->ibytes += slave_stats.ibytes;
- stats->obytes += slave_stats.obytes;
- stats->imissed += slave_stats.imissed;
- stats->ierrors += slave_stats.ierrors;
- stats->oerrors += slave_stats.oerrors;
- stats->rx_nombuf += slave_stats.rx_nombuf;
+ stats->ipackets += member_stats.ipackets;
+ stats->opackets += member_stats.opackets;
+ stats->ibytes += member_stats.ibytes;
+ stats->obytes += member_stats.obytes;
+ stats->imissed += member_stats.imissed;
+ stats->ierrors += member_stats.ierrors;
+ stats->oerrors += member_stats.oerrors;
+ stats->rx_nombuf += member_stats.rx_nombuf;
for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
- stats->q_ipackets[j] += slave_stats.q_ipackets[j];
- stats->q_opackets[j] += slave_stats.q_opackets[j];
- stats->q_ibytes[j] += slave_stats.q_ibytes[j];
- stats->q_obytes[j] += slave_stats.q_obytes[j];
- stats->q_errors[j] += slave_stats.q_errors[j];
+ stats->q_ipackets[j] += member_stats.q_ipackets[j];
+ stats->q_opackets[j] += member_stats.q_opackets[j];
+ stats->q_ibytes[j] += member_stats.q_ibytes[j];
+ stats->q_obytes[j] += member_stats.q_obytes[j];
+ stats->q_errors[j] += member_stats.q_errors[j];
}
}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
int err;
int ret;
- for (i = 0, err = 0; i < internals->slave_count; i++) {
- ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+ for (i = 0, err = 0; i < internals->member_count; i++) {
+ ret = rte_eth_stats_reset(internals->members[i].port_id);
if (ret != 0)
err = ret;
}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_promiscuous_enable(port_id);
if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
"Failed to enable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
BOND_8023AD_FORCED_PROMISC) {
- slave_ok++;
+ member_ok++;
continue;
}
ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
"Failed to disable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As promiscuous mode is propagated to all slaves for these
+ /* As promiscuous mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As promiscuous mode is propagated only to primary slave
+ /* As promiscuous mode is propagated only to primary member
* for these mode. When active/standby switchover, promiscuous
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_allmulticast_enable(port_id);
if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
"Failed to enable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
"Failed to disable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * on one member. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As allmulticast mode is propagated to all slaves for these
+ /* As allmulticast mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As allmulticast mode is propagated only to primary slave
+ /* As allmulticast mode is propagated only to primary member
* for these mode. When active/standby switchover, allmulticast
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
int ret;
uint8_t lsc_flag = 0;
- int valid_slave = 0;
- uint16_t active_pos, slave_idx;
+ int valid_member = 0;
+ uint16_t active_pos, member_idx;
uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (!bonded_eth_dev->data->dev_started)
return rc;
- /* verify that port_id is a valid slave of bonded port */
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == port_id) {
- valid_slave = 1;
- slave_idx = i;
+ /* verify that port_id is a valid member of bonded port */
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == port_id) {
+ valid_member = 1;
+ member_idx = i;
break;
}
}
- if (!valid_slave)
+ if (!valid_member)
return rc;
/* Synchronize lsc callback parallel calls either by real link event
- * from the slaves PMDs or by the bonding PMD itself.
+ * from the members PMDs or by the bonding PMD itself.
*/
rte_spinlock_lock(&internals->lsc_lock);
/* Search for port in active port list */
- active_pos = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, port_id);
+ active_pos = find_member_by_id(internals->active_members,
+ internals->active_member_count, port_id);
ret = rte_eth_link_get_nowait(port_id, &link);
if (ret < 0)
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
if (ret == 0 && link.link_status) {
- if (active_pos < internals->active_slave_count)
+ if (active_pos < internals->active_member_count)
goto link_update;
/* check link state properties if bonded link is up*/
if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
- "for slave %d in bonding mode %d",
+ "for member %d in bonding mode %d",
port_id, internals->mode);
} else {
- /* inherit slave link properties */
+ /* inherit member link properties */
link_properties_set(bonded_eth_dev, &link);
}
- /* If no active slave ports then set this port to be
+ /* If no active member ports then set this port to be
* the primary port.
*/
- if (internals->active_slave_count < 1) {
- /* If first active slave, then change link status */
+ if (internals->active_member_count < 1) {
+ /* If first active member, then change link status */
bonded_eth_dev->data->dev_link.link_status =
RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
- activate_slave(bonded_eth_dev, port_id);
+ activate_member(bonded_eth_dev, port_id);
/* If the user has defined the primary port then default to
* using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
internals->primary_port == port_id)
bond_ethdev_primary_set(internals, port_id);
} else {
- if (active_pos == internals->active_slave_count)
+ if (active_pos == internals->active_member_count)
goto link_update;
- /* Remove from active slave list */
- deactivate_slave(bonded_eth_dev, port_id);
+ /* Remove from active member list */
+ deactivate_member(bonded_eth_dev, port_id);
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
lsc_flag = 1;
- /* Update primary id, take first active slave from list or if none
+ /* Update primary id, take first active member from list or if none
* available set to -1 */
if (port_id == internals->current_primary_port) {
- if (internals->active_slave_count > 0)
+ if (internals->active_member_count > 0)
bond_ethdev_primary_set(internals,
- internals->active_slaves[0]);
+ internals->active_members[0]);
else
internals->current_primary_port = internals->primary_port;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
link_update:
/**
* Update bonded device link properties after any change to active
- * slaves
+ * members
*/
bond_ethdev_link_update(bonded_eth_dev, 0);
- internals->slaves[slave_idx].last_link_status = link.link_status;
+ internals->members[member_idx].last_link_status = link.link_status;
if (lsc_flag) {
/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
{
unsigned i, j;
int result = 0;
- int slave_reta_size;
+ int member_reta_size;
unsigned reta_count;
struct bond_dev_private *internals = dev->data->dev_private;
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
sizeof(internals->reta_conf[0]) * reta_count);
- /* Propagate RETA over slaves */
- for (i = 0; i < internals->slave_count; i++) {
- slave_reta_size = internals->slaves[i].reta_size;
- result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
- &internals->reta_conf[0], slave_reta_size);
+ /* Propagate RETA over members */
+ for (i = 0; i < internals->member_count; i++) {
+ member_reta_size = internals->members[i].reta_size;
+ result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+ &internals->reta_conf[0], member_reta_size);
if (result < 0)
return result;
}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
bond_rss_conf.rss_key_len = internals->rss_key_len;
}
- for (i = 0; i < internals->slave_count; i++) {
- result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
&bond_rss_conf);
if (result < 0)
return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
static int
bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mtu_set == NULL) {
rte_spinlock_unlock(&internals->lock);
return -ENOTSUP;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
if (ret < 0) {
rte_spinlock_unlock(&internals->lock);
return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
struct rte_ether_addr *mac_addr,
__rte_unused uint32_t index, uint32_t vmdq)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
- *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+ *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
ret = -ENOTSUP;
goto end;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
mac_addr, vmdq);
if (ret < 0) {
/* rollback */
for (i--; i >= 0; i--)
rte_eth_dev_mac_addr_remove(
- internals->slaves[i].port_id, mac_addr);
+ internals->members[i].port_id, mac_addr);
goto end;
}
}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
static void
bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
goto end;
}
struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
- for (i = 0; i < internals->slave_count; i++)
- rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++)
+ rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
mac_addr);
end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
fprintf(f, "\n");
}
- if (internals->slave_count > 0) {
- fprintf(f, "\tSlaves (%u): [", internals->slave_count);
- for (i = 0; i < internals->slave_count - 1; i++)
- fprintf(f, "%u ", internals->slaves[i].port_id);
+ if (internals->member_count > 0) {
+ fprintf(f, "\tMembers (%u): [", internals->member_count);
+ for (i = 0; i < internals->member_count - 1; i++)
+ fprintf(f, "%u ", internals->members[i].port_id);
- fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+ fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
} else {
- fprintf(f, "\tSlaves: []\n");
+ fprintf(f, "\tMembers: []\n");
}
- if (internals->active_slave_count > 0) {
- fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
- for (i = 0; i < internals->active_slave_count - 1; i++)
- fprintf(f, "%u ", internals->active_slaves[i]);
+ if (internals->active_member_count > 0) {
+ fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+ for (i = 0; i < internals->active_member_count - 1; i++)
+ fprintf(f, "%u ", internals->active_members[i]);
- fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+ fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
} else {
- fprintf(f, "\tActive Slaves: []\n");
+ fprintf(f, "\tActive Members: []\n");
}
if (internals->user_defined_primary_port)
fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
- if (internals->slave_count > 0)
+ if (internals->member_count > 0)
fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
}
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
}
static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
{
char a_state[256] = { 0 };
char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
static void
dump_lacp(uint16_t port_id, FILE *f)
{
- struct rte_eth_bond_8023ad_slave_info slave_info;
+ struct rte_eth_bond_8023ad_member_info member_info;
struct rte_eth_bond_8023ad_conf port_conf;
- uint16_t slaves[RTE_MAX_ETHPORTS];
- int num_active_slaves;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ int num_active_members;
int i, ret;
fprintf(f, " - Lacp info:\n");
- num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+ num_active_members = rte_eth_bond_active_members_get(port_id, members,
RTE_MAX_ETHPORTS);
- if (num_active_slaves < 0) {
- fprintf(f, "\tFailed to get active slave list for port %u\n",
+ if (num_active_members < 0) {
+ fprintf(f, "\tFailed to get active member list for port %u\n",
port_id);
return;
}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
}
dump_lacp_conf(&port_conf, f);
- for (i = 0; i < num_active_slaves; i++) {
- ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
- &slave_info);
+ for (i = 0; i < num_active_members; i++) {
+ ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+ &member_info);
if (ret) {
- fprintf(f, "\tGet slave device %u 8023ad info failed\n",
- slaves[i]);
+ fprintf(f, "\tGet member device %u 8023ad info failed\n",
+ members[i]);
return;
}
- fprintf(f, "\tSlave Port: %u\n", slaves[i]);
- dump_lacp_slave(&slave_info, f);
+ fprintf(f, "\tMember Port: %u\n", members[i]);
+ dump_lacp_member(&member_info, f);
}
}
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->link_down_delay_ms = 0;
internals->link_up_delay_ms = 0;
- internals->slave_count = 0;
- internals->active_slave_count = 0;
+ internals->member_count = 0;
+ internals->active_member_count = 0;
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->rx_desc_lim.nb_align = 1;
internals->tx_desc_lim.nb_align = 1;
- memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
- memset(internals->slaves, 0, sizeof(internals->slaves));
+ memset(internals->active_members, 0, sizeof(internals->active_members));
+ memset(internals->members, 0, sizeof(internals->members));
TAILQ_INIT(&internals->flow_list);
internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
/* Parse link bonding mode */
if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
- &bond_ethdev_parse_slave_mode_kvarg,
+ &bond_ethdev_parse_member_mode_kvarg,
&bonding_mode) != 0) {
RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
RTE_ASSERT(eth_dev->device == &dev->device);
internals = eth_dev->data->dev_private;
- if (internals->slave_count != 0)
+ if (internals->member_count != 0)
return -EBUSY;
if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
return ret;
}
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
* have been allocated */
static int
bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
if ((link_speeds &
(internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
- RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+ RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
return -EINVAL;
}
/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- /* Parse/add slave ports to bonded device */
- if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
- struct bond_ethdev_slave_ports slave_ports;
+ /* Parse/add member ports to bonded device */
+ if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+ struct bond_ethdev_member_ports member_ports;
unsigned i;
- memset(&slave_ports, 0, sizeof(slave_ports));
+ memset(&member_ports, 0, sizeof(member_ports));
- if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
- &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+ if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+ &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to parse slave ports for bonded device %s",
+ "Failed to parse member ports for bonded device %s",
name);
return -1;
}
- for (i = 0; i < slave_ports.slave_count; i++) {
- if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+ for (i = 0; i < member_ports.member_count; i++) {
+ if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to add port %d as slave to bonded device %s",
- slave_ports.slaves[i], name);
+ "Failed to add port %d as member to bonded device %s",
+ member_ports.members[i], name);
}
}
} else {
- RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+ RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
return -1;
}
- /* Parse/set primary slave port id*/
- arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+ /* Parse/set primary member port id*/
+ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
if (arg_count == 1) {
- uint16_t primary_slave_port_id;
+ uint16_t primary_member_port_id;
if (rte_kvargs_process(kvlist,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
- &bond_ethdev_parse_primary_slave_port_id_kvarg,
- &primary_slave_port_id) < 0) {
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
+ &bond_ethdev_parse_primary_member_port_id_kvarg,
+ &primary_member_port_id) < 0) {
RTE_BOND_LOG(INFO,
- "Invalid primary slave port id specified for bonded device %s",
+ "Invalid primary member port id specified for bonded device %s",
name);
return -1;
}
/* Set balance mode transmit policy*/
- if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+ if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
!= 0) {
RTE_BOND_LOG(ERR,
- "Failed to set primary slave port %d on bonded device %s",
- primary_slave_port_id, name);
+ "Failed to set primary member port %d on bonded device %s",
+ primary_member_port_id, name);
return -1;
}
} else if (arg_count > 1) {
RTE_BOND_LOG(INFO,
- "Primary slave can be specified only once for bonded device %s",
+ "Primary member can be specified only once for bonded device %s",
name);
return -1;
}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
return -1;
}
- /* configure slaves so we can pass mtu setting */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(dev, slave_ethdev) != 0) {
+ /* configure members so we can pass mtu setting */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to configure slave device (%d)",
+ "bonded port (%d) failed to configure member device (%d)",
dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
- "slave=<ifc> "
+ "member=<ifc> "
"primary=<ifc> "
"mode=[0-6] "
"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
rte_eth_bond_8023ad_ext_distrib_get;
rte_eth_bond_8023ad_ext_slowtx;
rte_eth_bond_8023ad_setup;
- rte_eth_bond_8023ad_slave_info;
- rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
rte_eth_bond_mode_set;
rte_eth_bond_primary_get;
rte_eth_bond_primary_set;
- rte_eth_bond_slave_add;
- rte_eth_bond_slave_remove;
- rte_eth_bond_slaves_get;
rte_eth_bond_xmit_policy_get;
rte_eth_bond_xmit_policy_set;
local: *;
};
+
+EXPERIMENTAL {
+ # added in 23.07
+ global:
+ rte_eth_bond_8023ad_member_info;
+ rte_eth_bond_active_members_get;
+ rte_eth_bond_member_add;
+ rte_eth_bond_member_remove;
+ rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
RTE_ETHER_ADDR_BYTES(&addr))
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
static uint16_t BOND_PORT = 0xffff;
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
};
static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
{
int retval;
uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
"failed (res=%d)\n", BOND_PORT, retval);
- for (i = 0; i < slaves_count; i++) {
- if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
- rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
- slaves[i], BOND_PORT);
+ for (i = 0; i < members_count; i++) {
+ if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+ rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+ members[i], BOND_PORT);
}
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
if (retval < 0)
rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
- printf("Waiting for slaves to become active...");
+ printf("Waiting for members to become active...");
while (wait_counter) {
- uint16_t act_slaves[16] = {0};
- if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
- slaves_count) {
+ uint16_t act_members[16] = {0};
+ if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+ members_count) {
printf("\n");
break;
}
sleep(1);
printf("...");
if (--wait_counter == 0)
- rte_exit(-1, "\nFailed to activate slaves\n");
+ rte_exit(-1, "\nFailed to activate members\n");
}
retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
"send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
- "show - shows some bond info: ex. active slaves etc.\n"
+ "show - shows some bond info: ex. active members etc.\n"
"help - prints help.\n"
"quit - terminate all threads and quit.\n"
);
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
struct cmdline *cl,
__rte_unused void *data)
{
- uint16_t slaves[16] = {0};
+ uint16_t members[16] = {0};
uint8_t len = 16;
struct rte_ether_addr addr;
uint16_t i;
int ret;
- for (i = 0; i < slaves_count; i++) {
+ for (i = 0; i < members_count; i++) {
ret = rte_eth_macaddr_get(i, &addr);
if (ret != 0) {
cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
rte_spinlock_lock(&global_flag_stru_p->lock);
cmdline_printf(cl,
- "Active_slaves:%d "
+ "Active_members:%d "
"packets received:Tot:%d Arp:%d IPv4:%d\n",
- rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+ rte_eth_bond_active_members_get(BOND_PORT, members, len),
global_flag_stru_p->port_packets[0],
global_flag_stru_p->port_packets[1],
global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
/* initialize all ports */
- slaves_count = nb_ports;
+ members_count = nb_ports;
RTE_ETH_FOREACH_DEV(i) {
- slave_port_init(i, mbuf_pool);
- slaves[i] = i;
+ member_port_init(i, mbuf_pool);
+ members[i] = i;
}
bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0)
/** Device supports link state interrupt */
#define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE \
+ do { \
+ RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) \
+ RTE_ETH_DEV_BONDED_MEMBER \
+ } while (0)
/** Device supports device removal interrupt */
#define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3)
/** Device is port representor */
--
2.39.1
^ permalink raw reply [relevance 1%]
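[Editor's note: for readers tracking the slave -> member rename in the patch above, here is a minimal usage sketch. It only uses the renamed 23.07 symbols that the patch itself adds to version.map and exercises in examples/bond/main.c (rte_eth_bond_member_add, rte_eth_bond_active_members_get); the helper name, port array and error handling are illustrative assumptions, not part of the patch.]

/* Minimal sketch, assuming bond_port and ports[] are valid,
 * already configured ethdev port ids.
 */
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static int
attach_members(uint16_t bond_port, const uint16_t *ports, uint16_t n)
{
	uint16_t active[RTE_MAX_ETHPORTS];
	uint16_t i;

	for (i = 0; i < n; i++) {
		/* previously rte_eth_bond_slave_add() */
		if (rte_eth_bond_member_add(bond_port, ports[i]) != 0)
			return -1;
	}

	/* previously rte_eth_bond_active_slaves_get();
	 * returns the number of currently active member ports.
	 */
	return rte_eth_bond_active_members_get(bond_port, active,
			RTE_MAX_ETHPORTS);
}

This mirrors the examples/bond/main.c hunk in the patch, where the old slave-named calls are replaced one-for-one by the member-named ones.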
* Re: [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
2023-05-08 13:44 1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
@ 2023-05-17 15:47 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-17 15:47 UTC (permalink / raw)
To: jerinj
Cc: dev, Thomas Monjalon, Anatoly Burakov, david.marchand, ferruh.yigit
On Mon, May 8, 2023 at 7:15 PM <jerinj@marvell.com> wrote:
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
>
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
>
> Due to the above reasons, the driver is removed from DPDK 23.07.
>
> Also removed the deprecation notice entry for this removal from
> doc/guides/rel_notes/deprecation.rst and skipped the removed
> driver in the ABI check in devtools/libabigail.abignore.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
> v2:
> - Skip driver ABI check (Ferruh)
> - Addressed the review comments in
> http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)
Applied to dpdk-next-net-mrvl/for-next-net. Thanks
>
> MAINTAINERS | 8 -
> devtools/libabigail.abignore | 1 +
> doc/guides/nics/features/liquidio.ini | 29 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/liquidio.rst | 169 --
> doc/guides/rel_notes/deprecation.rst | 7 -
> doc/guides/rel_notes/release_23_07.rst | 2 +
> drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
> drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
> drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
> drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
> drivers/net/liquidio/base/lio_mbox.c | 246 ---
> drivers/net/liquidio/base/lio_mbox.h | 102 -
> drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
> drivers/net/liquidio/lio_ethdev.h | 179 --
> drivers/net/liquidio/lio_logs.h | 58 -
> drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
> drivers/net/liquidio/lio_rxtx.h | 740 --------
> drivers/net/liquidio/lio_struct.h | 661 -------
> drivers/net/liquidio/meson.build | 16 -
> drivers/net/meson.build | 1 -
> 21 files changed, 3 insertions(+), 7148 deletions(-)
> delete mode 100644 doc/guides/nics/features/liquidio.ini
> delete mode 100644 doc/guides/nics/liquidio.rst
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
> delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
> delete mode 100644 drivers/net/liquidio/lio_ethdev.c
> delete mode 100644 drivers/net/liquidio/lio_ethdev.h
> delete mode 100644 drivers/net/liquidio/lio_logs.h
> delete mode 100644 drivers/net/liquidio/lio_rxtx.c
> delete mode 100644 drivers/net/liquidio/lio_rxtx.h
> delete mode 100644 drivers/net/liquidio/lio_struct.h
> delete mode 100644 drivers/net/liquidio/meson.build
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8df23e5099..0157c26dd2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -681,14 +681,6 @@ F: drivers/net/thunderx/
> F: doc/guides/nics/thunderx.rst
> F: doc/guides/nics/features/thunderx.ini
>
> -Cavium LiquidIO - UNMAINTAINED
> -M: Shijith Thotton <sthotton@marvell.com>
> -M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
> -T: git://dpdk.org/next/dpdk-next-net-mrvl
> -F: drivers/net/liquidio/
> -F: doc/guides/nics/liquidio.rst
> -F: doc/guides/nics/features/liquidio.ini
> -
> Cavium OCTEON TX
> M: Harman Kalra <hkalra@marvell.com>
> T: git://dpdk.org/next/dpdk-next-net-mrvl
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 3ff51509de..c0361bfc7b 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -25,6 +25,7 @@
> ;
> ; SKIP_LIBRARY=librte_common_mlx5_glue
> ; SKIP_LIBRARY=librte_net_mlx4_glue
> +; SKIP_LIBRARY=librte_net_liquidio
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> ; Experimental APIs exceptions ;
> diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
> deleted file mode 100644
> index a8bde282e0..0000000000
> --- a/doc/guides/nics/features/liquidio.ini
> +++ /dev/null
> @@ -1,29 +0,0 @@
> -;
> -; Supported features of the 'LiquidIO' network poll mode driver.
> -;
> -; Refer to default.ini for the full list of available PMD features.
> -;
> -[Features]
> -Speed capabilities = Y
> -Link status = Y
> -Link status event = Y
> -MTU update = Y
> -Scattered Rx = Y
> -Promiscuous mode = Y
> -Allmulticast mode = Y
> -RSS hash = Y
> -RSS key update = Y
> -RSS reta update = Y
> -VLAN filter = Y
> -CRC offload = Y
> -VLAN offload = P
> -L3 checksum offload = Y
> -L4 checksum offload = Y
> -Inner L3 checksum = Y
> -Inner L4 checksum = Y
> -Basic stats = Y
> -Extended stats = Y
> -Multiprocess aware = Y
> -Linux = Y
> -x86-64 = Y
> -Usage doc = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 5c9d1edf5e..31296822e5 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -44,7 +44,6 @@ Network Interface Controller Drivers
> ipn3ke
> ixgbe
> kni
> - liquidio
> mana
> memif
> mlx4
> diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
> deleted file mode 100644
> index f893b3b539..0000000000
> --- a/doc/guides/nics/liquidio.rst
> +++ /dev/null
> @@ -1,169 +0,0 @@
> -.. SPDX-License-Identifier: BSD-3-Clause
> - Copyright(c) 2017 Cavium, Inc
> -
> -LiquidIO VF Poll Mode Driver
> -============================
> -
> -The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
> -Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
> -done using kernel driver.
> -
> -More information can be found at `Cavium Official Website
> -<http://cavium.com/LiquidIO_Adapters.html>`_.
> -
> -Supported LiquidIO Adapters
> ------------------------------
> -
> -- LiquidIO II CN2350 210SV/225SV
> -- LiquidIO II CN2350 210SVPT
> -- LiquidIO II CN2360 210SV/225SV
> -- LiquidIO II CN2360 210SVPT
> -
> -
> -SR-IOV: Prerequisites and Sample Application Notes
> ---------------------------------------------------
> -
> -This section provides instructions to configure SR-IOV with Linux OS.
> -
> -#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
> -
> - .. code-block:: console
> -
> - lspci -s <slot> -vvv
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - [...]
> - Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
> - [...]
> - Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
> - [...]
> - Kernel driver in use: LiquidIO
> -
> -#. Load the kernel module:
> -
> - .. code-block:: console
> -
> - modprobe liquidio
> -
> -#. Bring up the PF ports:
> -
> - .. code-block:: console
> -
> - ifconfig p4p1 up
> - ifconfig p4p2 up
> -
> -#. Change PF MTU if required:
> -
> - .. code-block:: console
> -
> - ifconfig p4p1 mtu 9000
> - ifconfig p4p2 mtu 9000
> -
> -#. Create VF device(s):
> -
> - Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
> - of the parent PF.
> -
> - .. code-block:: console
> -
> - echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
> - echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
> -
> -#. Assign VF MAC address:
> -
> - Assign MAC address to the VF using iproute2 utility. The syntax is::
> -
> - ip link set <PF iface> vf <VF id> mac <macaddr>
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
> -
> -#. Assign VF(s) to VM.
> -
> - The VF devices may be passed through to the guest VM using qemu or
> - virt-manager or virsh etc.
> -
> - Example qemu guest launch command:
> -
> - .. code-block:: console
> -
> - ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
> - -cpu host -m 4096 -smp 4 \
> - -drive file=<disk_file>,if=none,id=disk1,format=<type> \
> - -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
> - -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
> -
> -#. Running testpmd
> -
> - Refer to the document
> - :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
> - ``testpmd`` application.
> -
> - .. note::
> -
> - Use ``igb_uio`` instead of ``vfio-pci`` in VM.
> -
> - Example output:
> -
> - .. code-block:: console
> -
> - [...]
> - EAL: PCI device 0000:03:00.3 on NUMA socket 0
> - EAL: probe driver: 177d:9712 net_liovf
> - EAL: using IOMMU type 1 (Type 1)
> - PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
> - EAL: PCI device 0000:03:08.3 on NUMA socket 0
> - EAL: probe driver: 177d:9712 net_liovf
> - PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
> - Interactive-mode selected
> - USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> - Configuring Port 0 (socket 0)
> - PMD: net_liovf[03:00.3]INFO: Starting port 0
> - Port 0: F2:A8:1B:5E:B4:66
> - Configuring Port 1 (socket 0)
> - PMD: net_liovf[03:08.3]INFO: Starting port 1
> - Port 1: 32:76:CC:EE:56:D7
> - Checking link statuses...
> - Port 0 Link Up - speed 10000 Mbps - full-duplex
> - Port 1 Link Up - speed 10000 Mbps - full-duplex
> - Done
> - testpmd>
> -
> -#. Enabling VF promiscuous mode
> -
> - One VF per PF can be marked as trusted for promiscuous mode.
> -
> - .. code-block:: console
> -
> - ip link set dev <PF iface> vf <VF id> trust on
> -
> -
> -Limitations
> ------------
> -
> -VF MTU
> -~~~~~~
> -
> -VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
> -
> -VLAN offload
> -~~~~~~~~~~~~
> -
> -Tx VLAN insertion is not supported and consequently VLAN offload feature is
> -marked partial.
> -
> -Ring size
> -~~~~~~~~~
> -
> -Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
> -
> -CRC stripping
> -~~~~~~~~~~~~~
> -
> -LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..8e1cdd677a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -121,13 +121,6 @@ Deprecation Notices
> * net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
> This decision has been made to alleviate the burden of maintaining a discontinued product.
>
> -* net/liquidio: Remove LiquidIO ethdev driver.
> - The LiquidIO product line has been substituted
> - with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
> - DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> - because of the absence of updates in the driver.
> - Due to the above reasons, the driver will be unavailable from DPDK 23.07.
> -
> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> to have another parameter ``qp_id`` to return the queue pair ID
> which got error interrupt to the application,
> diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..f13a7b32b6 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -68,6 +68,8 @@ Removed Items
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
> +
>
> API Changes
> -----------
> diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
> deleted file mode 100644
> index 9f28504b53..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_reg.h
> +++ /dev/null
> @@ -1,165 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_REG_H_
> -#define _LIO_23XX_REG_H_
> -
> -/* ###################### REQUEST QUEUE ######################### */
> -
> -/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_START64 0x10010
> -
> -/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
> -
> -/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START 0x10030
> -
> -/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64 0x10040
> -
> -/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
> - * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
> - */
> -#define CN23XX_SLI_PKT_INPUT_CONTROL_START64 0x10000
> -
> -/* ------- Request Queue Macros --------- */
> -
> -/* Each Input Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_IQ_OFFSET 0x20000
> -
> -#define CN23XX_SLI_IQ_PKT_CONTROL64(iq) \
> - (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_BASE_ADDR64(iq) \
> - (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_SIZE(iq) \
> - (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_DOORBELL(iq) \
> - (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_INSTR_COUNT64(iq) \
> - (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -/* Number of instructions to be read in one MAC read request.
> - * setting to Max value(4)
> - */
> -#define CN23XX_PKT_INPUT_CTL_RDSIZE (3 << 25)
> -#define CN23XX_PKT_INPUT_CTL_IS_64B (1 << 24)
> -#define CN23XX_PKT_INPUT_CTL_RST (1 << 23)
> -#define CN23XX_PKT_INPUT_CTL_QUIET (1 << 28)
> -#define CN23XX_PKT_INPUT_CTL_RING_ENB (1 << 22)
> -#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP (1 << 6)
> -#define CN23XX_PKT_INPUT_CTL_USE_CSR (1 << 4)
> -#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP (2)
> -
> -/* These bits[47:44] select the Physical function number within the MAC */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS 45
> -/* These bits[43:32] select the function number within the PF */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS 32
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK \
> - (CN23XX_PKT_INPUT_CTL_RDSIZE | \
> - CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
> - CN23XX_PKT_INPUT_CTL_USE_CSR)
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK \
> - (CN23XX_PKT_INPUT_CTL_RDSIZE | \
> - CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
> - CN23XX_PKT_INPUT_CTL_USE_CSR | \
> - CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
> -#endif
> -
> -/* ############################ OUTPUT QUEUE ######################### */
> -
> -/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
> -#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START 0x10050
> -
> -/* 64 registers for Output queue buffer and info size
> - * SLI_PKT(0..63)_OUT_SIZE
> - */
> -#define CN23XX_SLI_PKT_OUT_SIZE 0x10060
> -
> -/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
> -#define CN23XX_SLI_SLIST_BADDR_START64 0x10070
> -
> -/* 64 registers for Output Queue Packet Credits
> - * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
> - */
> -#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
> -
> -/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START 0x10090
> -
> -/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_CNTS_START 0x100B0
> -
> -/* Each Output Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_OQ_OFFSET 0x20000
> -
> -/* ------- Output Queue Macros --------- */
> -
> -#define CN23XX_SLI_OQ_PKT_CONTROL(oq) \
> - (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BASE_ADDR64(oq) \
> - (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_SIZE(oq) \
> - (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq) \
> - (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_SENT(oq) \
> - (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_CREDIT(oq) \
> - (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -/* ------------------ Masks ---------------- */
> -#define CN23XX_PKT_OUTPUT_CTL_IPTR (1 << 11)
> -#define CN23XX_PKT_OUTPUT_CTL_ES (1 << 9)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR (1 << 8)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR (1 << 7)
> -#define CN23XX_PKT_OUTPUT_CTL_DPTR (1 << 6)
> -#define CN23XX_PKT_OUTPUT_CTL_BMODE (1 << 5)
> -#define CN23XX_PKT_OUTPUT_CTL_ES_P (1 << 3)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR_P (1 << 2)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR_P (1 << 1)
> -#define CN23XX_PKT_OUTPUT_CTL_RING_ENB (1 << 0)
> -
> -/* Rings per Virtual Function [RO] */
> -#define CN23XX_PKT_INPUT_CTL_RPVF_MASK 0x3F
> -#define CN23XX_PKT_INPUT_CTL_RPVF_POS 48
> -
> -/* These bits[47:44][RO] give the Physical function
> - * number info within the MAC
> - */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK 0x7
> -
> -/* These bits[43:32][RO] give the virtual function
> - * number info within the PF
> - */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK 0x1FFF
> -
> -/* ######################### Mailbox Reg Macros ######################## */
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START 0x10200
> -#define CN23XX_VF_SLI_PKT_MBOX_INT_START 0x10210
> -
> -#define CN23XX_SLI_MBOX_OFFSET 0x20000
> -#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET 0x8
> -
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx) \
> - (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START + \
> - ((q) * CN23XX_SLI_MBOX_OFFSET + \
> - (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
> -
> -#define CN23XX_VF_SLI_PKT_MBOX_INT(q) \
> - (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
> -
> -#endif /* _LIO_23XX_REG_H_ */
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
> deleted file mode 100644
> index c6b8310b71..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.c
> +++ /dev/null
> @@ -1,513 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <string.h>
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_23xx_reg.h"
> -#include "lio_mbox.h"
> -
> -static int
> -cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
> -{
> - uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
> - uint64_t d64, q_no;
> - int ret_val = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < num_queues; q_no++) {
> - /* set RST bit to 1. This bit applies to both IQ and OQ */
> - d64 = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - d64);
> - }
> -
> - /* wait until the RST bit is clear or the RST and QUIET bits are set */
> - for (q_no = 0; q_no < num_queues; q_no++) {
> - volatile uint64_t reg_val;
> -
> - reg_val = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
> - !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
> - loop) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - loop = loop - 1;
> - }
> -
> - if (loop == 0) {
> - lio_dev_err(lio_dev,
> - "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
> - (unsigned long)q_no);
> - return -1;
> - }
> -
> - reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> -
> - reg_val = lio_read_csr64(
> - lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
> - lio_dev_err(lio_dev,
> - "clearing the reset failed for qno: %lu\n",
> - (unsigned long)q_no);
> - ret_val = -1;
> - }
> - }
> -
> - return ret_val;
> -}
> -
> -static int
> -cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
> -{
> - uint64_t q_no;
> - uint64_t d64;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (cn23xx_vf_reset_io_queues(lio_dev,
> - lio_dev->sriov_info.rings_per_vf))
> - return -1;
> -
> - for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
> - 0xFFFFFFFF);
> -
> - d64 = lio_read_csr64(lio_dev,
> - CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
> -
> - d64 &= 0xEFFFFFFFFFFFFFFFL;
> -
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
> - d64);
> -
> - /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
> - * the Input Queues
> - */
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - CN23XX_PKT_INPUT_CTL_MASK);
> - }
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
> -{
> - uint32_t reg_val;
> - uint32_t q_no;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
> - 0xFFFFFFFF);
> -
> - reg_val =
> - lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
> -
> - reg_val &= 0xEFFFFFFFFFFFFFFFL;
> -
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
> -
> - reg_val =
> - lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> -
> - /* set IPTR & DPTR */
> - reg_val |=
> - (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
> -
> - /* reset BMODE */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
> -
> - /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> - * for Output Queue Scatter List
> - * reset ROR_P, NSR_P
> - */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#endif
> - /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> - * for Output Queue Data
> - * reset ROR, NSR
> - */
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
> - reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
> - /* set the ES bit */
> - reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
> -
> - /* write all the selected settings */
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> - reg_val);
> - }
> -}
> -
> -static int
> -cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
> -{
> - PMD_INIT_FUNC_TRACE();
> -
> - if (cn23xx_vf_setup_global_input_regs(lio_dev))
> - return -1;
> -
> - cn23xx_vf_setup_global_output_regs(lio_dev);
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - uint64_t pkt_in_done = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Write the start of the input queue's ring and its size */
> - lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
> - iq->base_addr_dma);
> - lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
> -
> - /* Remember the doorbell & instruction count register addr
> - * for this queue
> - */
> - iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_IQ_DOORBELL(iq_no);
> - iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
> - lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
> - iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
> -
> - /* Store the current instruction counter (used in flush_iq
> - * calculation)
> - */
> - pkt_in_done = rte_read64(iq->inst_cnt_reg);
> -
> - /* Clear the count by writing back what we read, but don't
> - * enable data traffic here
> - */
> - rte_write64(pkt_in_done, iq->inst_cnt_reg);
> -}
> -
> -static void
> -cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
> -{
> - struct lio_droq *droq = lio_dev->droq[oq_no];
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
> - droq->desc_ring_dma);
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
> -
> - lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
> - (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
> -
> - /* Get the mapped address of the pkt_sent and pkts_credit regs */
> - droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_OQ_PKTS_SENT(oq_no);
> - droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
> -}
> -
> -static void
> -cn23xx_vf_free_mbox(struct lio_device *lio_dev)
> -{
> - PMD_INIT_FUNC_TRACE();
> -
> - rte_free(lio_dev->mbox[0]);
> - lio_dev->mbox[0] = NULL;
> -
> - rte_free(lio_dev->mbox);
> - lio_dev->mbox = NULL;
> -}
> -
> -static int
> -cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
> -{
> - struct lio_mbox *mbox;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (lio_dev->mbox == NULL) {
> - lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
> - if (lio_dev->mbox == NULL)
> - return -ENOMEM;
> - }
> -
> - mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
> - if (mbox == NULL) {
> - rte_free(lio_dev->mbox);
> - lio_dev->mbox = NULL;
> - return -ENOMEM;
> - }
> -
> - rte_spinlock_init(&mbox->lock);
> -
> - mbox->lio_dev = lio_dev;
> -
> - mbox->q_no = 0;
> -
> - mbox->state = LIO_MBOX_STATE_IDLE;
> -
> - /* VF mbox interrupt reg */
> - mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_VF_SLI_PKT_MBOX_INT(0);
> - /* VF reads from SIG0 reg */
> - mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
> - /* VF writes into SIG1 reg */
> - mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
> - CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
> -
> - lio_dev->mbox[0] = mbox;
> -
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> - return 0;
> -}
> -
> -static int
> -cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
> -{
> - uint32_t q_no;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
> - uint64_t reg_val;
> -
> - /* set the corresponding IQ IS_64B bit */
> - if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
> - lio_write_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> - }
> -
> - /* set the corresponding IQ ENB bit */
> - if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
> - reg_val = lio_read_csr64(
> - lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> - reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
> - lio_write_csr64(lio_dev,
> - CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> - reg_val);
> - }
> - }
> - for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
> - uint32_t reg_val;
> -
> - /* set the corresponding OQ ENB bit */
> - if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
> - reg_val = lio_read_csr(
> - lio_dev,
> - CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> - reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
> - lio_write_csr(lio_dev,
> - CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> - reg_val);
> - }
> - }
> -
> - return 0;
> -}
> -
> -static void
> -cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
> -{
> - uint32_t num_queues;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* per HRM, rings can only be disabled via reset operation,
> - * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
> - */
> - num_queues = lio_dev->num_iqs;
> - if (num_queues < lio_dev->num_oqs)
> - num_queues = lio_dev->num_oqs;
> -
> - cn23xx_vf_reset_io_queues(lio_dev, num_queues);
> -}
> -
> -void
> -cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> -
> - memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
> - mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> - mbox_cmd.msg.s.resp_needed = 0;
> - mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
> - mbox_cmd.msg.s.len = 1;
> - mbox_cmd.q_no = 0;
> - mbox_cmd.recv_len = 0;
> - mbox_cmd.recv_status = 0;
> - mbox_cmd.fn = NULL;
> - mbox_cmd.fn_arg = 0;
> -
> - lio_mbox_write(lio_dev, &mbox_cmd);
> -}
> -
> -static void
> -cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *cmd, void *arg)
> -{
> - uint32_t major = 0;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
> - if (cmd->recv_len > 1) {
> - struct lio_version *lio_ver = (struct lio_version *)cmd->data;
> -
> - major = lio_ver->major;
> - major = major << 16;
> - }
> -
> - rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
> -}
> -
> -int
> -cn23xx_pfvf_handshake(struct lio_device *lio_dev)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> - struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
> - uint32_t q_no, count = 0;
> - rte_atomic64_t status;
> - uint32_t pfmajor;
> - uint32_t vfmajor;
> - uint32_t ret;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Sending VF_ACTIVE indication to the PF driver */
> - lio_dev_dbg(lio_dev, "requesting info from PF\n");
> -
> - mbox_cmd.msg.mbox_msg64 = 0;
> - mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> - mbox_cmd.msg.s.resp_needed = 1;
> - mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
> - mbox_cmd.msg.s.len = 2;
> - mbox_cmd.data[0] = 0;
> - lio_ver->major = LIO_BASE_MAJOR_VERSION;
> - lio_ver->minor = LIO_BASE_MINOR_VERSION;
> - lio_ver->micro = LIO_BASE_MICRO_VERSION;
> - mbox_cmd.q_no = 0;
> - mbox_cmd.recv_len = 0;
> - mbox_cmd.recv_status = 0;
> - mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
> - mbox_cmd.fn_arg = (void *)&status;
> -
> - if (lio_mbox_write(lio_dev, &mbox_cmd)) {
> - lio_dev_err(lio_dev, "Write to mailbox failed\n");
> - return -1;
> - }
> -
> - rte_atomic64_set(&status, 0);
> -
> - do {
> - rte_delay_ms(1);
> - } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
> -
> - ret = rte_atomic64_read(&status);
> - if (ret == 0) {
> - lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
> - return -1;
> - }
> -
> - for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
> - lio_dev->instr_queue[q_no]->txpciq.s.pkind =
> - lio_dev->pfvf_hsword.pkind;
> -
> - vfmajor = LIO_BASE_MAJOR_VERSION;
> - pfmajor = ret >> 16;
> - if (pfmajor != vfmajor) {
> - lio_dev_err(lio_dev,
> - "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
> - vfmajor, pfmajor);
> - ret = -EPERM;
> - } else {
> - lio_dev_dbg(lio_dev,
> - "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
> - vfmajor, pfmajor);
> - ret = 0;
> - }
> -
> - lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
> - lio_dev->pfvf_hsword.pkind);
> -
> - return ret;
> -}
> -
> -void
> -cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
> -{
> - uint64_t mbox_int_val;
> -
> - /* read and clear by writing 1 */
> - mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
> - rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
> - if (lio_mbox_read(lio_dev->mbox[0]))
> - lio_mbox_process_message(lio_dev->mbox[0]);
> -}
> -
> -int
> -cn23xx_vf_setup_device(struct lio_device *lio_dev)
> -{
> - uint64_t reg_val;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* INPUT_CONTROL[RPVF] gives the VF IOq count */
> - reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
> -
> - lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
> - CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
> - lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
> - CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
> -
> - reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
> -
> - lio_dev->sriov_info.rings_per_vf =
> - reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
> -
> - lio_dev->default_config = lio_get_conf(lio_dev);
> - if (lio_dev->default_config == NULL)
> - return -1;
> -
> - lio_dev->fn_list.setup_iq_regs = cn23xx_vf_setup_iq_regs;
> - lio_dev->fn_list.setup_oq_regs = cn23xx_vf_setup_oq_regs;
> - lio_dev->fn_list.setup_mbox = cn23xx_vf_setup_mbox;
> - lio_dev->fn_list.free_mbox = cn23xx_vf_free_mbox;
> -
> - lio_dev->fn_list.setup_device_regs = cn23xx_vf_setup_device_regs;
> -
> - lio_dev->fn_list.enable_io_queues = cn23xx_vf_enable_io_queues;
> - lio_dev->fn_list.disable_io_queues = cn23xx_vf_disable_io_queues;
> -
> - return 0;
> -}
> -
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
> deleted file mode 100644
> index 8e5362db15..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.h
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_VF_H_
> -#define _LIO_23XX_VF_H_
> -
> -#include <stdio.h>
> -
> -#include "lio_struct.h"
> -
> -static const struct lio_config default_cn23xx_conf = {
> - .card_type = LIO_23XX,
> - .card_name = LIO_23XX_NAME,
> - /** IQ attributes */
> - .iq = {
> - .max_iqs = CN23XX_CFG_IO_QUEUES,
> - .pending_list_size =
> - (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
> - .instr_type = OCTEON_64BYTE_INSTR,
> - },
> -
> - /** OQ attributes */
> - .oq = {
> - .max_oqs = CN23XX_CFG_IO_QUEUES,
> - .info_ptr = OCTEON_OQ_INFOPTR_MODE,
> - .refill_threshold = CN23XX_OQ_REFIL_THRESHOLD,
> - },
> -
> - .num_nic_ports = CN23XX_DEFAULT_NUM_PORTS,
> - .num_def_rx_descs = CN23XX_MAX_OQ_DESCRIPTORS,
> - .num_def_tx_descs = CN23XX_MAX_IQ_DESCRIPTORS,
> - .def_rx_buf_size = CN23XX_OQ_BUF_SIZE,
> -};
> -
> -static inline const struct lio_config *
> -lio_get_conf(struct lio_device *lio_dev)
> -{
> - const struct lio_config *default_lio_conf = NULL;
> -
> - /* check the LIO Device model & return the corresponding lio
> - * configuration
> - */
> - default_lio_conf = &default_cn23xx_conf;
> -
> - if (default_lio_conf == NULL) {
> - lio_dev_err(lio_dev, "Configuration verification failed\n");
> - return NULL;
> - }
> -
> - return default_lio_conf;
> -}
> -
> -#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000
> -
> -void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
> -
> -int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
> -
> -int cn23xx_vf_setup_device(struct lio_device *lio_dev);
> -
> -void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
> -#endif /* _LIO_23XX_VF_H_ */
> diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
> deleted file mode 100644
> index 5e119c1241..0000000000
> --- a/drivers/net/liquidio/base/lio_hw_defs.h
> +++ /dev/null
> @@ -1,239 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_HW_DEFS_H_
> -#define _LIO_HW_DEFS_H_
> -
> -#include <rte_io.h>
> -
> -#ifndef PCI_VENDOR_ID_CAVIUM
> -#define PCI_VENDOR_ID_CAVIUM 0x177D
> -#endif
> -
> -#define LIO_CN23XX_VF_VID 0x9712
> -
> -/* CN23xx subsystem device ids */
> -#define PCI_SUBSYS_DEV_ID_CN2350_210 0x0004
> -#define PCI_SUBSYS_DEV_ID_CN2360_210 0x0005
> -#define PCI_SUBSYS_DEV_ID_CN2360_225 0x0006
> -#define PCI_SUBSYS_DEV_ID_CN2350_225 0x0007
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3 0x0008
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3 0x0009
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT 0x000a
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT 0x000b
> -
> -/* --------------------------CONFIG VALUES------------------------ */
> -
> -/* CN23xx IQ configuration macros */
> -#define CN23XX_MAX_RINGS_PER_PF 64
> -#define CN23XX_MAX_RINGS_PER_VF 8
> -
> -#define CN23XX_MAX_INPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_IQ_DESCRIPTORS 512
> -#define CN23XX_MIN_IQ_DESCRIPTORS 128
> -
> -#define CN23XX_MAX_OUTPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_OQ_DESCRIPTORS 512
> -#define CN23XX_MIN_OQ_DESCRIPTORS 128
> -#define CN23XX_OQ_BUF_SIZE 1536
> -
> -#define CN23XX_OQ_REFIL_THRESHOLD 16
> -
> -#define CN23XX_DEFAULT_NUM_PORTS 1
> -
> -#define CN23XX_CFG_IO_QUEUES CN23XX_MAX_RINGS_PER_PF
> -
> -/* common OCTEON configuration macros */
> -#define OCTEON_64BYTE_INSTR 64
> -#define OCTEON_OQ_INFOPTR_MODE 1
> -
> -/* Max IOQs per LIO Link */
> -#define LIO_MAX_IOQS_PER_IF 64
> -
> -/* Wait time in milliseconds for FLR */
> -#define LIO_PCI_FLR_WAIT 100
> -
> -enum lio_card_type {
> - LIO_23XX /* 23xx */
> -};
> -
> -#define LIO_23XX_NAME "23xx"
> -
> -#define LIO_DEV_RUNNING 0xc
> -
> -#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg) \
> - ((cfg)->default_config->oq.refill_threshold)
> -#define LIO_NUM_DEF_TX_DESCS_CFG(cfg) \
> - ((cfg)->default_config->num_def_tx_descs)
> -
> -#define LIO_IQ_INSTR_TYPE(cfg) ((cfg)->default_config->iq.instr_type)
> -
> -/* The following config values are fixed and should not be modified. */
> -
> -/* Maximum number of Instruction queues */
> -#define LIO_MAX_INSTR_QUEUES(lio_dev) CN23XX_MAX_RINGS_PER_VF
> -
> -#define LIO_MAX_POSSIBLE_INSTR_QUEUES CN23XX_MAX_INPUT_QUEUES
> -#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES CN23XX_MAX_OUTPUT_QUEUES
> -
> -#define LIO_DEVICE_NAME_LEN 32
> -#define LIO_BASE_MAJOR_VERSION 1
> -#define LIO_BASE_MINOR_VERSION 5
> -#define LIO_BASE_MICRO_VERSION 1
> -
> -#define LIO_FW_VERSION_LENGTH 32
> -
> -#define LIO_Q_RECONF_MIN_VERSION "1.7.0"
> -#define LIO_VF_TRUST_MIN_VERSION "1.7.1"
> -
> -/** Tag types used by Octeon cores in its work. */
> -enum octeon_tag_type {
> - OCTEON_ORDERED_TAG = 0,
> - OCTEON_ATOMIC_TAG = 1,
> -};
> -
> -/* pre-defined host->NIC tag values */
> -#define LIO_CONTROL (0x11111110)
> -#define LIO_DATA(i) (0x11111111 + (i))
> -
> -/* used for NIC operations */
> -#define LIO_OPCODE 1
> -
> -/* Subcodes are used by host driver/apps to identify the sub-operation
> - * for the core. They only need to by unique for a given subsystem.
> - */
> -#define LIO_OPCODE_SUBCODE(op, sub) \
> - ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
> -
> -/** LIO_OPCODE subcodes */
> -/* This subcode is sent by core PCI driver to indicate cores are ready. */
> -#define LIO_OPCODE_NW_DATA 0x02 /* network packet data */
> -#define LIO_OPCODE_CMD 0x03
> -#define LIO_OPCODE_INFO 0x04
> -#define LIO_OPCODE_PORT_STATS 0x05
> -#define LIO_OPCODE_IF_CFG 0x09
> -
> -#define LIO_MIN_RX_BUF_SIZE 64
> -#define LIO_MAX_RX_PKTLEN (64 * 1024)
> -
> -/* NIC Command types */
> -#define LIO_CMD_CHANGE_MTU 0x1
> -#define LIO_CMD_CHANGE_DEVFLAGS 0x3
> -#define LIO_CMD_RX_CTL 0x4
> -#define LIO_CMD_CLEAR_STATS 0x6
> -#define LIO_CMD_SET_RSS 0xD
> -#define LIO_CMD_TNL_RX_CSUM_CTL 0x10
> -#define LIO_CMD_TNL_TX_CSUM_CTL 0x11
> -#define LIO_CMD_ADD_VLAN_FILTER 0x17
> -#define LIO_CMD_DEL_VLAN_FILTER 0x18
> -#define LIO_CMD_VXLAN_PORT_CONFIG 0x19
> -#define LIO_CMD_QUEUE_COUNT_CTL 0x1f
> -
> -#define LIO_CMD_VXLAN_PORT_ADD 0x0
> -#define LIO_CMD_VXLAN_PORT_DEL 0x1
> -#define LIO_CMD_RXCSUM_ENABLE 0x0
> -#define LIO_CMD_TXCSUM_ENABLE 0x0
> -
> -/* RX(packets coming from wire) Checksum verification flags */
> -/* TCP/UDP csum */
> -#define LIO_L4_CSUM_VERIFIED 0x1
> -#define LIO_IP_CSUM_VERIFIED 0x2
> -
> -/* RSS */
> -#define LIO_RSS_PARAM_DISABLE_RSS 0x10
> -#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED 0x08
> -#define LIO_RSS_PARAM_ITABLE_UNCHANGED 0x04
> -#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED 0x02
> -
> -#define LIO_RSS_HASH_IPV4 0x100
> -#define LIO_RSS_HASH_TCP_IPV4 0x200
> -#define LIO_RSS_HASH_IPV6 0x400
> -#define LIO_RSS_HASH_TCP_IPV6 0x1000
> -#define LIO_RSS_HASH_IPV6_EX 0x800
> -#define LIO_RSS_HASH_TCP_IPV6_EX 0x2000
> -
> -#define LIO_RSS_OFFLOAD_ALL ( \
> - LIO_RSS_HASH_IPV4 | \
> - LIO_RSS_HASH_TCP_IPV4 | \
> - LIO_RSS_HASH_IPV6 | \
> - LIO_RSS_HASH_TCP_IPV6 | \
> - LIO_RSS_HASH_IPV6_EX | \
> - LIO_RSS_HASH_TCP_IPV6_EX)
> -
> -#define LIO_RSS_MAX_TABLE_SZ 128
> -#define LIO_RSS_MAX_KEY_SZ 40
> -#define LIO_RSS_PARAM_SIZE 16
> -
> -/* Interface flags communicated between host driver and core app. */
> -enum lio_ifflags {
> - LIO_IFFLAG_PROMISC = 0x01,
> - LIO_IFFLAG_ALLMULTI = 0x02,
> - LIO_IFFLAG_UNICAST = 0x10
> -};
> -
> -/* Routines for reading and writing CSRs */
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define lio_write_csr(lio_dev, reg_off, value) \
> - do { \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - typeof(value) _value = value; \
> - PMD_REGS_LOG(_dev, \
> - "Write32: Reg: 0x%08lx Val: 0x%08lx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long)_value); \
> - rte_write32(_value, _dev->hw_addr + _reg_off); \
> - } while (0)
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64) \
> - do { \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - typeof(val64) _val64 = val64; \
> - PMD_REGS_LOG( \
> - _dev, \
> - "Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long long)_val64); \
> - rte_write64(_val64, _dev->hw_addr + _reg_off); \
> - } while (0)
> -
> -#define lio_read_csr(lio_dev, reg_off) \
> - ({ \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - uint32_t val = rte_read32(_dev->hw_addr + _reg_off); \
> - PMD_REGS_LOG(_dev, \
> - "Read32: Reg: 0x%08lx Val: 0x%08lx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long)val); \
> - val; \
> - })
> -
> -#define lio_read_csr64(lio_dev, reg_off) \
> - ({ \
> - typeof(lio_dev) _dev = lio_dev; \
> - typeof(reg_off) _reg_off = reg_off; \
> - uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off); \
> - PMD_REGS_LOG( \
> - _dev, \
> - "Read64: Reg: 0x%08lx Val: 0x%016llx\n", \
> - (unsigned long)_reg_off, \
> - (unsigned long long)val64); \
> - val64; \
> - })
> -#else
> -#define lio_write_csr(lio_dev, reg_off, value) \
> - rte_write32(value, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64) \
> - rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr(lio_dev, reg_off) \
> - rte_read32((lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr64(lio_dev, reg_off) \
> - rte_read64((lio_dev)->hw_addr + (reg_off))
> -#endif
> -#endif /* _LIO_HW_DEFS_H_ */
> diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
> deleted file mode 100644
> index 2ac2b1b334..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.c
> +++ /dev/null
> @@ -1,246 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_mbox.h"
> -
> -/**
> - * lio_mbox_read:
> - * @mbox: Pointer mailbox
> - *
> - * Reads the 8-bytes of data from the mbox register
> - * Writes back the acknowledgment indicating completion of read
> - */
> -int
> -lio_mbox_read(struct lio_mbox *mbox)
> -{
> - union lio_mbox_message msg;
> - int ret = 0;
> -
> - msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
> -
> - if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
> - return 0;
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> - mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
> - msg.mbox_msg64;
> - mbox->mbox_req.recv_len++;
> - } else {
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> - mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
> - msg.mbox_msg64;
> - mbox->mbox_resp.recv_len++;
> - } else {
> - if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
> - (msg.s.type == LIO_MBOX_REQUEST)) {
> - mbox->state &= ~LIO_MBOX_STATE_IDLE;
> - mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
> - mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
> - mbox->mbox_req.q_no = mbox->q_no;
> - mbox->mbox_req.recv_len = 1;
> - } else {
> - if ((mbox->state &
> - LIO_MBOX_STATE_RES_PENDING) &&
> - (msg.s.type == LIO_MBOX_RESPONSE)) {
> - mbox->state &=
> - ~LIO_MBOX_STATE_RES_PENDING;
> - mbox->state |=
> - LIO_MBOX_STATE_RES_RECEIVING;
> - mbox->mbox_resp.msg.mbox_msg64 =
> - msg.mbox_msg64;
> - mbox->mbox_resp.q_no = mbox->q_no;
> - mbox->mbox_resp.recv_len = 1;
> - } else {
> - rte_write64(LIO_PFVFERR,
> - mbox->mbox_read_reg);
> - mbox->state |= LIO_MBOX_STATE_ERROR;
> - return -1;
> - }
> - }
> - }
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> - if (mbox->mbox_req.recv_len < msg.s.len) {
> - ret = 0;
> - } else {
> - mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
> - mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
> - ret = 1;
> - }
> - } else {
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> - if (mbox->mbox_resp.recv_len < msg.s.len) {
> - ret = 0;
> - } else {
> - mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
> - mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
> - ret = 1;
> - }
> - } else {
> - RTE_ASSERT(0);
> - }
> - }
> -
> - rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
> -
> - return ret;
> -}
> -
> -/**
> - * lio_mbox_write:
> - * @lio_dev: Pointer lio device
> - * @mbox_cmd: Cmd to send to mailbox.
> - *
> - * Populates the queue specific mbox structure
> - * with cmd information.
> - * Write the cmd to mbox register
> - */
> -int
> -lio_mbox_write(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *mbox_cmd)
> -{
> - struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
> - uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
> -
> - if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
> - !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
> - return LIO_MBOX_STATUS_FAILED;
> -
> - if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
> - !(mbox->state & LIO_MBOX_STATE_IDLE))
> - return LIO_MBOX_STATUS_BUSY;
> -
> - if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
> - rte_memcpy(&mbox->mbox_resp, mbox_cmd,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_RES_PENDING;
> - }
> -
> - count = 0;
> -
> - while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
> - rte_delay_ms(1);
> - if (count++ == 1000) {
> - ret = LIO_MBOX_STATUS_FAILED;
> - break;
> - }
> - }
> -
> - if (ret == LIO_MBOX_STATUS_SUCCESS) {
> - rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
> - for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
> - count = 0;
> - while (rte_read64(mbox->mbox_write_reg) !=
> - LIO_PFVFACK) {
> - rte_delay_ms(1);
> - if (count++ == 1000) {
> - ret = LIO_MBOX_STATUS_FAILED;
> - break;
> - }
> - }
> - rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
> - }
> - }
> -
> - if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - } else {
> - if ((!mbox_cmd->msg.s.resp_needed) ||
> - (ret == LIO_MBOX_STATUS_FAILED)) {
> - mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
> - if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
> - LIO_MBOX_STATE_REQ_RECEIVED)))
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - }
> - }
> -
> - return ret;
> -}
> -
> -/**
> - * lio_mbox_process_cmd:
> - * @mbox: Pointer mailbox
> - * @mbox_cmd: Pointer to command received
> - *
> - * Process the cmd received in mbox
> - */
> -static int
> -lio_mbox_process_cmd(struct lio_mbox *mbox,
> - struct lio_mbox_cmd *mbox_cmd)
> -{
> - struct lio_device *lio_dev = mbox->lio_dev;
> -
> - if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
> - lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
> -
> - return 0;
> -}
> -
> -/**
> - * Process the received mbox message.
> - */
> -int
> -lio_mbox_process_message(struct lio_mbox *mbox)
> -{
> - struct lio_mbox_cmd mbox_cmd;
> -
> - if (mbox->state & LIO_MBOX_STATE_ERROR) {
> - if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
> - LIO_MBOX_STATE_RES_RECEIVING)) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - mbox_cmd.recv_status = 1;
> - if (mbox_cmd.fn)
> - mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
> - mbox_cmd.fn_arg);
> -
> - return 0;
> - }
> -
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> - return 0;
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> - sizeof(struct lio_mbox_cmd));
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - mbox_cmd.recv_status = 0;
> - if (mbox_cmd.fn)
> - mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
> -
> - return 0;
> - }
> -
> - if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
> - rte_memcpy(&mbox_cmd, &mbox->mbox_req,
> - sizeof(struct lio_mbox_cmd));
> - if (!mbox_cmd.msg.s.resp_needed) {
> - mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
> - if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
> - mbox->state = LIO_MBOX_STATE_IDLE;
> - rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> - }
> -
> - lio_mbox_process_cmd(mbox, &mbox_cmd);
> -
> - return 0;
> - }
> -
> - RTE_ASSERT(0);
> -
> - return 0;
> -}
> diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
> deleted file mode 100644
> index 457917e91f..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.h
> +++ /dev/null
> @@ -1,102 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_MBOX_H_
> -#define _LIO_MBOX_H_
> -
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -
> -/* Macros for Mail Box Communication */
> -
> -#define LIO_MBOX_DATA_MAX 32
> -
> -#define LIO_VF_ACTIVE 0x1
> -#define LIO_VF_FLR_REQUEST 0x2
> -#define LIO_CORES_CRASHED 0x3
> -
> -/* Macro for Read acknowledgment */
> -#define LIO_PFVFACK 0xffffffffffffffff
> -#define LIO_PFVFSIG 0x1122334455667788
> -#define LIO_PFVFERR 0xDEADDEADDEADDEAD
> -
> -enum lio_mbox_cmd_status {
> - LIO_MBOX_STATUS_SUCCESS = 0,
> - LIO_MBOX_STATUS_FAILED = 1,
> - LIO_MBOX_STATUS_BUSY = 2
> -};
> -
> -enum lio_mbox_message_type {
> - LIO_MBOX_REQUEST = 0,
> - LIO_MBOX_RESPONSE = 1
> -};
> -
> -union lio_mbox_message {
> - uint64_t mbox_msg64;
> - struct {
> - uint16_t type : 1;
> - uint16_t resp_needed : 1;
> - uint16_t cmd : 6;
> - uint16_t len : 8;
> - uint8_t params[6];
> - } s;
> -};
> -
> -typedef void (*lio_mbox_callback)(void *, void *, void *);
> -
> -struct lio_mbox_cmd {
> - union lio_mbox_message msg;
> - uint64_t data[LIO_MBOX_DATA_MAX];
> - uint32_t q_no;
> - uint32_t recv_len;
> - uint32_t recv_status;
> - lio_mbox_callback fn;
> - void *fn_arg;
> -};
> -
> -enum lio_mbox_state {
> - LIO_MBOX_STATE_IDLE = 1,
> - LIO_MBOX_STATE_REQ_RECEIVING = 2,
> - LIO_MBOX_STATE_REQ_RECEIVED = 4,
> - LIO_MBOX_STATE_RES_PENDING = 8,
> - LIO_MBOX_STATE_RES_RECEIVING = 16,
> - LIO_MBOX_STATE_RES_RECEIVED = 16,
> - LIO_MBOX_STATE_ERROR = 32
> -};
> -
> -struct lio_mbox {
> - /* A spinlock to protect access to this q_mbox. */
> - rte_spinlock_t lock;
> -
> - struct lio_device *lio_dev;
> -
> - uint32_t q_no;
> -
> - enum lio_mbox_state state;
> -
> - /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
> - void *mbox_int_reg;
> -
> - /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
> - * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
> - */
> - void *mbox_write_reg;
> -
> - /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
> - * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
> - */
> - void *mbox_read_reg;
> -
> - struct lio_mbox_cmd mbox_req;
> -
> - struct lio_mbox_cmd mbox_resp;
> -
> -};
> -
> -int lio_mbox_read(struct lio_mbox *mbox);
> -int lio_mbox_write(struct lio_device *lio_dev,
> - struct lio_mbox_cmd *mbox_cmd);
> -int lio_mbox_process_message(struct lio_mbox *mbox);
> -#endif /* _LIO_MBOX_H_ */
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> deleted file mode 100644
> index ebcfbb1a5c..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ /dev/null
> @@ -1,2147 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <rte_string_fns.h>
> -#include <ethdev_driver.h>
> -#include <ethdev_pci.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -#include <rte_alarm.h>
> -#include <rte_ether.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -/* Default RSS key in use */
> -static uint8_t lio_rss_key[40] = {
> - 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
> - 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
> - 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
> - 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
> - 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
> -};
> -
> -static const struct rte_eth_desc_lim lio_rx_desc_lim = {
> - .nb_max = CN23XX_MAX_OQ_DESCRIPTORS,
> - .nb_min = CN23XX_MIN_OQ_DESCRIPTORS,
> - .nb_align = 1,
> -};
> -
> -static const struct rte_eth_desc_lim lio_tx_desc_lim = {
> - .nb_max = CN23XX_MAX_IQ_DESCRIPTORS,
> - .nb_min = CN23XX_MIN_IQ_DESCRIPTORS,
> - .nb_align = 1,
> -};
> -
> -/* Wait for control command to reach nic. */
> -static uint16_t
> -lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
> - struct lio_dev_ctrl_cmd *ctrl_cmd)
> -{
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -
> - while ((ctrl_cmd->cond == 0) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> - rte_delay_ms(1);
> - }
> -
> - return !timeout;
> -}
> -
> -/**
> - * \brief Send Rx control command
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - * @param start_stop whether to start or stop
> - */
> -static int
> -lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> - * incase the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
> - ctrl_pkt.ncmd.s.param1 = start_stop;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send RX Control message\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "RX Control command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/* store statistics names and its offset in stats structure */
> -struct rte_lio_xstats_name_off {
> - char name[RTE_ETH_XSTATS_NAME_SIZE];
> - unsigned int offset;
> -};
> -
> -static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
> - {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
> - {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
> - {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
> - {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
> - {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
> - {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
> - {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
> - {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
> - {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
> - {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
> - {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
> - {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
> - {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
> - {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_broadcast_pkts",
> - (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_multicast_pkts",
> - (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
> - total_collisions)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
> - sizeof(struct octeon_rx_stats)},
> - {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
> - sizeof(struct octeon_rx_stats)},
> -};
> -
> -#define LIO_NB_XSTATS RTE_DIM(rte_lio_stats_strings)
> -
> -/* Get hw stats of the port */
> -static int
> -lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
> - unsigned int n)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - struct octeon_link_stats *hw_stats;
> - struct lio_link_stats_resp *resp;
> - struct lio_soft_command *sc;
> - uint32_t resp_size;
> - unsigned int i;
> - int retval;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (n < LIO_NB_XSTATS)
> - return LIO_NB_XSTATS;
> -
> - resp_size = sizeof(struct lio_link_stats_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return -ENOMEM;
> -
> - resp = (struct lio_link_stats_resp *)sc->virtrptr;
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_PORT_STATS, 0, 0, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
> - retval);
> - goto get_stats_fail;
> - }
> -
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - lio_process_ordered_list(lio_dev);
> - rte_delay_ms(1);
> - }
> -
> - retval = resp->status;
> - if (retval) {
> - lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
> - goto get_stats_fail;
> - }
> -
> - lio_swap_8B_data((uint64_t *)(&resp->link_stats),
> - sizeof(struct octeon_link_stats) >> 3);
> -
> - hw_stats = &resp->link_stats;
> -
> - for (i = 0; i < LIO_NB_XSTATS; i++) {
> - xstats[i].id = i;
> - xstats[i].value =
> - *(uint64_t *)(((char *)hw_stats) +
> - rte_lio_stats_strings[i].offset);
> - }
> -
> - lio_free_soft_command(sc);
> -
> - return LIO_NB_XSTATS;
> -
> -get_stats_fail:
> - lio_free_soft_command(sc);
> -
> - return -1;
> -}
> -
> -static int
> -lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
> - struct rte_eth_xstat_name *xstats_names,
> - unsigned limit __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - unsigned int i;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (xstats_names == NULL)
> - return LIO_NB_XSTATS;
> -
> - /* Note: limit checked in rte_eth_xstats_names() */
> -
> - for (i = 0; i < LIO_NB_XSTATS; i++) {
> - snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
> - "%s", rte_lio_stats_strings[i].name);
> - }
> -
> - return LIO_NB_XSTATS;
> -}
> -
> -/* Reset hw stats for the port */
> -static int
> -lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> - int ret;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
> - if (ret != 0) {
> - lio_dev_err(lio_dev, "Failed to send clear stats command\n");
> - return ret;
> - }
> -
> - ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
> - if (ret != 0) {
> - lio_dev_err(lio_dev, "Clear stats command timed out\n");
> - return ret;
> - }
> -
> - /* clear stored per queue stats */
> - if (*eth_dev->dev_ops->stats_reset == NULL)
> - return 0;
> - return (*eth_dev->dev_ops->stats_reset)(eth_dev);
> -}
> -
> -/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc */
> -static int
> -lio_dev_stats_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_stats *stats)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_droq_stats *oq_stats;
> - struct lio_iq_stats *iq_stats;
> - struct lio_instr_queue *txq;
> - struct lio_droq *droq;
> - int i, iq_no, oq_no;
> - uint64_t bytes = 0;
> - uint64_t pkts = 0;
> - uint64_t drop = 0;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> - txq = lio_dev->instr_queue[iq_no];
> - if (txq != NULL) {
> - iq_stats = &txq->stats;
> - pkts += iq_stats->tx_done;
> - drop += iq_stats->tx_dropped;
> - bytes += iq_stats->tx_tot_bytes;
> - }
> - }
> -
> - stats->opackets = pkts;
> - stats->obytes = bytes;
> - stats->oerrors = drop;
> -
> - pkts = 0;
> - drop = 0;
> - bytes = 0;
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> - droq = lio_dev->droq[oq_no];
> - if (droq != NULL) {
> - oq_stats = &droq->stats;
> - pkts += oq_stats->rx_pkts_received;
> - drop += (oq_stats->rx_dropped +
> - oq_stats->dropped_toomany +
> - oq_stats->dropped_nomem);
> - bytes += oq_stats->rx_bytes_received;
> - }
> - }
> - stats->ibytes = bytes;
> - stats->ipackets = pkts;
> - stats->ierrors = drop;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_droq_stats *oq_stats;
> - struct lio_iq_stats *iq_stats;
> - struct lio_instr_queue *txq;
> - struct lio_droq *droq;
> - int i, iq_no, oq_no;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> - txq = lio_dev->instr_queue[iq_no];
> - if (txq != NULL) {
> - iq_stats = &txq->stats;
> - memset(iq_stats, 0, sizeof(struct lio_iq_stats));
> - }
> - }
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> - droq = lio_dev->droq[oq_no];
> - if (droq != NULL) {
> - oq_stats = &droq->stats;
> - memset(oq_stats, 0, sizeof(struct lio_droq_stats));
> - }
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_info_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_dev_info *devinfo)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> -
> - switch (pci_dev->id.subsystem_device_id) {
> - /* CN23xx 10G cards */
> - case PCI_SUBSYS_DEV_ID_CN2350_210:
> - case PCI_SUBSYS_DEV_ID_CN2360_210:
> - case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
> - case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
> - case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
> - case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> - break;
> - /* CN23xx 25G cards */
> - case PCI_SUBSYS_DEV_ID_CN2350_225:
> - case PCI_SUBSYS_DEV_ID_CN2360_225:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
> - break;
> - default:
> - devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> - lio_dev_err(lio_dev,
> - "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
> - return -EINVAL;
> - }
> -
> - devinfo->max_rx_queues = lio_dev->max_rx_queues;
> - devinfo->max_tx_queues = lio_dev->max_tx_queues;
> -
> - devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
> - devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
> -
> - devinfo->max_mac_addrs = 1;
> -
> - devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> - RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
> - RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
> - RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> - RTE_ETH_RX_OFFLOAD_RSS_HASH);
> - devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
> - RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> - RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
> - RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
> -
> - devinfo->rx_desc_lim = lio_rx_desc_lim;
> - devinfo->tx_desc_lim = lio_tx_desc_lim;
> -
> - devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
> - devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> - devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
> - RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> - RTE_ETH_RSS_IPV6 |
> - RTE_ETH_RSS_NONFRAG_IPV6_TCP |
> - RTE_ETH_RSS_IPV6_EX |
> - RTE_ETH_RSS_IPV6_TCP_EX);
> - return 0;
> -}
> -
> -static int
> -lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
> - ctrl_pkt.ncmd.s.param1 = mtu;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Command to change MTU timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_reta_entry64 *reta_conf,
> - uint16_t reta_size)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct lio_rss_set *rss_param;
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> - int i, j, index;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> - lio_dev_err(lio_dev,
> -			    "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
> - reta_size, LIO_RSS_MAX_TABLE_SZ);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> - ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - rss_param->param.flags = 0xF;
> - rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
> - rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
> -
> - for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
> - for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
> - if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
> - index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
> - rss_state->itable[index] = reta_conf[i].reta[j];
> - }
> - }
> - }
> -
> - rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
> - memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
> -
> - lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to set rss hash\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Set rss hash timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_reta_entry64 *reta_conf,
> - uint16_t reta_size)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - int i, num;
> -
> - if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> - lio_dev_err(lio_dev,
> -			    "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
> - reta_size, LIO_RSS_MAX_TABLE_SZ);
> - return -EINVAL;
> - }
> -
> - num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
> -
> - for (i = 0; i < num; i++) {
> - memcpy(reta_conf->reta,
> - &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
> - RTE_ETH_RETA_GROUP_SIZE);
> - reta_conf++;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_conf *rss_conf)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - uint8_t *hash_key = NULL;
> - uint64_t rss_hf = 0;
> -
> - if (rss_state->hash_disable) {
> - lio_dev_info(lio_dev, "RSS disabled in nic\n");
> - rss_conf->rss_hf = 0;
> - return 0;
> - }
> -
> - /* Get key value */
> - hash_key = rss_conf->rss_key;
> - if (hash_key != NULL)
> - memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
> -
> - if (rss_state->ip)
> - rss_hf |= RTE_ETH_RSS_IPV4;
> - if (rss_state->tcp_hash)
> - rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
> - if (rss_state->ipv6)
> - rss_hf |= RTE_ETH_RSS_IPV6;
> - if (rss_state->ipv6_tcp_hash)
> - rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
> - if (rss_state->ipv6_ex)
> - rss_hf |= RTE_ETH_RSS_IPV6_EX;
> - if (rss_state->ipv6_tcp_ex_hash)
> - rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
> -
> - rss_conf->rss_hf = rss_hf;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
> - struct rte_eth_rss_conf *rss_conf)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct lio_rss_set *rss_param;
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
> - lio_dev->port_id);
> - return -EINVAL;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> - ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - rss_param->param.flags = 0xF;
> -
> - if (rss_conf->rss_key) {
> - rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
> - rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> - rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
> - memcpy(rss_state->hash_key, rss_conf->rss_key,
> - rss_state->hash_key_size);
> - memcpy(rss_param->key, rss_state->hash_key,
> - rss_state->hash_key_size);
> - }
> -
> - if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> - /* Can't disable rss through hash flags,
> - * if it is enabled by default during init
> - */
> - if (!rss_state->hash_disable)
> - return -EINVAL;
> -
> - /* This is for --disable-rss during testpmd launch */
> - rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
> - } else {
> - uint32_t hashinfo = 0;
> -
> - /* Can't enable rss if disabled by default during init */
> - if (rss_state->hash_disable)
> - return -EINVAL;
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
> - hashinfo |= LIO_RSS_HASH_IPV4;
> - rss_state->ip = 1;
> - } else {
> - rss_state->ip = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV4;
> - rss_state->tcp_hash = 1;
> - } else {
> - rss_state->tcp_hash = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
> - hashinfo |= LIO_RSS_HASH_IPV6;
> - rss_state->ipv6 = 1;
> - } else {
> - rss_state->ipv6 = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV6;
> - rss_state->ipv6_tcp_hash = 1;
> - } else {
> - rss_state->ipv6_tcp_hash = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
> - hashinfo |= LIO_RSS_HASH_IPV6_EX;
> - rss_state->ipv6_ex = 1;
> - } else {
> - rss_state->ipv6_ex = 0;
> - }
> -
> - if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
> - hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
> - rss_state->ipv6_tcp_ex_hash = 1;
> - } else {
> - rss_state->ipv6_tcp_ex_hash = 0;
> - }
> -
> - rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
> - rss_param->param.hashinfo = hashinfo;
> - }
> -
> - lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to set rss hash\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Set rss hash timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Add vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - * udp tunnel conf
> - *
> - * @return
> - * On success return 0
> - * On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
> - struct rte_eth_udp_tunnel *udp_tnl)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (udp_tnl == NULL)
> - return -EINVAL;
> -
> - if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> - lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> - return -1;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> - ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> - ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Remove vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - * udp tunnel conf
> - *
> - * @return
> - * On success return 0
> - * On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
> - struct rte_eth_udp_tunnel *udp_tnl)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (udp_tnl == NULL)
> - return -EINVAL;
> -
> - if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> - lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> - return -1;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> - ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> - ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (lio_dev->linfo.vlan_is_admin_assigned)
> - return -EPERM;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = on ?
> - LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
> - ctrl_pkt.ncmd.s.param1 = vlan_id;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
> - on ? "add" : "remove");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
> - on ? "add" : "remove");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static uint64_t
> -lio_hweight64(uint64_t w)
> -{
> - uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
> -
> - res =
> - (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
> - res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
> - res = res + (res >> 8);
> - res = res + (res >> 16);
> -
> - return (res + (res >> 32)) & 0x00000000000000FFul;
> -}
> -
> -static int
> -lio_dev_link_update(struct rte_eth_dev *eth_dev,
> - int wait_to_complete __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_eth_link link;
> -
> - /* Initialize */
> - memset(&link, 0, sizeof(link));
> - link.link_status = RTE_ETH_LINK_DOWN;
> - link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> - link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> - link.link_autoneg = RTE_ETH_LINK_AUTONEG;
> -
> - /* Return what we found */
> - if (lio_dev->linfo.link.s.link_up == 0) {
> - /* Interface is down */
> - return rte_eth_linkstatus_set(eth_dev, &link);
> - }
> -
> - link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
> - link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> - switch (lio_dev->linfo.link.s.speed) {
> - case LIO_LINK_SPEED_10000:
> - link.link_speed = RTE_ETH_SPEED_NUM_10G;
> - break;
> - case LIO_LINK_SPEED_25000:
> - link.link_speed = RTE_ETH_SPEED_NUM_25G;
> - break;
> - default:
> - link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> - link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> - }
> -
> - return rte_eth_linkstatus_set(eth_dev, &link);
> -}
> -
> -/**
> - * \brief Net device enable, disable allmulticast
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - *
> - * @return
> - * On success return 0
> - * On failure return negative errno
> - */
> -static int
> -lio_change_dev_flag(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - /* Create a ctrl pkt command to be sent to core app. */
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
> - ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send change flag message\n");
> - return -EAGAIN;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Change dev flag command timed out\n");
> - return -ETIMEDOUT;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_VF_TRUST_MIN_VERSION);
> - return -EAGAIN;
> - }
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_VF_TRUST_MIN_VERSION);
> - return -EAGAIN;
> - }
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
> - lio_dev->port_id);
> - return -EAGAIN;
> - }
> -
> - lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
> - return lio_change_dev_flag(eth_dev);
> -}
> -
> -static void
> -lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct rte_eth_rss_reta_entry64 reta_conf[8];
> - struct rte_eth_rss_conf rss_conf;
> - uint16_t i;
> -
> - /* Configure the RSS key and the RSS protocols used to compute
> - * the RSS hash of input packets.
> - */
> - rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
> - if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> - rss_state->hash_disable = 1;
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> - return;
> - }
> -
> - if (rss_conf.rss_key == NULL)
> - rss_conf.rss_key = lio_rss_key; /* Default hash key */
> -
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> -
> - memset(reta_conf, 0, sizeof(reta_conf));
> - for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
> - uint8_t q_idx, conf_idx, reta_idx;
> -
> - q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
> - i % eth_dev->data->nb_rx_queues : 0);
> - conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
> - reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
> - reta_conf[conf_idx].reta[reta_idx] = q_idx;
> - reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
> - }
> -
> - lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
> -}
> -
> -static void
> -lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> - struct rte_eth_rss_conf rss_conf;
> -
> - switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
> - case RTE_ETH_MQ_RX_RSS:
> - lio_dev_rss_configure(eth_dev);
> - break;
> - case RTE_ETH_MQ_RX_NONE:
> - /* if mq_mode is none, disable rss mode. */
> - default:
> - memset(&rss_conf, 0, sizeof(rss_conf));
> - rss_state->hash_disable = 1;
> - lio_dev_rss_hash_update(eth_dev, &rss_conf);
> - }
> -}
> -
> -/**
> - * Setup our receive queue/ringbuffer. This is the
> - * queue the Octeon uses to send us packets and
> - * responses. We are given a memory pool for our
> - * packet buffers that are used to populate the receive
> - * queue.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - * @param q_no
> - * Queue number
> - * @param num_rx_descs
> - * Number of entries in the queue
> - * @param socket_id
> - * Where to allocate memory
> - * @param rx_conf
> - *   Pointer to the structure rte_eth_rxconf
> - * @param mp
> - * Pointer to the packet pool
> - *
> - * @return
> - * - On success, return 0
> - * - On failure, return -1
> - */
> -static int
> -lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> - uint16_t num_rx_descs, unsigned int socket_id,
> - const struct rte_eth_rxconf *rx_conf __rte_unused,
> - struct rte_mempool *mp)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct rte_pktmbuf_pool_private *mbp_priv;
> - uint32_t fw_mapped_oq;
> - uint16_t buf_size;
> -
> - if (q_no >= lio_dev->nb_rx_queues) {
> - lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
> - return -EINVAL;
> - }
> -
> - lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
> -
> - fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
> -
> - /* Free previous allocation if any */
> - if (eth_dev->data->rx_queues[q_no] != NULL) {
> - lio_dev_rx_queue_release(eth_dev, q_no);
> - eth_dev->data->rx_queues[q_no] = NULL;
> - }
> -
> - mbp_priv = rte_mempool_get_priv(mp);
> - buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
> -
> - if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
> - socket_id)) {
> - lio_dev_err(lio_dev, "droq allocation failed\n");
> - return -1;
> - }
> -
> - eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
> -
> - return 0;
> -}
> -
> -/**
> - * Release the receive queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - * Pointer to Ethernet device structure.
> - * @param q_no
> - * Receive queue index.
> - *
> - * @return
> - * - nothing
> - */
> -void
> -lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> - struct lio_droq *droq = dev->data->rx_queues[q_no];
> - int oq_no;
> -
> - if (droq) {
> - oq_no = droq->q_no;
> - lio_delete_droq_queue(droq->lio_dev, oq_no);
> - }
> -}
> -
> -/**
> - * Allocate and initialize SW ring. Initialize associated HW registers.
> - *
> - * @param eth_dev
> - * Pointer to structure rte_eth_dev
> - *
> - * @param q_no
> - * Queue number
> - *
> - * @param num_tx_descs
> - * Number of ringbuffer descriptors
> - *
> - * @param socket_id
> - * NUMA socket id, used for memory allocations
> - *
> - * @param tx_conf
> - * Pointer to the structure rte_eth_txconf
> - *
> - * @return
> - * - On success, return 0
> - * - On failure, return -errno value
> - */
> -static int
> -lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> - uint16_t num_tx_descs, unsigned int socket_id,
> - const struct rte_eth_txconf *tx_conf __rte_unused)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
> - int retval;
> -
> - if (q_no >= lio_dev->nb_tx_queues) {
> - lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
> - return -EINVAL;
> - }
> -
> - lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
> -
> - /* Free previous allocation if any */
> - if (eth_dev->data->tx_queues[q_no] != NULL) {
> - lio_dev_tx_queue_release(eth_dev, q_no);
> - eth_dev->data->tx_queues[q_no] = NULL;
> - }
> -
> - retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
> - num_tx_descs, lio_dev, socket_id);
> -
> - if (retval) {
> - lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
> - return retval;
> - }
> -
> - retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
> - lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
> - socket_id);
> -
> - if (retval) {
> - lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
> - return retval;
> - }
> -
> - eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
> -
> - return 0;
> -}
> -
> -/**
> - * Release the transmit queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - * Pointer to Ethernet device structure.
> - * @param q_no
> - * Transmit queue index.
> - *
> - * @return
> - * - nothing
> - */
> -void
> -lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> - struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
> - uint32_t fw_mapped_iq_no;
> -
> -
> - if (tq) {
> - /* Free sg_list */
> - lio_delete_sglist(tq);
> -
> - fw_mapped_iq_no = tq->txpciq.s.q_no;
> - lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
> - }
> -}
> -
> -/**
> - * API to check link state.
> - */
> -static void
> -lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - struct lio_link_status_resp *resp;
> - union octeon_link_status *ls;
> - struct lio_soft_command *sc;
> - uint32_t resp_size;
> -
> - if (!lio_dev->intf_open)
> - return;
> -
> - resp_size = sizeof(struct lio_link_status_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return;
> -
> - resp = (struct lio_link_status_resp *)sc->virtrptr;
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_INFO, 0, 0, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
> - goto get_status_fail;
> -
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - rte_delay_ms(1);
> - }
> -
> - if (resp->status)
> - goto get_status_fail;
> -
> - ls = &resp->link_info.link;
> -
> - lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
> -
> - if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
> - if (ls->s.mtu < eth_dev->data->mtu) {
> - lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
> - ls->s.mtu);
> - eth_dev->data->mtu = ls->s.mtu;
> - }
> - lio_dev->linfo.link.link_status64 = ls->link_status64;
> - lio_dev_link_update(eth_dev, 0);
> - }
> -
> - lio_free_soft_command(sc);
> -
> - return;
> -
> -get_status_fail:
> - lio_free_soft_command(sc);
> -}
> -
> -/* This function will be invoked every LSC_TIMEOUT ns (100ms)
> - * and will update link state if it changes.
> - */
> -static void
> -lio_sync_link_state_check(void *eth_dev)
> -{
> - struct lio_device *lio_dev =
> - (((struct rte_eth_dev *)eth_dev)->data->dev_private);
> -
> - if (lio_dev->port_configured)
> - lio_dev_get_link_status(eth_dev);
> -
> - /* Schedule periodic link status check.
> - * Stop check if interface is close and start again while opening.
> - */
> - if (lio_dev->intf_open)
> - rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
> - eth_dev);
> -}
> -
> -static int
> -lio_dev_start(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - int ret = 0;
> -
> - lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
> -
> - if (lio_dev->fn_list.enable_io_queues(lio_dev))
> - return -1;
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 1))
> - return -1;
> -
> - /* Ready for link status updates */
> - lio_dev->intf_open = 1;
> - rte_mb();
> -
> - /* Configure RSS if device configured with multiple RX queues. */
> - lio_dev_mq_rx_configure(eth_dev);
> -
> - /* Before update the link info,
> - * must set linfo.link.link_status64 to 0.
> - */
> - lio_dev->linfo.link.link_status64 = 0;
> -
> - /* start polling for lsc */
> - ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
> - lio_sync_link_state_check,
> - eth_dev);
> - if (ret) {
> - lio_dev_err(lio_dev,
> - "link state check handler creation failed\n");
> - goto dev_lsc_handle_error;
> - }
> -
> - while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
> - rte_delay_ms(1);
> -
> - if (lio_dev->linfo.link.link_status64 == 0) {
> - ret = -1;
> - goto dev_mtu_set_error;
> - }
> -
> - ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> - if (ret != 0)
> - goto dev_mtu_set_error;
> -
> - return 0;
> -
> -dev_mtu_set_error:
> - rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> -dev_lsc_handle_error:
> - lio_dev->intf_open = 0;
> - lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> - return ret;
> -}
> -
> -/* Stop device and disable input/output functions */
> -static int
> -lio_dev_stop(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
> - eth_dev->data->dev_started = 0;
> - lio_dev->intf_open = 0;
> - rte_mb();
> -
> - /* Cancel callback if still running. */
> - rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> - lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> - lio_wait_for_instr_fetch(lio_dev);
> -
> - /* Clear recorded link status */
> - lio_dev->linfo.link.link_status64 = 0;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
> - return 0;
> - }
> -
> - if (lio_dev->linfo.link.s.link_up) {
> - lio_dev_info(lio_dev, "Link is already UP\n");
> - return 0;
> - }
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
> - lio_dev_err(lio_dev, "Unable to set Link UP\n");
> - return -1;
> - }
> -
> - lio_dev->linfo.link.s.link_up = 1;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - if (!lio_dev->intf_open) {
> - lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
> - return 0;
> - }
> -
> - if (!lio_dev->linfo.link.s.link_up) {
> - lio_dev_info(lio_dev, "Link is already DOWN\n");
> - return 0;
> - }
> -
> - lio_dev->linfo.link.s.link_up = 0;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> -
> - if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
> - lio_dev->linfo.link.s.link_up = 1;
> - eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> - lio_dev_err(lio_dev, "Unable to set Link Down\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> - * Reset and stop the device. This occurs on the first
> - * call to this routine. Subsequent calls will simply
> - * return. NB: This will require the NIC to be rebooted.
> - *
> - * @param eth_dev
> - * Pointer to the structure rte_eth_dev
> - *
> - * @return
> - * - nothing
> - */
> -static int
> -lio_dev_close(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int ret = 0;
> -
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
> -
> - if (lio_dev->intf_open)
> - ret = lio_dev_stop(eth_dev);
> -
> - /* Reset ioq regs */
> - lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> - if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> - cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> - rte_delay_ms(LIO_PCI_FLR_WAIT);
> - }
> -
> - /* lio_free_mbox */
> - lio_dev->fn_list.free_mbox(lio_dev);
> -
> - /* Free glist resources */
> - rte_free(lio_dev->glist_head);
> - rte_free(lio_dev->glist_lock);
> - lio_dev->glist_head = NULL;
> - lio_dev->glist_lock = NULL;
> -
> - lio_dev->port_configured = 0;
> -
> - /* Delete all queues */
> - lio_dev_clear_queues(eth_dev);
> -
> - return ret;
> -}
> -
> -/**
> - * Enable tunnel rx checksum verification from firmware.
> - */
> -static void
> -lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
> - ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
> - return;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> - lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
> -}
> -
> -/**
> - * Enable checksum calculation for inner packet in a tunnel.
> - */
> -static void
> -lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
> - ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
> - return;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> - lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
> -}
> -
> -static int
> -lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
> - int num_rxq)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - struct lio_dev_ctrl_cmd ctrl_cmd;
> - struct lio_ctrl_pkt ctrl_pkt;
> -
> - if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
> - lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> - LIO_Q_RECONF_MIN_VERSION);
> - return -ENOTSUP;
> - }
> -
> - /* flush added to prevent cmd failure
> -	 * in case the queue is full
> - */
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> - memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> - memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> - ctrl_cmd.eth_dev = eth_dev;
> - ctrl_cmd.cond = 0;
> -
> - ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
> - ctrl_pkt.ncmd.s.param1 = num_txq;
> - ctrl_pkt.ncmd.s.param2 = num_rxq;
> - ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> - if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> - lio_dev_err(lio_dev, "Failed to send queue count control command\n");
> - return -1;
> - }
> -
> - if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> - lio_dev_err(lio_dev, "Queue count control command timed out\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - int ret;
> -
> - if (lio_dev->nb_rx_queues != num_rxq ||
> - lio_dev->nb_tx_queues != num_txq) {
> - if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
> - return -1;
> - lio_dev->nb_rx_queues = num_rxq;
> - lio_dev->nb_tx_queues = num_txq;
> - }
> -
> - if (lio_dev->intf_open) {
> - ret = lio_dev_stop(eth_dev);
> - if (ret != 0)
> - return ret;
> - }
> -
> - /* Reset ioq registers */
> - if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to configure device registers\n");
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -static int
> -lio_dev_configure(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> - int retval, num_iqueues, num_oqueues;
> - uint8_t mac[RTE_ETHER_ADDR_LEN], i;
> - struct lio_if_cfg_resp *resp;
> - struct lio_soft_command *sc;
> - union lio_if_cfg if_cfg;
> - uint32_t resp_size;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - RTE_ETH_RX_OFFLOAD_RSS_HASH;
> -
> - /* Inform firmware about change in number of queues to use.
> - * Disable IO queues and reset registers for re-configuration.
> - */
> - if (lio_dev->port_configured)
> - return lio_reconf_queues(eth_dev,
> - eth_dev->data->nb_tx_queues,
> - eth_dev->data->nb_rx_queues);
> -
> - lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
> - lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
> -
> - /* Set max number of queues which can be re-configured. */
> - lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
> - lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
> -
> - resp_size = sizeof(struct lio_if_cfg_resp);
> - sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> - if (sc == NULL)
> - return -ENOMEM;
> -
> - resp = (struct lio_if_cfg_resp *)sc->virtrptr;
> -
> - /* Firmware doesn't have capability to reconfigure the queues,
> - * Claim all queues, and use as many required
> - */
> - if_cfg.if_cfg64 = 0;
> - if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
> - if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
> - if_cfg.s.base_queue = 0;
> -
> - if_cfg.s.gmx_port_id = lio_dev->pf_num;
> -
> - lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> - LIO_OPCODE_IF_CFG, 0,
> - if_cfg.if_cfg64, 0);
> -
> - /* Setting wait time in seconds */
> - sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
> - retval);
> - /* Soft instr is freed by driver in case of failure. */
> - goto nic_config_fail;
> - }
> -
> - /* Sleep on a wait queue till the cond flag indicates that the
> - * response arrived or timed-out.
> - */
> - while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> - lio_process_ordered_list(lio_dev);
> - rte_delay_ms(1);
> - }
> -
> - retval = resp->status;
> - if (retval) {
> - lio_dev_err(lio_dev, "iq/oq config failed\n");
> - goto nic_config_fail;
> - }
> -
> - strlcpy(lio_dev->firmware_version,
> - resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
> -
> - lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
> - sizeof(struct octeon_if_cfg_info) >> 3);
> -
> - num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
> - num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
> -
> - if (!(num_iqueues) || !(num_oqueues)) {
> - lio_dev_err(lio_dev,
> - "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
> - (unsigned long)resp->cfg_info.iqmask,
> - (unsigned long)resp->cfg_info.oqmask);
> - goto nic_config_fail;
> - }
> -
> - lio_dev_dbg(lio_dev,
> - "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
> - eth_dev->data->port_id,
> - (unsigned long)resp->cfg_info.iqmask,
> - (unsigned long)resp->cfg_info.oqmask,
> - num_iqueues, num_oqueues);
> -
> - lio_dev->linfo.num_rxpciq = num_oqueues;
> - lio_dev->linfo.num_txpciq = num_iqueues;
> -
> - for (i = 0; i < num_oqueues; i++) {
> - lio_dev->linfo.rxpciq[i].rxpciq64 =
> - resp->cfg_info.linfo.rxpciq[i].rxpciq64;
> - lio_dev_dbg(lio_dev, "index %d OQ %d\n",
> - i, lio_dev->linfo.rxpciq[i].s.q_no);
> - }
> -
> - for (i = 0; i < num_iqueues; i++) {
> - lio_dev->linfo.txpciq[i].txpciq64 =
> - resp->cfg_info.linfo.txpciq[i].txpciq64;
> - lio_dev_dbg(lio_dev, "index %d IQ %d\n",
> - i, lio_dev->linfo.txpciq[i].s.q_no);
> - }
> -
> - lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
> - lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
> - lio_dev->linfo.link.link_status64 =
> - resp->cfg_info.linfo.link.link_status64;
> -
> - /* 64-bit swap required on LE machines */
> - lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
> - for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
> - mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
> - 2 + i));
> -
> - /* Copy the permanent MAC address */
> - rte_ether_addr_copy((struct rte_ether_addr *)mac,
> -			    &eth_dev->data->mac_addrs[0]);
> -
> - /* enable firmware checksum support for tunnel packets */
> - lio_enable_hw_tunnel_rx_checksum(eth_dev);
> - lio_enable_hw_tunnel_tx_checksum(eth_dev);
> -
> - lio_dev->glist_lock =
> - rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
> - if (lio_dev->glist_lock == NULL)
> - return -ENOMEM;
> -
> - lio_dev->glist_head =
> - rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
> - 0);
> - if (lio_dev->glist_head == NULL) {
> - rte_free(lio_dev->glist_lock);
> - lio_dev->glist_lock = NULL;
> - return -ENOMEM;
> - }
> -
> - lio_dev_link_update(eth_dev, 0);
> -
> - lio_dev->port_configured = 1;
> -
> - lio_free_soft_command(sc);
> -
> - /* Reset ioq regs */
> - lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> - /* Free iq_0 used during init */
> - lio_free_instr_queue0(lio_dev);
> -
> - return 0;
> -
> -nic_config_fail:
> - lio_dev_err(lio_dev, "Failed retval %d\n", retval);
> - lio_free_soft_command(sc);
> - lio_free_instr_queue0(lio_dev);
> -
> - return -ENODEV;
> -}
> -
> -/* Define our ethernet definitions */
> -static const struct eth_dev_ops liovf_eth_dev_ops = {
> - .dev_configure = lio_dev_configure,
> - .dev_start = lio_dev_start,
> - .dev_stop = lio_dev_stop,
> - .dev_set_link_up = lio_dev_set_link_up,
> - .dev_set_link_down = lio_dev_set_link_down,
> - .dev_close = lio_dev_close,
> - .promiscuous_enable = lio_dev_promiscuous_enable,
> - .promiscuous_disable = lio_dev_promiscuous_disable,
> - .allmulticast_enable = lio_dev_allmulticast_enable,
> - .allmulticast_disable = lio_dev_allmulticast_disable,
> - .link_update = lio_dev_link_update,
> - .stats_get = lio_dev_stats_get,
> - .xstats_get = lio_dev_xstats_get,
> - .xstats_get_names = lio_dev_xstats_get_names,
> - .stats_reset = lio_dev_stats_reset,
> - .xstats_reset = lio_dev_xstats_reset,
> - .dev_infos_get = lio_dev_info_get,
> - .vlan_filter_set = lio_dev_vlan_filter_set,
> - .rx_queue_setup = lio_dev_rx_queue_setup,
> - .rx_queue_release = lio_dev_rx_queue_release,
> - .tx_queue_setup = lio_dev_tx_queue_setup,
> - .tx_queue_release = lio_dev_tx_queue_release,
> - .reta_update = lio_dev_rss_reta_update,
> - .reta_query = lio_dev_rss_reta_query,
> - .rss_hash_conf_get = lio_dev_rss_hash_conf_get,
> - .rss_hash_update = lio_dev_rss_hash_update,
> - .udp_tunnel_port_add = lio_dev_udp_tunnel_add,
> - .udp_tunnel_port_del = lio_dev_udp_tunnel_del,
> - .mtu_set = lio_dev_mtu_set,
> -};
> -
> -static void
> -lio_check_pf_hs_response(void *lio_dev)
> -{
> - struct lio_device *dev = lio_dev;
> -
> - /* check till response arrives */
> - if (dev->pfvf_hsword.coproc_tics_per_us)
> - return;
> -
> - cn23xx_vf_handle_mbox(dev);
> -
> - rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
> -}
> -
> -/**
> - * \brief Identify the LIO device and map the BAR address space
> - * @param lio_dev lio device
> - */
> -static int
> -lio_chip_specific_setup(struct lio_device *lio_dev)
> -{
> - struct rte_pci_device *pdev = lio_dev->pci_dev;
> - uint32_t dev_id = pdev->id.device_id;
> - const char *s;
> - int ret = 1;
> -
> - switch (dev_id) {
> - case LIO_CN23XX_VF_VID:
> - lio_dev->chip_id = LIO_CN23XX_VF_VID;
> - ret = cn23xx_vf_setup_device(lio_dev);
> - s = "CN23XX VF";
> - break;
> - default:
> - s = "?";
> - lio_dev_err(lio_dev, "Unsupported Chip\n");
> - }
> -
> - if (!ret)
> - lio_dev_info(lio_dev, "DEVICE : %s\n", s);
> -
> - return ret;
> -}
> -
> -static int
> -lio_first_time_init(struct lio_device *lio_dev,
> - struct rte_pci_device *pdev)
> -{
> - int dpdk_queues;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* set dpdk specific pci device pointer */
> - lio_dev->pci_dev = pdev;
> -
> - /* Identify the LIO type and set device ops */
> - if (lio_chip_specific_setup(lio_dev)) {
> - lio_dev_err(lio_dev, "Chip specific setup failed\n");
> - return -1;
> - }
> -
> - /* Initialize soft command buffer pool */
> - if (lio_setup_sc_buffer_pool(lio_dev)) {
> - lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
> - return -1;
> - }
> -
> - /* Initialize lists to manage the requests of different types that
> - * arrive from applications for this lio device.
> - */
> - lio_setup_response_list(lio_dev);
> -
> - if (lio_dev->fn_list.setup_mbox(lio_dev)) {
> - lio_dev_err(lio_dev, "Mailbox setup failed\n");
> - goto error;
> - }
> -
> - /* Check PF response */
> - lio_check_pf_hs_response((void *)lio_dev);
> -
> - /* Do handshake and exit if incompatible PF driver */
> - if (cn23xx_pfvf_handshake(lio_dev))
> - goto error;
> -
> - /* Request and wait for device reset. */
> - if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> - cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> - /* FLR wait time doubled as a precaution. */
> - rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
> - }
> -
> - if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to configure device registers\n");
> - goto error;
> - }
> -
> - if (lio_setup_instr_queue0(lio_dev)) {
> - lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
> - goto error;
> - }
> -
> - dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
> -
> - lio_dev->max_tx_queues = dpdk_queues;
> - lio_dev->max_rx_queues = dpdk_queues;
> -
> - /* Enable input and output queues for this device */
> - if (lio_dev->fn_list.enable_io_queues(lio_dev))
> - goto error;
> -
> - return 0;
> -
> -error:
> - lio_free_sc_buffer_pool(lio_dev);
> - if (lio_dev->mbox[0])
> - lio_dev->fn_list.free_mbox(lio_dev);
> - if (lio_dev->instr_queue[0])
> - lio_free_instr_queue0(lio_dev);
> -
> - return -1;
> -}
> -
> -static int
> -lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - /* lio_free_sc_buffer_pool */
> - lio_free_sc_buffer_pool(lio_dev);
> -
> - return 0;
> -}
> -
> -static int
> -lio_eth_dev_init(struct rte_eth_dev *eth_dev)
> -{
> - struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
> - struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
> - eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
> -
> - /* Primary does the initialization. */
> - if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> - return 0;
> -
> - rte_eth_copy_pci_info(eth_dev, pdev);
> -
> - if (pdev->mem_resource[0].addr) {
> - lio_dev->hw_addr = pdev->mem_resource[0].addr;
> - } else {
> - PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
> - return -ENODEV;
> - }
> -
> - lio_dev->eth_dev = eth_dev;
> - /* set lio device print string */
> - snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
> - "%s[%02x:%02x.%x]", pdev->driver->driver.name,
> - pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
> -
> - lio_dev->port_id = eth_dev->data->port_id;
> -
> - if (lio_first_time_init(lio_dev, pdev)) {
> - lio_dev_err(lio_dev, "Device init failed\n");
> - return -EINVAL;
> - }
> -
> - eth_dev->dev_ops = &liovf_eth_dev_ops;
> - eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
> - if (eth_dev->data->mac_addrs == NULL) {
> - lio_dev_err(lio_dev,
> - "MAC addresses memory allocation failed\n");
> - eth_dev->dev_ops = NULL;
> - eth_dev->rx_pkt_burst = NULL;
> - eth_dev->tx_pkt_burst = NULL;
> - return -ENOMEM;
> - }
> -
> - rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
> - rte_wmb();
> -
> - lio_dev->port_configured = 0;
> - /* Always allow unicast packets */
> - lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
> -
> - return 0;
> -}
> -
> -static int
> -lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> - struct rte_pci_device *pci_dev)
> -{
> - return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
> - lio_eth_dev_init);
> -}
> -
> -static int
> -lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
> -{
> - return rte_eth_dev_pci_generic_remove(pci_dev,
> - lio_eth_dev_uninit);
> -}
> -
> -/* Set of PCI devices this driver supports */
> -static const struct rte_pci_id pci_id_liovf_map[] = {
> - { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
> - { .vendor_id = 0, /* sentinel */ }
> -};
> -
> -static struct rte_pci_driver rte_liovf_pmd = {
> - .id_table = pci_id_liovf_map,
> - .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> - .probe = lio_eth_dev_pci_probe,
> - .remove = lio_eth_dev_pci_remove,
> -};
> -
> -RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
> -RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
> -RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
> diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
> deleted file mode 100644
> index ece2b03858..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.h
> +++ /dev/null
> @@ -1,179 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_ETHDEV_H_
> -#define _LIO_ETHDEV_H_
> -
> -#include <stdint.h>
> -
> -#include "lio_struct.h"
> -
> -/* timeout to check link state updates from firmware in us */
> -#define LIO_LSC_TIMEOUT 100000 /* 100000us (100ms) */
> -#define LIO_MAX_CMD_TIMEOUT 10000 /* 10000ms (10s) */
> -
> -/* The max frame size with default MTU */
> -#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> -
> -#define LIO_DEV(_eth_dev) ((_eth_dev)->data->dev_private)
> -
> -/* LIO Response condition variable */
> -struct lio_dev_ctrl_cmd {
> - struct rte_eth_dev *eth_dev;
> - uint64_t cond;
> -};
> -
> -enum lio_bus_speed {
> - LIO_LINK_SPEED_UNKNOWN = 0,
> - LIO_LINK_SPEED_10000 = 10000,
> - LIO_LINK_SPEED_25000 = 25000
> -};
> -
> -struct octeon_if_cfg_info {
> - uint64_t iqmask; /** mask for IQs enabled for the port */
> - uint64_t oqmask; /** mask for OQs enabled for the port */
> - struct octeon_link_info linfo; /** initial link information */
> - char lio_firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -
> -/** Stats for each NIC port in RX direction. */
> -struct octeon_rx_stats {
> - /* link-level stats */
> - uint64_t total_rcvd;
> - uint64_t bytes_rcvd;
> - uint64_t total_bcst;
> - uint64_t total_mcst;
> - uint64_t runts;
> - uint64_t ctl_rcvd;
> - uint64_t fifo_err; /* Accounts for over/under-run of buffers */
> - uint64_t dmac_drop;
> - uint64_t fcs_err;
> - uint64_t jabber_err;
> - uint64_t l2_err;
> - uint64_t frame_err;
> -
> - /* firmware stats */
> - uint64_t fw_total_rcvd;
> - uint64_t fw_total_fwd;
> - uint64_t fw_total_fwd_bytes;
> - uint64_t fw_err_pko;
> - uint64_t fw_err_link;
> - uint64_t fw_err_drop;
> - uint64_t fw_rx_vxlan;
> - uint64_t fw_rx_vxlan_err;
> -
> - /* LRO */
> - uint64_t fw_lro_pkts; /* Number of packets that are LROed */
> - uint64_t fw_lro_octs; /* Number of octets that are LROed */
> - uint64_t fw_total_lro; /* Number of LRO packets formed */
> - uint64_t fw_lro_aborts; /* Number of times LRO of a packet was aborted */
> - uint64_t fw_lro_aborts_port;
> - uint64_t fw_lro_aborts_seq;
> - uint64_t fw_lro_aborts_tsval;
> - uint64_t fw_lro_aborts_timer;
> - /* intrmod: packet forward rate */
> - uint64_t fwd_rate;
> -};
> -
> -/** Stats for each NIC port in TX direction. */
> -struct octeon_tx_stats {
> - /* link-level stats */
> - uint64_t total_pkts_sent;
> - uint64_t total_bytes_sent;
> - uint64_t mcast_pkts_sent;
> - uint64_t bcast_pkts_sent;
> - uint64_t ctl_sent;
> - uint64_t one_collision_sent; /* Packets sent after one collision */
> - /* Packets sent after multiple collision */
> - uint64_t multi_collision_sent;
> - /* Packets not sent due to max collisions */
> - uint64_t max_collision_fail;
> - /* Packets not sent due to max deferrals */
> - uint64_t max_deferral_fail;
> - /* Accounts for over/under-run of buffers */
> - uint64_t fifo_err;
> - uint64_t runts;
> - uint64_t total_collisions; /* Total number of collisions detected */
> -
> - /* firmware stats */
> - uint64_t fw_total_sent;
> - uint64_t fw_total_fwd;
> - uint64_t fw_total_fwd_bytes;
> - uint64_t fw_err_pko;
> - uint64_t fw_err_link;
> - uint64_t fw_err_drop;
> - uint64_t fw_err_tso;
> - uint64_t fw_tso; /* number of tso requests */
> - uint64_t fw_tso_fwd; /* number of packets segmented in tso */
> - uint64_t fw_tx_vxlan;
> -};
> -
> -struct octeon_link_stats {
> - struct octeon_rx_stats fromwire;
> - struct octeon_tx_stats fromhost;
> -};
> -
> -union lio_if_cfg {
> - uint64_t if_cfg64;
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t base_queue : 16;
> - uint64_t num_iqueues : 16;
> - uint64_t num_oqueues : 16;
> - uint64_t gmx_port_id : 8;
> - uint64_t vf_id : 8;
> -#else
> - uint64_t vf_id : 8;
> - uint64_t gmx_port_id : 8;
> - uint64_t num_oqueues : 16;
> - uint64_t num_iqueues : 16;
> - uint64_t base_queue : 16;
> -#endif
> - } s;
> -};
> -
> -struct lio_if_cfg_resp {
> - uint64_t rh;
> - struct octeon_if_cfg_info cfg_info;
> - uint64_t status;
> -};
> -
> -struct lio_link_stats_resp {
> - uint64_t rh;
> - struct octeon_link_stats link_stats;
> - uint64_t status;
> -};
> -
> -struct lio_link_status_resp {
> - uint64_t rh;
> - struct octeon_link_info link_info;
> - uint64_t status;
> -};
> -
> -struct lio_rss_set {
> - struct param {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t flags : 16;
> - uint64_t hashinfo : 32;
> - uint64_t itablesize : 16;
> - uint64_t hashkeysize : 16;
> - uint64_t reserved : 48;
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t itablesize : 16;
> - uint64_t hashinfo : 32;
> - uint64_t flags : 16;
> - uint64_t reserved : 48;
> - uint64_t hashkeysize : 16;
> -#endif
> - } param;
> -
> - uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
> - uint8_t key[LIO_RSS_MAX_KEY_SZ];
> -};
> -
> -void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -#endif /* _LIO_ETHDEV_H_ */
> diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
> deleted file mode 100644
> index f227827081..0000000000
> --- a/drivers/net/liquidio/lio_logs.h
> +++ /dev/null
> @@ -1,58 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_LOGS_H_
> -#define _LIO_LOGS_H_
> -
> -extern int lio_logtype_driver;
> -#define lio_dev_printf(lio_dev, level, fmt, args...) \
> - rte_log(RTE_LOG_ ## level, lio_logtype_driver, \
> - "%s" fmt, (lio_dev)->dev_string, ##args)
> -
> -#define lio_dev_info(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
> -
> -#define lio_dev_err(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
> -
> -extern int lio_logtype_init;
> -#define PMD_INIT_LOG(level, fmt, args...) \
> - rte_log(RTE_LOG_ ## level, lio_logtype_init, \
> - fmt, ## args)
> -
> -/* Enable these through config options */
> -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
> -
> -#define lio_dev_dbg(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_RX
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_TX
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
> -#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) \
> - lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
> -#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define PMD_REGS_LOG(lio_dev, fmt, args...) \
> - lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
> -#define PMD_REGS_LOG(level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
> -
> -#endif /* _LIO_LOGS_H_ */
> diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
> deleted file mode 100644
> index e09798ddd7..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.c
> +++ /dev/null
> @@ -1,1804 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -#define LIO_MAX_SG 12
> -/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
> -#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
> -#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
> -
> -static void
> -lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
> -{
> - uint32_t count = 0;
> -
> - do {
> - count += droq->buffer_size;
> - } while (count < LIO_MAX_RX_PKTLEN);
> -}
> -
> -static void
> -lio_droq_reset_indices(struct lio_droq *droq)
> -{
> - droq->read_idx = 0;
> - droq->write_idx = 0;
> - droq->refill_idx = 0;
> - droq->refill_count = 0;
> - rte_atomic64_set(&droq->pkts_pending, 0);
> -}
> -
> -static void
> -lio_droq_destroy_ring_buffers(struct lio_droq *droq)
> -{
> - uint32_t i;
> -
> - for (i = 0; i < droq->nb_desc; i++) {
> - if (droq->recv_buf_list[i].buffer) {
> - rte_pktmbuf_free((struct rte_mbuf *)
> - droq->recv_buf_list[i].buffer);
> - droq->recv_buf_list[i].buffer = NULL;
> - }
> - }
> -
> - lio_droq_reset_indices(droq);
> -}
> -
> -static int
> -lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
> - struct lio_droq *droq)
> -{
> - struct lio_droq_desc *desc_ring = droq->desc_ring;
> - uint32_t i;
> - void *buf;
> -
> - for (i = 0; i < droq->nb_desc; i++) {
> - buf = rte_pktmbuf_alloc(droq->mpool);
> - if (buf == NULL) {
> - lio_dev_err(lio_dev, "buffer alloc failed\n");
> - droq->stats.rx_alloc_failure++;
> - lio_droq_destroy_ring_buffers(droq);
> - return -ENOMEM;
> - }
> -
> - droq->recv_buf_list[i].buffer = buf;
> - droq->info_list[i].length = 0;
> -
> - /* map ring buffers into memory */
> - desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
> - desc_ring[i].buffer_ptr =
> - lio_map_ring(droq->recv_buf_list[i].buffer);
> - }
> -
> - lio_droq_reset_indices(droq);
> -
> - lio_droq_compute_max_packet_bufs(droq);
> -
> - return 0;
> -}
> -
> -static void
> -lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
> -{
> - const struct rte_memzone *mz_tmp;
> - int ret = 0;
> -
> - if (mz == NULL) {
> - lio_dev_err(lio_dev, "Memzone NULL\n");
> - return;
> - }
> -
> - mz_tmp = rte_memzone_lookup(mz->name);
> - if (mz_tmp == NULL) {
> - lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
> - return;
> - }
> -
> - ret = rte_memzone_free(mz);
> - if (ret)
> - lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
> -}
> -
> -/**
> - * Frees the space for descriptor ring for the droq.
> - *
> - * @param lio_dev - pointer to the lio device structure
> - * @param q_no - droq no.
> - */
> -static void
> -lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - struct lio_droq *droq = lio_dev->droq[q_no];
> -
> - lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> - lio_droq_destroy_ring_buffers(droq);
> - rte_free(droq->recv_buf_list);
> - droq->recv_buf_list = NULL;
> - lio_dma_zone_free(lio_dev, droq->info_mz);
> - lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
> -
> - memset(droq, 0, LIO_DROQ_SIZE);
> -}
> -
> -static void *
> -lio_alloc_info_buffer(struct lio_device *lio_dev,
> - struct lio_droq *droq, unsigned int socket_id)
> -{
> - droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "info_list", droq->q_no,
> - (droq->nb_desc *
> - LIO_DROQ_INFO_SIZE),
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> -
> - if (droq->info_mz == NULL)
> - return NULL;
> -
> - droq->info_list_dma = droq->info_mz->iova;
> - droq->info_alloc_size = droq->info_mz->len;
> - droq->info_base_addr = (size_t)droq->info_mz->addr;
> -
> - return droq->info_mz->addr;
> -}
> -
> -/**
> - * Allocates space for the descriptor ring for the droq and
> - * sets the base addr, num desc etc in Octeon registers.
> - *
> - * @param lio_dev - pointer to the lio device structure
> - * @param q_no - droq no.
> - * @param app_ctx - pointer to application context
> - * @return Success: 0 Failure: -1
> - */
> -static int
> -lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
> - uint32_t num_descs, uint32_t desc_size,
> - struct rte_mempool *mpool, unsigned int socket_id)
> -{
> - uint32_t c_refill_threshold;
> - uint32_t desc_ring_size;
> - struct lio_droq *droq;
> -
> - lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> - droq = lio_dev->droq[q_no];
> - droq->lio_dev = lio_dev;
> - droq->q_no = q_no;
> - droq->mpool = mpool;
> -
> - c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
> -
> - droq->nb_desc = num_descs;
> - droq->buffer_size = desc_size;
> -
> - desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
> - droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "droq", q_no,
> - desc_ring_size,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> -
> - if (droq->desc_ring_mz == NULL) {
> - lio_dev_err(lio_dev,
> - "Output queue %d ring alloc failed\n", q_no);
> - return -1;
> - }
> -
> - droq->desc_ring_dma = droq->desc_ring_mz->iova;
> - droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
> -
> - lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
> - q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
> - lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
> - droq->nb_desc);
> -
> - droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
> - if (droq->info_list == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
> - goto init_droq_fail;
> - }
> -
> - droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
> - (droq->nb_desc *
> - LIO_DROQ_RECVBUF_SIZE),
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (droq->recv_buf_list == NULL) {
> - lio_dev_err(lio_dev,
> - "Output queue recv buf list alloc failed\n");
> - goto init_droq_fail;
> - }
> -
> - if (lio_droq_setup_ring_buffers(lio_dev, droq))
> - goto init_droq_fail;
> -
> - droq->refill_threshold = c_refill_threshold;
> -
> - rte_spinlock_init(&droq->lock);
> -
> - lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
> -
> - lio_dev->io_qmask.oq |= (1ULL << q_no);
> -
> - return 0;
> -
> -init_droq_fail:
> - lio_delete_droq(lio_dev, q_no);
> -
> - return -1;
> -}
> -
> -int
> -lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
> - int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
> -{
> - struct lio_droq *droq;
> -
> - PMD_INIT_FUNC_TRACE();
> -
> - /* Allocate the DS for the new droq. */
> - droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (droq == NULL)
> - return -ENOMEM;
> -
> - lio_dev->droq[oq_no] = droq;
> -
> - /* Initialize the Droq */
> - if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
> - socket_id)) {
> - lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
> - rte_free(lio_dev->droq[oq_no]);
> - lio_dev->droq[oq_no] = NULL;
> - return -ENOMEM;
> - }
> -
> - lio_dev->num_oqs++;
> -
> - lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
> -
> - /* Send credit for octeon output queues. credits are always
> - * sent after the output queue is enabled.
> - */
> - rte_write32(lio_dev->droq[oq_no]->nb_desc,
> - lio_dev->droq[oq_no]->pkts_credit_reg);
> - rte_wmb();
> -
> - return 0;
> -}
> -
> -static inline uint32_t
> -lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
> -{
> - uint32_t buf_cnt = 0;
> -
> - while (total_len > (buf_size * buf_cnt))
> - buf_cnt++;
> -
> - return buf_cnt;
> -}
> -
> -/* If we were not able to refill all buffers, try to move around
> - * the buffers that were not dispatched.
> - */
> -static inline uint32_t
> -lio_droq_refill_pullup_descs(struct lio_droq *droq,
> - struct lio_droq_desc *desc_ring)
> -{
> - uint32_t refill_index = droq->refill_idx;
> - uint32_t desc_refilled = 0;
> -
> - while (refill_index != droq->read_idx) {
> - if (droq->recv_buf_list[refill_index].buffer) {
> - droq->recv_buf_list[droq->refill_idx].buffer =
> - droq->recv_buf_list[refill_index].buffer;
> - desc_ring[droq->refill_idx].buffer_ptr =
> - desc_ring[refill_index].buffer_ptr;
> - droq->recv_buf_list[refill_index].buffer = NULL;
> - desc_ring[refill_index].buffer_ptr = 0;
> - do {
> - droq->refill_idx = lio_incr_index(
> - droq->refill_idx, 1,
> - droq->nb_desc);
> - desc_refilled++;
> - droq->refill_count--;
> - } while (droq->recv_buf_list[droq->refill_idx].buffer);
> - }
> - refill_index = lio_incr_index(refill_index, 1,
> - droq->nb_desc);
> - } /* while */
> -
> - return desc_refilled;
> -}
> -
> -/* lio_droq_refill
> - *
> - * @param droq - droq in which descriptors require new buffers.
> - *
> - * Description:
> - * Called during normal DROQ processing in interrupt mode or by the poll
> - * thread to refill the descriptors from which buffers were dispatched
> - * to upper layers. Attempts to allocate new buffers. If that fails, moves
> - * up buffers (that were not dispatched) to form a contiguous ring.
> - *
> - * Returns:
> - * No of descriptors refilled.
> - *
> - * Locks:
> - * This routine is called with droq->lock held.
> - */
> -static uint32_t
> -lio_droq_refill(struct lio_droq *droq)
> -{
> - struct lio_droq_desc *desc_ring;
> - uint32_t desc_refilled = 0;
> - void *buf = NULL;
> -
> - desc_ring = droq->desc_ring;
> -
> - while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
> - /* If a valid buffer exists (happens if there is no dispatch),
> - * reuse the buffer, else allocate.
> - */
> - if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
> - buf = rte_pktmbuf_alloc(droq->mpool);
> - /* If a buffer could not be allocated, no point in
> - * continuing
> - */
> - if (buf == NULL) {
> - droq->stats.rx_alloc_failure++;
> - break;
> - }
> -
> - droq->recv_buf_list[droq->refill_idx].buffer = buf;
> - }
> -
> - desc_ring[droq->refill_idx].buffer_ptr =
> - lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
> - /* Reset any previous values in the length field. */
> - droq->info_list[droq->refill_idx].length = 0;
> -
> - droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
> - droq->nb_desc);
> - desc_refilled++;
> - droq->refill_count--;
> - }
> -
> - if (droq->refill_count)
> - desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
> -
> - /* If droq->refill_count is still non-zero:
> - * The refill count would not change in pass two. We only moved buffers
> - * to close the gap in the ring, but we would still have the same no. of
> - * buffers to refill.
> - */
> - return desc_refilled;
> -}
> -
> -static int
> -lio_droq_fast_process_packet(struct lio_device *lio_dev,
> - struct lio_droq *droq,
> - struct rte_mbuf **rx_pkts)
> -{
> - struct rte_mbuf *nicbuf = NULL;
> - struct lio_droq_info *info;
> - uint32_t total_len = 0;
> - int data_total_len = 0;
> - uint32_t pkt_len = 0;
> - union octeon_rh *rh;
> - int data_pkts = 0;
> -
> - info = &droq->info_list[droq->read_idx];
> - lio_swap_8B_data((uint64_t *)info, 2);
> -
> - if (!info->length)
> - return -1;
> -
> - /* Len of resp hdr is included in the received data len. */
> - info->length -= OCTEON_RH_SIZE;
> - rh = &info->rh;
> -
> - total_len += (uint32_t)info->length;
> -
> - if (lio_opcode_slow_path(rh)) {
> - uint32_t buf_cnt;
> -
> - buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
> - (uint32_t)info->length);
> - droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
> - droq->nb_desc);
> - droq->refill_count += buf_cnt;
> - } else {
> - if (info->length <= droq->buffer_size) {
> - if (rh->r_dh.has_hash)
> - pkt_len = (uint32_t)(info->length - 8);
> - else
> - pkt_len = (uint32_t)info->length;
> -
> - nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
> - droq->recv_buf_list[droq->read_idx].buffer = NULL;
> - droq->read_idx = lio_incr_index(
> - droq->read_idx, 1,
> - droq->nb_desc);
> - droq->refill_count++;
> -
> - if (likely(nicbuf != NULL)) {
> - /* We don't have a way to pass flags yet */
> - nicbuf->ol_flags = 0;
> - if (rh->r_dh.has_hash) {
> - uint64_t *hash_ptr;
> -
> - nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
> - hash_ptr = rte_pktmbuf_mtod(nicbuf,
> - uint64_t *);
> - lio_swap_8B_data(hash_ptr, 1);
> - nicbuf->hash.rss = (uint32_t)*hash_ptr;
> - nicbuf->data_off += 8;
> - }
> -
> - nicbuf->pkt_len = pkt_len;
> - nicbuf->data_len = pkt_len;
> - nicbuf->port = lio_dev->port_id;
> - /* Store the mbuf */
> - rx_pkts[data_pkts++] = nicbuf;
> - data_total_len += pkt_len;
> - }
> -
> - /* Prefetch buffer pointers when on a cache line
> - * boundary
> - */
> - if ((droq->read_idx & 3) == 0) {
> - rte_prefetch0(
> - &droq->recv_buf_list[droq->read_idx]);
> - rte_prefetch0(
> - &droq->info_list[droq->read_idx]);
> - }
> - } else {
> - struct rte_mbuf *first_buf = NULL;
> - struct rte_mbuf *last_buf = NULL;
> -
> - while (pkt_len < info->length) {
> - int cpy_len = 0;
> -
> - cpy_len = ((pkt_len + droq->buffer_size) >
> - info->length)
> - ? ((uint32_t)info->length -
> - pkt_len)
> - : droq->buffer_size;
> -
> - nicbuf =
> - droq->recv_buf_list[droq->read_idx].buffer;
> - droq->recv_buf_list[droq->read_idx].buffer =
> - NULL;
> -
> - if (likely(nicbuf != NULL)) {
> - /* Note the first seg */
> - if (!pkt_len)
> - first_buf = nicbuf;
> -
> - nicbuf->port = lio_dev->port_id;
> - /* We don't have a way to pass
> - * flags yet
> - */
> - nicbuf->ol_flags = 0;
> - if ((!pkt_len) && (rh->r_dh.has_hash)) {
> - uint64_t *hash_ptr;
> -
> - nicbuf->ol_flags |=
> - RTE_MBUF_F_RX_RSS_HASH;
> - hash_ptr = rte_pktmbuf_mtod(
> - nicbuf, uint64_t *);
> - lio_swap_8B_data(hash_ptr, 1);
> - nicbuf->hash.rss =
> - (uint32_t)*hash_ptr;
> - nicbuf->data_off += 8;
> - nicbuf->pkt_len = cpy_len - 8;
> - nicbuf->data_len = cpy_len - 8;
> - } else {
> - nicbuf->pkt_len = cpy_len;
> - nicbuf->data_len = cpy_len;
> - }
> -
> - if (pkt_len)
> - first_buf->nb_segs++;
> -
> - if (last_buf)
> - last_buf->next = nicbuf;
> -
> - last_buf = nicbuf;
> - } else {
> - PMD_RX_LOG(lio_dev, ERR, "no buf\n");
> - }
> -
> - pkt_len += cpy_len;
> - droq->read_idx = lio_incr_index(
> - droq->read_idx,
> - 1, droq->nb_desc);
> - droq->refill_count++;
> -
> - /* Prefetch buffer pointers when on a
> - * cache line boundary
> - */
> - if ((droq->read_idx & 3) == 0) {
> - rte_prefetch0(&droq->recv_buf_list
> - [droq->read_idx]);
> -
> - rte_prefetch0(
> - &droq->info_list[droq->read_idx]);
> - }
> - }
> - rx_pkts[data_pkts++] = first_buf;
> - if (rh->r_dh.has_hash)
> - data_total_len += (pkt_len - 8);
> - else
> - data_total_len += pkt_len;
> - }
> -
> - /* Inform upper layer about packet checksum verification */
> - struct rte_mbuf *m = rx_pkts[data_pkts - 1];
> -
> - if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
> - m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
> -
> - if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
> - m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
> - }
> -
> - if (droq->refill_count >= droq->refill_threshold) {
> - int desc_refilled = lio_droq_refill(droq);
> -
> - /* Flush the droq descriptor data to memory to be sure
> - * that when we update the credits the data in memory is
> - * accurate.
> - */
> - rte_wmb();
> - rte_write32(desc_refilled, droq->pkts_credit_reg);
> - /* make sure mmio write completes */
> - rte_wmb();
> - }
> -
> - info->length = 0;
> - info->rh.rh64 = 0;
> -
> - droq->stats.pkts_received++;
> - droq->stats.rx_pkts_received += data_pkts;
> - droq->stats.rx_bytes_received += data_total_len;
> - droq->stats.bytes_received += total_len;
> -
> - return data_pkts;
> -}
> -
> -static uint32_t
> -lio_droq_fast_process_packets(struct lio_device *lio_dev,
> - struct lio_droq *droq,
> - struct rte_mbuf **rx_pkts,
> - uint32_t pkts_to_process)
> -{
> - int ret, data_pkts = 0;
> - uint32_t pkt;
> -
> - for (pkt = 0; pkt < pkts_to_process; pkt++) {
> - ret = lio_droq_fast_process_packet(lio_dev, droq,
> - &rx_pkts[data_pkts]);
> - if (ret < 0) {
> - lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
> - lio_dev->port_id, droq->q_no,
> - droq->read_idx, pkts_to_process);
> - break;
> - }
> - data_pkts += ret;
> - }
> -
> - rte_atomic64_sub(&droq->pkts_pending, pkt);
> -
> - return data_pkts;
> -}
> -
> -static inline uint32_t
> -lio_droq_check_hw_for_pkts(struct lio_droq *droq)
> -{
> - uint32_t last_count;
> - uint32_t pkt_count;
> -
> - pkt_count = rte_read32(droq->pkts_sent_reg);
> -
> - last_count = pkt_count - droq->pkt_count;
> - droq->pkt_count = pkt_count;
> -
> - if (last_count)
> - rte_atomic64_add(&droq->pkts_pending, last_count);
> -
> - return last_count;
> -}
> -
> -uint16_t
> -lio_dev_recv_pkts(void *rx_queue,
> - struct rte_mbuf **rx_pkts,
> - uint16_t budget)
> -{
> - struct lio_droq *droq = rx_queue;
> - struct lio_device *lio_dev = droq->lio_dev;
> - uint32_t pkts_processed = 0;
> - uint32_t pkt_count = 0;
> -
> - lio_droq_check_hw_for_pkts(droq);
> -
> - pkt_count = rte_atomic64_read(&droq->pkts_pending);
> - if (!pkt_count)
> - return 0;
> -
> - if (pkt_count > budget)
> - pkt_count = budget;
> -
> - /* Grab the lock */
> - rte_spinlock_lock(&droq->lock);
> - pkts_processed = lio_droq_fast_process_packets(lio_dev,
> - droq, rx_pkts,
> - pkt_count);
> -
> - if (droq->pkt_count) {
> - rte_write32(droq->pkt_count, droq->pkts_sent_reg);
> - droq->pkt_count = 0;
> - }
> -
> - /* Release the spin lock */
> - rte_spinlock_unlock(&droq->lock);
> -
> - return pkts_processed;
> -}
> -
> -void
> -lio_delete_droq_queue(struct lio_device *lio_dev,
> - int oq_no)
> -{
> - lio_delete_droq(lio_dev, oq_no);
> - lio_dev->num_oqs--;
> - rte_free(lio_dev->droq[oq_no]);
> - lio_dev->droq[oq_no] = NULL;
> -}
> -
> -/**
> - * lio_init_instr_queue()
> - * @param lio_dev - pointer to the lio device structure.
> - * @param txpciq - queue to be initialized.
> - *
> - * Called at driver init time for each input queue. iq_conf has the
> - * configuration parameters for the queue.
> - *
> - * @return Success: 0 Failure: -1
> - */
> -static int
> -lio_init_instr_queue(struct lio_device *lio_dev,
> - union octeon_txpciq txpciq,
> - uint32_t num_descs, unsigned int socket_id)
> -{
> - uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> - struct lio_instr_queue *iq;
> - uint32_t instr_type;
> - uint32_t q_size;
> -
> - instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
> -
> - q_size = instr_type * num_descs;
> - iq = lio_dev->instr_queue[iq_no];
> - iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> - "instr_queue", iq_no, q_size,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (iq->iq_mz == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
> - iq_no);
> - return -1;
> - }
> -
> - iq->base_addr_dma = iq->iq_mz->iova;
> - iq->base_addr = (uint8_t *)iq->iq_mz->addr;
> -
> - iq->nb_desc = num_descs;
> -
> - /* Initialize a list to hold requests that have been posted to Octeon
> - * but have yet to be fetched by Octeon
> - */
> - iq->request_list = rte_zmalloc_socket("request_list",
> - sizeof(*iq->request_list) *
> - num_descs,
> - RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (iq->request_list == NULL) {
> - lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
> - iq_no);
> - lio_dma_zone_free(lio_dev, iq->iq_mz);
> - return -1;
> - }
> -
> - lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
> - iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
> - iq->nb_desc);
> -
> - iq->lio_dev = lio_dev;
> - iq->txpciq.txpciq64 = txpciq.txpciq64;
> - iq->fill_cnt = 0;
> - iq->host_write_index = 0;
> - iq->lio_read_index = 0;
> - iq->flush_index = 0;
> -
> - rte_atomic64_set(&iq->instr_pending, 0);
> -
> - /* Initialize the spinlock for this instruction queue */
> - rte_spinlock_init(&iq->lock);
> - rte_spinlock_init(&iq->post_lock);
> -
> - rte_atomic64_clear(&iq->iq_flush_running);
> -
> - lio_dev->io_qmask.iq |= (1ULL << iq_no);
> -
> - /* Set the 32B/64B mode for each input queue */
> - lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
> - iq->iqcmd_64B = (instr_type == 64);
> -
> - lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
> -
> - return 0;
> -}
> -
> -int
> -lio_setup_instr_queue0(struct lio_device *lio_dev)
> -{
> - union octeon_txpciq txpciq;
> - uint32_t num_descs = 0;
> - uint32_t iq_no = 0;
> -
> - num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
> -
> - lio_dev->num_iqs = 0;
> -
> - lio_dev->instr_queue[0] = rte_zmalloc(NULL,
> - sizeof(struct lio_instr_queue), 0);
> - if (lio_dev->instr_queue[0] == NULL)
> - return -ENOMEM;
> -
> - lio_dev->instr_queue[0]->q_index = 0;
> - lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
> - txpciq.txpciq64 = 0;
> - txpciq.s.q_no = iq_no;
> - txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
> - txpciq.s.use_qpg = 0;
> - txpciq.s.qpg = 0;
> - if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
> - rte_free(lio_dev->instr_queue[0]);
> - lio_dev->instr_queue[0] = NULL;
> - return -1;
> - }
> -
> - lio_dev->num_iqs++;
> -
> - return 0;
> -}
> -
> -/**
> - * lio_delete_instr_queue()
> - * @param lio_dev - pointer to the lio device structure.
> - * @param iq_no - queue to be deleted.
> - *
> - * Called at driver unload time for each input queue. Deletes all
> - * allocated resources for the input queue.
> - */
> -static void
> -lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -
> - rte_free(iq->request_list);
> - iq->request_list = NULL;
> - lio_dma_zone_free(lio_dev, iq->iq_mz);
> -}
> -
> -void
> -lio_free_instr_queue0(struct lio_device *lio_dev)
> -{
> - lio_delete_instr_queue(lio_dev, 0);
> - rte_free(lio_dev->instr_queue[0]);
> - lio_dev->instr_queue[0] = NULL;
> - lio_dev->num_iqs--;
> -}
> -
> -/* Return 0 on success, -1 on failure */
> -int
> -lio_setup_iq(struct lio_device *lio_dev, int q_index,
> - union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
> - unsigned int socket_id)
> -{
> - uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> -
> - lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
> - sizeof(struct lio_instr_queue),
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (lio_dev->instr_queue[iq_no] == NULL)
> - return -1;
> -
> - lio_dev->instr_queue[iq_no]->q_index = q_index;
> - lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
> -
> - if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
> - rte_free(lio_dev->instr_queue[iq_no]);
> - lio_dev->instr_queue[iq_no] = NULL;
> - return -1;
> - }
> -
> - lio_dev->num_iqs++;
> -
> - return 0;
> -}
> -
> -int
> -lio_wait_for_instr_fetch(struct lio_device *lio_dev)
> -{
> - int pending, instr_cnt;
> - int i, retry = 1000;
> -
> - do {
> - instr_cnt = 0;
> -
> - for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
> - if (!(lio_dev->io_qmask.iq & (1ULL << i)))
> - continue;
> -
> - if (lio_dev->instr_queue[i] == NULL)
> - break;
> -
> - pending = rte_atomic64_read(
> - &lio_dev->instr_queue[i]->instr_pending);
> - if (pending)
> - lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
> -
> - instr_cnt += pending;
> - }
> -
> - if (instr_cnt == 0)
> - break;
> -
> - rte_delay_ms(1);
> -
> - } while (retry-- && instr_cnt);
> -
> - return instr_cnt;
> -}
> -
> -static inline void
> -lio_ring_doorbell(struct lio_device *lio_dev,
> - struct lio_instr_queue *iq)
> -{
> - if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
> - rte_write32(iq->fill_cnt, iq->doorbell_reg);
> - /* make sure doorbell write goes through */
> - rte_wmb();
> - iq->fill_cnt = 0;
> - }
> -}
> -
> -static inline void
> -copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> - uint8_t *iqptr, cmdsize;
> -
> - cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
> - iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
> -
> - rte_memcpy(iqptr, cmd, cmdsize);
> -}
> -
> -static inline struct lio_iq_post_status
> -post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> - struct lio_iq_post_status st;
> -
> - st.status = LIO_IQ_SEND_OK;
> -
> - /* This ensures that the read index does not wrap around to the same
> - * position if queue gets full before Octeon could fetch any instr.
> - */
> - if (rte_atomic64_read(&iq->instr_pending) >=
> - (int32_t)(iq->nb_desc - 1)) {
> - st.status = LIO_IQ_SEND_FAILED;
> - st.index = -1;
> - return st;
> - }
> -
> - if (rte_atomic64_read(&iq->instr_pending) >=
> - (int32_t)(iq->nb_desc - 2))
> - st.status = LIO_IQ_SEND_STOP;
> -
> - copy_cmd_into_iq(iq, cmd);
> -
> - /* "index" is returned, host_write_index is modified. */
> - st.index = iq->host_write_index;
> - iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
> - iq->nb_desc);
> - iq->fill_cnt++;
> -
> - /* Flush the command into memory. We need to be sure the data is in
> - * memory before indicating that the instruction is pending.
> - */
> - rte_wmb();
> -
> - rte_atomic64_inc(&iq->instr_pending);
> -
> - return st;
> -}
> -
> -static inline void
> -lio_add_to_request_list(struct lio_instr_queue *iq,
> - int idx, void *buf, int reqtype)
> -{
> - iq->request_list[idx].buf = buf;
> - iq->request_list[idx].reqtype = reqtype;
> -}
> -
> -static inline void
> -lio_free_netsgbuf(void *buf)
> -{
> - struct lio_buf_free_info *finfo = buf;
> - struct lio_device *lio_dev = finfo->lio_dev;
> - struct rte_mbuf *m = finfo->mbuf;
> - struct lio_gather *g = finfo->g;
> - uint8_t iq = finfo->iq_no;
> -
> - /* This will take care of multiple segments also */
> - rte_pktmbuf_free(m);
> -
> - rte_spinlock_lock(&lio_dev->glist_lock[iq]);
> - STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
> - rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
> - rte_free(finfo);
> -}
> -
> -/* Can only run in process context */
> -static int
> -lio_process_iq_request_list(struct lio_device *lio_dev,
> - struct lio_instr_queue *iq)
> -{
> - struct octeon_instr_irh *irh = NULL;
> - uint32_t old = iq->flush_index;
> - struct lio_soft_command *sc;
> - uint32_t inst_count = 0;
> - int reqtype;
> - void *buf;
> -
> - while (old != iq->lio_read_index) {
> - reqtype = iq->request_list[old].reqtype;
> - buf = iq->request_list[old].buf;
> -
> - if (reqtype == LIO_REQTYPE_NONE)
> - goto skip_this;
> -
> - switch (reqtype) {
> - case LIO_REQTYPE_NORESP_NET:
> - rte_pktmbuf_free((struct rte_mbuf *)buf);
> - break;
> - case LIO_REQTYPE_NORESP_NET_SG:
> - lio_free_netsgbuf(buf);
> - break;
> - case LIO_REQTYPE_SOFT_COMMAND:
> - sc = buf;
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - if (irh->rflag) {
> - /* We're expecting a response from Octeon.
> - * It's up to lio_process_ordered_list() to
> - * process sc. Add sc to the ordered soft
> - * command response list because we expect
> - * a response from Octeon.
> - */
> - rte_spinlock_lock(&lio_dev->response_list.lock);
> - rte_atomic64_inc(
> - &lio_dev->response_list.pending_req_count);
> - STAILQ_INSERT_TAIL(
> - &lio_dev->response_list.head,
> - &sc->node, entries);
> - rte_spinlock_unlock(
> - &lio_dev->response_list.lock);
> - } else {
> - if (sc->callback) {
> - /* This callback must not sleep */
> - sc->callback(LIO_REQUEST_DONE,
> - sc->callback_arg);
> - }
> - }
> - break;
> - default:
> - lio_dev_err(lio_dev,
> - "Unknown reqtype: %d buf: %p at idx %d\n",
> - reqtype, buf, old);
> - }
> -
> - iq->request_list[old].buf = NULL;
> - iq->request_list[old].reqtype = 0;
> -
> -skip_this:
> - inst_count++;
> - old = lio_incr_index(old, 1, iq->nb_desc);
> - }
> -
> - iq->flush_index = old;
> -
> - return inst_count;
> -}
> -
> -static void
> -lio_update_read_index(struct lio_instr_queue *iq)
> -{
> - uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
> - uint32_t last_done;
> -
> - last_done = pkt_in_done - iq->pkt_in_done;
> - iq->pkt_in_done = pkt_in_done;
> -
> - /* Add last_done and modulo with the IQ size to get new index */
> - iq->lio_read_index = (iq->lio_read_index +
> - (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
> - iq->nb_desc;
> -}
> -
> -int
> -lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
> -{
> - uint32_t inst_processed = 0;
> - int tx_done = 1;
> -
> - if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
> - return tx_done;
> -
> - rte_spinlock_lock(&iq->lock);
> -
> - lio_update_read_index(iq);
> -
> - do {
> - /* Process any outstanding IQ packets. */
> - if (iq->flush_index == iq->lio_read_index)
> - break;
> -
> - inst_processed = lio_process_iq_request_list(lio_dev, iq);
> -
> - if (inst_processed) {
> - rte_atomic64_sub(&iq->instr_pending, inst_processed);
> - iq->stats.instr_processed += inst_processed;
> - }
> -
> - inst_processed = 0;
> -
> - } while (1);
> -
> - rte_spinlock_unlock(&iq->lock);
> -
> - rte_atomic64_clear(&iq->iq_flush_running);
> -
> - return tx_done;
> -}
> -
> -static int
> -lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
> - void *buf, uint32_t datasize, uint32_t reqtype)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - struct lio_iq_post_status st;
> -
> - rte_spinlock_lock(&iq->post_lock);
> -
> - st = post_command2(iq, cmd);
> -
> - if (st.status != LIO_IQ_SEND_FAILED) {
> - lio_add_to_request_list(iq, st.index, buf, reqtype);
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
> - datasize);
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
> -
> - lio_ring_doorbell(lio_dev, iq);
> - } else {
> - LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
> - }
> -
> - rte_spinlock_unlock(&iq->post_lock);
> -
> - return st.status;
> -}
> -
> -void
> -lio_prepare_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc, uint8_t opcode,
> - uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
> - uint64_t ossp1)
> -{
> - struct octeon_instr_pki_ih3 *pki_ih3;
> - struct octeon_instr_ih3 *ih3;
> - struct octeon_instr_irh *irh;
> - struct octeon_instr_rdp *rdp;
> -
> - RTE_ASSERT(opcode <= 15);
> - RTE_ASSERT(subcode <= 127);
> -
> - ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> -
> - ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
> -
> - pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
> -
> - pki_ih3->w = 1;
> - pki_ih3->raw = 1;
> - pki_ih3->utag = 1;
> - pki_ih3->uqpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
> - pki_ih3->utt = 1;
> -
> - pki_ih3->tag = LIO_CONTROL;
> - pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
> - pki_ih3->qpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
> - pki_ih3->pm = 0x7;
> - pki_ih3->sl = 8;
> -
> - if (sc->datasize)
> - ih3->dlengsz = sc->datasize;
> -
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - irh->opcode = opcode;
> - irh->subcode = subcode;
> -
> - /* opcode/subcode specific parameters (ossp) */
> - irh->ossp = irh_ossp;
> - sc->cmd.cmd3.ossp[0] = ossp0;
> - sc->cmd.cmd3.ossp[1] = ossp1;
> -
> - if (sc->rdatasize) {
> - rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
> - rdp->pcie_port = lio_dev->pcie_port;
> - rdp->rlen = sc->rdatasize;
> - irh->rflag = 1;
> - /* PKI IH3 */
> - ih3->fsz = OCTEON_SOFT_CMD_RESP_IH3;
> - } else {
> - irh->rflag = 0;
> - /* PKI IH3 */
> - ih3->fsz = OCTEON_PCI_CMD_O3;
> - }
> -}
> -
> -int
> -lio_send_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc)
> -{
> - struct octeon_instr_ih3 *ih3;
> - struct octeon_instr_irh *irh;
> - uint32_t len = 0;
> -
> - ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> - if (ih3->dlengsz) {
> - RTE_ASSERT(sc->dmadptr);
> - sc->cmd.cmd3.dptr = sc->dmadptr;
> - }
> -
> - irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> - if (irh->rflag) {
> - RTE_ASSERT(sc->dmarptr);
> - RTE_ASSERT(sc->status_word != NULL);
> - *sc->status_word = LIO_COMPLETION_WORD_INIT;
> - sc->cmd.cmd3.rptr = sc->dmarptr;
> - }
> -
> - len = (uint32_t)ih3->dlengsz;
> -
> - if (sc->wait_time)
> - sc->timeout = lio_uptime + sc->wait_time;
> -
> - return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
> - LIO_REQTYPE_SOFT_COMMAND);
> -}
> -
> -int
> -lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> - char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
> - uint16_t buf_size;
> -
> - buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
> - snprintf(sc_pool_name, sizeof(sc_pool_name),
> - "lio_sc_pool_%u", lio_dev->port_id);
> - lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
> - LIO_MAX_SOFT_COMMAND_BUFFERS,
> - 0, 0, buf_size, SOCKET_ID_ANY);
> - return 0;
> -}
> -
> -void
> -lio_free_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> - rte_mempool_free(lio_dev->sc_buf_pool);
> -}
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
> - uint32_t rdatasize, uint32_t ctxsize)
> -{
> - uint32_t offset = sizeof(struct lio_soft_command);
> - struct lio_soft_command *sc;
> - struct rte_mbuf *m;
> - uint64_t dma_addr;
> -
> - RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
> - LIO_SOFT_COMMAND_BUFFER_SIZE);
> -
> - m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
> - if (m == NULL) {
> - lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
> - return NULL;
> - }
> -
> - /* set rte_mbuf data size and there is only 1 segment */
> - m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> - m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> -
> - /* use rte_mbuf buffer for soft command */
> - sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
> - memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
> - sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
> - sc->dma_addr = rte_mbuf_data_iova(m);
> - sc->mbuf = m;
> -
> - dma_addr = sc->dma_addr;
> -
> - if (ctxsize) {
> - sc->ctxptr = (uint8_t *)sc + offset;
> - sc->ctxsize = ctxsize;
> - }
> -
> - /* Start data at 128 byte boundary */
> - offset = (offset + ctxsize + 127) & 0xffffff80;
> -
> - if (datasize) {
> - sc->virtdptr = (uint8_t *)sc + offset;
> - sc->dmadptr = dma_addr + offset;
> - sc->datasize = datasize;
> - }
> -
> - /* Start rdata at 128 byte boundary */
> - offset = (offset + datasize + 127) & 0xffffff80;
> -
> - if (rdatasize) {
> - RTE_ASSERT(rdatasize >= 16);
> - sc->virtrptr = (uint8_t *)sc + offset;
> - sc->dmarptr = dma_addr + offset;
> - sc->rdatasize = rdatasize;
> - sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
> - rdatasize - 8);
> - }
> -
> - return sc;
> -}
> -
> -void
> -lio_free_soft_command(struct lio_soft_command *sc)
> -{
> - rte_pktmbuf_free(sc->mbuf);
> -}
> -
> -void
> -lio_setup_response_list(struct lio_device *lio_dev)
> -{
> - STAILQ_INIT(&lio_dev->response_list.head);
> - rte_spinlock_init(&lio_dev->response_list.lock);
> - rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
> -}
> -
> -int
> -lio_process_ordered_list(struct lio_device *lio_dev)
> -{
> - int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
> - struct lio_response_list *ordered_sc_list;
> - struct lio_soft_command *sc;
> - int request_complete = 0;
> - uint64_t status64;
> - uint32_t status;
> -
> - ordered_sc_list = &lio_dev->response_list;
> -
> - do {
> - rte_spinlock_lock(&ordered_sc_list->lock);
> -
> - if (STAILQ_EMPTY(&ordered_sc_list->head)) {
> - /* ordered_sc_list is empty; there is
> - * nothing to process
> - */
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> - return -1;
> - }
> -
> - sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
> - struct lio_soft_command, node);
> -
> - status = LIO_REQUEST_PENDING;
> -
> - /* check if octeon has finished DMA'ing a response
> - * to where rptr is pointing to
> - */
> - status64 = *sc->status_word;
> -
> - if (status64 != LIO_COMPLETION_WORD_INIT) {
> - /* This logic ensures that all 64b have been written.
> - * 1. check byte 0 for non-FF
> - * 2. if non-FF, then swap result from BE to host order
> - * 3. check byte 7 (swapped to 0) for non-FF
> - * 4. if non-FF, use the low 32-bit status code
> - * 5. if either byte 0 or byte 7 is FF, don't use status
> - */
> - if ((status64 & 0xff) != 0xff) {
> - lio_swap_8B_data(&status64, 1);
> - if (((status64 & 0xff) != 0xff)) {
> - /* retrieve 16-bit firmware status */
> - status = (uint32_t)(status64 &
> - 0xffffULL);
> - if (status) {
> - status =
> - LIO_FIRMWARE_STATUS_CODE(
> - status);
> - } else {
> - /* i.e. no error */
> - status = LIO_REQUEST_DONE;
> - }
> - }
> - }
> - } else if ((sc->timeout && lio_check_timeout(lio_uptime,
> - sc->timeout))) {
> - lio_dev_err(lio_dev,
> - "cmd failed, timeout (%ld, %ld)\n",
> - (long)lio_uptime, (long)sc->timeout);
> - status = LIO_REQUEST_TIMEOUT;
> - }
> -
> - if (status != LIO_REQUEST_PENDING) {
> - /* we have received a response or we have timed out.
> - * remove node from linked list
> - */
> - STAILQ_REMOVE(&ordered_sc_list->head,
> - &sc->node, lio_stailq_node, entries);
> - rte_atomic64_dec(
> - &lio_dev->response_list.pending_req_count);
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> -
> - if (sc->callback)
> - sc->callback(status, sc->callback_arg);
> -
> - request_complete++;
> - } else {
> - /* no response yet */
> - request_complete = 0;
> - rte_spinlock_unlock(&ordered_sc_list->lock);
> - }
> -
> - /* If we hit the Max Ordered requests to process every loop,
> - * we quit and let this function be invoked the next time
> - * the poll thread runs to process the remaining requests.
> - * This function can take up the entire CPU if there is
> - * no upper limit to the requests processed.
> - */
> - if (request_complete >= resp_to_process)
> - break;
> - } while (request_complete);
> -
> - return 0;
> -}
> -
> -static inline struct lio_stailq_node *
> -list_delete_first_node(struct lio_stailq_head *head)
> -{
> - struct lio_stailq_node *node;
> -
> - if (STAILQ_EMPTY(head))
> - node = NULL;
> - else
> - node = STAILQ_FIRST(head);
> -
> - if (node)
> - STAILQ_REMOVE(head, node, lio_stailq_node, entries);
> -
> - return node;
> -}
> -
> -void
> -lio_delete_sglist(struct lio_instr_queue *txq)
> -{
> - struct lio_device *lio_dev = txq->lio_dev;
> - int iq_no = txq->q_index;
> - struct lio_gather *g;
> -
> - if (lio_dev->glist_head == NULL)
> - return;
> -
> - do {
> - g = (struct lio_gather *)list_delete_first_node(
> - &lio_dev->glist_head[iq_no]);
> - if (g) {
> - if (g->sg)
> - rte_free(
> - (void *)((unsigned long)g->sg - g->adjust));
> - rte_free(g);
> - }
> - } while (g);
> -}
> -
> -/**
> - * \brief Setup gather lists
> - * @param lio per-network private data
> - */
> -int
> -lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> - int fw_mapped_iq, int num_descs, unsigned int socket_id)
> -{
> - struct lio_gather *g;
> - int i;
> -
> - rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
> -
> - STAILQ_INIT(&lio_dev->glist_head[iq_no]);
> -
> - for (i = 0; i < num_descs; i++) {
> - g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
> - socket_id);
> - if (g == NULL) {
> - lio_dev_err(lio_dev,
> - "lio_gather memory allocation failed for qno %d\n",
> - iq_no);
> - break;
> - }
> -
> - g->sg_size =
> - ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
> -
> - g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
> - RTE_CACHE_LINE_SIZE, socket_id);
> - if (g->sg == NULL) {
> - lio_dev_err(lio_dev,
> - "sg list memory allocation failed for qno %d\n",
> - iq_no);
> - rte_free(g);
> - break;
> - }
> -
> - /* The gather component should be aligned on 64-bit boundary */
> - if (((unsigned long)g->sg) & 7) {
> - g->adjust = 8 - (((unsigned long)g->sg) & 7);
> - g->sg =
> - (struct lio_sg_entry *)((unsigned long)g->sg +
> - g->adjust);
> - }
> -
> - STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
> - entries);
> - }
> -
> - if (i != num_descs) {
> - lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
> - return -ENOMEM;
> - }
> -
> - return 0;
> -}
> -
> -void
> -lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
> -{
> - lio_delete_instr_queue(lio_dev, iq_no);
> - rte_free(lio_dev->instr_queue[iq_no]);
> - lio_dev->instr_queue[iq_no] = NULL;
> - lio_dev->num_iqs--;
> -}
> -
> -static inline uint32_t
> -lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
> - (uint32_t)rte_atomic64_read(
> - &lio_dev->instr_queue[q_no]->instr_pending));
> -}
> -
> -static inline int
> -lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
> -{
> - return ((uint32_t)rte_atomic64_read(
> - &lio_dev->instr_queue[q_no]->instr_pending) >=
> - (lio_dev->instr_queue[q_no]->nb_desc - 2));
> -}
> -
> -static int
> -lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
> -{
> - struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> - uint32_t count = 10000;
> -
> - while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
> - --count)
> - lio_flush_iq(lio_dev, iq);
> -
> - return count ? 0 : 1;
> -}
> -
> -static void
> -lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
> -{
> - struct lio_soft_command *sc = sc_ptr;
> - struct lio_dev_ctrl_cmd *ctrl_cmd;
> - struct lio_ctrl_pkt *ctrl_pkt;
> -
> - ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
> - ctrl_cmd = ctrl_pkt->ctrl_cmd;
> - ctrl_cmd->cond = 1;
> -
> - lio_free_soft_command(sc);
> -}
> -
> -static inline struct lio_soft_command *
> -lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
> - struct lio_ctrl_pkt *ctrl_pkt)
> -{
> - struct lio_soft_command *sc = NULL;
> - uint32_t uddsize, datasize;
> - uint32_t rdatasize;
> - uint8_t *data;
> -
> - uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
> -
> - datasize = OCTEON_CMD_SIZE + uddsize;
> - rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
> -
> - sc = lio_alloc_soft_command(lio_dev, datasize,
> - rdatasize, sizeof(struct lio_ctrl_pkt));
> - if (sc == NULL)
> - return NULL;
> -
> - rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
> -
> - data = (uint8_t *)sc->virtdptr;
> -
> - rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
> -
> - lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
> -
> - if (uddsize) {
> - /* Endian-Swap for UDD should have been done by caller. */
> - rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
> - }
> -
> - sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
> -
> - lio_prepare_soft_command(lio_dev, sc,
> - LIO_OPCODE, LIO_OPCODE_CMD,
> - 0, 0, 0);
> -
> - sc->callback = lio_ctrl_cmd_callback;
> - sc->callback_arg = sc;
> - sc->wait_time = ctrl_pkt->wait_time;
> -
> - return sc;
> -}
> -
> -int
> -lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
> -{
> - struct lio_soft_command *sc = NULL;
> - int retval;
> -
> - sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
> - if (sc == NULL) {
> - lio_dev_err(lio_dev, "soft command allocation failed\n");
> - return -1;
> - }
> -
> - retval = lio_send_soft_command(lio_dev, sc);
> - if (retval == LIO_IQ_SEND_FAILED) {
> - lio_free_soft_command(sc);
> - lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
> - lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
> - return -1;
> - }
> -
> - return retval;
> -}
> -
> -/** Send data packet to the device
> - * @param lio_dev - lio device pointer
> - * @param ndata - control structure with queueing, and buffer information
> - *
> - * @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue. LIO_IQ_SEND_STOP if the
> - * queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
> - */
> -static inline int
> -lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
> -{
> - return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
> - ndata->buf, ndata->datasize, ndata->reqtype);
> -}
> -
> -uint16_t
> -lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
> -{
> - struct lio_instr_queue *txq = tx_queue;
> - union lio_cmd_setup cmdsetup;
> - struct lio_device *lio_dev;
> - struct lio_iq_stats *stats;
> - struct lio_data_pkt ndata;
> - int i, processed = 0;
> - struct rte_mbuf *m;
> - uint32_t tag = 0;
> - int status = 0;
> - int iq_no;
> -
> - lio_dev = txq->lio_dev;
> - iq_no = txq->txpciq.s.q_no;
> - stats = &lio_dev->instr_queue[iq_no]->stats;
> -
> - if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
> - PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
> - lio_dev->linfo.link.s.link_up);
> - goto xmit_failed;
> - }
> -
> - lio_dev_cleanup_iq(lio_dev, iq_no);
> -
> - for (i = 0; i < nb_pkts; i++) {
> - uint32_t pkt_len = 0;
> -
> - m = pkts[i];
> -
> - /* Prepare the attributes for the data to be passed to BASE. */
> - memset(&ndata, 0, sizeof(struct lio_data_pkt));
> -
> - ndata.buf = m;
> -
> - ndata.q_no = iq_no;
> - if (lio_iq_is_full(lio_dev, ndata.q_no)) {
> - stats->tx_iq_busy++;
> - if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "Transmit failed iq:%d full\n",
> - ndata.q_no);
> - break;
> - }
> - }
> -
> - cmdsetup.cmd_setup64 = 0;
> - cmdsetup.s.iq_no = iq_no;
> -
> - /* check checksum offload flags to form cmd */
> - if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
> - cmdsetup.s.ip_csum = 1;
> -
> - if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
> - cmdsetup.s.tnl_csum = 1;
> - else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
> - (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
> - cmdsetup.s.transport_csum = 1;
> -
> - if (m->nb_segs == 1) {
> - pkt_len = rte_pktmbuf_data_len(m);
> - cmdsetup.s.u.datasize = pkt_len;
> - lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> - &cmdsetup, tag);
> - ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
> - ndata.reqtype = LIO_REQTYPE_NORESP_NET;
> - } else {
> - struct lio_buf_free_info *finfo;
> - struct lio_gather *g;
> - rte_iova_t phyaddr;
> - int i, frags;
> -
> - finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
> - sizeof(*finfo), 0);
> - if (finfo == NULL) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "free buffer alloc failed\n");
> - goto xmit_failed;
> - }
> -
> - rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
> - g = (struct lio_gather *)list_delete_first_node(
> - &lio_dev->glist_head[iq_no]);
> - rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
> - if (g == NULL) {
> - PMD_TX_LOG(lio_dev, ERR,
> - "Transmit scatter gather: glist null!\n");
> - goto xmit_failed;
> - }
> -
> - cmdsetup.s.gather = 1;
> - cmdsetup.s.u.gatherptrs = m->nb_segs;
> - lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> - &cmdsetup, tag);
> -
> - memset(g->sg, 0, g->sg_size);
> - g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
> - lio_add_sg_size(&g->sg[0], m->data_len, 0);
> - pkt_len = m->data_len;
> - finfo->mbuf = m;
> -
> - /* First seg taken care above */
> - frags = m->nb_segs - 1;
> - i = 1;
> - m = m->next;
> - while (frags--) {
> - g->sg[(i >> 2)].ptr[(i & 3)] =
> - rte_mbuf_data_iova(m);
> - lio_add_sg_size(&g->sg[(i >> 2)],
> - m->data_len, (i & 3));
> - pkt_len += m->data_len;
> - i++;
> - m = m->next;
> - }
> -
> - phyaddr = rte_mem_virt2iova(g->sg);
> - if (phyaddr == RTE_BAD_IOVA) {
> - PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
> - goto xmit_failed;
> - }
> -
> - ndata.cmd.cmd3.dptr = phyaddr;
> - ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
> -
> - finfo->g = g;
> - finfo->lio_dev = lio_dev;
> - finfo->iq_no = (uint64_t)iq_no;
> - ndata.buf = finfo;
> - }
> -
> - ndata.datasize = pkt_len;
> -
> - status = lio_send_data_pkt(lio_dev, &ndata);
> -
> - if (unlikely(status == LIO_IQ_SEND_FAILED)) {
> - PMD_TX_LOG(lio_dev, ERR, "send failed\n");
> - break;
> - }
> -
> - if (unlikely(status == LIO_IQ_SEND_STOP)) {
> - PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
> - /* create space as iq is full */
> - lio_dev_cleanup_iq(lio_dev, iq_no);
> - }
> -
> - stats->tx_done++;
> - stats->tx_tot_bytes += pkt_len;
> - processed++;
> - }
> -
> -xmit_failed:
> - stats->tx_dropped += (nb_pkts - processed);
> -
> - return processed;
> -}
> -
> -void
> -lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
> -{
> - struct lio_instr_queue *txq;
> - struct lio_droq *rxq;
> - uint16_t i;
> -
> - for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> - txq = eth_dev->data->tx_queues[i];
> - if (txq != NULL) {
> - lio_dev_tx_queue_release(eth_dev, i);
> - eth_dev->data->tx_queues[i] = NULL;
> - }
> - }
> -
> - for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> - rxq = eth_dev->data->rx_queues[i];
> - if (rxq != NULL) {
> - lio_dev_rx_queue_release(eth_dev, i);
> - eth_dev->data->rx_queues[i] = NULL;
> - }
> - }
> -}
> diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
> deleted file mode 100644
> index d2a45104f0..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.h
> +++ /dev/null
> @@ -1,740 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_RXTX_H_
> -#define _LIO_RXTX_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_memory.h>
> -
> -#include "lio_struct.h"
> -
> -#ifndef ROUNDUP4
> -#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
> -#endif
> -
> -#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem) \
> - (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
> -
> -#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
> -
> -#define lio_uptime \
> - (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
> -
> -/** Descriptor format.
> - * The descriptor ring is made of descriptors which have 2 64-bit values:
> - * -# Physical (bus) address of the data buffer.
> - * -# Physical (bus) address of a lio_droq_info structure.
> - * The device DMA's incoming packets and its information at the address
> - * given by these descriptor fields.
> - */
> -struct lio_droq_desc {
> - /** The buffer pointer */
> - uint64_t buffer_ptr;
> -
> - /** The Info pointer */
> - uint64_t info_ptr;
> -};
> -
> -#define LIO_DROQ_DESC_SIZE (sizeof(struct lio_droq_desc))
> -
> -/** Information about packet DMA'ed by Octeon.
> - * The format of the information available at Info Pointer after Octeon
> - * has posted a packet. Not all descriptors have valid information. Only
> - * the Info field of the first descriptor for a packet has information
> - * about the packet.
> - */
> -struct lio_droq_info {
> - /** The Output Receive Header. */
> - union octeon_rh rh;
> -
> - /** The Length of the packet. */
> - uint64_t length;
> -};
> -
> -#define LIO_DROQ_INFO_SIZE (sizeof(struct lio_droq_info))
> -
> -/** Pointer to data buffer.
> - * Driver keeps a pointer to the data buffer that it made available to
> - * the Octeon device. Since the descriptor ring keeps physical (bus)
> - * addresses, this field is required for the driver to keep track of
> - * the virtual address pointers.
> - */
> -struct lio_recv_buffer {
> - /** Packet buffer, including meta data. */
> - void *buffer;
> -
> - /** Data in the packet buffer. */
> - uint8_t *data;
> -
> -};
> -
> -#define LIO_DROQ_RECVBUF_SIZE (sizeof(struct lio_recv_buffer))
> -
> -#define LIO_DROQ_SIZE (sizeof(struct lio_droq))
> -
> -#define LIO_IQ_SEND_OK 0
> -#define LIO_IQ_SEND_STOP 1
> -#define LIO_IQ_SEND_FAILED -1
> -
> -/* conditions */
> -#define LIO_REQTYPE_NONE 0
> -#define LIO_REQTYPE_NORESP_NET 1
> -#define LIO_REQTYPE_NORESP_NET_SG 2
> -#define LIO_REQTYPE_SOFT_COMMAND 3
> -
> -struct lio_request_list {
> - uint32_t reqtype;
> - void *buf;
> -};
> -
> -/*---------------------- INSTRUCTION FORMAT ----------------------------*/
> -
> -struct lio_instr3_64B {
> - /** Pointer where the input data is available. */
> - uint64_t dptr;
> -
> - /** Instruction Header. */
> - uint64_t ih3;
> -
> - /** Instruction Header. */
> - uint64_t pki_ih3;
> -
> - /** Input Request Header. */
> - uint64_t irh;
> -
> - /** opcode/subcode specific parameters */
> - uint64_t ossp[2];
> -
> - /** Return Data Parameters */
> - uint64_t rdp;
> -
> - /** Pointer where the response for a RAW mode packet will be written
> - * by Octeon.
> - */
> - uint64_t rptr;
> -
> -};
> -
> -union lio_instr_64B {
> - struct lio_instr3_64B cmd3;
> -};
> -
> -/** The size of each buffer in soft command buffer pool */
> -#define LIO_SOFT_COMMAND_BUFFER_SIZE 1536
> -
> -/** Maximum number of buffers to allocate into soft command buffer pool */
> -#define LIO_MAX_SOFT_COMMAND_BUFFERS 255
> -
> -struct lio_soft_command {
> - /** Soft command buffer info. */
> - struct lio_stailq_node node;
> - uint64_t dma_addr;
> - uint32_t size;
> -
> - /** Command and return status */
> - union lio_instr_64B cmd;
> -
> -#define LIO_COMPLETION_WORD_INIT 0xffffffffffffffffULL
> - uint64_t *status_word;
> -
> - /** Data buffer info */
> - void *virtdptr;
> - uint64_t dmadptr;
> - uint32_t datasize;
> -
> - /** Return buffer info */
> - void *virtrptr;
> - uint64_t dmarptr;
> - uint32_t rdatasize;
> -
> - /** Context buffer info */
> - void *ctxptr;
> - uint32_t ctxsize;
> -
> - /** Time out and callback */
> - size_t wait_time;
> - size_t timeout;
> - uint32_t iq_no;
> - void (*callback)(uint32_t, void *);
> - void *callback_arg;
> - struct rte_mbuf *mbuf;
> -};
> -
> -struct lio_iq_post_status {
> - int status;
> - int index;
> -};
> -
> -/* wqe
> - * --------------- 0
> - * | wqe word0-3 |
> - * --------------- 32
> - * | PCI IH |
> - * --------------- 40
> - * | RPTR |
> - * --------------- 48
> - * | PCI IRH |
> - * --------------- 56
> - * | OCTEON_CMD |
> - * --------------- 64
> - * | Addtl 8-BData |
> - * | |
> - * ---------------
> - */
> -
> -union octeon_cmd {
> - uint64_t cmd64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t cmd : 5;
> -
> - uint64_t more : 6; /* How many udd words follow the command */
> -
> - uint64_t reserved : 29;
> -
> - uint64_t param1 : 16;
> -
> - uint64_t param2 : 8;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> - uint64_t param2 : 8;
> -
> - uint64_t param1 : 16;
> -
> - uint64_t reserved : 29;
> -
> - uint64_t more : 6;
> -
> - uint64_t cmd : 5;
> -
> -#endif
> - } s;
> -};
> -
> -#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
> -
> -/* Maximum number of 8-byte words can be
> - * sent in a NIC control message.
> - */
> -#define LIO_MAX_NCTRL_UDD 32
> -
> -/* Structure of control information passed by driver to the BASE
> - * layer when sending control commands to Octeon device software.
> - */
> -struct lio_ctrl_pkt {
> - /** Command to be passed to the Octeon device software. */
> - union octeon_cmd ncmd;
> -
> - /** Send buffer */
> - void *data;
> - uint64_t dmadata;
> -
> - /** Response buffer */
> - void *rdata;
> - uint64_t dmardata;
> -
> - /** Additional data that may be needed by some commands. */
> - uint64_t udd[LIO_MAX_NCTRL_UDD];
> -
> - /** Input queue to use to send this command. */
> - uint64_t iq_no;
> -
> - /** Time to wait for Octeon software to respond to this control command.
> - * If wait_time is 0, BASE assumes no response is expected.
> - */
> - size_t wait_time;
> -
> - struct lio_dev_ctrl_cmd *ctrl_cmd;
> -};
> -
> -/** Structure of data information passed by driver to the BASE
> - * layer when forwarding data to Octeon device software.
> - */
> -struct lio_data_pkt {
> - /** Pointer to information maintained by NIC module for this packet. The
> - * BASE layer passes this as-is to the driver.
> - */
> - void *buf;
> -
> - /** Type of buffer passed in "buf" above. */
> - uint32_t reqtype;
> -
> - /** Total data bytes to be transferred in this command. */
> - uint32_t datasize;
> -
> - /** Command to be passed to the Octeon device software. */
> - union lio_instr_64B cmd;
> -
> - /** Input queue to use to send this command. */
> - uint32_t q_no;
> -};
> -
> -/** Structure passed by driver to BASE layer to prepare a command to send
> - * network data to Octeon.
> - */
> -union lio_cmd_setup {
> - struct {
> - uint32_t iq_no : 8;
> - uint32_t gather : 1;
> - uint32_t timestamp : 1;
> - uint32_t ip_csum : 1;
> - uint32_t transport_csum : 1;
> - uint32_t tnl_csum : 1;
> - uint32_t rsvd : 19;
> -
> - union {
> - uint32_t datasize;
> - uint32_t gatherptrs;
> - } u;
> - } s;
> -
> - uint64_t cmd_setup64;
> -};
> -
> -/* Instruction Header */
> -struct octeon_instr_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** Reserved3 */
> - uint64_t reserved3 : 1;
> -
> - /** Gather indicator 1=gather*/
> - uint64_t gather : 1;
> -
> - /** Data length OR no. of entries in gather list */
> - uint64_t dlengsz : 14;
> -
> - /** Front Data size */
> - uint64_t fsz : 6;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 4;
> -
> - /** PKI port kind - PKIND */
> - uint64_t pkind : 6;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - /** Reserved1 */
> - uint64_t reserved1 : 32;
> -
> - /** PKI port kind - PKIND */
> - uint64_t pkind : 6;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 4;
> -
> - /** Front Data size */
> - uint64_t fsz : 6;
> -
> - /** Data length OR no. of entries in gather list */
> - uint64_t dlengsz : 14;
> -
> - /** Gather indicator 1=gather*/
> - uint64_t gather : 1;
> -
> - /** Reserved3 */
> - uint64_t reserved3 : 1;
> -
> -#endif
> -};
> -
> -/* PKI Instruction Header(PKI IH) */
> -struct octeon_instr_pki_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** Wider bit */
> - uint64_t w : 1;
> -
> - /** Raw mode indicator 1 = RAW */
> - uint64_t raw : 1;
> -
> - /** Use Tag */
> - uint64_t utag : 1;
> -
> - /** Use QPG */
> - uint64_t uqpg : 1;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 1;
> -
> - /** Parse Mode */
> - uint64_t pm : 3;
> -
> - /** Skip Length */
> - uint64_t sl : 8;
> -
> - /** Use Tag Type */
> - uint64_t utt : 1;
> -
> - /** Tag type */
> - uint64_t tagtype : 2;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 2;
> -
> - /** QPG Value */
> - uint64_t qpg : 11;
> -
> - /** Tag Value */
> - uint64_t tag : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> - /** Tag Value */
> - uint64_t tag : 32;
> -
> - /** QPG Value */
> - uint64_t qpg : 11;
> -
> - /** Reserved1 */
> - uint64_t reserved1 : 2;
> -
> - /** Tag type */
> - uint64_t tagtype : 2;
> -
> - /** Use Tag Type */
> - uint64_t utt : 1;
> -
> - /** Skip Length */
> - uint64_t sl : 8;
> -
> - /** Parse Mode */
> - uint64_t pm : 3;
> -
> - /** Reserved2 */
> - uint64_t reserved2 : 1;
> -
> - /** Use QPG */
> - uint64_t uqpg : 1;
> -
> - /** Use Tag */
> - uint64_t utag : 1;
> -
> - /** Raw mode indicator 1 = RAW */
> - uint64_t raw : 1;
> -
> - /** Wider bit */
> - uint64_t w : 1;
> -#endif
> -};
> -
> -/** Input Request Header */
> -struct octeon_instr_irh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t opcode : 4;
> - uint64_t rflag : 1;
> - uint64_t subcode : 7;
> - uint64_t vlan : 12;
> - uint64_t priority : 3;
> - uint64_t reserved : 5;
> - uint64_t ossp : 32; /* opcode/subcode specific parameters */
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t ossp : 32; /* opcode/subcode specific parameters */
> - uint64_t reserved : 5;
> - uint64_t priority : 3;
> - uint64_t vlan : 12;
> - uint64_t subcode : 7;
> - uint64_t rflag : 1;
> - uint64_t opcode : 4;
> -#endif
> -};
> -
> -/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
> -#define OCTEON_SOFT_CMD_RESP_IH3 (40 + 8)
> -/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
> -#define OCTEON_PCI_CMD_O3 (24 + 8)
> -
> -/** Return Data Parameters */
> -struct octeon_instr_rdp {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t reserved : 49;
> - uint64_t pcie_port : 3;
> - uint64_t rlen : 12;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - uint64_t rlen : 12;
> - uint64_t pcie_port : 3;
> - uint64_t reserved : 49;
> -#endif
> -};
> -
> -union octeon_packet_params {
> - uint32_t pkt_params32;
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint32_t reserved : 24;
> - uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
> - /* Perform Outer transport header checksum */
> - uint32_t transport_csum : 1;
> - /* Find tunnel, and perform transport csum. */
> - uint32_t tnl_csum : 1;
> - uint32_t tsflag : 1; /* Timestamp this packet */
> - uint32_t ipsec_ops : 4; /* IPsec operation */
> -#else
> - uint32_t ipsec_ops : 4;
> - uint32_t tsflag : 1;
> - uint32_t tnl_csum : 1;
> - uint32_t transport_csum : 1;
> - uint32_t ip_csum : 1;
> - uint32_t reserved : 7;
> -#endif
> - } s;
> -};
> -
> -/** Utility function to prepare a 64B NIC instruction based on a setup command
> - * @param cmd - pointer to instruction to be filled in.
> - * @param setup - pointer to the setup structure
> - * @param q_no - which queue for back pressure
> - *
> - * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
> - */
> -static inline void
> -lio_prepare_pci_cmd(struct lio_device *lio_dev,
> - union lio_instr_64B *cmd,
> - union lio_cmd_setup *setup,
> - uint32_t tag)
> -{
> - union octeon_packet_params packet_params;
> - struct octeon_instr_pki_ih3 *pki_ih3;
> - struct octeon_instr_irh *irh;
> - struct octeon_instr_ih3 *ih3;
> - int port;
> -
> - memset(cmd, 0, sizeof(union lio_instr_64B));
> -
> - ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
> - pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
> -
> - /* assume that rflag is cleared so therefore front data will only have
> - * irh and ossp[1] and ossp[2] for a total of 24 bytes
> - */
> - ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
> - /* PKI IH */
> - ih3->fsz = OCTEON_PCI_CMD_O3;
> -
> - if (!setup->s.gather) {
> - ih3->dlengsz = setup->s.u.datasize;
> - } else {
> - ih3->gather = 1;
> - ih3->dlengsz = setup->s.u.gatherptrs;
> - }
> -
> - pki_ih3->w = 1;
> - pki_ih3->raw = 0;
> - pki_ih3->utag = 0;
> - pki_ih3->utt = 1;
> - pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
> -
> - port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
> -
> - if (tag)
> - pki_ih3->tag = tag;
> - else
> - pki_ih3->tag = LIO_DATA(port);
> -
> - pki_ih3->tagtype = OCTEON_ORDERED_TAG;
> - pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
> - pki_ih3->pm = 0x0; /* parse from L2 */
> - pki_ih3->sl = 32; /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
> -
> - irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
> -
> - irh->opcode = LIO_OPCODE;
> - irh->subcode = LIO_OPCODE_NW_DATA;
> -
> - packet_params.pkt_params32 = 0;
> - packet_params.s.ip_csum = setup->s.ip_csum;
> - packet_params.s.transport_csum = setup->s.transport_csum;
> - packet_params.s.tnl_csum = setup->s.tnl_csum;
> - packet_params.s.tsflag = setup->s.timestamp;
> -
> - irh->ossp = packet_params.pkt_params32;
> -}
> -
> -int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
> -void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev,
> - uint32_t datasize, uint32_t rdatasize,
> - uint32_t ctxsize);
> -void lio_prepare_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc,
> - uint8_t opcode, uint8_t subcode,
> - uint32_t irh_ossp, uint64_t ossp0,
> - uint64_t ossp1);
> -int lio_send_soft_command(struct lio_device *lio_dev,
> - struct lio_soft_command *sc);
> -void lio_free_soft_command(struct lio_soft_command *sc);
> -
> -/** Send control packet to the device
> - * @param lio_dev - lio device pointer
> - * @param nctrl - control structure with command, timeout, and callback info
> - *
> - * @returns IQ_FAILED if it failed to add to the input queue. IQ_STOP if it the
> - * queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
> - */
> -int lio_send_ctrl_pkt(struct lio_device *lio_dev,
> - struct lio_ctrl_pkt *ctrl_pkt);
> -
> -/** Maximum ordered requests to process in every invocation of
> - * lio_process_ordered_list(). The function will continue to process requests
> - * as long as it can find one that has finished processing. If it keeps
> - * finding requests that have completed, the function can run for ever. The
> - * value defined here sets an upper limit on the number of requests it can
> - * process before it returns control to the poll thread.
> - */
> -#define LIO_MAX_ORD_REQS_TO_PROCESS 4096
> -
> -/** Error codes used in Octeon Host-Core communication.
> - *
> - * 31 16 15 0
> - * ----------------------------
> - * | | |
> - * ----------------------------
> - * Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
> - * are reserved to identify the group to which the error code belongs. The
> - * lower 16-bits, called Minor Error Number, carry the actual code.
> - *
> - * So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
> - */
> -/** Status for a request.
> - * If the request is successfully queued, the driver will return
> - * a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
> - * the driver if the response for request failed to arrive before a
> - * time-out period or if the request processing * got interrupted due to
> - * a signal respectively.
> - */
> -enum {
> - /** A value of 0x00000000 indicates no error i.e. success */
> - LIO_REQUEST_DONE = 0x00000000,
> - /** (Major number: 0x0000; Minor Number: 0x0001) */
> - LIO_REQUEST_PENDING = 0x00000001,
> - LIO_REQUEST_TIMEOUT = 0x00000003,
> -
> -};
> -
> -/*------ Error codes used by firmware (bits 15..0 set by firmware */
> -#define LIO_FIRMWARE_MAJOR_ERROR_CODE 0x0001
> -#define LIO_FIRMWARE_STATUS_CODE(status) \
> - ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
> -
> -/** Initialize the response lists. The number of response lists to create is
> - * given by count.
> - * @param lio_dev - the lio device structure.
> - */
> -void lio_setup_response_list(struct lio_device *lio_dev);
> -
> -/** Check the status of first entry in the ordered list. If the instruction at
> - * that entry finished processing or has timed-out, the entry is cleaned.
> - * @param lio_dev - the lio device structure.
> - * @return 1 if the ordered list is empty, 0 otherwise.
> - */
> -int lio_process_ordered_list(struct lio_device *lio_dev);
> -
> -#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count) \
> - (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
> -
> -static inline void
> -lio_swap_8B_data(uint64_t *data, uint32_t blocks)
> -{
> - while (blocks) {
> - *data = rte_cpu_to_be_64(*data);
> - blocks--;
> - data++;
> - }
> -}
> -
> -static inline uint64_t
> -lio_map_ring(void *buf)
> -{
> - rte_iova_t dma_addr;
> -
> - dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
> -
> - return (uint64_t)dma_addr;
> -}
> -
> -static inline uint64_t
> -lio_map_ring_info(struct lio_droq *droq, uint32_t i)
> -{
> - rte_iova_t dma_addr;
> -
> - dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
> -
> - return (uint64_t)dma_addr;
> -}
> -
> -static inline int
> -lio_opcode_slow_path(union octeon_rh *rh)
> -{
> - uint16_t subcode1, subcode2;
> -
> - subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
> - subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
> -
> - return subcode2 != subcode1;
> -}
> -
> -static inline void
> -lio_add_sg_size(struct lio_sg_entry *sg_entry,
> - uint16_t size, uint32_t pos)
> -{
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - sg_entry->u.size[pos] = size;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - sg_entry->u.size[3 - pos] = size;
> -#endif
> -}
> -
> -/* Macro to increment index.
> - * Index is incremented by count; if the sum exceeds
> - * max, index is wrapped-around to the start.
> - */
> -static inline uint32_t
> -lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
> -{
> - if ((index + count) >= max)
> - index = index + count - max;
> - else
> - index += count;
> -
> - return index;
> -}
> -
> -int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
> - int desc_size, struct rte_mempool *mpool,
> - unsigned int socket_id);
> -uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> - uint16_t budget);
> -void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
> -
> -void lio_delete_sglist(struct lio_instr_queue *txq);
> -int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> - int fw_mapped_iq, int num_descs, unsigned int socket_id);
> -uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
> - uint16_t nb_pkts);
> -int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
> -int lio_setup_iq(struct lio_device *lio_dev, int q_index,
> - union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
> - unsigned int socket_id);
> -int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
> -void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
> -/** Setup instruction queue zero for the device
> - * @param lio_dev which lio device to setup
> - *
> - * @return 0 if success. -1 if fails
> - */
> -int lio_setup_instr_queue0(struct lio_device *lio_dev);
> -void lio_free_instr_queue0(struct lio_device *lio_dev);
> -void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
> -#endif /* _LIO_RXTX_H_ */
> diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
> deleted file mode 100644
> index 10270c560e..0000000000
> --- a/drivers/net/liquidio/lio_struct.h
> +++ /dev/null
> @@ -1,661 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_STRUCT_H_
> -#define _LIO_STRUCT_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -#include <sys/queue.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_atomic.h>
> -
> -#include "lio_hw_defs.h"
> -
> -struct lio_stailq_node {
> - STAILQ_ENTRY(lio_stailq_node) entries;
> -};
> -
> -STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
> -
> -struct lio_version {
> - uint16_t major;
> - uint16_t minor;
> - uint16_t micro;
> - uint16_t reserved;
> -};
> -
> -/** Input Queue statistics. Each input queue has four stats fields. */
> -struct lio_iq_stats {
> - uint64_t instr_posted; /**< Instructions posted to this queue. */
> - uint64_t instr_processed; /**< Instructions processed in this queue. */
> - uint64_t instr_dropped; /**< Instructions that could not be processed */
> - uint64_t bytes_sent; /**< Bytes sent through this queue. */
> - uint64_t tx_done; /**< Num of packets sent to network. */
> - uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
> - uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
> - uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
> -};
> -
> -/** Output Queue statistics. Each output queue has four stats fields. */
> -struct lio_droq_stats {
> - /** Number of packets received in this queue. */
> - uint64_t pkts_received;
> -
> - /** Bytes received by this queue. */
> - uint64_t bytes_received;
> -
> - /** Packets dropped due to no memory available. */
> - uint64_t dropped_nomem;
> -
> - /** Packets dropped due to large number of pkts to process. */
> - uint64_t dropped_toomany;
> -
> - /** Number of packets sent to stack from this queue. */
> - uint64_t rx_pkts_received;
> -
> - /** Number of Bytes sent to stack from this queue. */
> - uint64_t rx_bytes_received;
> -
> - /** Num of Packets dropped due to receive path failures. */
> - uint64_t rx_dropped;
> -
> - /** Num of vxlan packets received; */
> - uint64_t rx_vxlan;
> -
> - /** Num of failures of rte_pktmbuf_alloc() */
> - uint64_t rx_alloc_failure;
> -
> -};
> -
> -/** The Descriptor Ring Output Queue structure.
> - * This structure has all the information required to implement a
> - * DROQ.
> - */
> -struct lio_droq {
> - /** A spinlock to protect access to this ring. */
> - rte_spinlock_t lock;
> -
> - uint32_t q_no;
> -
> - uint32_t pkt_count;
> -
> - struct lio_device *lio_dev;
> -
> - /** The 8B aligned descriptor ring starts at this address. */
> - struct lio_droq_desc *desc_ring;
> -
> - /** Index in the ring where the driver should read the next packet */
> - uint32_t read_idx;
> -
> - /** Index in the ring where Octeon will write the next packet */
> - uint32_t write_idx;
> -
> - /** Index in the ring where the driver will refill the descriptor's
> - * buffer
> - */
> - uint32_t refill_idx;
> -
> - /** Packets pending to be processed */
> - rte_atomic64_t pkts_pending;
> -
> - /** Number of descriptors in this ring. */
> - uint32_t nb_desc;
> -
> - /** The number of descriptors pending refill. */
> - uint32_t refill_count;
> -
> - uint32_t refill_threshold;
> -
> - /** The 8B aligned info ptrs begin from this address. */
> - struct lio_droq_info *info_list;
> -
> - /** The receive buffer list. This list has the virtual addresses of the
> - * buffers.
> - */
> - struct lio_recv_buffer *recv_buf_list;
> -
> - /** The size of each buffer pointed by the buffer pointer. */
> - uint32_t buffer_size;
> -
> - /** Pointer to the mapped packet credit register.
> - * Host writes number of info/buffer ptrs available to this register
> - */
> - void *pkts_credit_reg;
> -
> - /** Pointer to the mapped packet sent register.
> - * Octeon writes the number of packets DMA'ed to host memory
> - * in this register.
> - */
> - void *pkts_sent_reg;
> -
> - /** Statistics for this DROQ. */
> - struct lio_droq_stats stats;
> -
> - /** DMA mapped address of the DROQ descriptor ring. */
> - size_t desc_ring_dma;
> -
> - /** Info ptr list are allocated at this virtual address. */
> - size_t info_base_addr;
> -
> - /** DMA mapped address of the info list */
> - size_t info_list_dma;
> -
> - /** Allocated size of info list. */
> - uint32_t info_alloc_size;
> -
> - /** Memory zone **/
> - const struct rte_memzone *desc_ring_mz;
> - const struct rte_memzone *info_mz;
> - struct rte_mempool *mpool;
> -};
> -
> -/** Receive Header */
> -union octeon_rh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t rh64;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t reserved : 17;
> - uint64_t ossp : 32; /** opcode/subcode specific parameters */
> - } r;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t extra : 28;
> - uint64_t vlan : 12;
> - uint64_t priority : 3;
> - uint64_t csum_verified : 3; /** checksum verified. */
> - uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
> - uint64_t encap_on : 1;
> - uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> - } r_dh;
> - struct {
> - uint64_t opcode : 4;
> - uint64_t subcode : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t reserved : 8;
> - uint64_t extra : 25;
> - uint64_t gmxport : 16;
> - } r_nic_info;
> -#else
> - uint64_t rh64;
> - struct {
> - uint64_t ossp : 32; /** opcode/subcode specific parameters */
> - uint64_t reserved : 17;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r;
> - struct {
> - uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> - uint64_t encap_on : 1;
> - uint64_t has_hwtstamp : 1; /** 1 = has hwtstamp */
> - uint64_t csum_verified : 3; /** checksum verified. */
> - uint64_t priority : 3;
> - uint64_t vlan : 12;
> - uint64_t extra : 28;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r_dh;
> - struct {
> - uint64_t gmxport : 16;
> - uint64_t extra : 25;
> - uint64_t reserved : 8;
> - uint64_t len : 3; /** additional 64-bit words */
> - uint64_t subcode : 8;
> - uint64_t opcode : 4;
> - } r_nic_info;
> -#endif
> -};
> -
> -#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
> -
> -/** The txpciq info passed to host from the firmware */
> -union octeon_txpciq {
> - uint64_t txpciq64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t q_no : 8;
> - uint64_t port : 8;
> - uint64_t pkind : 6;
> - uint64_t use_qpg : 1;
> - uint64_t qpg : 11;
> - uint64_t aura_num : 10;
> - uint64_t reserved : 20;
> -#else
> - uint64_t reserved : 20;
> - uint64_t aura_num : 10;
> - uint64_t qpg : 11;
> - uint64_t use_qpg : 1;
> - uint64_t pkind : 6;
> - uint64_t port : 8;
> - uint64_t q_no : 8;
> -#endif
> - } s;
> -};
> -
> -/** The instruction (input) queue.
> - * The input queue is used to post raw (instruction) mode data or packet
> - * data to Octeon device from the host. Each input queue for
> - * a LIO device has one such structure to represent it.
> - */
> -struct lio_instr_queue {
> - /** A spinlock to protect access to the input ring. */
> - rte_spinlock_t lock;
> -
> - rte_spinlock_t post_lock;
> -
> - struct lio_device *lio_dev;
> -
> - uint32_t pkt_in_done;
> -
> - rte_atomic64_t iq_flush_running;
> -
> - /** Flag that indicates if the queue uses 64 byte commands. */
> - uint32_t iqcmd_64B:1;
> -
> - /** Queue info. */
> - union octeon_txpciq txpciq;
> -
> - uint32_t rsvd:17;
> -
> - uint32_t status:8;
> -
> - /** Number of descriptors in this ring. */
> - uint32_t nb_desc;
> -
> - /** Index in input ring where the driver should write the next packet */
> - uint32_t host_write_index;
> -
> - /** Index in input ring where Octeon is expected to read the next
> - * packet.
> - */
> - uint32_t lio_read_index;
> -
> - /** This index aids in finding the window in the queue where Octeon
> - * has read the commands.
> - */
> - uint32_t flush_index;
> -
> - /** This field keeps track of the instructions pending in this queue. */
> - rte_atomic64_t instr_pending;
> -
> - /** Pointer to the Virtual Base addr of the input ring. */
> - uint8_t *base_addr;
> -
> - struct lio_request_list *request_list;
> -
> - /** Octeon doorbell register for the ring. */
> - void *doorbell_reg;
> -
> - /** Octeon instruction count register for this ring. */
> - void *inst_cnt_reg;
> -
> - /** Number of instructions pending to be posted to Octeon. */
> - uint32_t fill_cnt;
> -
> - /** Statistics for this input queue. */
> - struct lio_iq_stats stats;
> -
> - /** DMA mapped base address of the input descriptor ring. */
> - uint64_t base_addr_dma;
> -
> - /** Application context */
> - void *app_ctx;
> -
> - /* network stack queue index */
> - int q_index;
> -
> - /* Memory zone */
> - const struct rte_memzone *iq_mz;
> -};
> -
> -/** This structure is used by driver to store information required
> - * to free the mbuff when the packet has been fetched by Octeon.
> - * Bytes offset below assume worst-case of a 64-bit system.
> - */
> -struct lio_buf_free_info {
> - /** Bytes 1-8. Pointer to network device private structure. */
> - struct lio_device *lio_dev;
> -
> - /** Bytes 9-16. Pointer to mbuff. */
> - struct rte_mbuf *mbuf;
> -
> - /** Bytes 17-24. Pointer to gather list. */
> - struct lio_gather *g;
> -
> - /** Bytes 25-32. Physical address of mbuf->data or gather list. */
> - uint64_t dptr;
> -
> - /** Bytes 33-47. Piggybacked soft command, if any */
> - struct lio_soft_command *sc;
> -
> - /** Bytes 48-63. iq no */
> - uint64_t iq_no;
> -};
> -
> -/* The Scatter-Gather List Entry. The scatter or gather component used with
> - * input instruction has this format.
> - */
> -struct lio_sg_entry {
> - /** The first 64 bit gives the size of data in each dptr. */
> - union {
> - uint16_t size[4];
> - uint64_t size64;
> - } u;
> -
> - /** The 4 dptr pointers for this entry. */
> - uint64_t ptr[4];
> -};
> -
> -#define LIO_SG_ENTRY_SIZE (sizeof(struct lio_sg_entry))
> -
> -/** Structure of a node in list of gather components maintained by
> - * driver for each network device.
> - */
> -struct lio_gather {
> - /** List manipulation. Next and prev pointers. */
> - struct lio_stailq_node list;
> -
> - /** Size of the gather component at sg in bytes. */
> - int sg_size;
> -
> - /** Number of bytes that sg was adjusted to make it 8B-aligned. */
> - int adjust;
> -
> - /** Gather component that can accommodate max sized fragment list
> - * received from the IP layer.
> - */
> - struct lio_sg_entry *sg;
> -};
> -
> -struct lio_rss_ctx {
> - uint16_t hash_key_size;
> - uint8_t hash_key[LIO_RSS_MAX_KEY_SZ];
> - /* Ideally a factor of number of queues */
> - uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
> - uint8_t itable_size;
> - uint8_t ip;
> - uint8_t tcp_hash;
> - uint8_t ipv6;
> - uint8_t ipv6_tcp_hash;
> - uint8_t ipv6_ex;
> - uint8_t ipv6_tcp_ex_hash;
> - uint8_t hash_disable;
> -};
> -
> -struct lio_io_enable {
> - uint64_t iq;
> - uint64_t oq;
> - uint64_t iq64B;
> -};
> -
> -struct lio_fn_list {
> - void (*setup_iq_regs)(struct lio_device *, uint32_t);
> - void (*setup_oq_regs)(struct lio_device *, uint32_t);
> -
> - int (*setup_mbox)(struct lio_device *);
> - void (*free_mbox)(struct lio_device *);
> -
> - int (*setup_device_regs)(struct lio_device *);
> - int (*enable_io_queues)(struct lio_device *);
> - void (*disable_io_queues)(struct lio_device *);
> -};
> -
> -struct lio_pf_vf_hs_word {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> - /** PKIND value assigned for the DPI interface */
> - uint64_t pkind : 8;
> -
> - /** OCTEON core clock multiplier */
> - uint64_t core_tics_per_us : 16;
> -
> - /** OCTEON coprocessor clock multiplier */
> - uint64_t coproc_tics_per_us : 16;
> -
> - /** app that currently running on OCTEON */
> - uint64_t app_mode : 8;
> -
> - /** RESERVED */
> - uint64_t reserved : 16;
> -
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> - /** RESERVED */
> - uint64_t reserved : 16;
> -
> - /** app that currently running on OCTEON */
> - uint64_t app_mode : 8;
> -
> - /** OCTEON coprocessor clock multiplier */
> - uint64_t coproc_tics_per_us : 16;
> -
> - /** OCTEON core clock multiplier */
> - uint64_t core_tics_per_us : 16;
> -
> - /** PKIND value assigned for the DPI interface */
> - uint64_t pkind : 8;
> -#endif
> -};
> -
> -struct lio_sriov_info {
> - /** Number of rings assigned to VF */
> - uint32_t rings_per_vf;
> -
> - /** Number of VF devices enabled */
> - uint32_t num_vfs;
> -};
> -
> -/* Head of a response list */
> -struct lio_response_list {
> - /** List structure to add delete pending entries to */
> - struct lio_stailq_head head;
> -
> - /** A lock for this response list */
> - rte_spinlock_t lock;
> -
> - rte_atomic64_t pending_req_count;
> -};
> -
> -/* Structure to define the configuration attributes for each Input queue. */
> -struct lio_iq_config {
> - /* Max number of IQs available */
> - uint8_t max_iqs;
> -
> - /** Pending list size (usually set to the sum of the size of all Input
> - * queues)
> - */
> - uint32_t pending_list_size;
> -
> - /** Command size - 32 or 64 bytes */
> - uint32_t instr_type;
> -};
> -
> -/* Structure to define the configuration attributes for each Output queue. */
> -struct lio_oq_config {
> - /* Max number of OQs available */
> - uint8_t max_oqs;
> -
> - /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
> - uint32_t info_ptr;
> -
> - /** The number of buffers that were consumed during packet processing by
> - * the driver on this Output queue before the driver attempts to
> - * replenish the descriptor ring with new buffers.
> - */
> - uint32_t refill_threshold;
> -};
> -
> -/* Structure to define the configuration. */
> -struct lio_config {
> - uint16_t card_type;
> - const char *card_name;
> -
> - /** Input Queue attributes. */
> - struct lio_iq_config iq;
> -
> - /** Output Queue attributes. */
> - struct lio_oq_config oq;
> -
> - int num_nic_ports;
> -
> - int num_def_tx_descs;
> -
> - /* Num of desc for rx rings */
> - int num_def_rx_descs;
> -
> - int def_rx_buf_size;
> -};
> -
> -/** Status of a RGMII Link on Octeon as seen by core driver. */
> -union octeon_link_status {
> - uint64_t link_status64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t duplex : 8;
> - uint64_t mtu : 16;
> - uint64_t speed : 16;
> - uint64_t link_up : 1;
> - uint64_t autoneg : 1;
> - uint64_t if_mode : 5;
> - uint64_t pause : 1;
> - uint64_t flashing : 1;
> - uint64_t reserved : 15;
> -#else
> - uint64_t reserved : 15;
> - uint64_t flashing : 1;
> - uint64_t pause : 1;
> - uint64_t if_mode : 5;
> - uint64_t autoneg : 1;
> - uint64_t link_up : 1;
> - uint64_t speed : 16;
> - uint64_t mtu : 16;
> - uint64_t duplex : 8;
> -#endif
> - } s;
> -};
> -
> -/** The rxpciq info passed to host from the firmware */
> -union octeon_rxpciq {
> - uint64_t rxpciq64;
> -
> - struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t q_no : 8;
> - uint64_t reserved : 56;
> -#else
> - uint64_t reserved : 56;
> - uint64_t q_no : 8;
> -#endif
> - } s;
> -};
> -
> -/** Information for a OCTEON ethernet interface shared between core & host. */
> -struct octeon_link_info {
> - union octeon_link_status link;
> - uint64_t hw_addr;
> -
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> - uint64_t gmxport : 16;
> - uint64_t macaddr_is_admin_assigned : 1;
> - uint64_t vlan_is_admin_assigned : 1;
> - uint64_t rsvd : 30;
> - uint64_t num_txpciq : 8;
> - uint64_t num_rxpciq : 8;
> -#else
> - uint64_t num_rxpciq : 8;
> - uint64_t num_txpciq : 8;
> - uint64_t rsvd : 30;
> - uint64_t vlan_is_admin_assigned : 1;
> - uint64_t macaddr_is_admin_assigned : 1;
> - uint64_t gmxport : 16;
> -#endif
> -
> - union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
> - union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
> -};
> -
> -/* ----------------------- THE LIO DEVICE --------------------------- */
> -/** The lio device.
> - * Each lio device has this structure to represent all its
> - * components.
> - */
> -struct lio_device {
> - /** PCI device pointer */
> - struct rte_pci_device *pci_dev;
> -
> - /** Octeon Chip type */
> - uint16_t chip_id;
> - uint16_t pf_num;
> - uint16_t vf_num;
> -
> - /** This device's PCIe port used for traffic. */
> - uint16_t pcie_port;
> -
> - /** The state of this device */
> - rte_atomic64_t status;
> -
> - uint8_t intf_open;
> -
> - struct octeon_link_info linfo;
> -
> - uint8_t *hw_addr;
> -
> - struct lio_fn_list fn_list;
> -
> - uint32_t num_iqs;
> -
> - /** Guards each glist */
> - rte_spinlock_t *glist_lock;
> - /** Array of gather component linked lists */
> - struct lio_stailq_head *glist_head;
> -
> - /* The pool containing pre allocated buffers used for soft commands */
> - struct rte_mempool *sc_buf_pool;
> -
> - /** The input instruction queues */
> - struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
> -
> - /** The singly-linked tail queues of instruction response */
> - struct lio_response_list response_list;
> -
> - uint32_t num_oqs;
> -
> - /** The DROQ output queues */
> - struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
> -
> - struct lio_io_enable io_qmask;
> -
> - struct lio_sriov_info sriov_info;
> -
> - struct lio_pf_vf_hs_word pfvf_hsword;
> -
> - /** Mail Box details of each lio queue. */
> - struct lio_mbox **mbox;
> -
> - char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
> -
> - const struct lio_config *default_config;
> -
> - struct rte_eth_dev *eth_dev;
> -
> - uint64_t ifflags;
> - uint8_t max_rx_queues;
> - uint8_t max_tx_queues;
> - uint8_t nb_rx_queues;
> - uint8_t nb_tx_queues;
> - uint8_t port_configured;
> - struct lio_rss_ctx rss_state;
> - uint16_t port_id;
> - char firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -#endif /* _LIO_STRUCT_H_ */
> diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
> deleted file mode 100644
> index ebadbf3dea..0000000000
> --- a/drivers/net/liquidio/meson.build
> +++ /dev/null
> @@ -1,16 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> -
> -if is_windows
> - build = false
> - reason = 'not supported on Windows'
> - subdir_done()
> -endif
> -
> -sources = files(
> - 'base/lio_23xx_vf.c',
> - 'base/lio_mbox.c',
> - 'lio_ethdev.c',
> - 'lio_rxtx.c',
> -)
> -includes += include_directories('base')
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index b1df17ce8c..f68bbc27a7 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -36,7 +36,6 @@ drivers = [
> 'ipn3ke',
> 'ixgbe',
> 'kni',
> - 'liquidio',
> 'mana',
> 'memif',
> 'mlx4',
> --
> 2.40.1
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] net/bonding: replace master/slave to main/member
@ 2023-05-17 14:52 1% ` Stephen Hemminger
2023-05-18 6:32 1% ` [PATCH v2] " Chaoyong He
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-17 14:52 UTC (permalink / raw)
To: Chaoyong He; +Cc: dev, oss-drivers, niklas.soderlund, Long Wu, James Hershaw
[-- Attachment #1: Type: text/plain, Size: 2214 bytes --]
On Wed, 17 May 2023 14:59:05 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:
> This patch replaces the usage of the words 'master/slave' with the more
> appropriate words 'main/member' in the bonding PMD as well as in its docs
> and examples. The test app and testpmd were also modified to use the
> new wording.
>
> The bonding PMD's public API was modified according to the changes
> in wording:
> rte_eth_bond_8023ad_slave_info is now called
> rte_eth_bond_8023ad_member_info,
> rte_eth_bond_active_slaves_get is now called
> rte_eth_bond_active_members_get,
> rte_eth_bond_slave_add is now called
> rte_eth_bond_member_add,
> rte_eth_bond_slave_remove is now called
> rte_eth_bond_member_remove,
> rte_eth_bond_slaves_get is now called
> rte_eth_bond_members_get.
>
> Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
> RTE_ETH_DEV_BONDED_MEMBER.
>
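> A minimal migration sketch for an application using the renamed calls
> (illustrative only, not part of this patch; bond_port_id and
> member_port_id are hypothetical variables):
>
>     uint16_t members[RTE_MAX_ETHPORTS];
>
>     /* old API names */
>     rte_eth_bond_slave_add(bond_port_id, member_port_id);
>     rte_eth_bond_active_slaves_get(bond_port_id, members, RTE_MAX_ETHPORTS);
>
>     /* new API names after this patch */
>     rte_eth_bond_member_add(bond_port_id, member_port_id);
>     rte_eth_bond_active_members_get(bond_port_id, members, RTE_MAX_ETHPORTS);
>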
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: James Hershaw <james.hershaw@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
This looks great.
I had started on this and chose the new names as parent and child,
but that choice was arbitrary. I did some background research and
found the following:
============ ================== ============== ===============
Origin       Feature Name       Aggregate Name Device Name
============ ================== ============== ===============
IEEE 802.1AX Link Aggregation   aggregator     port
Linux        Bonding            master         slave
FreeBSD      Link Aggregate     lagg           laggport
Windows      Teaming            team
OpenVswitch  Bonding            bond           members
Solaris      Link Aggregate     aggregation    datalink
Cisco        EtherChannel       group          channel
Juniper      Aggregate Ethernet lag interface  lag link
Arista       Port Channel       group          channel
SONiC        LAG                portchannel    member
============ ================== ============== ===============
You also need to modify how this is done since it ends up
being an API change.
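
One way to handle that while keeping source compatibility is to leave the
old symbol behind as a deprecated static inline wrapper in the public
header. This is only a sketch of the idea (using the existing
__rte_deprecated marker), not code taken from the patch below:

    /* Old name kept as a build-time-deprecated alias for the new API. */
    __rte_deprecated
    static inline int
    rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
    {
        return rte_eth_bond_child_add(bonded_port_id, slave_port_id);
    }
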
My version of the patch had some of that; if you want, here it is.
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0005-bonding-replace-use-of-slave-device-with-child-devic.patch --]
[-- Type: text/x-patch, Size: 580804 bytes --]
From 25aea59871533585bbaa18bdf7757e48aecb5380 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 30 Mar 2023 10:24:03 -0700
Subject: [PATCH 05/12] bonding: replace use of slave device with child device
The term slave is inherited from the Linux bonding device and does not
conform to the Linux Foundation Non-Inclusive Naming policy.
Other networking products, operating systems, and 802 standards
do not use the terms master or slave.
For DPDK, change to using the terms parent and child when
referring to devices that are managed by a bond device.
Mark the old visible APIs as deprecated and remove
them from the ABI.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test-pmd/testpmd.c | 112 +-
app/test-pmd/testpmd.h | 8 +-
app/test/test_link_bonding.c | 2724 ++++++++---------
app/test/test_link_bonding_mode4.c | 584 ++--
| 166 +-
doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
doc/guides/nics/bnxt.rst | 4 +-
.../link_bonding_poll_mode_drv_lib.rst | 222 +-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +-
drivers/net/bonding/bonding_testpmd.c | 178 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
drivers/net/bonding/eth_bond_private.h | 116 +-
drivers/net/bonding/rte_eth_bond.h | 102 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 370 +--
drivers/net/bonding/rte_eth_bond_8023ad.h | 66 +-
drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
drivers/net/bonding/rte_eth_bond_api.c | 464 +--
drivers/net/bonding/rte_eth_bond_args.c | 32 +-
drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 1368 ++++-----
drivers/net/bonding/version.map | 10 +-
examples/bond/main.c | 40 +-
lib/ethdev/rte_ethdev.h | 6 +-
24 files changed, 3391 insertions(+), 3367 deletions(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f9252395..64465d0f151d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_child_port_status(portid_t bond_pid, bool is_stop)
{
#ifdef RTE_NET_BOND
- portid_t slave_pids[RTE_MAX_ETHPORTS];
+ portid_t child_pids[RTE_MAX_ETHPORTS];
struct rte_port *port;
- int num_slaves;
- portid_t slave_pid;
+ int num_children;
+ portid_t child_pid;
int i;
- num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+ num_children = rte_eth_bond_children_get(bond_pid, child_pids,
RTE_MAX_ETHPORTS);
- if (num_slaves < 0) {
- fprintf(stderr, "Failed to get slave list for port = %u\n",
+ if (num_children < 0) {
+ fprintf(stderr, "Failed to get child list for port = %u\n",
bond_pid);
- return num_slaves;
+ return num_children;
}
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- port = &ports[slave_pid];
+ for (i = 0; i < num_children; i++) {
+ child_pid = child_pids[i];
+ port = &ports[child_pid];
port->port_status =
is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Starting a bonded port also starts all slaves under the bonded
+ * Starting a bonded port also starts all children under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these children.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, false);
+ return change_bonding_child_port_status(port_id, false);
}
return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Stopping a bonded port also stops all slaves under the bonded
+ * Stopping a bonded port also stops all children under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these children.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, true);
+ return change_bonding_child_port_status(port_id, true);
}
return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
port = &ports[pi];
/* Check if there is a port which is not started */
if ((port->port_status != RTE_PORT_STARTED) &&
- (port->slave_flag == 0))
+ (port->child_flag == 0))
return 0;
}
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
struct rte_port *port = &ports[port_id];
if ((port->port_status != RTE_PORT_STOPPED) &&
- (port->slave_flag == 0))
+ (port->child_flag == 0))
return 0;
return 1;
}
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
/*
* Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no child is added. And its capability
+ * will be updated when add a new child device. So adding a child device need
* to update the port configurations of bonding device.
*/
static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
continue;
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_child(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_child(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
}
static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_child_device(portid_t *child_pids, uint16_t num_children)
{
struct rte_port *port;
- portid_t slave_pid;
+ portid_t child_pid;
uint16_t i;
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- if (port_is_started(slave_pid) == 1) {
- if (rte_eth_dev_stop(slave_pid) != 0)
+ for (i = 0; i < num_children; i++) {
+ child_pid = child_pids[i];
+ if (port_is_started(child_pid) == 1) {
+ if (rte_eth_dev_stop(child_pid) != 0)
fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
- slave_pid);
+ child_pid);
- port = &ports[slave_pid];
+ port = &ports[child_pid];
port->port_status = RTE_PORT_STOPPED;
}
- clear_port_slave_flag(slave_pid);
+ clear_port_child_flag(child_pid);
- /* Close slave device when testpmd quit or is killed. */
+ /* Close child device when testpmd quit or is killed. */
if (cl_quit == 1 || f_quit == 1)
- rte_eth_dev_close(slave_pid);
+ rte_eth_dev_close(child_pid);
}
}
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
{
portid_t pi;
struct rte_port *port;
- portid_t slave_pids[RTE_MAX_ETHPORTS];
- int num_slaves = 0;
+ portid_t child_pids[RTE_MAX_ETHPORTS];
+ int num_children = 0;
if (port_id_is_invalid(pid, ENABLED_WARN))
return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_child(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
flush_port_owned_resources(pi);
#ifdef RTE_NET_BOND
if (port->bond_flag == 1)
- num_slaves = rte_eth_bond_slaves_get(pi,
- slave_pids, RTE_MAX_ETHPORTS);
+ num_children = rte_eth_bond_children_get(pi,
+ child_pids, RTE_MAX_ETHPORTS);
#endif
rte_eth_dev_close(pi);
/*
- * If this port is bonded device, all slaves under the
+ * If this port is bonded device, all children under the
* device need to be removed or closed.
*/
- if (port->bond_flag == 1 && num_slaves > 0)
- clear_bonding_slave_device(slave_pids,
- num_slaves);
+ if (port->bond_flag == 1 && num_children > 0)
+ clear_bonding_child_device(child_pids,
+ num_children);
}
free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_child(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
}
}
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_child_flag(portid_t child_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 1;
+ port = &ports[child_pid];
+ port->child_flag = 1;
}
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_child_flag(portid_t child_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 0;
+ port = &ports[child_pid];
+ port->child_flag = 0;
}
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_child(portid_t child_pid)
{
struct rte_port *port;
struct rte_eth_dev_info dev_info;
int ret;
- port = &ports[slave_pid];
- ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+ port = &ports[child_pid];
+ ret = eth_dev_info_get_print_err(child_pid, &dev_info);
if (ret != 0) {
TESTPMD_LOG(ERR,
"Failed to get device info for port id %d,"
- "cannot determine if the port is a bonded slave",
- slave_pid);
+ "cannot determine if the port is a bonded child",
+ child_pid);
return 0;
}
- if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+ if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_CHILD) || (port->child_flag == 1))
return 1;
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3c5..51cf600dc49e 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
queueid_t queue_nb; /**< nb. of queues for flow rules */
uint32_t queue_sz; /**< size of a queue for flow rules */
- uint8_t slave_flag : 1, /**< bonding slave port */
+ uint8_t child_flag : 1, /**< bonding child port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before forward */
update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
void dev_set_link_up(portid_t pid);
void dev_set_link_down(portid_t pid);
void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_child_flag(portid_t child_pid);
+void clear_port_child_flag(portid_t child_pid);
+uint8_t port_is_bonding_child(portid_t child_pid);
int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2b3..a0e1e8e833fe 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
#define INVALID_BONDING_MODE (-1)
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t child_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
struct link_bonding_unittest_params {
int16_t bonded_port_id;
- int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
- uint16_t bonded_slave_count;
+ int16_t child_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+ uint16_t bonded_child_count;
uint8_t bonding_mode;
uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
struct rte_mempool *mbuf_pool;
- struct rte_ether_addr *default_slave_mac;
+ struct rte_ether_addr *default_child_mac;
struct rte_ether_addr *default_bonded_mac;
/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
static struct link_bonding_unittest_params default_params = {
.bonded_port_id = -1,
- .slave_port_ids = { -1 },
- .bonded_slave_count = 0,
+ .child_port_ids = { -1 },
+ .bonded_child_count = 0,
.bonding_mode = BONDING_MODE_ROUND_ROBIN,
.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
.mbuf_pool = NULL,
- .default_slave_mac = (struct rte_ether_addr *)slave_mac,
+ .default_child_mac = (struct rte_ether_addr *)child_mac,
.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
return 0;
}
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int children_initialized;
+static int mac_children_initialized;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
test_setup(void)
{
int i, nb_mbuf_per_pool;
- struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+ struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)child_mac;
/* Allocate ethernet packet header with space for VLAN header */
if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
}
/* Create / Initialize virtual eth devs */
- if (!slaves_initialized) {
+ if (!children_initialized) {
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
@@ -243,16 +243,16 @@ test_setup(void)
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
- test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+ test_params->child_port_ids[i] = virtual_ethdev_create(pmd_name,
mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+ TEST_ASSERT(test_params->child_port_ids[i] >= 0,
"Failed to create virtual virtual ethdev %s", pmd_name);
TEST_ASSERT_SUCCESS(configure_ethdev(
- test_params->slave_port_ids[i], 1, 0),
+ test_params->child_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s", pmd_name);
}
- slaves_initialized = 1;
+ children_initialized = 1;
}
return 0;
@@ -261,9 +261,9 @@ test_setup(void)
static int
test_create_bonded_device(void)
{
- int current_slave_count;
+ int current_child_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
/* Don't try to recreate bonded device if re-running test suite*/
if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
test_params->bonded_port_id, test_params->bonding_mode);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_child_count, 0,
+ "Number of children %d is great than expected %d.",
+ current_child_count, 0);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+ current_child_count = rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_child_count, 0,
+ "Number of active children %d is great than expected %d.",
+ current_child_count, 0);
return 0;
}
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
}
static int
-test_add_slave_to_bonded_device(void)
+test_add_child_to_bonded_device(void)
{
- int current_slave_count;
+ int current_child_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave (%d) to bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+ test_params->child_port_ids[test_params->bonded_child_count]),
+ "Failed to add child (%d) to bonded port (%d).",
+ test_params->child_port_ids[test_params->bonded_child_count],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
- "Number of slaves (%d) is greater than expected (%d).",
- current_slave_count, test_params->bonded_slave_count + 1);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count + 1,
+ "Number of children (%d) is greater than expected (%d).",
+ current_child_count, test_params->bonded_child_count + 1);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d).\n",
- current_slave_count, 0);
+ current_child_count = rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, 0,
+ "Number of active children (%d) is not as expected (%d).\n",
+ current_child_count, 0);
- test_params->bonded_slave_count++;
+ test_params->bonded_child_count++;
return 0;
}
static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_child_to_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_child_add(test_params->bonded_port_id + 5,
+ test_params->child_port_ids[test_params->bonded_child_count]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_child_add(test_params->child_port_ids[0],
+ test_params->child_port_ids[test_params->bonded_child_count]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
static int
-test_remove_slave_from_bonded_device(void)
+test_remove_child_from_bonded_device(void)
{
- int current_slave_count;
+ int current_child_count;
struct rte_ether_addr read_mac_addr, *mac_addr;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count-1]),
- "Failed to remove slave %d from bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(test_params->bonded_port_id,
+ test_params->child_port_ids[test_params->bonded_child_count-1]),
+ "Failed to remove child %d from bonded port (%d).",
+ test_params->child_port_ids[test_params->bonded_child_count-1],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
- "Number of slaves (%d) is great than expected (%d).\n",
- current_slave_count, test_params->bonded_slave_count - 1);
+ TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count - 1,
+ "Number of children (%d) is great than expected (%d).\n",
+ current_child_count, test_params->bonded_child_count - 1);
- mac_addr = (struct rte_ether_addr *)slave_mac;
+ mac_addr = (struct rte_ether_addr *)child_mac;
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
- test_params->bonded_slave_count-1;
+ test_params->bonded_child_count-1;
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ test_params->child_port_ids[test_params->bonded_child_count-1],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->child_port_ids[test_params->bonded_child_count-1]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->child_port_ids[test_params->bonded_child_count-1]);
virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
0);
- test_params->bonded_slave_count--;
+ test_params->bonded_child_count--;
return 0;
}
static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_child_from_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+ TEST_ASSERT_FAIL(rte_eth_bond_child_remove(
test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ test_params->child_port_ids[test_params->bonded_child_count - 1]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
- test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ TEST_ASSERT_FAIL(rte_eth_bond_child_remove(
+ test_params->child_port_ids[0],
+ test_params->child_port_ids[test_params->bonded_child_count - 1]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
static int bonded_id = 2;
static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_child_to_bonded_device(void)
{
- int port_id, current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int port_id, current_child_count;
+ uint16_t children[RTE_MAX_ETHPORTS];
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- test_add_slave_to_bonded_device();
+ test_add_child_to_bonded_device();
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 1,
- "Number of slaves (%d) is not that expected (%d).",
- current_slave_count, 1);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, 1,
+ "Number of children (%d) is not that expected (%d).",
+ current_child_count, 1);
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
rte_socket_id());
TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
- TEST_ASSERT(rte_eth_bond_slave_add(port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+ TEST_ASSERT(rte_eth_bond_child_add(port_id,
+ test_params->child_port_ids[test_params->bonded_child_count - 1])
< 0,
- "Added slave (%d) to bonded port (%d) unexpectedly.",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ "Added child (%d) to bonded port (%d) unexpectedly.",
+ test_params->child_port_ids[test_params->bonded_child_count-1],
port_id);
- return test_remove_slave_from_bonded_device();
+ return test_remove_child_from_bonded_device();
}
static int
-test_get_slaves_from_bonded_device(void)
+test_get_children_from_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_child_count;
+ uint16_t children[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child to bonded device");
/* Invalid port id */
- current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+ current_child_count = rte_eth_bond_children_get(INVALID_PORT_ID, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ TEST_ASSERT(current_child_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_child_count = rte_eth_bond_active_children_get(INVALID_PORT_ID,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_child_count < 0,
"Invalid port id unexpectedly succeeded");
- /* Invalid slaves pointer */
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+ /* Invalid children pointer */
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_child_count < 0,
+ "Invalid child array unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
+ current_child_count = rte_eth_bond_active_children_get(
test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_child_count < 0,
+ "Invalid child array unexpectedly succeeded");
/* non bonded device*/
- current_slave_count = rte_eth_bond_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_child_count = rte_eth_bond_children_get(
+ test_params->child_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_child_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_child_count = rte_eth_bond_active_children_get(
+ test_params->child_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_child_count < 0,
"Invalid port id unexpectedly succeeded");
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+ "Failed to remove children from bonded device");
return 0;
}
static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_children_to_from_bonded_device(void)
{
int i;
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child to bonded device");
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+ "Failed to remove children from bonded device");
return 0;
}
static void
-enable_bonded_slaves(void)
+enable_bonded_children(void)
{
int i;
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ virtual_ethdev_tx_burst_fn_set_success(test_params->child_port_ids[i],
1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->child_port_ids[i], 1);
}
}
@@ -556,34 +556,34 @@ test_start_bonded_device(void)
{
struct rte_eth_link link_status;
- int current_slave_count, current_bonding_mode, primary_port;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_child_count, current_bonding_mode, primary_port;
+ uint16_t children[RTE_MAX_ETHPORTS];
int retval;
- /* Add slave to bonded device*/
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ /* Add child to bonded device */
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child to bonded device");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
/* Change link status of virtual pmd so it will be added to the active
- * slave list of the bonded device*/
+ * child list of the bonded device */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+ test_params->child_port_ids[test_params->bonded_child_count-1], 1);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+ "Number of children (%d) is not expected value (%d).",
+ current_child_count, test_params->bonded_child_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_child_count = rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+ "Number of active children (%d) is not expected value (%d).",
+ current_child_count, test_params->bonded_child_count);
current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +591,9 @@ test_start_bonded_device(void)
current_bonding_mode, test_params->bonding_mode);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
"Primary port (%d) is not expected value (%d).",
- primary_port, test_params->slave_port_ids[0]);
+ primary_port, test_params->child_port_ids[0]);
retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
TEST_ASSERT(retval >= 0,
@@ -609,8 +609,8 @@ test_start_bonded_device(void)
static int
test_stop_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_child_count;
+ uint16_t children[RTE_MAX_ETHPORTS];
struct rte_eth_link link_status;
int retval;
@@ -627,29 +627,29 @@ test_stop_bonded_device(void)
"Bonded port (%d) status (%d) is not expected value (%d).",
test_params->bonded_port_id, link_status.link_status, 0);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+ "Number of children (%d) is not expected value (%d).",
+ current_child_count, test_params->bonded_child_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, 0);
+ current_child_count = rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_child_count, 0,
+ "Number of active children (%d) is not expected value (%d).",
+ current_child_count, 0);
return 0;
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
{
- /* Clean up and remove slaves from bonded device */
+ /* Clean up and remove children from bonded device */
free_virtualpmd_tx_queue();
- while (test_params->bonded_slave_count > 0)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "test_remove_slave_from_bonded_device failed");
+ while (test_params->bonded_child_count > 0)
+ TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+ "test_remove_child_from_bonded_device failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -681,10 +681,10 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+ TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->child_port_ids[0],
bonding_modes[i]),
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
bonding_modes[i]),
@@ -704,26 +704,26 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+ bonding_mode = rte_eth_bond_mode_get(test_params->child_port_ids[0]);
TEST_ASSERT(bonding_mode < 0,
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_children_and_stop_bonded_device();
}
static int
-test_set_primary_slave(void)
+test_set_primary_child(void)
{
int i, j, retval;
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr *expected_mac_addr;
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.");
+ /* Add 4 children to bonded device */
+ for (i = test_params->bonded_child_count; i < 4; i++)
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child to bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +732,34 @@ test_set_primary_slave(void)
/* Invalid port ID */
TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
- test_params->slave_port_ids[i]),
+ test_params->child_port_ids[i]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
- test_params->slave_port_ids[i]),
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->child_port_ids[i],
+ test_params->child_port_ids[i]),
"Expected call to failed as invalid port specified.");
- /* Set slave as primary
- * Verify slave it is now primary slave
- * Verify that MAC address of bonded device is that of primary slave
- * Verify that MAC address of all bonded slaves are that of primary slave
+ /* Set child as primary
+ * Verify it is now the primary child
+ * Verify that MAC address of bonded device is that of primary child
+ * Verify that MAC addresses of all bonded children are that of primary child
*/
for (i = 0; i < 4; i++) {
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[i]),
+ test_params->child_port_ids[i]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->child_port_ids[i]);
retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(retval >= 0,
"Failed to read primary port from bonded port (%d)\n",
test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+ TEST_ASSERT_EQUAL(retval, test_params->child_port_ids[i],
"Bonded port (%d) primary port (%d) not expected value (%d)\n",
test_params->bonded_port_id, retval,
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
/* stop/start bonded eth dev to apply new MAC */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +770,13 @@ test_set_primary_slave(void)
"Failed to start bonded port %d",
test_params->bonded_port_id);
- expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+ expected_mac_addr = (struct rte_ether_addr *)&child_mac;
expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Check primary slave MAC */
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Check primary child MAC */
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
@@ -789,16 +789,16 @@ test_set_primary_slave(void)
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
- /* Check other slaves MACs */
+ /* Check other children MACs */
for (j = 0; j < 4; j++) {
if (j != i) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[j],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[j]);
+ test_params->child_port_ids[j]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary "
+ "child port mac address not set to that of primary "
"port");
}
}
@@ -809,14 +809,14 @@ test_set_primary_slave(void)
TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
"read primary port from expectedly");
- /* Test with slave port */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+ /* Test with child port */
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->child_port_ids[0]),
"read primary port from expectedly\n");
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to stop and remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
+ "Failed to stop and remove children from bonded device");
- /* No slaves */
+ /* No children */
TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
"read primary port from expectedly\n");
@@ -840,7 +840,7 @@ test_set_explicit_bonded_mac(void)
/* Non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
- test_params->slave_port_ids[0], mac_addr),
+ test_params->child_port_ids[0], mac_addr),
"Expected call to failed as invalid port specified.");
/* NULL MAC address */
@@ -853,10 +853,10 @@ test_set_explicit_bonded_mac(void)
"Failed to set MAC address on bonded port (%d)",
test_params->bonded_port_id);
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++) {
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.\n");
+ /* Add 4 children to bonded device */
+ for (i = test_params->bonded_child_count; i < 4; i++) {
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child to bonded device.\n");
}
/* Check bonded MAC */
@@ -866,14 +866,14 @@ test_set_explicit_bonded_mac(void)
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port");
- /* Check other slaves MACs */
+ /* Check other children MACs */
for (i = 0; i < 4; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary port");
+ "child port mac address not set to that of primary port");
}
/* test resetting mac address on bonded device */
@@ -883,13 +883,13 @@ test_set_explicit_bonded_mac(void)
test_params->bonded_port_id);
TEST_ASSERT_FAIL(
- rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+ rte_eth_bond_mac_address_reset(test_params->child_port_ids[0]),
"Reset MAC address on bonded port (%d) unexpectedly",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[0]);
- /* test resetting mac address on bonded device with no slaves */
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to remove slaves and stop bonded device");
+ /* test resetting mac address on bonded device with no children */
+ TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
+ "Failed to remove children and stop bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +898,25 @@ test_set_explicit_bonded_mac(void)
return 0;
}
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT (3)
static int
test_set_bonded_port_initialization_mac_assignment(void)
{
- int i, slave_count;
+ int i, child_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
static int bonded_port_id = -1;
- static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+ static int child_port_ids[BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT];
- struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+ struct rte_ether_addr child_mac_addr, bonded_mac_addr, read_mac_addr;
/* Initialize default values for MAC addresses */
- memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
- memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+ memcpy(&child_mac_addr, child_mac, sizeof(struct rte_ether_addr));
+ memcpy(&bonded_mac_addr, child_mac, sizeof(struct rte_ether_addr));
/*
- * 1. a - Create / configure bonded / slave ethdevs
+ * 1. a - Create / configure bonded / child ethdevs
*/
if (bonded_port_id == -1) {
bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +927,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
"Failed to configure bonded ethdev");
}
- if (!mac_slaves_initialized) {
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ if (!mac_children_initialized) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
i + 100;
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
- "eth_slave_%d", i);
+ "eth_child_%d", i);
- slave_port_ids[i] = virtual_ethdev_create(pmd_name,
- &slave_mac_addr, rte_socket_id(), 1);
+ child_port_ids[i] = virtual_ethdev_create(pmd_name,
+ &child_mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(slave_port_ids[i] >= 0,
- "Failed to create slave ethdev %s",
+ TEST_ASSERT(child_port_ids[i] >= 0,
+ "Failed to create child ethdev %s",
pmd_name);
- TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+ TEST_ASSERT_SUCCESS(configure_ethdev(child_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s",
pmd_name);
}
- mac_slaves_initialized = 1;
+ mac_children_initialized = 1;
}
/*
- * 2. Add slave ethdevs to bonded device
+ * 2. Add child ethdevs to bonded device
*/
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
- slave_port_ids[i]),
- "Failed to add slave (%d) to bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(bonded_port_id,
+ child_port_ids[i]),
+ "Failed to add child (%d) to bonded port (%d).",
+ child_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ child_count = rte_eth_bond_children_get(bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
- "Number of slaves (%d) is not as expected (%d)",
- slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT, child_count,
+ "Number of children (%d) is not as expected (%d)",
+ child_count, BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT);
/*
@@ -982,16 +982,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
/* 4. a - Start bonded ethdev
- * b - Enable slave devices
- * c - Verify bonded/slaves ethdev MAC addresses
+ * b - Enable child devices
+ * c - Verify bonded/children ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
"Failed to start bonded pmd eth device %d.",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- slave_port_ids[i], 1);
+ child_port_ids[i], 1);
}
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1001,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
+ child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "child port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "child port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "child port 2 mac address not as expected");
/* 7. a - Change primary port
* b - Stop / Start bonded port
- * d - Verify slave ethdev MAC addresses
+ * c - Verify child ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
- slave_port_ids[2]),
+ child_port_ids[2]),
"failed to set primary port on bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1048,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "child port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "child port 1 mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
+ child_port_ids[2]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "child port 2 mac address not as expected");
/* 6. a - Stop bonded ethdev
- * b - remove slave ethdevs
- * c - Verify slave ethdevs MACs are restored
+ * b - remove child ethdevs
+ * c - Verify child ethdevs MACs are restored
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
"Failed to stop bonded port %u",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
- slave_port_ids[i]),
- "Failed to remove slave %d from bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(bonded_port_id,
+ child_port_ids[i]),
+ "Failed to remove child %d from bonded port (%d).",
+ child_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ child_count = rte_eth_bond_children_get(bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of slaves (%d) is great than expected (%d).",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(child_count, 0,
+ "Number of children (%d) is great than expected (%d).",
+ child_count, 0);
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "child port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "child port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ child_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "child port 2 mac address not as expected");
return 0;
}
static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
- uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_children(uint8_t bonding_mode, uint8_t bond_en_isr,
+ uint16_t number_of_children, uint8_t enable_child)
{
/* Configure bonded device */
TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
- "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
- number_of_slaves);
-
- /* Add slaves to bonded device */
- while (number_of_slaves > test_params->bonded_slave_count)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave (%d to bonding port (%d).",
- test_params->bonded_slave_count - 1,
+ "with (%d) children.", test_params->bonded_port_id, bonding_mode,
+ number_of_children);
+
+ /* Add children to bonded device */
+ while (number_of_children > test_params->bonded_child_count)
+ TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+ "Failed to add child (%d to bonding port (%d).",
+ test_params->bonded_child_count - 1,
test_params->bonded_port_id);
/* Set link bonding mode */
@@ -1148,40 +1148,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- if (enable_slave)
- enable_bonded_slaves();
+ if (enable_child)
+ enable_bonded_children();
return 0;
}
static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_child_after_bonded_device_started(void)
{
int i;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
- "Failed to add slaves to bonded device");
+ "Failed to add children to bonded device");
- /* Enabled slave devices */
- for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+ /* Enable child devices */
+ for (i = 0; i < test_params->bonded_child_count + 1; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->child_port_ids[i], 1);
}
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave to bonded port.\n");
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+ test_params->child_port_ids[test_params->bonded_child_count]),
+ "Failed to add child to bonded port.\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count]);
+ test_params->child_port_ids[test_params->bonded_child_count]);
- test_params->bonded_slave_count++;
+ test_params->bonded_child_count++;
- return remove_slaves_and_stop_bonded_device();
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
+#define TEST_STATUS_INTERRUPT_CHILD_COUNT 4
#define TEST_LSC_WAIT_TIMEOUT_US 500000
int test_lsc_interrupt_count;
@@ -1237,13 +1237,13 @@ lsc_timeout(int wait_us)
static int
test_status_interrupt(void)
{
- int slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int child_count;
+ uint16_t children[RTE_MAX_ETHPORTS];
- /* initialized bonding device with T slaves */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* initialize bonded device with T children */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 1,
- TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+ TEST_STATUS_INTERRUPT_CHILD_COUNT, 1),
"Failed to initialise bonded device");
test_lsc_interrupt_count = 0;
@@ -1253,27 +1253,27 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(child_count, TEST_STATUS_INTERRUPT_CHILD_COUNT,
+ "Number of active children (%d) is not as expected (%d)",
+ child_count, TEST_STATUS_INTERRUPT_CHILD_COUNT);
- /* Bring all 4 slaves link status to down and test that we have received a
+ /* Bring all 4 children's link status down and test that we have received
* lsc interrupts */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->child_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->child_port_ids[2], 0);
TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
"Received a link status change interrupt unexpectedly");
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1281,18 +1281,18 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(child_count, 0,
+ "Number of active children (%d) is not as expected (%d)",
+ child_count, 0);
- /* bring one slave port up so link status will change */
+ /* bring one child port up so link status will change */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->child_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1301,12 +1301,12 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- /* Verify that calling the same slave lsc interrupt doesn't cause another
+ /* Verify that triggering the same child lsc interrupt doesn't cause another
* lsc interrupt from bonded device */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->child_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
"received unexpected interrupt");
@@ -1320,8 +1320,8 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -1398,11 +1398,11 @@ test_roundrobin_tx_burst(void)
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_child_count;
TEST_ASSERT(burst_size <= MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -1423,20 +1423,20 @@ test_roundrobin_tx_burst(void)
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify child ports tx stats */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size / test_params->bonded_slave_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ (uint64_t)burst_size / test_params->bonded_child_count,
+ "Child Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_child_count);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try to transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -1444,8 +1444,8 @@ test_roundrobin_tx_burst(void)
pkt_burst, burst_size), 0,
"tx burst return unexpected value");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -1471,13 +1471,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
rte_pktmbuf_free(mbufs[i]);
}
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
+#define TEST_RR_CHILD_TX_FAIL_CHILD_COUNT (2)
+#define TEST_RR_CHILD_TX_FAIL_BURST_SIZE (64)
+#define TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT (22)
+#define TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX (1)
static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_child_tx_fail(void)
{
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1486,49 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
int i, first_fail_idx, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0,
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_RR_CHILD_TX_FAIL_CHILD_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_RR_CHILD_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+ TEST_RR_CHILD_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
/* Copy references to packets which we expect not to be transmitted */
- first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
- TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+ first_fail_idx = (TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+ (TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT *
+ TEST_RR_CHILD_TX_FAIL_CHILD_COUNT)) +
+ TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX;
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
- (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+ (i * TEST_RR_CHILD_TX_FAIL_CHILD_COUNT)];
}
- /* Set virtual slave to only fail transmission of
- * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+ /* Set virtual child to only fail transmission of
+ * TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT packets in burst */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->child_port_ids[TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->child_port_ids[TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX],
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_RR_CHILD_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1538,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ (uint64_t)TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- int slave_expected_tx_count;
+ /* Verify child ports tx stats */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ int child_expected_tx_count;
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
- slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
- test_params->bonded_slave_count;
+ child_expected_tx_count = TEST_RR_CHILD_TX_FAIL_BURST_SIZE /
+ test_params->bonded_child_count;
- if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
- slave_expected_tx_count = slave_expected_tx_count -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+ if (i == TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX)
+ child_expected_tx_count = child_expected_tx_count -
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT;
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)slave_expected_tx_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[i],
- (unsigned int)port_stats.opackets, slave_expected_tx_count);
+ (uint64_t)child_expected_tx_count,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[i],
+ (unsigned int)port_stats.opackets, child_expected_tx_count);
}
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
- free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ free_mbufs(&pkt_burst[tx_count], TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_child(void)
{
struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1585,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
int i, j, burst_size = 25;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ /* Add rx data to child */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -1616,25 +1616,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
- /* Verify bonded slave devices rx count */
- /* Verify slave ports tx stats */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded child devices rx count */
+ /* Verify child ports rx stats */
+ for (j = 0; j < test_params->bonded_child_count; j++) {
+ rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
- /* Reset bonded slaves stats */
- rte_eth_stats_reset(test_params->slave_port_ids[j]);
+ /* Reset bonded children stats */
+ rte_eth_stats_reset(test_params->child_port_ids[j]);
}
/* reset bonded device stats */
rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1646,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT (3)
static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_children(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+ int burst_size[TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT] = { 15, 13, 36 };
int i, nb_rx;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
burst_size[i], "burst generation failed");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to children */
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -1697,29 +1697,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded child devices rx counts */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[2],
(unsigned int)port_stats.ipackets, burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[3],
(unsigned int)port_stats.ipackets, 0);
/* free mbufs */
@@ -1727,8 +1727,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -1739,48 +1739,48 @@ test_roundrobin_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+ test_params->child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[2], &expected_mac_addr_2),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
- /* Verify that all MACs are the same as first slave added to bonded dev */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Verify that all MACs are the same as first child added to bonded dev */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->child_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->child_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary"
+ "child port (%d) mac address has changed to that of primary"
" port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
/* stop / start bonded device and verify that primary MAC address is
- * propagate to bonded device and slaves */
+ * propagated to bonded device and children */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
test_params->bonded_port_id);
@@ -1794,16 +1794,16 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(
memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary"
- " port", test_params->slave_port_ids[i]);
+ "child port (%d) mac address not set to that of new primary"
+ " port", test_params->child_port_ids[i]);
}
/* Set explicit MAC address */
@@ -1818,19 +1818,19 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
- sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
- " that of new primary port\n", test_params->slave_port_ids[i]);
+ sizeof(read_mac_addr)), "child port (%d) mac address not set to"
+ " that of new primary port\n", test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -1839,10 +1839,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
int i, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1854,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "child port (%d) promiscuous mode not enabled",
+ test_params->child_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1872,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
"Port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_CHILD_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT (2)
static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_child_link_status_change_behaviour(void)
{
struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
- struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, child_count;
/* NULL all pointers in array to simplify cleanup */
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+ /* Initialize bonded device with TEST_RR_LINK_STATUS_CHILD_COUNT children
* in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+ BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_CHILD_COUNT, 1),
+ "Failed to initialize bonded device with children");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current child count / active child count */
+ child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(child_count, TEST_RR_LINK_STATUS_CHILD_COUNT,
+ "Number of children (%d) is not as expected (%d).",
+ child_count, TEST_RR_LINK_STATUS_CHILD_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, TEST_RR_LINK_STATUS_CHILD_COUNT,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, TEST_RR_LINK_STATUS_CHILD_COUNT);
- /* Set 2 slaves eth_devs link status to down */
+ /* Set 2 children eth_devs link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count,
- TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).\n",
- slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count,
+ TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT,
+ "Number of active children (%d) is not as expected (%d).\n",
+ child_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT);
burst_size = 20;
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not sent on children with link status down:
*
* 1. Generate test burst of traffic
* 2. Transmit burst on bonded eth_dev
* 3. Verify stats for bonded eth_dev (opackets = burst_size)
- * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 4. Verify stats for child eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
TEST_ASSERT_EQUAL(
generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1960,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+ test_params->child_port_ids[0], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+ test_params->child_port_ids[1], (int)port_stats.opackets, 0);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+ test_params->child_port_ids[2], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+ test_params->child_port_ids[3], (int)port_stats.opackets, 0);
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not received on children with link status down:
*
* 1. Generate test bursts of traffic
* 2. Add bursts on to virtual eth_devs
* 3. Rx burst on bonded eth_dev, expected (burst_ size *
- * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+ * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT) received
* 4. Verify stats for bonded eth_dev
- * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 6. Verify stats for child eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
- for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_RR_LINK_STATUS_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[i][0], burst_size);
}
@@ -2014,49 +2014,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT (2)
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_child_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_children[TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verfiy_polling_child_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
- (struct rte_ether_addr *)polling_slave_mac;
- char slave_name[RTE_ETH_NAME_MAX_LEN];
+ (struct rte_ether_addr *)polling_child_mac;
+ char child_name[RTE_ETH_NAME_MAX_LEN];
int i;
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
- /* Generate slave name / MAC address */
- snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT; i++) {
+ /* Generate child name / MAC address */
+ snprintf(child_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Create slave devices with no ISR Support */
- if (polling_test_slaves[i] == -1) {
- polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+ /* Create child devices with no ISR Support */
+ if (polling_test_children[i] == -1) {
+ polling_test_children[i] = virtual_ethdev_create(child_name, mac_addr,
rte_socket_id(), 0);
- TEST_ASSERT(polling_test_slaves[i] >= 0,
- "Failed to create virtual virtual ethdev %s\n", slave_name);
+ TEST_ASSERT(polling_test_children[i] >= 0,
+ "Failed to create virtual virtual ethdev %s\n", child_name);
- /* Configure slave */
- TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
- "Failed to configure virtual ethdev %s(%d)", slave_name,
- polling_test_slaves[i]);
+ /* Configure child */
+ TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_children[i], 0, 0),
+ "Failed to configure virtual ethdev %s(%d)", child_name,
+ polling_test_children[i]);
}
- /* Add slave to bonded device */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to add slave %s(%d) to bonded device %d",
- slave_name, polling_test_slaves[i],
+ /* Add child to bonded device */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+ polling_test_children[i]),
+ "Failed to add child %s(%d) to bonded device %d",
+ child_name, polling_test_children[i],
test_params->bonded_port_id);
}
@@ -2071,26 +2071,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* link status change callback for first slave link up */
+ /* link status change callback for first child link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+ virtual_ethdev_set_link_status(polling_test_children[0], 1);
TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
- /* no link status change callback for second slave link up */
+ /* no link status change callback for second child link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+ virtual_ethdev_set_link_status(polling_test_children[1], 1);
TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
- /* link status change callback for both slave links down */
+ /* link status change callback for both child links down */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
- virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+ virtual_ethdev_set_link_status(polling_test_children[0], 0);
+ virtual_ethdev_set_link_status(polling_test_children[1], 0);
TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
@@ -2100,17 +2100,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+ /* Clean up and remove children from bonded device */
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT; i++) {
TEST_ASSERT_SUCCESS(
- rte_eth_bond_slave_remove(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to remove slave %d from bonded port (%d)",
- polling_test_slaves[i], test_params->bonded_port_id);
+ rte_eth_bond_child_remove(test_params->bonded_port_id,
+ polling_test_children[i]),
+ "Failed to remove child %d from bonded port (%d)",
+ polling_test_children[i], test_params->bonded_port_id);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_children_and_stop_bonded_device();
}
@@ -2123,9 +2123,9 @@ test_activebackup_tx_burst(void)
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2136,7 @@ test_activebackup_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_child_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -2160,38 +2160,38 @@ test_activebackup_tx_burst(void)
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
- if (test_params->slave_port_ids[i] == primary_port) {
+ /* Verify child ports tx stats */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
+ if (test_params->child_port_ids[i] == primary_port) {
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_child_count);
} else {
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, 0);
}
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try and transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
pkts_burst, burst_size), 0, "Sending empty burst failed");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT (4)
static int
test_activebackup_rx_burst(void)
@@ -2205,24 +2205,24 @@ test_activebackup_rx_burst(void)
int i, j, burst_size = 17;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT, 1),
+ "Failed to initialize bonded device with children");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary child for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
burst_size, "burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to child */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -2230,7 +2230,7 @@ test_activebackup_rx_burst(void)
&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
"rte_eth_rx_burst failed");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->child_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2238,27 @@ test_activebackup_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded child devices rx count */
+ for (j = 0; j < test_params->bonded_child_count; j++) {
+ rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)", test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as "
+ "expected (%d)", test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)\n", test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as "
+ "expected (%d)\n", test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_child_count; j++) {
+ rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected "
- "(%d)", test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected "
+ "(%d)", test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -2275,8 +2275,8 @@ test_activebackup_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -2285,14 +2285,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary child for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2304,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->child_port_ids[i]);
+ if (primary_port == test_params->child_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "child port (%d) promiscuous mode not enabled",
+ test_params->child_port_ids[i]);
} else {
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode enabled",
- test_params->slave_port_ids[i]);
+ "child port (%d) promiscuous mode enabled",
+ test_params->child_port_ids[i]);
}
}
@@ -2328,16 +2328,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "child port (%d) promiscuous mode not disabled\n",
+ test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -2346,19 +2346,19 @@ test_activebackup_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 children in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with children");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first child and that the other child
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2368,27 +2368,27 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->child_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2398,24 +2398,24 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[1]);
/* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ * propagated to bonded device and children */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -2432,21 +2432,21 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2462,36 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of bonded port",
+ test_params->child_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_child_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, child_count, primary_port;
burst_size = 21;
@@ -2502,96 +2502,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT, 1),
+ "Failed to initialize bonded device with children");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current child count / active child count */
+ child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(child_count, 4,
+ "Number of children (%d) is not as expected (%d).",
+ child_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, 4,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 children down and verify active child count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->child_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->child_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active child count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->child_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS),
3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[2],
"Primary port not as expected");
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary child */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(
test_params->bonded_port_id, 0, &pkt_burst[0][0],
burst_size), burst_size, "rte_eth_tx_burst failed");
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->child_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->child_port_ids[i], &pkt_burst[i][0], burst_size);
}
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2604,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected",
test_params->bonded_port_id);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->child_port_ids[3]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
/** Balance Mode Tests */
@@ -2633,9 +2633,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
static int
test_balance_xmit_policy_configuration(void)
{
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_children.");
/* Invalid port id */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2644,7 @@ test_balance_xmit_policy_configuration(void)
/* Set xmit policy on non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
- test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
+ test_params->child_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
"Expected call to failed as invalid port specified.");
@@ -2677,25 +2677,25 @@ test_balance_xmit_policy_configuration(void)
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
"Expected call to failed as invalid port specified.");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_CHILD_COUNT (2)
static int
test_balance_l2_tx_burst(void)
{
- struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
- int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+ struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_CHILD_COUNT][MAX_PKT_BURST];
+ int burst_size[TEST_BALANCE_L2_TX_BURST_CHILD_COUNT] = { 10, 15 };
uint16_t pktlen;
int i;
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_CHILD_COUNT, 1),
+ "Failed to initialize_bonded_device_with_children.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2730,7 @@ test_balance_l2_tx_burst(void)
"failed to generate packet burst");
/* Send burst 1 on bonded port */
- for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_L2_TX_BURST_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
&pkts_burst[i][0], burst_size[i]),
burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2745,24 @@ test_balance_l2_tx_burst(void)
burst_size[0] + burst_size[1]);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify child ports tx stats */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)\n",
+ test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
burst_size[1]);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try and transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2770,8 +2770,8 @@ test_balance_l2_tx_burst(void)
test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -2785,9 +2785,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_children.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2825,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify child ports tx stats */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try and transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2851,8 +2851,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -2897,9 +2897,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_children.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2938,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify child ports tx stats */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try and transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2963,8 +2963,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, 0, pkts_burst_1,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3003,27 +3003,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
return balance_l34_tx_burst(0, 0, 0, 0, 1);
}
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
+#define TEST_BAL_CHILD_TX_FAIL_CHILD_COUNT (2)
+#define TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 (40)
+#define TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2 (20)
+#define TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT (25)
+#define TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX (0)
static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_child_tx_fail(void)
{
- struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
- struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+ struct rte_mbuf *pkts_burst_1[TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1];
+ struct rte_mbuf *pkts_burst_2[TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2];
- struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+ struct rte_mbuf *expected_fail_pkts[TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, first_tx_fail_idx, tx_count_1, tx_count_2;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0,
- TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BAL_CHILD_TX_FAIL_CHILD_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3033,46 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1,
"Failed to generate test packet burst 1");
- first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+ first_tx_fail_idx = TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT;
/* copy mbuf references for expected transmission failures */
- for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+ for (i = 0; i < TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT; i++)
expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
"Failed to generate test packet burst 2");
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /* Set virtual child TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX to only fail
+ * transmission of TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT packets of burst */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->child_port_ids[TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->child_port_ids[TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX],
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
/* Transmit burst 1 */
tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1);
- TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ tx_count_1, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
/* Verify that the failed packets are the expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3080,94 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Transmit burst 2 */
tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
- TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ tx_count_2, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+ (uint64_t)((TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2),
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ (TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
- /* Verify slave ports tx stats */
+ /* Verify child ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1],
+ (uint64_t)TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
+ "Child Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
/* Verify that all mbufs have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_CHILD_COUNT (3)
static int
test_balance_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+ int burst_size[TEST_BALANCE_RX_BURST_CHILD_COUNT] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 children in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
0, 0), burst_size[i],
"failed to generate packet burst");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to children */
+ for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3187,33 +3187,33 @@ test_balance_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded child devices rx counts */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3222,8 @@ test_balance_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3232,8 +3232,8 @@ test_balance_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3246,11 +3246,11 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->child_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3262,15 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->child_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3279,19 +3279,19 @@ test_balance_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 children in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BALANCE, 0, 2, 1),
"Failed to initialise bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that the bonded MAC is that of the first child and that the other child
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3301,27 +3301,27 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]),
+ test_params->child_port_ids[1]),
"Failed to set bonded port (%d) primary port to (%d)\n",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3331,24 +3331,24 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[1]);
/* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ * propagated to bonded device and children */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3365,21 +3365,21 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3395,44 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected\n",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not as expected\n",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of bonded port",
+ test_params->child_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_CHILD_COUNT (4)
static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_child_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, child_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+ /* Initialize bonded device with 4 children in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_CHILD_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3440,32 @@ test_balance_verify_slave_link_status_change_behaviour(void)
"Failed to set balance xmit policy.");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current child count and active child count */
+ child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT,
+ "Number of children (%d) is not as expected (%d).",
+ child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT);
- /* Set 2 slaves link status to down */
+ /* Set 2 children link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 2);
/* Send two sets of packet bursts and verify that they are balanced across
- * slaves */
+ * children */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3491,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->child_port_ids[0], (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[2], (int)port_stats.opackets,
+ test_params->child_port_ids[2], (int)port_stats.opackets,
burst_size);
- /* verify that all packets get send on primary slave when no other slaves
+ /* verify that all packets are sent on the primary child when no other children
* are available */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->child_port_ids[2], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 1);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 1,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 1);
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3528,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->child_port_ids[0], (int)port_stats.opackets,
burst_size + burst_size);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->child_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->child_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 1);
+ test_params->child_port_ids[2], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->child_port_ids[3], 1);
- for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_LINK_STATUS_CHILD_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"Failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on children with link status down */
rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
MAX_PKT_BURST);
@@ -3564,8 +3564,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.ipackets,
burst_size * 3);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3576,7 +3576,7 @@ test_broadcast_tx_burst(void)
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BROADCAST, 0, 2, 1),
"Failed to initialise bonded device");
@@ -3590,7 +3590,7 @@ test_broadcast_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_child_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -3611,25 +3611,25 @@ test_broadcast_tx_burst(void)
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size * test_params->bonded_slave_count,
+ (uint64_t)burst_size * test_params->bonded_child_count,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify child ports tx stats */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ "Child Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, burst_size);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try to transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -3637,159 +3637,159 @@ test_broadcast_tx_burst(void)
test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
"transmitted an unexpected number of packets");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
+#define TEST_BCAST_CHILD_TX_FAIL_CHILD_COUNT (3)
+#define TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE (40)
+#define TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT (15)
+#define TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT (10)
static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_child_tx_fail(void)
{
- struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
- struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+ struct rte_mbuf *pkts_burst[TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE];
+ struct rte_mbuf *expected_fail_pkts[TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BROADCAST, 0,
- TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BCAST_CHILD_TX_FAIL_CHILD_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
- expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+ for (i = 0; i < TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ expected_fail_pkts[i] = pkts_burst[TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT + i];
}
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /* Set the virtual children to fail transmission of
+ * TEST_BCAST_CHILD_TX_FAIL_MAX/MIN_PACKETS_COUNT packets of the burst */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[0],
+ test_params->child_port_ids[0],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[1],
+ test_params->child_port_ids[1],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[2],
+ test_params->child_port_ids[2],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[0],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->child_port_ids[0],
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[1],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ test_params->child_port_ids[1],
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[2],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->child_port_ids[2],
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
/* Transmit burst */
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ tx_count, TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
}
- /* Verify slave ports tx stats */
+ /* Verify child ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
/* Verify that all mbufs who transmission failed have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_CHILDS (3)
static int
test_broadcast_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_CHILDS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+ int burst_size[BROADCAST_RX_BURST_NUM_OF_CHILDS] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 children in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BROADCAST, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
burst_size[i], "failed to generate packet burst");
}
- /* Add rx data to slave 0 */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to children */
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3810,33 +3810,33 @@ test_broadcast_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded child devices rx counts */
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Child Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->child_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs allocate for rx testing */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3845,8 @@ test_broadcast_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3855,8 +3855,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3870,11 +3870,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->child_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3886,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->child_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -3905,49 +3905,49 @@ test_broadcast_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+ test_params->child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[2], &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
- /* Verify that all MACs are the same as first slave added to bonded
+ /* Verify that all MACs are the same as first child added to bonded
* device */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->child_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->child_port_ids[2]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary "
+ "child port (%d) mac address has changed to that of primary "
"port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
}
/* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ * propagated to bonded device and children */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3962,16 +3962,16 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "child port (%d) mac address not set to that of new primary "
+ "port", test_params->child_port_ids[i]);
}
/* Set explicit MAC address */
@@ -3986,71 +3986,71 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "child port (%d) mac address not set to that of new primary "
+ "port", test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_CHILDS (4)
static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_child_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_CHILDS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, child_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+ /* Initialize bonded device with 4 children in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+ BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_CHILDS,
1), "Failed to initialise bonded device");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current child count and active child count */
+ child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(child_count, 4,
+ "Number of children (%d) is not as expected (%d).",
+ child_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, 4,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 4);
- /* Set 2 slaves link status to down */
+ /* Set 2 children link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, 2,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 2);
- for (i = 0; i < test_params->bonded_slave_count; i++)
- rte_eth_stats_reset(test_params->slave_port_ids[i]);
+ for (i = 0; i < test_params->bonded_child_count; i++)
+ rte_eth_stats_reset(test_params->child_port_ids[i]);
- /* Verify that pkts are not sent on slaves with link status down */
+ /* Verify that pkts are not sent on children with link status down */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4062,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"rte_eth_tx_burst failed\n");
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
- TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+ TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * child_count),
"(%d) port_stats.opackets (%d) not as expected (%d)\n",
test_params->bonded_port_id, (int)port_stats.opackets,
- burst_size * slave_count);
+ burst_size * child_count);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->child_port_ids[3]);
- for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_CHILDS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on children with link status down */
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4110,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -4146,21 +4146,21 @@ testsuite_teardown(void)
free(test_params->pkt_eth_hdr);
test_params->pkt_eth_hdr = NULL;
- /* Clean up and remove slaves from bonded device */
- remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ remove_children_and_stop_bonded_device();
}
static void
free_virtualpmd_tx_queue(void)
{
- int i, slave_port, to_free_cnt;
+ int i, child_port, to_free_cnt;
struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
/* Free tx queue of virtual pmd */
- for (slave_port = 0; slave_port < test_params->bonded_slave_count;
- slave_port++) {
+ for (child_port = 0; child_port < test_params->bonded_child_count;
+ child_port++) {
to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_port],
+ test_params->child_port_ids[child_port],
pkts_to_free, MAX_PKT_BURST);
for (i = 0; i < to_free_cnt; i++)
rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4177,11 @@ test_tlb_tx_burst(void)
uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
uint16_t pktlen;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children
(BONDING_MODE_TLB, 1, 3, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_child_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.\n");
@@ -4197,7 +4197,7 @@ test_tlb_tx_burst(void)
RTE_ETHER_TYPE_IPV4, 0, 0);
} else {
initialize_eth_header(test_params->pkt_eth_hdr,
- (struct rte_ether_addr *)test_params->default_slave_mac,
+ (struct rte_ether_addr *)test_params->default_child_mac,
(struct rte_ether_addr *)dst_mac_0,
RTE_ETHER_TYPE_IPV4, 0, 0);
}
@@ -4234,26 +4234,26 @@ test_tlb_tx_burst(void)
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+ /* Verify child ports tx stats */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
+ rte_eth_stats_get(test_params->child_port_ids[i], &port_stats[i]);
sum_ports_opackets += port_stats[i].opackets;
}
TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
- "Total packets sent by slaves is not equal to packets sent by bond interface");
+ "Total packets sent by children is not equal to packets sent by bond interface");
- /* checking if distribution of packets is balanced over slaves */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* checking if distribution of packets is balanced over children */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
TEST_ASSERT(port_stats[i].obytes > 0 &&
port_stats[i].obytes < all_bond_obytes,
- "Packets are not balanced over slaves");
+ "Packets are not balanced over children");
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all children down and try to transmit */
+ for (i = 0; i < test_params->bonded_child_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->child_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -4261,11 +4261,11 @@ test_tlb_tx_burst(void)
burst_size);
TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
- /* Clean ugit checkout masterp and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT (4)
static int
test_tlb_rx_burst(void)
@@ -4279,26 +4279,26 @@ test_tlb_rx_burst(void)
uint16_t i, j, nb_rx, burst_size = 17;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_TLB,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+ TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT, 1, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary child for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to child */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -4307,7 +4307,7 @@ test_tlb_rx_burst(void)
TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->child_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4315,27 @@ test_tlb_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded child devices rx count */
+ for (j = 0; j < test_params->bonded_child_count; j++) {
+ rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_child_count; j++) {
+ rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->child_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -4348,8 +4348,8 @@ test_tlb_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -4358,14 +4358,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_TLB, 0, 4, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary child for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4377,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->child_port_ids[i]);
+ if (primary_port == test_params->child_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
@@ -4402,16 +4402,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_child_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->child_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "child port (%d) promiscuous mode not disabled\n",
+ test_params->child_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
@@ -4420,19 +4420,19 @@ test_tlb_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->child_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 children in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_TLB, 0, 2, 1),
"Failed to initialize bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of the first child and that the other child
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4442,27 +4442,27 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->child_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4472,24 +4472,24 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[1]);
/* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ * propagated to bonded device and children */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -4506,21 +4506,21 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of primary port",
+ test_params->child_port_ids[1]);
/* Set explicit MAC address */
@@ -4537,36 +4537,36 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "child port (%d) mac address not as expected",
+ test_params->child_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "child port (%d) mac address not set to that of bonded port",
+ test_params->child_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_child_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, child_count, primary_port;
burst_size = 21;
@@ -4574,61 +4574,61 @@ test_tlb_verify_slave_link_status_change_failover(void)
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 children in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
BONDING_MODE_TLB, 0,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT, 1),
+ "Failed to initialize bonded device with children");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current children count / active children count */
+ child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(child_count, 4,
+ "Number of children (%d) is not as expected (%d).\n",
+ child_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, (int)4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+ children, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(child_count, (int)4,
+ "Number of children (%d) is not as expected (%d).\n",
+ child_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 children down and verify active child count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->child_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->child_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->child_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->child_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active child count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->child_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+ test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 3,
+ "Number of active children (%d) is not as expected (%d).",
+ child_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[2],
"Primary port not as expected");
rte_delay_us(500000);
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary child */
for (i = 0; i < 4; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4639,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
rte_delay_us(11000);
}
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->child_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->child_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[2]);
+ test_params->child_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->child_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT; i++) {
if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
burst_size)
return -1;
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->child_port_ids[i], &pkt_burst[i][0], burst_size);
}
if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4684,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove children from bonded device */
+ return remove_children_and_stop_bonded_device();
}
-#define TEST_ALB_SLAVE_COUNT 2
+#define TEST_ALB_CHILD_COUNT 2
static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4710,23 @@ test_alb_change_mac_in_reply_sent(void)
struct rte_ether_hdr *eth_pkt;
struct rte_arp_hdr *arp_pkt;
- int slave_idx, nb_pkts, pkt_idx;
+ int child_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *child_mac1, *child_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_children(BONDING_MODE_ALB,
+ 0, TEST_ALB_CHILD_COUNT, 1),
+ "Failed to initialize_bonded_device_with_children.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
- slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count;
+ child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->child_port_ids[child_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4782,18 +4782,18 @@ test_alb_change_mac_in_reply_sent(void)
RTE_ARP_OP_REPLY);
rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
- slave_mac1 =
- rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 =
- rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ child_mac1 =
+ rte_eth_devices[test_params->child_port_ids[0]].data->mac_addrs;
+ child_mac2 =
+ rte_eth_devices[test_params->child_port_ids[1]].data->mac_addrs;
/*
* Checking if packets are properly distributed on bonding ports. Packets
* 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->child_port_ids[child_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4802,14 @@ test_alb_change_mac_in_reply_sent(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (child_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(child_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(child_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4819,7 +4819,7 @@ test_alb_change_mac_in_reply_sent(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_children_and_stop_bonded_device();
return retval;
}
@@ -4832,22 +4832,22 @@ test_alb_reply_from_client(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+ int child_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *child_mac1, *child_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_children(BONDING_MODE_ALB,
+ 0, TEST_ALB_CHILD_COUNT, 1),
+ "Failed to initialize_bonded_device_with_children.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->child_port_ids[child_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4868,7 +4868,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4880,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4892,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4904,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
1);
/*
@@ -4914,15 +4914,15 @@ test_alb_reply_from_client(void)
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ child_mac1 = rte_eth_devices[test_params->child_port_ids[0]].data->mac_addrs;
+ child_mac2 = rte_eth_devices[test_params->child_port_ids[1]].data->mac_addrs;
/*
- * Checking if update ARP packets were properly send on slave ports.
+ * Checking if update ARP packets were properly sent on child ports.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+ test_params->child_port_ids[child_idx], pkts_sent, MAX_PKT_BURST);
nb_pkts_sum += nb_pkts;
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4931,14 @@ test_alb_reply_from_client(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (child_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(child_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(child_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4954,7 +4954,7 @@ test_alb_reply_from_client(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_children_and_stop_bonded_device();
return retval;
}
@@ -4968,21 +4968,21 @@ test_alb_receive_vlan_reply(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx;
+ int child_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_children(BONDING_MODE_ALB,
+ 0, TEST_ALB_CHILD_COUNT, 1),
+ "Failed to initialize_bonded_device_with_children.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->child_port_ids[child_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -5007,7 +5007,7 @@ test_alb_receive_vlan_reply(void)
arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
1);
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5016,9 @@ test_alb_receive_vlan_reply(void)
/*
* Checking if VLAN headers in generated ARP Update packet are correct.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->child_port_ids[child_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5049,7 @@ test_alb_receive_vlan_reply(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_children_and_stop_bonded_device();
return retval;
}
@@ -5062,9 +5062,9 @@ test_alb_ipv4_tx(void)
retval = 0;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_children(BONDING_MODE_ALB,
+ 0, TEST_ALB_CHILD_COUNT, 1),
+ "Failed to initialize_bonded_device_with_children.");
burst_size = 32;
@@ -5085,7 +5085,7 @@ test_alb_ipv4_tx(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_children_and_stop_bonded_device();
return retval;
}
@@ -5096,34 +5096,34 @@ static struct unit_test_suite link_bonding_test_suite = {
.unit_test_cases = {
TEST_CASE(test_create_bonded_device),
TEST_CASE(test_create_bonded_device_with_invalid_params),
- TEST_CASE(test_add_slave_to_bonded_device),
- TEST_CASE(test_add_slave_to_invalid_bonded_device),
- TEST_CASE(test_remove_slave_from_bonded_device),
- TEST_CASE(test_remove_slave_from_invalid_bonded_device),
- TEST_CASE(test_get_slaves_from_bonded_device),
- TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
- TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+ TEST_CASE(test_add_child_to_bonded_device),
+ TEST_CASE(test_add_child_to_invalid_bonded_device),
+ TEST_CASE(test_remove_child_from_bonded_device),
+ TEST_CASE(test_remove_child_from_invalid_bonded_device),
+ TEST_CASE(test_get_children_from_bonded_device),
+ TEST_CASE(test_add_already_bonded_child_to_bonded_device),
+ TEST_CASE(test_add_remove_multiple_children_to_from_bonded_device),
TEST_CASE(test_start_bonded_device),
TEST_CASE(test_stop_bonded_device),
TEST_CASE(test_set_bonding_mode),
- TEST_CASE(test_set_primary_slave),
+ TEST_CASE(test_set_primary_child),
TEST_CASE(test_set_explicit_bonded_mac),
TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
TEST_CASE(test_status_interrupt),
- TEST_CASE(test_adding_slave_after_bonded_device_started),
+ TEST_CASE(test_adding_child_after_bonded_device_started),
TEST_CASE(test_roundrobin_tx_burst),
- TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
- TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
- TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+ TEST_CASE(test_roundrobin_tx_burst_child_tx_fail),
+ TEST_CASE(test_roundrobin_rx_burst_on_single_child),
+ TEST_CASE(test_roundrobin_rx_burst_on_multiple_children),
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
- TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_child_link_status_change_behaviour),
+ TEST_CASE(test_roundrobin_verfiy_polling_child_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
TEST_CASE(test_activebackup_verify_mac_assignment),
- TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+ TEST_CASE(test_activebackup_verify_child_link_status_change_failover),
TEST_CASE(test_balance_xmit_policy_configuration),
TEST_CASE(test_balance_l2_tx_burst),
TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5137,26 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
- TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+ TEST_CASE(test_balance_tx_burst_child_tx_fail),
TEST_CASE(test_balance_rx_burst),
TEST_CASE(test_balance_verify_promiscuous_enable_disable),
TEST_CASE(test_balance_verify_mac_assignment),
- TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_balance_verify_child_link_status_change_behaviour),
TEST_CASE(test_tlb_tx_burst),
TEST_CASE(test_tlb_rx_burst),
TEST_CASE(test_tlb_verify_mac_assignment),
TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
- TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+ TEST_CASE(test_tlb_verify_child_link_status_change_failover),
TEST_CASE(test_alb_change_mac_in_reply_sent),
TEST_CASE(test_alb_reply_from_client),
TEST_CASE(test_alb_receive_vlan_reply),
TEST_CASE(test_alb_ipv4_tx),
TEST_CASE(test_broadcast_tx_burst),
- TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+ TEST_CASE(test_broadcast_tx_burst_child_tx_fail),
TEST_CASE(test_broadcast_rx_burst),
TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
TEST_CASE(test_broadcast_verify_mac_assignment),
- TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_broadcast_verify_child_link_status_change_behaviour),
TEST_CASE(test_reconfigure_bonded_device),
TEST_CASE(test_close_bonded_device),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b89..b20ad9c4000d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define CHILD_COUNT (4)
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
#define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
-#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
+#define CHILD_DEV_NAME_FMT ("net_virt_%d")
+#define CHILD_RX_QUEUE_FMT ("net_virt_%d_rx")
+#define CHILD_TX_QUEUE_FMT ("net_virt_%d_tx")
#define INVALID_SOCKET_ID (-1)
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr child_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
};
-struct slave_conf {
+struct child_conf {
struct rte_ring *rx_queue;
struct rte_ring *tx_queue;
uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
struct link_bonding_unittest_params {
uint8_t bonded_port_id;
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct child_conf child_ports[CHILD_COUNT];
struct rte_mempool *mbuf_pool;
};
-#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_CHILD_COUNT RTE_DIM(test_params.child_ports)
+#define TEST_RX_CHILD_COUT TEST_DEFAULT_CHILD_COUNT
+#define TEST_TX_CHILD_COUNT TEST_DEFAULT_CHILD_COUNT
+#define TEST_MARKER_CHILD_COUT TEST_DEFAULT_CHILD_COUNT
+#define TEST_EXPIRED_CHILD_COUNT TEST_DEFAULT_CHILD_COUNT
+#define TEST_PROMISC_CHILD_COUNT TEST_DEFAULT_CHILD_COUNT
static struct link_bonding_unittest_params test_params = {
.bonded_port_id = INVALID_PORT_ID,
- .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+ .child_ports = { [0 ... CHILD_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
.mbuf_pool = NULL,
};
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.child_ports, \
+ RTE_DIM(test_params.child_ports))
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
* in this test and satisfy given condition.
*
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
* _condition condition that need to be checked
*/
#define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
if (!!(_condition))
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a child of a bonded
* device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
* */
-#define FOR_EACH_SLAVE(_i, _slave) \
- FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_CHILD(_i, _child) \
+ FOR_EACH_PORT_IF(_i, _child, (_child)->bonded != 0)
/*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from the child's TX queue.
+ * child child port
* buffer for packets
* size size of buffer
* return number of packets or negative error number
*/
static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+child_get_pkts(struct child_conf *child, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+ return rte_ring_dequeue_burst(child->tx_queue, (void **)buf,
size, NULL);
}
/*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into the child's RX queue.
+ * child child port
* buffer for packets
* size number of packets to be injected
* return number of queued packets or negative error number
*/
static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+child_put_pkts(struct child_conf *child, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+ return rte_ring_enqueue_burst(child->rx_queue, (void **)buf,
size, NULL);
}
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
}
static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_child(struct child_conf *child, uint8_t start)
{
struct rte_ether_addr addr, addr_check;
int retval;
/* Some sanity check */
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
- RTE_VERIFY(slave->bonded == 0);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(test_params.child_ports <= child &&
+ child - test_params.child_ports < (int)RTE_DIM(test_params.child_ports));
+ RTE_VERIFY(child->bonded == 0);
+ RTE_VERIFY(child->port_id != INVALID_PORT_ID);
- rte_ether_addr_copy(&slave_mac_default, &addr);
- addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+ rte_ether_addr_copy(&child_mac_default, &addr);
+ addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
- rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+ rte_eth_dev_mac_addr_remove(child->port_id, &addr);
- TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
- "Failed to set slave MAC address");
+ TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(child->port_id, &addr, 0),
+ "Failed to set child MAC address");
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
- slave->port_id),
- "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
- (uint8_t)(slave - test_params.slave_ports), slave->port_id,
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bonded_port_id,
+ child->port_id),
+ "Failed to add child (idx=%u, id=%u) to bonding (id=%u)",
+ (uint8_t)(child - test_params.child_ports), child->port_id,
test_params.bonded_port_id);
- slave->bonded = 1;
+ child->bonded = 1;
if (start) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
- "Failed to start slave %u", slave->port_id);
+ TEST_ASSERT_SUCCESS(rte_eth_dev_start(child->port_id),
+ "Failed to start child %u", child->port_id);
}
- retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
- TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+ retval = rte_eth_macaddr_get(child->port_id, &addr_check);
+ TEST_ASSERT_SUCCESS(retval, "Failed to get child mac address: %s",
strerror(-retval));
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
- "Slave MAC address is not as expected");
+ "Child MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(child->lacp_parnter_state == 0);
return 0;
}
static int
-remove_slave(struct slave_conf *slave)
+remove_child(struct child_conf *child)
{
- ptrdiff_t slave_idx = slave - test_params.slave_ports;
+ ptrdiff_t child_idx = child - test_params.child_ports;
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+ RTE_VERIFY(test_params.child_ports <= child &&
+ child_idx < (ptrdiff_t)RTE_DIM(test_params.child_ports));
- RTE_VERIFY(slave->bonded == 1);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(child->bonded == 1);
+ RTE_VERIFY(child->port_id != INVALID_PORT_ID);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(child->rx_queue), 0,
+ "Child %u tx queue not empty while removing from bonding.",
+ child->port_id);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(child->rx_queue), 0,
+ "Child %u tx queue not empty while removing from bonding.",
+ child->port_id);
- TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
- slave->port_id), 0,
- "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
- (uint8_t)slave_idx, slave->port_id,
+ TEST_ASSERT_EQUAL(rte_eth_bond_child_remove(test_params.bonded_port_id,
+ child->port_id), 0,
+ "Failed to remove child (idx=%u, id=%u) from bonding (id=%u)",
+ (uint8_t)child_idx, child->port_id,
test_params.bonded_port_id);
- slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ child->bonded = 0;
+ child->lacp_parnter_state = 0;
return 0;
}
static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t child_id, struct rte_mbuf *lacp_pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
- lacpdu_rx_count[slave_id]++;
+ lacpdu_rx_count[child_id]++;
rte_pktmbuf_free(lacp_pkt);
}
static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_children(uint16_t child_count, uint8_t external_sm)
{
uint8_t i;
int ret;
RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
- for (i = 0; i < slave_count; i++) {
- TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+ for (i = 0; i < child_count; i++) {
+ TEST_ASSERT_SUCCESS(add_child(&test_params.child_ports[i], 1),
"Failed to add port %u to bonded device.\n",
- test_params.slave_ports[i].port_id);
+ test_params.child_ports[i].port_id);
}
/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
{
- struct slave_conf *slave;
+ struct child_conf *child;
int retval;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
uint16_t i;
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
"Failed to stop bonded port %u",
test_params.bonded_port_id);
- FOR_EACH_SLAVE(i, slave)
- remove_slave(slave);
+ FOR_EACH_CHILD(i, child)
+ remove_child(child);
- retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
- RTE_DIM(slaves));
+ retval = rte_eth_bond_children_get(test_params.bonded_port_id, children,
+ RTE_DIM(children));
TEST_ASSERT_EQUAL(retval, 0,
- "Expected bonded device %u have 0 slaves but returned %d.",
+ "Expected bonded device %u have 0 children but returned %d.",
test_params.bonded_port_id, retval);
- FOR_EACH_PORT(i, slave) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+ FOR_EACH_PORT(i, child) {
+ TEST_ASSERT_SUCCESS(rte_eth_dev_stop(child->port_id),
"Failed to stop bonded port %u",
- slave->port_id);
+ child->port_id);
- TEST_ASSERT(slave->bonded == 0,
- "Port id=%u is still marked as enslaved.", slave->port_id);
+ TEST_ASSERT(child->bonded == 0,
+ "Port id=%u is still marked as enchildd.", child->port_id);
}
return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
{
int retval, nb_mbuf_per_pool;
char name[RTE_ETH_NAME_MAX_LEN];
- struct slave_conf *port;
+ struct child_conf *port;
const uint8_t socket_id = rte_socket_id();
uint16_t i;
@@ -400,10 +400,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(i, port) {
- port = &test_params.slave_ports[i];
+ port = &test_params.child_ports[i];
if (port->rx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), CHILD_RX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
}
if (port->tx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), CHILD_TX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
}
if (port->port_id == INVALID_PORT_ID) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), CHILD_DEV_NAME_FMT, i);
TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
retval = rte_eth_from_rings(name, &port->rx_queue, 1,
&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct child_conf *port;
uint8_t i;
/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
* frame but not LACP
*/
static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct child_conf *child, struct rte_mbuf *pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
/* Change source address to partner address */
rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ child->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ child->lacp_parnter_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
}
/*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given child, searches for LACP packets and replies to them.
*
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from child. Looks for LACP packet. Drops
* all other packets. Prepares response LACP and sends it back.
*
* return number of LACP received and replied, -1 on error.
*/
static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct child_conf *child)
{
int retval;
struct rte_mbuf *rx_buf[MAX_PKT_BURST];
struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
uint16_t lacp_tx_buf_cnt = 0, i;
- retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
- TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
- slave->port_id);
+ retval = child_get_pkts(child, rx_buf, RTE_DIM(rx_buf));
+ TEST_ASSERT(retval >= 0, "Getting child %u packets failed.",
+ child->port_id);
for (i = 0; i < (uint16_t)retval; i++) {
- if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+ if (make_lacp_reply(child, rx_buf[i]) == 0) {
/* reply with actor's LACP */
lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
if (lacp_tx_buf_cnt == 0)
return 0;
- retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+ retval = child_put_pkts(child, lacp_tx_buf, lacp_tx_buf_cnt);
if (retval <= lacp_tx_buf_cnt) {
/* retval might be negative */
for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
}
TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
- "Failed to equeue lacp packets into slave %u tx queue.",
- slave->port_id);
+ "Failed to equeue lacp packets into child %u tx queue.",
+ child->port_id);
return lacp_tx_buf_cnt;
}
/*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given child tx queue contains packets that make the mode 4
+ * handshake complete. It will drain the child queue.
* return 0 if handshake not completed, 1 if handshake was complete,
*/
static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct child_conf *child)
{
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return child->lacp_parnter_state == expected_state;
}
static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
static int
bond_handshake(void)
{
- struct slave_conf *slave;
+ struct child_conf *child;
struct rte_mbuf *buf[MAX_PKT_BURST];
uint16_t nb_pkts;
- uint8_t all_slaves_done, i, j;
- uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+ uint8_t all_children_done, i, j;
+ uint8_t status[RTE_DIM(test_params.child_ports)] = { 0 };
const unsigned delay = bond_get_update_timeout_ms();
/* Exchange LACP frames */
- all_slaves_done = 0;
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ all_children_done = 0;
+ for (i = 0; i < 30 && all_children_done == 0; ++i) {
rte_delay_ms(delay);
- all_slaves_done = 1;
- FOR_EACH_SLAVE(j, slave) {
- /* If response already send, skip slave */
+ all_children_done = 1;
+ FOR_EACH_CHILD(j, child) {
+ /* If response already sent, skip child */
if (status[j] != 0)
continue;
- if (bond_handshake_reply(slave) < 0) {
- all_slaves_done = 0;
+ if (bond_handshake_reply(child) < 0) {
+ all_children_done = 0;
break;
}
- status[j] = bond_handshake_done(slave);
+ status[j] = bond_handshake_done(child);
if (status[j] == 0)
- all_slaves_done = 0;
+ all_children_done = 0;
}
nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
}
/* If response didn't send - report failure */
- TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+ TEST_ASSERT_EQUAL(all_children_done, 1, "Bond handshake failed\n");
/* If flags doesn't match - report failure */
- return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+ return all_children_done == 1 ? TEST_SUCCESS : TEST_FAILED;
}
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_CHILD_COUT RTE_DIM(test_params.child_ports)
static int
test_mode4_lacp(void)
{
int retval;
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
/* Test LACP handshake function */
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
{
int retval;
/* Test and verify for Stable mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_STABLE,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify for Bandwidth mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify selection for count mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_COUNT,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
}
static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct child_conf *child,
struct rte_ether_addr *src_mac,
struct rte_ether_addr *dst_mac, uint16_t count)
{
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
if (retval != (int)count)
return retval;
- retval = slave_put_pkts(slave, pkts, count);
+ retval = child_put_pkts(child, pkts, count);
if (retval > 0 && retval != count)
free_pkts(&pkts[retval], count - retval);
TEST_ASSERT_EQUAL(retval, count,
- "Failed to enqueue packets into slave %u RX queue", slave->port_id);
+ "Failed to enqueue packets into child %u RX queue", child->port_id);
return TEST_SUCCESS;
}
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
static int
test_mode4_rx(void)
{
- struct slave_conf *slave;
+ struct child_conf *child;
uint16_t i, j;
uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
struct rte_ether_addr dst_mac;
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_children(TEST_PROMISC_CHILD_COUNT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -838,7 +838,7 @@ test_mode4_rx(void)
dst_mac.addr_bytes[0] += 2;
/* First try with promiscuous mode enabled.
- * Add 2 packets to each slave. First with bonding MAC address, second with
+ * Add 2 packets to each child. First with bonding MAC address, second with
* different. Check if we received all of them. */
retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_CHILD(i, child) {
+ retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+ child->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+ child->port_id);
- /* Expect 2 packets per slave */
+ /* Expect 2 packets per child */
expected_pkts_cnt += 2;
}
@@ -894,16 +894,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_CHILD(i, child) {
+ retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+ child->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+ child->port_id);
- /* Expect only one packet per slave */
+ /* Expect only one packet per child */
expected_pkts_cnt += 1;
}
@@ -927,19 +927,19 @@ test_mode4_rx(void)
TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
"Expected %u packets but received only %d", expected_pkts_cnt, retval);
- /* Link down test: simulate link down for first slave. */
+ /* Link down test: simulate link down for first child. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t child_down_id = INVALID_PORT_ID;
- /* Find first slave and make link down on it*/
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ /* Find first child and make link down on it */
+ FOR_EACH_CHILD(i, child) {
+ rte_eth_dev_set_link_down(child->port_id);
+ child_down_id = child->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(child_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding */
for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
- /* Put packet to each slave */
- FOR_EACH_SLAVE(i, slave) {
+ /* Put packet to each child */
+ FOR_EACH_CHILD(i, child) {
void *pkt = NULL;
- dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+ dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
+ retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
- src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+ src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
+ retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
if (retval > 0)
free_pkts(pkts, retval);
- while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+ while (rte_ring_dequeue(child->rx_queue, (void **)&pkt) == 0)
rte_pktmbuf_free(pkt);
- if (slave_down_id == slave->port_id)
+ if (child_down_id == child->port_id)
TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
else
TEST_ASSERT_NOT_EQUAL(retval, 0,
- "Expected to receive some packets on slave %u.",
- slave->port_id);
- rte_eth_dev_start(slave->port_id);
+ "Expected to receive some packets on child %u.",
+ child->port_id);
+ rte_eth_dev_start(child->port_id);
for (j = 0; j < 5; j++) {
- TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+ TEST_ASSERT(bond_handshake_reply(child) >= 0,
"Handshake after link up");
- if (bond_handshake_done(slave) == 1)
+ if (bond_handshake_done(child) == 1)
break;
}
- TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate child after link up");
}
- remove_slaves_and_stop_bonded_device();
+ remove_children_and_stop_bonded_device();
return TEST_SUCCESS;
}
static int
test_mode4_tx_burst(void)
{
- struct slave_conf *slave;
+ struct child_conf *child;
uint16_t i, j;
uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets were transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every child should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_CHILD(i, child) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = child_get_pkts(child, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(child, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+ "child %u unexpectedly transmitted %d SLOW packets", child->port_id,
slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "child %u did not transmitted any packets", child->port_id);
pkts_cnt += normal_cnt;
}
@@ -1069,18 +1069,18 @@ test_mode4_tx_burst(void)
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
/* Link down test:
- * simulate link down for first slave. */
+ * simulate link down for first child. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t child_down_id = INVALID_PORT_ID;
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ FOR_EACH_CHILD(i, child) {
+ rte_eth_dev_set_link_down(child->port_id);
+ child_down_id = child->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(child_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding. */
for (i = 0; i < 3; i++) {
@@ -1110,19 +1110,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets was transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every child should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_CHILD(i, child) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = child_get_pkts(child, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(child, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1130,17 +1130,17 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
- if (slave_down_id == slave->port_id) {
+ if (child_down_id == child->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
- normal_cnt + slow_cnt, slave->port_id);
+ "child %u enexpectedly transmitted %u packets",
+ normal_cnt + slow_cnt, child->port_id);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets",
- slave->port_id, slow_cnt);
+ "child %u unexpectedly transmitted %d SLOW packets",
+ child->port_id, slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "child %u did not transmitted any packets", child->port_id);
}
pkts_cnt += normal_cnt;
@@ -1149,11 +1149,11 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- return remove_slaves_and_stop_bonded_device();
+ return remove_children_and_stop_bonded_device();
}
static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct child_conf *child)
{
struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
struct marker_header *);
@@ -1166,7 +1166,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
rte_ether_addr_copy(&parnter_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ child->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
@@ -1177,7 +1177,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
offsetof(struct marker, reserved_90) -
offsetof(struct marker, requester_port);
RTE_VERIFY(marker_hdr->marker.info_length == 16);
- marker_hdr->marker.requester_port = slave->port_id + 1;
+ marker_hdr->marker.requester_port = child->port_id + 1;
marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
marker_hdr->marker.terminator_length = 0;
}
@@ -1185,7 +1185,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
static int
test_mode4_marker(void)
{
- struct slave_conf *slave;
+ struct child_conf *child;
struct rte_mbuf *pkts[MAX_PKT_BURST];
struct rte_mbuf *marker_pkt;
struct marker_header *marker_hdr;
@@ -1196,7 +1196,7 @@ test_mode4_marker(void)
uint8_t i, j;
const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
- retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+ retval = initialize_bonded_device_with_children(TEST_MARKER_CHILD_COUT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -1205,17 +1205,17 @@ test_mode4_marker(void)
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
delay = bond_get_update_timeout_ms();
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_CHILD(i, child) {
marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
- init_marker(marker_pkt, slave);
+ init_marker(marker_pkt, child);
- retval = slave_put_pkts(slave, &marker_pkt, 1);
+ retval = child_put_pkts(child, &marker_pkt, 1);
if (retval != 1)
rte_pktmbuf_free(marker_pkt);
TEST_ASSERT_EQUAL(retval, 1,
- "Failed to send marker packet to slave %u", slave->port_id);
+ "Failed to send marker packet to child %u", child->port_id);
for (j = 0; j < 20; ++j) {
rte_delay_ms(delay);
@@ -1233,13 +1233,13 @@ test_mode4_marker(void)
/* Check if LACP packet was send by state machines
First and only packet must be a maker response */
- retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+ retval = child_get_pkts(child, pkts, MAX_PKT_BURST);
if (retval == 0)
continue;
if (retval > 1)
free_pkts(pkts, retval);
- TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+ TEST_ASSERT_EQUAL(retval, 1, "failed to get child packets");
nb_pkts = retval;
marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1263,7 @@ test_mode4_marker(void)
TEST_ASSERT(j < 20, "Marker response not found");
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1272,7 +1272,7 @@ test_mode4_marker(void)
static int
test_mode4_expired(void)
{
- struct slave_conf *slave, *exp_slave = NULL;
+ struct child_conf *child, *exp_child = NULL;
struct rte_mbuf *pkts[MAX_PKT_BURST];
int retval;
uint32_t old_delay;
@@ -1282,7 +1282,7 @@ test_mode4_expired(void)
struct rte_eth_bond_8023ad_conf conf;
- retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_children(TEST_EXPIRED_CHILD_COUNT,
0);
/* Set custom timeouts to make test last shorter. */
rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1298,8 @@ test_mode4_expired(void)
/* Wait for new settings to be applied. */
for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
- FOR_EACH_SLAVE(j, slave)
- bond_handshake_reply(slave);
+ FOR_EACH_CHILD(j, child)
+ bond_handshake_reply(child);
rte_delay_ms(conf.update_timeout_ms);
}
@@ -1307,13 +1307,13 @@ test_mode4_expired(void)
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- /* Find first slave */
- FOR_EACH_SLAVE(i, slave) {
- exp_slave = slave;
+ /* Find first child */
+ FOR_EACH_CHILD(i, child) {
+ exp_child = child;
break;
}
- RTE_VERIFY(exp_slave != NULL);
+ RTE_VERIFY(exp_child != NULL);
/* When one of partners do not send or respond to LACP frame in
* conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1325,16 @@ test_mode4_expired(void)
TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
retval);
- FOR_EACH_SLAVE(i, slave) {
- retval = bond_handshake_reply(slave);
+ FOR_EACH_CHILD(i, child) {
+ retval = bond_handshake_reply(child);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that suppose to be expired. */
- if (slave == exp_slave) {
- while (rte_ring_count(slave->rx_queue) > 0) {
+ /* Remove reply for child that is supposed to be expired. */
+ if (child == exp_child) {
+ while (rte_ring_count(child->rx_queue) > 0) {
void *pkt = NULL;
- rte_ring_dequeue(slave->rx_queue, &pkt);
+ rte_ring_dequeue(child->rx_queue, &pkt);
rte_pktmbuf_free(pkt);
}
}
@@ -1348,17 +1348,17 @@ test_mode4_expired(void)
retval);
}
- /* After test only expected slave should be in EXPIRED state */
- FOR_EACH_SLAVE(i, slave) {
- if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
- "Slave %u should be in expired.", slave->port_id);
+ /* After test only expected child should be in EXPIRED state */
+ FOR_EACH_CHILD(i, child) {
+ if (child == exp_child)
+ TEST_ASSERT(child->lacp_parnter_state & STATE_EXPIRED,
+ "Child %u should be in expired.", child->port_id);
else
- TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
- "Slave %u should be operational.", slave->port_id);
+ TEST_ASSERT_EQUAL(bond_handshake_done(child), 1,
+ "Child %u should be operational.", child->port_id);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1372,17 +1372,17 @@ test_mode4_ext_ctrl(void)
* . try to transmit lacpdu (should fail)
* . try to set collecting and distributing flags (should fail)
* reconfigure w/external sm
- * . transmit one lacpdu on each slave using new api
- * . make sure each slave receives one lacpdu using the callback api
- * . transmit one data pdu on each slave (should fail)
+ * . transmit one lacpdu on each child using new api
+ * . make sure each child receives one lacpdu using the callback api
+ * . transmit one data pdu on each child (should fail)
* . enable distribution and collection, send one data pdu each again
*/
int retval;
- struct slave_conf *slave = NULL;
+ struct child_conf *child = NULL;
uint8_t i;
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[CHILD_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1396,30 +1396,30 @@ test_mode4_ext_ctrl(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < CHILD_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_CHILD(i, child) {
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]),
- "Slave should not allow manual LACP xmit");
+ child->port_id, lacp_tx_buf[i]),
+ "Child should not allow manual LACP xmit");
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
test_params.bonded_port_id,
- slave->port_id, 1),
- "Slave should not allow external state controls");
+ child->port_id, 1),
+ "Child should not allow external state controls");
}
free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
return TEST_SUCCESS;
@@ -1430,13 +1430,13 @@ static int
test_mode4_ext_lacp(void)
{
int retval;
- struct slave_conf *slave = NULL;
- uint8_t all_slaves_done = 0, i;
+ struct child_conf *child = NULL;
+ uint8_t all_children_done = 0, i;
uint16_t nb_pkts;
const unsigned int delay = bond_get_update_timeout_ms();
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
- struct rte_mbuf *buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[CHILD_COUNT];
+ struct rte_mbuf *buf[CHILD_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1450,14 +1450,14 @@ test_mode4_ext_lacp(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < CHILD_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+ retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1466,22 @@ test_mode4_ext_lacp(void)
for (i = 0; i < 30; ++i)
rte_delay_ms(delay);
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_CHILD(i, child) {
retval = rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]);
+ child->port_id, lacp_tx_buf[i]);
TEST_ASSERT_SUCCESS(retval,
- "Slave should allow manual LACP xmit");
+ "Child should allow manual LACP xmit");
}
nb_pkts = bond_tx(NULL, 0);
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
- FOR_EACH_SLAVE(i, slave) {
- nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
- TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+ FOR_EACH_CHILD(i, child) {
+ nb_pkts = child_get_pkts(child, buf, RTE_DIM(buf));
+ TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on child %d\n",
nb_pkts, i);
- slave_put_pkts(slave, buf, nb_pkts);
+ child_put_pkts(child, buf, nb_pkts);
}
nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1489,26 @@ test_mode4_ext_lacp(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
/* wait for the periodic callback to run */
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ for (i = 0; i < 30 && all_children_done == 0; ++i) {
uint8_t s, total = 0;
rte_delay_ms(delay);
- FOR_EACH_SLAVE(s, slave) {
- total += lacpdu_rx_count[slave->port_id];
+ FOR_EACH_CHILD(s, child) {
+ total += lacpdu_rx_count[child->port_id];
}
- if (total >= SLAVE_COUNT)
- all_slaves_done = 1;
+ if (total >= CHILD_COUNT)
+ all_children_done = 1;
}
- FOR_EACH_SLAVE(i, slave) {
- TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
- "Slave port %u should have received 1 lacpdu (count=%u)",
- slave->port_id,
- lacpdu_rx_count[slave->port_id]);
+ FOR_EACH_CHILD(i, child) {
+ TEST_ASSERT_EQUAL(lacpdu_rx_count[child->port_id], 1,
+ "Child port %u should have received 1 lacpdu (count=%u)",
+ child->port_id,
+ lacpdu_rx_count[child->port_id]);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_children_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1517,10 +1517,10 @@ test_mode4_ext_lacp(void)
static int
check_environment(void)
{
- struct slave_conf *port;
+ struct child_conf *port;
uint8_t i, env_state;
- uint16_t slaves[RTE_DIM(test_params.slave_ports)];
- int slaves_count;
+ uint16_t children[RTE_DIM(test_params.child_ports)];
+ int children_count;
env_state = 0;
FOR_EACH_PORT(i, port) {
@@ -1540,20 +1540,20 @@ check_environment(void)
break;
}
- slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
- slaves, RTE_DIM(slaves));
+ children_count = rte_eth_bond_children_get(test_params.bonded_port_id,
+ children, RTE_DIM(children));
- if (slaves_count != 0)
+ if (children_count != 0)
env_state |= 0x10;
TEST_ASSERT_EQUAL(env_state, 0,
"Environment not clean (port %u):%s%s%s%s%s",
port->port_id,
- env_state & 0x01 ? " slave rx queue not clean" : "",
- env_state & 0x02 ? " slave tx queue not clean" : "",
- env_state & 0x04 ? " port marked as enslaved" : "",
- env_state & 0x80 ? " slave state is not reset" : "",
- env_state & 0x10 ? " slave count not equal 0" : ".");
+ env_state & 0x01 ? " child rx queue not clean" : "",
+ env_state & 0x02 ? " child tx queue not clean" : "",
+ env_state & 0x04 ? " port marked as enchildd" : "",
+ env_state & 0x80 ? " child state is not reset" : "",
+ env_state & 0x10 ? " child count not equal 0" : ".");
return TEST_SUCCESS;
@@ -1562,7 +1562,7 @@ check_environment(void)
static int
test_mode4_executor(int (*test_func)(void))
{
- struct slave_conf *port;
+ struct child_conf *port;
int test_result;
uint8_t i;
void *pkt;
@@ -1581,7 +1581,7 @@ test_mode4_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
"Failed to stop bonded device");
FOR_EACH_PORT(i, port) {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0bf..b1eee6bd4d5a 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define CHILD_COUNT (4)
#define RXTX_RING_SIZE 1024
#define RXTX_QUEUE_COUNT 4
#define BONDED_DEV_NAME ("net_bonding_rss")
-#define SLAVE_DEV_NAME_FMT ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
+#define CHILD_DEV_NAME_FMT ("net_null%d")
+#define CHILD_RXTX_QUEUE_FMT ("rssconf_child%d_q%d")
#define NUM_MBUFS 8191
#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-struct slave_conf {
+struct child_conf {
uint16_t port_id;
struct rte_eth_dev_info dev_info;
@@ -54,7 +54,7 @@ struct slave_conf {
uint8_t rss_key[40];
struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- uint8_t is_slave;
+ uint8_t is_child;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
};
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct child_conf child_ports[CHILD_COUNT];
struct rte_mempool *mbuf_pool;
};
static struct link_bonding_rssconf_unittest_params test_params = {
.bond_port_id = INVALID_PORT_ID,
- .slave_ports = {
- [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+ .child_ports = {
+ [0 ... CHILD_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_child = 0}
},
.mbuf_pool = NULL,
};
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.child_ports, \
+ RTE_DIM(test_params.child_ports))
static int
configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
}
/**
- * Remove all slaves from bonding
+ * Remove all children from bonding
*/
static int
-remove_slaves(void)
+remove_children(void)
{
unsigned n;
- struct slave_conf *port;
+ struct child_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+ port = &test_params.child_ports[n];
+ if (port->is_child) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(
test_params.bond_port_id, port->port_id),
- "Cannot remove slave %d from bonding", port->port_id);
- port->is_slave = 0;
+ "Cannot remove child %d from bonding", port->port_id);
+ port->is_child = 0;
}
}
@@ -173,30 +173,30 @@ remove_slaves(void)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
{
- TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+ TEST_ASSERT_SUCCESS(remove_children(), "Removing children");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
"Failed to stop port %u", test_params.bond_port_id);
return TEST_SUCCESS;
}
/**
- * Add all slaves to bonding
+ * Add all children to bonding
*/
static int
-bond_slaves(void)
+bond_children(void)
{
unsigned n;
- struct slave_conf *port;
+ struct child_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (!port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot attach slave %d to the bonding",
+ port = &test_params.child_ports[n];
+ if (!port->is_child) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bond_port_id,
+ port->port_id), "Cannot attach child %d to the bonding",
port->port_id);
- port->is_slave = 1;
+ port->is_child = 1;
}
}
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
}
/**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if children RETA is synchronized with bonding port. Returns 1 if child
* port is synced with bonding port.
*/
static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct child_conf *port)
{
unsigned i;
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
}
/**
- * Fetch slaves RETA
+ * Fetch children RETA
*/
static int
-slave_reta_fetch(struct slave_conf *port) {
+child_reta_fetch(struct child_conf *port) {
unsigned j;
for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
}
/**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add child to check if children configuration is synced with
+ * the bonding ports values after adding new child.
*/
static int
-slave_remove_and_add(void)
+child_remove_and_add(void)
{
- struct slave_conf *port = &(test_params.slave_ports[0]);
+ struct child_conf *port = &(test_params.child_ports[0]);
- /* 1. Remove first slave from bonding */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
- port->port_id), "Cannot remove slave #d from bonding");
+ /* 1. Remove first child from bonding */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(test_params.bond_port_id,
+ port->port_id), "Cannot remove child #d from bonding");
- /* 2. Change removed (ex-)slave and bonding configuration to different
+ /* 2. Change removed (ex-)child and bonding configuration to different
* values
*/
reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
bond_reta_fetch();
reta_set(port->port_id, 2, port->dev_info.reta_size);
- slave_reta_fetch(port);
+ child_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 0,
- "Removed slave didn't should be synchronized with bonding port");
+ "Removed child didn't should be synchronized with bonding port");
- /* 3. Add (ex-)slave and check if configuration changed*/
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot add slave");
+ /* 3. Add (ex-)child and check if configuration changed*/
+ TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bond_port_id,
+ port->port_id), "Cannot add child");
bond_reta_fetch();
- slave_reta_fetch(port);
+ child_reta_fetch(port);
return reta_check_synced(port);
}
/**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over children.
*/
static int
test_propagate(void)
{
unsigned i;
uint8_t n;
- struct slave_conf *port;
+ struct child_conf *port;
uint8_t bond_rss_key[40];
struct rte_eth_rss_conf bond_rss_conf;
@@ -349,18 +349,18 @@ test_propagate(void)
retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
&bond_rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set children hash function");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take children RSS configuration");
TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
- "Hash function not propagated for slave %d",
+ "Hash function not propagated for child %d",
port->port_id);
}
@@ -376,11 +376,11 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
memset(port->rss_conf.rss_key, 0, 40);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set children RSS keys");
}
memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&(port->rss_conf));
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take children RSS configuration");
/* compare keys */
retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
sizeof(bond_rss_key));
- TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+ TEST_ASSERT(retval == 0, "Key value not propagated for child %d",
port->port_id);
}
}
@@ -416,10 +416,10 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set children RETA");
}
TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
bond_reta_fetch();
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
- slave_reta_fetch(port);
+ child_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
}
}
@@ -459,29 +459,29 @@ test_rss(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_children(), "Bonding children failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
- TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+ TEST_ASSERT(child_remove_and_add() == 1, "remove and add children failed.");
- remove_slaves_and_stop_bonded_device();
+ remove_children_and_stop_bonded_device();
return TEST_SUCCESS;
}
/**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and children.
*/
static int
test_rss_config_lazy(void)
{
struct rte_eth_rss_conf bond_rss_conf = {0};
- struct slave_conf *port;
+ struct child_conf *port;
uint8_t rss_key[40];
uint64_t rss_hf;
int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
}
- /* Set all keys to zero for all slaves */
+ /* Set all keys to zero for all children */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+ TEST_ASSERT_SUCCESS(retval, "Cannot get children RSS configuration");
memset(port->rss_key, 0, sizeof(port->rss_key));
port->rss_conf.rss_key = port->rss_key;
port->rss_conf.rss_key_len = sizeof(port->rss_key);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+ TEST_ASSERT(retval != 0, "Succeeded in setting children RSS keys");
}
/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
/* Test RETA propagation */
for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+ TEST_ASSERT(retval != 0, "Succeeded in setting children RETA");
}
retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_children(), "Bonding children failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
- remove_slaves_and_stop_bonded_device();
+ remove_children_and_stop_bonded_device();
return TEST_SUCCESS;
}
@@ -579,13 +579,13 @@ test_setup(void)
int retval;
int port_id;
char name[256];
- struct slave_conf *port;
+ struct child_conf *port;
struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
if (test_params.mbuf_pool == NULL) {
test_params.mbuf_pool = rte_pktmbuf_pool_create(
- "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+ "RSS_MBUF_POOL", NUM_MBUFS * CHILD_COUNT,
MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.child_ports[n];
port_id = rte_eth_dev_count_avail();
- snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+ snprintf(name, sizeof(name), CHILD_DEV_NAME_FMT, port_id);
retval = rte_vdev_init(name, "size=64,copy=0");
TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct child_conf *port;
uint8_t i;
/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
"Failed to stop bonded device");
}
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214ef9..a64a04247c0e 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
----------
A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as children to the bonded device.
+The VF is set as the primary child of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
testpmd> create bonded device 1 0
Created new bonded device net_bond_testpmd_0 on (port 2).
- testpmd> add bonding slave 0 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding child 0 2
+ testpmd> add bonding child 1 2
testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:
-set bonding primary (slave id) (port id)
+set bonding primary (child id) (port id)
Set primary to P1 before starting bonding port.
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active children.
Use P2 only for forwarding.
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
testpmd> start
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active children.
.. code-block:: console
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
testpmd> clear port stats all
testpmd> set bonding primary 0 2
- testpmd> remove bonding slave 1 2
+ testpmd> remove bonding child 1 2
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active child.
.. code-block:: console
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active child.
.. code-block:: console
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
testpmd> show port stats all.
testpmd> show config fwd
testpmd> show bonding config 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding child 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console
- testpmd> remove bonding slave 0 2
+ testpmd> remove bonding child 0 2
testpmd> show bonding config 2
testpmd> port stop 0
testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a7b..dd91264cd8a2 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
.. code-block:: console
- dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
- (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+ dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,child=<PCI B:D.F device 1>,child=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket_num=1 -- -i --port-topology=chained
+ (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,child=0000:82:00.0,child=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -- -i --port-topology=chained
Vector Processing
-----------------
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e356d..f07bb281a727 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
``rte_eth_dev`` ports of the same speed and duplex to provide similar
capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (child) NICs into a single logical interface between a server
and a switch. The new bonded PMD will then process these interfaces based on
the mode of operation specified to provide support for features such as
redundant links, fault tolerance and/or load balancing.
The librte_net_bond library exports a C API which provides an API for the
creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its child devices.
.. note::
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides load balancing and fault tolerance by transmission of
- packets in sequential order from the first available slave device through
+ packets in sequential order from the first available child device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
Active Backup (Mode 1)
- In this mode only one slave in the bond is active at any time, a different
- slave becomes active if, and only if, the primary active slave fails,
- thereby providing fault tolerance to slave failure. The single logical
+ In this mode only one child in the bond is active at any time, a different
+ child becomes active if, and only if, the primary active child fails,
+ thereby providing fault tolerance to child failure. The single logical
bonded interface's MAC address is externally visible on only one NIC (port)
to avoid confusing the network switch.
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides transmit load balancing (based on the selected
transmission policy) and fault tolerance. The default policy (layer2) uses
a simple calculation based on the packet flow source and destination MAC
- addresses as well as the number of active slaves available to the bonded
- device to classify the packet to a specific slave to transmit on. Alternate
+ addresses as well as the number of active children available to the bonded
+ device to classify the packet to a specific child to transmit on. Alternate
transmission policies supported are layer 2+3, this takes the IP source and
- destination addresses into the calculation of the transmit slave port and
+ destination addresses into the calculation of the transmit child port and
the final supported policy is layer 3+4, this uses IP source and
destination addresses as well as the TCP/UDP source and destination port.
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
Broadcast (Mode 3)
- This mode provides fault tolerance by transmission of packets on all slave
+ This mode provides fault tolerance by transmission of packets on all child
ports.
* **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
intervals period of less than 100ms.
#. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
- where N is the number of slaves. This is a space required for LACP
+ where N is the number of children. This is a space required for LACP
frames. Additionally LACP packets are included in the statistics, but
they are not returned to the application.
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides an adaptive transmit load balancing. It dynamically
- changes the transmitting slave, according to the computed load. Statistics
+ changes the transmitting child, according to the computed load. Statistics
are collected in 100ms intervals and scheduled every 10ms.
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
programmatically via the C API ``rte_eth_bond_create`` function.
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of child devices using
+the ``rte_eth_bond_child_add`` / ``rte_eth_bond_child_remove`` APIs.
-After a slave device is added to a bonded device slave is stopped using
+After a child device is added to a bonded device, the child is stopped using
``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+child and configured as well.
Any flow which was configured to the bond device also is configured to the added
-slave.
+child.
Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all children are synchronized with its configuration. This mode is
+intended to provide RSS configuration on children transparent for client
application implementation.
Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its children. This lets us define the meaning
of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without referring to any individual child. This is required to ensure
consistency and made it more error-proof.
RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded children. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if child
RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the children and default key for device is used.
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with RSS configuration, there is flow consistency in the bonded children for the
next rte flow operations:
Validate:
- - Validate flow for each slave, failure at least for one slave causes to
+ - Validate flow for each child; failure for at least one child causes
bond validation failure.
Create:
- - Create the flow in all slaves.
- - Save all the slaves created flows objects in bonding internal flow
+ - Create the flow in all children.
+ - Save all the children created flows objects in bonding internal flow
structure.
- - Failure in flow creation for existed slave rejects the flow.
- - Failure in flow creation for new slaves in slave adding time rejects
- the slave.
+ - Failure in flow creation for an existing child rejects the flow.
+ - Failure in flow creation for a new child at child add time rejects
+ the child.
Destroy:
- - Destroy the flow in all slaves and release the bond internal flow
+ - Destroy the flow in all children and release the bond internal flow
memory.
Flush:
- - Destroy all the bonding PMD flows in all the slaves.
+ - Destroy all the bonding PMD flows in all the children.
.. note::
- Don't call slaves flush directly, It destroys all the slave flows which
+ Don't call flush on the children directly; it destroys all the child flows, which
may include external flows or the bond internal LACP flow.
Query:
- - Summarize flow counters from all the slaves, relevant only for
+ - Summarize flow counters from all the children, relevant only for
``RTE_FLOW_ACTION_TYPE_COUNT``.
Isolate:
- - Call to flow isolate for all slaves.
- - Failure in flow isolation for existed slave rejects the isolate mode.
- - Failure in flow isolation for new slaves in slave adding time rejects
- the slave.
+ - Call to flow isolate for all children.
+ - Failure in flow isolation for an existing child rejects the isolate mode.
+ - Failure in flow isolation for a new child at child add time rejects
+ the child.
All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to children).
Link Status Change Interrupts / Polling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
Link bonding devices support the registration of a link status change callback,
using the ``rte_eth_dev_callback_register`` API, this will be called when the
status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 children, the link status will change to up when one child
+becomes active or change to down when all children become inactive. There is no
+callback notification when a single child changes state and the previous
+conditions are not met. If a user wishes to monitor individual children then they
+must register callbacks with that child directly.
The link bonding library also supports devices which do not implement link
status change interrupts, this is achieved by polling the devices link status at
a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a child to
a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
whether the device supports interrupts or whether the link status should be
monitored by polling it.
@@ -233,30 +233,30 @@ Requirements / Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as children to the same bonded device. The bonded device
+inherits these attributes from the first active child added to the bonded
+device and then all further children added to the bonded device must support
these parameters.
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one child before the bonding device
itself can be started.
To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all children are RSS-capable and support at least one
common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all child devices support the same key size.
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how children process packets, once a device is added
to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the child.
Like all other PMD, all functions exported by a PMD are lock-free functions
that are assumed not to be invoked in parallel on different logical cores to
work on the same target object.
It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on child devices after they have been added to a bonded device, since
+packets read directly from the child device will no longer be available to the
bonded device to read.
Configuration
@@ -265,25 +265,25 @@ Configuration
Link bonding devices are created using the ``rte_eth_bond_create`` API
which requires a unique device name, the bonding mode,
and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its child devices,
+its primary child, a user defined MAC address and transmission policy to use if
the device is in balance XOR mode.
-Slave Devices
+Child Devices
^^^^^^^^^^^^^
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` child devices
+of the same speed and duplex. Ethernet devices can be added as a child to a
+maximum of one bonded device. Child devices are reconfigured with the
configuration of the bonded device on being added to a bonded device.
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the child device to its
+original value upon removal of a child from it.
-Primary Slave
+Primary Child
^^^^^^^^^^^^^
-The primary slave is used to define the default port to use when a bonded
+The primary child is used to define the default port to use when a bonded
device is in active backup mode. A different port will only be used if, and
only if, the current primary port goes down. If the user does not specify a
primary port it will default to being the first port added to the bonded device.
@@ -292,14 +292,14 @@ MAC Address
^^^^^^^^^^^
The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some/all child devices depending on the
operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other children will retain their
+original MAC address. In modes 0, 2, 3 and 4 all child devices are configured with
the bonded devices MAC address.
If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary child's MAC address.
Balance XOR Transmit Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
* **Layer 2:** Ethernet MAC address based balancing is the default
transmission policy for Balance XOR bonding mode. It uses a simple XOR
calculation on the source MAC address and destination MAC address of the
- packet and then calculate the modulus of this value to calculate the slave
+ packet and then calculates the modulus of this value to select the child
device to transmit the packet on.
* **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
combination of source/destination MAC addresses and the source/destination
- IP addresses of the data packet to decide which slave port the packet will
+ IP addresses of the data packet to decide which child port the packet will
be transmitted on.
* **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
of source/destination IP Address and the source/destination UDP ports of
- the packet of the data packet to decide which slave port the packet will be
+ the data packet to decide which child port the packet will be
transmitted on.
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
which will be used must be setup using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup``.
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Child devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_child_add`` / ``rte_eth_bond_child_remove``
+APIs but at least one child device must be added to the link bonding device
before it can be started using ``rte_eth_dev_start``.
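Put together, a minimal bring-up sequence with the renamed API might look as follows; this is a sketch only, ``rte_eth_bond_create()`` is the existing creation helper and the child port ids (0, 1) and queue sizes are placeholders:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mempool.h>
    #include <rte_eth_bond.h>

    /* Sketch: create an active backup bond, attach two children, then
     * configure, set up one Rx/Tx queue pair and start the bonded port. */
    static int
    bring_up_bond(struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf conf = {0};
        int bond_port;

        bond_port = rte_eth_bond_create("net_bonding0",
                                        BONDING_MODE_ACTIVE_BACKUP,
                                        0 /* socket id */);
        if (bond_port < 0)
            return bond_port;

        /* At least one child must be attached before the port is started. */
        if (rte_eth_bond_child_add(bond_port, 0) != 0 ||
            rte_eth_bond_child_add(bond_port, 1) != 0)
            return -1;

        if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0)
            return -1;
        if (rte_eth_rx_queue_setup(bond_port, 0, 128, 0, NULL, mb_pool) != 0 ||
            rte_eth_tx_queue_setup(bond_port, 0, 512, 0, NULL) != 0)
            return -1;

        return rte_eth_dev_start(bond_port);
    }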
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its children; if all
+child device links are down or if all children are removed from the link
bonding device then the link status of the bonding device will go down.
It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
-* A least one slave device is provided with for each bonded device definition.
+* At least one child device is provided for each bonded device definition.
* The operation mode of the bonded device being created is provided.
@@ -404,20 +404,20 @@ The different options are:
mode=2
-* slave: Defines the PMD device which will be added as slave to the bonded
+* child: Defines the PMD device which will be added as a child to the bonded
device. This option can be selected multiple times, for each device to be
- added as a slave. Physical devices should be specified using their PCI
+ added as a child. Physical devices should be specified using their PCI
address, in the format domain:bus:devid.function
.. code-block:: console
- slave=0000:0a:00.0,slave=0000:0a:00.1
+ child=0000:0a:00.0,child=0000:0a:00.1
-* primary: Optional parameter which defines the primary slave port,
- is used in active backup mode to select the primary slave for data TX/RX if
+* primary: Optional parameter which defines the primary child port; it is
+  used in active backup mode to select the primary child for data TX/RX if
it is available. The primary port also is used to select the MAC address to
- use when it is not defined by the user. This defaults to the first slave
- added to the device if it is specified. The primary device must be a slave
+  use when it is not defined by the user. This defaults to the first child
+  added to the device if one is not specified. The primary device must be a child
of the bonded device.
.. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
socket_id=0
* mac: Optional parameter to select a MAC address for link bonding device,
- this overrides the value of the primary slave device.
+ this overrides the value of the primary child device.
.. code-block:: console
@@ -474,29 +474,29 @@ The different options are:
Examples of Usage
^^^^^^^^^^^^^^^^^
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two children specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,child=0000:0a:00.01,child=0000:04:00.00' -- --port-topology=chained
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two children specified by their PCI addresses and an overriding MAC address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,child=0000:0a:00.01,child=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two children specified, and a primary child specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,child=0000:0a:00.01,child=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two children specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,child=0000:0a:00.01,child=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
.. _bonding_testpmd_commands:
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
testpmd> create bonded device 1 0
created new bonded device (port X)
-add bonding slave
+add bonding child
~~~~~~~~~~~~~~~~~
Adds Ethernet device to a Link Bonding device::
- testpmd> add bonding slave (slave id) (port id)
+ testpmd> add bonding child (child id) (port id)
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
- testpmd> add bonding slave 6 10
+ testpmd> add bonding child 6 10
-remove bonding slave
+remove bonding child
~~~~~~~~~~~~~~~~~~~~
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet child device from a Link Bonding device::
- testpmd> remove bonding slave (slave id) (port id)
+ testpmd> remove bonding child (child id) (port id)
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet child device (port 6) from a Link Bonding device (port 10)::
- testpmd> remove bonding slave 6 10
+ testpmd> remove bonding child 6 10
set bonding mode
~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
set bonding primary
~~~~~~~~~~~~~~~~~~~
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet child device as the primary device on a Link Bonding device::
- testpmd> set bonding primary (slave id) (port id)
+ testpmd> set bonding primary (child id) (port id)
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet child device (port 6) as the primary port of a Link Bonding device (port 10)::
testpmd> set bonding primary 6 10
@@ -590,7 +590,7 @@ set bonding mon_period
Set the link status monitoring polling period in milliseconds for a bonding device.
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD child devices which do not support link status interrupts.
When the mon_period is set to a value greater than 0 then all PMD's which do not support
link status ISR will be queried every polling interval to check if their link status has changed::
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
set bonding lacp dedicated_queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on a bonding device's children to handle LACP control plane traffic
when in mode 4 (link-aggregation-802.3ad)::
testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
testpmd> show bonding config (port id)
For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 child devices (1, 3, 4)
in balance mode with a transmission policy of layer 2+3::
testpmd> show bonding config 9
- Dev basic:
Bonding mode: BALANCE(2)
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
- Slaves (3): [1 3 4]
- Active Slaves (3): [1 3 4]
+ Children (3): [1 3 4]
+ Active Children (3): [1 3 4]
Primary: [3]
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 8f2384785930..3e3fb772fd62 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1901,11 +1901,11 @@ In this case, identifier is ``net_pcap0``.
This identifier format is the same as ``--vdev`` format of DPDK applications.
For example, to re-attach a bonded port which has been previously detached,
-the mode and slave parameters must be given.
+the mode and child parameters must be given.
.. code-block:: console
- testpmd> port attach net_bond_0,mode=0,slave=1
+ testpmd> port attach net_bond_0,mode=0,child=1
Attaching a new port...
EAL: Initializing pmd_bond for net_bond_0
EAL: Create bonded device net_bond_0 on port 0 in mode 0 on socket 0.
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada078..c93a2d94883f 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
cmdline_fixed_string_t set;
cmdline_fixed_string_t bonding;
cmdline_fixed_string_t primary;
- portid_t slave_id;
+ portid_t child_id;
portid_t port_id;
};
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
struct cmd_set_bonding_primary_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ portid_t parent_port_id = res->port_id;
+ portid_t child_port_id = res->child_id;
- /* Set the primary slave for a bonded device. */
- if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
- fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
- master_port_id);
+ /* Set the primary child for a bonded device. */
+ if (rte_eth_bond_primary_set(parent_port_id, child_port_id) != 0) {
+ fprintf(stderr, "\t Failed to set primary child for port = %d.\n",
+ parent_port_id);
return;
}
init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_child =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, RTE_UINT16);
+ child_id, RTE_UINT16);
static cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
port_id, RTE_UINT16);
static cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
- .help_str = "set bonding primary <slave_id> <port_id>: "
- "Set the primary slave for port_id",
+ .help_str = "set bonding primary <child_id> <port_id>: "
+ "Set the primary child for port_id",
.data = NULL,
.tokens = {
(void *)&cmd_setbonding_primary_set,
(void *)&cmd_setbonding_primary_bonding,
(void *)&cmd_setbonding_primary_primary,
- (void *)&cmd_setbonding_primary_slave,
+ (void *)&cmd_setbonding_primary_child,
(void *)&cmd_setbonding_primary_port,
NULL
}
};
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD CHILD *** */
+struct cmd_add_bonding_child_result {
cmdline_fixed_string_t add;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t child;
+ portid_t child_id;
portid_t port_id;
};
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_child_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_add_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_add_bonding_child_result *res = parsed_result;
+ portid_t parent_port_id = res->port_id;
+ portid_t child_port_id = res->child_id;
- /* add the slave for a bonded device. */
- if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+ /* add the child for a bonded device. */
+ if (rte_eth_bond_child_add(parent_port_id, child_port_id) != 0) {
fprintf(stderr,
- "\t Failed to add slave %d to master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to add child %d to parent port = %d.\n",
+ child_port_id, parent_port_id);
return;
}
- ports[master_port_id].update_conf = 1;
+ ports[parent_port_id].update_conf = 1;
init_port_config();
- set_port_slave_flag(slave_port_id);
+ set_port_child_flag(child_port_id);
}
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_add =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_child =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
+ child, "child");
+static cmdline_parse_token_num_t cmd_addbonding_child_childid =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_child_result,
+ child_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_child_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_child_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
- .f = cmd_add_bonding_slave_parsed,
- .help_str = "add bonding slave <slave_id> <port_id>: "
- "Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_child = {
+ .f = cmd_add_bonding_child_parsed,
+ .help_str = "add bonding child <child_id> <port_id>: "
+ "Add a child device to a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_addbonding_slave_add,
- (void *)&cmd_addbonding_slave_bonding,
- (void *)&cmd_addbonding_slave_slave,
- (void *)&cmd_addbonding_slave_slaveid,
- (void *)&cmd_addbonding_slave_port,
+ (void *)&cmd_addbonding_child_add,
+ (void *)&cmd_addbonding_child_bonding,
+ (void *)&cmd_addbonding_child_child,
+ (void *)&cmd_addbonding_child_childid,
+ (void *)&cmd_addbonding_child_port,
NULL
}
};
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE CHILD *** */
+struct cmd_remove_bonding_child_result {
cmdline_fixed_string_t remove;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t child;
+ portid_t child_id;
portid_t port_id;
};
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_child_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_remove_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_remove_bonding_child_result *res = parsed_result;
+ portid_t parent_port_id = res->port_id;
+ portid_t child_port_id = res->child_id;
- /* remove the slave from a bonded device. */
- if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+ /* remove the child from a bonded device. */
+ if (rte_eth_bond_child_remove(parent_port_id, child_port_id) != 0) {
fprintf(stderr,
- "\t Failed to remove slave %d from master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to remove child %d from parent port = %d.\n",
+ child_port_id, parent_port_id);
return;
}
init_port_config();
- clear_port_slave_flag(slave_port_id);
+ clear_port_child_flag(child_port_id);
}
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_remove =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_child =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
+ child, "child");
+static cmdline_parse_token_num_t cmd_removebonding_child_childid =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_child_result,
+ child_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_child_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_child_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
- .f = cmd_remove_bonding_slave_parsed,
- .help_str = "remove bonding slave <slave_id> <port_id>: "
- "Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_child = {
+ .f = cmd_remove_bonding_child_parsed,
+ .help_str = "remove bonding child <child_id> <port_id>: "
+ "Remove a child device from a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_removebonding_slave_remove,
- (void *)&cmd_removebonding_slave_bonding,
- (void *)&cmd_removebonding_slave_slave,
- (void *)&cmd_removebonding_slave_slaveid,
- (void *)&cmd_removebonding_slave_port,
+ (void *)&cmd_removebonding_child_remove,
+ (void *)&cmd_removebonding_child_bonding,
+ (void *)&cmd_removebonding_child_child,
+ (void *)&cmd_removebonding_child_childid,
+ (void *)&cmd_removebonding_child_port,
NULL
}
};
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
},
{
&cmd_set_bonding_primary,
- "set bonding primary (slave_id) (port_id)\n"
- " Set the primary slave for a bonded device.\n",
+ "set bonding primary (child_id) (port_id)\n"
+ " Set the primary child for a bonded device.\n",
},
{
- &cmd_add_bonding_slave,
- "add bonding slave (slave_id) (port_id)\n"
- " Add a slave device to a bonded device.\n",
+ &cmd_add_bonding_child,
+ "add bonding child (child_id) (port_id)\n"
+ " Add a child device to a bonded device.\n",
},
{
- &cmd_remove_bonding_slave,
- "remove bonding slave (slave_id) (port_id)\n"
- " Remove a slave device from a bonded device.\n",
+ &cmd_remove_bonding_child,
+ "remove bonding child (child_id) (port_id)\n"
+ " Remove a child device from a bonded device.\n",
},
{
&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1d2..0e5ae90c2bbf 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
#include "rte_eth_bond_8023ad.h"
#define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
+/** Maximum number of packets to one child queued in RX ring. */
+#define BOND_MODE_8023AX_CHILD_RX_PKTS 3
+/** Maximum number of LACP packets from one child queued in TX ring. */
+#define BOND_MODE_8023AX_CHILD_TX_PKTS 1
/**
* Timeouts definitions (5.4.4 in 802.1AX documentation).
*/
@@ -113,7 +113,7 @@ struct port {
enum rte_bond_8023ad_selection selected;
/** Indicates if either allmulti or promisc has been enforced on the
- * slave so that we can receive lacp packets
+ * child so that we can receive lacp packets
*/
#define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
#define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
uint8_t external_sm;
struct rte_ether_addr mac_addr;
- struct rte_eth_link slave_link;
- /***< slave link properties */
+ struct rte_eth_link child_link;
+ /**< child link properties */
/**
* Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
/**
* @internal
*
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active children on bonded interface.
*
* @param dev Bonded interface
* @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
/**
* @internal
*
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and children.
*
* @param dev Bonded interface
* @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
*
* Passes given slow packet to state machines management logic.
* @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param child_id Child port id.
* @param slot_pkt Slow packet.
*/
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt);
+ uint16_t child_id, struct rte_mbuf *pkt);
/**
* @internal
*
- * Appends given slave used slave
+ * Appends given child device
*
* @param dev Bonded interface.
- * @param port_id Slave port ID to be added
+ * @param port_id Child port ID to be added
*
* @return
* 0 on success, negative value otherwise.
*/
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_child(struct rte_eth_dev *dev, uint16_t port_id);
/**
* @internal
*
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given child from 802.1AX mode.
*
* @param dev Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param child_pos Position of child in active_children array
*
* @return
* 0 on success, negative value otherwise.
*/
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_child(struct rte_eth_dev *dev, uint16_t child_pos);
/**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its children.
* @param bond_dev Bonded device
*/
void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port);
+ uint16_t child_port);
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t child_port);
int
bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4b3..d6cbf4293a45 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,13 +18,13 @@
#include "eth_bond_8023ad_private.h"
#include "rte_eth_bond_alb.h"
-#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
-#define PMD_BOND_MODE_KVARG ("mode")
-#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
-#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
-#define PMD_BOND_SOCKET_ID_KVARG ("socket_id")
-#define PMD_BOND_MAC_ADDR_KVARG ("mac")
+#define PMD_BOND_CHILD_PORT_KVARG ("child")
+#define PMD_BOND_PRIMARY_CHILD_KVARG ("primary")
+#define PMD_BOND_MODE_KVARG ("mode")
+#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
+#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
+#define PMD_BOND_SOCKET_ID_KVARG ("socket_id")
+#define PMD_BOND_MAC_ADDR_KVARG ("mac")
#define PMD_BOND_LSC_POLL_PERIOD_KVARG ("lsc_poll_period_ms")
#define PMD_BOND_LINK_UP_PROP_DELAY_KVARG ("up_delay")
#define PMD_BOND_LINK_DOWN_PROP_DELAY_KVARG ("down_delay")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
/** Port Queue Mapping Structure */
struct bond_rx_queue {
uint16_t queue_id;
- /**< Next active_slave to poll */
- uint16_t active_slave;
+ /**< Next active_child to poll */
+ uint16_t active_child;
/**< Queue Id */
struct bond_dev_private *dev_private;
/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
/**< Copy of TX configuration structure for queue */
};
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
- uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
- uint16_t slave_count; /**< Number of slaves */
+/** Bonded child devices structure */
+struct bond_ethdev_child_ports {
+ uint16_t children[RTE_MAX_ETHPORTS]; /**< Child port id array */
+ uint16_t child_count; /**< Number of children */
};
-struct bond_slave_details {
+struct bond_child_details {
uint16_t port_id;
uint8_t link_status_poll_enabled;
uint8_t link_status_wait_to_complete;
uint8_t last_link_status;
- /**< Port Id of slave eth_dev */
+ /**< Port Id of child eth_dev */
struct rte_ether_addr persisted_mac_addr;
uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next;
- /* Slaves flows */
+ /* Children flows */
struct rte_flow *flows[RTE_MAX_ETHPORTS];
/* Flow description for synchronization */
struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
};
typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t child_count, uint16_t *children);
/** Link Bonding PMD device private configuration Structure */
struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
rte_spinlock_t lock;
rte_spinlock_t lsc_lock;
- uint16_t primary_port; /**< Primary Slave Port */
- uint16_t current_primary_port; /**< Primary Slave Port */
+ uint16_t primary_port; /**< Primary Child Port */
+ uint16_t current_primary_port; /**< Currently active primary Child Port */
uint16_t user_defined_primary_port;
/**< Flag for whether primary port is user defined or not */
@@ -137,16 +137,16 @@ struct bond_dev_private {
uint16_t nb_rx_queues; /**< Total number of rx queues */
uint16_t nb_tx_queues; /**< Total number of tx queues*/
- uint16_t active_slave_count; /**< Number of active slaves */
- uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
+ uint16_t active_child_count; /**< Number of active children */
+ uint16_t active_children[RTE_MAX_ETHPORTS]; /**< Active child list */
- uint16_t slave_count; /**< Number of bonded slaves */
- struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
- /**< Array of bonded slaves details */
+ uint16_t child_count; /**< Number of bonded children */
+ struct bond_child_details children[RTE_MAX_ETHPORTS];
+ /**< Array of bonded children details */
struct mode8023ad_private mode4;
- uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
- /**< TLB active slaves send order */
+ uint16_t tlb_children_order[RTE_MAX_ETHPORTS];
+ /**< TLB active children send order */
struct mode_alb_private mode6;
uint64_t rx_offload_capa; /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
uint8_t rss_key_len; /**< hash key length in bytes. */
struct rte_kvargs *kvlist;
- uint8_t slave_update_idx;
+ uint8_t child_update_idx;
bool kvargs_processing_is_done;
@@ -191,19 +191,19 @@ struct bond_dev_private {
extern const struct eth_dev_ops default_dev_ops;
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_parent_bonded_ethdev(const struct rte_eth_dev *eth_dev);
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/* Search given child array to find position of given id.
+ * Return child pos or children_count if not found. */
static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_child_by_id(uint16_t *children, uint16_t children_count, uint16_t child_id) {
uint16_t pos;
- for (pos = 0; pos < slaves_count; pos++) {
- if (slave_id == slaves[pos])
+ for (pos = 0; pos < children_count; pos++) {
+ if (child_id == children[pos])
break;
}
@@ -217,13 +217,13 @@ int
valid_bonded_port_id(uint16_t port_id);
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_child_port_id(struct bond_dev_private *internals, uint16_t port_id);
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_child(struct rte_eth_dev *eth_dev, uint16_t port_id);
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_child(struct rte_eth_dev *eth_dev, uint16_t port_id);
int
mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +234,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *dst_mac_addr);
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_children_update(struct rte_eth_dev *bonded_eth_dev);
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+child_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t child_port_id);
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+child_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t child_port_id);
int
bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+child_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *child_eth_dev);
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+child_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *child_eth_dev);
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+child_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *child_eth_dev);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+child_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *child_eth_dev);
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t child_count, uint16_t *children);
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t child_count, uint16_t *children);
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t child_count, uint16_t *children);
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id);
+ uint16_t child_port_id);
int
bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
void *param, void *ret_param);
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_child_port_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_child_mode_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args);
int
@@ -301,7 +301,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_child_port_id_kvarg(const char *key,
const char *value, void *extra_args);
int
@@ -323,7 +323,7 @@ void
bond_tlb_enable(struct bond_dev_private *internals);
void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_child(struct bond_dev_private *internals);
int
bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5fe4..a74eab35dd08 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
*
* RTE Link Bonding Ethernet Device
* Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * NICs into a single logical interface. The bonded device processes
* these interfaces based on the mode of operation specified and supported.
* This implementation supports 4 modes of operation round robin, active backup
* balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,24 @@ extern "C" {
#define BONDING_MODE_ROUND_ROBIN (0)
/**< Round Robin (Mode 0).
* In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active devices of the bonded device in a round robin fashion. */
#define BONDING_MODE_ACTIVE_BACKUP (1)
/**< Active Backup (Mode 1).
* In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
+ * device until such point as the primary device is no longer available and then
+ * transmitted packets will be sent on the next available devices. The primary
+ * device can be defined by the user but defaults to the first active device
* available if not specified. */
#define BONDING_MODE_BALANCE (2)
/**< Balance (Mode 2).
* In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * devices using one of three available transmit policies - l2, l2+3 or l3+4.
* See BALANCE_XMIT_POLICY macros definitions for further details on transmit
* policies. */
#define BONDING_MODE_BROADCAST (3)
/**< Broadcast (Mode 3).
* In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active devices of the bonded device. */
#define BONDING_MODE_8023AD (4)
/**< 802.3AD (Mode 4).
*
@@ -62,22 +62,22 @@ extern "C" {
* be handled with the expected latency and this may cause the link status to be
* incorrectly marked as down or failure to correctly negotiate with peers.
* - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
+ * to rx_burst should be at least 2 times the device count.
*
*/
#define BONDING_MODE_TLB (5)
/**< Adaptive TLB (Mode 5)
* This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
+ * changes the transmitting device, according to the computed load. Statistics
* are collected in 100ms intervals and scheduled every 10ms */
#define BONDING_MODE_ALB (6)
/**< Adaptive Load Balancing (Mode 6)
* This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
* bonding driver intercepts ARP replies send by local system and overwrites its
* source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different device interfaces. When local system sends ARP request, it saves IP
* information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the device MACs is assigned and an ARP reply is sent to that peer.
*/
/* Balance Mode Transmit Policies */
@@ -113,28 +113,42 @@ int
rte_eth_bond_free(const char *name);
/**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a child to the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param child_port_id Port ID of child device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_child_add(uint16_t bonded_port_id, uint16_t child_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t child_port_id)
+{
+ return rte_eth_bond_child_add(bonded_port_id, child_port_id);
+}
/**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a child device from the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param child_port_id Port ID of child device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_child_remove(uint16_t bonded_port_id, uint16_t child_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t child_port_id)
+{
+ return rte_eth_bond_child_remove(bonded_port_id, child_port_id);
+}
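With the wrappers above, existing callers keep building but get a compile-time deprecation warning; migration is a one-line rename per call site, roughly as in this sketch (hypothetical port ids):

.. code-block:: c

    #include <stdint.h>
    #include <rte_eth_bond.h>

    static int
    attach_child(uint16_t bonded_port_id, uint16_t child_port_id)
    {
        /* Previously: rte_eth_bond_slave_add(bonded_port_id, child_port_id);
         * the inline wrapper keeps that spelling building, but
         * __rte_deprecated makes the compiler warn at the call site. */
        return rte_eth_bond_child_add(bonded_port_id, child_port_id);
    }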
/**
* Set link bonding mode of bonded device
@@ -160,65 +174,73 @@ int
rte_eth_bond_mode_get(uint16_t bonded_port_id);
/**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set child rte_eth_dev as the primary child of the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param child_port_id Port ID of child device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t child_port_id);
/**
- * Get primary slave of bonded device
+ * Get primary child of bonded device
*
* @param bonded_port_id Port ID of bonded device.
*
* @return
- * Port Id of primary slave on success, -1 on failure
+ * Port Id of primary child on success, -1 on failure
*/
int
rte_eth_bond_primary_get(uint16_t bonded_port_id);
/**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with a list of the children's port IDs of the bonded device
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param children Array to be populated with the children of the bonded device
+ * @param len Length of children array
*
* @return
- * Number of slaves associated with bonded device on success,
+ * Number of children associated with bonded device on success,
* negative value otherwise
*/
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_children_get(uint16_t bonded_port_id, uint16_t children[],
+ uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t children[],
+ uint16_t len)
+{
+ return rte_eth_bond_children_get(bonded_port_id, children, len);
+}
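Callers that previously used ``rte_eth_bond_slaves_get()`` can switch to the new name the same way; a small sketch of listing the attached children (hypothetical bonded port id passed in by the caller):

.. code-block:: c

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Sketch: print the port ids of all children attached to a bonded port. */
    static void
    dump_children(uint16_t bonded_port_id)
    {
        uint16_t children[RTE_MAX_ETHPORTS];
        int i, n;

        n = rte_eth_bond_children_get(bonded_port_id, children,
                                      RTE_MAX_ETHPORTS);
        for (i = 0; i < n; i++)
            printf("child port %u\n", children[i]);
    }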
/**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with a list of the active children's port IDs of the bonded
* device.
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param children Array to be populated with the current active children
+ * @param len Length of children array
*
* @return
- * Number of active slaves associated with bonded device on success,
+ * Number of active children associated with bonded device on success,
* negative value otherwise
*/
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_children_get(uint16_t bonded_port_id, uint16_t children[],
uint16_t len);
/**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its children.
*
* @param bonded_port_id Port ID of bonded device.
- * @param mac_addr MAC Address to use on bonded device overriding
- * slaves MAC addresses
+ * @param mac_addr MAC Address to use on bonded device overriding
+ * children MAC addresses
*
* @return
* 0 on success, negative value otherwise
@@ -228,8 +250,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
struct rte_ether_addr *mac_addr);
/**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary child on bonded device and its
+ * children.
*
* @param bonded_port_id Port ID of bonded device.
*
@@ -266,7 +288,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
/**
* Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * child devices
*
* @param bonded_port_id Port ID of bonded device.
* @param internal_ms Monitoring interval in milliseconds
@@ -280,7 +302,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
/**
* Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of child devices
*
* @param bonded_port_id Port ID of bonded device.
*
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2caf1..32ac1f47ee6e 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
#define MODE4_DEBUG(fmt, ...) \
rte_log(RTE_LOG_DEBUG, bond_logtype, \
"%6u [Port %u: %s] " fmt, \
- bond_dbg_get_time_diff_ms(), slave_id, \
+ bond_dbg_get_time_diff_ms(), child_id, \
__func__, ##__VA_ARGS__)
static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
}
static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t child_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[child_id];
uint8_t warnings;
do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
if (warnings & WRN_RX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+ "Child %u: failed to enqueue LACP packet into RX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will notwork correctly",
- slave_id);
+ child_id);
}
if (warnings & WRN_TX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+ "Child %u: failed to enqueue LACP packet into TX ring.\n"
"Receive and transmit functions must be invoked on bonded"
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ child_id);
}
if (warnings & WRN_RX_MARKER_TO_FAST)
- RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
- slave_id);
+ RTE_BOND_LOG(INFO, "Child %u: marker too early - ignoring.",
+ child_id);
if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
RTE_BOND_LOG(INFO,
- "Slave %u: ignoring unknown slow protocol frame type",
- slave_id);
+ "Child %u: ignoring unknown slow protocol frame type",
+ child_id);
}
if (warnings & WRN_UNKNOWN_MARKER_TYPE)
- RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
- slave_id);
+ RTE_BOND_LOG(INFO, "Child %u: ignoring unknown marker type",
+ child_id);
if (warnings & WRN_NOT_LACP_CAPABLE)
- MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+ MODE4_DEBUG("Port %u is not LACP capable!\n", child_id);
}
static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
* @param port Port on which LACPDU was received.
*/
static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t child_id,
struct lacpdu *lacp)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[child_id];
uint64_t timeout;
if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
* @param port Port to handle state machine.
*/
static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t child_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[child_id];
/* Calculate if either site is LACP enabled */
uint64_t timeout;
uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port Port to handle state machine.
*/
static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t child_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[child_id];
/* Save current state for later use */
const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing started.",
- internals->port_id, slave_id);
+ "Bond %u: child id %u distributing started.",
+ internals->port_id, child_id);
}
} else {
if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing stopped.",
- internals->port_id, slave_id);
+ "Bond %u: child id %u distributing stopped.",
+ internals->port_id, child_id);
}
}
}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port
*/
static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t child_id)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[child_id];
struct rte_mbuf *lacp_pkt = NULL;
struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
/* Source and destination MAC */
rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(child_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
return;
}
} else {
- uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+ uint16_t pkts_sent = rte_eth_tx_prepare(child_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, 1);
- pkts_sent = rte_eth_tx_burst(slave_id,
+ pkts_sent = rte_eth_tx_burst(child_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, pkts_sent);
if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
* @param port_pos Port to assign.
*/
static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t child_id)
{
struct port *agg, *port;
- uint16_t slaves_count, new_agg_id, i, j = 0;
- uint16_t *slaves;
+ uint16_t children_count, new_agg_id, i, j = 0;
+ uint16_t *children;
uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
- uint16_t default_slave = 0;
+ uint16_t default_child = 0;
struct rte_eth_link link_info;
uint16_t agg_new_idx = 0;
int ret;
- slaves = internals->active_slaves;
- slaves_count = internals->active_slave_count;
- port = &bond_mode_8023ad_ports[slave_id];
+ children = internals->active_children;
+ children_count = internals->active_child_count;
+ port = &bond_mode_8023ad_ports[child_id];
/* Search for aggregator suitable for this port */
- for (i = 0; i < slaves_count; ++i) {
- agg = &bond_mode_8023ad_ports[slaves[i]];
+ for (i = 0; i < children_count; ++i) {
+ agg = &bond_mode_8023ad_ports[children[i]];
/* Skip ports that are not aggregators */
- if (agg->aggregator_port_id != slaves[i])
+ if (agg->aggregator_port_id != children[i])
continue;
- ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+ ret = rte_eth_link_get_nowait(children[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slaves[i], rte_strerror(-ret));
+ "Child (port %u) link get failed: %s\n",
+ children[i], rte_strerror(-ret));
continue;
}
agg_count[i] += 1;
agg_bandwidth[i] += link_info.link_speed;
- /* Actors system ID is not checked since all slave device have the same
+ /* Actors system ID is not checked since all child devices have the same
* ID (MAC address). */
if ((agg->actor.key == port->actor.key &&
agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
if (j == 0)
- default_slave = i;
+ default_child = i;
j++;
}
}
switch (internals->mode4.agg_selection) {
case AGG_COUNT:
- agg_new_idx = max_index(agg_count, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_count, children_count);
+ new_agg_id = children[agg_new_idx];
break;
case AGG_BANDWIDTH:
- agg_new_idx = max_index(agg_bandwidth, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_bandwidth, children_count);
+ new_agg_id = children[agg_new_idx];
break;
case AGG_STABLE:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_child == children_count)
+ new_agg_id = children[child_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = children[default_child];
break;
default:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_child == children_count)
+ new_agg_id = children[child_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = children[default_child];
break;
}
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
MODE4_DEBUG("-> SELECTED: ID=%3u\n"
"\t%s aggregator ID=%3u\n",
port->aggregator_port_id,
- port->aggregator_port_id == slave_id ?
+ port->aggregator_port_id == child_id ?
"aggregator not found, using default" : "aggregator found",
port->aggregator_port_id);
}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
}
static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t child_id,
struct rte_mbuf *lacp_pkt) {
struct lacpdu_header *lacp;
struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
partner = &lacp->lacpdu.partner;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
/* This LACP frame is sending to the bonding port
* so pass it to rx_machine.
*/
- rx_machine(internals, slave_id, &lacp->lacpdu);
+ rx_machine(internals, child_id, &lacp->lacpdu);
} else {
char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
}
rte_pktmbuf_free(lacp_pkt);
} else
- rx_machine(internals, slave_id, NULL);
+ rx_machine(internals, child_id, NULL);
}
static void
bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
- uint16_t slave_id)
+ uint16_t child_id)
{
#define DEDICATED_QUEUE_BURST_SIZE 32
struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ uint16_t rx_count = rte_eth_rx_burst(child_id,
internals->mode4.dedicated_queues.rx_qid,
lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
uint16_t i;
for (i = 0; i < rx_count; i++)
- bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ bond_mode_8023ad_handle_slow_pkt(internals, child_id,
lacp_pkt[i]);
} else {
- rx_machine_update(internals, slave_id, NULL);
+ rx_machine_update(internals, child_id, NULL);
}
}
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
struct bond_dev_private *internals = bond_dev->data->dev_private;
struct port *port;
struct rte_eth_link link_info;
- struct rte_ether_addr slave_addr;
+ struct rte_ether_addr child_addr;
struct rte_mbuf *lacp_pkt = NULL;
- uint16_t slave_id;
+ uint16_t child_id;
uint16_t i;
/* Update link status on each port */
- for (i = 0; i < internals->active_slave_count; i++) {
+ for (i = 0; i < internals->active_child_count; i++) {
uint16_t key;
int ret;
- slave_id = internals->active_slaves[i];
- ret = rte_eth_link_get_nowait(slave_id, &link_info);
+ child_id = internals->active_children[i];
+ ret = rte_eth_link_get_nowait(child_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_id, rte_strerror(-ret));
+ "Child (port %u) link get failed: %s\n",
+ child_id, rte_strerror(-ret));
}
if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
key = 0;
}
- rte_eth_macaddr_get(slave_id, &slave_addr);
- port = &bond_mode_8023ad_ports[slave_id];
+ rte_eth_macaddr_get(child_id, &child_addr);
+ port = &bond_mode_8023ad_ports[child_id];
key = rte_cpu_to_be_16(key);
if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, NTT);
}
- if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
- rte_ether_addr_copy(&slave_addr, &port->actor.system);
- if (port->aggregator_port_id == slave_id)
+ if (!rte_is_same_ether_addr(&port->actor.system, &child_addr)) {
+ rte_ether_addr_copy(&child_addr, &port->actor.system);
+ if (port->aggregator_port_id == child_id)
SM_FLAG_SET(port, NTT);
}
}
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_child_count; i++) {
+ child_id = internals->active_children[i];
+ port = &bond_mode_8023ad_ports[child_id];
if ((port->actor.key &
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (retval != 0)
lacp_pkt = NULL;
- rx_machine_update(internals, slave_id, lacp_pkt);
+ rx_machine_update(internals, child_id, lacp_pkt);
} else {
bond_mode_8023ad_dedicated_rxq_process(internals,
- slave_id);
+ child_id);
}
- periodic_machine(internals, slave_id);
- mux_machine(internals, slave_id);
- tx_machine(internals, slave_id);
- selection_logic(internals, slave_id);
+ periodic_machine(internals, child_id);
+ mux_machine(internals, child_id);
+ tx_machine(internals, child_id);
+ selection_logic(internals, child_id);
SM_FLAG_CLR(port, BEGIN);
- show_warnings(slave_id);
+ show_warnings(child_id);
}
rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
}
static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t child_id)
{
int ret;
- ret = rte_eth_allmulticast_enable(slave_id);
+ ret = rte_eth_allmulticast_enable(child_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ child_id, rte_strerror(-ret));
}
- if (rte_eth_allmulticast_get(slave_id)) {
+ if (rte_eth_allmulticast_get(child_id)) {
RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ child_id);
+ bond_mode_8023ad_ports[child_id].forced_rx_flags =
BOND_8023AD_FORCED_ALLMULTI;
return 0;
}
- ret = rte_eth_promiscuous_enable(slave_id);
+ ret = rte_eth_promiscuous_enable(child_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ child_id, rte_strerror(-ret));
}
- if (rte_eth_promiscuous_get(slave_id)) {
+ if (rte_eth_promiscuous_get(child_id)) {
RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ child_id);
+ bond_mode_8023ad_ports[child_id].forced_rx_flags =
BOND_8023AD_FORCED_PROMISC;
return 0;
}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
}
static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t child_id)
{
int ret;
- switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+ switch (bond_mode_8023ad_ports[child_id].forced_rx_flags) {
case BOND_8023AD_FORCED_ALLMULTI:
- RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
- ret = rte_eth_allmulticast_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", child_id);
+ ret = rte_eth_allmulticast_disable(child_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ child_id, rte_strerror(-ret));
break;
case BOND_8023AD_FORCED_PROMISC:
- RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
- ret = rte_eth_promiscuous_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset promisc for port %u", child_id);
+ ret = rte_eth_promiscuous_disable(child_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ child_id, rte_strerror(-ret));
break;
default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
}
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
- uint16_t slave_id)
+bond_mode_8023ad_activate_child(struct rte_eth_dev *bond_dev,
+ uint16_t child_id)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[child_id];
struct port_params initial = {
.system = { { 0 } },
.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
struct bond_tx_queue *bd_tx_q;
uint16_t q_id;
- /* Given slave mus not be in active list */
- RTE_ASSERT(find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) == internals->active_slave_count);
+ /* Given child must not be in active list */
+ RTE_ASSERT(find_child_by_id(internals->active_children,
+ internals->active_child_count, child_id) == internals->active_child_count);
RTE_SET_USED(internals); /* used only for assert when enabled */
memcpy(&port->actor, &initial, sizeof(struct port_params));
/* Standard requires that port ID must be grater than 0.
* Add 1 do get corresponding port_number */
- port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+ port->actor.port_number = rte_cpu_to_be_16(child_id + 1);
memcpy(&port->partner, &initial, sizeof(struct port_params));
memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
port->sm_flags = SM_FLAGS_BEGIN;
/* use this port as aggregator */
- port->aggregator_port_id = slave_id;
+ port->aggregator_port_id = child_id;
- if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
- RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
- slave_id);
+ if (bond_mode_8023ad_register_lacp_mac(child_id) < 0) {
+ RTE_BOND_LOG(WARNING, "child %u is most likely broken and won't receive LACP packets",
+ child_id);
}
timer_cancel(&port->warning_timer);
@@ -1087,7 +1087,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_ASSERT(port->rx_ring == NULL);
RTE_ASSERT(port->tx_ring == NULL);
- socket_id = rte_eth_dev_socket_id(slave_id);
+ socket_id = rte_eth_dev_socket_id(child_id);
if (socket_id == -1)
socket_id = rte_socket_id();
@@ -1095,14 +1095,14 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_PKTMBUF_HEADROOM;
/* The size of the mempool should be at least:
- * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
- total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+ * the sum of the TX descriptors + BOND_MODE_8023AX_CHILD_TX_PKTS */
+ total_tx_desc = BOND_MODE_8023AX_CHILD_TX_PKTS;
for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
total_tx_desc += bd_tx_q->nb_tx_desc;
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "child_port%u_pool", child_id);
port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1111,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
/* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Child %u: Failed to create memory pool '%s': %s\n",
+ child_id, mem_name, rte_strerror(rte_errno));
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "child_%u_rx", child_id);
port->rx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_CHILD_RX_PKTS), socket_id, 0);
if (port->rx_ring == NULL) {
- rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+ rte_panic("Child %u: Failed to create rx ring '%s': %s\n", child_id,
mem_name, rte_strerror(rte_errno));
}
/* TX ring is at least one pkt longer to make room for marker packet. */
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "child_%u_tx", child_id);
port->tx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_CHILD_TX_PKTS + 1), socket_id, 0);
if (port->tx_ring == NULL) {
- rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+ rte_panic("Child %u: Failed to create tx ring '%s': %s\n", child_id,
mem_name, rte_strerror(rte_errno));
}
}
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
- uint16_t slave_id)
+bond_mode_8023ad_deactivate_child(struct rte_eth_dev *bond_dev __rte_unused,
+ uint16_t child_id)
{
void *pkt = NULL;
struct port *port = NULL;
uint8_t old_partner_state;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
ACTOR_STATE_CLR(port, AGGREGATION);
port->selected = UNSELECTED;
@@ -1151,7 +1151,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
old_partner_state = port->partner_state;
record_default(port);
- bond_mode_8023ad_unregister_lacp_mac(slave_id);
+ bond_mode_8023ad_unregister_lacp_mac(child_id);
/* If partner timeout state changes then disable timer */
if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1174,30 @@ void
bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct rte_ether_addr slave_addr;
- struct port *slave, *agg_slave;
- uint16_t slave_id, i, j;
+ struct rte_ether_addr child_addr;
+ struct port *child, *agg_child;
+ uint16_t child_id, i, j;
bond_mode_8023ad_stop(bond_dev);
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- slave = &bond_mode_8023ad_ports[slave_id];
- rte_eth_macaddr_get(slave_id, &slave_addr);
+ for (i = 0; i < internals->active_child_count; i++) {
+ child_id = internals->active_children[i];
+ child = &bond_mode_8023ad_ports[child_id];
+ rte_eth_macaddr_get(child_id, &child_addr);
- if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+ if (rte_is_same_ether_addr(&child_addr, &child->actor.system))
continue;
- rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+ rte_ether_addr_copy(&child_addr, &child->actor.system);
/* Do nothing if this port is not an aggregator. In other case
* Set NTT flag on every port that use this aggregator. */
- if (slave->aggregator_port_id != slave_id)
+ if (child->aggregator_port_id != child_id)
continue;
- for (j = 0; j < internals->active_slave_count; j++) {
- agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
- if (agg_slave->aggregator_port_id == slave_id)
- SM_FLAG_SET(agg_slave, NTT);
+ for (j = 0; j < internals->active_child_count; j++) {
+ agg_child = &bond_mode_8023ad_ports[internals->active_children[j]];
+ if (agg_child->aggregator_port_id == child_id)
+ SM_FLAG_SET(agg_child, NTT);
}
}
@@ -1288,9 +1288,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
struct bond_dev_private *internals = bond_dev->data->dev_private;
uint16_t i;
- for (i = 0; i < internals->active_slave_count; i++)
- bond_mode_8023ad_activate_slave(bond_dev,
- internals->active_slaves[i]);
+ for (i = 0; i < internals->active_child_count; i++)
+ bond_mode_8023ad_activate_child(bond_dev,
+ internals->active_children[i]);
return 0;
}
@@ -1326,10 +1326,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt)
+ uint16_t child_id, struct rte_mbuf *pkt)
{
struct mode8023ad_private *mode4 = &internals->mode4;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[child_id];
struct marker_header *m_hdr;
uint64_t marker_timer, old_marker_timer;
int retval;
@@ -1362,7 +1362,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(child_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1373,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
}
} else {
/* Send packet directly to the slow queue */
- uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+ uint16_t tx_count = rte_eth_tx_prepare(child_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, 1);
- tx_count = rte_eth_tx_burst(slave_id,
+ tx_count = rte_eth_tx_burst(child_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, tx_count);
if (tx_count != 1) {
@@ -1394,7 +1394,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
goto free_out;
}
} else
- rx_machine_update(internals, slave_id, pkt);
+ rx_machine_update(internals, child_id, pkt);
} else {
wrn = WRN_UNKNOWN_SLOW_TYPE;
goto free_out;
@@ -1477,7 +1477,7 @@ bond_8023ad_setup_validate(uint16_t port_id,
return -EINVAL;
if (conf != NULL) {
- /* Basic sanity check */
+ /* Check configuration */
if (conf->slow_periodic_ms == 0 ||
conf->fast_periodic_ms >= conf->slow_periodic_ms ||
conf->long_timeout_ms == 0 ||
@@ -1517,8 +1517,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_child_info(uint16_t port_id, uint16_t child_id,
+ struct rte_eth_bond_8023ad_child_info *info)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1531,12 +1531,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
bond_dev = &rte_eth_devices[port_id];
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_child_by_id(internals->active_children,
+ internals->active_child_count, child_id) ==
+ internals->active_child_count)
return -EINVAL;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
info->selected = port->selected;
info->actor_state = port->actor_state;
@@ -1550,7 +1550,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
}
static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t child_id)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1565,9 +1565,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
return -EINVAL;
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_child_by_id(internals->active_children,
+ internals->active_child_count, child_id) ==
+ internals->active_child_count)
return -EINVAL;
mode4 = &internals->mode4;
@@ -1578,17 +1578,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
}
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t child_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, child_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
if (enabled)
ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1599,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t child_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, child_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
if (enabled)
ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1620,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t child_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, child_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
return ACTOR_STATE(port, DISTRIBUTING);
}
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t child_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, child_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
return ACTOR_STATE(port, COLLECTING);
}
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t child_id,
struct rte_mbuf *lacp_pkt)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, child_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[child_id];
if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
return -EINVAL;
@@ -1683,11 +1683,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
struct mode8023ad_private *mode4 = &internals->mode4;
struct port *port;
void *pkt = NULL;
- uint16_t i, slave_id;
+ uint16_t i, child_id;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_child_count; i++) {
+ child_id = internals->active_children[i];
+ port = &bond_mode_8023ad_ports[child_id];
if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1700,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
/* This is LACP frame so pass it to rx callback.
* Callback is responsible for freeing mbuf.
*/
- mode4->slowrx_cb(slave_id, lacp_pkt);
+ mode4->slowrx_cb(child_id, lacp_pkt);
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00bd5..d66817a199fe 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t child_id,
struct rte_mbuf *lacp_pkt);
enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
uint16_t system_priority;
/**< System priority (unused in current implementation) */
struct rte_ether_addr system;
- /**< System ID - Slave MAC address, same as bonding MAC address */
+ /**< System ID - Child MAC address, same as bonding MAC address */
uint16_t key;
/**< Speed information (implementation dependent) and duplex. */
uint16_t port_priority;
/**< Priority of this (unused in current implementation) */
uint16_t port_number;
- /**< Port number. It corresponds to slave port id. */
+ /**< Port number. It corresponds to child port id. */
} __rte_packed __rte_aligned(2);
struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
enum rte_bond_8023ad_agg_selection agg_selection;
};
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_child_info {
enum rte_bond_8023ad_selection selected;
uint8_t actor_state;
struct port_params actor;
@@ -184,104 +184,104 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
/**
* @internal
*
- * Function returns current state of given slave device.
+ * Function returns current state of given child device.
*
- * @param slave_id Port id of valid slave.
+ * @param child_id Port id of valid child.
* @param conf buffer for configuration
* @return
* 0 - if ok
- * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ * -EINVAL if conf is NULL or child id is invalid (not a child of given
* bonded device or is not inactive).
*/
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_child_info(uint16_t port_id, uint16_t child_id,
+ struct rte_eth_bond_8023ad_child_info *conf);
#ifdef __cplusplus
}
#endif
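
A minimal usage sketch of the renamed query API above; the names follow the renaming proposed in this series, bond_port and child_port are placeholder ids, and only the fields visible in this header are printed:

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

static void
dump_child_state(uint16_t bond_port, uint16_t child_port)
{
	struct rte_eth_bond_8023ad_child_info info;

	/* Returns -EINVAL if child_port is not an active child of bond_port. */
	if (rte_eth_bond_8023ad_child_info(bond_port, child_port, &info) != 0)
		return;

	printf("child %u: selected=%d actor_state=0x%x\n",
	       child_port, info.selected, info.actor_state);
}
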
/**
- * Configure a slave port to start collecting.
+ * Configure a child port to start collecting.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param child_id Port id of valid child.
* @param enabled Non-zero when collection enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if child is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t child_id,
int enabled);
/**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from child port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param child_id Port id of valid child.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if child is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t child_id);
/**
- * Configure a slave port to start distributing.
+ * Configure a child port to start distributing.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param child_id Port id of valid child.
* @param enabled Non-zero when distribution enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if child is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t child_id,
int enabled);
/**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from child port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param child_id Port id of valid child.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if child is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t child_id);
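
A hedged sketch of driving the COLLECTING and DISTRIBUTING actor flags from an external state machine, using the child_id naming above (port ids are placeholders and ext_enable_forwarding is a hypothetical helper; the ext_* calls are only meaningful when the device was set up for the external state-machine mode):

#include <rte_eth_bond_8023ad.h>

static int
ext_enable_forwarding(uint16_t bond_port, uint16_t child_port)
{
	int ret;

	/* Allow the child to collect frames from the aggregator. */
	ret = rte_eth_bond_8023ad_ext_collect(bond_port, child_port, 1);
	if (ret != 0)
		return ret;

	/* Allow the child to distribute frames to the aggregator. */
	ret = rte_eth_bond_8023ad_ext_distrib(bond_port, child_port, 1);
	if (ret != 0) {
		/* Roll back collection so the actor flags stay consistent. */
		rte_eth_bond_8023ad_ext_collect(bond_port, child_port, 0);
		return ret;
	}

	/* The getters return 0/1, or -EINVAL for an invalid child. */
	return (rte_eth_bond_8023ad_ext_collect_get(bond_port, child_port) == 1 &&
		rte_eth_bond_8023ad_ext_distrib_get(bond_port, child_port) == 1) ? 0 : -1;
}
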
/**
* LACPDU transmit path for external 802.3ad state machine. Caller retains
* ownership of the packet on failure.
*
* @param port_id Bonding device id
- * @param slave_id Port ID of valid slave device.
+ * @param child_id Port ID of valid child device.
* @param lacp_pkt mbuf containing LACPDU.
*
* @return
* 0 on success, negative value otherwise.
*/
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t child_id,
struct rte_mbuf *lacp_pkt);
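
Since the caller keeps ownership of the mbuf on failure, a small error-handling sketch (ext_send_lacpdu is a hypothetical wrapper, the symbol name follows this series, and the ports and mbuf are supplied by the caller):

#include <rte_mbuf.h>
#include <rte_eth_bond_8023ad.h>

static int
ext_send_lacpdu(uint16_t bond_port, uint16_t child_port, struct rte_mbuf *lacp_pkt)
{
	/* On success the bonding PMD takes ownership of lacp_pkt;
	 * on failure it stays with the caller and must be freed here. */
	int ret = rte_eth_bond_8023ad_ext_slowtx(bond_port, child_port, lacp_pkt);

	if (ret != 0)
		rte_pktmbuf_free(lacp_pkt);

	return ret;
}
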
/**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on children
*
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each child for
* dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each child to redirect all LACP slow packets to that rx queue
* for processing in the LACP state machine, this removes the need to filter
* these packets in the bonded devices data path. The additional tx queue is
* used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * child hw independently of the bonded devices data path.
*
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all children must support the programming of the flow
* filter rule required for rx and have enough queues that one rx and tx queue
* can be reserved for the LACP state machines control packets.
*
@@ -296,7 +296,7 @@ int
rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
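
An illustrative call site for the dedicated control-plane queues, assuming it runs while the bonded device is stopped and that every child can program the required flow rule (bond_port is a placeholder and setup_8023ad_dedicated_queues a hypothetical helper):

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

static int
setup_8023ad_dedicated_queues(uint16_t bond_port)
{
	int ret = rte_eth_bond_8023ad_dedicated_queues_enable(bond_port);

	if (ret != 0)
		printf("dedicated 802.3ad queues unavailable on port %u: %d\n",
		       bond_port, ret);

	return ret;
}
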
/**
- * Disable slow queue on slaves
+ * Disable slow queue on children
*
* This function disables hardware slow packet filter.
*
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a797135..0fcd1448c15b 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
}
static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_child(struct bond_dev_private *internals)
{
uint16_t idx;
- idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
- internals->mode6.last_slave = idx;
- return internals->active_slaves[idx];
+ idx = (internals->mode6.last_child + 1) % internals->active_child_count;
+ internals->mode6.last_child = idx;
+ return internals->active_children[idx];
}
int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
/* Fill hash table with initial values */
memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
rte_spinlock_init(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_child = ALB_NULL_INDEX;
internals->mode6.ntt = 0;
/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
/*
* We got reply for ARP Request send by the application. We need to
* update client table when received data differ from what is stored
- * in ALB table and issue sending update packet to that slave.
+ * in ALB table and issue sending update packet to that child.
*/
rte_spinlock_lock(&internals->mode6.lock);
if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
client_info->cli_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_sha,
&client_info->cli_mac);
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->child_idx = calculate_child(internals);
+ rte_eth_macaddr_get(client_info->child_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
&arp->arp_data.arp_tha,
&client_info->cli_mac);
}
- rte_eth_macaddr_get(client_info->slave_idx,
+ rte_eth_macaddr_get(client_info->child_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->child_idx;
}
}
- /* Assign new slave to this client and update src mac in ARP */
+ /* Assign new child to this client and update src mac in ARP */
client_info->in_use = 1;
client_info->ntt = 0;
client_info->app_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_tha,
&client_info->cli_mac);
client_info->cli_ip = arp->arp_data.arp_tip;
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->child_idx = calculate_child(internals);
+ rte_eth_macaddr_get(client_info->child_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->child_idx;
}
/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
{
struct rte_ether_hdr *eth_h;
struct rte_arp_hdr *arp_h;
- uint16_t slave_idx;
+ uint16_t child_idx;
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
arp_h->arp_plen = sizeof(uint32_t);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
- slave_idx = client_info->slave_idx;
+ child_idx = client_info->child_idx;
rte_spinlock_unlock(&internals->mode6.lock);
- return slave_idx;
+ return child_idx;
}
void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
int i;
- /* If active slave count is 0, it's pointless to refresh alb table */
- if (internals->active_slave_count <= 0)
+ /* If active child count is 0, it's pointless to refresh alb table */
+ if (internals->active_child_count <= 0)
return;
rte_spinlock_lock(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_child = ALB_NULL_INDEX;
for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+ client_info->child_idx = calculate_child(internals);
+ rte_eth_macaddr_get(client_info->child_idx, &client_info->app_mac);
internals->mode6.ntt = 1;
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc86..dae3f84c5efb 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
uint32_t cli_ip;
/**< Client IP address */
- uint16_t slave_idx;
- /**< Index of slave on which we connect with that client */
+ uint16_t child_idx;
+ /**< Index of child on which we connect with that client */
uint8_t in_use;
/**< Flag indicating if entry in client table is currently used */
uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
/**< Mempool for creating ARP update packets */
uint8_t ntt;
/**< Flag indicating if we need to send update to any client on next tx */
- uint32_t last_slave;
- /**< Index of last used slave in client table */
+ uint32_t last_child;
+ /**< Index of last used child in client table */
rte_spinlock_t lock;
};
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
struct bond_dev_private *internals);
/**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which child
+ * to send that packet. If the packet is an ARP Request, it is sent on the primary child.
+ * If it is an ARP Reply, it is sent on the child stored in the client table for that
* connection. On Reply function also updates data in client table.
*
* @param eth_h ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of child on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of child on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_upd(struct client_data *client_info,
struct rte_mbuf *pkt, struct bond_dev_private *internals);
/**
- * Function updates slave indexes of active connections.
+ * Function updates child indexes of active connections.
*
* @param bond_dev Pointer to bonded device struct.
*/
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b464..231d117bc5ed 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
}
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_parent_bonded_ethdev(const struct rte_eth_dev *eth_dev)
{
int i;
struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- /* Check if any of slave devices is a bonded device */
- for (i = 0; i < internals->slave_count; i++)
- if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+ /* Check if any of child devices is a bonded device */
+ for (i = 0; i < internals->child_count; i++)
+ if (valid_bonded_port_id(internals->children[i].port_id) == 0)
return 1;
return 0;
}
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_child_port_id(struct bond_dev_private *internals, uint16_t child_port_id)
{
- RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(child_port_id, -1);
- /* Verify that slave_port_id refers to a non bonded port */
- if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+ /* Verify that child_port_id refers to a non bonded port */
+ if (check_for_bonded_ethdev(&rte_eth_devices[child_port_id]) == 0 &&
internals->mode == BONDING_MODE_8023AD) {
- RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
- " mode as slave is also a bonded device, only "
+ RTE_BOND_LOG(ERR, "Cannot add child to bonded device in 802.3ad"
+ " mode as child is also a bonded device, only "
"physical devices can be support in this mode.");
return -1;
}
- if (internals->port_id == slave_port_id) {
+ if (internals->port_id == child_port_id) {
RTE_BOND_LOG(ERR,
- "Cannot add the bonded device itself as its slave.");
+ "Cannot add the bonded device itself as its child.");
return -1;
}
@@ -79,61 +79,61 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
}
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_child(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_child_count;
if (internals->mode == BONDING_MODE_8023AD)
- bond_mode_8023ad_activate_slave(eth_dev, port_id);
+ bond_mode_8023ad_activate_child(eth_dev, port_id);
if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB) {
- internals->tlb_slaves_order[active_count] = port_id;
+ internals->tlb_children_order[active_count] = port_id;
}
- RTE_ASSERT(internals->active_slave_count <
- (RTE_DIM(internals->active_slaves) - 1));
+ RTE_ASSERT(internals->active_child_count <
+ (RTE_DIM(internals->active_children) - 1));
- internals->active_slaves[internals->active_slave_count] = port_id;
- internals->active_slave_count++;
+ internals->active_children[internals->active_child_count] = port_id;
+ internals->active_child_count++;
if (internals->mode == BONDING_MODE_TLB)
- bond_tlb_activate_slave(internals);
+ bond_tlb_activate_child(internals);
if (internals->mode == BONDING_MODE_ALB)
bond_mode_alb_client_list_upd(eth_dev);
}
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_child(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
- uint16_t slave_pos;
+ uint16_t child_pos;
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_child_count;
if (internals->mode == BONDING_MODE_8023AD) {
bond_mode_8023ad_stop(eth_dev);
- bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+ bond_mode_8023ad_deactivate_child(eth_dev, port_id);
} else if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB)
bond_tlb_disable(internals);
- slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+ child_pos = find_child_by_id(internals->active_children, active_count,
port_id);
- /* If slave was not at the end of the list
- * shift active slaves up active array list */
- if (slave_pos < active_count) {
+ /* If the child was not at the end of the list,
+ * shift the remaining active children up in the active array */
+ if (child_pos < active_count) {
active_count--;
- memmove(internals->active_slaves + slave_pos,
- internals->active_slaves + slave_pos + 1,
- (active_count - slave_pos) *
- sizeof(internals->active_slaves[0]));
+ memmove(internals->active_children + child_pos,
+ internals->active_children + child_pos + 1,
+ (active_count - child_pos) *
+ sizeof(internals->active_children[0]));
}
- RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
- internals->active_slave_count = active_count;
+ RTE_ASSERT(active_count < RTE_DIM(internals->active_children));
+ internals->active_child_count = active_count;
if (eth_dev->data->dev_started) {
if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +192,7 @@ rte_eth_bond_free(const char *name)
}
static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+child_vlan_filter_set(uint16_t bonded_port_id, uint16_t child_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -224,7 +224,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
if (unlikely(slab & mask)) {
uint16_t vlan_id = pos + i;
- res = rte_eth_dev_vlan_filter(slave_port_id,
+ res = rte_eth_dev_vlan_filter(child_port_id,
vlan_id, 1);
}
}
@@ -236,45 +236,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+child_rte_flow_prepare(uint16_t child_id, struct bond_dev_private *internals)
{
struct rte_flow *flow;
struct rte_flow_error ferror;
- uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+ uint16_t child_port_id = internals->children[child_id].port_id;
if (internals->flow_isolated_valid != 0) {
- if (rte_eth_dev_stop(slave_port_id) != 0) {
+ if (rte_eth_dev_stop(child_port_id) != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_port_id);
+ child_port_id);
return -1;
}
- if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+ if (rte_flow_isolate(child_port_id, internals->flow_isolated,
&ferror)) {
- RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
- " %d: %s", slave_id, ferror.message ?
+ RTE_BOND_LOG(ERR, "rte_flow_isolate failed for child"
+ " %d: %s", child_id, ferror.message ?
ferror.message : "(no stated reason)");
return -1;
}
}
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- flow->flows[slave_id] = rte_flow_create(slave_port_id,
+ flow->flows[child_id] = rte_flow_create(child_port_id,
flow->rule.attr,
flow->rule.pattern,
flow->rule.actions,
&ferror);
- if (flow->flows[slave_id] == NULL) {
- RTE_BOND_LOG(ERR, "Cannot create flow for slave"
- " %d: %s", slave_id,
+ if (flow->flows[child_id] == NULL) {
+ RTE_BOND_LOG(ERR, "Cannot create flow for child"
+ " %d: %s", child_id,
ferror.message ? ferror.message :
"(no stated reason)");
- /* Destroy successful bond flows from the slave */
+ /* Destroy successful bond flows from the child */
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- flow->flows[slave_id],
+ if (flow->flows[child_id] != NULL) {
+ rte_flow_destroy(child_port_id,
+ flow->flows[child_id],
&ferror);
- flow->flows[slave_id] = NULL;
+ flow->flows[child_id] = NULL;
}
}
return -1;
@@ -284,7 +284,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
}
static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_rx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +292,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
internals->reta_size = di->reta_size;
internals->rss_key_len = di->hash_key_size;
- /* Inherit Rx offload capabilities from the first slave device */
+ /* Inherit Rx offload capabilities from the first child device */
internals->rx_offload_capa = di->rx_offload_capa;
internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
- /* Inherit maximum Rx packet size from the first slave device */
+ /* Inherit maximum Rx packet size from the first child device */
internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
- /* Inherit default Rx queue settings from the first slave device */
+ /* Inherit default Rx queue settings from the first child device */
memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * child devices. Applications may tweak this setting if need be.
*/
rxconf_i->rx_thresh.pthresh = 0;
rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +314,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
/* Setting this to zero should effectively enable default values */
rxconf_i->rx_free_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all child devices */
rxconf_i->rx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_tx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
- /* Inherit Tx offload capabilities from the first slave device */
+ /* Inherit Tx offload capabilities from the first child device */
internals->tx_offload_capa = di->tx_offload_capa;
internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
- /* Inherit default Tx queue settings from the first slave device */
+ /* Inherit default Tx queue settings from the first child device */
memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * child devices. Applications may tweak this setting if need be.
*/
txconf_i->tx_thresh.pthresh = 0;
txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +341,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
/*
* Setting these parameters to zero assumes that default
- * values will be configured implicitly by slave devices.
+ * values will be configured implicitly by child devices.
*/
txconf_i->tx_free_thresh = 0;
txconf_i->tx_rs_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all child devices */
txconf_i->tx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_rx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +362,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
/*
- * If at least one slave device suggests enabling this
- * setting by default, enable it for all slave devices
+ * If at least one child device suggests enabling this
+ * setting by default, enable it for all child devices
* since disabling it may not be necessarily supported.
*/
if (rxconf->rx_drop_en == 1)
rxconf_i->rx_drop_en = 1;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new child device may cause some of previously inherited
* offloads to be withdrawn from the internal rx_queue_offload_capa
* value. Thus, the new internal value of default Rx queue offloads
* has to be masked by rx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new child device.
*/
rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
internals->rx_queue_offload_capa;
/*
- * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+ * RETA size is GCD of all children RETA sizes, so, if all sizes will be
* the power of 2, the lower one is GCD
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
if (internals->rss_key_len > di->hash_key_size) {
- RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ RTE_BOND_LOG(WARNING, "child has different rss key size, "
"configuring rss may fail");
internals->rss_key_len = di->hash_key_size;
}
@@ -398,7 +398,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
}
static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_tx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +408,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new child device may cause some of previously inherited
* offloads to be withdrawn from the internal tx_queue_offload_capa
* value. Thus, the new internal value of default Tx queue offloads
* has to be masked by tx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new child device.
*/
txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
internals->tx_queue_offload_capa;
}
static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_child_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *child_desc_lim)
{
- memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+ memcpy(bond_desc_lim, child_desc_lim, sizeof(*bond_desc_lim));
}
static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_child_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *child_desc_lim)
{
bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
- slave_desc_lim->nb_max);
+ child_desc_lim->nb_max);
bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
- slave_desc_lim->nb_min);
+ child_desc_lim->nb_min);
bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
- slave_desc_lim->nb_align);
+ child_desc_lim->nb_align);
if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +444,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
}
/* Treat maximum number of segments equal to 0 as unspecified */
- if (slave_desc_lim->nb_seg_max != 0 &&
+ if (child_desc_lim->nb_seg_max != 0 &&
(bond_desc_lim->nb_seg_max == 0 ||
- slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
- bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
- if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+ child_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+ bond_desc_lim->nb_seg_max = child_desc_lim->nb_seg_max;
+ if (child_desc_lim->nb_mtu_seg_max != 0 &&
(bond_desc_lim->nb_mtu_seg_max == 0 ||
- slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
- bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+ child_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+ bond_desc_lim->nb_mtu_seg_max = child_desc_lim->nb_mtu_seg_max;
return 0;
}
static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_child_add_lock_free(uint16_t bonded_port_id, uint16_t child_port_id)
{
- struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+ struct rte_eth_dev *bonded_eth_dev, *child_eth_dev;
struct bond_dev_private *internals;
struct rte_eth_link link_props;
struct rte_eth_dev_info dev_info;
@@ -468,77 +468,77 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_child_port_id(internals, child_port_id) != 0)
return -1;
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
- RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+ child_eth_dev = &rte_eth_devices[child_port_id];
+ if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_CHILD) {
+ RTE_BOND_LOG(ERR, "Child device is already a child of a bonded device");
return -1;
}
- ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+ ret = rte_eth_dev_info_get(child_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port_id, strerror(-ret));
+ __func__, child_port_id, strerror(-ret));
return ret;
}
if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
- RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
- slave_port_id);
+ RTE_BOND_LOG(ERR, "Child (port %u) max_rx_pktlen too small",
+ child_port_id);
return -1;
}
- slave_add(internals, slave_eth_dev);
+ child_add(internals, child_eth_dev);
- /* We need to store slaves reta_size to be able to synchronize RETA for all
- * slave devices even if its sizes are different.
+ /* We need to store the children's reta_size to be able to synchronize RETA for all
+ * child devices even if their sizes are different.
*/
- internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+ internals->children[internals->child_count].reta_size = dev_info.reta_size;
- if (internals->slave_count < 1) {
- /* if MAC is not user defined then use MAC of first slave add to
+ if (internals->child_count < 1) {
+ /* if MAC is not user defined then use MAC of first child add to
* bonded device */
if (!internals->user_defined_mac) {
if (mac_address_set(bonded_eth_dev,
- slave_eth_dev->data->mac_addrs)) {
+ child_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to set MAC address");
return -1;
}
}
- /* Make primary slave */
- internals->primary_port = slave_port_id;
- internals->current_primary_port = slave_port_id;
+ /* Make primary child */
+ internals->primary_port = child_port_id;
+ internals->current_primary_port = child_port_id;
internals->speed_capa = dev_info.speed_capa;
- /* Inherit queues settings from first slave */
- internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
- internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+ /* Inherit queues settings from first child */
+ internals->nb_rx_queues = child_eth_dev->data->nb_rx_queues;
+ internals->nb_tx_queues = child_eth_dev->data->nb_tx_queues;
- eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+ eth_bond_child_inherit_dev_info_rx_first(internals, &dev_info);
+ eth_bond_child_inherit_dev_info_tx_first(internals, &dev_info);
- eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+ eth_bond_child_inherit_desc_lim_first(&internals->rx_desc_lim,
&dev_info.rx_desc_lim);
- eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+ eth_bond_child_inherit_desc_lim_first(&internals->tx_desc_lim,
&dev_info.tx_desc_lim);
} else {
int ret;
internals->speed_capa &= dev_info.speed_capa;
- eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+ eth_bond_child_inherit_dev_info_rx_next(internals, &dev_info);
+ eth_bond_child_inherit_dev_info_tx_next(internals, &dev_info);
- ret = eth_bond_slave_inherit_desc_lim_next(
+ ret = eth_bond_child_inherit_desc_lim_next(
&internals->rx_desc_lim, &dev_info.rx_desc_lim);
if (ret != 0)
return ret;
- ret = eth_bond_slave_inherit_desc_lim_next(
+ ret = eth_bond_child_inherit_desc_lim_next(
&internals->tx_desc_lim, &dev_info.tx_desc_lim);
if (ret != 0)
return ret;
@@ -552,79 +552,79 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
- if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
- RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
- slave_port_id);
+ if (child_rte_flow_prepare(internals->child_count, internals) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to prepare new child flows: port=%d",
+ child_port_id);
return -1;
}
- /* Add additional MAC addresses to the slave */
- if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
- RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
- slave_port_id);
+ /* Add additional MAC addresses to the child */
+ if (child_add_mac_addresses(bonded_eth_dev, child_port_id) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to add mac address(es) to child %hu",
+ child_port_id);
return -1;
}
- internals->slave_count++;
+ internals->child_count++;
if (bonded_eth_dev->data->dev_started) {
- if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
- slave_port_id);
+ if (child_configure(bonded_eth_dev, child_eth_dev) != 0) {
+ internals->child_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_children_configure: port=%d",
+ child_port_id);
return -1;
}
- if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
- slave_port_id);
+ if (child_start(bonded_eth_dev, child_eth_dev) != 0) {
+ internals->child_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_children_start: port=%d",
+ child_port_id);
return -1;
}
}
- /* Update all slave devices MACs */
- mac_address_slaves_update(bonded_eth_dev);
+ /* Update all child devices MACs */
+ mac_address_children_update(bonded_eth_dev);
/* Register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_register(child_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
- /* If bonded device is started then we can add the slave to our active
- * slave array */
+ /* If bonded device is started then we can add the child to our active
+ * child array */
if (bonded_eth_dev->data->dev_started) {
- ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+ ret = rte_eth_link_get_nowait(child_port_id, &link_props);
if (ret < 0) {
- rte_eth_dev_callback_unregister(slave_port_id,
+ rte_eth_dev_callback_unregister(child_port_id,
RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&bonded_eth_dev->data->port_id);
- internals->slave_count--;
+ internals->child_count--;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_port_id, rte_strerror(-ret));
+ "Child (port %u) link get failed: %s\n",
+ child_port_id, rte_strerror(-ret));
return -1;
}
if (link_props.link_status == RTE_ETH_LINK_UP) {
- if (internals->active_slave_count == 0 &&
+ if (internals->active_child_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
- slave_port_id);
+ child_port_id);
}
}
- /* Add slave details to bonded device */
- slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+ /* Add child details to bonded device */
+ child_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_CHILD;
- slave_vlan_filter_set(bonded_port_id, slave_port_id);
+ child_vlan_filter_set(bonded_port_id, child_port_id);
return 0;
}
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_child_add(uint16_t bonded_port_id, uint16_t child_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -637,12 +637,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_child_port_id(internals, child_port_id) != 0)
return -1;
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_child_add_lock_free(bonded_port_id, child_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -650,93 +650,93 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
- uint16_t slave_port_id)
+__eth_bond_child_remove_lock_free(uint16_t bonded_port_id,
+ uint16_t child_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *child_eth_dev;
struct rte_flow_error flow_error;
struct rte_flow *flow;
- int i, slave_idx;
+ int i, child_idx;
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) < 0)
+ if (valid_child_port_id(internals, child_port_id) < 0)
return -1;
- /* first remove from active slave list */
- slave_idx = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_port_id);
+ /* first remove from active child list */
+ child_idx = find_child_by_id(internals->active_children,
+ internals->active_child_count, child_port_id);
- if (slave_idx < internals->active_slave_count)
- deactivate_slave(bonded_eth_dev, slave_port_id);
+ if (child_idx < internals->active_child_count)
+ deactivate_child(bonded_eth_dev, child_port_id);
- slave_idx = -1;
- /* now find in slave list */
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == slave_port_id) {
- slave_idx = i;
+ child_idx = -1;
+ /* now find in child list */
+ for (i = 0; i < internals->child_count; i++)
+ if (internals->children[i].port_id == child_port_id) {
+ child_idx = i;
break;
}
- if (slave_idx < 0) {
- RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
- internals->slave_count);
+ if (child_idx < 0) {
+ RTE_BOND_LOG(ERR, "Couldn't find child in port list, child count %u",
+ internals->child_count);
return -1;
}
/* Un-register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_unregister(child_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&rte_eth_devices[bonded_port_id].data->port_id);
- /* Restore original MAC address of slave device */
- rte_eth_dev_default_mac_addr_set(slave_port_id,
- &(internals->slaves[slave_idx].persisted_mac_addr));
+ /* Restore original MAC address of child device */
+ rte_eth_dev_default_mac_addr_set(child_port_id,
+ &(internals->children[child_idx].persisted_mac_addr));
- /* remove additional MAC addresses from the slave */
- slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+ /* remove additional MAC addresses from the child */
+ child_remove_mac_addresses(bonded_eth_dev, child_port_id);
/*
- * Remove bond device flows from slave device.
+ * Remove bond device flows from child device.
* Note: don't restore flow isolate mode.
*/
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_idx] != NULL) {
- rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+ if (flow->flows[child_idx] != NULL) {
+ rte_flow_destroy(child_port_id, flow->flows[child_idx],
&flow_error);
- flow->flows[slave_idx] = NULL;
+ flow->flows[child_idx] = NULL;
}
}
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- slave_remove(internals, slave_eth_dev);
- slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+ child_eth_dev = &rte_eth_devices[child_port_id];
+ child_remove(internals, child_eth_dev);
+ child_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_CHILD);
- /* first slave in the active list will be the primary by default,
+ /* first child in the active list will be the primary by default,
* otherwise use first device in list */
- if (internals->current_primary_port == slave_port_id) {
- if (internals->active_slave_count > 0)
- internals->current_primary_port = internals->active_slaves[0];
- else if (internals->slave_count > 0)
- internals->current_primary_port = internals->slaves[0].port_id;
+ if (internals->current_primary_port == child_port_id) {
+ if (internals->active_child_count > 0)
+ internals->current_primary_port = internals->active_children[0];
+ else if (internals->child_count > 0)
+ internals->current_primary_port = internals->children[0].port_id;
else
internals->primary_port = 0;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_children_update(bonded_eth_dev);
}
- if (internals->active_slave_count < 1) {
- /* if no slaves are any longer attached to bonded device and MAC is not
+ if (internals->active_child_count < 1) {
+ /* if no children remain attached to the bonded device and MAC is not
* user defined then clear MAC of bonded device as it will be reset
- * when a new slave is added */
- if (internals->slave_count < 1 && !internals->user_defined_mac)
+ * when a new child is added */
+ if (internals->child_count < 1 && !internals->user_defined_mac)
memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
}
- if (internals->slave_count == 0) {
+ if (internals->child_count == 0) {
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -750,7 +750,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
}
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_child_remove(uint16_t bonded_port_id, uint16_t child_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -764,7 +764,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_child_remove_lock_free(bonded_port_id, child_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -781,7 +781,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
- if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+ if (check_for_parent_bonded_ethdev(bonded_eth_dev) != 0 &&
mode == BONDING_MODE_8023AD)
return -1;
@@ -802,7 +802,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
}
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t child_port_id)
{
struct bond_dev_private *internals;
@@ -811,13 +811,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_child_port_id(internals, child_port_id) != 0)
return -1;
internals->user_defined_primary_port = 1;
- internals->primary_port = slave_port_id;
+ internals->primary_port = child_port_id;
- bond_ethdev_primary_set(internals, slave_port_id);
+ bond_ethdev_primary_set(internals, child_port_id);
return 0;
}
@@ -832,14 +832,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count < 1)
+ if (internals->child_count < 1)
return -1;
return internals->current_primary_port;
}
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_children_get(uint16_t bonded_port_id, uint16_t children[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -848,22 +848,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (children == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count > len)
+ if (internals->child_count > len)
return -1;
- for (i = 0; i < internals->slave_count; i++)
- slaves[i] = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++)
+ children[i] = internals->children[i].port_id;
- return internals->slave_count;
+ return internals->child_count;
}
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_children_get(uint16_t bonded_port_id, uint16_t children[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -871,18 +871,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (children == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->active_slave_count > len)
+ if (internals->active_child_count > len)
return -1;
- memcpy(slaves, internals->active_slaves,
- internals->active_slave_count * sizeof(internals->active_slaves[0]));
+ memcpy(children, internals->active_children,
+ internals->active_child_count * sizeof(internals->active_children[0]));
- return internals->active_slave_count;
+ return internals->active_child_count;
}
int
@@ -904,9 +904,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
internals->user_defined_mac = 1;
- /* Update all slave devices MACs*/
- if (internals->slave_count > 0)
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all child devices MACs*/
+ if (internals->child_count > 0)
+ return mac_address_children_update(bonded_eth_dev);
return 0;
}
@@ -925,30 +925,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
internals->user_defined_mac = 0;
- if (internals->slave_count > 0) {
- int slave_port;
- /* Get the primary slave location based on the primary port
- * number as, while slave_add(), we will keep the primary
- * slave based on slave_count,but not based on the primary port.
+ if (internals->child_count > 0) {
+ int child_port;
+ /* Get the primary child location based on the primary port
+ * number because, during child_add(), the primary child is
+ * tracked by child_count, not by the primary port.
*/
- for (slave_port = 0; slave_port < internals->slave_count;
- slave_port++) {
- if (internals->slaves[slave_port].port_id ==
+ for (child_port = 0; child_port < internals->child_count;
+ child_port++) {
+ if (internals->children[child_port].port_id ==
internals->primary_port)
break;
}
/* Set MAC Address of Bonded Device */
if (mac_address_set(bonded_eth_dev,
- &internals->slaves[slave_port].persisted_mac_addr)
+ &internals->children[child_port].persisted_mac_addr)
!= 0) {
RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
return -1;
}
- /* Update all slave devices MAC addresses */
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all child devices MAC addresses */
+ return mac_address_children_update(bonded_eth_dev);
}
- /* No need to update anything as no slaves present */
+ /* No need to update anything as no children present */
return 0;
}
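For context, here is a minimal sketch of how an application would drive the renamed control-plane API above (rte_eth_bond_child_add, rte_eth_bond_primary_set, rte_eth_bond_children_get). The helper name, error handling and choice of primary port below are illustrative assumptions, not part of this patch:
#include <rte_eth_bond.h>
/* Sketch only: attach a set of child ports to an existing bonded port,
 * pick the first one as primary, then read the list back. */
static int
attach_children(uint16_t bond_port_id, const uint16_t *ports, uint16_t n)
{
	uint16_t children[RTE_MAX_ETHPORTS];
	uint16_t i;
	if (n == 0)
		return -1;
	for (i = 0; i < n; i++)
		/* also registers the LSC callback shown earlier in this file */
		if (rte_eth_bond_child_add(bond_port_id, ports[i]) != 0)
			return -1;
	if (rte_eth_bond_primary_set(bond_port_id, ports[0]) != 0)
		return -1;
	return rte_eth_bond_children_get(bond_port_id, children,
			RTE_MAX_ETHPORTS) < 0 ? -1 : 0;
}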
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5cb7..c4af24f119e7 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
#include "eth_bond_private.h"
const char *pmd_bond_init_valid_arguments[] = {
- PMD_BOND_SLAVE_PORT_KVARG,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
+ PMD_BOND_CHILD_PORT_KVARG,
+ PMD_BOND_PRIMARY_CHILD_KVARG,
PMD_BOND_MODE_KVARG,
PMD_BOND_XMIT_POLICY_KVARG,
PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
}
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_child_port_kvarg(const char *key,
const char *value, void *extra_args)
{
- struct bond_ethdev_slave_ports *slave_ports;
+ struct bond_ethdev_child_ports *child_ports;
if (value == NULL || extra_args == NULL)
return -1;
- slave_ports = extra_args;
+ child_ports = extra_args;
- if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+ if (strcmp(key, PMD_BOND_CHILD_PORT_KVARG) == 0) {
int port_id = parse_port_id(value);
if (port_id < 0) {
- RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+ RTE_BOND_LOG(ERR, "Invalid child port value (%s) specified",
value);
return -1;
} else
- slave_ports->slaves[slave_ports->slave_count++] =
+ child_ports->children[child_ports->child_count++] =
port_id;
}
return 0;
}
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
case BONDING_MODE_ALB:
return 0;
default:
- RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+ RTE_BOND_LOG(ERR, "Invalid child mode value (%s) specified", value);
return -1;
}
}
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
}
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_child_port_id_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
- int primary_slave_port_id;
+ int primary_child_port_id;
if (value == NULL || extra_args == NULL)
return -1;
- primary_slave_port_id = parse_port_id(value);
- if (primary_slave_port_id < 0)
+ primary_child_port_id = parse_port_id(value);
+ if (primary_child_port_id < 0)
return -1;
- *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+ *(uint16_t *)extra_args = (uint16_t)primary_child_port_id;
return 0;
}
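These handlers are invoked through rte_kvargs_process() when the bonding vdev parses its device arguments. A hedged sketch of that wiring follows; only PMD_BOND_CHILD_PORT_KVARG, struct bond_ethdev_child_ports and bond_ethdev_parse_child_port_kvarg come from this patch, the helper itself is illustrative:
#include <rte_kvargs.h>
#include "eth_bond_private.h"
/* Sketch only: run the child-port parser once per matching key/value
 * pair found in the vdev argument list. */
static int
collect_child_ports(struct rte_kvargs *kvlist,
		struct bond_ethdev_child_ports *child_ports)
{
	return rte_kvargs_process(kvlist, PMD_BOND_CHILD_PORT_KVARG,
			&bond_ethdev_parse_child_port_kvarg, child_ports);
}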
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae709..b2d5b171c712 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+ for (i = 0; i < internals->child_count; i++) {
+ ret = rte_flow_validate(internals->children[i].port_id, attr,
patterns, actions, err);
if (ret) {
RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
- " for slave %d with error %d", i, ret);
+ " for child %d with error %d", i, ret);
return ret;
}
}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
NULL, rte_strerror(ENOMEM));
return NULL;
}
- for (i = 0; i < internals->slave_count; i++) {
- flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+ for (i = 0; i < internals->child_count; i++) {
+ flow->flows[i] = rte_flow_create(internals->children[i].port_id,
attr, patterns, actions, err);
if (unlikely(flow->flows[i] == NULL)) {
- RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+ RTE_BOND_LOG(ERR, "Failed to create flow on child %d",
i);
goto err;
}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
return flow;
err:
- /* Destroy all slaves flows. */
- for (i = 0; i < internals->slave_count; i++) {
+ /* Destroy all children flows. */
+ for (i = 0; i < internals->child_count; i++) {
if (flow->flows[i] != NULL)
- rte_flow_destroy(internals->slaves[i].port_id,
+ rte_flow_destroy(internals->children[i].port_id,
flow->flows[i], err);
}
bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
int i;
int ret = 0;
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->child_count; i++) {
int lret;
if (unlikely(flow->flows[i] == NULL))
continue;
- lret = rte_flow_destroy(internals->slaves[i].port_id,
+ lret = rte_flow_destroy(internals->children[i].port_id,
flow->flows[i], err);
if (unlikely(lret != 0)) {
- RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+ RTE_BOND_LOG(ERR, "Failed to destroy flow on child %d:"
" %d", i, lret);
ret = lret;
}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
int ret = 0;
int lret;
- /* Destroy all bond flows from its slaves instead of flushing them to
+ /* Destroy all bond flows from its children instead of flushing them to
* keep the LACP flow or any other external flows.
*/
RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
ret = lret;
}
if (unlikely(ret != 0))
- RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+ RTE_BOND_LOG(ERR, "Failed to flush flow in all children");
return ret;
}
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *err)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_flow_query_count slave_count;
+ struct rte_flow_query_count child_count;
int i;
int ret;
count->bytes = 0;
count->hits = 0;
- rte_memcpy(&slave_count, count, sizeof(slave_count));
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_query(internals->slaves[i].port_id,
+ rte_memcpy(&child_count, count, sizeof(child_count));
+ for (i = 0; i < internals->child_count; i++) {
+ ret = rte_flow_query(internals->children[i].port_id,
flow->flows[i], action,
- &slave_count, err);
+ &child_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
- " slave %d: %d", i, ret);
+ " child %d: %d", i, ret);
return ret;
}
- count->bytes += slave_count.bytes;
- count->hits += slave_count.hits;
- slave_count.bytes = 0;
- slave_count.hits = 0;
+ count->bytes += child_count.bytes;
+ count->hits += child_count.hits;
+ child_count.bytes = 0;
+ child_count.hits = 0;
}
return 0;
}
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+ for (i = 0; i < internals->child_count; i++) {
+ ret = rte_flow_isolate(internals->children[i].port_id, set, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
- " for slave %d with error %d", i, ret);
+ " for child %d with error %d", i, ret);
internals->flow_isolated_valid = 0;
return ret;
}
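The fan-out above means an application only programs flow rules against the bonded port id; the PMD mirrors each rule onto every child. A hedged usage sketch, where the drop-all-ingress rule is just an arbitrary example:
#include <rte_flow.h>
/* Sketch only: one rte_flow_create() on the bonded port;
 * bond_flow_create() above then instantiates the rule on each child. */
static struct rte_flow *
install_drop_rule(uint16_t bond_port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	return rte_flow_create(bond_port_id, &attr, pattern, actions, &error);
}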
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b86..5c9da8d0d5f8 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,33 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct bond_dev_private *internals;
uint16_t num_rx_total = 0;
- uint16_t slave_count;
- uint16_t active_slave;
+ uint16_t child_count;
+ uint16_t active_child;
int i;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- slave_count = internals->active_slave_count;
- active_slave = bd_rx_q->active_slave;
+ child_count = internals->active_child_count;
+ active_child = bd_rx_q->active_child;
- for (i = 0; i < slave_count && nb_pkts; i++) {
- uint16_t num_rx_slave;
+ for (i = 0; i < child_count && nb_pkts; i++) {
+ uint16_t num_rx_child;
/* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave =
- rte_eth_rx_burst(internals->active_slaves[active_slave],
+ * from other children */
+ num_rx_child =
+ rte_eth_rx_burst(internals->active_children[active_child],
bd_rx_q->queue_id,
bufs + num_rx_total, nb_pkts);
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
- if (++active_slave >= slave_count)
- active_slave = 0;
+ num_rx_total += num_rx_child;
+ nb_pkts -= num_rx_child;
+ if (++active_child >= child_count)
+ active_child = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_child >= child_count)
+ bd_rx_q->active_child = 0;
return num_rx_total;
}
@@ -158,8 +158,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port) {
- struct rte_eth_dev_info slave_info;
+ uint16_t child_port) {
+ struct rte_eth_dev_info child_info;
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -177,29 +177,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
}
};
- int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+ int ret = rte_flow_validate(child_port, &flow_attr_8023ad,
flow_item_8023ad, actions, &error);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
- __func__, error.message, slave_port,
+ RTE_BOND_LOG(ERR, "%s: %s (child_port=%d queue_id=%d)",
+ __func__, error.message, child_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
- ret = rte_eth_dev_info_get(slave_port, &slave_info);
+ ret = rte_eth_dev_info_get(child_port, &child_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port, strerror(-ret));
+ __func__, child_port, strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
- slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+ if (child_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+ child_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
RTE_BOND_LOG(ERR,
- "%s: Slave %d capabilities doesn't allow allocating additional queues",
- __func__, slave_port);
+ "%s: Child %d capabilities doesn't allow allocating additional queues",
+ __func__, child_port);
return -1;
}
@@ -214,8 +214,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
uint16_t idx;
int ret;
- /* Verify if all slaves in bonding supports flow director and */
- if (internals->slave_count > 0) {
+ /* Verify that all children in the bond support flow director */
+ if (internals->child_count > 0) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
@@ -229,9 +229,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
- for (idx = 0; idx < internals->slave_count; idx++) {
+ for (idx = 0; idx < internals->child_count; idx++) {
if (bond_ethdev_8023ad_flow_verify(bond_dev,
- internals->slaves[idx].port_id) != 0)
+ internals->children[idx].port_id) != 0)
return -1;
}
}
@@ -240,7 +240,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
}
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t child_port) {
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +258,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
}
};
- internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+ internals->mode4.dedicated_queues.flow[child_port] = rte_flow_create(child_port,
&flow_attr_8023ad, flow_item_8023ad, actions, &error);
- if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+ if (internals->mode4.dedicated_queues.flow[child_port] == NULL) {
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
- "(slave_port=%d queue_id=%d)",
- error.message, slave_port,
+ "(child_port=%d queue_id=%d)",
+ error.message, child_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
@@ -304,10 +304,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
const uint16_t ether_type_slow_be =
rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint16_t slaves[RTE_MAX_ETHPORTS];
- uint16_t slave_count, idx;
+ uint16_t children[RTE_MAX_ETHPORTS];
+ uint16_t child_count, idx;
- uint8_t collecting; /* current slave collecting status */
+ uint8_t collecting; /* current child collecting status */
const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
uint8_t subtype;
@@ -315,24 +315,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
uint16_t j;
uint16_t k;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy child list to protect against child up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
+ child_count = internals->active_child_count;
+ memcpy(children, internals->active_children,
+ sizeof(internals->active_children[0]) * child_count);
- idx = bd_rx_q->active_slave;
- if (idx >= slave_count) {
- bd_rx_q->active_slave = 0;
+ idx = bd_rx_q->active_child;
+ if (idx >= child_count) {
+ bd_rx_q->active_child = 0;
idx = 0;
}
- for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+ for (i = 0; i < child_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+ collecting = ACTOR_STATE(&bond_mode_8023ad_ports[children[idx]],
COLLECTING);
- /* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+ /* Read packets from this child */
+ num_rx_total += rte_eth_rx_burst(children[idx], bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +348,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is slow packet but no dedicated rxq is present,
- * - slave is not in collecting state,
+ * - child is not in collecting state,
* - bonding interface is not in promiscuous mode and
* packet address isn't in mac_addrs array:
* - packet is unicast,
@@ -367,7 +367,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
!allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
- internals, slaves[idx], bufs[j]);
+ internals, children[idx], bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -380,12 +380,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
} else
j++;
}
- if (unlikely(++idx == slave_count))
+ if (unlikely(++idx == child_count))
idx = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_child >= child_count)
+ bd_rx_q->active_child = 0;
return num_rx_total;
}
@@ -583,59 +583,59 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
- uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+ struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+ uint16_t child_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
- uint16_t num_of_slaves;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_children;
+ uint16_t children[RTE_MAX_ETHPORTS];
- uint16_t num_tx_total = 0, num_tx_slave;
+ uint16_t num_tx_total = 0, num_tx_child;
- static int slave_idx = 0;
- int i, cslave_idx = 0, tx_fail_total = 0;
+ static int child_idx = 0;
+ int i, cchild_idx = 0, tx_fail_total = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy child list to protect against child up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_children = internals->active_child_count;
+ memcpy(children, internals->active_children,
+ sizeof(internals->active_children[0]) * num_of_children);
- if (num_of_slaves < 1)
+ if (num_of_children < 1)
return num_tx_total;
- /* Populate slaves mbuf with which packets are to be sent on it */
+ /* Populate each child's mbuf array with the packets to be sent on it */
for (i = 0; i < nb_pkts; i++) {
- cslave_idx = (slave_idx + i) % num_of_slaves;
- slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+ cchild_idx = (child_idx + i) % num_of_children;
+ child_bufs[cchild_idx][(child_nb_pkts[cchild_idx])++] = bufs[i];
}
- /* increment current slave index so the next call to tx burst starts on the
- * next slave */
- slave_idx = ++cslave_idx;
+ /* increment current child index so the next call to tx burst starts on the
+ * next child */
+ child_idx = ++cchild_idx;
- /* Send packet burst on each slave device */
- for (i = 0; i < num_of_slaves; i++) {
- if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_prepare(slaves[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_pkts[i]);
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- slave_bufs[i], num_tx_slave);
+ /* Send packet burst on each child device */
+ for (i = 0; i < num_of_children; i++) {
+ if (child_nb_pkts[i] > 0) {
+ num_tx_child = rte_eth_tx_prepare(children[i],
+ bd_tx_q->queue_id, child_bufs[i],
+ child_nb_pkts[i]);
+ num_tx_child = rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
+ child_bufs[i], num_tx_child);
/* if tx burst fails move packets to end of bufs */
- if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
- int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+ if (unlikely(num_tx_child < child_nb_pkts[i])) {
+ int tx_fail_child = child_nb_pkts[i] - num_tx_child;
- tx_fail_total += tx_fail_slave;
+ tx_fail_total += tx_fail_child;
memcpy(&bufs[nb_pkts - tx_fail_total],
- &slave_bufs[i][num_tx_slave],
- tx_fail_slave * sizeof(bufs[0]));
+ &child_bufs[i][num_tx_child],
+ tx_fail_child * sizeof(bufs[0]));
}
- num_tx_total += num_tx_slave;
+ num_tx_total += num_tx_child;
}
}
@@ -653,7 +653,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
+ if (internals->active_child_count < 1)
return 0;
nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +699,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t child_count, uint16_t *children)
{
struct rte_ether_hdr *eth_hdr;
uint32_t hash;
@@ -710,13 +710,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i] = (hash ^= hash >> 8) % slave_count;
+ children[i] = (hash ^= hash >> 8) % child_count;
}
}
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t child_count, uint16_t *children)
{
uint16_t i;
struct rte_ether_hdr *eth_hdr;
@@ -748,13 +748,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ children[i] = hash % child_count;
}
}
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t child_count, uint16_t *children)
{
struct rte_ether_hdr *eth_hdr;
uint16_t proto;
@@ -822,30 +822,30 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ children[i] = hash % child_count;
}
}
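All three xmit-policy hashes above end the same way: the per-packet hash is folded and reduced modulo the active child count, which is what maps a packet slot to a child index. A toy sketch of just that mapping; the hash value here is a stand-in, not the driver's ether_hash()/L23/L34 logic:
#include <stdint.h>
/* Toy illustration: fold an arbitrary per-packet hash and map it onto
 * one of child_count children, as the burst_xmit_*_hash helpers do. */
static uint16_t
pick_child(uint32_t pkt_hash, uint16_t child_count)
{
	pkt_hash ^= pkt_hash >> 16;
	pkt_hash ^= pkt_hash >> 8;
	return (uint16_t)(pkt_hash % child_count);
}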
-struct bwg_slave {
+struct bwg_child {
uint64_t bwg_left_int;
uint64_t bwg_left_remainder;
- uint16_t slave;
+ uint16_t child;
};
void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_child(struct bond_dev_private *internals) {
int i;
- for (i = 0; i < internals->active_slave_count; i++) {
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_child_count; i++) {
+ tlb_last_obytets[internals->active_children[i]] = 0;
}
}
static int
bandwidth_cmp(const void *a, const void *b)
{
- const struct bwg_slave *bwg_a = a;
- const struct bwg_slave *bwg_b = b;
+ const struct bwg_child *bwg_a = a;
+ const struct bwg_child *bwg_b = b;
int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +863,14 @@ bandwidth_cmp(const void *a, const void *b)
static void
bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
- struct bwg_slave *bwg_slave)
+ struct bwg_child *bwg_child)
{
struct rte_eth_link link_status;
int ret;
ret = rte_eth_link_get_nowait(port_id, &link_status);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Child (port %u) link get failed: %s",
port_id, rte_strerror(-ret));
return;
}
@@ -878,51 +878,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
if (link_bwg == 0)
return;
link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
- bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
- bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+ bwg_child->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
+ bwg_child->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
}
static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_child_cb(void *arg)
{
struct bond_dev_private *internals = arg;
- struct rte_eth_stats slave_stats;
- struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ struct rte_eth_stats child_stats;
+ struct bwg_child bwg_array[RTE_MAX_ETHPORTS];
+ uint16_t child_count;
uint64_t tx_bytes;
uint8_t update_stats = 0;
- uint16_t slave_id;
+ uint16_t child_id;
uint16_t i;
- internals->slave_update_idx++;
+ internals->child_update_idx++;
- if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+ if (internals->child_update_idx >= REORDER_PERIOD_MS)
update_stats = 1;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- rte_eth_stats_get(slave_id, &slave_stats);
- tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
- bandwidth_left(slave_id, tx_bytes,
- internals->slave_update_idx, &bwg_array[i]);
- bwg_array[i].slave = slave_id;
+ for (i = 0; i < internals->active_child_count; i++) {
+ child_id = internals->active_children[i];
+ rte_eth_stats_get(child_id, &child_stats);
+ tx_bytes = child_stats.obytes - tlb_last_obytets[child_id];
+ bandwidth_left(child_id, tx_bytes,
+ internals->child_update_idx, &bwg_array[i]);
+ bwg_array[i].child = child_id;
if (update_stats) {
- tlb_last_obytets[slave_id] = slave_stats.obytes;
+ tlb_last_obytets[child_id] = child_stats.obytes;
}
}
if (update_stats == 1)
- internals->slave_update_idx = 0;
+ internals->child_update_idx = 0;
- slave_count = i;
- qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
- for (i = 0; i < slave_count; i++)
- internals->tlb_slaves_order[i] = bwg_array[i].slave;
+ child_count = i;
+ qsort(bwg_array, child_count, sizeof(bwg_array[0]), bandwidth_cmp);
+ for (i = 0; i < child_count; i++)
+ internals->tlb_children_order[i] = bwg_array[i].child;
- rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+ rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_child_cb,
(struct bond_dev_private *)internals);
}
@@ -937,29 +937,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0, num_tx_prep;
uint16_t i, j;
- uint16_t num_of_slaves = internals->active_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_children = internals->active_child_count;
+ uint16_t children[RTE_MAX_ETHPORTS];
struct rte_ether_hdr *ether_hdr;
- struct rte_ether_addr primary_slave_addr;
- struct rte_ether_addr active_slave_addr;
+ struct rte_ether_addr primary_child_addr;
+ struct rte_ether_addr active_child_addr;
- if (num_of_slaves < 1)
+ if (num_of_children < 1)
return num_tx_total;
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+ memcpy(children, internals->tlb_children_order,
+ sizeof(internals->tlb_children_order[0]) * num_of_children);
- rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+ rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_child_addr);
if (nb_pkts > 3) {
for (i = 0; i < 3; i++)
rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
}
- for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ for (i = 0; i < num_of_children; i++) {
+ rte_eth_macaddr_get(children[i], &active_child_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +967,17 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
if (rte_is_same_ether_addr(&ether_hdr->src_addr,
- &primary_slave_addr))
- rte_ether_addr_copy(&active_slave_addr,
+ &primary_child_addr))
+ rte_ether_addr_copy(&active_child_addr,
&ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
- mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+ mode6_debug("TX IPv4:", ether_hdr, children[i], &burstnumberTX);
#endif
}
- num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+ num_tx_prep = rte_eth_tx_prepare(children[i], bd_tx_q->queue_id,
bufs + num_tx_total, nb_pkts - num_tx_total);
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_total += rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
bufs + num_tx_total, num_tx_prep);
if (num_tx_total == nb_pkts)
@@ -990,13 +990,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
void
bond_tlb_disable(struct bond_dev_private *internals)
{
- rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+ rte_eal_alarm_cancel(bond_ethdev_update_tlb_child_cb, internals);
}
void
bond_tlb_enable(struct bond_dev_private *internals)
{
- bond_ethdev_update_tlb_slave_cb(internals);
+ bond_ethdev_update_tlb_child_cb(internals);
}
static uint16_t
@@ -1011,11 +1011,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct client_data *client_info;
/*
- * We create transmit buffers for every slave and one additional to send
+ * We create transmit buffers for every child and one additional to send
* through tlb. In worst case every packet will be send on one port.
*/
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
- uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+ struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+ uint16_t child_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
* We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1029,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_send, num_not_send = 0;
uint16_t num_tx_total = 0;
- uint16_t slave_idx;
+ uint16_t child_idx;
int i, j;
@@ -1040,19 +1040,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
offset = get_vlan_offset(eth_h, &ether_type);
if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
- slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+ child_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, ð_h->src_addr);
+ rte_eth_macaddr_get(child_idx, ð_h->src_addr);
- /* Add packet to slave tx buffer */
- slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
- slave_bufs_pkts[slave_idx]++;
+ /* Add packet to child tx buffer */
+ child_bufs[child_idx][child_bufs_pkts[child_idx]] = bufs[i];
+ child_bufs_pkts[child_idx]++;
} else {
/* If packet is not ARP, send it with TLB policy */
- slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+ child_bufs[RTE_MAX_ETHPORTS][child_bufs_pkts[RTE_MAX_ETHPORTS]] =
bufs[i];
- slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+ child_bufs_pkts[RTE_MAX_ETHPORTS]++;
}
}
@@ -1062,7 +1062,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- /* Allocate new packet to send ARP update on current slave */
+ /* Allocate new packet to send ARP update on current child */
upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
if (upd_pkt == NULL) {
RTE_BOND_LOG(ERR,
@@ -1076,36 +1076,36 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
upd_pkt->data_len = pkt_size;
upd_pkt->pkt_len = pkt_size;
- slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+ child_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
internals);
/* Add packet to update tx buffer */
- update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
- update_bufs_pkts[slave_idx]++;
+ update_bufs[child_idx][update_bufs_pkts[child_idx]] = upd_pkt;
+ update_bufs_pkts[child_idx]++;
}
}
internals->mode6.ntt = 0;
}
- /* Send ARP packets on proper slaves */
+ /* Send ARP packets on proper children */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (slave_bufs_pkts[i] > 0) {
+ if (child_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
- slave_bufs[i], slave_bufs_pkts[i]);
+ child_bufs[i], child_bufs_pkts[i]);
num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
- slave_bufs[i], num_send);
- for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+ child_bufs[i], num_send);
+ for (j = 0; j < child_bufs_pkts[i] - num_send; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[i][nb_pkts - 1 - j];
+ child_bufs[i][nb_pkts - 1 - j];
}
num_tx_total += num_send;
- num_not_send += slave_bufs_pkts[i] - num_send;
+ num_not_send += child_bufs_pkts[i] - num_send;
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
/* Print TX stats including update packets */
- for (j = 0; j < slave_bufs_pkts[i]; j++) {
- eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+ for (j = 0; j < child_bufs_pkts[i]; j++) {
+ eth_h = rte_pktmbuf_mtod(child_bufs[i][j],
struct rte_ether_hdr *);
mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
}
@@ -1113,7 +1113,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
}
}
- /* Send update packets on proper slaves */
+ /* Send update packets on proper children */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
if (update_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1134,14 +1134,14 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
}
/* Send non-ARP packets using tlb policy */
- if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+ if (child_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
num_send = bond_ethdev_tx_burst_tlb(queue,
- slave_bufs[RTE_MAX_ETHPORTS],
- slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+ child_bufs[RTE_MAX_ETHPORTS],
+ child_bufs_pkts[RTE_MAX_ETHPORTS]);
- for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+ for (j = 0; j < child_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+ child_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
}
num_tx_total += num_send;
@@ -1152,59 +1152,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static inline uint16_t
tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
- uint16_t *slave_port_ids, uint16_t slave_count)
+ uint16_t *child_port_ids, uint16_t child_count)
{
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- /* Array to sort mbufs for transmission on each slave into */
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
- /* Number of mbufs for transmission on each slave */
- uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
- /* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[nb_bufs];
+ /* Array to sort mbufs for transmission on each child into */
+ struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+ /* Number of mbufs for transmission on each child */
+ uint16_t child_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+ /* Mapping array generated by hash function to map mbufs to children */
+ uint16_t bufs_child_port_idxs[nb_bufs];
- uint16_t slave_tx_count;
+ uint16_t child_tx_count;
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
uint16_t i;
/*
- * Populate slaves mbuf with the packets which are to be sent on it
- * selecting output slave using hash based on xmit policy
+ * Populate each child's mbuf array with the packets to be sent on it,
+ * selecting the output child using a hash based on the xmit policy
*/
- internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
- bufs_slave_port_idxs);
+ internals->burst_xmit_hash(bufs, nb_bufs, child_count,
+ bufs_child_port_idxs);
for (i = 0; i < nb_bufs; i++) {
- /* Populate slave mbuf arrays with mbufs for that slave. */
- uint16_t slave_idx = bufs_slave_port_idxs[i];
+ /* Populate child mbuf arrays with mbufs for that child. */
+ uint16_t child_idx = bufs_child_port_idxs[i];
- slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+ child_bufs[child_idx][child_nb_bufs[child_idx]++] = bufs[i];
}
- /* Send packet burst on each slave device */
- for (i = 0; i < slave_count; i++) {
- if (slave_nb_bufs[i] == 0)
+ /* Send packet burst on each child device */
+ for (i = 0; i < child_count; i++) {
+ if (child_nb_bufs[i] == 0)
continue;
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_bufs[i]);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_tx_count);
+ child_tx_count = rte_eth_tx_prepare(child_port_ids[i],
+ bd_tx_q->queue_id, child_bufs[i],
+ child_nb_bufs[i]);
+ child_tx_count = rte_eth_tx_burst(child_port_ids[i],
+ bd_tx_q->queue_id, child_bufs[i],
+ child_tx_count);
- total_tx_count += slave_tx_count;
+ total_tx_count += child_tx_count;
/* If tx burst fails move packets to end of bufs */
- if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
- int slave_tx_fail_count = slave_nb_bufs[i] -
- slave_tx_count;
- total_tx_fail_count += slave_tx_fail_count;
+ if (unlikely(child_tx_count < child_nb_bufs[i])) {
+ int child_tx_fail_count = child_nb_bufs[i] -
+ child_tx_count;
+ total_tx_fail_count += child_tx_fail_count;
memcpy(&bufs[nb_bufs - total_tx_fail_count],
- &slave_bufs[i][slave_tx_count],
- slave_tx_fail_count * sizeof(bufs[0]));
+ &child_bufs[i][child_tx_count],
+ child_tx_fail_count * sizeof(bufs[0]));
}
}
@@ -1218,23 +1218,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t child_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t child_count;
if (unlikely(nb_bufs == 0))
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy child list to protect against child up/down changes during tx
* bursting
*/
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ child_count = internals->active_child_count;
+ if (unlikely(child_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
- return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
- slave_count);
+ memcpy(child_port_ids, internals->active_children,
+ sizeof(child_port_ids[0]) * child_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, child_port_ids,
+ child_count);
}
static inline uint16_t
@@ -1244,31 +1244,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t child_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t child_count;
- uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t dist_slave_count;
+ uint16_t dist_child_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t dist_child_count;
- uint16_t slave_tx_count;
+ uint16_t child_tx_count;
uint16_t i;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy child list to protect against child up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ child_count = internals->active_child_count;
+ if (unlikely(child_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
+ memcpy(child_port_ids, internals->active_children,
+ sizeof(child_port_ids[0]) * child_count);
if (dedicated_txq)
goto skip_tx_ring;
/* Check for LACP control packets and send if available */
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ for (i = 0; i < child_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[child_port_ids[i]];
struct rte_mbuf *ctrl_pkt = NULL;
if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1276,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (rte_ring_dequeue(port->tx_ring,
(void **)&ctrl_pkt) != -ENOENT) {
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+ child_tx_count = rte_eth_tx_prepare(child_port_ids[i],
bd_tx_q->queue_id, &ctrl_pkt, 1);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+ child_tx_count = rte_eth_tx_burst(child_port_ids[i],
+ bd_tx_q->queue_id, &ctrl_pkt, child_tx_count);
/*
* re-enqueue LAG control plane packets to buffering
* ring if transmission fails so the packet isn't lost.
*/
- if (slave_tx_count != 1)
+ if (child_tx_count != 1)
rte_ring_enqueue(port->tx_ring, ctrl_pkt);
}
}
@@ -1293,20 +1293,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (unlikely(nb_bufs == 0))
return 0;
- dist_slave_count = 0;
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ dist_child_count = 0;
+ for (i = 0; i < child_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[child_port_ids[i]];
if (ACTOR_STATE(port, DISTRIBUTING))
- dist_slave_port_ids[dist_slave_count++] =
- slave_port_ids[i];
+ dist_child_port_ids[dist_child_count++] =
+ child_port_ids[i];
}
- if (unlikely(dist_slave_count < 1))
+ if (unlikely(dist_child_count < 1))
return 0;
- return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
- dist_slave_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, dist_child_port_ids,
+ dist_child_count);
}
static uint16_t
@@ -1330,78 +1330,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t children[RTE_MAX_ETHPORTS];
uint8_t tx_failed_flag = 0;
- uint16_t num_of_slaves;
+ uint16_t num_of_children;
uint16_t max_nb_of_tx_pkts = 0;
- int slave_tx_total[RTE_MAX_ETHPORTS];
- int i, most_successful_tx_slave = -1;
+ int child_tx_total[RTE_MAX_ETHPORTS];
+ int i, most_successful_tx_child = -1;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy child list to protect against child up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_children = internals->active_child_count;
+ memcpy(children, internals->active_children,
+ sizeof(internals->active_children[0]) * num_of_children);
- if (num_of_slaves < 1)
+ if (num_of_children < 1)
return 0;
/* It is rare that bond different PMDs together, so just call tx-prepare once */
- nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+ nb_pkts = rte_eth_tx_prepare(children[0], bd_tx_q->queue_id, bufs, nb_pkts);
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
- rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+ rte_pktmbuf_refcnt_update(bufs[i], num_of_children - 1);
- /* Transmit burst on each active slave */
- for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ /* Transmit burst on each active child */
+ for (i = 0; i < num_of_children; i++) {
+ child_tx_total[i] = rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
bufs, nb_pkts);
- if (unlikely(slave_tx_total[i] < nb_pkts))
+ if (unlikely(child_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
- /* record the value and slave index for the slave which transmits the
+ /* record the value and child index for the child which transmits the
* maximum number of packets */
- if (slave_tx_total[i] > max_nb_of_tx_pkts) {
- max_nb_of_tx_pkts = slave_tx_total[i];
- most_successful_tx_slave = i;
+ if (child_tx_total[i] > max_nb_of_tx_pkts) {
+ max_nb_of_tx_pkts = child_tx_total[i];
+ most_successful_tx_child = i;
}
}
- /* if slaves fail to transmit packets from burst, the calling application
+ /* if children fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
- * handle failures of all packets except those of the most successful slave
+ * handle failures of all packets except those of the most successful child
*/
if (unlikely(tx_failed_flag))
- for (i = 0; i < num_of_slaves; i++)
- if (i != most_successful_tx_slave)
- while (slave_tx_total[i] < nb_pkts)
- rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+ for (i = 0; i < num_of_children; i++)
+ if (i != most_successful_tx_child)
+ while (child_tx_total[i] < nb_pkts)
+ rte_pktmbuf_free(bufs[child_tx_total[i]++]);
return max_nb_of_tx_pkts;
}
static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *child_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
/**
* If in mode 4 then save the link properties of the first
- * slave, all subsequent slaves must match these properties
+ * child, all subsequent children must match these properties
*/
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.child_link;
- bond_link->link_autoneg = slave_link->link_autoneg;
- bond_link->link_duplex = slave_link->link_duplex;
- bond_link->link_speed = slave_link->link_speed;
+ bond_link->link_autoneg = child_link->link_autoneg;
+ bond_link->link_duplex = child_link->link_duplex;
+ bond_link->link_speed = child_link->link_speed;
} else {
/**
* In any other mode the link properties are set to default
@@ -1414,16 +1414,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
static int
link_properties_valid(struct rte_eth_dev *ethdev,
- struct rte_eth_link *slave_link)
+ struct rte_eth_link *child_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.child_link;
- if (bond_link->link_duplex != slave_link->link_duplex ||
- bond_link->link_autoneg != slave_link->link_autoneg ||
- bond_link->link_speed != slave_link->link_speed)
+ if (bond_link->link_duplex != child_link->link_duplex ||
+ bond_link->link_autoneg != child_link->link_autoneg ||
+ bond_link->link_speed != child_link->link_speed)
return -1;
}
@@ -1480,11 +1480,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
static const struct rte_ether_addr null_mac_addr;
/*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the child
*/
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+child_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t child_port_id)
{
int i, ret;
struct rte_ether_addr *mac_addr;
@@ -1494,11 +1494,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+ ret = rte_eth_dev_mac_addr_add(child_port_id, mac_addr, 0);
if (ret < 0) {
/* rollback */
for (i--; i > 0; i--)
- rte_eth_dev_mac_addr_remove(slave_port_id,
+ rte_eth_dev_mac_addr_remove(child_port_id,
&bonded_eth_dev->data->mac_addrs[i]);
return ret;
}
@@ -1508,11 +1508,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
/*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the child
*/
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+child_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t child_port_id)
{
int i, rc, ret;
struct rte_ether_addr *mac_addr;
@@ -1523,7 +1523,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+ ret = rte_eth_dev_mac_addr_remove(child_port_id, mac_addr);
/* save only the first error */
if (ret < 0 && rc == 0)
rc = ret;
@@ -1533,26 +1533,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_children_update(struct rte_eth_dev *bonded_eth_dev)
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
bool set;
int i;
- /* Update slave devices MAC addresses */
- if (internals->slave_count < 1)
+ /* Update child devices MAC addresses */
+ if (internals->child_count < 1)
return -1;
switch (internals->mode) {
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->child_count; i++) {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
+ internals->children[i].port_id,
bonded_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->children[i].port_id);
return -1;
}
}
@@ -1565,8 +1565,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
case BONDING_MODE_ALB:
default:
set = true;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id ==
+ for (i = 0; i < internals->child_count; i++) {
+ if (internals->children[i].port_id ==
internals->current_primary_port) {
if (rte_eth_dev_default_mac_addr_set(
internals->current_primary_port,
@@ -1577,10 +1577,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
}
} else {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
- &internals->slaves[i].persisted_mac_addr)) {
+ internals->children[i].port_id,
+ &internals->children[i].persisted_mac_addr)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->children[i].port_id);
}
}
}
@@ -1655,55 +1655,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+child_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *child_eth_dev)
{
int errval = 0;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+ struct port *port = &bond_mode_8023ad_ports[child_eth_dev->data->port_id];
if (port->slow_pool == NULL) {
char mem_name[256];
- int slave_id = slave_eth_dev->data->port_id;
+ int child_id = child_eth_dev->data->port_id;
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
- slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "child_port%u_slow_pool",
+ child_id);
port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
- slave_eth_dev->data->numa_node);
+ child_eth_dev->data->numa_node);
/* Any memory allocation failure in initialization is critical because
* resources can't be freed, so reinitialization is impossible. */
if (port->slow_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Child %u: Failed to create memory pool '%s': %s\n",
+ child_id, mem_name, rte_strerror(rte_errno));
}
}
if (internals->mode4.dedicated_queues.enabled == 1) {
/* Configure slow Rx queue */
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_rx_queue_setup(child_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid, 128,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(child_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ child_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid,
errval);
return errval;
}
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_tx_queue_setup(child_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid, 512,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(child_eth_dev->data->port_id),
NULL);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ child_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid,
errval);
return errval;
@@ -1713,8 +1713,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
}
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+child_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *child_eth_dev)
{
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
@@ -1723,45 +1723,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- /* Stop slave */
- errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+ /* Stop child */
+ errval = rte_eth_dev_stop(child_eth_dev->data->port_id);
if (errval != 0)
RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
- /* Enable interrupts on slave device if supported */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+ /* Enable interrupts on child device if supported */
+ if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ child_eth_dev->data->dev_conf.intr_conf.lsc = 1;
- /* If RSS is enabled for bonding, try to enable it for slaves */
+ /* If RSS is enabled for bonding, try to enable it for children */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ child_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
} else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+ child_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
}
- slave_eth_dev->data->dev_conf.rxmode.mtu =
+ child_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- slave_eth_dev->data->dev_conf.link_speeds =
+ child_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads =
+ child_eth_dev->data->dev_conf.txmode.offloads =
bonded_eth_dev->data->dev_conf.txmode.offloads;
- slave_eth_dev->data->dev_conf.rxmode.offloads =
+ child_eth_dev->data->dev_conf.rxmode.offloads =
bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1775,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* Configure device */
- errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_configure(child_eth_dev->data->port_id,
nb_rx_queues, nb_tx_queues,
- &(slave_eth_dev->data->dev_conf));
+ &(child_eth_dev->data->dev_conf));
if (errval != 0) {
- RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ RTE_BOND_LOG(ERR, "Cannot configure child device: port %u, err (%d)",
+ child_eth_dev->data->port_id, errval);
return errval;
}
- errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_set_mtu(child_eth_dev->data->port_id,
bonded_eth_dev->data->mtu);
if (errval != 0 && errval != -ENOTSUP) {
RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
return errval;
}
return 0;
}
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+child_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *child_eth_dev)
{
int errval = 0;
struct bond_rx_queue *bd_rx_q;
@@ -1809,14 +1809,14 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_rx_queue_setup(child_eth_dev->data->port_id, q_id,
bd_rx_q->nb_rx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(child_eth_dev->data->port_id),
&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ child_eth_dev->data->port_id, q_id, errval);
return errval;
}
}
@@ -1825,58 +1825,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_tx_queue_setup(child_eth_dev->data->port_id, q_id,
bd_tx_q->nb_tx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(child_eth_dev->data->port_id),
&bd_tx_q->tx_conf);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ child_eth_dev->data->port_id, q_id, errval);
return errval;
}
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
- if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+ if (child_configure_slow_queue(bonded_eth_dev, child_eth_dev)
!= 0)
return errval;
errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ child_eth_dev->data->port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
return errval;
}
- if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
- errval = rte_flow_destroy(slave_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+ if (internals->mode4.dedicated_queues.flow[child_eth_dev->data->port_id] != NULL) {
+ errval = rte_flow_destroy(child_eth_dev->data->port_id,
+ internals->mode4.dedicated_queues.flow[child_eth_dev->data->port_id],
&flow_error);
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
}
}
/* Start device */
- errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+ errval = rte_eth_dev_start(child_eth_dev->data->port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
return -1;
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ child_eth_dev->data->port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
return errval;
}
}
@@ -1888,27 +1888,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
internals = bonded_eth_dev->data->dev_private;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+ for (i = 0; i < internals->child_count; i++) {
+ if (internals->children[i].port_id == child_eth_dev->data->port_id) {
errval = rte_eth_dev_rss_reta_update(
- slave_eth_dev->data->port_id,
+ child_eth_dev->data->port_id,
&internals->reta_conf[0],
- internals->slaves[i].reta_size);
+ internals->children[i].reta_size);
if (errval != 0) {
RTE_BOND_LOG(WARNING,
- "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+ "rte_eth_dev_rss_reta_update on child port %d fails (err %d)."
" RSS Configuration for bonding may be inconsistent.",
- slave_eth_dev->data->port_id, errval);
+ child_eth_dev->data->port_id, errval);
}
break;
}
}
}
- /* If lsc interrupt is set, check initial slave's link status */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
- slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
- bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+ /* If lsc interrupt is set, check initial child's link status */
+ if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ child_eth_dev->dev_ops->link_update(child_eth_dev, 0);
+ bond_ethdev_lsc_event_callback(child_eth_dev->data->port_id,
RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
NULL);
}
@@ -1917,75 +1917,75 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
}
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+child_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *child_eth_dev)
{
uint16_t i;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id ==
- slave_eth_dev->data->port_id)
+ for (i = 0; i < internals->child_count; i++)
+ if (internals->children[i].port_id ==
+ child_eth_dev->data->port_id)
break;
- if (i < (internals->slave_count - 1)) {
+ if (i < (internals->child_count - 1)) {
struct rte_flow *flow;
- memmove(&internals->slaves[i], &internals->slaves[i + 1],
- sizeof(internals->slaves[0]) *
- (internals->slave_count - i - 1));
+ memmove(&internals->children[i], &internals->children[i + 1],
+ sizeof(internals->children[0]) *
+ (internals->child_count - i - 1));
TAILQ_FOREACH(flow, &internals->flow_list, next) {
memmove(&flow->flows[i], &flow->flows[i + 1],
sizeof(flow->flows[0]) *
- (internals->slave_count - i - 1));
- flow->flows[internals->slave_count - 1] = NULL;
+ (internals->child_count - i - 1));
+ flow->flows[internals->child_count - 1] = NULL;
}
}
- internals->slave_count--;
+ internals->child_count--;
- /* force reconfiguration of slave interfaces */
- rte_eth_dev_internal_reset(slave_eth_dev);
+ /* force reconfiguration of child interfaces */
+ rte_eth_dev_internal_reset(child_eth_dev);
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_child_link_status_change_monitor(void *cb_arg);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+child_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *child_eth_dev)
{
- struct bond_slave_details *slave_details =
- &internals->slaves[internals->slave_count];
+ struct bond_child_details *child_details =
+ &internals->children[internals->child_count];
- slave_details->port_id = slave_eth_dev->data->port_id;
- slave_details->last_link_status = 0;
+ child_details->port_id = child_eth_dev->data->port_id;
+ child_details->last_link_status = 0;
- /* Mark slave devices that don't support interrupts so we can
+ /* Mark child devices that don't support interrupts so we can
* compensate when we start the bond
*/
- if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
- slave_details->link_status_poll_enabled = 1;
+ if (!(child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ child_details->link_status_poll_enabled = 1;
}
- slave_details->link_status_wait_to_complete = 0;
+ child_details->link_status_wait_to_complete = 0;
/* clean tlb_last_obytes when adding port for bonding device */
- memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+ memcpy(&(child_details->persisted_mac_addr), child_eth_dev->data->mac_addrs,
sizeof(struct rte_ether_addr));
}
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id)
+ uint16_t child_port_id)
{
int i;
- if (internals->active_slave_count < 1)
- internals->current_primary_port = slave_port_id;
+ if (internals->active_child_count < 1)
+ internals->current_primary_port = child_port_id;
else
- /* Search bonded device slave ports for new proposed primary port */
- for (i = 0; i < internals->active_slave_count; i++) {
- if (internals->active_slaves[i] == slave_port_id)
- internals->current_primary_port = slave_port_id;
+ /* Search bonded device child ports for new proposed primary port */
+ for (i = 0; i < internals->active_child_count; i++) {
+ if (internals->active_children[i] == child_port_id)
+ internals->current_primary_port = child_port_id;
}
}
@@ -1998,9 +1998,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals;
int i;
- /* slave eth dev will be started by bonded device */
+ /* child eth dev will be started by bonded device */
if (check_for_bonded_ethdev(eth_dev)) {
- RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+ RTE_BOND_LOG(ERR, "User tried to explicitly start a child eth_dev (%d)",
eth_dev->data->port_id);
return -1;
}
@@ -2010,17 +2010,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- if (internals->slave_count == 0) {
- RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+ if (internals->child_count == 0) {
+ RTE_BOND_LOG(ERR, "Cannot start port since there are no child devices");
goto out_err;
}
if (internals->user_defined_mac == 0) {
struct rte_ether_addr *new_mac_addr = NULL;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == internals->primary_port)
- new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+ for (i = 0; i < internals->child_count; i++)
+ if (internals->children[i].port_id == internals->primary_port)
+ new_mac_addr = &internals->children[i].persisted_mac_addr;
if (new_mac_addr == NULL)
goto out_err;
@@ -2042,28 +2042,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
}
- /* Reconfigure each slave device if starting bonded device */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(eth_dev, slave_ethdev) != 0) {
+ /* Reconfigure each child device if starting bonded device */
+ for (i = 0; i < internals->child_count; i++) {
+ struct rte_eth_dev *child_ethdev =
+ &(rte_eth_devices[internals->children[i].port_id]);
+ if (child_configure(eth_dev, child_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to reconfigure slave device (%d)",
+ "bonded port (%d) failed to reconfigure child device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->children[i].port_id);
goto out_err;
}
- if (slave_start(eth_dev, slave_ethdev) != 0) {
+ if (child_start(eth_dev, child_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to start slave device (%d)",
+ "bonded port (%d) failed to start child device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->children[i].port_id);
goto out_err;
}
- /* We will need to poll for link status if any slave doesn't
+ /* We will need to poll for link status if any child doesn't
* support interrupts
*/
- if (internals->slaves[i].link_status_poll_enabled)
+ if (internals->children[i].link_status_poll_enabled)
internals->link_status_polling_enabled = 1;
}
@@ -2071,12 +2071,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
if (internals->link_status_polling_enabled) {
rte_eal_alarm_set(
internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor,
+ bond_ethdev_child_link_status_change_monitor,
(void *)&rte_eth_devices[internals->port_id]);
}
- /* Update all slave devices MACs*/
- if (mac_address_slaves_update(eth_dev) != 0)
+ /* Update all child devices' MACs */
+ if (mac_address_children_update(eth_dev) != 0)
goto out_err;
if (internals->user_defined_primary_port)
@@ -2132,8 +2132,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
bond_mode_8023ad_stop(eth_dev);
/* Discard all messages to/from mode 4 state machines */
- for (i = 0; i < internals->active_slave_count; i++) {
- port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+ for (i = 0; i < internals->active_child_count; i++) {
+ port = &bond_mode_8023ad_ports[internals->active_children[i]];
RTE_ASSERT(port->rx_ring != NULL);
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2148,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
if (internals->mode == BONDING_MODE_TLB ||
internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
- for (i = 0; i < internals->active_slave_count; i++)
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_child_count; i++)
+ tlb_last_obytets[internals->active_children[i]] = 0;
}
eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t slave_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ uint16_t child_id = internals->children[i].port_id;
- internals->slaves[i].last_link_status = 0;
- ret = rte_eth_dev_stop(slave_id);
+ internals->children[i].last_link_status = 0;
+ ret = rte_eth_dev_stop(child_id);
if (ret != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_id);
+ child_id);
return ret;
}
- /* active slaves need to be deactivated. */
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) !=
- internals->active_slave_count)
- deactivate_slave(eth_dev, slave_id);
+ /* active children need to be deactivated. */
+ if (find_child_by_id(internals->active_children,
+ internals->active_child_count, child_id) !=
+ internals->active_child_count)
+ deactivate_child(eth_dev, child_id);
}
return 0;
@@ -2188,8 +2188,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
/* Flush flows in all back-end devices before removing them */
bond_flow_ops.flush(dev, &ferror);
- while (internals->slave_count != skipped) {
- uint16_t port_id = internals->slaves[skipped].port_id;
+ while (internals->child_count != skipped) {
+ uint16_t port_id = internals->children[skipped].port_id;
int ret;
ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2203,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
continue;
}
- if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+ if (rte_eth_bond_child_remove(bond_port_id, port_id) != 0) {
RTE_BOND_LOG(ERR,
"Failed to remove port %d from bonded device %s",
port_id, dev->device->name);
@@ -2246,7 +2246,7 @@ static int
bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct bond_slave_details slave;
+ struct bond_child_details child;
int ret;
uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2259,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETHER_MAX_JUMBO_FRAME_LEN;
/* Max number of tx/rx queues that the bonded device can support is the
- * minimum values of the bonded slaves, as all slaves must be capable
+ * minimum values of the bonded children, as all children must be capable
* of supporting the same number of tx/rx queues.
*/
- if (internals->slave_count > 0) {
- struct rte_eth_dev_info slave_info;
+ if (internals->child_count > 0) {
+ struct rte_eth_dev_info child_info;
uint16_t idx;
- for (idx = 0; idx < internals->slave_count; idx++) {
- slave = internals->slaves[idx];
- ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+ for (idx = 0; idx < internals->child_count; idx++) {
+ child = internals->children[idx];
+ ret = rte_eth_dev_info_get(child.port_id, &child_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
__func__,
- slave.port_id,
+ child.port_id,
strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < max_nb_rx_queues)
- max_nb_rx_queues = slave_info.max_rx_queues;
+ if (child_info.max_rx_queues < max_nb_rx_queues)
+ max_nb_rx_queues = child_info.max_rx_queues;
- if (slave_info.max_tx_queues < max_nb_tx_queues)
- max_nb_tx_queues = slave_info.max_tx_queues;
+ if (child_info.max_tx_queues < max_nb_tx_queues)
+ max_nb_tx_queues = child_info.max_tx_queues;
}
}
@@ -2332,7 +2332,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
uint16_t i;
struct bond_dev_private *internals = dev->data->dev_private;
- /* don't do this while a slave is being added */
+ /* don't do this while a child is being added */
rte_spinlock_lock(&internals->lock);
if (on)
@@ -2340,13 +2340,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
else
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ uint16_t port_id = internals->children[i].port_id;
res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
if (res == ENOTSUP)
RTE_BOND_LOG(WARNING,
- "Setting VLAN filter on slave port %u not supported.",
+ "Setting VLAN filter on child port %u not supported.",
port_id);
}
@@ -2424,14 +2424,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_child_link_status_change_monitor(void *cb_arg)
{
- struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+ struct rte_eth_dev *bonded_ethdev, *child_ethdev;
struct bond_dev_private *internals;
- /* Default value for polling slave found is true as we don't want to
+ /* Default value for polling child found is true as we don't want to
* disable the polling thread if we cannot get the lock */
- int i, polling_slave_found = 1;
+ int i, polling_child_found = 1;
if (cb_arg == NULL)
return;
@@ -2443,28 +2443,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
!internals->link_status_polling_enabled)
return;
- /* If device is currently being configured then don't check slaves link
+ /* If device is currently being configured then don't check children link
* status, wait until next period */
if (rte_spinlock_trylock(&internals->lock)) {
- if (internals->slave_count > 0)
- polling_slave_found = 0;
+ if (internals->child_count > 0)
+ polling_child_found = 0;
- for (i = 0; i < internals->slave_count; i++) {
- if (!internals->slaves[i].link_status_poll_enabled)
+ for (i = 0; i < internals->child_count; i++) {
+ if (!internals->children[i].link_status_poll_enabled)
continue;
- slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
- polling_slave_found = 1;
+ child_ethdev = &rte_eth_devices[internals->children[i].port_id];
+ polling_child_found = 1;
- /* Update slave link status */
- (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
- internals->slaves[i].link_status_wait_to_complete);
+ /* Update child link status */
+ (*child_ethdev->dev_ops->link_update)(child_ethdev,
+ internals->children[i].link_status_wait_to_complete);
/* if link status has changed since last checked then call lsc
* event callback */
- if (slave_ethdev->data->dev_link.link_status !=
- internals->slaves[i].last_link_status) {
- bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+ if (child_ethdev->data->dev_link.link_status !=
+ internals->children[i].last_link_status) {
+ bond_ethdev_lsc_event_callback(internals->children[i].port_id,
RTE_ETH_EVENT_INTR_LSC,
&bonded_ethdev->data->port_id,
NULL);
@@ -2473,10 +2473,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
rte_spinlock_unlock(&internals->lock);
}
- if (polling_slave_found)
- /* Set alarm to continue monitoring link status of slave ethdev's */
+ if (polling_child_found)
+ /* Set alarm to continue monitoring link status of child ethdevs */
rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor, cb_arg);
+ bond_ethdev_child_link_status_change_monitor, cb_arg);
}
static int
@@ -2485,7 +2485,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
struct bond_dev_private *bond_ctx;
- struct rte_eth_link slave_link;
+ struct rte_eth_link child_link;
bool one_link_update_succeeded;
uint32_t idx;
@@ -2496,7 +2496,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
- bond_ctx->active_slave_count == 0) {
+ bond_ctx->active_child_count == 0) {
ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -2512,51 +2512,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
case BONDING_MODE_BROADCAST:
/**
* Setting link speed to UINT32_MAX to ensure we pick up the
- * value of the first active slave
+ * value of the first active child
*/
ethdev->data->dev_link.link_speed = UINT32_MAX;
/**
- * link speed is minimum value of all the slaves link speed as
- * packet loss will occur on this slave if transmission at rates
+ * link speed is the minimum value of all the children's link speeds, as
+ * packet loss will occur on that child if transmissions at rates
* greater than this are attempted
*/
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_child_count; idx++) {
+ ret = link_update(bond_ctx->active_children[idx],
+ &child_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Child (port %u) link get failed: %s",
+ bond_ctx->active_children[idx],
rte_strerror(-ret));
return 0;
}
- if (slave_link.link_speed <
+ if (child_link.link_speed <
ethdev->data->dev_link.link_speed)
ethdev->data->dev_link.link_speed =
- slave_link.link_speed;
+ child_link.link_speed;
}
break;
case BONDING_MODE_ACTIVE_BACKUP:
- /* Current primary slave */
- ret = link_update(bond_ctx->current_primary_port, &slave_link);
+ /* Current primary child */
+ ret = link_update(bond_ctx->current_primary_port, &child_link);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Child (port %u) link get failed: %s",
bond_ctx->current_primary_port,
rte_strerror(-ret));
return 0;
}
- ethdev->data->dev_link.link_speed = slave_link.link_speed;
+ ethdev->data->dev_link.link_speed = child_link.link_speed;
break;
case BONDING_MODE_8023AD:
ethdev->data->dev_link.link_autoneg =
- bond_ctx->mode4.slave_link.link_autoneg;
+ bond_ctx->mode4.child_link.link_autoneg;
ethdev->data->dev_link.link_duplex =
- bond_ctx->mode4.slave_link.link_duplex;
+ bond_ctx->mode4.child_link.link_duplex;
/* fall through */
/* to update link speed */
case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2566,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
default:
/**
* In these modes the maximum theoretical link speed is the sum
- * of all the slaves
+ * of all the children
*/
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_child_count; idx++) {
+ ret = link_update(bond_ctx->active_children[idx],
+ &child_link);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Child (port %u) link get failed: %s",
+ bond_ctx->active_children[idx],
rte_strerror(-ret));
continue;
}
one_link_update_succeeded = true;
ethdev->data->dev_link.link_speed +=
- slave_link.link_speed;
+ child_link.link_speed;
}
if (!one_link_update_succeeded) {
- RTE_BOND_LOG(ERR, "All slaves link get failed");
+ RTE_BOND_LOG(ERR, "All children link get failed");
return 0;
}
}
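[Illustration, not part of the patch] The hunk above only renames variables; the aggregation policy itself is unchanged: broadcast mode reports the minimum speed across the active children, active-backup reports the current primary child's speed, and the remaining modes (round-robin, balance, TLB, ALB, 802.3ad) report the theoretical sum. A minimal standalone sketch of that policy, with a hypothetical helper name (aggregate_link_speed does not exist in the PMD):

#include <stdint.h>

enum agg_policy { AGG_MIN, AGG_PRIMARY, AGG_SUM }; /* stand-ins for the BONDING_MODE_* groups */

/* Illustrative only: speed a bonded port would report for its active children. */
static uint32_t
aggregate_link_speed(enum agg_policy policy, const uint32_t *child_speed,
                     int child_count, int primary_idx)
{
        uint32_t speed = 0;
        int i;

        if (child_count == 0)
                return 0;

        switch (policy) {
        case AGG_PRIMARY:       /* active-backup: follow the primary child */
                speed = child_speed[primary_idx];
                break;
        case AGG_MIN:           /* broadcast: the slowest child limits the bond */
                speed = UINT32_MAX;
                for (i = 0; i < child_count; i++)
                        if (child_speed[i] < speed)
                                speed = child_speed[i];
                break;
        case AGG_SUM:           /* round-robin/balance/TLB/ALB/802.3ad: theoretical sum */
        default:
                for (i = 0; i < child_count; i++)
                        speed += child_speed[i];
                break;
        }
        return speed;
}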
@@ -2602,27 +2602,27 @@ static int
bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_eth_stats slave_stats;
+ struct rte_eth_stats child_stats;
int i, j;
- for (i = 0; i < internals->slave_count; i++) {
- rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+ for (i = 0; i < internals->child_count; i++) {
+ rte_eth_stats_get(internals->children[i].port_id, &child_stats);
- stats->ipackets += slave_stats.ipackets;
- stats->opackets += slave_stats.opackets;
- stats->ibytes += slave_stats.ibytes;
- stats->obytes += slave_stats.obytes;
- stats->imissed += slave_stats.imissed;
- stats->ierrors += slave_stats.ierrors;
- stats->oerrors += slave_stats.oerrors;
- stats->rx_nombuf += slave_stats.rx_nombuf;
+ stats->ipackets += child_stats.ipackets;
+ stats->opackets += child_stats.opackets;
+ stats->ibytes += child_stats.ibytes;
+ stats->obytes += child_stats.obytes;
+ stats->imissed += child_stats.imissed;
+ stats->ierrors += child_stats.ierrors;
+ stats->oerrors += child_stats.oerrors;
+ stats->rx_nombuf += child_stats.rx_nombuf;
for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
- stats->q_ipackets[j] += slave_stats.q_ipackets[j];
- stats->q_opackets[j] += slave_stats.q_opackets[j];
- stats->q_ibytes[j] += slave_stats.q_ibytes[j];
- stats->q_obytes[j] += slave_stats.q_obytes[j];
- stats->q_errors[j] += slave_stats.q_errors[j];
+ stats->q_ipackets[j] += child_stats.q_ipackets[j];
+ stats->q_opackets[j] += child_stats.q_opackets[j];
+ stats->q_ibytes[j] += child_stats.q_ibytes[j];
+ stats->q_obytes[j] += child_stats.q_obytes[j];
+ stats->q_errors[j] += child_stats.q_errors[j];
}
}
@@ -2638,8 +2638,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
int err;
int ret;
- for (i = 0, err = 0; i < internals->slave_count; i++) {
- ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+ for (i = 0, err = 0; i < internals->child_count; i++) {
+ ret = rte_eth_stats_reset(internals->children[i].port_id);
if (ret != 0)
err = ret;
}
@@ -2656,15 +2656,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all children */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int child_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ port_id = internals->children[i].port_id;
ret = rte_eth_promiscuous_enable(port_id);
if (ret != 0)
@@ -2672,23 +2672,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
"Failed to enable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ child_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one child. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (child_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary child */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->child_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2710,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all children */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int child_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ port_id = internals->children[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
BOND_8023AD_FORCED_PROMISC) {
- slave_ok++;
+ child_ok++;
continue;
}
ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2732,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
"Failed to disable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ child_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one child. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (child_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary child */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->child_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2772,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As promiscuous mode is propagated to all slaves for these
+ /* As promiscuous mode is propagated to all children for these
* mode, no need to update for bonding device.
*/
break;
@@ -2780,9 +2780,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As promiscuous mode is propagated only to primary slave
+ /* As promiscuous mode is propagated only to primary child
* for these mode. When active/standby switchover, promiscuous
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary child according to bonding
* device.
*/
if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2803,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all children */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int child_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ port_id = internals->children[i].port_id;
ret = rte_eth_allmulticast_enable(port_id);
if (ret != 0)
@@ -2819,23 +2819,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
"Failed to enable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ child_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one child. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (child_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary child */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->child_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2857,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all children */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int child_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->child_count; i++) {
+ uint16_t port_id = internals->children[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2878,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
"Failed to disable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ child_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one child. Otherwise return last error code.
*/
- if (slave_ok > 0)
+ if (child_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary child */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->child_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2918,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As allmulticast mode is propagated to all slaves for these
+ /* As allmulticast mode is propagated to all children for these
* mode, no need to update for bonding device.
*/
break;
@@ -2926,9 +2926,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As allmulticast mode is propagated only to primary slave
+ /* As allmulticast mode is propagated only to primary child
* for these mode. When active/standby switchover, allmulticast
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary child according to bonding
* device.
*/
if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2961,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
int ret;
uint8_t lsc_flag = 0;
- int valid_slave = 0;
- uint16_t active_pos, slave_idx;
+ int valid_child = 0;
+ uint16_t active_pos, child_idx;
uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2979,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (!bonded_eth_dev->data->dev_started)
return rc;
- /* verify that port_id is a valid slave of bonded port */
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == port_id) {
- valid_slave = 1;
- slave_idx = i;
+ /* verify that port_id is a valid child of bonded port */
+ for (i = 0; i < internals->child_count; i++) {
+ if (internals->children[i].port_id == port_id) {
+ valid_child = 1;
+ child_idx = i;
break;
}
}
- if (!valid_slave)
+ if (!valid_child)
return rc;
/* Synchronize lsc callback parallel calls either by real link event
- * from the slaves PMDs or by the bonding PMD itself.
+ * from the children PMDs or by the bonding PMD itself.
*/
rte_spinlock_lock(&internals->lsc_lock);
/* Search for port in active port list */
- active_pos = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, port_id);
+ active_pos = find_child_by_id(internals->active_children,
+ internals->active_child_count, port_id);
ret = rte_eth_link_get_nowait(port_id, &link);
if (ret < 0)
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+ RTE_BOND_LOG(ERR, "Child (port %u) link get failed", port_id);
if (ret == 0 && link.link_status) {
- if (active_pos < internals->active_slave_count)
+ if (active_pos < internals->active_child_count)
goto link_update;
/* check link state properties if bonded link is up*/
if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
- "for slave %d in bonding mode %d",
+ "for child %d in bonding mode %d",
port_id, internals->mode);
} else {
- /* inherit slave link properties */
+ /* inherit child link properties */
link_properties_set(bonded_eth_dev, &link);
}
- /* If no active slave ports then set this port to be
+ /* If no active child ports then set this port to be
* the primary port.
*/
- if (internals->active_slave_count < 1) {
- /* If first active slave, then change link status */
+ if (internals->active_child_count < 1) {
+ /* If first active child, then change link status */
bonded_eth_dev->data->dev_link.link_status =
RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_children_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
- activate_slave(bonded_eth_dev, port_id);
+ activate_child(bonded_eth_dev, port_id);
/* If the user has defined the primary port then default to
* using it.
@@ -3043,24 +3043,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
internals->primary_port == port_id)
bond_ethdev_primary_set(internals, port_id);
} else {
- if (active_pos == internals->active_slave_count)
+ if (active_pos == internals->active_child_count)
goto link_update;
- /* Remove from active slave list */
- deactivate_slave(bonded_eth_dev, port_id);
+ /* Remove from active child list */
+ deactivate_child(bonded_eth_dev, port_id);
- if (internals->active_slave_count < 1)
+ if (internals->active_child_count < 1)
lsc_flag = 1;
- /* Update primary id, take first active slave from list or if none
+ /* Update primary id, take first active child from list or if none
* available set to -1 */
if (port_id == internals->current_primary_port) {
- if (internals->active_slave_count > 0)
+ if (internals->active_child_count > 0)
bond_ethdev_primary_set(internals,
- internals->active_slaves[0]);
+ internals->active_children[0]);
else
internals->current_primary_port = internals->primary_port;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_children_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
@@ -3069,10 +3069,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
link_update:
/**
* Update bonded device link properties after any change to active
- * slaves
+ * children
*/
bond_ethdev_link_update(bonded_eth_dev, 0);
- internals->slaves[slave_idx].last_link_status = link.link_status;
+ internals->children[child_idx].last_link_status = link.link_status;
if (lsc_flag) {
/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3114,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
{
unsigned i, j;
int result = 0;
- int slave_reta_size;
+ int child_reta_size;
unsigned reta_count;
struct bond_dev_private *internals = dev->data->dev_private;
@@ -3137,11 +3137,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
sizeof(internals->reta_conf[0]) * reta_count);
- /* Propagate RETA over slaves */
- for (i = 0; i < internals->slave_count; i++) {
- slave_reta_size = internals->slaves[i].reta_size;
- result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
- &internals->reta_conf[0], slave_reta_size);
+ /* Propagate RETA over children */
+ for (i = 0; i < internals->child_count; i++) {
+ child_reta_size = internals->children[i].reta_size;
+ result = rte_eth_dev_rss_reta_update(internals->children[i].port_id,
+ &internals->reta_conf[0], child_reta_size);
if (result < 0)
return result;
}
@@ -3194,8 +3194,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
bond_rss_conf.rss_key_len = internals->rss_key_len;
}
- for (i = 0; i < internals->slave_count; i++) {
- result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+ for (i = 0; i < internals->child_count; i++) {
+ result = rte_eth_dev_rss_hash_update(internals->children[i].port_id,
&bond_rss_conf);
if (result < 0)
return result;
@@ -3221,21 +3221,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
static int
bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *child_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+ for (i = 0; i < internals->child_count; i++) {
+ child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+ if (*child_eth_dev->dev_ops->mtu_set == NULL) {
rte_spinlock_unlock(&internals->lock);
return -ENOTSUP;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+ for (i = 0; i < internals->child_count; i++) {
+ ret = rte_eth_dev_set_mtu(internals->children[i].port_id, mtu);
if (ret < 0) {
rte_spinlock_unlock(&internals->lock);
return ret;
@@ -3271,29 +3271,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
struct rte_ether_addr *mac_addr,
__rte_unused uint32_t index, uint32_t vmdq)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *child_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
- *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+ for (i = 0; i < internals->child_count; i++) {
+ child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+ if (*child_eth_dev->dev_ops->mac_addr_add == NULL ||
+ *child_eth_dev->dev_ops->mac_addr_remove == NULL) {
ret = -ENOTSUP;
goto end;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+ for (i = 0; i < internals->child_count; i++) {
+ ret = rte_eth_dev_mac_addr_add(internals->children[i].port_id,
mac_addr, vmdq);
if (ret < 0) {
/* rollback */
for (i--; i >= 0; i--)
rte_eth_dev_mac_addr_remove(
- internals->slaves[i].port_id, mac_addr);
+ internals->children[i].port_id, mac_addr);
goto end;
}
}
@@ -3307,22 +3307,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
static void
bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *child_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+ for (i = 0; i < internals->child_count; i++) {
+ child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+ if (*child_eth_dev->dev_ops->mac_addr_remove == NULL)
goto end;
}
struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
- for (i = 0; i < internals->slave_count; i++)
- rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+ for (i = 0; i < internals->child_count; i++)
+ rte_eth_dev_mac_addr_remove(internals->children[i].port_id,
mac_addr);
end:
@@ -3402,30 +3402,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
fprintf(f, "\n");
}
- if (internals->slave_count > 0) {
- fprintf(f, "\tSlaves (%u): [", internals->slave_count);
- for (i = 0; i < internals->slave_count - 1; i++)
- fprintf(f, "%u ", internals->slaves[i].port_id);
+ if (internals->child_count > 0) {
+ fprintf(f, "\tChilds (%u): [", internals->child_count);
+ for (i = 0; i < internals->child_count - 1; i++)
+ fprintf(f, "%u ", internals->children[i].port_id);
- fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+ fprintf(f, "%u]\n", internals->children[internals->child_count - 1].port_id);
} else {
- fprintf(f, "\tSlaves: []\n");
+ fprintf(f, "\tChilds: []\n");
}
- if (internals->active_slave_count > 0) {
- fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
- for (i = 0; i < internals->active_slave_count - 1; i++)
- fprintf(f, "%u ", internals->active_slaves[i]);
+ if (internals->active_child_count > 0) {
+ fprintf(f, "\tActive Childs (%u): [", internals->active_child_count);
+ for (i = 0; i < internals->active_child_count - 1; i++)
+ fprintf(f, "%u ", internals->active_children[i]);
- fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+ fprintf(f, "%u]\n", internals->active_children[internals->active_child_count - 1]);
} else {
- fprintf(f, "\tActive Slaves: []\n");
+ fprintf(f, "\tActive Childs: []\n");
}
if (internals->user_defined_primary_port)
fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
- if (internals->slave_count > 0)
+ if (internals->child_count > 0)
fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
}
@@ -3471,7 +3471,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
}
static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_child(const struct rte_eth_bond_8023ad_child_info *info, FILE *f)
{
char a_state[256] = { 0 };
char p_state[256] = { 0 };
@@ -3520,18 +3520,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
static void
dump_lacp(uint16_t port_id, FILE *f)
{
- struct rte_eth_bond_8023ad_slave_info slave_info;
+ struct rte_eth_bond_8023ad_child_info child_info;
struct rte_eth_bond_8023ad_conf port_conf;
- uint16_t slaves[RTE_MAX_ETHPORTS];
- int num_active_slaves;
+ uint16_t children[RTE_MAX_ETHPORTS];
+ int num_active_children;
int i, ret;
fprintf(f, " - Lacp info:\n");
- num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+ num_active_children = rte_eth_bond_active_children_get(port_id, children,
RTE_MAX_ETHPORTS);
- if (num_active_slaves < 0) {
- fprintf(f, "\tFailed to get active slave list for port %u\n",
+ if (num_active_children < 0) {
+ fprintf(f, "\tFailed to get active child list for port %u\n",
port_id);
return;
}
@@ -3545,16 +3545,16 @@ dump_lacp(uint16_t port_id, FILE *f)
}
dump_lacp_conf(&port_conf, f);
- for (i = 0; i < num_active_slaves; i++) {
- ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
- &slave_info);
+ for (i = 0; i < num_active_children; i++) {
+ ret = rte_eth_bond_8023ad_child_info(port_id, children[i],
+ &child_info);
if (ret) {
- fprintf(f, "\tGet slave device %u 8023ad info failed\n",
- slaves[i]);
+ fprintf(f, "\tGet child device %u 8023ad info failed\n",
+ children[i]);
return;
}
- fprintf(f, "\tSlave Port: %u\n", slaves[i]);
- dump_lacp_slave(&slave_info, f);
+ fprintf(f, "\tChild Port: %u\n", children[i]);
+ dump_lacp_child(&child_info, f);
}
}
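[Illustration, not part of the patch] dump_lacp() above uses the renamed query API: rte_eth_bond_active_children_get() and rte_eth_bond_8023ad_child_info() replace the *_slaves_*/*_slave_* forms. A short sketch of how an application would call them after this series; apart from the renames, the prototypes and the agg_port_id field are assumed to stay as in the existing 802.3ad API:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_eth_bond_8023ad.h>

/* Illustrative only: list active child ports and their LACP aggregator. */
static void
print_active_children(uint16_t bonded_port_id)
{
        uint16_t children[RTE_MAX_ETHPORTS];
        struct rte_eth_bond_8023ad_child_info info;
        int count, i;

        count = rte_eth_bond_active_children_get(bonded_port_id, children,
                                                 RTE_MAX_ETHPORTS);
        if (count < 0) {
                printf("port %u is not a bonded port\n", bonded_port_id);
                return;
        }

        for (i = 0; i < count; i++) {
                printf("active child port: %u\n", children[i]);
                if (rte_eth_bond_8023ad_child_info(bonded_port_id, children[i],
                                &info) == 0)
                        printf("  aggregator port: %u\n", info.agg_port_id);
        }
}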
@@ -3655,8 +3655,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->link_down_delay_ms = 0;
internals->link_up_delay_ms = 0;
- internals->slave_count = 0;
- internals->active_slave_count = 0;
+ internals->child_count = 0;
+ internals->active_child_count = 0;
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3684,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->rx_desc_lim.nb_align = 1;
internals->tx_desc_lim.nb_align = 1;
- memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
- memset(internals->slaves, 0, sizeof(internals->slaves));
+ memset(internals->active_children, 0, sizeof(internals->active_children));
+ memset(internals->children, 0, sizeof(internals->children));
TAILQ_INIT(&internals->flow_list);
internals->flow_isolated_valid = 0;
@@ -3770,7 +3770,7 @@ bond_probe(struct rte_vdev_device *dev)
/* Parse link bonding mode */
if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
- &bond_ethdev_parse_slave_mode_kvarg,
+ &bond_ethdev_parse_child_mode_kvarg,
&bonding_mode) != 0) {
RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
name);
@@ -3815,7 +3815,7 @@ bond_probe(struct rte_vdev_device *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_child_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3865,7 @@ bond_remove(struct rte_vdev_device *dev)
RTE_ASSERT(eth_dev->device == &dev->device);
internals = eth_dev->data->dev_private;
- if (internals->slave_count != 0)
+ if (internals->child_count != 0)
return -EBUSY;
if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3877,7 @@ bond_remove(struct rte_vdev_device *dev)
return ret;
}
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the child portids after all the other pdev and vdev
* have been allocated */
static int
bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3959,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
if ((link_speeds &
(internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
- RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+ RTE_BOND_LOG(ERR, "the fixed speed is not supported by all child devices.");
return -EINVAL;
}
/*
@@ -4041,7 +4041,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_child_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4059,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- /* Parse/add slave ports to bonded device */
- if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
- struct bond_ethdev_slave_ports slave_ports;
+ /* Parse/add child ports to bonded device */
+ if (rte_kvargs_count(kvlist, PMD_BOND_CHILD_PORT_KVARG) > 0) {
+ struct bond_ethdev_child_ports child_ports;
unsigned i;
- memset(&slave_ports, 0, sizeof(slave_ports));
+ memset(&child_ports, 0, sizeof(child_ports));
- if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
- &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+ if (rte_kvargs_process(kvlist, PMD_BOND_CHILD_PORT_KVARG,
+ &bond_ethdev_parse_child_port_kvarg, &child_ports) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to parse slave ports for bonded device %s",
+ "Failed to parse child ports for bonded device %s",
name);
return -1;
}
- for (i = 0; i < slave_ports.slave_count; i++) {
- if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+ for (i = 0; i < child_ports.child_count; i++) {
+ if (rte_eth_bond_child_add(port_id, child_ports.children[i]) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to add port %d as slave to bonded device %s",
- slave_ports.slaves[i], name);
+ "Failed to add port %d as child to bonded device %s",
+ child_ports.children[i], name);
}
}
} else {
- RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+ RTE_BOND_LOG(INFO, "No children specified for bonded device %s", name);
return -1;
}
- /* Parse/set primary slave port id*/
- arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+ /* Parse/set primary child port id*/
+ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_CHILD_KVARG);
if (arg_count == 1) {
- uint16_t primary_slave_port_id;
+ uint16_t primary_child_port_id;
if (rte_kvargs_process(kvlist,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
- &bond_ethdev_parse_primary_slave_port_id_kvarg,
- &primary_slave_port_id) < 0) {
+ PMD_BOND_PRIMARY_CHILD_KVARG,
+ &bond_ethdev_parse_primary_child_port_id_kvarg,
+ &primary_child_port_id) < 0) {
RTE_BOND_LOG(INFO,
- "Invalid primary slave port id specified for bonded device %s",
+ "Invalid primary child port id specified for bonded device %s",
name);
return -1;
}
/* Set balance mode transmit policy*/
- if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+ if (rte_eth_bond_primary_set(port_id, primary_child_port_id)
!= 0) {
RTE_BOND_LOG(ERR,
- "Failed to set primary slave port %d on bonded device %s",
- primary_slave_port_id, name);
+ "Failed to set primary child port %d on bonded device %s",
+ primary_child_port_id, name);
return -1;
}
} else if (arg_count > 1) {
RTE_BOND_LOG(INFO,
- "Primary slave can be specified only once for bonded device %s",
+ "Primary child can be specified only once for bonded device %s",
name);
return -1;
}
@@ -4206,15 +4206,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
return -1;
}
- /* configure slaves so we can pass mtu setting */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(dev, slave_ethdev) != 0) {
+ /* configure children so we can pass mtu setting */
+ for (i = 0; i < internals->child_count; i++) {
+ struct rte_eth_dev *child_ethdev =
+ &(rte_eth_devices[internals->children[i].port_id]);
+ if (child_configure(dev, child_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to configure slave device (%d)",
+ "bonded port (%d) failed to configure child device (%d)",
dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->children[i].port_id);
return -1;
}
}
@@ -4230,7 +4230,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
- "slave=<ifc> "
+ "child=<ifc> "
"primary=<ifc> "
"mode=[0-6] "
"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e62..b31ed8d49689 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -3,6 +3,7 @@ DPDK_23 {
rte_eth_bond_8023ad_agg_selection_get;
rte_eth_bond_8023ad_agg_selection_set;
+ rte_eth_bond_8023ad_child_info;
rte_eth_bond_8023ad_conf_get;
rte_eth_bond_8023ad_dedicated_queues_disable;
rte_eth_bond_8023ad_dedicated_queues_enable;
@@ -12,8 +13,10 @@ DPDK_23 {
rte_eth_bond_8023ad_ext_distrib_get;
rte_eth_bond_8023ad_ext_slowtx;
rte_eth_bond_8023ad_setup;
- rte_eth_bond_8023ad_slave_info;
- rte_eth_bond_active_slaves_get;
+ rte_eth_bond_active_children_get;
+ rte_eth_bond_child_add;
+ rte_eth_bond_child_remove;
+ rte_eth_bond_children_get;
rte_eth_bond_create;
rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
@@ -23,9 +26,6 @@ DPDK_23 {
rte_eth_bond_mode_set;
rte_eth_bond_primary_get;
rte_eth_bond_primary_set;
- rte_eth_bond_slave_add;
- rte_eth_bond_slave_remove;
- rte_eth_bond_slaves_get;
rte_eth_bond_xmit_policy_get;
rte_eth_bond_xmit_policy_set;
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39fa3..12de9c1f2901 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
RTE_ETHER_ADDR_BYTES(&addr))
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t children[RTE_MAX_ETHPORTS];
+uint16_t children_count;
static uint16_t BOND_PORT = 0xffff;
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
};
static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+child_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
{
int retval;
uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
"failed (res=%d)\n", BOND_PORT, retval);
- for (i = 0; i < slaves_count; i++) {
- if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
- rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
- slaves[i], BOND_PORT);
+ for (i = 0; i < children_count; i++) {
+ if (rte_eth_bond_child_add(BOND_PORT, children[i]) == -1)
+ rte_exit(-1, "Oooops! adding child (%u) to bond (%u) failed!\n",
+ children[i], BOND_PORT);
}
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
if (retval < 0)
rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
- printf("Waiting for slaves to become active...");
+ printf("Waiting for children to become active...");
while (wait_counter) {
- uint16_t act_slaves[16] = {0};
- if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
- slaves_count) {
+ uint16_t act_children[16] = {0};
+ if (rte_eth_bond_active_children_get(BOND_PORT, act_children, 16) ==
+ children_count) {
printf("\n");
break;
}
sleep(1);
printf("...");
if (--wait_counter == 0)
- rte_exit(-1, "\nFailed to activate slaves\n");
+ rte_exit(-1, "\nFailed to activate children\n");
}
retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
"send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
- "show - shows some bond info: ex. active slaves etc.\n"
+ "show - shows some bond info: ex. active children etc.\n"
"help - prints help.\n"
"quit - terminate all threads and quit.\n"
);
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
struct cmdline *cl,
__rte_unused void *data)
{
- uint16_t slaves[16] = {0};
+ uint16_t children[16] = {0};
uint8_t len = 16;
struct rte_ether_addr addr;
uint16_t i;
int ret;
- for (i = 0; i < slaves_count; i++) {
+ for (i = 0; i < children_count; i++) {
ret = rte_eth_macaddr_get(i, &addr);
if (ret != 0) {
cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
rte_spinlock_lock(&global_flag_stru_p->lock);
cmdline_printf(cl,
- "Active_slaves:%d "
+ "Active_children:%d "
"packets received:Tot:%d Arp:%d IPv4:%d\n",
- rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+ rte_eth_bond_active_children_get(BOND_PORT, children, len),
global_flag_stru_p->port_packets[0],
global_flag_stru_p->port_packets[1],
global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
/* initialize all ports */
- slaves_count = nb_ports;
+ children_count = nb_ports;
RTE_ETH_FOREACH_DEV(i) {
- slave_port_init(i, mbuf_pool);
- slaves[i] = i;
+ child_port_init(i, mbuf_pool);
+ children[i] = i;
}
bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b20..c717a463c905 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,10 @@ struct rte_eth_dev_owner {
#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0)
/** Device supports link state interrupt */
#define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2)
+/** Device is a bonded child */
+#define RTE_ETH_DEV_BONDED_CHILD RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE \
+ RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) RTE_ETH_DEV_BONDED_CHILD
/** Device supports device removal interrupt */
#define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3)
/** Device is port representor */
--
2.39.2
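
For readers following the rename, a minimal sketch of how an application would read the renamed device flag once this series is applied as posted (nothing here is merged code, and port_is_bonding_child() is a hypothetical helper). The old macro name keeps compiling through the RTE_DEPRECATED() alias in the last hunk, but triggers a build-time warning:

#include <rte_ethdev.h>

static int
port_is_bonding_child(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return 0;

        /* New flag name introduced by this patch. */
        if (*dev_info.dev_flags & RTE_ETH_DEV_BONDED_CHILD)
                return 1;

        /* The old spelling still builds, but with a deprecation warning:
         *     *dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE
         */
        return 0;
}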
^ permalink raw reply [relevance 1%]
* RE: [PATCH] eventdev: fix alignment padding
@ 2023-05-17 13:35 3% ` Morten Brørup
2023-05-23 15:15 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-05-17 13:35 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom; +Cc: Sivaprasad Tummala, jerinj, dev
> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Wednesday, 17 May 2023 15.20
>
> On Tue, Apr 18, 2023 at 8:46 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
> > On 2023-04-18 16:07, Morten Brørup wrote:
> > >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> > >> Sent: Tuesday, 18 April 2023 14.31
> > >>
> > >> On 2023-04-18 12:45, Sivaprasad Tummala wrote:
> > >>> fixed the padding required to align to cacheline size.
> > >>>
> > >>
> > >> What's the point in having this structure cache-line aligned? False
> > >> sharing is a non-issue, since this is more or less a read only struct.
> > >>
> > >> This is not so much a comment on your patch, but the __rte_cache_aligned
> > >> attribute.
> > >
> > > When the structure is cache aligned, an individual entry in the array does
> not unnecessarily cross a cache line border. With 16 pointers and aligned, it
> uses exactly two cache lines. If unaligned, it may span three cache lines.
> > >
> > An *element* in the reserved uint64_t array won't span across two cache
> > lines, regardless if __rte_cache_aligned is specified or not. You would
> > need a packed struct for that to occur, plus the reserved array field
> > being preceded by some appropriately-sized fields.
> >
> > The only effect __rte_cache_aligned has on this particular struct is
> > that if you instantiate the struct on the stack, or as a static
> > variable, it will be cache-line aligned. That effect you can get by
> > specifying the attribute when you define the variable, and you will save
> > some space (by having smaller elements). In this case it doesn't matter
> > if the array is compact or not, since an application is likely to only
> > use one of the members in the array.
> >
> > It also doesn't matter of the struct is two or three cache lines, as
> > long as only the first two are used.
>
>
> Discussions stalled at this point.
Not stalled at this point. You seem to have missed my follow-up email clarifying why cache alignment is relevant:
http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D87897@smartserver.smartshare.dk/
But the patch still breaks the ABI, and thus should be postponed to 23.11.
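
As a side note for readers of the thread, the sizing argument can be captured by a hypothetical compile-time check (not part of the patch). It assumes the reserved[] array is shrunk to 5 entries as proposed, a 64-bit build and 64-byte cache lines, so the 16 pointer-sized fields of struct rte_event_fp_ops occupy exactly two cache lines:

#include <assert.h>
#include <rte_common.h>
#include <rte_eventdev.h>

/* Holds only with the proposed reserved[5]; the current reserved[6]
 * pushes the structure onto a third cache line. */
static_assert(sizeof(struct rte_event_fp_ops) == 2 * RTE_CACHE_LINE_SIZE,
              "rte_event_fp_ops expected to span exactly two cache lines");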
>
> Hi Shiva,
>
> Marking this patch as rejected. If you think the other way, Please
> change patchwork status and let's discuss more here.
I am not taking any action regarding the status of this patch. I will leave that decision to Jerin and Shiva.
>
>
>
> >
> > >>
> > >>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> > >>> Cc: mattias.ronnblom@ericsson.com
> > >>>
> > >>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > >>> ---
> > >>> lib/eventdev/rte_eventdev_core.h | 2 +-
> > >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/lib/eventdev/rte_eventdev_core.h
> > >> b/lib/eventdev/rte_eventdev_core.h
> > >>> index c328bdbc82..c27a52ccc0 100644
> > >>> --- a/lib/eventdev/rte_eventdev_core.h
> > >>> +++ b/lib/eventdev/rte_eventdev_core.h
> > >>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
> > >>> /**< PMD Tx adapter enqueue same destination function. */
> > >>> event_crypto_adapter_enqueue_t ca_enqueue;
> > >>> /**< PMD Crypto adapter enqueue function. */
> > >>> - uintptr_t reserved[6];
> > >>> + uintptr_t reserved[5];
> > >>> } __rte_cache_aligned;
> > >>>
> > >>> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> > >
> >
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
2023-05-17 7:16 3% ` Mattias Rönnblom
@ 2023-05-17 12:28 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-17 12:28 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup
On Wed, May 17, 2023 at 12:46 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2023-05-16 15:08, Jerin Jacob wrote:
> > On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> >>
> >> On 2023-05-15 14:38, Jerin Jacob wrote:
> >>> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>
> >>>> On 2023-05-12 13:59, Jerin Jacob wrote:
> >>>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
> >>>>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>>>
> >>>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
> >>>>>> dequeue only when the burst size is compile-time constant (and equal
> >>>>>> to one).
> >>>>>>
> >>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>>>
> >>>>>> ---
> >>>>>>
> >>>>>> v3: Actually include the change v2 claimed to contain.
> >>>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
> >>>>>> application is compiled with -pedantic. (Morten Brørup)
> >>>>>> ---
> >>>>>> lib/eventdev/rte_eventdev.h | 4 ++--
> >>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >>>>>> index a90e23ac8b..a471caeb6d 100644
> >>>>>> --- a/lib/eventdev/rte_eventdev.h
> >>>>>> +++ b/lib/eventdev/rte_eventdev.h
> >>>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> >>>>>> * Allow zero cost non burst mode routine invocation if application
> >>>>>> * requests nb_events as const one
> >>>>>> */
> >>>>>> - if (nb_events == 1)
> >>>>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>>>
> >>>>> "Why" part is not clear from the commit message. Is this to avoid
> >>>>> nb_events read if it is built-in const.
> >>>>
> >>>> The __builtin_constant_p() is introduced to avoid having the compiler
> >>>> generate a conditional branch and two different code paths in case
> >>>> nb_elem is a run-time variable.
> >>>>
> >>>> In particular, this matters if nb_elems is run-time variable and varies
> >>>> between 1 and some larger value.
> >>>>
> >>>> I should have mention this in the commit message.
> >>>>
> >>>> A very slight performance improvement. It also makes the code better
> >>>> match the comment, imo. Zero cost for const one enqueues, but no impact
> >>>> non-compile-time-constant-length enqueues.
> >>>>
> >>>> Feel free to ignore.
> >>>
> >>>
> >>> I did some performance comparison of the patch.
> >>> A low-end ARM machines shows 0.7% drop with single event case. No
> >>> regression see with high-end ARM cores with single event case.
> >>>
> >>> IMO, optimizing the check for burst mode(the new patch) may not show
> >>> any real improvement as the cost is divided by number of event.
> >>> Whereas optimizing the check for single event case(The current code)
> >>> shows better performance with single event case and no regression
> >>> with burst mode as cost is divided by number of events.
> >>
> >> I ran some tests on an AMD Zen 3 with DSW.
> >> In the below tests the enqueue burst size is not compile-time constant.
> >>
> >> Enqueue burst size Performance improvement
> >> Run-time constant 1 ~5%
> >> Run-time constant 2 ~0%
> >> Run-time variable 1-2 ~9%
> >> Run-time variable 1-16 ~0%
> >>
> >> The run-time variable enqueue sizes randomly (uniformly) distributed in
> >> the specified range.
> >>
> >> The first result may come as a surprise. The benchmark is using
> >> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
> >> in most apps). The single-event enqueue function only exists in a
> >> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
> >> I suspect that is the reason for the performance improvement.
> >>
> >> This effect is large-enough to make it somewhat beneficial (+~1%) to use
> >> run-time variable single-event enqueue compared to keeping the burst
> >> size compile-time constant.
> >
> > # Interesting, Could you share your testeventdev command to test it.
>
> I'm using a proprietary benchmark to evaluate the effect of these
> changes. There's certainly nothing secret about that program, and also
> nothing very DSW-specific either. I hope to at some point both extend
> DPDK eventdev tests to include DSW, and also to contribute
> benchmarks/characteristics tests (perf unit tests or as a separate
> program), if there seems to be a value in this.
Yes. Please extend testeventdev for your use case so that all drivers can be
tested, and to help optimize _real world_ cases. Testeventdev already has a
plugin-style interface, so it should be pretty easy to add new MODES.
>
> > # By having quick glance on DSW code, following change can be added(or
> > similar cases).
> > Not sure such change in DSW driver is making a difference or nor?
> >
> >
> > diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
> > index e84b65d99f..455470997b 100644
> > --- a/drivers/event/dsw/dsw_event.c
> > +++ b/drivers/event/dsw/dsw_event.c
> > @@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
> > *dsw, struct dsw_port *source_port)
> > uint16_t
> > dsw_event_enqueue(void *port, const struct rte_event *ev)
> > {
> > - return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
> > + return dsw_event_enqueue_burst(port, ev, 1);
>
> Good point.
>
> Historical note: I think that comparison is old cruft borne out of a
> misconception, that the single-event enqueue could be called directly
> from application code, combined with the fact that producer-only ports
> needed some way to "maintain" a port, prior to the introduction of
> rte_event_maintain().
>
> > }
> >
> > static __rte_always_inline uint16_t
> > @@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
> > *source_port,
> > return (num_non_release + num_release);
> > }
> >
> > -uint16_t
> > +inline uint16_t
>
> From what it seems, this does not have the desired effect, at least not
> on GCC 11.3 (w/ the default DPDK compiler configuration).
>
> I reached this conclusion when I noticed that if I reshuffle the code so
> to force (not hint) the inlining of the burst (and generic burst)
> enqueue function into dsw_event_enqueue(), your change performs better.
>
> > dsw_event_enqueue_burst(void *port, const struct rte_event events[],
> > uint16_t events_len)
> > {
> >
> > # I am testing with command like this "/build/app/dpdk-test-eventdev
> > -l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
> > --stlist p --nb_pkts=10000000000"
> >
>
>
> I re-ran the compile-time variable, run-time constant enqueue size of 1,
> and I got the following:
>
> Jerin's change: +4%
> Jerin's change + ensure inlining: +6%
> RFC v3: +7%
>
> (Here I use a more different setup that produces more deterministic
> results, hence the different numbers compared to the previous runs. They
> were using a pipeline spread over two chiplets, and these runs are using
> only a single chiplet.)
>
> It seems like with your suggested changes you eliminate most of the
> single-enqueue-special case performance degradation (for DSW), but not
> all of it. The remaining degradation is very small (for the above case,
For cores like AMD Zen 3 I was not expecting a 1% diff with such a check,
especially with proper branch predictors. Even pretty low-end Arm cores had
around a 0.7% diff, and new Arm cores show no difference.
> larger for small but run-time-variable enqueue sizes), but it's a little
> sad that a supposedly performance-enhancing special case (that drives
> complexity in the code, although not much) actually degrades performance.
OK. Let's get rid of the fp_ops->dequeue callback. The initial RFC of eventdev
had a public non-burst API; that was the reason for that callback.
>
> >>
> >> The performance gain is counted toward both enqueue and dequeue costs
> >> (+benchmark app overhead), so an under-estimation if see this as an
> >> enqueue performance improvement.
> >>
> >>> If you agree, then we can skip this patch.
> >>>
> >>
> >> I have no strong opinion if this should be included or not.
> >>
> >> It was up to me, I would drop the single-enqueue special case handling
> >> altogether in the next ABI update.
> >
> > That's a reasonable path. If we are willing to push a patch, we can
> > test it and give feedback.
> > Or in our spare time, We can do that as well.
> >
>
> Sure, I'll give it a try.
>
> The next release is an ABI-breaking one?
Yes (23.11). Please plan to send a deprecation notice before the 23.07 release.
I will mark this patch as rejected in patchwork.
Thanks for your time.
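
For reference, a small illustration (not taken from the patch; it assumes the v3 change above is applied) of what the __builtin_constant_p() guard means at application call sites:

#include <rte_eventdev.h>

/* Literal burst size: __builtin_constant_p(nb_events) is true, so the
 * single-op shortcut is kept and this boils down to fp_ops->enqueue(). */
static inline uint16_t
enqueue_one(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
{
        return rte_event_enqueue_burst(dev_id, port_id, ev, 1);
}

/* Run-time burst size: __builtin_constant_p(nb_events) is false, so no
 * "nb_events == 1" branch is generated and the burst path is always
 * taken, even when n happens to be 1. */
static inline uint16_t
enqueue_some(uint8_t dev_id, uint8_t port_id,
             const struct rte_event evs[], uint16_t n)
{
        return rte_event_enqueue_burst(dev_id, port_id, evs, n);
}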
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
2023-05-16 11:36 0% ` Eelco Chaudron
2023-05-16 11:45 0% ` Maxime Coquelin
@ 2023-05-17 9:18 0% ` Eelco Chaudron
1 sibling, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-17 9:18 UTC (permalink / raw)
To: David Marchand; +Cc: maxime.coquelin, chenbo.xia, dev
On 16 May 2023, at 13:36, Eelco Chaudron wrote:
> On 16 May 2023, at 12:12, David Marchand wrote:
>
>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>
>> [snip]
>>
>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>> vsocket->path = NULL;
>>>>> }
>>>>>
>>>>> + if (vsocket && vsocket->alloc_notify_ops) {
>>>>> +#pragma GCC diagnostic push
>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>> + free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>> +#pragma GCC diagnostic pop
>>>>> + vsocket->notify_ops = NULL;
>>>>> + }
>>>>
>>>> Rather than select the behavior based on a boolean (and here force the
>>>> compiler to close its eyes), I would instead add a non const pointer
>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>
>>> Good idea, I will make the change in v3.
>>
>> Feel free to use a better name for this field :-).
>>
>>>
>>>>> +
>>>>> if (vsocket) {
>>>>> free(vsocket);
>>>>> vsocket = NULL;
>>
>> [snip]
>>
>>>>> + /*
>>>>> + * Although the ops structure is a const structure, we do need to
>>>>> + * override the guest_notify operation. This is because with the
>>>>> + * previous APIs it was "reserved" and if any garbage value was passed,
>>>>> + * it could crash the application.
>>>>> + */
>>>>> + if (ops && !ops->guest_notify) {
>>>>
>>>> Hum, as described in the comment above, I don't think we should look
>>>> at ops->guest_notify value at all.
>>>> Checking ops != NULL should be enough.
>>>
>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>
>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>
>> Hum, I don't understand my comment either o_O'.
>> Too many days off... or maybe my evil twin took over the keyboard.
>>
>>
>>>
>>>>> + struct rte_vhost_device_ops *new_ops;
>>>>> +
>>>>> + new_ops = malloc(sizeof(*new_ops));
>>>>
>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>> I am unclear of the impact though.
>>>
>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>
>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>
>> Determining current numa is doable, via 'ops'
>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>> numa_realloc().
>> The problem is how to allocate on this numa with the libc allocator
>> for which I have no idea...
>> We could go with the dpdk allocator (again, like numa_realloc()).
>>
>>
>> In practice, the passed ops will be probably from a const variable in
>> the program .data section (for which I think fields are set to 0
>> unless explicitly initialised), or a memset() will be called for a
>> dynamic allocation from good citizens.
>> So we can probably live with the current proposal.
>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>> we will drop this compat code.
>>
>> Maxime, Chenbo, what do you think?
>
> Wait for their response, but for now I assume we can just keep the numa unaware malloc().
>
>>
>> [snip]
>>
>>>>
>>>> But putting indentation aside, is this change equivalent?
>>>> - if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>> - (vq->callfd >= 0)) ||
>>>> - unlikely(!signalled_used_valid)) {
>>>> + if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>> + unlikely(!signalled_used_valid)) &&
>>>> + vq->callfd >= 0) {
>>>
>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>
>> I think this should be a separate fix.
>
> ACK, will add a separate patch in this series to fix it.
FYI I sent out the v3 patch.
//Eelco
^ permalink raw reply [relevance 0%]
* [PATCH v3 0/4] vhost: add device op to offload the interrupt kick
@ 2023-05-17 9:08 4% Eelco Chaudron
2023-06-01 20:00 0% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Eelco Chaudron @ 2023-05-17 9:08 UTC (permalink / raw)
To: maxime.coquelin, chenbo.xia, david.marchand; +Cc: dev
This series adds an operation callback which gets called every time the
library wants to call eventfd_write(). This eventfd_write() call could
result in a system call, which could potentially block the PMD thread.
The callback function can decide whether it's ok to handle the
eventfd_write() now or have the newly introduced function,
rte_vhost_notify_guest(), called at a later time.
This can be used by 3rd party applications, like OVS, to avoid system
calls being called as part of the PMD threads.
v3:
- Changed ABI compatibility code to no longer use a boolean
to avoid having to disable specific GCC warnings.
- Moved the fd check fix to a separate patch (patch 3/4).
- Fixed some coding style issues.
v2: - Used vhost_virtqueue->index to find index for operation.
- Aligned function name to VDUSE RFC patchset.
- Added error and offload statistics counter.
- Mark new API as experimental.
- Change the virtual queue spin lock to read/write spin lock.
- Made shared counters atomic.
- Add versioned rte_vhost_driver_callback_register() for
ABI compliance.
Eelco Chaudron (4):
vhost: change vhost_virtqueue access lock to a read/write one
vhost: make the guest_notifications statistic counter atomic
vhost: fix invalid call FD handling
vhost: add device op to offload the interrupt kick
lib/eal/include/generic/rte_rwlock.h | 17 +++++
lib/vhost/meson.build | 2 +
lib/vhost/rte_vhost.h | 23 ++++++-
lib/vhost/socket.c | 63 +++++++++++++++++--
lib/vhost/version.map | 9 +++
lib/vhost/vhost.c | 92 +++++++++++++++++++++-------
lib/vhost/vhost.h | 69 ++++++++++++++-------
lib/vhost/vhost_user.c | 14 ++---
lib/vhost/virtio_net.c | 90 +++++++++++++--------------
9 files changed, 278 insertions(+), 101 deletions(-)
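
A rough sketch of how an application such as OVS might hook into this, with the callback signature as proposed in the series (app_queue_kick(), app_new_device() and app_destroy_device() are hypothetical application helpers, declared here only to keep the sketch self-contained):

#include <stdbool.h>
#include <rte_vhost.h>

extern void app_queue_kick(int vid, uint16_t queue_id);
extern int  app_new_device(int vid);
extern void app_destroy_device(int vid);

/* Called by the vhost library instead of eventfd_write(); returning true
 * tells the library that the application will deliver the kick later. */
static bool
app_guest_notify(int vid, uint16_t queue_id)
{
        app_queue_kick(vid, queue_id);  /* must not block the PMD thread */
        return true;
}

static const struct rte_vhost_device_ops app_vhost_ops = {
        .new_device = app_new_device,
        .destroy_device = app_destroy_device,
        .guest_notify = app_guest_notify,
};

/* Later, from a thread that is allowed to make system calls: */
static void
app_deliver_kick(int vid, uint16_t queue_id)
{
        rte_vhost_notify_guest(vid, queue_id);
}

The ops structure would be registered as usual with rte_vhost_driver_callback_register().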
^ permalink raw reply [relevance 4%]
* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
2023-05-17 7:45 0% ` lihuisong (C)
@ 2023-05-17 8:53 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-05-17 8:53 UTC (permalink / raw)
To: lihuisong (C)
Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
dev, techboard
On 5/17/2023 8:45 AM, lihuisong (C) wrote:
>
> 在 2023/5/16 22:13, Ferruh Yigit 写道:
>> On 5/16/2023 12:47 PM, lihuisong (C) wrote:
>>> Hi Ferruh,
>>>
>>> There is no result on techboard.
>>> How to deal with this problem next?
>> +techboard for comment.
>>
>>
>> Btw, what was your positioning to Bruce's suggestion,
>> when a MAC address is in the list, fail to set it as default and enforce
>> user do the corrective action (delete MAC explicitly etc...).
> If a MAC address is in the list, rte_eth_dev_default_mac_addr_set
> returns failure?
Yes.
In that case the API can return EEXIST or similar. The user then needs to
call 'rte_eth_dev_mac_addr_remove()' first and call
'rte_eth_dev_default_mac_addr_set()' again, if this is the intention.
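
A minimal sketch of that corrective sequence on the application side, assuming the proposed behaviour where the set call fails with -EEXIST when the address is already in the list (this is the option under discussion, not current ethdev behaviour; app_set_default_mac() is a hypothetical helper):

#include <errno.h>
#include <rte_ethdev.h>

static int
app_set_default_mac(uint16_t port_id, struct rte_ether_addr *addr)
{
        int ret = rte_eth_dev_default_mac_addr_set(port_id, addr);

        if (ret == -EEXIST) {
                /* Address already present as a non-default entry:
                 * remove it explicitly, then retry. */
                ret = rte_eth_dev_mac_addr_remove(port_id, addr);
                if (ret == 0)
                        ret = rte_eth_dev_default_mac_addr_set(port_id, addr);
        }
        return ret;
}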
>> If you are OK with it, that is good for me too, unless techboard objects
>> we can proceed with that one.
>>
>>
>>> /Huisong
>>>
>>> 在 2023/2/2 20:36, Huisong Li 写道:
>>>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>>>> applications modify the default MAC address by .mac_addr_set().
>>>> However,
>>>> if the new default one has been added as a non-default MAC address by
>>>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the
>>>> mac_addrs
>>>> list. As a result, one MAC address occupies two entries in the list.
>>>> Like:
>>>> add(MAC1)
>>>> add(MAC2)
>>>> add(MAC3)
>>>> add(MAC4)
>>>> set_default(MAC3)
>>>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>>>> Note: MAC3 occupies two entries.
>>>>
>>>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove
>>>> the
>>>> old default MAC when set default MAC. If user continues to do
>>>> set_default(MAC5), and the mac_addrs list is default=MAC5,
>>>> filters=(MAC1,
>>>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the
>>>> list,
>>>> but packets with MAC3 aren't actually received by the PMD.
>>>>
>>>> So need to ensure that the new default address is removed from the
>>>> rest of
>>>> the list if the address was already in the list.
>>>>
>>>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>>>> ---
>>>> v8: fix some comments.
>>>> v7: add announcement in the release notes and document this behavior.
>>>> v6: fix commit log and some code comments.
>>>> v5:
>>>> - merge the second patch into the first patch.
>>>> - add error log when rollback failed.
>>>> v4:
>>>> - fix broken in the patchwork
>>>> v3:
>>>> - first explicitly remove the non-default MAC, then set default
>>>> one.
>>>> - document default and non-default MAC address
>>>> v2:
>>>> - fixed commit log.
>>>> ---
>>>> doc/guides/rel_notes/release_23_03.rst | 6 +++++
>>>> lib/ethdev/ethdev_driver.h | 6 ++++-
>>>> lib/ethdev/rte_ethdev.c | 35
>>>> ++++++++++++++++++++++++--
>>>> lib/ethdev/rte_ethdev.h | 3 +++
>>>> 4 files changed, 47 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>>> b/doc/guides/rel_notes/release_23_03.rst
>>>> index 84b112a8b1..1c9b9912c2 100644
>>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>>> @@ -105,6 +105,12 @@ API Changes
>>>> Also, make sure to start the actual text at the margin.
>>>> =======================================================
>>>> +* ethdev: ensured all entries in MAC address list are uniques.
>>>> + When setting a default MAC address with the function
>>>> + ``rte_eth_dev_default_mac_addr_set``,
>>>> + the address is now removed from the rest of the address list
>>>> + in order to ensure it is only at index 0 of the list.
>>>> +
>>>> ABI Changes
>>>> -----------
>>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>>> index dde3ec84ef..3994c61b86 100644
>>>> --- a/lib/ethdev/ethdev_driver.h
>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>>>> uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>>>> failures */
>>>> - /** Device Ethernet link address. @see
>>>> rte_eth_dev_release_port() */
>>>> + /**
>>>> + * Device Ethernet link addresses.
>>>> + * All entries are unique.
>>>> + * The first entry (index zero) is the default address.
>>>> + */
>>>> struct rte_ether_addr *mac_addrs;
>>>> /** Bitmap associating MAC addresses to pools */
>>>> uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>>> index 86ca303ab5..de25183619 100644
>>>> --- a/lib/ethdev/rte_ethdev.c
>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>>> struct rte_ether_addr *addr)
>>>> int
>>>> rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>>>> rte_ether_addr *addr)
>>>> {
>>>> + uint64_t mac_pool_sel_bk = 0;
>>>> struct rte_eth_dev *dev;
>>>> + uint32_t pool;
>>>> + int index;
>>>> int ret;
>>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>>>> port_id, struct rte_ether_addr *addr)
>>>> if (*dev->dev_ops->mac_addr_set == NULL)
>>>> return -ENOTSUP;
>>>> + /* Keep address unique in dev->data->mac_addrs[]. */
>>>> + index = eth_dev_get_mac_addr_index(port_id, addr);
>>>> + if (index > 0) {
>>>> + /* Remove address in dev data structure */
>>>> + mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>>>> + ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>>>> + if (ret < 0) {
>>>> + RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>>>> from the rest of list.\n",
>>>> + port_id);
>>>> + return ret;
>>>> + }
>>>> + }
>>>> ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>>>> if (ret < 0)
>>>> - return ret;
>>>> + goto out;
>>>> /* Update default address in NIC data structure */
>>>> rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>>>> return 0;
>>>> -}
>>>> +out:
>>>> + if (index > 0) {
>>>> + pool = 0;
>>>> + do {
>>>> + if (mac_pool_sel_bk & UINT64_C(1)) {
>>>> + if (rte_eth_dev_mac_addr_add(port_id, addr,
>>>> + pool) != 0)
>>>> + RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>>>> id(%u) in port %u.\n",
>>>> + pool, port_id);
>>>> + }
>>>> + mac_pool_sel_bk >>= 1;
>>>> + pool++;
>>>> + } while (mac_pool_sel_bk != 0);
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>>> /*
>>>> * Returns index into MAC address array of addr. Use
>>>> 00:00:00:00:00:00 to find
>>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>>> index d22de196db..2456153457 100644
>>>> --- a/lib/ethdev/rte_ethdev.h
>>>> +++ b/lib/ethdev/rte_ethdev.h
>>>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>>> /**
>>>> * Set the default MAC address.
>>>> + * It replaces the address at index 0 of the MAC address list.
>>>> + * If the address was already in the MAC address list,
>>>> + * it is removed from the rest of the list.
>>>> *
>>>> * @param port_id
>>>> * The port identifier of the Ethernet device.
>> .
^ permalink raw reply [relevance 0%]
* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
2023-05-16 14:13 0% ` Ferruh Yigit
@ 2023-05-17 7:45 0% ` lihuisong (C)
2023-05-17 8:53 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: lihuisong (C) @ 2023-05-17 7:45 UTC (permalink / raw)
To: Ferruh Yigit
Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
dev, techboard
在 2023/5/16 22:13, Ferruh Yigit 写道:
> On 5/16/2023 12:47 PM, lihuisong (C) wrote:
>> Hi Ferruh,
>>
>> There is no result on techboard.
>> How to deal with this problem next?
> +techboard for comment.
>
>
> Btw, what was your positioning to Bruce's suggestion,
> when a MAC address is in the list, fail to set it as default and enforce
> user do the corrective action (delete MAC explicitly etc...).
If a MAC address is in the list, rte_eth_dev_default_mac_addr_set
returns failure?
> If you are OK with it, that is good for me too, unless techboard objects
> we can proceed with that one.
>
>
>> /Huisong
>>
>> 在 2023/2/2 20:36, Huisong Li 写道:
>>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>>> applications modify the default MAC address by .mac_addr_set(). However,
>>> if the new default one has been added as a non-default MAC address by
>>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
>>> list. As a result, one MAC address occupies two entries in the list.
>>> Like:
>>> add(MAC1)
>>> add(MAC2)
>>> add(MAC3)
>>> add(MAC4)
>>> set_default(MAC3)
>>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>>> Note: MAC3 occupies two entries.
>>>
>>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
>>> old default MAC when set default MAC. If user continues to do
>>> set_default(MAC5), and the mac_addrs list is default=MAC5, filters=(MAC1,
>>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the list,
>>> but packets with MAC3 aren't actually received by the PMD.
>>>
>>> So need to ensure that the new default address is removed from the
>>> rest of
>>> the list if the address was already in the list.
>>>
>>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>>> ---
>>> v8: fix some comments.
>>> v7: add announcement in the release notes and document this behavior.
>>> v6: fix commit log and some code comments.
>>> v5:
>>> - merge the second patch into the first patch.
>>> - add error log when rollback failed.
>>> v4:
>>> - fix broken in the patchwork
>>> v3:
>>> - first explicitly remove the non-default MAC, then set default one.
>>> - document default and non-default MAC address
>>> v2:
>>> - fixed commit log.
>>> ---
>>> doc/guides/rel_notes/release_23_03.rst | 6 +++++
>>> lib/ethdev/ethdev_driver.h | 6 ++++-
>>> lib/ethdev/rte_ethdev.c | 35 ++++++++++++++++++++++++--
>>> lib/ethdev/rte_ethdev.h | 3 +++
>>> 4 files changed, 47 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>> b/doc/guides/rel_notes/release_23_03.rst
>>> index 84b112a8b1..1c9b9912c2 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -105,6 +105,12 @@ API Changes
>>> Also, make sure to start the actual text at the margin.
>>> =======================================================
>>> +* ethdev: ensured all entries in MAC address list are uniques.
>>> + When setting a default MAC address with the function
>>> + ``rte_eth_dev_default_mac_addr_set``,
>>> + the address is now removed from the rest of the address list
>>> + in order to ensure it is only at index 0 of the list.
>>> +
>>> ABI Changes
>>> -----------
>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>> index dde3ec84ef..3994c61b86 100644
>>> --- a/lib/ethdev/ethdev_driver.h
>>> +++ b/lib/ethdev/ethdev_driver.h
>>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>>> uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>>> failures */
>>> - /** Device Ethernet link address. @see
>>> rte_eth_dev_release_port() */
>>> + /**
>>> + * Device Ethernet link addresses.
>>> + * All entries are unique.
>>> + * The first entry (index zero) is the default address.
>>> + */
>>> struct rte_ether_addr *mac_addrs;
>>> /** Bitmap associating MAC addresses to pools */
>>> uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 86ca303ab5..de25183619 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>> struct rte_ether_addr *addr)
>>> int
>>> rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>>> rte_ether_addr *addr)
>>> {
>>> + uint64_t mac_pool_sel_bk = 0;
>>> struct rte_eth_dev *dev;
>>> + uint32_t pool;
>>> + int index;
>>> int ret;
>>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>>> port_id, struct rte_ether_addr *addr)
>>> if (*dev->dev_ops->mac_addr_set == NULL)
>>> return -ENOTSUP;
>>> + /* Keep address unique in dev->data->mac_addrs[]. */
>>> + index = eth_dev_get_mac_addr_index(port_id, addr);
>>> + if (index > 0) {
>>> + /* Remove address in dev data structure */
>>> + mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>>> + ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>>> + if (ret < 0) {
>>> + RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>>> from the rest of list.\n",
>>> + port_id);
>>> + return ret;
>>> + }
>>> + }
>>> ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>>> if (ret < 0)
>>> - return ret;
>>> + goto out;
>>> /* Update default address in NIC data structure */
>>> rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>>> return 0;
>>> -}
>>> +out:
>>> + if (index > 0) {
>>> + pool = 0;
>>> + do {
>>> + if (mac_pool_sel_bk & UINT64_C(1)) {
>>> + if (rte_eth_dev_mac_addr_add(port_id, addr,
>>> + pool) != 0)
>>> + RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>>> id(%u) in port %u.\n",
>>> + pool, port_id);
>>> + }
>>> + mac_pool_sel_bk >>= 1;
>>> + pool++;
>>> + } while (mac_pool_sel_bk != 0);
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> /*
>>> * Returns index into MAC address array of addr. Use
>>> 00:00:00:00:00:00 to find
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index d22de196db..2456153457 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>> /**
>>> * Set the default MAC address.
>>> + * It replaces the address at index 0 of the MAC address list.
>>> + * If the address was already in the MAC address list,
>>> + * it is removed from the rest of the list.
>>> *
>>> * @param port_id
>>> * The port identifier of the Ethernet device.
> .
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
2023-05-16 13:08 0% ` Jerin Jacob
@ 2023-05-17 7:16 3% ` Mattias Rönnblom
2023-05-17 12:28 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-05-17 7:16 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup
On 2023-05-16 15:08, Jerin Jacob wrote:
> On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2023-05-15 14:38, Jerin Jacob wrote:
>>> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>
>>>> On 2023-05-12 13:59, Jerin Jacob wrote:
>>>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
>>>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>>>
>>>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
>>>>>> dequeue only when the burst size is compile-time constant (and equal
>>>>>> to one).
>>>>>>
>>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> v3: Actually include the change v2 claimed to contain.
>>>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
>>>>>> application is compiled with -pedantic. (Morten Brørup)
>>>>>> ---
>>>>>> lib/eventdev/rte_eventdev.h | 4 ++--
>>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>>>> index a90e23ac8b..a471caeb6d 100644
>>>>>> --- a/lib/eventdev/rte_eventdev.h
>>>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>>>>> * Allow zero cost non burst mode routine invocation if application
>>>>>> * requests nb_events as const one
>>>>>> */
>>>>>> - if (nb_events == 1)
>>>>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>>
>>>>> "Why" part is not clear from the commit message. Is this to avoid
>>>>> nb_events read if it is built-in const.
>>>>
>>>> The __builtin_constant_p() is introduced to avoid having the compiler
>>>> generate a conditional branch and two different code paths in case
>>>> nb_elem is a run-time variable.
>>>>
>>>> In particular, this matters if nb_elems is run-time variable and varies
>>>> between 1 and some larger value.
>>>>
>>>> I should have mention this in the commit message.
>>>>
>>>> A very slight performance improvement. It also makes the code better
>>>> match the comment, imo. Zero cost for const one enqueues, but no impact
>>>> non-compile-time-constant-length enqueues.
>>>>
>>>> Feel free to ignore.
>>>
>>>
>>> I did some performance comparison of the patch.
>>> A low-end ARM machines shows 0.7% drop with single event case. No
>>> regression see with high-end ARM cores with single event case.
>>>
>>> IMO, optimizing the check for burst mode(the new patch) may not show
>>> any real improvement as the cost is divided by number of event.
>>> Whereas optimizing the check for single event case(The current code)
>>> shows better performance with single event case and no regression
>>> with burst mode as cost is divided by number of events.
>>
>> I ran some tests on an AMD Zen 3 with DSW.
>> In the below tests the enqueue burst size is not compile-time constant.
>>
>> Enqueue burst size Performance improvement
>> Run-time constant 1 ~5%
>> Run-time constant 2 ~0%
>> Run-time variable 1-2 ~9%
>> Run-time variable 1-16 ~0%
>>
>> The run-time variable enqueue sizes randomly (uniformly) distributed in
>> the specified range.
>>
>> The first result may come as a surprise. The benchmark is using
>> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
>> in most apps). The single-event enqueue function only exists in a
>> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
>> I suspect that is the reason for the performance improvement.
>>
>> This effect is large-enough to make it somewhat beneficial (+~1%) to use
>> run-time variable single-event enqueue compared to keeping the burst
>> size compile-time constant.
>
> # Interesting, Could you share your testeventdev command to test it.
I'm using a proprietary benchmark to evaluate the effect of these
changes. There's certainly nothing secret about that program, and also
nothing very DSW-specific either. I hope to at some point both extend
DPDK eventdev tests to include DSW, and also to contribute
benchmarks/characteristics tests (perf unit tests or as a separate
program), if there seems to be a value in this.
> # By having quick glance on DSW code, following change can be added(or
> similar cases).
> Not sure such change in DSW driver is making a difference or nor?
>
>
> diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
> index e84b65d99f..455470997b 100644
> --- a/drivers/event/dsw/dsw_event.c
> +++ b/drivers/event/dsw/dsw_event.c
> @@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
> *dsw, struct dsw_port *source_port)
> uint16_t
> dsw_event_enqueue(void *port, const struct rte_event *ev)
> {
> - return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
> + return dsw_event_enqueue_burst(port, ev, 1);
Good point.
Historical note: I think that comparison is old cruft borne out of a
misconception, that the single-event enqueue could be called directly
from application code, combined with the fact that producer-only ports
needed some way to "maintain" a port, prior to the introduction of
rte_event_maintain().
> }
>
> static __rte_always_inline uint16_t
> @@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
> *source_port,
> return (num_non_release + num_release);
> }
>
> -uint16_t
> +inline uint16_t
From what it seems, this does not have the desired effect, at least not
on GCC 11.3 (w/ the default DPDK compiler configuration).
I reached this conclusion when I noticed that if I reshuffle the code so
to force (not hint) the inlining of the burst (and generic burst)
enqueue function into dsw_event_enqueue(), your change performs better.
> dsw_event_enqueue_burst(void *port, const struct rte_event events[],
> uint16_t events_len)
> {
>
> # I am testing with command like this "/build/app/dpdk-test-eventdev
> -l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
> --stlist p --nb_pkts=10000000000"
>
I re-ran the compile-time variable, run-time constant enqueue size of 1,
and I got the following:
Jerin's change: +4%
Jerin's change + ensure inlining: +6%
RFC v3: +7%
(Here I use a more different setup that produces more deterministic
results, hence the different numbers compared to the previous runs. They
were using a pipeline spread over two chiplets, and these runs are using
only a single chiplet.)
It seems like with your suggested changes you eliminate most of the
single-enqueue-special case performance degradation (for DSW), but not
all of it. The remaining degradation is very small (for the above case,
larger for small by run-time variable enqueue sizes), but it's a little
sad that a supposedly performance-enhancing special case (that drives
complexity in the code, although not much) actually degrades performance.
>>
>> The performance gain is counted toward both enqueue and dequeue costs
>> (+benchmark app overhead), so an under-estimation if see this as an
>> enqueue performance improvement.
>>
>>> If you agree, then we can skip this patch.
>>>
>>
>> I have no strong opinion if this should be included or not.
>>
>> It was up to me, I would drop the single-enqueue special case handling
>> altogether in the next ABI update.
>
> That's a reasonable path. If we are willing to push a patch, we can
> test it and give feedback.
> Or in our spare time, We can do that as well.
>
Sure, I'll give it a try.
The next release is an ABI-breaking one?
>>
>>>
>>>>
>>>>> If so, check should be following. Right?
>>>>>
>>>>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>> || nb_events == 1)
>>>>>
>>>>> At least, It was my original intention in the code.
>>>>>
>>>>>
>>>>>
>>>>>> return (fp_ops->enqueue)(port, ev);
>>>>>> else
>>>>>> return fn(port, ev, nb_events);
>>>>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>>>>>> * Allow zero cost non burst mode routine invocation if application
>>>>>> * requests nb_events as const one
>>>>>> */
>>>>>> - if (nb_events == 1)
>>>>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>>> return (fp_ops->dequeue)(port, ev, timeout_ticks);
>>>>>> else
>>>>>> return (fp_ops->dequeue_burst)(port, ev, nb_events,
>>>>>> --
>>>>>> 2.34.1
>>>>>>
>>>>
^ permalink raw reply [relevance 3%]
* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
2023-05-16 11:47 0% ` lihuisong (C)
@ 2023-05-16 14:13 0% ` Ferruh Yigit
2023-05-17 7:45 0% ` lihuisong (C)
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-05-16 14:13 UTC (permalink / raw)
To: lihuisong (C)
Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
dev, techboard
On 5/16/2023 12:47 PM, lihuisong (C) wrote:
> Hi Ferruh,
>
> There is no result on techboard.
> How to deal with this problem next?
+techboard for comment.
Btw, what was your positioning to Bruce's suggestion,
when a MAC address is in the list, fail to set it as default and enforce
user do the corrective action (delete MAC explicitly etc...).
If you are OK with it, that is good for me too, unless techboard objects
we can proceed with that one.
>
> /Huisong
>
> 在 2023/2/2 20:36, Huisong Li 写道:
>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>> applications modify the default MAC address by .mac_addr_set(). However,
>> if the new default one has been added as a non-default MAC address by
>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
>> list. As a result, one MAC address occupies two entries in the list.
>> Like:
>> add(MAC1)
>> add(MAC2)
>> add(MAC3)
>> add(MAC4)
>> set_default(MAC3)
>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>> Note: MAC3 occupies two entries.
>>
>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
>> old default MAC when set default MAC. If user continues to do
>> set_default(MAC5), and the mac_addrs list is default=MAC5, filters=(MAC1,
>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the list,
>> but packets with MAC3 aren't actually received by the PMD.
>>
>> So need to ensure that the new default address is removed from the
>> rest of
>> the list if the address was already in the list.
>>
>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>> ---
>> v8: fix some comments.
>> v7: add announcement in the release notes and document this behavior.
>> v6: fix commit log and some code comments.
>> v5:
>> - merge the second patch into the first patch.
>> - add error log when rollback failed.
>> v4:
>> - fix broken in the patchwork
>> v3:
>> - first explicitly remove the non-default MAC, then set default one.
>> - document default and non-default MAC address
>> v2:
>> - fixed commit log.
>> ---
>> doc/guides/rel_notes/release_23_03.rst | 6 +++++
>> lib/ethdev/ethdev_driver.h | 6 ++++-
>> lib/ethdev/rte_ethdev.c | 35 ++++++++++++++++++++++++--
>> lib/ethdev/rte_ethdev.h | 3 +++
>> 4 files changed, 47 insertions(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>> b/doc/guides/rel_notes/release_23_03.rst
>> index 84b112a8b1..1c9b9912c2 100644
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> @@ -105,6 +105,12 @@ API Changes
>> Also, make sure to start the actual text at the margin.
>> =======================================================
>> +* ethdev: ensured all entries in MAC address list are uniques.
>> + When setting a default MAC address with the function
>> + ``rte_eth_dev_default_mac_addr_set``,
>> + the address is now removed from the rest of the address list
>> + in order to ensure it is only at index 0 of the list.
>> +
>> ABI Changes
>> -----------
>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>> index dde3ec84ef..3994c61b86 100644
>> --- a/lib/ethdev/ethdev_driver.h
>> +++ b/lib/ethdev/ethdev_driver.h
>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>> uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>> failures */
>> - /** Device Ethernet link address. @see
>> rte_eth_dev_release_port() */
>> + /**
>> + * Device Ethernet link addresses.
>> + * All entries are unique.
>> + * The first entry (index zero) is the default address.
>> + */
>> struct rte_ether_addr *mac_addrs;
>> /** Bitmap associating MAC addresses to pools */
>> uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 86ca303ab5..de25183619 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>> struct rte_ether_addr *addr)
>> int
>> rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>> rte_ether_addr *addr)
>> {
>> + uint64_t mac_pool_sel_bk = 0;
>> struct rte_eth_dev *dev;
>> + uint32_t pool;
>> + int index;
>> int ret;
>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>> port_id, struct rte_ether_addr *addr)
>> if (*dev->dev_ops->mac_addr_set == NULL)
>> return -ENOTSUP;
>> + /* Keep address unique in dev->data->mac_addrs[]. */
>> + index = eth_dev_get_mac_addr_index(port_id, addr);
>> + if (index > 0) {
>> + /* Remove address in dev data structure */
>> + mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>> + ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>> + if (ret < 0) {
>> + RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>> from the rest of list.\n",
>> + port_id);
>> + return ret;
>> + }
>> + }
>> ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>> if (ret < 0)
>> - return ret;
>> + goto out;
>> /* Update default address in NIC data structure */
>> rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>> return 0;
>> -}
>> +out:
>> + if (index > 0) {
>> + pool = 0;
>> + do {
>> + if (mac_pool_sel_bk & UINT64_C(1)) {
>> + if (rte_eth_dev_mac_addr_add(port_id, addr,
>> + pool) != 0)
>> + RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>> id(%u) in port %u.\n",
>> + pool, port_id);
>> + }
>> + mac_pool_sel_bk >>= 1;
>> + pool++;
>> + } while (mac_pool_sel_bk != 0);
>> + }
>> +
>> + return ret;
>> +}
>> /*
>> * Returns index into MAC address array of addr. Use
>> 00:00:00:00:00:00 to find
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index d22de196db..2456153457 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>> /**
>> * Set the default MAC address.
>> + * It replaces the address at index 0 of the MAC address list.
>> + * If the address was already in the MAC address list,
>> + * it is removed from the rest of the list.
>> *
>> * @param port_id
>> * The port identifier of the Ethernet device.
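For illustration, here is a minimal sketch of the sequence the commit message above describes, using the public ethdev API; the port id and MAC values are made up and return codes are ignored for brevity:

#include <rte_ethdev.h>

static void
mac_list_example(void)
{
	struct rte_ether_addr mac3 = {{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x03 }};
	struct rte_ether_addr mac5 = {{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x05 }};
	uint16_t port_id = 0;

	/* MAC3 is first added as a non-default address (pool 0). */
	rte_eth_dev_mac_addr_add(port_id, &mac3, 0);

	/* Without the fix, MAC3 now also lands at index 0, so one MAC
	 * occupies two entries of dev->data->mac_addrs[]. */
	rte_eth_dev_default_mac_addr_set(port_id, &mac3);

	/* PMDs that drop the old default filter here leave a stale MAC3
	 * entry in the list that no longer receives packets. */
	rte_eth_dev_default_mac_addr_set(port_id, &mac5);
}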
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
2023-05-15 20:52 3% ` Mattias Rönnblom
@ 2023-05-16 13:08 0% ` Jerin Jacob
2023-05-17 7:16 3% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-05-16 13:08 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup
On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2023-05-15 14:38, Jerin Jacob wrote:
> > On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >>
> >> On 2023-05-12 13:59, Jerin Jacob wrote:
> >>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>
> >>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
> >>>> dequeue only when the burst size is compile-time constant (and equal
> >>>> to one).
> >>>>
> >>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>
> >>>> ---
> >>>>
> >>>> v3: Actually include the change v2 claimed to contain.
> >>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
> >>>> application is compiled with -pedantic. (Morten Brørup)
> >>>> ---
> >>>> lib/eventdev/rte_eventdev.h | 4 ++--
> >>>> 1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >>>> index a90e23ac8b..a471caeb6d 100644
> >>>> --- a/lib/eventdev/rte_eventdev.h
> >>>> +++ b/lib/eventdev/rte_eventdev.h
> >>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> >>>> * Allow zero cost non burst mode routine invocation if application
> >>>> * requests nb_events as const one
> >>>> */
> >>>> - if (nb_events == 1)
> >>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>
> >>> "Why" part is not clear from the commit message. Is this to avoid
> >>> nb_events read if it is built-in const.
> >>
> >> The __builtin_constant_p() is introduced to avoid having the compiler
> >> generate a conditional branch and two different code paths in case
> >> nb_elem is a run-time variable.
> >>
> >> In particular, this matters if nb_elems is run-time variable and varies
> >> between 1 and some larger value.
> >>
> >> I should have mentioned this in the commit message.
> >>
> >> A very slight performance improvement. It also makes the code better
> >> match the comment, imo. Zero cost for const one enqueues, but no impact
> >> on non-compile-time-constant-length enqueues.
> >>
> >> Feel free to ignore.
> >
> >
> > I did some performance comparison of the patch.
> > A low-end ARM machine shows a 0.7% drop with the single event case. No
> > regression is seen with high-end ARM cores with the single event case.
> >
> > IMO, optimizing the check for burst mode (the new patch) may not show
> > any real improvement as the cost is divided by the number of events.
> > Whereas optimizing the check for the single event case (the current code)
> > shows better performance with the single event case and no regression
> > with burst mode as the cost is divided by the number of events.
>
> I ran some tests on an AMD Zen 3 with DSW.
> In the below tests the enqueue burst size is not compile-time constant.
>
> Enqueue burst size Performance improvement
> Run-time constant 1 ~5%
> Run-time constant 2 ~0%
> Run-time variable 1-2 ~9%
> Run-time variable 1-16 ~0%
>
> The run-time variable enqueue sizes were randomly (uniformly) distributed in
> the specified range.
>
> The first result may come as a surprise. The benchmark is using
> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
> in most apps). The single-event enqueue function only exists in a
> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
> I suspect that is the reason for the performance improvement.
>
> This effect is large enough to make it somewhat beneficial (+~1%) to use
> run-time variable single-event enqueue compared to keeping the burst
> size compile-time constant.
# Interesting. Could you share your testeventdev command to test it?
# From a quick glance at the DSW code, the following change can be added (or
similar cases).
Not sure whether such a change in the DSW driver makes a difference or not.
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index e84b65d99f..455470997b 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
*dsw, struct dsw_port *source_port)
uint16_t
dsw_event_enqueue(void *port, const struct rte_event *ev)
{
- return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
+ return dsw_event_enqueue_burst(port, ev, 1);
}
static __rte_always_inline uint16_t
@@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
*source_port,
return (num_non_release + num_release);
}
-uint16_t
+inline uint16_t
dsw_event_enqueue_burst(void *port, const struct rte_event events[],
uint16_t events_len)
{
# I am testing with command like this "/build/app/dpdk-test-eventdev
-l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
--stlist p --nb_pkts=10000000000"
>
> The performance gain is counted toward both enqueue and dequeue costs
> (+benchmark app overhead), so it is an under-estimation if seen as an
> enqueue performance improvement.
>
> > If you agree, then we can skip this patch.
> >
>
> I have no strong opinion if this should be included or not.
>
> If it were up to me, I would drop the single-enqueue special case handling
> altogether in the next ABI update.
That's a reasonable path. If we are willing to push a patch, we can
test it and give feedback.
Or, in our spare time, we can do that as well.
>
> >
> >>
> >>> If so, check should be following. Right?
> >>>
> >>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
> >>> || nb_events == 1)
> >>>
> >>> At least, It was my original intention in the code.
> >>>
> >>>
> >>>
> >>>> return (fp_ops->enqueue)(port, ev);
> >>>> else
> >>>> return fn(port, ev, nb_events);
> >>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> >>>> * Allow zero cost non burst mode routine invocation if application
> >>>> * requests nb_events as const one
> >>>> */
> >>>> - if (nb_events == 1)
> >>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>> return (fp_ops->dequeue)(port, ev, timeout_ticks);
> >>>> else
> >>>> return (fp_ops->dequeue_burst)(port, ev, nb_events,
> >>>> --
> >>>> 2.34.1
> >>>>
> >>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
2023-05-16 11:45 0% ` Maxime Coquelin
@ 2023-05-16 12:07 0% ` Eelco Chaudron
0 siblings, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-16 12:07 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: David Marchand, chenbo.xia, dev
On 16 May 2023, at 13:45, Maxime Coquelin wrote:
> On 5/16/23 13:36, Eelco Chaudron wrote:
>>
>>
>> On 16 May 2023, at 12:12, David Marchand wrote:
>>
>>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>>
>>> [snip]
>>>
>>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>>> vsocket->path = NULL;
>>>>>> }
>>>>>>
>>>>>> + if (vsocket && vsocket->alloc_notify_ops) {
>>>>>> +#pragma GCC diagnostic push
>>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>>> + free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>>> +#pragma GCC diagnostic pop
>>>>>> + vsocket->notify_ops = NULL;
>>>>>> + }
>>>>>
>>>>> Rather than select the behavior based on a boolean (and here force the
>>>>> compiler to close its eyes), I would instead add a non const pointer
>>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>>
>>>> Good idea, I will make the change in v3.
>>>
>>> Feel free to use a better name for this field :-).
>>>
>>>>
>>>>>> +
>>>>>> if (vsocket) {
>>>>>> free(vsocket);
>>>>>> vsocket = NULL;
>>>
>>> [snip]
>>>
>>>>>> + /*
>>>>>> + * Although the ops structure is a const structure, we do need to
>>>>>> + * override the guest_notify operation. This is because with the
>>>>>> + * previous APIs it was "reserved" and if any garbage value was passed,
>>>>>> + * it could crash the application.
>>>>>> + */
>>>>>> + if (ops && !ops->guest_notify) {
>>>>>
>>>>> Hum, as described in the comment above, I don't think we should look
>>>>> at ops->guest_notify value at all.
>>>>> Checking ops != NULL should be enough.
>>>>
>>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>>
>>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>>
>>> Hum, I don't understand my comment either o_O'.
>>> Too many days off... or maybe my evil twin took over the keyboard.
>>>
>>>
>>>>
>>>>>> + struct rte_vhost_device_ops *new_ops;
>>>>>> +
>>>>>> + new_ops = malloc(sizeof(*new_ops));
>>>>>
>>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>>> I am unclear of the impact though.
>>>>
>>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>>
>>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>>
>>> Determining current numa is doable, via 'ops'
>>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>>> numa_realloc().
>>> The problem is how to allocate on this numa with the libc allocator
>>> for which I have no idea...
>>> We could go with the dpdk allocator (again, like numa_realloc()).
>>>
>>>
>>> In practice, the passed ops will be probably from a const variable in
>>> the program .data section (for which I think fields are set to 0
>>> unless explicitly initialised), or a memset() will be called for a
>>> dynamic allocation from good citizens.
>>> So we can probably live with the current proposal.
>>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>>> we will drop this compat code.
>>>
>>> Maxime, Chenbo, what do you think?
>>
>> Wait for their response, but for now I assume we can just keep the numa unaware malloc().
>
> Let's keep it as is as we'll get rid of it in 23.11.
Thanks for confirming.
>>>
>>> [snip]
>>>
>>>>>
>>>>> But putting indentation aside, is this change equivalent?
>>>>> - if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>>> - (vq->callfd >= 0)) ||
>>>>> - unlikely(!signalled_used_valid)) {
>>>>> + if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>>> + unlikely(!signalled_used_valid)) &&
>>>>> + vq->callfd >= 0) {
>>>>
>>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>>
>>> I think this should be a separate fix.
>>
>> ACK, will add a separate patch in this series to fix it.
>
> I also caught & fixed it while implementing my VDUSE series [0].
> You can pick it in your series, and I will rebase my series on top of
> it.
Thanks for the details. I’ll include your patch in my series.
I will send out a new revision soon (after testing the changes with OVS).
Thanks,
Eelco
> Thanks,
> Maxime
>
> [0]: https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/b976e1f226db5c09834148847d994045eb89be93
>
>
>>
>>>
>>>>
>>>>>> + vhost_vring_inject_irq(dev, vq);
>>>
>>>
>>> --
>>> David Marchand
>>
^ permalink raw reply [relevance 0%]
* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
@ 2023-05-16 11:47 0% ` lihuisong (C)
2023-05-16 14:13 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: lihuisong (C) @ 2023-05-16 11:47 UTC (permalink / raw)
To: ferruh.yigit
Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen, dev
Hi Ferruh,
There is no conclusion from the techboard yet.
How should we deal with this problem next?
/Huisong
在 2023/2/2 20:36, Huisong Li 写道:
> The dev->data->mac_addrs[0] will be changed to a new MAC address when
> applications modify the default MAC address by .mac_addr_set(). However,
> if the new default one has been added as a non-default MAC address by
> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
> list. As a result, one MAC address occupies two entries in the list. Like:
> add(MAC1)
> add(MAC2)
> add(MAC3)
> add(MAC4)
> set_default(MAC3)
> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
> Note: MAC3 occupies two entries.
>
> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
> old default MAC when setting the default MAC. If the user then does
> set_default(MAC5), the mac_addrs list becomes default=MAC5, filters=(MAC1,
> MAC2, MAC3, MAC4). At this point the user can still see MAC3 in the list,
> but packets with MAC3 aren't actually received by the PMD.
>
> So we need to ensure that the new default address is removed from the rest of
> the list if the address was already in the list.
>
> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
> Cc: stable@dpdk.org
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
> v8: fix some comments.
> v7: add announcement in the release notes and document this behavior.
> v6: fix commit log and some code comments.
> v5:
> - merge the second patch into the first patch.
> - add error log when rollback failed.
> v4:
> - fix broken in the patchwork
> v3:
> - first explicitly remove the non-default MAC, then set default one.
> - document default and non-default MAC address
> v2:
> - fixed commit log.
> ---
> doc/guides/rel_notes/release_23_03.rst | 6 +++++
> lib/ethdev/ethdev_driver.h | 6 ++++-
> lib/ethdev/rte_ethdev.c | 35 ++++++++++++++++++++++++--
> lib/ethdev/rte_ethdev.h | 3 +++
> 4 files changed, 47 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index 84b112a8b1..1c9b9912c2 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -105,6 +105,12 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* ethdev: ensured all entries in MAC address list are uniques.
> + When setting a default MAC address with the function
> + ``rte_eth_dev_default_mac_addr_set``,
> + the address is now removed from the rest of the address list
> + in order to ensure it is only at index 0 of the list.
> +
>
> ABI Changes
> -----------
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index dde3ec84ef..3994c61b86 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>
> uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
>
> - /** Device Ethernet link address. @see rte_eth_dev_release_port() */
> + /**
> + * Device Ethernet link addresses.
> + * All entries are unique.
> + * The first entry (index zero) is the default address.
> + */
> struct rte_ether_addr *mac_addrs;
> /** Bitmap associating MAC addresses to pools */
> uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 86ca303ab5..de25183619 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
> int
> rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
> {
> + uint64_t mac_pool_sel_bk = 0;
> struct rte_eth_dev *dev;
> + uint32_t pool;
> + int index;
> int ret;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
> if (*dev->dev_ops->mac_addr_set == NULL)
> return -ENOTSUP;
>
> + /* Keep address unique in dev->data->mac_addrs[]. */
> + index = eth_dev_get_mac_addr_index(port_id, addr);
> + if (index > 0) {
> + /* Remove address in dev data structure */
> + mac_pool_sel_bk = dev->data->mac_pool_sel[index];
> + ret = rte_eth_dev_mac_addr_remove(port_id, addr);
> + if (ret < 0) {
> + RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address from the rest of list.\n",
> + port_id);
> + return ret;
> + }
> + }
> ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
> if (ret < 0)
> - return ret;
> + goto out;
>
> /* Update default address in NIC data structure */
> rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>
> return 0;
> -}
>
> +out:
> + if (index > 0) {
> + pool = 0;
> + do {
> + if (mac_pool_sel_bk & UINT64_C(1)) {
> + if (rte_eth_dev_mac_addr_add(port_id, addr,
> + pool) != 0)
> + RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool id(%u) in port %u.\n",
> + pool, port_id);
> + }
> + mac_pool_sel_bk >>= 1;
> + pool++;
> + } while (mac_pool_sel_bk != 0);
> + }
> +
> + return ret;
> +}
>
> /*
> * Returns index into MAC address array of addr. Use 00:00:00:00:00:00 to find
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d22de196db..2456153457 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>
> /**
> * Set the default MAC address.
> + * It replaces the address at index 0 of the MAC address list.
> + * If the address was already in the MAC address list,
> + * it is removed from the rest of the list.
> *
> * @param port_id
> * The port identifier of the Ethernet device.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
2023-05-16 11:36 0% ` Eelco Chaudron
@ 2023-05-16 11:45 0% ` Maxime Coquelin
2023-05-16 12:07 0% ` Eelco Chaudron
2023-05-17 9:18 0% ` Eelco Chaudron
1 sibling, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-05-16 11:45 UTC (permalink / raw)
To: Eelco Chaudron, David Marchand; +Cc: chenbo.xia, dev
On 5/16/23 13:36, Eelco Chaudron wrote:
>
>
> On 16 May 2023, at 12:12, David Marchand wrote:
>
>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>
>> [snip]
>>
>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>> vsocket->path = NULL;
>>>>> }
>>>>>
>>>>> + if (vsocket && vsocket->alloc_notify_ops) {
>>>>> +#pragma GCC diagnostic push
>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>> + free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>> +#pragma GCC diagnostic pop
>>>>> + vsocket->notify_ops = NULL;
>>>>> + }
>>>>
>>>> Rather than select the behavior based on a boolean (and here force the
>>>> compiler to close its eyes), I would instead add a non const pointer
>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>
>>> Good idea, I will make the change in v3.
>>
>> Feel free to use a better name for this field :-).
>>
>>>
>>>>> +
>>>>> if (vsocket) {
>>>>> free(vsocket);
>>>>> vsocket = NULL;
>>
>> [snip]
>>
>>>>> + /*
>>>>> + * Although the ops structure is a const structure, we do need to
>>>>> + * override the guest_notify operation. This is because with the
>>>>> + * previous APIs it was "reserved" and if any garbage value was passed,
>>>>> + * it could crash the application.
>>>>> + */
>>>>> + if (ops && !ops->guest_notify) {
>>>>
>>>> Hum, as described in the comment above, I don't think we should look
>>>> at ops->guest_notify value at all.
>>>> Checking ops != NULL should be enough.
>>>
>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>
>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>
>> Hum, I don't understand my comment either o_O'.
>> Too many days off... or maybe my evil twin took over the keyboard.
>>
>>
>>>
>>>>> + struct rte_vhost_device_ops *new_ops;
>>>>> +
>>>>> + new_ops = malloc(sizeof(*new_ops));
>>>>
>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>> I am unclear of the impact though.
>>>
>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>
>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>
>> Determining current numa is doable, via 'ops'
>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>> numa_realloc().
>> The problem is how to allocate on this numa with the libc allocator
>> for which I have no idea...
>> We could go with the dpdk allocator (again, like numa_realloc()).
>>
>>
>> In practice, the passed ops will be probably from a const variable in
>> the program .data section (for which I think fields are set to 0
>> unless explicitly initialised), or a memset() will be called for a
>> dynamic allocation from good citizens.
>> So we can probably live with the current proposal.
>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>> we will drop this compat code.
>>
>> Maxime, Chenbo, what do you think?
>
> Wait for their response, but for now I assume we can just keep the numa unaware malloc().
Let's keep it as is as we'll get rid of it in 23.11.
>>
>> [snip]
>>
>>>>
>>>> But putting indentation aside, is this change equivalent?
>>>> - if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>> - (vq->callfd >= 0)) ||
>>>> - unlikely(!signalled_used_valid)) {
>>>> + if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>> + unlikely(!signalled_used_valid)) &&
>>>> + vq->callfd >= 0) {
>>>
>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>
>> I think this should be a separate fix.
>
> ACK, will add a separate patch in this series to fix it.
I also caught & fixed it while implementing my VDUSE series [0].
You can pick it in your series, and I will rebase my series on top of
it.
Thanks,
Maxime
[0]:
https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/b976e1f226db5c09834148847d994045eb89be93
>
>>
>>>
>>>>> + vhost_vring_inject_irq(dev, vq);
>>
>>
>> --
>> David Marchand
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
2023-05-16 10:12 3% ` David Marchand
@ 2023-05-16 11:36 0% ` Eelco Chaudron
2023-05-16 11:45 0% ` Maxime Coquelin
2023-05-17 9:18 0% ` Eelco Chaudron
0 siblings, 2 replies; 200+ results
From: Eelco Chaudron @ 2023-05-16 11:36 UTC (permalink / raw)
To: David Marchand; +Cc: maxime.coquelin, chenbo.xia, dev
On 16 May 2023, at 12:12, David Marchand wrote:
> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>> On 10 May 2023, at 13:44, David Marchand wrote:
>
> [snip]
>
>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>> vsocket->path = NULL;
>>>> }
>>>>
>>>> + if (vsocket && vsocket->alloc_notify_ops) {
>>>> +#pragma GCC diagnostic push
>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>> + free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>> +#pragma GCC diagnostic pop
>>>> + vsocket->notify_ops = NULL;
>>>> + }
>>>
>>> Rather than select the behavior based on a boolean (and here force the
>>> compiler to close its eyes), I would instead add a non const pointer
>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>
>> Good idea, I will make the change in v3.
>
> Feel free to use a better name for this field :-).
>
>>
>>>> +
>>>> if (vsocket) {
>>>> free(vsocket);
>>>> vsocket = NULL;
>
> [snip]
>
>>>> + /*
>>>> + * Although the ops structure is a const structure, we do need to
>>>> + * override the guest_notify operation. This is because with the
>>>> + * previous APIs it was "reserved" and if any garbage value was passed,
>>>> + * it could crash the application.
>>>> + */
>>>> + if (ops && !ops->guest_notify) {
>>>
>>> Hum, as described in the comment above, I don't think we should look
>>> at ops->guest_notify value at all.
>>> Checking ops != NULL should be enough.
>>
>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>
>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>
> Hum, I don't understand my comment either o_O'.
> Too many days off... or maybe my evil twin took over the keyboard.
>
>
>>
>>>> + struct rte_vhost_device_ops *new_ops;
>>>> +
>>>> + new_ops = malloc(sizeof(*new_ops));
>>>
>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>> I am unclear of the impact though.
>>
>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>
>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>
> Determining current numa is doable, via 'ops'
> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
> numa_realloc().
> The problem is how to allocate on this numa with the libc allocator
> for which I have no idea...
> We could go with the dpdk allocator (again, like numa_realloc()).
>
>
> In practice, the passed ops will be probably from a const variable in
> the program .data section (for which I think fields are set to 0
> unless explicitly initialised), or a memset() will be called for a
> dynamic allocation from good citizens.
> So we can probably live with the current proposal.
> Plus, this is only for one release, since in 23.11 with the ABI bump,
> we will drop this compat code.
>
> Maxime, Chenbo, what do you think?
Wait for their response, but for now I assume we can just keep the numa unaware malloc().
>
> [snip]
>
>>>
>>> But putting indentation aside, is this change equivalent?
>>> - if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>> - (vq->callfd >= 0)) ||
>>> - unlikely(!signalled_used_valid)) {
>>> + if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>> + unlikely(!signalled_used_valid)) &&
>>> + vq->callfd >= 0) {
>>
>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>
> I think this should be a separate fix.
ACK, will add a separate patch in this series to fix it.
>
>>
>>>> + vhost_vring_inject_irq(dev, vq);
>
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port
@ 2023-05-16 11:27 0% ` lihuisong (C)
2023-05-23 0:46 0% ` fengchengwen
1 sibling, 0 replies; 200+ results
From: lihuisong (C) @ 2023-05-16 11:27 UTC (permalink / raw)
To: ferruh.yigit, thomas
Cc: dev, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen
Hi Ferruh and Thomas,
Can you continue to take a look at this series?
This work has been in progress since August last year.
/Huisong
在 2023/1/31 11:33, Huisong Li 写道:
> This patchset fixes some bugs and supports attaching and detaching ports
> in primary and secondary processes.
>
> ---
> -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
> -v4: fix a misspelling.
> -v3:
> #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
> for other bus type.
> #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
> the probelm in patch 2/5.
> -v2: resend due to CI unexplained failure.
>
> Huisong Li (5):
> drivers/bus: restore driver assignment at front of probing
> ethdev: fix skip valid port in probing callback
> app/testpmd: check the validity of the port
> app/testpmd: add attach and detach port for multiple process
> app/testpmd: stop forwarding in new or destroy event
>
> app/test-pmd/testpmd.c | 47 +++++++++++++++---------
> app/test-pmd/testpmd.h | 1 -
> drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
> drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
> drivers/bus/fslmc/fslmc_bus.c | 8 +++-
> drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
> drivers/bus/pci/pci_common.c | 9 ++++-
> drivers/bus/vdev/vdev.c | 10 ++++-
> drivers/bus/vmbus/vmbus_common.c | 9 ++++-
> drivers/net/bnxt/bnxt_ethdev.c | 3 +-
> drivers/net/bonding/bonding_testpmd.c | 1 -
> drivers/net/mlx5/mlx5.c | 2 +-
> lib/ethdev/ethdev_driver.c | 13 +++++--
> lib/ethdev/ethdev_driver.h | 12 ++++++
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 4 +-
> lib/ethdev/rte_ethdev.h | 4 +-
> lib/ethdev/version.map | 1 +
> 19 files changed, 114 insertions(+), 44 deletions(-)
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
@ 2023-05-16 10:12 3% ` David Marchand
2023-05-16 11:36 0% ` Eelco Chaudron
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-05-16 10:12 UTC (permalink / raw)
To: Eelco Chaudron, maxime.coquelin, chenbo.xia; +Cc: dev
On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
> On 10 May 2023, at 13:44, David Marchand wrote:
[snip]
> >> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
> >> vsocket->path = NULL;
> >> }
> >>
> >> + if (vsocket && vsocket->alloc_notify_ops) {
> >> +#pragma GCC diagnostic push
> >> +#pragma GCC diagnostic ignored "-Wcast-qual"
> >> + free((struct rte_vhost_device_ops *)vsocket->notify_ops);
> >> +#pragma GCC diagnostic pop
> >> + vsocket->notify_ops = NULL;
> >> + }
> >
> > Rather than select the behavior based on a boolean (and here force the
> > compiler to close its eyes), I would instead add a non const pointer
> > to ops (let's say alloc_notify_ops) in vhost_user_socket.
> > The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>
> Good idea, I will make the change in v3.
Feel free to use a better name for this field :-).
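A rough sketch of what that could look like; the field name and the trimmed-down struct are placeholders, not the actual vhost code:

#include <stdlib.h>
#include <rte_vhost.h>

struct vhost_user_socket {
	/* ... other fields elided ... */
	const struct rte_vhost_device_ops *notify_ops; /* what callers see */
	struct rte_vhost_device_ops *malloc_notify_ops; /* set only when the
							 * ops were allocated
							 * internally */
};

static void
vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
{
	/* free(NULL) is a no-op, so no boolean flag or const cast is needed */
	free(vsocket->malloc_notify_ops);
	vsocket->malloc_notify_ops = NULL;
	vsocket->notify_ops = NULL;
}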
>
> >> +
> >> if (vsocket) {
> >> free(vsocket);
> >> vsocket = NULL;
[snip]
> >> + /*
> >> + * Although the ops structure is a const structure, we do need to
> >> + * override the guest_notify operation. This is because with the
> >> + * previous APIs it was "reserved" and if any garbage value was passed,
> >> + * it could crash the application.
> >> + */
> >> + if (ops && !ops->guest_notify) {
> >
> > Hum, as described in the comment above, I don't think we should look
> > at ops->guest_notify value at all.
> > Checking ops != NULL should be enough.
>
> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>
> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
Hum, I don't understand my comment either o_O'.
Too many days off... or maybe my evil twin took over the keyboard.
>
> >> + struct rte_vhost_device_ops *new_ops;
> >> +
> >> + new_ops = malloc(sizeof(*new_ops));
> >
> > Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
> > I am unclear of the impact though.
>
> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>
> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
Determining current numa is doable, via 'ops'
get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
numa_realloc().
The problem is how to allocate on this numa with the libc allocator
for which I have no idea...
We could go with the dpdk allocator (again, like numa_realloc()).
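A minimal sketch of that lookup, assuming the libnuma get_mempolicy() call used by numa_realloc() and the DPDK allocator for the copy (the function name is made up, error handling trimmed):

#include <stdint.h>
#include <numaif.h>
#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_vhost.h>

static struct rte_vhost_device_ops *
dup_ops_on_same_node(const struct rte_vhost_device_ops *ops)
{
	int node = SOCKET_ID_ANY;

	/* Ask the kernel which NUMA node backs the caller-provided ops. */
	if (get_mempolicy(&node, NULL, 0, (void *)(uintptr_t)ops,
			MPOL_F_NODE | MPOL_F_ADDR) < 0)
		node = SOCKET_ID_ANY;

	/* Allocate the writable copy on that node with the DPDK allocator. */
	return rte_malloc_socket(NULL, sizeof(*ops), 0, node);
}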
In practice, the passed ops will probably come from a const variable in
the program .data section (for which I think fields are set to 0
unless explicitly initialised), or a memset() will be called for a
dynamic allocation from good citizens.
So we can probably live with the current proposal.
Plus, this is only for one release, since in 23.11 with the ABI bump,
we will drop this compat code.
Maxime, Chenbo, what do you think?
[snip]
> >
> > But putting indentation aside, is this change equivalent?
> > - if ((vhost_need_event(vhost_used_event(vq), new, old) &&
> > - (vq->callfd >= 0)) ||
> > - unlikely(!signalled_used_valid)) {
> > + if ((vhost_need_event(vhost_used_event(vq), new, old) ||
> > + unlikely(!signalled_used_valid)) &&
> > + vq->callfd >= 0) {
>
> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
I think this should be a separate fix.
>
> >> + vhost_vring_inject_irq(dev, vq);
--
David Marchand
^ permalink raw reply [relevance 3%]
* [PATCH v1 5/7] ethdev: add GENEVE TLV option modification support
@ 2023-05-16 6:37 3% ` Michael Baum
1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-16 6:37 UTC (permalink / raw)
To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"
Each GENEVE TLV option is identified by both its "class" and "type", so
two new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.
To make room for those two new fields, the "level" field is reduced to
"uint8_t", which is more than enough for an encapsulation level.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++-
doc/guides/prog_guide/rte_flow.rst | 12 ++++++
doc/guides/rel_notes/release_23_07.rst | 3 ++
lib/ethdev/rte_flow.h | 51 +++++++++++++++++++++++++-
4 files changed, 112 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
ACTION_MODIFY_FIELD_DST_LEVEL,
ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
ACTION_MODIFY_FIELD_SRC_LEVEL,
ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
"ipv6_proto",
"flex_item",
- "hash_result", NULL
+ "hash_result",
+ "geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+ NULL
};
static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
static const enum index action_modify_field_dst[] = {
ACTION_MODIFY_FIELD_DST_LEVEL,
+ ACTION_MODIFY_FIELD_DST_TYPE_ID,
+ ACTION_MODIFY_FIELD_DST_CLASS_ID,
ACTION_MODIFY_FIELD_DST_OFFSET,
ACTION_MODIFY_FIELD_SRC_TYPE,
ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
static const enum index action_modify_field_src[] = {
ACTION_MODIFY_FIELD_SRC_LEVEL,
+ ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+ ACTION_MODIFY_FIELD_SRC_CLASS_ID,
ACTION_MODIFY_FIELD_SRC_OFFSET,
ACTION_MODIFY_FIELD_SRC_VALUE,
ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+ .name = "dst_type_id",
+ .help = "destination field type ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ dst.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+ .name = "dst_class",
+ .help = "destination field class ID",
+ .next = NEXT(action_modify_field_dst,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ dst.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_DST_OFFSET] = {
.name = "dst_offset",
.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
.call = parse_vc_modify_field_level,
.comp = comp_none,
},
+ [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+ .name = "src_type_id",
+ .help = "source field type ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+ src.type)),
+ .call = parse_vc_conf,
+ },
+ [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+ .name = "src_class",
+ .help = "source field class ID",
+ .next = NEXT(action_modify_field_src,
+ NEXT_ENTRY(COMMON_UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+ src.class_id)),
+ .call = parse_vc_conf,
+ },
[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
.name = "src_offset",
.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..cd38f0de46 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
For the tag array (in case of multiple tags are supported and present)
``level`` translates directly into the array index.
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
``flex_handle`` is used to specify the flex item pointer which is being
modified. ``flex_handle`` and ``level`` are mutually exclusive.
@@ -2994,6 +3002,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
+-----------------+----------------------------------------------------------+
| ``level`` | encapsulation level of a packet field or tag array index |
+-----------------+----------------------------------------------------------+
+ | ``type`` | geneve option type |
+ +-----------------+----------------------------------------------------------+
+ | ``class_id`` | geneve option class ID |
+ +-----------------+----------------------------------------------------------+
| ``flex_handle`` | flex item handle of a packet field |
+-----------------+----------------------------------------------------------+
| ``offset`` | number of bits to skip at the beginning |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* The ``level`` field in experimental structure
+ ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..b82eb0c0a8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */
RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */
RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */
+ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */
+ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+ RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */
};
/**
@@ -3788,7 +3791,53 @@ struct rte_flow_action_modify_data {
struct {
/** Encapsulation level or tag index or flex item handle. */
union {
- uint32_t level;
+ struct {
+ /**
+ * Packet encapsulation level containing
+ * the field modify to.
+ *
+ * - @p 0 requests the default behavior.
+ * Depending on the packet type, it
+ * can mean outermost, innermost or
+ * anything in between.
+ *
+ * It basically stands for the
+ * innermost encapsulation level
+ * modification can be performed on
+ * according to PMD and device
+ * capabilities.
+ *
+ * - @p 1 requests modification to be
+ * performed on the outermost packet
+ * encapsulation level.
+ *
+ * - @p 2 and subsequent values request
+ * modification to be performed on
+ * the specified inner packet
+ * encapsulation level, from
+ * outermost to innermost (lower to
+ * higher values).
+ *
+ * Values other than @p 0 are not
+ * necessarily supported.
+ *
+ * For RTE_FLOW_FIELD_TAG it represents
+ * the tag element in the tag array.
+ */
+ uint8_t level;
+ /**
+ * Geneve option type. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ uint8_t type;
+ /**
+ * Geneve option class. relevant only
+ * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+ * modification type.
+ */
+ rte_be16_t class_id;
+ };
struct rte_flow_item_flex_handle *flex_handle;
};
/** Number of bits to skip from a field. */
--
2.25.1
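For illustration, a sketch of how an application might fill in the action with the new fields; the option class/type, the immediate value and the width are made up:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Set the first 32 bits of the data of the GENEVE TLV option identified by
 * class 0x0102 / type 0x42 to an immediate value. */
static const struct rte_flow_action_modify_field geneve_opt_set = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
		.class_id = RTE_BE16(0x0102),
		.type = 0x42,
		.offset = 0,
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0xde, 0xad, 0xbe, 0xef },
	},
	.width = 32,
};
/* Used as: { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
 *            .conf = &geneve_opt_set } in the action list. */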
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
@ 2023-05-15 20:52 3% ` Mattias Rönnblom
2023-05-16 13:08 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-05-15 20:52 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom; +Cc: jerinj, dev, Morten Brørup
On 2023-05-15 14:38, Jerin Jacob wrote:
> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> On 2023-05-12 13:59, Jerin Jacob wrote:
>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>
>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
>>>> dequeue only when the burst size is compile-time constant (and equal
>>>> to one).
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>
>>>> ---
>>>>
>>>> v3: Actually include the change v2 claimed to contain.
>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
>>>> application is compiled with -pedantic. (Morten Brørup)
>>>> ---
>>>> lib/eventdev/rte_eventdev.h | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>> index a90e23ac8b..a471caeb6d 100644
>>>> --- a/lib/eventdev/rte_eventdev.h
>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>>> * Allow zero cost non burst mode routine invocation if application
>>>> * requests nb_events as const one
>>>> */
>>>> - if (nb_events == 1)
>>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>
>>> "Why" part is not clear from the commit message. Is this to avoid
>>> nb_events read if it is built-in const.
>>
>> The __builtin_constant_p() is introduced to avoid having the compiler
>> generate a conditional branch and two different code paths in case
>> nb_elem is a run-time variable.
>>
>> In particular, this matters if nb_elems is run-time variable and varies
>> between 1 and some larger value.
>>
>> I should have mentioned this in the commit message.
>>
>> A very slight performance improvement. It also makes the code better
>> match the comment, imo. Zero cost for const one enqueues, but no impact
>> on non-compile-time-constant-length enqueues.
>>
>> Feel free to ignore.
>
>
> I did some performance comparison of the patch.
> A low-end ARM machine shows a 0.7% drop with the single event case. No
> regression is seen with high-end ARM cores with the single event case.
>
> IMO, optimizing the check for burst mode (the new patch) may not show
> any real improvement as the cost is divided by the number of events.
> Whereas optimizing the check for the single event case (the current code)
> shows better performance with the single event case and no regression
> with burst mode as the cost is divided by the number of events.
I ran some tests on an AMD Zen 3 with DSW.
In the below tests the enqueue burst size is not compile-time constant.
Enqueue burst size Performance improvement
Run-time constant 1 ~5%
Run-time constant 2 ~0%
Run-time variable 1-2 ~9%
Run-time variable 1-16 ~0%
The run-time variable enqueue sizes were randomly (uniformly) distributed in
the specified range.
The first result may come as a surprise. The benchmark is using
RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
in most apps). The single-event enqueue function only exists in a
generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
I suspect that is the reason for the performance improvement.
This effect is large enough to make it somewhat beneficial (+~1%) to use
run-time variable single-event enqueue compared to keeping the burst
size compile-time constant.
The performance gain is counted toward both enqueue and dequeue costs
(+benchmark app overhead), so it is an under-estimation if seen as an
enqueue performance improvement.
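The effect discussed above comes from how the shortcut is compiled; below is a stand-alone sketch of the pattern, where the helper names are made up and only the check mirrors the rte_eventdev.h change under discussion:

#include <stdint.h>

uint16_t one_path(void *port, const void *ev);
uint16_t burst_path(void *port, const void *ev, uint16_t nb);

static inline uint16_t
enqueue(void *port, const void *ev, uint16_t nb)
{
	/*
	 * With a plain "nb == 1" test, a run-time nb that varies between 1
	 * and 2 forces a branch and two code paths at every call site.
	 * Guarding it with __builtin_constant_p() keeps the shortcut only
	 * when the compiler can prove nb is the literal 1, and otherwise
	 * compiles down to the burst path alone.
	 */
	if (__extension__(__builtin_constant_p(nb)) && nb == 1)
		return one_path(port, ev);

	return burst_path(port, ev, nb);
}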
> If you agree, then we can skip this patch.
>
I have no strong opinion if this should be included or not.
If it were up to me, I would drop the single-enqueue special case handling
altogether in the next ABI update.
>
>>
>>> If so, check should be following. Right?
>>>
>>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
>>> || nb_events == 1)
>>>
>>> At least, It was my original intention in the code.
>>>
>>>
>>>
>>>> return (fp_ops->enqueue)(port, ev);
>>>> else
>>>> return fn(port, ev, nb_events);
>>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>>>> * Allow zero cost non burst mode routine invocation if application
>>>> * requests nb_events as const one
>>>> */
>>>> - if (nb_events == 1)
>>>> + if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>> return (fp_ops->dequeue)(port, ev, timeout_ticks);
>>>> else
>>>> return (fp_ops->dequeue_burst)(port, ev, nb_events,
>>>> --
>>>> 2.34.1
>>>>
>>
^ permalink raw reply [relevance 3%]
* [PATCH v6 1/3] ring: fix unmatched type definition and usage
2023-05-09 9:24 3% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
@ 2023-05-09 9:24 3% ` Jie Hai
2023-05-30 9:27 0% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
1 sibling, 0 replies; 200+ results
From: Jie Hai @ 2023-05-09 9:24 UTC (permalink / raw)
To: Honnappa Nagarahalli, Konstantin Ananyev; +Cc: dev, liudongdong3
Field 'flags' of struct rte_ring is defined as int type. However,
it is used as unsigned int. To ensure consistency, change the
type of flags to unsigned int. Since these two types have the
same byte size, this change is not an ABI change.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/ring/rte_ring_core.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 82b237091b71..1c809abeb531 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {
struct rte_ring {
char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
/**< Name of the ring. */
- int flags; /**< Flags supplied at creation. */
+ uint32_t flags; /**< Flags supplied at creation. */
const struct rte_memzone *memzone;
/**< Memzone, if any, containing the rte_ring */
uint32_t size; /**< Size of ring. */
--
2.33.0
^ permalink raw reply [relevance 3%]
* [PATCH v6 0/3] add telemetry cmds for ring
2023-05-09 1:29 3% ` [PATCH v5 " Jie Hai
2023-05-09 1:29 3% ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-05-09 9:24 3% ` Jie Hai
2023-05-09 9:24 3% ` [PATCH v6 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-05-30 9:27 0% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
1 sibling, 2 replies; 200+ results
From: Jie Hai @ 2023-05-09 9:24 UTC (permalink / raw)
Cc: dev, liudongdong3
This patch set adds telemetry commands to list rings and to dump information
about a ring given its name.
v1->v2:
1. Add space after "switch".
2. Fix wrong strlen parameter.
v2->v3:
1. Remove prefix "rte_" for static function.
2. Add Acked-by Konstantin Ananyev for PATCH 1.
3. Introduce functions to return strings instead of copying strings.
4. Check pointer to memzone of ring.
5. Remove redundant variable.
6. Hold the lock when accessing ring data.
v3->v4:
1. Update changelog according to reviews of Honnappa Nagarahalli.
2. Add Reviewed-by Honnappa Nagarahalli.
3. Correct grammar in help information.
4. Correct spell warning on "te" reported by checkpatch.pl.
5. Use ring_walk() to query ring info instead of rte_ring_lookup().
6. Fix the type definition of the flags field of rte_ring, which did not match its usage.
7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
for mask and flags.
v4->v5:
1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
2. Add ABI change explanation for commit message of patch 1/3.
v5->v6:
1. Add Acked-by Morten Brørup.
2. Fix incorrect reference of commit.
Jie Hai (3):
ring: fix unmatched type definition and usage
ring: add telemetry cmd to list rings
ring: add telemetry cmd for ring info
lib/ring/meson.build | 1 +
lib/ring/rte_ring.c | 139 +++++++++++++++++++++++++++++++++++++++
lib/ring/rte_ring_core.h | 2 +-
3 files changed, 141 insertions(+), 1 deletion(-)
--
2.33.0
^ permalink raw reply [relevance 3%]
* Re: [PATCH v5 1/3] ring: fix unmatched type definition and usage
2023-05-09 6:23 0% ` Ruifeng Wang
@ 2023-05-09 8:15 0% ` Jie Hai
0 siblings, 0 replies; 200+ results
From: Jie Hai @ 2023-05-09 8:15 UTC (permalink / raw)
To: Ruifeng Wang, Honnappa Nagarahalli, Konstantin Ananyev,
Olivier Matz, Dharmik Jayesh Thakkar
Cc: dev, liudongdong3, nd
On 2023/5/9 14:23, Ruifeng Wang wrote:
>> -----Original Message-----
>> From: Jie Hai <haijie1@huawei.com>
>> Sent: Tuesday, May 9, 2023 9:29 AM
>> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Konstantin Ananyev
>> <konstantin.v.ananyev@yandex.ru>; Ruifeng Wang <Ruifeng.Wang@arm.com>; Gavin Hu
>> <Gavin.Hu@arm.com>; Olivier Matz <olivier.matz@6wind.com>; Dharmik Jayesh Thakkar
>> <DharmikJayesh.Thakkar@arm.com>
>> Cc: dev@dpdk.org; liudongdong3@huawei.com
>> Subject: [PATCH v5 1/3] ring: fix unmatched type definition and usage
>>
>> Field 'flags' of struct rte_ring is defined as int type. However, it is used as unsigned
>> int. To ensure consistency, change the type of flags to unsigned int. Since these two
>> types have the same byte size, this change is not an ABI change.
>>
>> Fixes: cc4b218790f6 ("ring: support configurable element size")
>
> The change looks good.
> However, I think the fix line is not accurate.
> I suppose it fixes af75078fece3 ("first public release").
>
Thanks for your review. Sorry for quoting the wrong commit.
This issue was indeed introduced by commit af75078fece3 ("first public
release").
I will fix this in the next version.
>>
>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>> Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>> ---
>> lib/ring/rte_ring_core.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
>> 82b237091b71..1c809abeb531 100644
>> --- a/lib/ring/rte_ring_core.h
>> +++ b/lib/ring/rte_ring_core.h
>> @@ -120,7 +120,7 @@ struct rte_ring_hts_headtail { struct rte_ring {
>> char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
>> /**< Name of the ring. */
>> - int flags; /**< Flags supplied at creation. */
>> + uint32_t flags; /**< Flags supplied at creation. */
>> const struct rte_memzone *memzone;
>> /**< Memzone, if any, containing the rte_ring */
>> uint32_t size; /**< Size of ring. */
>> --
>> 2.33.0
>
> .
^ permalink raw reply [relevance 0%]
* RE: [PATCH v5 1/3] ring: fix unmatched type definition and usage
2023-05-09 1:29 3% ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-05-09 6:23 0% ` Ruifeng Wang
2023-05-09 8:15 0% ` Jie Hai
0 siblings, 1 reply; 200+ results
From: Ruifeng Wang @ 2023-05-09 6:23 UTC (permalink / raw)
To: Jie Hai, Honnappa Nagarahalli, Konstantin Ananyev, Olivier Matz,
Dharmik Jayesh Thakkar
Cc: dev, liudongdong3, nd
> -----Original Message-----
> From: Jie Hai <haijie1@huawei.com>
> Sent: Tuesday, May 9, 2023 9:29 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; Ruifeng Wang <Ruifeng.Wang@arm.com>; Gavin Hu
> <Gavin.Hu@arm.com>; Olivier Matz <olivier.matz@6wind.com>; Dharmik Jayesh Thakkar
> <DharmikJayesh.Thakkar@arm.com>
> Cc: dev@dpdk.org; liudongdong3@huawei.com
> Subject: [PATCH v5 1/3] ring: fix unmatched type definition and usage
>
> Field 'flags' of struct rte_ring is defined as int type. However, it is used as unsigned
> int. To ensure consistency, change the type of flags to unsigned int. Since these two
> types have the same byte size, this change is not an ABI change.
>
> Fixes: cc4b218790f6 ("ring: support configurable element size")
The change looks good.
However, I think the fix line is not accurate.
I suppose it fixes af75078fece3 ("first public release").
>
> Signed-off-by: Jie Hai <haijie1@huawei.com>
> Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
> lib/ring/rte_ring_core.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
> 82b237091b71..1c809abeb531 100644
> --- a/lib/ring/rte_ring_core.h
> +++ b/lib/ring/rte_ring_core.h
> @@ -120,7 +120,7 @@ struct rte_ring_hts_headtail { struct rte_ring {
> char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
> /**< Name of the ring. */
> - int flags; /**< Flags supplied at creation. */
> + uint32_t flags; /**< Flags supplied at creation. */
> const struct rte_memzone *memzone;
> /**< Memzone, if any, containing the rte_ring */
> uint32_t size; /**< Size of ring. */
> --
> 2.33.0
^ permalink raw reply [relevance 0%]
* [PATCH v5 1/3] ring: fix unmatched type definition and usage
2023-05-09 1:29 3% ` [PATCH v5 " Jie Hai
@ 2023-05-09 1:29 3% ` Jie Hai
2023-05-09 6:23 0% ` Ruifeng Wang
2023-05-09 9:24 3% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
1 sibling, 1 reply; 200+ results
From: Jie Hai @ 2023-05-09 1:29 UTC (permalink / raw)
To: Honnappa Nagarahalli, Konstantin Ananyev, Ruifeng Wang, Gavin Hu,
Olivier Matz, Dharmik Thakkar
Cc: dev, liudongdong3
Field 'flags' of struct rte_ring is defined as int type. However,
it is used as unsigned int. To ensure consistency, change the
type of flags to unsigned int. Since these two types have the
same byte size, this change is not an ABI change.
Fixes: cc4b218790f6 ("ring: support configurable element size")
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/ring/rte_ring_core.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 82b237091b71..1c809abeb531 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {
struct rte_ring {
char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
/**< Name of the ring. */
- int flags; /**< Flags supplied at creation. */
+ uint32_t flags; /**< Flags supplied at creation. */
const struct rte_memzone *memzone;
/**< Memzone, if any, containing the rte_ring */
uint32_t size; /**< Size of ring. */
--
2.33.0
^ permalink raw reply [relevance 3%]
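[Aside on the flags type change above: the reason it is not an ABI break is
that int and uint32_t have identical size and alignment on the platforms DPDK
supports, so no field offset in struct rte_ring moves. A minimal sketch a
compiler can verify, using simplified stand-in structs rather than the real
rte_ring definition:

	/* Illustrative only, not part of the patch: show that swapping
	 * int for uint32_t keeps the layout (and hence the ABI) intact.
	 */
	#include <stdint.h>
	#include <stddef.h>
	#include <assert.h>

	struct ring_hdr_old {		/* simplified old layout */
		char name[32];
		int flags;
		uint32_t size;
	};

	struct ring_hdr_new {		/* same fields, flags now unsigned */
		char name[32];
		uint32_t flags;
		uint32_t size;
	};

	static_assert(sizeof(struct ring_hdr_old) == sizeof(struct ring_hdr_new),
		      "overall size unchanged");
	static_assert(offsetof(struct ring_hdr_old, size) ==
		      offsetof(struct ring_hdr_new, size),
		      "following fields keep their offsets");
]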
* [PATCH v5 0/3] add telemetry cmds for ring
@ 2023-05-09 1:29 3% ` Jie Hai
2023-05-09 1:29 3% ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-05-09 9:24 3% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
0 siblings, 2 replies; 200+ results
From: Jie Hai @ 2023-05-09 1:29 UTC (permalink / raw)
Cc: dev, liudongdong3
This patch set adds telemetry commands to list rings and to dump the
information of a ring by its name.
v1->v2:
1. Add space after "switch".
2. Fix wrong strlen parameter.
v2->v3:
1. Remove prefix "rte_" for static function.
2. Add Acked-by Konstantin Ananyev for PATCH 1.
3. Introduce functions that return strings instead of copying strings.
4. Check pointer to memzone of ring.
5. Remove redundant variable.
6. Hold the lock when accessing ring data.
v3->v4:
1. Update changelog according to reviews of Honnappa Nagarahalli.
2. Add Reviewed-by Honnappa Nagarahalli.
3. Correct grammar in help information.
4. Correct the spelling warning on "te" reported by checkpatch.pl.
5. Use ring_walk() to query ring info instead of rte_ring_lookup().
6. Fix the type definition of the 'flags' field of rte_ring so it matches its usage.
7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
for mask and flags.
v4->v5:
1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
2. Add ABI change explanation for commit message of patch 1/3.
Jie Hai (3):
ring: fix unmatched type definition and usage
ring: add telemetry cmd to list rings
ring: add telemetry cmd for ring info
lib/ring/meson.build | 1 +
lib/ring/rte_ring.c | 139 +++++++++++++++++++++++++++++++++++++++
lib/ring/rte_ring_core.h | 2 +-
3 files changed, 141 insertions(+), 1 deletion(-)
--
2.33.0
^ permalink raw reply [relevance 3%]
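[Aside on how such telemetry commands plug into DPDK: the sketch below shows,
under stated assumptions, how a ring-info callback could be registered with
the public telemetry API. The command name "/ring/info_example", the callback
name and the exact fields exported are illustrative assumptions, not the
contents of this patch set; rte_telemetry_register_cmd(), rte_ring_lookup()
and the rte_tel_data helpers are existing public APIs, and
rte_tel_data_add_dict_uint_hex is the helper already named in the changelog
above.

	/* Minimal sketch, not the patch itself: expose one ring's name and
	 * flags through a telemetry command taking the ring name as parameter.
	 */
	#include <errno.h>
	#include <rte_common.h>
	#include <rte_ring.h>
	#include <rte_telemetry.h>

	static int
	handle_ring_info(const char *cmd __rte_unused, const char *params,
			 struct rte_tel_data *d)
	{
		struct rte_ring *r;

		if (params == NULL || *params == '\0')
			return -EINVAL;

		r = rte_ring_lookup(params);	/* look the ring up by name */
		if (r == NULL)
			return -ENOENT;

		rte_tel_data_start_dict(d);
		rte_tel_data_add_dict_string(d, "name", r->name);
		rte_tel_data_add_dict_uint_hex(d, "flags", r->flags, 0);
		return 0;
	}

	RTE_INIT(ring_telemetry_example)
	{
		rte_telemetry_register_cmd("/ring/info_example",
					   handle_ring_info,
					   "Returns ring info. Parameters: ring name");
	}

Once registered, such a command can be exercised against the running
application's telemetry socket with usertools/dpdk-telemetry.py.]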
* Re: [PATCH v2 0/3] vhost: add device op to offload the interrupt kick
2023-04-05 12:40 3% [PATCH v2 0/3] vhost: add device op to offload the interrupt kick Eelco Chaudron
@ 2023-05-08 13:58 0% ` Eelco Chaudron
1 sibling, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-08 13:58 UTC (permalink / raw)
To: maxime.coquelin, chenbo.xia; +Cc: dev
On 5 Apr 2023, at 14:40, Eelco Chaudron wrote:
> This series adds an operation callback which gets called every time the
> library wants to call eventfd_write(). This eventfd_write() call could
> result in a system call, which could potentially block the PMD thread.
>
> The callback function can decide whether it's ok to handle the
> eventfd_write() now or have the newly introduced function,
> rte_vhost_notify_guest(), called at a later time.
>
> This can be used by 3rd party applications, like OVS, to avoid system
> calls being made as part of the PMD threads.
Wondering if anyone had a chance to look at this patchset.
Cheers,
Eelco
> v2: - Used vhost_virtqueue->index to find index for operation.
> - Aligned function name to VDUSE RFC patchset.
> - Added error and offload statistics counter.
> - Mark new API as experimental.
> - Change the virtual queue spin lock to read/write spin lock.
> - Made shared counters atomic.
> - Add versioned rte_vhost_driver_callback_register() for
> ABI compliance.
>
> Eelco Chaudron (3):
> vhost: Change vhost_virtqueue access lock to a read/write one.
> vhost: make the guest_notifications statistic counter atomic.
> vhost: add device op to offload the interrupt kick
>
>
> lib/eal/include/generic/rte_rwlock.h | 17 +++++
> lib/vhost/meson.build | 2 +
> lib/vhost/rte_vhost.h | 23 ++++++-
> lib/vhost/socket.c | 72 ++++++++++++++++++++--
> lib/vhost/version.map | 9 +++
> lib/vhost/vhost.c | 92 +++++++++++++++++++++-------
> lib/vhost/vhost.h | 70 ++++++++++++++-------
> lib/vhost/vhost_user.c | 14 ++---
> lib/vhost/virtio_net.c | 90 +++++++++++++--------------
> 9 files changed, 288 insertions(+), 101 deletions(-)
^ permalink raw reply [relevance 0%]
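[Aside to make the intended usage of the series above concrete: a hedged
application-side sketch of deferring the guest kick out of the PMD thread.
The deferral queue, the callback name and the assumption that the callback is
hooked into the vhost device ops (e.g. a guest_notify field registered via
rte_vhost_driver_callback_register()) and returns non-zero when the
application takes ownership of the kick are all illustrative; only
rte_vhost_notify_guest() is taken from the cover letter, and its exact
signature should be checked against the series.

	#include <stdint.h>
	#include <rte_ring.h>
	#include <rte_vhost.h>

	/* Assumed to be created elsewhere, e.g. with
	 * rte_ring_create("kicks", 1024, SOCKET_ID_ANY,
	 *                 RING_F_SP_ENQ | RING_F_SC_DEQ).
	 * Entries pack (vid << 16 | queue_id); assumes vid fits in the
	 * upper bits, which is fine for a sketch.
	 */
	static struct rte_ring *deferred_kicks;

	/* Called by the vhost library instead of eventfd_write(). */
	static int
	app_guest_notify(int vid, uint16_t queue_id)
	{
		uintptr_t token = ((uintptr_t)vid << 16) | queue_id;

		if (rte_ring_enqueue(deferred_kicks, (void *)token) != 0)
			return 0;	/* queue full: let the library kick directly */
		return 1;		/* handled: kick will be issued later */
	}

	/* Run from a housekeeping thread, outside the PMD fast path. */
	static void
	app_flush_deferred_kicks(void)
	{
		void *token;

		while (rte_ring_dequeue(deferred_kicks, &token) == 0) {
			int vid = (int)((uintptr_t)token >> 16);
			uint16_t qid = (uintptr_t)token & 0xffff;

			rte_vhost_notify_guest(vid, qid);
		}
	}

The point of the design is that the potentially blocking eventfd_write() then
happens in the housekeeping thread rather than in the PMD polling loop.]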
* [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
2023-05-02 14:18 5% ` Ferruh Yigit
@ 2023-05-08 13:44 1% ` jerinj
2023-05-17 15:47 0% ` Jerin Jacob
1 sibling, 1 reply; 200+ results
From: jerinj @ 2023-05-08 13:44 UTC (permalink / raw)
To: dev, Thomas Monjalon, Anatoly Burakov
Cc: david.marchand, ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
The LiquidIO product line has been substituted with CN9K/CN10K
OCTEON product line smart NICs located at drivers/net/octeon_ep/.
DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
because of the absence of updates in the driver.
Due to the above reasons, the driver is removed in DPDK 23.07.
Also remove the deprecation notice entry for this removal from
doc/guides/rel_notes/deprecation.rst and skip the removed driver
library in the ABI check in devtools/libabigail.abignore.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
v2:
- Skip driver ABI check (Ferruh)
- Addressed the review comments in
http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)
MAINTAINERS | 8 -
devtools/libabigail.abignore | 1 +
doc/guides/nics/features/liquidio.ini | 29 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/liquidio.rst | 169 --
doc/guides/rel_notes/deprecation.rst | 7 -
doc/guides/rel_notes/release_23_07.rst | 2 +
drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
drivers/net/liquidio/base/lio_mbox.c | 246 ---
drivers/net/liquidio/base/lio_mbox.h | 102 -
drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
drivers/net/liquidio/lio_ethdev.h | 179 --
drivers/net/liquidio/lio_logs.h | 58 -
drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
drivers/net/liquidio/lio_rxtx.h | 740 --------
drivers/net/liquidio/lio_struct.h | 661 -------
drivers/net/liquidio/meson.build | 16 -
drivers/net/meson.build | 1 -
21 files changed, 3 insertions(+), 7148 deletions(-)
delete mode 100644 doc/guides/nics/features/liquidio.ini
delete mode 100644 doc/guides/nics/liquidio.rst
delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
delete mode 100644 drivers/net/liquidio/lio_ethdev.c
delete mode 100644 drivers/net/liquidio/lio_ethdev.h
delete mode 100644 drivers/net/liquidio/lio_logs.h
delete mode 100644 drivers/net/liquidio/lio_rxtx.c
delete mode 100644 drivers/net/liquidio/lio_rxtx.h
delete mode 100644 drivers/net/liquidio/lio_struct.h
delete mode 100644 drivers/net/liquidio/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e5099..0157c26dd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -681,14 +681,6 @@ F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
F: doc/guides/nics/features/thunderx.ini
-Cavium LiquidIO - UNMAINTAINED
-M: Shijith Thotton <sthotton@marvell.com>
-M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/liquidio/
-F: doc/guides/nics/liquidio.rst
-F: doc/guides/nics/features/liquidio.ini
-
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..c0361bfc7b 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,6 +25,7 @@
;
; SKIP_LIBRARY=librte_common_mlx5_glue
; SKIP_LIBRARY=librte_net_mlx4_glue
+; SKIP_LIBRARY=librte_net_liquidio
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Experimental APIs exceptions ;
diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
deleted file mode 100644
index a8bde282e0..0000000000
--- a/doc/guides/nics/features/liquidio.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-;
-; Supported features of the 'LiquidIO' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Link status = Y
-Link status event = Y
-MTU update = Y
-Scattered Rx = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-VLAN filter = Y
-CRC offload = Y
-VLAN offload = P
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Basic stats = Y
-Extended stats = Y
-Multiprocess aware = Y
-Linux = Y
-x86-64 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 5c9d1edf5e..31296822e5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -44,7 +44,6 @@ Network Interface Controller Drivers
ipn3ke
ixgbe
kni
- liquidio
mana
memif
mlx4
diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
deleted file mode 100644
index f893b3b539..0000000000
--- a/doc/guides/nics/liquidio.rst
+++ /dev/null
@@ -1,169 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Cavium, Inc
-
-LiquidIO VF Poll Mode Driver
-============================
-
-The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
-Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
-done using kernel driver.
-
-More information can be found at `Cavium Official Website
-<http://cavium.com/LiquidIO_Adapters.html>`_.
-
-Supported LiquidIO Adapters
------------------------------
-
-- LiquidIO II CN2350 210SV/225SV
-- LiquidIO II CN2350 210SVPT
-- LiquidIO II CN2360 210SV/225SV
-- LiquidIO II CN2360 210SVPT
-
-
-SR-IOV: Prerequisites and Sample Application Notes
---------------------------------------------------
-
-This section provides instructions to configure SR-IOV with Linux OS.
-
-#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
-
- .. code-block:: console
-
- lspci -s <slot> -vvv
-
- Example output:
-
- .. code-block:: console
-
- [...]
- Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
- [...]
- Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
- [...]
- Kernel driver in use: LiquidIO
-
-#. Load the kernel module:
-
- .. code-block:: console
-
- modprobe liquidio
-
-#. Bring up the PF ports:
-
- .. code-block:: console
-
- ifconfig p4p1 up
- ifconfig p4p2 up
-
-#. Change PF MTU if required:
-
- .. code-block:: console
-
- ifconfig p4p1 mtu 9000
- ifconfig p4p2 mtu 9000
-
-#. Create VF device(s):
-
- Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
- of the parent PF.
-
- .. code-block:: console
-
- echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
- echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
-
-#. Assign VF MAC address:
-
- Assign MAC address to the VF using iproute2 utility. The syntax is::
-
- ip link set <PF iface> vf <VF id> mac <macaddr>
-
- Example output:
-
- .. code-block:: console
-
- ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
-
-#. Assign VF(s) to VM.
-
- The VF devices may be passed through to the guest VM using qemu or
- virt-manager or virsh etc.
-
- Example qemu guest launch command:
-
- .. code-block:: console
-
- ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
- -cpu host -m 4096 -smp 4 \
- -drive file=<disk_file>,if=none,id=disk1,format=<type> \
- -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
- -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
-
-#. Running testpmd
-
- Refer to the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
- ``testpmd`` application.
-
- .. note::
-
- Use ``igb_uio`` instead of ``vfio-pci`` in VM.
-
- Example output:
-
- .. code-block:: console
-
- [...]
- EAL: PCI device 0000:03:00.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- EAL: using IOMMU type 1 (Type 1)
- PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
- EAL: PCI device 0000:03:08.3 on NUMA socket 0
- EAL: probe driver: 177d:9712 net_liovf
- PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
- Interactive-mode selected
- USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
- Configuring Port 0 (socket 0)
- PMD: net_liovf[03:00.3]INFO: Starting port 0
- Port 0: F2:A8:1B:5E:B4:66
- Configuring Port 1 (socket 0)
- PMD: net_liovf[03:08.3]INFO: Starting port 1
- Port 1: 32:76:CC:EE:56:D7
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-#. Enabling VF promiscuous mode
-
- One VF per PF can be marked as trusted for promiscuous mode.
-
- .. code-block:: console
-
- ip link set dev <PF iface> vf <VF id> trust on
-
-
-Limitations
------------
-
-VF MTU
-~~~~~~
-
-VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
-
-VLAN offload
-~~~~~~~~~~~~
-
-Tx VLAN insertion is not supported and consequently VLAN offload feature is
-marked partial.
-
-Ring size
-~~~~~~~~~
-
-Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..8e1cdd677a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -121,13 +121,6 @@ Deprecation Notices
* net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
This decision has been made to alleviate the burden of maintaining a discontinued product.
-* net/liquidio: Remove LiquidIO ethdev driver.
- The LiquidIO product line has been substituted
- with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
- DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
- because of the absence of updates in the driver.
- Due to the above reasons, the driver will be unavailable from DPDK 23.07.
-
* cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
to have another parameter ``qp_id`` to return the queue pair ID
which got error interrupt to the application,
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..f13a7b32b6 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -68,6 +68,8 @@ Removed Items
Also, make sure to start the actual text at the margin.
=======================================================
+* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
+
API Changes
-----------
diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
deleted file mode 100644
index 9f28504b53..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_reg.h
+++ /dev/null
@@ -1,165 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_REG_H_
-#define _LIO_23XX_REG_H_
-
-/* ###################### REQUEST QUEUE ######################### */
-
-/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
-#define CN23XX_SLI_PKT_INSTR_BADDR_START64 0x10010
-
-/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
-#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
-
-/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START 0x10030
-
-/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
-#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64 0x10040
-
-/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
- * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
- */
-#define CN23XX_SLI_PKT_INPUT_CONTROL_START64 0x10000
-
-/* ------- Request Queue Macros --------- */
-
-/* Each Input Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_IQ_OFFSET 0x20000
-
-#define CN23XX_SLI_IQ_PKT_CONTROL64(iq) \
- (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_BASE_ADDR64(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_SIZE(iq) \
- (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_DOORBELL(iq) \
- (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_INSTR_COUNT64(iq) \
- (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-/* Number of instructions to be read in one MAC read request.
- * setting to Max value(4)
- */
-#define CN23XX_PKT_INPUT_CTL_RDSIZE (3 << 25)
-#define CN23XX_PKT_INPUT_CTL_IS_64B (1 << 24)
-#define CN23XX_PKT_INPUT_CTL_RST (1 << 23)
-#define CN23XX_PKT_INPUT_CTL_QUIET (1 << 28)
-#define CN23XX_PKT_INPUT_CTL_RING_ENB (1 << 22)
-#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP (1 << 6)
-#define CN23XX_PKT_INPUT_CTL_USE_CSR (1 << 4)
-#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP (2)
-
-/* These bits[47:44] select the Physical function number within the MAC */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS 45
-/* These bits[43:32] select the function number within the PF */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS 32
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR)
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK \
- (CN23XX_PKT_INPUT_CTL_RDSIZE | \
- CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP | \
- CN23XX_PKT_INPUT_CTL_USE_CSR | \
- CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
-#endif
-
-/* ############################ OUTPUT QUEUE ######################### */
-
-/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
-#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START 0x10050
-
-/* 64 registers for Output queue buffer and info size
- * SLI_PKT(0..63)_OUT_SIZE
- */
-#define CN23XX_SLI_PKT_OUT_SIZE 0x10060
-
-/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
-#define CN23XX_SLI_SLIST_BADDR_START64 0x10070
-
-/* 64 registers for Output Queue Packet Credits
- * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
- */
-#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
-
-/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START 0x10090
-
-/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
-#define CN23XX_SLI_PKT_CNTS_START 0x100B0
-
-/* Each Output Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_OQ_OFFSET 0x20000
-
-/* ------- Output Queue Macros --------- */
-
-#define CN23XX_SLI_OQ_PKT_CONTROL(oq) \
- (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BASE_ADDR64(oq) \
- (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_SIZE(oq) \
- (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq) \
- (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_SENT(oq) \
- (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_CREDIT(oq) \
- (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-/* ------------------ Masks ---------------- */
-#define CN23XX_PKT_OUTPUT_CTL_IPTR (1 << 11)
-#define CN23XX_PKT_OUTPUT_CTL_ES (1 << 9)
-#define CN23XX_PKT_OUTPUT_CTL_NSR (1 << 8)
-#define CN23XX_PKT_OUTPUT_CTL_ROR (1 << 7)
-#define CN23XX_PKT_OUTPUT_CTL_DPTR (1 << 6)
-#define CN23XX_PKT_OUTPUT_CTL_BMODE (1 << 5)
-#define CN23XX_PKT_OUTPUT_CTL_ES_P (1 << 3)
-#define CN23XX_PKT_OUTPUT_CTL_NSR_P (1 << 2)
-#define CN23XX_PKT_OUTPUT_CTL_ROR_P (1 << 1)
-#define CN23XX_PKT_OUTPUT_CTL_RING_ENB (1 << 0)
-
-/* Rings per Virtual Function [RO] */
-#define CN23XX_PKT_INPUT_CTL_RPVF_MASK 0x3F
-#define CN23XX_PKT_INPUT_CTL_RPVF_POS 48
-
-/* These bits[47:44][RO] give the Physical function
- * number info within the MAC
- */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK 0x7
-
-/* These bits[43:32][RO] give the virtual function
- * number info within the PF
- */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK 0x1FFF
-
-/* ######################### Mailbox Reg Macros ######################## */
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START 0x10200
-#define CN23XX_VF_SLI_PKT_MBOX_INT_START 0x10210
-
-#define CN23XX_SLI_MBOX_OFFSET 0x20000
-#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET 0x8
-
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx) \
- (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START + \
- ((q) * CN23XX_SLI_MBOX_OFFSET + \
- (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
-
-#define CN23XX_VF_SLI_PKT_MBOX_INT(q) \
- (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
-
-#endif /* _LIO_23XX_REG_H_ */
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
deleted file mode 100644
index c6b8310b71..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.c
+++ /dev/null
@@ -1,513 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <string.h>
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_23xx_reg.h"
-#include "lio_mbox.h"
-
-static int
-cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
-{
- uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
- uint64_t d64, q_no;
- int ret_val = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < num_queues; q_no++) {
- /* set RST bit to 1. This bit applies to both IQ and OQ */
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- d64);
- }
-
- /* wait until the RST bit is clear or the RST and QUIET bits are set */
- for (q_no = 0; q_no < num_queues; q_no++) {
- volatile uint64_t reg_val;
-
- reg_val = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
- !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
- loop) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- loop = loop - 1;
- }
-
- if (loop == 0) {
- lio_dev_err(lio_dev,
- "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
- (unsigned long)q_no);
- return -1;
- }
-
- reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
-
- reg_val = lio_read_csr64(
- lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
- lio_dev_err(lio_dev,
- "clearing the reset failed for qno: %lu\n",
- (unsigned long)q_no);
- ret_val = -1;
- }
- }
-
- return ret_val;
-}
-
-static int
-cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
-{
- uint64_t q_no;
- uint64_t d64;
-
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_reset_io_queues(lio_dev,
- lio_dev->sriov_info.rings_per_vf))
- return -1;
-
- for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
- 0xFFFFFFFF);
-
- d64 = lio_read_csr64(lio_dev,
- CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
-
- d64 &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
- d64);
-
- /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
- * the Input Queues
- */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- CN23XX_PKT_INPUT_CTL_MASK);
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
-{
- uint32_t reg_val;
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
- 0xFFFFFFFF);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
-
- reg_val &= 0xEFFFFFFFFFFFFFFFL;
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
-
- reg_val =
- lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
-
- /* set IPTR & DPTR */
- reg_val |=
- (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
-
- /* reset BMODE */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
-
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Scatter List
- * reset ROR_P, NSR_P
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
-#endif
- /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
- * for Output Queue Data
- * reset ROR, NSR
- */
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
- reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
- /* set the ES bit */
- reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
-
- /* write all the selected settings */
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
-}
-
-static int
-cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- if (cn23xx_vf_setup_global_input_regs(lio_dev))
- return -1;
-
- cn23xx_vf_setup_global_output_regs(lio_dev);
-
- return 0;
-}
-
-static void
-cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint64_t pkt_in_done = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Write the start of the input queue's ring and its size */
- lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
- iq->base_addr_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
-
- /* Remember the doorbell & instruction count register addr
- * for this queue
- */
- iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_DOORBELL(iq_no);
- iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
- lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
- iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
-
- /* Store the current instruction counter (used in flush_iq
- * calculation)
- */
- pkt_in_done = rte_read64(iq->inst_cnt_reg);
-
- /* Clear the count by writing back what we read, but don't
- * enable data traffic here
- */
- rte_write64(pkt_in_done, iq->inst_cnt_reg);
-}
-
-static void
-cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
-{
- struct lio_droq *droq = lio_dev->droq[oq_no];
-
- PMD_INIT_FUNC_TRACE();
-
- lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
- droq->desc_ring_dma);
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
-
- lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
- (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
-
- /* Get the mapped address of the pkt_sent and pkts_credit regs */
- droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_SENT(oq_no);
- droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
-}
-
-static void
-cn23xx_vf_free_mbox(struct lio_device *lio_dev)
-{
- PMD_INIT_FUNC_TRACE();
-
- rte_free(lio_dev->mbox[0]);
- lio_dev->mbox[0] = NULL;
-
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
-}
-
-static int
-cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
-{
- struct lio_mbox *mbox;
-
- PMD_INIT_FUNC_TRACE();
-
- if (lio_dev->mbox == NULL) {
- lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
- if (lio_dev->mbox == NULL)
- return -ENOMEM;
- }
-
- mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
- if (mbox == NULL) {
- rte_free(lio_dev->mbox);
- lio_dev->mbox = NULL;
- return -ENOMEM;
- }
-
- rte_spinlock_init(&mbox->lock);
-
- mbox->lio_dev = lio_dev;
-
- mbox->q_no = 0;
-
- mbox->state = LIO_MBOX_STATE_IDLE;
-
- /* VF mbox interrupt reg */
- mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_VF_SLI_PKT_MBOX_INT(0);
- /* VF reads from SIG0 reg */
- mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
- /* VF writes into SIG1 reg */
- mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
- CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
-
- lio_dev->mbox[0] = mbox;
-
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
-}
-
-static int
-cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t q_no;
-
- PMD_INIT_FUNC_TRACE();
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
- uint64_t reg_val;
-
- /* set the corresponding IQ IS_64B bit */
- if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
-
- /* set the corresponding IQ ENB bit */
- if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
- reg_val = lio_read_csr64(
- lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
- reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
- lio_write_csr64(lio_dev,
- CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
- reg_val);
- }
- }
- for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
- uint32_t reg_val;
-
- /* set the corresponding OQ ENB bit */
- if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
- reg_val = lio_read_csr(
- lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no));
- reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
- lio_write_csr(lio_dev,
- CN23XX_SLI_OQ_PKT_CONTROL(q_no),
- reg_val);
- }
- }
-
- return 0;
-}
-
-static void
-cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
-{
- uint32_t num_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* per HRM, rings can only be disabled via reset operation,
- * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
- */
- num_queues = lio_dev->num_iqs;
- if (num_queues < lio_dev->num_oqs)
- num_queues = lio_dev->num_oqs;
-
- cn23xx_vf_reset_io_queues(lio_dev, num_queues);
-}
-
-void
-cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 0;
- mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
- mbox_cmd.msg.s.len = 1;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = NULL;
- mbox_cmd.fn_arg = 0;
-
- lio_mbox_write(lio_dev, &mbox_cmd);
-}
-
-static void
-cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
- struct lio_mbox_cmd *cmd, void *arg)
-{
- uint32_t major = 0;
-
- PMD_INIT_FUNC_TRACE();
-
- rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
- if (cmd->recv_len > 1) {
- struct lio_version *lio_ver = (struct lio_version *)cmd->data;
-
- major = lio_ver->major;
- major = major << 16;
- }
-
- rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
-}
-
-int
-cn23xx_pfvf_handshake(struct lio_device *lio_dev)
-{
- struct lio_mbox_cmd mbox_cmd;
- struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
- uint32_t q_no, count = 0;
- rte_atomic64_t status;
- uint32_t pfmajor;
- uint32_t vfmajor;
- uint32_t ret;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Sending VF_ACTIVE indication to the PF driver */
- lio_dev_dbg(lio_dev, "requesting info from PF\n");
-
- mbox_cmd.msg.mbox_msg64 = 0;
- mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
- mbox_cmd.msg.s.resp_needed = 1;
- mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
- mbox_cmd.msg.s.len = 2;
- mbox_cmd.data[0] = 0;
- lio_ver->major = LIO_BASE_MAJOR_VERSION;
- lio_ver->minor = LIO_BASE_MINOR_VERSION;
- lio_ver->micro = LIO_BASE_MICRO_VERSION;
- mbox_cmd.q_no = 0;
- mbox_cmd.recv_len = 0;
- mbox_cmd.recv_status = 0;
- mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
- mbox_cmd.fn_arg = (void *)&status;
-
- if (lio_mbox_write(lio_dev, &mbox_cmd)) {
- lio_dev_err(lio_dev, "Write to mailbox failed\n");
- return -1;
- }
-
- rte_atomic64_set(&status, 0);
-
- do {
- rte_delay_ms(1);
- } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
-
- ret = rte_atomic64_read(&status);
- if (ret == 0) {
- lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
- return -1;
- }
-
- for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
- lio_dev->instr_queue[q_no]->txpciq.s.pkind =
- lio_dev->pfvf_hsword.pkind;
-
- vfmajor = LIO_BASE_MAJOR_VERSION;
- pfmajor = ret >> 16;
- if (pfmajor != vfmajor) {
- lio_dev_err(lio_dev,
- "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = -EPERM;
- } else {
- lio_dev_dbg(lio_dev,
- "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
- vfmajor, pfmajor);
- ret = 0;
- }
-
- lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
- lio_dev->pfvf_hsword.pkind);
-
- return ret;
-}
-
-void
-cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
-{
- uint64_t mbox_int_val;
-
- /* read and clear by writing 1 */
- mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
- rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
- if (lio_mbox_read(lio_dev->mbox[0]))
- lio_mbox_process_message(lio_dev->mbox[0]);
-}
-
-int
-cn23xx_vf_setup_device(struct lio_device *lio_dev)
-{
- uint64_t reg_val;
-
- PMD_INIT_FUNC_TRACE();
-
- /* INPUT_CONTROL[RPVF] gives the VF IOq count */
- reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
-
- lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
- lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
- CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
-
- reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
-
- lio_dev->sriov_info.rings_per_vf =
- reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
-
- lio_dev->default_config = lio_get_conf(lio_dev);
- if (lio_dev->default_config == NULL)
- return -1;
-
- lio_dev->fn_list.setup_iq_regs = cn23xx_vf_setup_iq_regs;
- lio_dev->fn_list.setup_oq_regs = cn23xx_vf_setup_oq_regs;
- lio_dev->fn_list.setup_mbox = cn23xx_vf_setup_mbox;
- lio_dev->fn_list.free_mbox = cn23xx_vf_free_mbox;
-
- lio_dev->fn_list.setup_device_regs = cn23xx_vf_setup_device_regs;
-
- lio_dev->fn_list.enable_io_queues = cn23xx_vf_enable_io_queues;
- lio_dev->fn_list.disable_io_queues = cn23xx_vf_disable_io_queues;
-
- return 0;
-}
-
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
deleted file mode 100644
index 8e5362db15..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_VF_H_
-#define _LIO_23XX_VF_H_
-
-#include <stdio.h>
-
-#include "lio_struct.h"
-
-static const struct lio_config default_cn23xx_conf = {
- .card_type = LIO_23XX,
- .card_name = LIO_23XX_NAME,
- /** IQ attributes */
- .iq = {
- .max_iqs = CN23XX_CFG_IO_QUEUES,
- .pending_list_size =
- (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
- .instr_type = OCTEON_64BYTE_INSTR,
- },
-
- /** OQ attributes */
- .oq = {
- .max_oqs = CN23XX_CFG_IO_QUEUES,
- .info_ptr = OCTEON_OQ_INFOPTR_MODE,
- .refill_threshold = CN23XX_OQ_REFIL_THRESHOLD,
- },
-
- .num_nic_ports = CN23XX_DEFAULT_NUM_PORTS,
- .num_def_rx_descs = CN23XX_MAX_OQ_DESCRIPTORS,
- .num_def_tx_descs = CN23XX_MAX_IQ_DESCRIPTORS,
- .def_rx_buf_size = CN23XX_OQ_BUF_SIZE,
-};
-
-static inline const struct lio_config *
-lio_get_conf(struct lio_device *lio_dev)
-{
- const struct lio_config *default_lio_conf = NULL;
-
- /* check the LIO Device model & return the corresponding lio
- * configuration
- */
- default_lio_conf = &default_cn23xx_conf;
-
- if (default_lio_conf == NULL) {
- lio_dev_err(lio_dev, "Configuration verification failed\n");
- return NULL;
- }
-
- return default_lio_conf;
-}
-
-#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT 100000
-
-void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
-
-int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
-
-int cn23xx_vf_setup_device(struct lio_device *lio_dev);
-
-void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
-#endif /* _LIO_23XX_VF_H_ */
diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
deleted file mode 100644
index 5e119c1241..0000000000
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_HW_DEFS_H_
-#define _LIO_HW_DEFS_H_
-
-#include <rte_io.h>
-
-#ifndef PCI_VENDOR_ID_CAVIUM
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#endif
-
-#define LIO_CN23XX_VF_VID 0x9712
-
-/* CN23xx subsystem device ids */
-#define PCI_SUBSYS_DEV_ID_CN2350_210 0x0004
-#define PCI_SUBSYS_DEV_ID_CN2360_210 0x0005
-#define PCI_SUBSYS_DEV_ID_CN2360_225 0x0006
-#define PCI_SUBSYS_DEV_ID_CN2350_225 0x0007
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3 0x0008
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3 0x0009
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT 0x000a
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT 0x000b
-
-/* --------------------------CONFIG VALUES------------------------ */
-
-/* CN23xx IQ configuration macros */
-#define CN23XX_MAX_RINGS_PER_PF 64
-#define CN23XX_MAX_RINGS_PER_VF 8
-
-#define CN23XX_MAX_INPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_IQ_DESCRIPTORS 512
-#define CN23XX_MIN_IQ_DESCRIPTORS 128
-
-#define CN23XX_MAX_OUTPUT_QUEUES CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_OQ_DESCRIPTORS 512
-#define CN23XX_MIN_OQ_DESCRIPTORS 128
-#define CN23XX_OQ_BUF_SIZE 1536
-
-#define CN23XX_OQ_REFIL_THRESHOLD 16
-
-#define CN23XX_DEFAULT_NUM_PORTS 1
-
-#define CN23XX_CFG_IO_QUEUES CN23XX_MAX_RINGS_PER_PF
-
-/* common OCTEON configuration macros */
-#define OCTEON_64BYTE_INSTR 64
-#define OCTEON_OQ_INFOPTR_MODE 1
-
-/* Max IOQs per LIO Link */
-#define LIO_MAX_IOQS_PER_IF 64
-
-/* Wait time in milliseconds for FLR */
-#define LIO_PCI_FLR_WAIT 100
-
-enum lio_card_type {
- LIO_23XX /* 23xx */
-};
-
-#define LIO_23XX_NAME "23xx"
-
-#define LIO_DEV_RUNNING 0xc
-
-#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg) \
- ((cfg)->default_config->oq.refill_threshold)
-#define LIO_NUM_DEF_TX_DESCS_CFG(cfg) \
- ((cfg)->default_config->num_def_tx_descs)
-
-#define LIO_IQ_INSTR_TYPE(cfg) ((cfg)->default_config->iq.instr_type)
-
-/* The following config values are fixed and should not be modified. */
-
-/* Maximum number of Instruction queues */
-#define LIO_MAX_INSTR_QUEUES(lio_dev) CN23XX_MAX_RINGS_PER_VF
-
-#define LIO_MAX_POSSIBLE_INSTR_QUEUES CN23XX_MAX_INPUT_QUEUES
-#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES CN23XX_MAX_OUTPUT_QUEUES
-
-#define LIO_DEVICE_NAME_LEN 32
-#define LIO_BASE_MAJOR_VERSION 1
-#define LIO_BASE_MINOR_VERSION 5
-#define LIO_BASE_MICRO_VERSION 1
-
-#define LIO_FW_VERSION_LENGTH 32
-
-#define LIO_Q_RECONF_MIN_VERSION "1.7.0"
-#define LIO_VF_TRUST_MIN_VERSION "1.7.1"
-
-/** Tag types used by Octeon cores in its work. */
-enum octeon_tag_type {
- OCTEON_ORDERED_TAG = 0,
- OCTEON_ATOMIC_TAG = 1,
-};
-
-/* pre-defined host->NIC tag values */
-#define LIO_CONTROL (0x11111110)
-#define LIO_DATA(i) (0x11111111 + (i))
-
-/* used for NIC operations */
-#define LIO_OPCODE 1
-
-/* Subcodes are used by host driver/apps to identify the sub-operation
- * for the core. They only need to by unique for a given subsystem.
- */
-#define LIO_OPCODE_SUBCODE(op, sub) \
- ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
-
-/** LIO_OPCODE subcodes */
-/* This subcode is sent by core PCI driver to indicate cores are ready. */
-#define LIO_OPCODE_NW_DATA 0x02 /* network packet data */
-#define LIO_OPCODE_CMD 0x03
-#define LIO_OPCODE_INFO 0x04
-#define LIO_OPCODE_PORT_STATS 0x05
-#define LIO_OPCODE_IF_CFG 0x09
-
-#define LIO_MIN_RX_BUF_SIZE 64
-#define LIO_MAX_RX_PKTLEN (64 * 1024)
-
-/* NIC Command types */
-#define LIO_CMD_CHANGE_MTU 0x1
-#define LIO_CMD_CHANGE_DEVFLAGS 0x3
-#define LIO_CMD_RX_CTL 0x4
-#define LIO_CMD_CLEAR_STATS 0x6
-#define LIO_CMD_SET_RSS 0xD
-#define LIO_CMD_TNL_RX_CSUM_CTL 0x10
-#define LIO_CMD_TNL_TX_CSUM_CTL 0x11
-#define LIO_CMD_ADD_VLAN_FILTER 0x17
-#define LIO_CMD_DEL_VLAN_FILTER 0x18
-#define LIO_CMD_VXLAN_PORT_CONFIG 0x19
-#define LIO_CMD_QUEUE_COUNT_CTL 0x1f
-
-#define LIO_CMD_VXLAN_PORT_ADD 0x0
-#define LIO_CMD_VXLAN_PORT_DEL 0x1
-#define LIO_CMD_RXCSUM_ENABLE 0x0
-#define LIO_CMD_TXCSUM_ENABLE 0x0
-
-/* RX(packets coming from wire) Checksum verification flags */
-/* TCP/UDP csum */
-#define LIO_L4_CSUM_VERIFIED 0x1
-#define LIO_IP_CSUM_VERIFIED 0x2
-
-/* RSS */
-#define LIO_RSS_PARAM_DISABLE_RSS 0x10
-#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED 0x08
-#define LIO_RSS_PARAM_ITABLE_UNCHANGED 0x04
-#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED 0x02
-
-#define LIO_RSS_HASH_IPV4 0x100
-#define LIO_RSS_HASH_TCP_IPV4 0x200
-#define LIO_RSS_HASH_IPV6 0x400
-#define LIO_RSS_HASH_TCP_IPV6 0x1000
-#define LIO_RSS_HASH_IPV6_EX 0x800
-#define LIO_RSS_HASH_TCP_IPV6_EX 0x2000
-
-#define LIO_RSS_OFFLOAD_ALL ( \
- LIO_RSS_HASH_IPV4 | \
- LIO_RSS_HASH_TCP_IPV4 | \
- LIO_RSS_HASH_IPV6 | \
- LIO_RSS_HASH_TCP_IPV6 | \
- LIO_RSS_HASH_IPV6_EX | \
- LIO_RSS_HASH_TCP_IPV6_EX)
-
-#define LIO_RSS_MAX_TABLE_SZ 128
-#define LIO_RSS_MAX_KEY_SZ 40
-#define LIO_RSS_PARAM_SIZE 16
-
-/* Interface flags communicated between host driver and core app. */
-enum lio_ifflags {
- LIO_IFFLAG_PROMISC = 0x01,
- LIO_IFFLAG_ALLMULTI = 0x02,
- LIO_IFFLAG_UNICAST = 0x10
-};
-
-/* Routines for reading and writing CSRs */
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define lio_write_csr(lio_dev, reg_off, value) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(value) _value = value; \
- PMD_REGS_LOG(_dev, \
- "Write32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)_value); \
- rte_write32(_value, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- do { \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- typeof(val64) _val64 = val64; \
- PMD_REGS_LOG( \
- _dev, \
- "Write64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)_val64); \
- rte_write64(_val64, _dev->hw_addr + _reg_off); \
- } while (0)
-
-#define lio_read_csr(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint32_t val = rte_read32(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG(_dev, \
- "Read32: Reg: 0x%08lx Val: 0x%08lx\n", \
- (unsigned long)_reg_off, \
- (unsigned long)val); \
- val; \
- })
-
-#define lio_read_csr64(lio_dev, reg_off) \
- ({ \
- typeof(lio_dev) _dev = lio_dev; \
- typeof(reg_off) _reg_off = reg_off; \
- uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off); \
- PMD_REGS_LOG( \
- _dev, \
- "Read64: Reg: 0x%08lx Val: 0x%016llx\n", \
- (unsigned long)_reg_off, \
- (unsigned long long)val64); \
- val64; \
- })
-#else
-#define lio_write_csr(lio_dev, reg_off, value) \
- rte_write32(value, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_write_csr64(lio_dev, reg_off, val64) \
- rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr(lio_dev, reg_off) \
- rte_read32((lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr64(lio_dev, reg_off) \
- rte_read64((lio_dev)->hw_addr + (reg_off))
-#endif
-#endif /* _LIO_HW_DEFS_H_ */
diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
deleted file mode 100644
index 2ac2b1b334..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.c
+++ /dev/null
@@ -1,246 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_mbox.h"
-
-/**
- * lio_mbox_read:
- * @mbox: Pointer mailbox
- *
- * Reads the 8-bytes of data from the mbox register
- * Writes back the acknowledgment indicating completion of read
- */
-int
-lio_mbox_read(struct lio_mbox *mbox)
-{
- union lio_mbox_message msg;
- int ret = 0;
-
- msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
-
- if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
- return 0;
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_req.recv_len++;
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
- msg.mbox_msg64;
- mbox->mbox_resp.recv_len++;
- } else {
- if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
- (msg.s.type == LIO_MBOX_REQUEST)) {
- mbox->state &= ~LIO_MBOX_STATE_IDLE;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
- mbox->mbox_req.q_no = mbox->q_no;
- mbox->mbox_req.recv_len = 1;
- } else {
- if ((mbox->state &
- LIO_MBOX_STATE_RES_PENDING) &&
- (msg.s.type == LIO_MBOX_RESPONSE)) {
- mbox->state &=
- ~LIO_MBOX_STATE_RES_PENDING;
- mbox->state |=
- LIO_MBOX_STATE_RES_RECEIVING;
- mbox->mbox_resp.msg.mbox_msg64 =
- msg.mbox_msg64;
- mbox->mbox_resp.q_no = mbox->q_no;
- mbox->mbox_resp.recv_len = 1;
- } else {
- rte_write64(LIO_PFVFERR,
- mbox->mbox_read_reg);
- mbox->state |= LIO_MBOX_STATE_ERROR;
- return -1;
- }
- }
- }
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
- if (mbox->mbox_req.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
- ret = 1;
- }
- } else {
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
- if (mbox->mbox_resp.recv_len < msg.s.len) {
- ret = 0;
- } else {
- mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
- mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
- ret = 1;
- }
- } else {
- RTE_ASSERT(0);
- }
- }
-
- rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
-
- return ret;
-}
-
-/**
- * lio_mbox_write:
- * @lio_dev: Pointer lio device
- * @mbox_cmd: Cmd to send to mailbox.
- *
- * Populates the queue specific mbox structure
- * with cmd information.
- * Write the cmd to mbox register
- */
-int
-lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
- uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
- !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
- return LIO_MBOX_STATUS_FAILED;
-
- if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
- !(mbox->state & LIO_MBOX_STATE_IDLE))
- return LIO_MBOX_STATUS_BUSY;
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
- rte_memcpy(&mbox->mbox_resp, mbox_cmd,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_RES_PENDING;
- }
-
- count = 0;
-
- while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
-
- if (ret == LIO_MBOX_STATUS_SUCCESS) {
- rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
- for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
- count = 0;
- while (rte_read64(mbox->mbox_write_reg) !=
- LIO_PFVFACK) {
- rte_delay_ms(1);
- if (count++ == 1000) {
- ret = LIO_MBOX_STATUS_FAILED;
- break;
- }
- }
- rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
- }
- }
-
- if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- } else {
- if ((!mbox_cmd->msg.s.resp_needed) ||
- (ret == LIO_MBOX_STATUS_FAILED)) {
- mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
- if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
- LIO_MBOX_STATE_REQ_RECEIVED)))
- mbox->state = LIO_MBOX_STATE_IDLE;
- }
- }
-
- return ret;
-}
-
-/**
- * lio_mbox_process_cmd:
- * @mbox: Pointer mailbox
- * @mbox_cmd: Pointer to command received
- *
- * Process the cmd received in mbox
- */
-static int
-lio_mbox_process_cmd(struct lio_mbox *mbox,
- struct lio_mbox_cmd *mbox_cmd)
-{
- struct lio_device *lio_dev = mbox->lio_dev;
-
- if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
- lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
-
- return 0;
-}
-
-/**
- * Process the received mbox message.
- */
-int
-lio_mbox_process_message(struct lio_mbox *mbox)
-{
- struct lio_mbox_cmd mbox_cmd;
-
- if (mbox->state & LIO_MBOX_STATE_ERROR) {
- if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
- LIO_MBOX_STATE_RES_RECEIVING)) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 1;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
- mbox_cmd.fn_arg);
-
- return 0;
- }
-
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
- sizeof(struct lio_mbox_cmd));
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- mbox_cmd.recv_status = 0;
- if (mbox_cmd.fn)
- mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
-
- return 0;
- }
-
- if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
- rte_memcpy(&mbox_cmd, &mbox->mbox_req,
- sizeof(struct lio_mbox_cmd));
- if (!mbox_cmd.msg.s.resp_needed) {
- mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
- if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
- mbox->state = LIO_MBOX_STATE_IDLE;
- rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
- }
-
- lio_mbox_process_cmd(mbox, &mbox_cmd);
-
- return 0;
- }
-
- RTE_ASSERT(0);
-
- return 0;
-}
diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
deleted file mode 100644
index 457917e91f..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.h
+++ /dev/null
@@ -1,102 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_MBOX_H_
-#define _LIO_MBOX_H_
-
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-
-/* Macros for Mail Box Communication */
-
-#define LIO_MBOX_DATA_MAX 32
-
-#define LIO_VF_ACTIVE 0x1
-#define LIO_VF_FLR_REQUEST 0x2
-#define LIO_CORES_CRASHED 0x3
-
-/* Macro for Read acknowledgment */
-#define LIO_PFVFACK 0xffffffffffffffff
-#define LIO_PFVFSIG 0x1122334455667788
-#define LIO_PFVFERR 0xDEADDEADDEADDEAD
-
-enum lio_mbox_cmd_status {
- LIO_MBOX_STATUS_SUCCESS = 0,
- LIO_MBOX_STATUS_FAILED = 1,
- LIO_MBOX_STATUS_BUSY = 2
-};
-
-enum lio_mbox_message_type {
- LIO_MBOX_REQUEST = 0,
- LIO_MBOX_RESPONSE = 1
-};
-
-union lio_mbox_message {
- uint64_t mbox_msg64;
- struct {
- uint16_t type : 1;
- uint16_t resp_needed : 1;
- uint16_t cmd : 6;
- uint16_t len : 8;
- uint8_t params[6];
- } s;
-};
-
-typedef void (*lio_mbox_callback)(void *, void *, void *);
-
-struct lio_mbox_cmd {
- union lio_mbox_message msg;
- uint64_t data[LIO_MBOX_DATA_MAX];
- uint32_t q_no;
- uint32_t recv_len;
- uint32_t recv_status;
- lio_mbox_callback fn;
- void *fn_arg;
-};
-
-enum lio_mbox_state {
- LIO_MBOX_STATE_IDLE = 1,
- LIO_MBOX_STATE_REQ_RECEIVING = 2,
- LIO_MBOX_STATE_REQ_RECEIVED = 4,
- LIO_MBOX_STATE_RES_PENDING = 8,
- LIO_MBOX_STATE_RES_RECEIVING = 16,
- LIO_MBOX_STATE_RES_RECEIVED = 16,
- LIO_MBOX_STATE_ERROR = 32
-};
-
-struct lio_mbox {
- /* A spinlock to protect access to this q_mbox. */
- rte_spinlock_t lock;
-
- struct lio_device *lio_dev;
-
- uint32_t q_no;
-
- enum lio_mbox_state state;
-
- /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
- void *mbox_int_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
- */
- void *mbox_write_reg;
-
- /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
- * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
- */
- void *mbox_read_reg;
-
- struct lio_mbox_cmd mbox_req;
-
- struct lio_mbox_cmd mbox_resp;
-
-};
-
-int lio_mbox_read(struct lio_mbox *mbox);
-int lio_mbox_write(struct lio_device *lio_dev,
- struct lio_mbox_cmd *mbox_cmd);
-int lio_mbox_process_message(struct lio_mbox *mbox);
-#endif /* _LIO_MBOX_H_ */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
deleted file mode 100644
index ebcfbb1a5c..0000000000
--- a/drivers/net/liquidio/lio_ethdev.c
+++ /dev/null
@@ -1,2147 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-#include <rte_alarm.h>
-#include <rte_ether.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-/* Default RSS key in use */
-static uint8_t lio_rss_key[40] = {
- 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
- 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
- 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
- 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
- 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
-};
-
-static const struct rte_eth_desc_lim lio_rx_desc_lim = {
- .nb_max = CN23XX_MAX_OQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_OQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-static const struct rte_eth_desc_lim lio_tx_desc_lim = {
- .nb_max = CN23XX_MAX_IQ_DESCRIPTORS,
- .nb_min = CN23XX_MIN_IQ_DESCRIPTORS,
- .nb_align = 1,
-};
-
-/* Wait for control command to reach nic. */
-static uint16_t
-lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
- struct lio_dev_ctrl_cmd *ctrl_cmd)
-{
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-
- while ((ctrl_cmd->cond == 0) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
- rte_delay_ms(1);
- }
-
- return !timeout;
-}
-
-/**
- * \brief Send Rx control command
- * @param eth_dev Pointer to the structure rte_eth_dev
- * @param start_stop whether to start or stop
- */
-static int
-lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * incase the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
- ctrl_pkt.ncmd.s.param1 = start_stop;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send RX Control message\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "RX Control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/* store statistics names and its offset in stats structure */
-struct rte_lio_xstats_name_off {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- unsigned int offset;
-};
-
-static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
- {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
- {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
- {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
- {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
- {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
- {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
- {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
- {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
- {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
- {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
- {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
- {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
- {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
- {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_broadcast_pkts",
- (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_multicast_pkts",
- (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
- total_collisions)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
- sizeof(struct octeon_rx_stats)},
- {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
- sizeof(struct octeon_rx_stats)},
-};
-
-#define LIO_NB_XSTATS RTE_DIM(rte_lio_stats_strings)
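
The name/offset table above relies on the firmware returning RX and TX counters back to back: struct octeon_link_stats (see lio_ethdev.h further down) starts with the RX block, immediately followed by the TX block, which is why every TX entry adds sizeof(struct octeon_rx_stats) to its offset. A hedged sketch of how one counter is pulled out of that blob, mirroring lio_dev_xstats_get() below:

static uint64_t
read_lio_xstat(const struct octeon_link_stats *hw_stats, unsigned int idx)
{
	/* byte offset into the combined RX+TX counter block */
	const char *base = (const char *)hw_stats;

	return *(const uint64_t *)(base + rte_lio_stats_strings[idx].offset);
}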
-
-/* Get hw stats of the port */
-static int
-lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct octeon_link_stats *hw_stats;
- struct lio_link_stats_resp *resp;
- struct lio_soft_command *sc;
- uint32_t resp_size;
- unsigned int i;
- int retval;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (n < LIO_NB_XSTATS)
- return LIO_NB_XSTATS;
-
- resp_size = sizeof(struct lio_link_stats_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_link_stats_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_PORT_STATS, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
- retval);
- goto get_stats_fail;
- }
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
- goto get_stats_fail;
- }
-
- lio_swap_8B_data((uint64_t *)(&resp->link_stats),
- sizeof(struct octeon_link_stats) >> 3);
-
- hw_stats = &resp->link_stats;
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- xstats[i].id = i;
- xstats[i].value =
- *(uint64_t *)(((char *)hw_stats) +
- rte_lio_stats_strings[i].offset);
- }
-
- lio_free_soft_command(sc);
-
- return LIO_NB_XSTATS;
-
-get_stats_fail:
- lio_free_soft_command(sc);
-
- return -1;
-}
-
-static int
-lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned limit __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- unsigned int i;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (xstats_names == NULL)
- return LIO_NB_XSTATS;
-
- /* Note: limit checked in rte_eth_xstats_get_names() */
-
- for (i = 0; i < LIO_NB_XSTATS; i++) {
- snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
- "%s", rte_lio_stats_strings[i].name);
- }
-
- return LIO_NB_XSTATS;
-}
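
Both xstats callbacks follow the usual ethdev contract: when the supplied array is too small (or xstats_names is NULL), they return the required count without filling anything. Note that rte_eth_xstats_get() additionally prepends the generic ethdev counters, so applications size the array from the returned count rather than from LIO_NB_XSTATS. A minimal usage sketch, assuming port_id is a started LIO port:

int n = rte_eth_xstats_get(port_id, NULL, 0);	/* required array size */

if (n > 0) {
	struct rte_eth_xstat *xs = calloc(n, sizeof(*xs));

	if (xs != NULL && rte_eth_xstats_get(port_id, xs, n) == n) {
		/* xs[] now holds the counters, including the LIO-specific
		 * ones listed in rte_lio_stats_strings[] above.
		 */
	}
	free(xs);
}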
-
-/* Reset hw stats for the port */
-static int
-lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int ret;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Failed to send clear stats command\n");
- return ret;
- }
-
- ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
- if (ret != 0) {
- lio_dev_err(lio_dev, "Clear stats command timed out\n");
- return ret;
- }
-
- /* clear stored per queue stats */
- if (*eth_dev->dev_ops->stats_reset == NULL)
- return 0;
- return (*eth_dev->dev_ops->stats_reset)(eth_dev);
-}
-
-/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc.) */
-static int
-lio_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
- uint64_t bytes = 0;
- uint64_t pkts = 0;
- uint64_t drop = 0;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- pkts += iq_stats->tx_done;
- drop += iq_stats->tx_dropped;
- bytes += iq_stats->tx_tot_bytes;
- }
- }
-
- stats->opackets = pkts;
- stats->obytes = bytes;
- stats->oerrors = drop;
-
- pkts = 0;
- drop = 0;
- bytes = 0;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- pkts += oq_stats->rx_pkts_received;
- drop += (oq_stats->rx_dropped +
- oq_stats->dropped_toomany +
- oq_stats->dropped_nomem);
- bytes += oq_stats->rx_bytes_received;
- }
- }
- stats->ibytes = bytes;
- stats->ipackets = pkts;
- stats->ierrors = drop;
-
- return 0;
-}
-
-static int
-lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_droq_stats *oq_stats;
- struct lio_iq_stats *iq_stats;
- struct lio_instr_queue *txq;
- struct lio_droq *droq;
- int i, iq_no, oq_no;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- iq_no = lio_dev->linfo.txpciq[i].s.q_no;
- txq = lio_dev->instr_queue[iq_no];
- if (txq != NULL) {
- iq_stats = &txq->stats;
- memset(iq_stats, 0, sizeof(struct lio_iq_stats));
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
- droq = lio_dev->droq[oq_no];
- if (droq != NULL) {
- oq_stats = &droq->stats;
- memset(oq_stats, 0, sizeof(struct lio_droq_stats));
- }
- }
-
- return 0;
-}
-
-static int
-lio_dev_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *devinfo)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- switch (pci_dev->id.subsystem_device_id) {
- /* CN23xx 10G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_210:
- case PCI_SUBSYS_DEV_ID_CN2360_210:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
- case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
- case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- break;
- /* CN23xx 25G cards */
- case PCI_SUBSYS_DEV_ID_CN2350_225:
- case PCI_SUBSYS_DEV_ID_CN2360_225:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
- break;
- default:
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
- lio_dev_err(lio_dev,
- "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
- return -EINVAL;
- }
-
- devinfo->max_rx_queues = lio_dev->max_rx_queues;
- devinfo->max_tx_queues = lio_dev->max_tx_queues;
-
- devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
- devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
-
- devinfo->max_mac_addrs = 1;
-
- devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_RSS_HASH);
- devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
-
- devinfo->rx_desc_lim = lio_rx_desc_lim;
- devinfo->tx_desc_lim = lio_tx_desc_lim;
-
- devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
- devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4 |
- RTE_ETH_RSS_NONFRAG_IPV4_TCP |
- RTE_ETH_RSS_IPV6 |
- RTE_ETH_RSS_NONFRAG_IPV6_TCP |
- RTE_ETH_RSS_IPV6_EX |
- RTE_ETH_RSS_IPV6_TCP_EX);
- return 0;
-}
-
-static int
-lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- PMD_INIT_FUNC_TRACE();
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
- ctrl_pkt.ncmd.s.param1 = mtu;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to change MTU timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
- int i, j, index;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
- "The size of the configured hash lookup table (%d) doesn't match the size the hardware supports (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
- rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
- rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
-
- for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
- index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
- rss_state->itable[index] = reta_conf[i].reta[j];
- }
- }
- }
-
- rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
- memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- int i, num;
-
- if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
- lio_dev_err(lio_dev,
- "The size of the configured hash lookup table (%d) doesn't match the size the hardware supports (%d)\n",
- reta_size, LIO_RSS_MAX_TABLE_SZ);
- return -EINVAL;
- }
-
- num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
-
- for (i = 0; i < num; i++) {
- memcpy(reta_conf->reta,
- &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
- RTE_ETH_RETA_GROUP_SIZE);
- reta_conf++;
- }
-
- return 0;
-}
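
The indirection table is exchanged with applications in 64-entry groups (RTE_ETH_RETA_GROUP_SIZE): flat entry i lives in group i / 64 at position i % 64, and a bit in that group's mask marks the entry as valid, which is exactly how reta_update above walks the groups. A small illustrative helper, not part of the driver, that encodes one entry:

static void
reta_set_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
	       uint16_t entry, uint16_t queue)
{
	uint16_t grp = entry / RTE_ETH_RETA_GROUP_SIZE;
	uint16_t pos = entry % RTE_ETH_RETA_GROUP_SIZE;

	reta_conf[grp].reta[pos] = queue;		/* target RX queue */
	reta_conf[grp].mask |= (uint64_t)1 << pos;	/* mark entry valid */
}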
-
-static int
-lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- uint8_t *hash_key = NULL;
- uint64_t rss_hf = 0;
-
- if (rss_state->hash_disable) {
- lio_dev_info(lio_dev, "RSS disabled in nic\n");
- rss_conf->rss_hf = 0;
- return 0;
- }
-
- /* Get key value */
- hash_key = rss_conf->rss_key;
- if (hash_key != NULL)
- memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
-
- if (rss_state->ip)
- rss_hf |= RTE_ETH_RSS_IPV4;
- if (rss_state->tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
- if (rss_state->ipv6)
- rss_hf |= RTE_ETH_RSS_IPV6;
- if (rss_state->ipv6_tcp_hash)
- rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
- if (rss_state->ipv6_ex)
- rss_hf |= RTE_ETH_RSS_IPV6_EX;
- if (rss_state->ipv6_tcp_ex_hash)
- rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
-
- rss_conf->rss_hf = rss_hf;
-
- return 0;
-}
-
-static int
-lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct lio_rss_set *rss_param;
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
- lio_dev->port_id);
- return -EINVAL;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
- ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- rss_param->param.flags = 0xF;
-
- if (rss_conf->rss_key) {
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
- rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
- rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
- memcpy(rss_state->hash_key, rss_conf->rss_key,
- rss_state->hash_key_size);
- memcpy(rss_param->key, rss_state->hash_key,
- rss_state->hash_key_size);
- }
-
- if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- /* Can't disable rss through hash flags,
- * if it is enabled by default during init
- */
- if (!rss_state->hash_disable)
- return -EINVAL;
-
- /* This is for --disable-rss during testpmd launch */
- rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
- } else {
- uint32_t hashinfo = 0;
-
- /* Can't enable rss if disabled by default during init */
- if (rss_state->hash_disable)
- return -EINVAL;
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
- hashinfo |= LIO_RSS_HASH_IPV4;
- rss_state->ip = 1;
- } else {
- rss_state->ip = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV4;
- rss_state->tcp_hash = 1;
- } else {
- rss_state->tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
- hashinfo |= LIO_RSS_HASH_IPV6;
- rss_state->ipv6 = 1;
- } else {
- rss_state->ipv6 = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6;
- rss_state->ipv6_tcp_hash = 1;
- } else {
- rss_state->ipv6_tcp_hash = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
- hashinfo |= LIO_RSS_HASH_IPV6_EX;
- rss_state->ipv6_ex = 1;
- } else {
- rss_state->ipv6_ex = 0;
- }
-
- if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
- hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
- rss_state->ipv6_tcp_ex_hash = 1;
- } else {
- rss_state->ipv6_tcp_ex_hash = 0;
- }
-
- rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
- rss_param->param.hashinfo = hashinfo;
- }
-
- lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to set rss hash\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Set rss hash timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Add vxlan dest udp port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Remove vxlan dest udp port for an interface.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param udp_tnl
- * udp tunnel conf
- *
- * @return
- * On success return 0
- * On failure return -1
- */
-static int
-lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
- struct rte_eth_udp_tunnel *udp_tnl)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (udp_tnl == NULL)
- return -EINVAL;
-
- if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
- lio_dev_err(lio_dev, "Unsupported tunnel type\n");
- return -1;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
- ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
- ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (lio_dev->linfo.vlan_is_admin_assigned)
- return -EPERM;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = on ?
- LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
- ctrl_pkt.ncmd.s.param1 = vlan_id;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
- on ? "add" : "remove");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
- on ? "add" : "remove");
- return -1;
- }
-
- return 0;
-}
-
-static uint64_t
-lio_hweight64(uint64_t w)
-{
- uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
-
- res =
- (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
- res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
- res = res + (res >> 8);
- res = res + (res >> 16);
-
- return (res + (res >> 32)) & 0x00000000000000FFul;
-}
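
lio_hweight64() is a standard SWAR population count; lio_dev_configure() below uses it to turn the firmware's iqmask/oqmask into a number of usable queues. For example:

/* A mask with bits 0-3 set reports four enabled queues; the result matches
 * __builtin_popcountll(0x0f).
 */
uint64_t num_queues = lio_hweight64(0x0f);	/* num_queues == 4 */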
-
-static int
-lio_dev_link_update(struct rte_eth_dev *eth_dev,
- int wait_to_complete __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_eth_link link;
-
- /* Initialize */
- memset(&link, 0, sizeof(link));
- link.link_status = RTE_ETH_LINK_DOWN;
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- link.link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- /* Return what we found */
- if (lio_dev->linfo.link.s.link_up == 0) {
- /* Interface is down */
- return rte_eth_linkstatus_set(eth_dev, &link);
- }
-
- link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
- link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- switch (lio_dev->linfo.link.s.speed) {
- case LIO_LINK_SPEED_10000:
- link.link_speed = RTE_ETH_SPEED_NUM_10G;
- break;
- case LIO_LINK_SPEED_25000:
- link.link_speed = RTE_ETH_SPEED_NUM_25G;
- break;
- default:
- link.link_speed = RTE_ETH_SPEED_NUM_NONE;
- link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
- }
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-/**
- * \brief Net device enable, disable allmulticast
- * @param eth_dev Pointer to the structure rte_eth_dev
- *
- * @return
- * On success return 0
- * On failure return negative errno
- */
-static int
-lio_change_dev_flag(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- /* Create a ctrl pkt command to be sent to core app. */
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
- ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send change flag message\n");
- return -EAGAIN;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Change dev flag command timed out\n");
- return -ETIMEDOUT;
- }
-
- return 0;
-}
-
-static int
-lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_VF_TRUST_MIN_VERSION);
- return -EAGAIN;
- }
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
- lio_dev->port_id);
- return -EAGAIN;
- }
-
- lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
- return lio_change_dev_flag(eth_dev);
-}
-
-static void
-lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_reta_entry64 reta_conf[8];
- struct rte_eth_rss_conf rss_conf;
- uint16_t i;
-
- /* Configure the RSS key and the RSS protocols used to compute
- * the RSS hash of input packets.
- */
- rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
- if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- return;
- }
-
- if (rss_conf.rss_key == NULL)
- rss_conf.rss_key = lio_rss_key; /* Default hash key */
-
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
-
- memset(reta_conf, 0, sizeof(reta_conf));
- for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
- uint8_t q_idx, conf_idx, reta_idx;
-
- q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
- i % eth_dev->data->nb_rx_queues : 0);
- conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
- reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
- reta_conf[conf_idx].reta[reta_idx] = q_idx;
- reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
- }
-
- lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
-}
-
-static void
-lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
- struct rte_eth_rss_conf rss_conf;
-
- switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
- case RTE_ETH_MQ_RX_RSS:
- lio_dev_rss_configure(eth_dev);
- break;
- case RTE_ETH_MQ_RX_NONE:
- /* if mq_mode is none, disable rss mode. */
- default:
- memset(&rss_conf, 0, sizeof(rss_conf));
- rss_state->hash_disable = 1;
- lio_dev_rss_hash_update(eth_dev, &rss_conf);
- }
-}
-
-/**
- * Setup our receive queue/ringbuffer. This is the
- * queue the Octeon uses to send us packets and
- * responses. We are given a memory pool for our
- * packet buffers that are used to populate the receive
- * queue.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- * @param q_no
- * Queue number
- * @param num_rx_descs
- * Number of entries in the queue
- * @param socket_id
- * Where to allocate memory
- * @param rx_conf
- * Pointer to the structure rte_eth_rxconf
- * @param mp
- * Pointer to the packet pool
- *
- * @return
- * - On success, return 0
- * - On failure, return -1
- */
-static int
-lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_rx_descs, unsigned int socket_id,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mp)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct rte_pktmbuf_pool_private *mbp_priv;
- uint32_t fw_mapped_oq;
- uint16_t buf_size;
-
- if (q_no >= lio_dev->nb_rx_queues) {
- lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
-
- fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
-
- /* Free previous allocation if any */
- if (eth_dev->data->rx_queues[q_no] != NULL) {
- lio_dev_rx_queue_release(eth_dev, q_no);
- eth_dev->data->rx_queues[q_no] = NULL;
- }
-
- mbp_priv = rte_mempool_get_priv(mp);
- buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
- socket_id)) {
- lio_dev_err(lio_dev, "droq allocation failed\n");
- return -1;
- }
-
- eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
-
- return 0;
-}
-
-/**
- * Release the receive queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Receive queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_droq *droq = dev->data->rx_queues[q_no];
- int oq_no;
-
- if (droq) {
- oq_no = droq->q_no;
- lio_delete_droq_queue(droq->lio_dev, oq_no);
- }
-}
-
-/**
- * Allocate and initialize SW ring. Initialize associated HW registers.
- *
- * @param eth_dev
- * Pointer to structure rte_eth_dev
- *
- * @param q_no
- * Queue number
- *
- * @param num_tx_descs
- * Number of ringbuffer descriptors
- *
- * @param socket_id
- * NUMA socket id, used for memory allocations
- *
- * @param tx_conf
- * Pointer to the structure rte_eth_txconf
- *
- * @return
- * - On success, return 0
- * - On failure, return -errno value
- */
-static int
-lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
- uint16_t num_tx_descs, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
- int retval;
-
- if (q_no >= lio_dev->nb_tx_queues) {
- lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
- return -EINVAL;
- }
-
- lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
-
- /* Free previous allocation if any */
- if (eth_dev->data->tx_queues[q_no] != NULL) {
- lio_dev_tx_queue_release(eth_dev, q_no);
- eth_dev->data->tx_queues[q_no] = NULL;
- }
-
- retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
- num_tx_descs, lio_dev, socket_id);
-
- if (retval) {
- lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
- return retval;
- }
-
- retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
- lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
- socket_id);
-
- if (retval) {
- lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
- return retval;
- }
-
- eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
-
- return 0;
-}
-
-/**
- * Release the transmit queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- * Pointer to Ethernet device structure.
- * @param q_no
- * Transmit queue index.
- *
- * @return
- * - nothing
- */
-void
-lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
- struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
- uint32_t fw_mapped_iq_no;
-
- if (tq) {
- /* Free sg_list */
- lio_delete_sglist(tq);
-
- fw_mapped_iq_no = tq->txpciq.s.q_no;
- lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
- }
-}
-
-/**
- * API to check the link state.
- */
-static void
-lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- struct lio_link_status_resp *resp;
- union octeon_link_status *ls;
- struct lio_soft_command *sc;
- uint32_t resp_size;
-
- if (!lio_dev->intf_open)
- return;
-
- resp_size = sizeof(struct lio_link_status_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return;
-
- resp = (struct lio_link_status_resp *)sc->virtrptr;
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_INFO, 0, 0, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
- goto get_status_fail;
-
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- rte_delay_ms(1);
- }
-
- if (resp->status)
- goto get_status_fail;
-
- ls = &resp->link_info.link;
-
- lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
-
- if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
- if (ls->s.mtu < eth_dev->data->mtu) {
- lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
- ls->s.mtu);
- eth_dev->data->mtu = ls->s.mtu;
- }
- lio_dev->linfo.link.link_status64 = ls->link_status64;
- lio_dev_link_update(eth_dev, 0);
- }
-
- lio_free_soft_command(sc);
-
- return;
-
-get_status_fail:
- lio_free_soft_command(sc);
-}
-
-/* This function will be invoked every LIO_LSC_TIMEOUT microseconds (100 ms)
- * and will update link state if it changes.
- */
-static void
-lio_sync_link_state_check(void *eth_dev)
-{
- struct lio_device *lio_dev =
- (((struct rte_eth_dev *)eth_dev)->data->dev_private);
-
- if (lio_dev->port_configured)
- lio_dev_get_link_status(eth_dev);
-
- /* Schedule periodic link status check.
- * Stop the check when the interface is closed and start it again on open.
- */
- if (lio_dev->intf_open)
- rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
- eth_dev);
-}
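
Link-state polling uses a self-rearming EAL alarm: the callback does its work and, as long as the interface stays open, schedules itself again LIO_LSC_TIMEOUT microseconds later. A minimal sketch of the pattern, with illustrative names:

static void
periodic_cb(void *arg)
{
	/* ... perform the periodic check on "arg" ... */

	/* Re-arm; the chain stops as soon as this call is skipped or the
	 * pending alarm is cancelled with rte_eal_alarm_cancel().
	 */
	rte_eal_alarm_set(LIO_LSC_TIMEOUT, periodic_cb, arg);
}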
-
-static int
-lio_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int ret = 0;
-
- lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- return -1;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1))
- return -1;
-
- /* Ready for link status updates */
- lio_dev->intf_open = 1;
- rte_mb();
-
- /* Configure RSS if device configured with multiple RX queues. */
- lio_dev_mq_rx_configure(eth_dev);
-
- /* Before update the link info,
- * must set linfo.link.link_status64 to 0.
- */
- lio_dev->linfo.link.link_status64 = 0;
-
- /* start polling for lsc */
- ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
- lio_sync_link_state_check,
- eth_dev);
- if (ret) {
- lio_dev_err(lio_dev,
- "link state check handler creation failed\n");
- goto dev_lsc_handle_error;
- }
-
- while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
- rte_delay_ms(1);
-
- if (lio_dev->linfo.link.link_status64 == 0) {
- ret = -1;
- goto dev_mtu_set_error;
- }
-
- ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
- if (ret != 0)
- goto dev_mtu_set_error;
-
- return 0;
-
-dev_mtu_set_error:
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
-dev_lsc_handle_error:
- lio_dev->intf_open = 0;
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- return ret;
-}
-
-/* Stop device and disable input/output functions */
-static int
-lio_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
- eth_dev->data->dev_started = 0;
- lio_dev->intf_open = 0;
- rte_mb();
-
- /* Cancel callback if still running. */
- rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
- lio_send_rx_ctrl_cmd(eth_dev, 0);
-
- lio_wait_for_instr_fetch(lio_dev);
-
- /* Clear recorded link status */
- lio_dev->linfo.link.link_status64 = 0;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
- return 0;
- }
-
- if (lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already UP\n");
- return 0;
- }
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
- lio_dev_err(lio_dev, "Unable to set Link UP\n");
- return -1;
- }
-
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
- return 0;
-}
-
-static int
-lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- if (!lio_dev->intf_open) {
- lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
- return 0;
- }
-
- if (!lio_dev->linfo.link.s.link_up) {
- lio_dev_info(lio_dev, "Link is already DOWN\n");
- return 0;
- }
-
- lio_dev->linfo.link.s.link_up = 0;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
-
- if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
- lio_dev->linfo.link.s.link_up = 1;
- eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
- lio_dev_err(lio_dev, "Unable to set Link Down\n");
- return -1;
- }
-
- return 0;
-}
-
-/**
- * Reset and stop the device. This occurs on the first
- * call to this routine. Subsequent calls will simply
- * return. NB: This will require the NIC to be rebooted.
- *
- * @param eth_dev
- * Pointer to the structure rte_eth_dev
- *
- * @return
- * - nothing
- */
-static int
-lio_dev_close(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret = 0;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
-
- if (lio_dev->intf_open)
- ret = lio_dev_stop(eth_dev);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- rte_delay_ms(LIO_PCI_FLR_WAIT);
- }
-
- /* lio_free_mbox */
- lio_dev->fn_list.free_mbox(lio_dev);
-
- /* Free glist resources */
- rte_free(lio_dev->glist_head);
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_head = NULL;
- lio_dev->glist_lock = NULL;
-
- lio_dev->port_configured = 0;
-
- /* Delete all queues */
- lio_dev_clear_queues(eth_dev);
-
- return ret;
-}
-
-/**
- * Enable tunnel rx checksum verification from firmware.
- */
-static void
-lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
-}
-
-/**
- * Enable checksum calculation for inner packet in a tunnel.
- */
-static void
-lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
- ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
- return;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
- lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
-}
-
-static int
-lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
- int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- struct lio_dev_ctrl_cmd ctrl_cmd;
- struct lio_ctrl_pkt ctrl_pkt;
-
- if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
- lio_dev_err(lio_dev, "Require firmware version >= %s\n",
- LIO_Q_RECONF_MIN_VERSION);
- return -ENOTSUP;
- }
-
- /* flush added to prevent cmd failure
- * in case the queue is full
- */
- lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
- memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
- memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
- ctrl_cmd.eth_dev = eth_dev;
- ctrl_cmd.cond = 0;
-
- ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
- ctrl_pkt.ncmd.s.param1 = num_txq;
- ctrl_pkt.ncmd.s.param2 = num_rxq;
- ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
- if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
- lio_dev_err(lio_dev, "Failed to send queue count control command\n");
- return -1;
- }
-
- if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
- lio_dev_err(lio_dev, "Queue count control command timed out\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- int ret;
-
- if (lio_dev->nb_rx_queues != num_rxq ||
- lio_dev->nb_tx_queues != num_txq) {
- if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
- return -1;
- lio_dev->nb_rx_queues = num_rxq;
- lio_dev->nb_tx_queues = num_txq;
- }
-
- if (lio_dev->intf_open) {
- ret = lio_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- }
-
- /* Reset ioq registers */
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- return -1;
- }
-
- return 0;
-}
-
-static int
-lio_dev_configure(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
- int retval, num_iqueues, num_oqueues;
- uint8_t mac[RTE_ETHER_ADDR_LEN], i;
- struct lio_if_cfg_resp *resp;
- struct lio_soft_command *sc;
- union lio_if_cfg if_cfg;
- uint32_t resp_size;
-
- PMD_INIT_FUNC_TRACE();
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
- eth_dev->data->dev_conf.rxmode.offloads |=
- RTE_ETH_RX_OFFLOAD_RSS_HASH;
-
- /* Inform firmware about change in number of queues to use.
- * Disable IO queues and reset registers for re-configuration.
- */
- if (lio_dev->port_configured)
- return lio_reconf_queues(eth_dev,
- eth_dev->data->nb_tx_queues,
- eth_dev->data->nb_rx_queues);
-
- lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
-
- /* Set max number of queues which can be re-configured. */
- lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
- lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
-
- resp_size = sizeof(struct lio_if_cfg_resp);
- sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
- if (sc == NULL)
- return -ENOMEM;
-
- resp = (struct lio_if_cfg_resp *)sc->virtrptr;
-
- /* Firmware doesn't have the capability to reconfigure the queues.
- * Claim all queues and use as many as required.
- */
- if_cfg.if_cfg64 = 0;
- if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
- if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
- if_cfg.s.base_queue = 0;
-
- if_cfg.s.gmx_port_id = lio_dev->pf_num;
-
- lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
- LIO_OPCODE_IF_CFG, 0,
- if_cfg.if_cfg64, 0);
-
- /* Setting wait time in seconds */
- sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
- retval);
- /* Soft instr is freed by driver in case of failure. */
- goto nic_config_fail;
- }
-
- /* Sleep on a wait queue till the cond flag indicates that the
- * response arrived or timed-out.
- */
- while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
- lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
- lio_process_ordered_list(lio_dev);
- rte_delay_ms(1);
- }
-
- retval = resp->status;
- if (retval) {
- lio_dev_err(lio_dev, "iq/oq config failed\n");
- goto nic_config_fail;
- }
-
- strlcpy(lio_dev->firmware_version,
- resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
-
- lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
- sizeof(struct octeon_if_cfg_info) >> 3);
-
- num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
- num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
-
- if (!(num_iqueues) || !(num_oqueues)) {
- lio_dev_err(lio_dev,
- "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask);
- goto nic_config_fail;
- }
-
- lio_dev_dbg(lio_dev,
- "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
- eth_dev->data->port_id,
- (unsigned long)resp->cfg_info.iqmask,
- (unsigned long)resp->cfg_info.oqmask,
- num_iqueues, num_oqueues);
-
- lio_dev->linfo.num_rxpciq = num_oqueues;
- lio_dev->linfo.num_txpciq = num_iqueues;
-
- for (i = 0; i < num_oqueues; i++) {
- lio_dev->linfo.rxpciq[i].rxpciq64 =
- resp->cfg_info.linfo.rxpciq[i].rxpciq64;
- lio_dev_dbg(lio_dev, "index %d OQ %d\n",
- i, lio_dev->linfo.rxpciq[i].s.q_no);
- }
-
- for (i = 0; i < num_iqueues; i++) {
- lio_dev->linfo.txpciq[i].txpciq64 =
- resp->cfg_info.linfo.txpciq[i].txpciq64;
- lio_dev_dbg(lio_dev, "index %d IQ %d\n",
- i, lio_dev->linfo.txpciq[i].s.q_no);
- }
-
- lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
- lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
- lio_dev->linfo.link.link_status64 =
- resp->cfg_info.linfo.link.link_status64;
-
- /* 64-bit swap required on LE machines */
- lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
- for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
- mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
- 2 + i));
-
- /* Copy the permanent MAC address */
- rte_ether_addr_copy((struct rte_ether_addr *)mac,
- &eth_dev->data->mac_addrs[0]);
-
- /* enable firmware checksum support for tunnel packets */
- lio_enable_hw_tunnel_rx_checksum(eth_dev);
- lio_enable_hw_tunnel_tx_checksum(eth_dev);
-
- lio_dev->glist_lock =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
- if (lio_dev->glist_lock == NULL)
- return -ENOMEM;
-
- lio_dev->glist_head =
- rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
- 0);
- if (lio_dev->glist_head == NULL) {
- rte_free(lio_dev->glist_lock);
- lio_dev->glist_lock = NULL;
- return -ENOMEM;
- }
-
- lio_dev_link_update(eth_dev, 0);
-
- lio_dev->port_configured = 1;
-
- lio_free_soft_command(sc);
-
- /* Reset ioq regs */
- lio_dev->fn_list.setup_device_regs(lio_dev);
-
- /* Free iq_0 used during init */
- lio_free_instr_queue0(lio_dev);
-
- return 0;
-
-nic_config_fail:
- lio_dev_err(lio_dev, "Failed retval %d\n", retval);
- lio_free_soft_command(sc);
- lio_free_instr_queue0(lio_dev);
-
- return -ENODEV;
-}
-
-/* Ethernet device operations */
-static const struct eth_dev_ops liovf_eth_dev_ops = {
- .dev_configure = lio_dev_configure,
- .dev_start = lio_dev_start,
- .dev_stop = lio_dev_stop,
- .dev_set_link_up = lio_dev_set_link_up,
- .dev_set_link_down = lio_dev_set_link_down,
- .dev_close = lio_dev_close,
- .promiscuous_enable = lio_dev_promiscuous_enable,
- .promiscuous_disable = lio_dev_promiscuous_disable,
- .allmulticast_enable = lio_dev_allmulticast_enable,
- .allmulticast_disable = lio_dev_allmulticast_disable,
- .link_update = lio_dev_link_update,
- .stats_get = lio_dev_stats_get,
- .xstats_get = lio_dev_xstats_get,
- .xstats_get_names = lio_dev_xstats_get_names,
- .stats_reset = lio_dev_stats_reset,
- .xstats_reset = lio_dev_xstats_reset,
- .dev_infos_get = lio_dev_info_get,
- .vlan_filter_set = lio_dev_vlan_filter_set,
- .rx_queue_setup = lio_dev_rx_queue_setup,
- .rx_queue_release = lio_dev_rx_queue_release,
- .tx_queue_setup = lio_dev_tx_queue_setup,
- .tx_queue_release = lio_dev_tx_queue_release,
- .reta_update = lio_dev_rss_reta_update,
- .reta_query = lio_dev_rss_reta_query,
- .rss_hash_conf_get = lio_dev_rss_hash_conf_get,
- .rss_hash_update = lio_dev_rss_hash_update,
- .udp_tunnel_port_add = lio_dev_udp_tunnel_add,
- .udp_tunnel_port_del = lio_dev_udp_tunnel_del,
- .mtu_set = lio_dev_mtu_set,
-};
-
-static void
-lio_check_pf_hs_response(void *lio_dev)
-{
- struct lio_device *dev = lio_dev;
-
- /* check till response arrives */
- if (dev->pfvf_hsword.coproc_tics_per_us)
- return;
-
- cn23xx_vf_handle_mbox(dev);
-
- rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
-}
-
-/**
- * \brief Identify the LIO device and map the BAR address space
- * @param lio_dev lio device
- */
-static int
-lio_chip_specific_setup(struct lio_device *lio_dev)
-{
- struct rte_pci_device *pdev = lio_dev->pci_dev;
- uint32_t dev_id = pdev->id.device_id;
- const char *s;
- int ret = 1;
-
- switch (dev_id) {
- case LIO_CN23XX_VF_VID:
- lio_dev->chip_id = LIO_CN23XX_VF_VID;
- ret = cn23xx_vf_setup_device(lio_dev);
- s = "CN23XX VF";
- break;
- default:
- s = "?";
- lio_dev_err(lio_dev, "Unsupported Chip\n");
- }
-
- if (!ret)
- lio_dev_info(lio_dev, "DEVICE : %s\n", s);
-
- return ret;
-}
-
-static int
-lio_first_time_init(struct lio_device *lio_dev,
- struct rte_pci_device *pdev)
-{
- int dpdk_queues;
-
- PMD_INIT_FUNC_TRACE();
-
- /* set dpdk specific pci device pointer */
- lio_dev->pci_dev = pdev;
-
- /* Identify the LIO type and set device ops */
- if (lio_chip_specific_setup(lio_dev)) {
- lio_dev_err(lio_dev, "Chip specific setup failed\n");
- return -1;
- }
-
- /* Initialize soft command buffer pool */
- if (lio_setup_sc_buffer_pool(lio_dev)) {
- lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
- return -1;
- }
-
- /* Initialize lists to manage the requests of different types that
- * arrive from applications for this lio device.
- */
- lio_setup_response_list(lio_dev);
-
- if (lio_dev->fn_list.setup_mbox(lio_dev)) {
- lio_dev_err(lio_dev, "Mailbox setup failed\n");
- goto error;
- }
-
- /* Check PF response */
- lio_check_pf_hs_response((void *)lio_dev);
-
- /* Do handshake and exit if incompatible PF driver */
- if (cn23xx_pfvf_handshake(lio_dev))
- goto error;
-
- /* Request and wait for device reset. */
- if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
- cn23xx_vf_ask_pf_to_do_flr(lio_dev);
- /* FLR wait time doubled as a precaution. */
- rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
- }
-
- if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to configure device registers\n");
- goto error;
- }
-
- if (lio_setup_instr_queue0(lio_dev)) {
- lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
- goto error;
- }
-
- dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
-
- lio_dev->max_tx_queues = dpdk_queues;
- lio_dev->max_rx_queues = dpdk_queues;
-
- /* Enable input and output queues for this device */
- if (lio_dev->fn_list.enable_io_queues(lio_dev))
- goto error;
-
- return 0;
-
-error:
- lio_free_sc_buffer_pool(lio_dev);
- if (lio_dev->mbox[0])
- lio_dev->fn_list.free_mbox(lio_dev);
- if (lio_dev->instr_queue[0])
- lio_free_instr_queue0(lio_dev);
-
- return -1;
-}
-
-static int
-lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
-{
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* lio_free_sc_buffer_pool */
- lio_free_sc_buffer_pool(lio_dev);
-
- return 0;
-}
-
-static int
-lio_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
- PMD_INIT_FUNC_TRACE();
-
- eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
- eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
-
- /* Primary does the initialization. */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- rte_eth_copy_pci_info(eth_dev, pdev);
-
- if (pdev->mem_resource[0].addr) {
- lio_dev->hw_addr = pdev->mem_resource[0].addr;
- } else {
- PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
- return -ENODEV;
- }
-
- lio_dev->eth_dev = eth_dev;
- /* set lio device print string */
- snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
- "%s[%02x:%02x.%x]", pdev->driver->driver.name,
- pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
-
- lio_dev->port_id = eth_dev->data->port_id;
-
- if (lio_first_time_init(lio_dev, pdev)) {
- lio_dev_err(lio_dev, "Device init failed\n");
- return -EINVAL;
- }
-
- eth_dev->dev_ops = &liovf_eth_dev_ops;
- eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- lio_dev_err(lio_dev,
- "MAC addresses memory allocation failed\n");
- eth_dev->dev_ops = NULL;
- eth_dev->rx_pkt_burst = NULL;
- eth_dev->tx_pkt_burst = NULL;
- return -ENOMEM;
- }
-
- rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
- rte_wmb();
-
- lio_dev->port_configured = 0;
- /* Always allow unicast packets */
- lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
-
- return 0;
-}
-
-static int
-lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
- lio_eth_dev_init);
-}
-
-static int
-lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
-{
- return rte_eth_dev_pci_generic_remove(pci_dev,
- lio_eth_dev_uninit);
-}
-
-/* Set of PCI devices this driver supports */
-static const struct rte_pci_id pci_id_liovf_map[] = {
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
- { .vendor_id = 0, /* sentinel */ }
-};
-
-static struct rte_pci_driver rte_liovf_pmd = {
- .id_table = pci_id_liovf_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = lio_eth_dev_pci_probe,
- .remove = lio_eth_dev_pci_remove,
-};
-
-RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
deleted file mode 100644
index ece2b03858..0000000000
--- a/drivers/net/liquidio/lio_ethdev.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_ETHDEV_H_
-#define _LIO_ETHDEV_H_
-
-#include <stdint.h>
-
-#include "lio_struct.h"
-
-/* timeout to check link state updates from firmware in us */
-#define LIO_LSC_TIMEOUT 100000 /* 100000us (100ms) */
-#define LIO_MAX_CMD_TIMEOUT 10000 /* 10000ms (10s) */
-
-/* The max frame size with default MTU */
-#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
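
For reference, with the default RTE_ETHER_MTU of 1500 bytes this works out to 1500 + 14 (RTE_ETHER_HDR_LEN) + 4 (RTE_ETHER_CRC_LEN) = 1518 bytes.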
-
-#define LIO_DEV(_eth_dev) ((_eth_dev)->data->dev_private)
-
-/* LIO Response condition variable */
-struct lio_dev_ctrl_cmd {
- struct rte_eth_dev *eth_dev;
- uint64_t cond;
-};
-
-enum lio_bus_speed {
- LIO_LINK_SPEED_UNKNOWN = 0,
- LIO_LINK_SPEED_10000 = 10000,
- LIO_LINK_SPEED_25000 = 25000
-};
-
-struct octeon_if_cfg_info {
- uint64_t iqmask; /** mask for IQs enabled for the port */
- uint64_t oqmask; /** mask for OQs enabled for the port */
- struct octeon_link_info linfo; /** initial link information */
- char lio_firmware_version[LIO_FW_VERSION_LENGTH];
-};
-
-/** Stats for each NIC port in RX direction. */
-struct octeon_rx_stats {
- /* link-level stats */
- uint64_t total_rcvd;
- uint64_t bytes_rcvd;
- uint64_t total_bcst;
- uint64_t total_mcst;
- uint64_t runts;
- uint64_t ctl_rcvd;
- uint64_t fifo_err; /* Accounts for over/under-run of buffers */
- uint64_t dmac_drop;
- uint64_t fcs_err;
- uint64_t jabber_err;
- uint64_t l2_err;
- uint64_t frame_err;
-
- /* firmware stats */
- uint64_t fw_total_rcvd;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_rx_vxlan;
- uint64_t fw_rx_vxlan_err;
-
- /* LRO */
- uint64_t fw_lro_pkts; /* Number of packets that are LROed */
- uint64_t fw_lro_octs; /* Number of octets that are LROed */
- uint64_t fw_total_lro; /* Number of LRO packets formed */
- uint64_t fw_lro_aborts; /* Number of times LRO of a packet was aborted */
- uint64_t fw_lro_aborts_port;
- uint64_t fw_lro_aborts_seq;
- uint64_t fw_lro_aborts_tsval;
- uint64_t fw_lro_aborts_timer;
- /* intrmod: packet forward rate */
- uint64_t fwd_rate;
-};
-
-/** Stats for each NIC port in TX direction. */
-struct octeon_tx_stats {
- /* link-level stats */
- uint64_t total_pkts_sent;
- uint64_t total_bytes_sent;
- uint64_t mcast_pkts_sent;
- uint64_t bcast_pkts_sent;
- uint64_t ctl_sent;
- uint64_t one_collision_sent; /* Packets sent after one collision */
- /* Packets sent after multiple collision */
- uint64_t multi_collision_sent;
- /* Packets not sent due to max collisions */
- uint64_t max_collision_fail;
- /* Packets not sent due to max deferrals */
- uint64_t max_deferral_fail;
- /* Accounts for over/under-run of buffers */
- uint64_t fifo_err;
- uint64_t runts;
- uint64_t total_collisions; /* Total number of collisions detected */
-
- /* firmware stats */
- uint64_t fw_total_sent;
- uint64_t fw_total_fwd;
- uint64_t fw_total_fwd_bytes;
- uint64_t fw_err_pko;
- uint64_t fw_err_link;
- uint64_t fw_err_drop;
- uint64_t fw_err_tso;
- uint64_t fw_tso; /* number of tso requests */
- uint64_t fw_tso_fwd; /* number of packets segmented in tso */
- uint64_t fw_tx_vxlan;
-};
-
-struct octeon_link_stats {
- struct octeon_rx_stats fromwire;
- struct octeon_tx_stats fromhost;
-};
-
-union lio_if_cfg {
- uint64_t if_cfg64;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t base_queue : 16;
- uint64_t num_iqueues : 16;
- uint64_t num_oqueues : 16;
- uint64_t gmx_port_id : 8;
- uint64_t vf_id : 8;
-#else
- uint64_t vf_id : 8;
- uint64_t gmx_port_id : 8;
- uint64_t num_oqueues : 16;
- uint64_t num_iqueues : 16;
- uint64_t base_queue : 16;
-#endif
- } s;
-};
-
-struct lio_if_cfg_resp {
- uint64_t rh;
- struct octeon_if_cfg_info cfg_info;
- uint64_t status;
-};
-
-struct lio_link_stats_resp {
- uint64_t rh;
- struct octeon_link_stats link_stats;
- uint64_t status;
-};
-
-struct lio_link_status_resp {
- uint64_t rh;
- struct octeon_link_info link_info;
- uint64_t status;
-};
-
-struct lio_rss_set {
- struct param {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t flags : 16;
- uint64_t hashinfo : 32;
- uint64_t itablesize : 16;
- uint64_t hashkeysize : 16;
- uint64_t reserved : 48;
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t itablesize : 16;
- uint64_t hashinfo : 32;
- uint64_t flags : 16;
- uint64_t reserved : 48;
- uint64_t hashkeysize : 16;
-#endif
- } param;
-
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t key[LIO_RSS_MAX_KEY_SZ];
-};
-
-void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-#endif /* _LIO_ETHDEV_H_ */
diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
deleted file mode 100644
index f227827081..0000000000
--- a/drivers/net/liquidio/lio_logs.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_LOGS_H_
-#define _LIO_LOGS_H_
-
-extern int lio_logtype_driver;
-#define lio_dev_printf(lio_dev, level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_driver, \
- "%s" fmt, (lio_dev)->dev_string, ##args)
-
-#define lio_dev_info(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
-
-#define lio_dev_err(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
-
-extern int lio_logtype_init;
-#define PMD_INIT_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, lio_logtype_init, \
- fmt, ## args)
-
-/* Enable these through config options */
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
-
-#define lio_dev_dbg(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_RX
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_TX
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) \
- lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
-#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define PMD_REGS_LOG(lio_dev, fmt, args...) \
- lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
-#define PMD_REGS_LOG(level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
-
-#endif /* _LIO_LOGS_H_ */
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
deleted file mode 100644
index e09798ddd7..0000000000
--- a/drivers/net/liquidio/lio_rxtx.c
+++ /dev/null
@@ -1,1804 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-#define LIO_MAX_SG 12
-/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
-#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
-#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
-
-static void
-lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
-{
- uint32_t count = 0;
-
- do {
- count += droq->buffer_size;
- } while (count < LIO_MAX_RX_PKTLEN);
-}
-
-static void
-lio_droq_reset_indices(struct lio_droq *droq)
-{
- droq->read_idx = 0;
- droq->write_idx = 0;
- droq->refill_idx = 0;
- droq->refill_count = 0;
- rte_atomic64_set(&droq->pkts_pending, 0);
-}
-
-static void
-lio_droq_destroy_ring_buffers(struct lio_droq *droq)
-{
- uint32_t i;
-
- for (i = 0; i < droq->nb_desc; i++) {
- if (droq->recv_buf_list[i].buffer) {
- rte_pktmbuf_free((struct rte_mbuf *)
- droq->recv_buf_list[i].buffer);
- droq->recv_buf_list[i].buffer = NULL;
- }
- }
-
- lio_droq_reset_indices(droq);
-}
-
-static int
-lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
- struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring = droq->desc_ring;
- uint32_t i;
- void *buf;
-
- for (i = 0; i < droq->nb_desc; i++) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- if (buf == NULL) {
- lio_dev_err(lio_dev, "buffer alloc failed\n");
- droq->stats.rx_alloc_failure++;
- lio_droq_destroy_ring_buffers(droq);
- return -ENOMEM;
- }
-
- droq->recv_buf_list[i].buffer = buf;
- droq->info_list[i].length = 0;
-
- /* map ring buffers into memory */
- desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
- desc_ring[i].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[i].buffer);
- }
-
- lio_droq_reset_indices(droq);
-
- lio_droq_compute_max_packet_bufs(droq);
-
- return 0;
-}
-
-static void
-lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
-{
- const struct rte_memzone *mz_tmp;
- int ret = 0;
-
- if (mz == NULL) {
- lio_dev_err(lio_dev, "Memzone NULL\n");
- return;
- }
-
- mz_tmp = rte_memzone_lookup(mz->name);
- if (mz_tmp == NULL) {
- lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
- return;
- }
-
- ret = rte_memzone_free(mz);
- if (ret)
- lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
-}
-
-/**
- * Frees the space for descriptor ring for the droq.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- */
-static void
-lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
-{
- struct lio_droq *droq = lio_dev->droq[q_no];
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- lio_droq_destroy_ring_buffers(droq);
- rte_free(droq->recv_buf_list);
- droq->recv_buf_list = NULL;
- lio_dma_zone_free(lio_dev, droq->info_mz);
- lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
-
- memset(droq, 0, LIO_DROQ_SIZE);
-}
-
-static void *
-lio_alloc_info_buffer(struct lio_device *lio_dev,
- struct lio_droq *droq, unsigned int socket_id)
-{
- droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "info_list", droq->q_no,
- (droq->nb_desc *
- LIO_DROQ_INFO_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->info_mz == NULL)
- return NULL;
-
- droq->info_list_dma = droq->info_mz->iova;
- droq->info_alloc_size = droq->info_mz->len;
- droq->info_base_addr = (size_t)droq->info_mz->addr;
-
- return droq->info_mz->addr;
-}
-
-/**
- * Allocates space for the descriptor ring for the droq and
- * sets the base addr, num desc etc in Octeon registers.
- *
- * @param lio_dev - pointer to the lio device structure
- * @param q_no - droq no.
- * @param num_descs - number of descriptors for the ring
- * @param desc_size - size of the receive buffer for each descriptor
- * @param mpool - mempool to allocate the receive buffers from
- * @param socket_id - socket to allocate memory on
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
- uint32_t num_descs, uint32_t desc_size,
- struct rte_mempool *mpool, unsigned int socket_id)
-{
- uint32_t c_refill_threshold;
- uint32_t desc_ring_size;
- struct lio_droq *droq;
-
- lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
- droq = lio_dev->droq[q_no];
- droq->lio_dev = lio_dev;
- droq->q_no = q_no;
- droq->mpool = mpool;
-
- c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
-
- droq->nb_desc = num_descs;
- droq->buffer_size = desc_size;
-
- desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
- droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "droq", q_no,
- desc_ring_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
-
- if (droq->desc_ring_mz == NULL) {
- lio_dev_err(lio_dev,
- "Output queue %d ring alloc failed\n", q_no);
- return -1;
- }
-
- droq->desc_ring_dma = droq->desc_ring_mz->iova;
- droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
-
- lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
- q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
- lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
- droq->nb_desc);
-
- droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
- if (droq->info_list == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
- goto init_droq_fail;
- }
-
- droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
- (droq->nb_desc *
- LIO_DROQ_RECVBUF_SIZE),
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (droq->recv_buf_list == NULL) {
- lio_dev_err(lio_dev,
- "Output queue recv buf list alloc failed\n");
- goto init_droq_fail;
- }
-
- if (lio_droq_setup_ring_buffers(lio_dev, droq))
- goto init_droq_fail;
-
- droq->refill_threshold = c_refill_threshold;
-
- rte_spinlock_init(&droq->lock);
-
- lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
-
- lio_dev->io_qmask.oq |= (1ULL << q_no);
-
- return 0;
-
-init_droq_fail:
- lio_delete_droq(lio_dev, q_no);
-
- return -1;
-}
-
-int
-lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
- int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
-{
- struct lio_droq *droq;
-
- PMD_INIT_FUNC_TRACE();
-
- /* Allocate the DS for the new droq. */
- droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (droq == NULL)
- return -ENOMEM;
-
- lio_dev->droq[oq_no] = droq;
-
- /* Initialize the Droq */
- if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
- socket_id)) {
- lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
- return -ENOMEM;
- }
-
- lio_dev->num_oqs++;
-
- lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
-
- /* Send credit for octeon output queues. credits are always
- * sent after the output queue is enabled.
- */
- rte_write32(lio_dev->droq[oq_no]->nb_desc,
- lio_dev->droq[oq_no]->pkts_credit_reg);
- rte_wmb();
-
- return 0;
-}
-
-static inline uint32_t
-lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
-{
- uint32_t buf_cnt = 0;
-
- while (total_len > (buf_size * buf_cnt))
- buf_cnt++;
-
- return buf_cnt;
-}
-
-/* If we were not able to refill all buffers, try to move around
- * the buffers that were not dispatched.
- */
-static inline uint32_t
-lio_droq_refill_pullup_descs(struct lio_droq *droq,
- struct lio_droq_desc *desc_ring)
-{
- uint32_t refill_index = droq->refill_idx;
- uint32_t desc_refilled = 0;
-
- while (refill_index != droq->read_idx) {
- if (droq->recv_buf_list[refill_index].buffer) {
- droq->recv_buf_list[droq->refill_idx].buffer =
- droq->recv_buf_list[refill_index].buffer;
- desc_ring[droq->refill_idx].buffer_ptr =
- desc_ring[refill_index].buffer_ptr;
- droq->recv_buf_list[refill_index].buffer = NULL;
- desc_ring[refill_index].buffer_ptr = 0;
- do {
- droq->refill_idx = lio_incr_index(
- droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- } while (droq->recv_buf_list[droq->refill_idx].buffer);
- }
- refill_index = lio_incr_index(refill_index, 1,
- droq->nb_desc);
- } /* while */
-
- return desc_refilled;
-}
-
-/* lio_droq_refill
- *
- * @param droq - droq in which descriptors require new buffers.
- *
- * Description:
- * Called during normal DROQ processing in interrupt mode or by the poll
- * thread to refill the descriptors from which buffers were dispatched
- * to upper layers. Attempts to allocate new buffers. If that fails, moves
- * up buffers (that were not dispatched) to form a contiguous ring.
- *
- * Returns:
- * No of descriptors refilled.
- *
- * Locks:
- * This routine is called with droq->lock held.
- */
-static uint32_t
-lio_droq_refill(struct lio_droq *droq)
-{
- struct lio_droq_desc *desc_ring;
- uint32_t desc_refilled = 0;
- void *buf = NULL;
-
- desc_ring = droq->desc_ring;
-
- while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
- /* If a valid buffer exists (happens if there is no dispatch),
- * reuse the buffer, else allocate.
- */
- if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
- buf = rte_pktmbuf_alloc(droq->mpool);
- /* If a buffer could not be allocated, no point in
- * continuing
- */
- if (buf == NULL) {
- droq->stats.rx_alloc_failure++;
- break;
- }
-
- droq->recv_buf_list[droq->refill_idx].buffer = buf;
- }
-
- desc_ring[droq->refill_idx].buffer_ptr =
- lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
- /* Reset any previous values in the length field. */
- droq->info_list[droq->refill_idx].length = 0;
-
- droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
- droq->nb_desc);
- desc_refilled++;
- droq->refill_count--;
- }
-
- if (droq->refill_count)
- desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
-
-	/* If droq->refill_count is still non-zero: the refill count does not
-	 * change in the second pass. Buffers were only moved to close the gap
-	 * in the ring, so the same number of buffers still needs to be
-	 * refilled.
-	 */
- return desc_refilled;
-}
-
-static int
-lio_droq_fast_process_packet(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts)
-{
- struct rte_mbuf *nicbuf = NULL;
- struct lio_droq_info *info;
- uint32_t total_len = 0;
- int data_total_len = 0;
- uint32_t pkt_len = 0;
- union octeon_rh *rh;
- int data_pkts = 0;
-
- info = &droq->info_list[droq->read_idx];
- lio_swap_8B_data((uint64_t *)info, 2);
-
- if (!info->length)
- return -1;
-
-	/* Length of the response header is included in the received data length. */
- info->length -= OCTEON_RH_SIZE;
- rh = &info->rh;
-
- total_len += (uint32_t)info->length;
-
- if (lio_opcode_slow_path(rh)) {
- uint32_t buf_cnt;
-
- buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
- (uint32_t)info->length);
- droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
- droq->nb_desc);
- droq->refill_count += buf_cnt;
- } else {
- if (info->length <= droq->buffer_size) {
- if (rh->r_dh.has_hash)
- pkt_len = (uint32_t)(info->length - 8);
- else
- pkt_len = (uint32_t)info->length;
-
- nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer = NULL;
- droq->read_idx = lio_incr_index(
- droq->read_idx, 1,
- droq->nb_desc);
- droq->refill_count++;
-
- if (likely(nicbuf != NULL)) {
- /* We don't have a way to pass flags yet */
- nicbuf->ol_flags = 0;
- if (rh->r_dh.has_hash) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(nicbuf,
- uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss = (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- }
-
- nicbuf->pkt_len = pkt_len;
- nicbuf->data_len = pkt_len;
- nicbuf->port = lio_dev->port_id;
- /* Store the mbuf */
- rx_pkts[data_pkts++] = nicbuf;
- data_total_len += pkt_len;
- }
-
- /* Prefetch buffer pointers when on a cache line
- * boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(
- &droq->recv_buf_list[droq->read_idx]);
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- } else {
- struct rte_mbuf *first_buf = NULL;
- struct rte_mbuf *last_buf = NULL;
-
- while (pkt_len < info->length) {
- int cpy_len = 0;
-
- cpy_len = ((pkt_len + droq->buffer_size) >
- info->length)
- ? ((uint32_t)info->length -
- pkt_len)
- : droq->buffer_size;
-
- nicbuf =
- droq->recv_buf_list[droq->read_idx].buffer;
- droq->recv_buf_list[droq->read_idx].buffer =
- NULL;
-
- if (likely(nicbuf != NULL)) {
- /* Note the first seg */
- if (!pkt_len)
- first_buf = nicbuf;
-
- nicbuf->port = lio_dev->port_id;
- /* We don't have a way to pass
- * flags yet
- */
- nicbuf->ol_flags = 0;
- if ((!pkt_len) && (rh->r_dh.has_hash)) {
- uint64_t *hash_ptr;
-
- nicbuf->ol_flags |=
- RTE_MBUF_F_RX_RSS_HASH;
- hash_ptr = rte_pktmbuf_mtod(
- nicbuf, uint64_t *);
- lio_swap_8B_data(hash_ptr, 1);
- nicbuf->hash.rss =
- (uint32_t)*hash_ptr;
- nicbuf->data_off += 8;
- nicbuf->pkt_len = cpy_len - 8;
- nicbuf->data_len = cpy_len - 8;
- } else {
- nicbuf->pkt_len = cpy_len;
- nicbuf->data_len = cpy_len;
- }
-
- if (pkt_len)
- first_buf->nb_segs++;
-
- if (last_buf)
- last_buf->next = nicbuf;
-
- last_buf = nicbuf;
- } else {
- PMD_RX_LOG(lio_dev, ERR, "no buf\n");
- }
-
- pkt_len += cpy_len;
- droq->read_idx = lio_incr_index(
- droq->read_idx,
- 1, droq->nb_desc);
- droq->refill_count++;
-
- /* Prefetch buffer pointers when on a
- * cache line boundary
- */
- if ((droq->read_idx & 3) == 0) {
- rte_prefetch0(&droq->recv_buf_list
- [droq->read_idx]);
-
- rte_prefetch0(
- &droq->info_list[droq->read_idx]);
- }
- }
- rx_pkts[data_pkts++] = first_buf;
- if (rh->r_dh.has_hash)
- data_total_len += (pkt_len - 8);
- else
- data_total_len += pkt_len;
- }
-
- /* Inform upper layer about packet checksum verification */
- struct rte_mbuf *m = rx_pkts[data_pkts - 1];
-
- if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
- if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
- m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
-
- if (droq->refill_count >= droq->refill_threshold) {
- int desc_refilled = lio_droq_refill(droq);
-
- /* Flush the droq descriptor data to memory to be sure
- * that when we update the credits the data in memory is
- * accurate.
- */
- rte_wmb();
- rte_write32(desc_refilled, droq->pkts_credit_reg);
- /* make sure mmio write completes */
- rte_wmb();
- }
-
- info->length = 0;
- info->rh.rh64 = 0;
-
- droq->stats.pkts_received++;
- droq->stats.rx_pkts_received += data_pkts;
- droq->stats.rx_bytes_received += data_total_len;
- droq->stats.bytes_received += total_len;
-
- return data_pkts;
-}
-
-static uint32_t
-lio_droq_fast_process_packets(struct lio_device *lio_dev,
- struct lio_droq *droq,
- struct rte_mbuf **rx_pkts,
- uint32_t pkts_to_process)
-{
- int ret, data_pkts = 0;
- uint32_t pkt;
-
- for (pkt = 0; pkt < pkts_to_process; pkt++) {
- ret = lio_droq_fast_process_packet(lio_dev, droq,
- &rx_pkts[data_pkts]);
- if (ret < 0) {
- lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
- lio_dev->port_id, droq->q_no,
- droq->read_idx, pkts_to_process);
- break;
- }
- data_pkts += ret;
- }
-
- rte_atomic64_sub(&droq->pkts_pending, pkt);
-
- return data_pkts;
-}
-
-static inline uint32_t
-lio_droq_check_hw_for_pkts(struct lio_droq *droq)
-{
- uint32_t last_count;
- uint32_t pkt_count;
-
- pkt_count = rte_read32(droq->pkts_sent_reg);
-
- last_count = pkt_count - droq->pkt_count;
- droq->pkt_count = pkt_count;
-
- if (last_count)
- rte_atomic64_add(&droq->pkts_pending, last_count);
-
- return last_count;
-}
-
-uint16_t
-lio_dev_recv_pkts(void *rx_queue,
- struct rte_mbuf **rx_pkts,
- uint16_t budget)
-{
- struct lio_droq *droq = rx_queue;
- struct lio_device *lio_dev = droq->lio_dev;
- uint32_t pkts_processed = 0;
- uint32_t pkt_count = 0;
-
- lio_droq_check_hw_for_pkts(droq);
-
- pkt_count = rte_atomic64_read(&droq->pkts_pending);
- if (!pkt_count)
- return 0;
-
- if (pkt_count > budget)
- pkt_count = budget;
-
- /* Grab the lock */
- rte_spinlock_lock(&droq->lock);
- pkts_processed = lio_droq_fast_process_packets(lio_dev,
- droq, rx_pkts,
- pkt_count);
-
- if (droq->pkt_count) {
- rte_write32(droq->pkt_count, droq->pkts_sent_reg);
- droq->pkt_count = 0;
- }
-
- /* Release the spin lock */
- rte_spinlock_unlock(&droq->lock);
-
- return pkts_processed;
-}
-
-void
-lio_delete_droq_queue(struct lio_device *lio_dev,
- int oq_no)
-{
- lio_delete_droq(lio_dev, oq_no);
- lio_dev->num_oqs--;
- rte_free(lio_dev->droq[oq_no]);
- lio_dev->droq[oq_no] = NULL;
-}
-
-/**
- * lio_init_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param txpciq - queue to be initialized.
- *
- * Called at driver init time for each input queue. num_descs gives the
- * number of descriptors to allocate for the queue.
- *
- * @return Success: 0 Failure: -1
- */
-static int
-lio_init_instr_queue(struct lio_device *lio_dev,
- union octeon_txpciq txpciq,
- uint32_t num_descs, unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
- struct lio_instr_queue *iq;
- uint32_t instr_type;
- uint32_t q_size;
-
- instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
-
- q_size = instr_type * num_descs;
- iq = lio_dev->instr_queue[iq_no];
- iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
- "instr_queue", iq_no, q_size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->iq_mz == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
- iq_no);
- return -1;
- }
-
- iq->base_addr_dma = iq->iq_mz->iova;
- iq->base_addr = (uint8_t *)iq->iq_mz->addr;
-
- iq->nb_desc = num_descs;
-
-	/* Initialize a list to hold requests that have been posted to Octeon
-	 * but have not yet been fetched by Octeon.
-	 */
- iq->request_list = rte_zmalloc_socket("request_list",
- sizeof(*iq->request_list) *
- num_descs,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (iq->request_list == NULL) {
- lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
- iq_no);
- lio_dma_zone_free(lio_dev, iq->iq_mz);
- return -1;
- }
-
- lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
- iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
- iq->nb_desc);
-
- iq->lio_dev = lio_dev;
- iq->txpciq.txpciq64 = txpciq.txpciq64;
- iq->fill_cnt = 0;
- iq->host_write_index = 0;
- iq->lio_read_index = 0;
- iq->flush_index = 0;
-
- rte_atomic64_set(&iq->instr_pending, 0);
-
- /* Initialize the spinlock for this instruction queue */
- rte_spinlock_init(&iq->lock);
- rte_spinlock_init(&iq->post_lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- lio_dev->io_qmask.iq |= (1ULL << iq_no);
-
- /* Set the 32B/64B mode for each input queue */
- lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
- iq->iqcmd_64B = (instr_type == 64);
-
- lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
-
- return 0;
-}
-
-int
-lio_setup_instr_queue0(struct lio_device *lio_dev)
-{
- union octeon_txpciq txpciq;
- uint32_t num_descs = 0;
- uint32_t iq_no = 0;
-
- num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
-
- lio_dev->num_iqs = 0;
-
- lio_dev->instr_queue[0] = rte_zmalloc(NULL,
- sizeof(struct lio_instr_queue), 0);
- if (lio_dev->instr_queue[0] == NULL)
- return -ENOMEM;
-
- lio_dev->instr_queue[0]->q_index = 0;
- lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
- txpciq.txpciq64 = 0;
- txpciq.s.q_no = iq_no;
- txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
- txpciq.s.use_qpg = 0;
- txpciq.s.qpg = 0;
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-/**
- * lio_delete_instr_queue()
- * @param lio_dev - pointer to the lio device structure.
- * @param iq_no - queue to be deleted.
- *
- * Called at driver unload time for each input queue. Deletes all
- * allocated resources for the input queue.
- */
-static void
-lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-
- rte_free(iq->request_list);
- iq->request_list = NULL;
- lio_dma_zone_free(lio_dev, iq->iq_mz);
-}
-
-void
-lio_free_instr_queue0(struct lio_device *lio_dev)
-{
- lio_delete_instr_queue(lio_dev, 0);
- rte_free(lio_dev->instr_queue[0]);
- lio_dev->instr_queue[0] = NULL;
- lio_dev->num_iqs--;
-}
-
-/* Return 0 on success, -1 on failure */
-int
-lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id)
-{
- uint32_t iq_no = (uint32_t)txpciq.s.q_no;
-
- lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
- sizeof(struct lio_instr_queue),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lio_dev->instr_queue[iq_no] == NULL)
- return -1;
-
- lio_dev->instr_queue[iq_no]->q_index = q_index;
- lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
-
- if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- return -1;
- }
-
- lio_dev->num_iqs++;
-
- return 0;
-}
-
-int
-lio_wait_for_instr_fetch(struct lio_device *lio_dev)
-{
- int pending, instr_cnt;
- int i, retry = 1000;
-
- do {
- instr_cnt = 0;
-
- for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
- if (!(lio_dev->io_qmask.iq & (1ULL << i)))
- continue;
-
- if (lio_dev->instr_queue[i] == NULL)
- break;
-
- pending = rte_atomic64_read(
- &lio_dev->instr_queue[i]->instr_pending);
- if (pending)
- lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
-
- instr_cnt += pending;
- }
-
- if (instr_cnt == 0)
- break;
-
- rte_delay_ms(1);
-
- } while (retry-- && instr_cnt);
-
- return instr_cnt;
-}
-
-static inline void
-lio_ring_doorbell(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
- rte_write32(iq->fill_cnt, iq->doorbell_reg);
- /* make sure doorbell write goes through */
- rte_wmb();
- iq->fill_cnt = 0;
- }
-}
-
-static inline void
-copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- uint8_t *iqptr, cmdsize;
-
- cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
- iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
-
- rte_memcpy(iqptr, cmd, cmdsize);
-}
-
-static inline struct lio_iq_post_status
-post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
-{
- struct lio_iq_post_status st;
-
- st.status = LIO_IQ_SEND_OK;
-
-	/* This ensures that the read index does not wrap around to the same
-	 * position if the queue gets full before Octeon can fetch any
-	 * instructions.
-	 */
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 1)) {
- st.status = LIO_IQ_SEND_FAILED;
- st.index = -1;
- return st;
- }
-
- if (rte_atomic64_read(&iq->instr_pending) >=
- (int32_t)(iq->nb_desc - 2))
- st.status = LIO_IQ_SEND_STOP;
-
- copy_cmd_into_iq(iq, cmd);
-
- /* "index" is returned, host_write_index is modified. */
- st.index = iq->host_write_index;
- iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
- iq->nb_desc);
- iq->fill_cnt++;
-
- /* Flush the command into memory. We need to be sure the data is in
- * memory before indicating that the instruction is pending.
- */
- rte_wmb();
-
- rte_atomic64_inc(&iq->instr_pending);
-
- return st;
-}
-
-static inline void
-lio_add_to_request_list(struct lio_instr_queue *iq,
- int idx, void *buf, int reqtype)
-{
- iq->request_list[idx].buf = buf;
- iq->request_list[idx].reqtype = reqtype;
-}
-
-static inline void
-lio_free_netsgbuf(void *buf)
-{
- struct lio_buf_free_info *finfo = buf;
- struct lio_device *lio_dev = finfo->lio_dev;
- struct rte_mbuf *m = finfo->mbuf;
- struct lio_gather *g = finfo->g;
- uint8_t iq = finfo->iq_no;
-
- /* This will take care of multiple segments also */
- rte_pktmbuf_free(m);
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq]);
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
- rte_free(finfo);
-}
-
-/* Can only run in process context */
-static int
-lio_process_iq_request_list(struct lio_device *lio_dev,
- struct lio_instr_queue *iq)
-{
- struct octeon_instr_irh *irh = NULL;
- uint32_t old = iq->flush_index;
- struct lio_soft_command *sc;
- uint32_t inst_count = 0;
- int reqtype;
- void *buf;
-
- while (old != iq->lio_read_index) {
- reqtype = iq->request_list[old].reqtype;
- buf = iq->request_list[old].buf;
-
- if (reqtype == LIO_REQTYPE_NONE)
- goto skip_this;
-
- switch (reqtype) {
- case LIO_REQTYPE_NORESP_NET:
- rte_pktmbuf_free((struct rte_mbuf *)buf);
- break;
- case LIO_REQTYPE_NORESP_NET_SG:
- lio_free_netsgbuf(buf);
- break;
- case LIO_REQTYPE_SOFT_COMMAND:
- sc = buf;
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- /* We're expecting a response from Octeon.
- * It's up to lio_process_ordered_list() to
- * process sc. Add sc to the ordered soft
- * command response list because we expect
- * a response from Octeon.
- */
- rte_spinlock_lock(&lio_dev->response_list.lock);
- rte_atomic64_inc(
- &lio_dev->response_list.pending_req_count);
- STAILQ_INSERT_TAIL(
- &lio_dev->response_list.head,
- &sc->node, entries);
- rte_spinlock_unlock(
- &lio_dev->response_list.lock);
- } else {
- if (sc->callback) {
- /* This callback must not sleep */
- sc->callback(LIO_REQUEST_DONE,
- sc->callback_arg);
- }
- }
- break;
- default:
- lio_dev_err(lio_dev,
- "Unknown reqtype: %d buf: %p at idx %d\n",
- reqtype, buf, old);
- }
-
- iq->request_list[old].buf = NULL;
- iq->request_list[old].reqtype = 0;
-
-skip_this:
- inst_count++;
- old = lio_incr_index(old, 1, iq->nb_desc);
- }
-
- iq->flush_index = old;
-
- return inst_count;
-}
-
-static void
-lio_update_read_index(struct lio_instr_queue *iq)
-{
- uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
- uint32_t last_done;
-
- last_done = pkt_in_done - iq->pkt_in_done;
- iq->pkt_in_done = pkt_in_done;
-
- /* Add last_done and modulo with the IQ size to get new index */
- iq->lio_read_index = (iq->lio_read_index +
- (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
- iq->nb_desc;
-}
-
-int
-lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
-{
- uint32_t inst_processed = 0;
- int tx_done = 1;
-
- if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
- return tx_done;
-
- rte_spinlock_lock(&iq->lock);
-
- lio_update_read_index(iq);
-
- do {
- /* Process any outstanding IQ packets. */
- if (iq->flush_index == iq->lio_read_index)
- break;
-
- inst_processed = lio_process_iq_request_list(lio_dev, iq);
-
- if (inst_processed) {
- rte_atomic64_sub(&iq->instr_pending, inst_processed);
- iq->stats.instr_processed += inst_processed;
- }
-
- inst_processed = 0;
-
- } while (1);
-
- rte_spinlock_unlock(&iq->lock);
-
- rte_atomic64_clear(&iq->iq_flush_running);
-
- return tx_done;
-}
-
-static int
-lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
- void *buf, uint32_t datasize, uint32_t reqtype)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- struct lio_iq_post_status st;
-
- rte_spinlock_lock(&iq->post_lock);
-
- st = post_command2(iq, cmd);
-
- if (st.status != LIO_IQ_SEND_FAILED) {
- lio_add_to_request_list(iq, st.index, buf, reqtype);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
- datasize);
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
-
- lio_ring_doorbell(lio_dev, iq);
- } else {
- LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
- }
-
- rte_spinlock_unlock(&iq->post_lock);
-
- return st.status;
-}
-
-void
-lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc, uint8_t opcode,
- uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1)
-{
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_rdp *rdp;
-
- RTE_ASSERT(opcode <= 15);
- RTE_ASSERT(subcode <= 127);
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
-
- ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
-
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
-
- pki_ih3->w = 1;
- pki_ih3->raw = 1;
- pki_ih3->utag = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
- pki_ih3->utt = 1;
-
- pki_ih3->tag = LIO_CONTROL;
- pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x7;
- pki_ih3->sl = 8;
-
- if (sc->datasize)
- ih3->dlengsz = sc->datasize;
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- irh->opcode = opcode;
- irh->subcode = subcode;
-
- /* opcode/subcode specific parameters (ossp) */
- irh->ossp = irh_ossp;
- sc->cmd.cmd3.ossp[0] = ossp0;
- sc->cmd.cmd3.ossp[1] = ossp1;
-
- if (sc->rdatasize) {
- rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
- rdp->pcie_port = lio_dev->pcie_port;
- rdp->rlen = sc->rdatasize;
- irh->rflag = 1;
- /* PKI IH3 */
- ih3->fsz = OCTEON_SOFT_CMD_RESP_IH3;
- } else {
- irh->rflag = 0;
- /* PKI IH3 */
- ih3->fsz = OCTEON_PCI_CMD_O3;
- }
-}
-
-int
-lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc)
-{
- struct octeon_instr_ih3 *ih3;
- struct octeon_instr_irh *irh;
- uint32_t len = 0;
-
- ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
- if (ih3->dlengsz) {
- RTE_ASSERT(sc->dmadptr);
- sc->cmd.cmd3.dptr = sc->dmadptr;
- }
-
- irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
- if (irh->rflag) {
- RTE_ASSERT(sc->dmarptr);
- RTE_ASSERT(sc->status_word != NULL);
- *sc->status_word = LIO_COMPLETION_WORD_INIT;
- sc->cmd.cmd3.rptr = sc->dmarptr;
- }
-
- len = (uint32_t)ih3->dlengsz;
-
- if (sc->wait_time)
- sc->timeout = lio_uptime + sc->wait_time;
-
- return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
- LIO_REQTYPE_SOFT_COMMAND);
-}
-
-int
-lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
-{
- char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
- uint16_t buf_size;
-
- buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
- snprintf(sc_pool_name, sizeof(sc_pool_name),
- "lio_sc_pool_%u", lio_dev->port_id);
- lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
- LIO_MAX_SOFT_COMMAND_BUFFERS,
- 0, 0, buf_size, SOCKET_ID_ANY);
- return 0;
-}
-
-void
-lio_free_sc_buffer_pool(struct lio_device *lio_dev)
-{
- rte_mempool_free(lio_dev->sc_buf_pool);
-}
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
- uint32_t rdatasize, uint32_t ctxsize)
-{
- uint32_t offset = sizeof(struct lio_soft_command);
- struct lio_soft_command *sc;
- struct rte_mbuf *m;
- uint64_t dma_addr;
-
- RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
- LIO_SOFT_COMMAND_BUFFER_SIZE);
-
- m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
- if (m == NULL) {
- lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
- return NULL;
- }
-
- /* set rte_mbuf data size and there is only 1 segment */
- m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
- m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
-
- /* use rte_mbuf buffer for soft command */
- sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
- memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
- sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
- sc->dma_addr = rte_mbuf_data_iova(m);
- sc->mbuf = m;
-
- dma_addr = sc->dma_addr;
-
- if (ctxsize) {
- sc->ctxptr = (uint8_t *)sc + offset;
- sc->ctxsize = ctxsize;
- }
-
- /* Start data at 128 byte boundary */
- offset = (offset + ctxsize + 127) & 0xffffff80;
-
- if (datasize) {
- sc->virtdptr = (uint8_t *)sc + offset;
- sc->dmadptr = dma_addr + offset;
- sc->datasize = datasize;
- }
-
- /* Start rdata at 128 byte boundary */
- offset = (offset + datasize + 127) & 0xffffff80;
-
- if (rdatasize) {
- RTE_ASSERT(rdatasize >= 16);
- sc->virtrptr = (uint8_t *)sc + offset;
- sc->dmarptr = dma_addr + offset;
- sc->rdatasize = rdatasize;
- sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
- rdatasize - 8);
- }
-
- return sc;
-}
-
-void
-lio_free_soft_command(struct lio_soft_command *sc)
-{
- rte_pktmbuf_free(sc->mbuf);
-}
-
-void
-lio_setup_response_list(struct lio_device *lio_dev)
-{
- STAILQ_INIT(&lio_dev->response_list.head);
- rte_spinlock_init(&lio_dev->response_list.lock);
- rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
-}
-
-int
-lio_process_ordered_list(struct lio_device *lio_dev)
-{
- int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
- struct lio_response_list *ordered_sc_list;
- struct lio_soft_command *sc;
- int request_complete = 0;
- uint64_t status64;
- uint32_t status;
-
- ordered_sc_list = &lio_dev->response_list;
-
- do {
- rte_spinlock_lock(&ordered_sc_list->lock);
-
- if (STAILQ_EMPTY(&ordered_sc_list->head)) {
- /* ordered_sc_list is empty; there is
- * nothing to process
- */
- rte_spinlock_unlock(&ordered_sc_list->lock);
- return -1;
- }
-
- sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
- struct lio_soft_command, node);
-
- status = LIO_REQUEST_PENDING;
-
-		/* Check if Octeon has finished DMA'ing a response
-		 * to where rptr is pointing.
-		 */
- status64 = *sc->status_word;
-
- if (status64 != LIO_COMPLETION_WORD_INIT) {
- /* This logic ensures that all 64b have been written.
- * 1. check byte 0 for non-FF
- * 2. if non-FF, then swap result from BE to host order
- * 3. check byte 7 (swapped to 0) for non-FF
- * 4. if non-FF, use the low 32-bit status code
- * 5. if either byte 0 or byte 7 is FF, don't use status
- */
- if ((status64 & 0xff) != 0xff) {
- lio_swap_8B_data(&status64, 1);
- if (((status64 & 0xff) != 0xff)) {
- /* retrieve 16-bit firmware status */
- status = (uint32_t)(status64 &
- 0xffffULL);
- if (status) {
- status =
- LIO_FIRMWARE_STATUS_CODE(
- status);
- } else {
- /* i.e. no error */
- status = LIO_REQUEST_DONE;
- }
- }
- }
- } else if ((sc->timeout && lio_check_timeout(lio_uptime,
- sc->timeout))) {
- lio_dev_err(lio_dev,
- "cmd failed, timeout (%ld, %ld)\n",
- (long)lio_uptime, (long)sc->timeout);
- status = LIO_REQUEST_TIMEOUT;
- }
-
- if (status != LIO_REQUEST_PENDING) {
- /* we have received a response or we have timed out.
- * remove node from linked list
- */
- STAILQ_REMOVE(&ordered_sc_list->head,
- &sc->node, lio_stailq_node, entries);
- rte_atomic64_dec(
- &lio_dev->response_list.pending_req_count);
- rte_spinlock_unlock(&ordered_sc_list->lock);
-
- if (sc->callback)
- sc->callback(status, sc->callback_arg);
-
- request_complete++;
- } else {
- /* no response yet */
- request_complete = 0;
- rte_spinlock_unlock(&ordered_sc_list->lock);
- }
-
-		/* If we hit the maximum number of ordered requests to process
-		 * per loop, quit and let this function be invoked again the
-		 * next time the poll thread runs, to process the remaining
-		 * requests. Without an upper limit on the requests processed,
-		 * this function could take up the entire CPU.
-		 */
- if (request_complete >= resp_to_process)
- break;
- } while (request_complete);
-
- return 0;
-}
-
-static inline struct lio_stailq_node *
-list_delete_first_node(struct lio_stailq_head *head)
-{
- struct lio_stailq_node *node;
-
- if (STAILQ_EMPTY(head))
- node = NULL;
- else
- node = STAILQ_FIRST(head);
-
- if (node)
- STAILQ_REMOVE(head, node, lio_stailq_node, entries);
-
- return node;
-}
-
-void
-lio_delete_sglist(struct lio_instr_queue *txq)
-{
- struct lio_device *lio_dev = txq->lio_dev;
- int iq_no = txq->q_index;
- struct lio_gather *g;
-
- if (lio_dev->glist_head == NULL)
- return;
-
- do {
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- if (g) {
- if (g->sg)
- rte_free(
- (void *)((unsigned long)g->sg - g->adjust));
- rte_free(g);
- }
- } while (g);
-}
-
-/**
- * \brief Setup gather lists
- * @param lio_dev - pointer to the lio device structure
- */
-int
-lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id)
-{
- struct lio_gather *g;
- int i;
-
- rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
-
- STAILQ_INIT(&lio_dev->glist_head[iq_no]);
-
- for (i = 0; i < num_descs; i++) {
- g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
- socket_id);
- if (g == NULL) {
- lio_dev_err(lio_dev,
- "lio_gather memory allocation failed for qno %d\n",
- iq_no);
- break;
- }
-
- g->sg_size =
- ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
-
- g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (g->sg == NULL) {
- lio_dev_err(lio_dev,
- "sg list memory allocation failed for qno %d\n",
- iq_no);
- rte_free(g);
- break;
- }
-
- /* The gather component should be aligned on 64-bit boundary */
- if (((unsigned long)g->sg) & 7) {
- g->adjust = 8 - (((unsigned long)g->sg) & 7);
- g->sg =
- (struct lio_sg_entry *)((unsigned long)g->sg +
- g->adjust);
- }
-
- STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
- entries);
- }
-
- if (i != num_descs) {
- lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
- return -ENOMEM;
- }
-
- return 0;
-}
-
-void
-lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
-{
- lio_delete_instr_queue(lio_dev, iq_no);
- rte_free(lio_dev->instr_queue[iq_no]);
- lio_dev->instr_queue[iq_no] = NULL;
- lio_dev->num_iqs--;
-}
-
-static inline uint32_t
-lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
- (uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending));
-}
-
-static inline int
-lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
-{
- return ((uint32_t)rte_atomic64_read(
- &lio_dev->instr_queue[q_no]->instr_pending) >=
- (lio_dev->instr_queue[q_no]->nb_desc - 2));
-}
-
-static int
-lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
-{
- struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
- uint32_t count = 10000;
-
- while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
- --count)
- lio_flush_iq(lio_dev, iq);
-
- return count ? 0 : 1;
-}
-
-static void
-lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
-{
- struct lio_soft_command *sc = sc_ptr;
- struct lio_dev_ctrl_cmd *ctrl_cmd;
- struct lio_ctrl_pkt *ctrl_pkt;
-
- ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
- ctrl_cmd = ctrl_pkt->ctrl_cmd;
- ctrl_cmd->cond = 1;
-
- lio_free_soft_command(sc);
-}
-
-static inline struct lio_soft_command *
-lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- uint32_t uddsize, datasize;
- uint32_t rdatasize;
- uint8_t *data;
-
- uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
-
- datasize = OCTEON_CMD_SIZE + uddsize;
- rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
-
- sc = lio_alloc_soft_command(lio_dev, datasize,
- rdatasize, sizeof(struct lio_ctrl_pkt));
- if (sc == NULL)
- return NULL;
-
- rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
-
- data = (uint8_t *)sc->virtdptr;
-
- rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
-
- lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
-
- if (uddsize) {
- /* Endian-Swap for UDD should have been done by caller. */
- rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
- }
-
- sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
-
- lio_prepare_soft_command(lio_dev, sc,
- LIO_OPCODE, LIO_OPCODE_CMD,
- 0, 0, 0);
-
- sc->callback = lio_ctrl_cmd_callback;
- sc->callback_arg = sc;
- sc->wait_time = ctrl_pkt->wait_time;
-
- return sc;
-}
-
-int
-lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
-{
- struct lio_soft_command *sc = NULL;
- int retval;
-
- sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
- if (sc == NULL) {
- lio_dev_err(lio_dev, "soft command allocation failed\n");
- return -1;
- }
-
- retval = lio_send_soft_command(lio_dev, sc);
- if (retval == LIO_IQ_SEND_FAILED) {
- lio_free_soft_command(sc);
- lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
- lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
- return -1;
- }
-
- return retval;
-}
-
-/** Send data packet to the device
- * @param lio_dev - lio device pointer
- * @param ndata - control structure with queueing, and buffer information
- *
- * @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue,
- * LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK if it
- * was sent successfully.
- */
-static inline int
-lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
-{
- return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
- ndata->buf, ndata->datasize, ndata->reqtype);
-}
-
-uint16_t
-lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
-{
- struct lio_instr_queue *txq = tx_queue;
- union lio_cmd_setup cmdsetup;
- struct lio_device *lio_dev;
- struct lio_iq_stats *stats;
- struct lio_data_pkt ndata;
- int i, processed = 0;
- struct rte_mbuf *m;
- uint32_t tag = 0;
- int status = 0;
- int iq_no;
-
- lio_dev = txq->lio_dev;
- iq_no = txq->txpciq.s.q_no;
- stats = &lio_dev->instr_queue[iq_no]->stats;
-
- if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
- PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
- lio_dev->linfo.link.s.link_up);
- goto xmit_failed;
- }
-
- lio_dev_cleanup_iq(lio_dev, iq_no);
-
- for (i = 0; i < nb_pkts; i++) {
- uint32_t pkt_len = 0;
-
- m = pkts[i];
-
- /* Prepare the attributes for the data to be passed to BASE. */
- memset(&ndata, 0, sizeof(struct lio_data_pkt));
-
- ndata.buf = m;
-
- ndata.q_no = iq_no;
- if (lio_iq_is_full(lio_dev, ndata.q_no)) {
- stats->tx_iq_busy++;
- if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit failed iq:%d full\n",
- ndata.q_no);
- break;
- }
- }
-
- cmdsetup.cmd_setup64 = 0;
- cmdsetup.s.iq_no = iq_no;
-
- /* check checksum offload flags to form cmd */
- if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
- cmdsetup.s.ip_csum = 1;
-
- if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
- cmdsetup.s.tnl_csum = 1;
- else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
- (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
- cmdsetup.s.transport_csum = 1;
-
- if (m->nb_segs == 1) {
- pkt_len = rte_pktmbuf_data_len(m);
- cmdsetup.s.u.datasize = pkt_len;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
- ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
- ndata.reqtype = LIO_REQTYPE_NORESP_NET;
- } else {
- struct lio_buf_free_info *finfo;
- struct lio_gather *g;
- rte_iova_t phyaddr;
- int i, frags;
-
- finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
- sizeof(*finfo), 0);
- if (finfo == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "free buffer alloc failed\n");
- goto xmit_failed;
- }
-
- rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
- g = (struct lio_gather *)list_delete_first_node(
- &lio_dev->glist_head[iq_no]);
- rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
- if (g == NULL) {
- PMD_TX_LOG(lio_dev, ERR,
- "Transmit scatter gather: glist null!\n");
- goto xmit_failed;
- }
-
- cmdsetup.s.gather = 1;
- cmdsetup.s.u.gatherptrs = m->nb_segs;
- lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
- &cmdsetup, tag);
-
- memset(g->sg, 0, g->sg_size);
- g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[0], m->data_len, 0);
- pkt_len = m->data_len;
- finfo->mbuf = m;
-
- /* First seg taken care above */
- frags = m->nb_segs - 1;
- i = 1;
- m = m->next;
- while (frags--) {
- g->sg[(i >> 2)].ptr[(i & 3)] =
- rte_mbuf_data_iova(m);
- lio_add_sg_size(&g->sg[(i >> 2)],
- m->data_len, (i & 3));
- pkt_len += m->data_len;
- i++;
- m = m->next;
- }
-
- phyaddr = rte_mem_virt2iova(g->sg);
- if (phyaddr == RTE_BAD_IOVA) {
- PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
- goto xmit_failed;
- }
-
- ndata.cmd.cmd3.dptr = phyaddr;
- ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
-
- finfo->g = g;
- finfo->lio_dev = lio_dev;
- finfo->iq_no = (uint64_t)iq_no;
- ndata.buf = finfo;
- }
-
- ndata.datasize = pkt_len;
-
- status = lio_send_data_pkt(lio_dev, &ndata);
-
- if (unlikely(status == LIO_IQ_SEND_FAILED)) {
- PMD_TX_LOG(lio_dev, ERR, "send failed\n");
- break;
- }
-
- if (unlikely(status == LIO_IQ_SEND_STOP)) {
- PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
- /* create space as iq is full */
- lio_dev_cleanup_iq(lio_dev, iq_no);
- }
-
- stats->tx_done++;
- stats->tx_tot_bytes += pkt_len;
- processed++;
- }
-
-xmit_failed:
- stats->tx_dropped += (nb_pkts - processed);
-
- return processed;
-}
-
-void
-lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
-{
- struct lio_instr_queue *txq;
- struct lio_droq *rxq;
- uint16_t i;
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- if (txq != NULL) {
- lio_dev_tx_queue_release(eth_dev, i);
- eth_dev->data->tx_queues[i] = NULL;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- if (rxq != NULL) {
- lio_dev_rx_queue_release(eth_dev, i);
- eth_dev->data->rx_queues[i] = NULL;
- }
- }
-}
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
deleted file mode 100644
index d2a45104f0..0000000000
--- a/drivers/net/liquidio/lio_rxtx.h
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_RXTX_H_
-#define _LIO_RXTX_H_
-
-#include <stdio.h>
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-#include <rte_memory.h>
-
-#include "lio_struct.h"
-
-#ifndef ROUNDUP4
-#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
-#endif
-
-#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem) \
- (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
-
-#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
-
-#define lio_uptime \
- (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
-
-/** Descriptor format.
- * The descriptor ring is made of descriptors which have 2 64-bit values:
- * -# Physical (bus) address of the data buffer.
- * -# Physical (bus) address of a lio_droq_info structure.
- * The device DMA's incoming packets and its information at the address
- * given by these descriptor fields.
- */
-struct lio_droq_desc {
- /** The buffer pointer */
- uint64_t buffer_ptr;
-
- /** The Info pointer */
- uint64_t info_ptr;
-};
-
-#define LIO_DROQ_DESC_SIZE (sizeof(struct lio_droq_desc))
-
-/** Information about packet DMA'ed by Octeon.
- * The format of the information available at Info Pointer after Octeon
- * has posted a packet. Not all descriptors have valid information. Only
- * the Info field of the first descriptor for a packet has information
- * about the packet.
- */
-struct lio_droq_info {
- /** The Output Receive Header. */
- union octeon_rh rh;
-
- /** The Length of the packet. */
- uint64_t length;
-};
-
-#define LIO_DROQ_INFO_SIZE (sizeof(struct lio_droq_info))
-
-/** Pointer to data buffer.
- * Driver keeps a pointer to the data buffer that it made available to
- * the Octeon device. Since the descriptor ring keeps physical (bus)
- * addresses, this field is required for the driver to keep track of
- * the virtual address pointers.
- */
-struct lio_recv_buffer {
- /** Packet buffer, including meta data. */
- void *buffer;
-
- /** Data in the packet buffer. */
- uint8_t *data;
-
-};
-
-#define LIO_DROQ_RECVBUF_SIZE (sizeof(struct lio_recv_buffer))
-
-#define LIO_DROQ_SIZE (sizeof(struct lio_droq))
-
-#define LIO_IQ_SEND_OK 0
-#define LIO_IQ_SEND_STOP 1
-#define LIO_IQ_SEND_FAILED -1
-
-/* conditions */
-#define LIO_REQTYPE_NONE 0
-#define LIO_REQTYPE_NORESP_NET 1
-#define LIO_REQTYPE_NORESP_NET_SG 2
-#define LIO_REQTYPE_SOFT_COMMAND 3
-
-struct lio_request_list {
- uint32_t reqtype;
- void *buf;
-};
-
-/*---------------------- INSTRUCTION FORMAT ----------------------------*/
-
-struct lio_instr3_64B {
- /** Pointer where the input data is available. */
- uint64_t dptr;
-
- /** Instruction Header. */
- uint64_t ih3;
-
- /** Instruction Header. */
- uint64_t pki_ih3;
-
- /** Input Request Header. */
- uint64_t irh;
-
- /** opcode/subcode specific parameters */
- uint64_t ossp[2];
-
- /** Return Data Parameters */
- uint64_t rdp;
-
- /** Pointer where the response for a RAW mode packet will be written
- * by Octeon.
- */
- uint64_t rptr;
-
-};
-
-union lio_instr_64B {
- struct lio_instr3_64B cmd3;
-};
-
-/** The size of each buffer in soft command buffer pool */
-#define LIO_SOFT_COMMAND_BUFFER_SIZE 1536
-
-/** Maximum number of buffers to allocate into soft command buffer pool */
-#define LIO_MAX_SOFT_COMMAND_BUFFERS 255
-
-struct lio_soft_command {
- /** Soft command buffer info. */
- struct lio_stailq_node node;
- uint64_t dma_addr;
- uint32_t size;
-
- /** Command and return status */
- union lio_instr_64B cmd;
-
-#define LIO_COMPLETION_WORD_INIT 0xffffffffffffffffULL
- uint64_t *status_word;
-
- /** Data buffer info */
- void *virtdptr;
- uint64_t dmadptr;
- uint32_t datasize;
-
- /** Return buffer info */
- void *virtrptr;
- uint64_t dmarptr;
- uint32_t rdatasize;
-
- /** Context buffer info */
- void *ctxptr;
- uint32_t ctxsize;
-
- /** Time out and callback */
- size_t wait_time;
- size_t timeout;
- uint32_t iq_no;
- void (*callback)(uint32_t, void *);
- void *callback_arg;
- struct rte_mbuf *mbuf;
-};
-
-struct lio_iq_post_status {
- int status;
- int index;
-};
-
-/* wqe
- * --------------- 0
- * | wqe word0-3 |
- * --------------- 32
- * | PCI IH |
- * --------------- 40
- * | RPTR |
- * --------------- 48
- * | PCI IRH |
- * --------------- 56
- * | OCTEON_CMD |
- * --------------- 64
- * | Addtl 8-BData |
- * | |
- * ---------------
- */
-
-union octeon_cmd {
- uint64_t cmd64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t cmd : 5;
-
- uint64_t more : 6; /* How many udd words follow the command */
-
- uint64_t reserved : 29;
-
- uint64_t param1 : 16;
-
- uint64_t param2 : 8;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- uint64_t param2 : 8;
-
- uint64_t param1 : 16;
-
- uint64_t reserved : 29;
-
- uint64_t more : 6;
-
- uint64_t cmd : 5;
-
-#endif
- } s;
-};
-
-#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
-
-/* Maximum number of 8-byte words that can be
- * sent in a NIC control message.
- */
-#define LIO_MAX_NCTRL_UDD 32
-
-/* Structure of control information passed by driver to the BASE
- * layer when sending control commands to Octeon device software.
- */
-struct lio_ctrl_pkt {
- /** Command to be passed to the Octeon device software. */
- union octeon_cmd ncmd;
-
- /** Send buffer */
- void *data;
- uint64_t dmadata;
-
- /** Response buffer */
- void *rdata;
- uint64_t dmardata;
-
- /** Additional data that may be needed by some commands. */
- uint64_t udd[LIO_MAX_NCTRL_UDD];
-
- /** Input queue to use to send this command. */
- uint64_t iq_no;
-
- /** Time to wait for Octeon software to respond to this control command.
- * If wait_time is 0, BASE assumes no response is expected.
- */
- size_t wait_time;
-
- struct lio_dev_ctrl_cmd *ctrl_cmd;
-};
-
-/** Structure of data information passed by driver to the BASE
- * layer when forwarding data to Octeon device software.
- */
-struct lio_data_pkt {
- /** Pointer to information maintained by NIC module for this packet. The
- * BASE layer passes this as-is to the driver.
- */
- void *buf;
-
- /** Type of buffer passed in "buf" above. */
- uint32_t reqtype;
-
- /** Total data bytes to be transferred in this command. */
- uint32_t datasize;
-
- /** Command to be passed to the Octeon device software. */
- union lio_instr_64B cmd;
-
- /** Input queue to use to send this command. */
- uint32_t q_no;
-};
-
-/** Structure passed by driver to BASE layer to prepare a command to send
- * network data to Octeon.
- */
-union lio_cmd_setup {
- struct {
- uint32_t iq_no : 8;
- uint32_t gather : 1;
- uint32_t timestamp : 1;
- uint32_t ip_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t tnl_csum : 1;
- uint32_t rsvd : 19;
-
- union {
- uint32_t datasize;
- uint32_t gatherptrs;
- } u;
- } s;
-
- uint64_t cmd_setup64;
-};
-
-/* Instruction Header */
-struct octeon_instr_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** Reserved1 */
- uint64_t reserved1 : 32;
-
- /** PKI port kind - PKIND */
- uint64_t pkind : 6;
-
- /** Reserved2 */
- uint64_t reserved2 : 4;
-
- /** Front Data size */
- uint64_t fsz : 6;
-
- /** Data length OR no. of entries in gather list */
- uint64_t dlengsz : 14;
-
- /** Gather indicator 1=gather*/
- uint64_t gather : 1;
-
- /** Reserved3 */
- uint64_t reserved3 : 1;
-
-#endif
-};
-
-/* PKI Instruction Header(PKI IH) */
-struct octeon_instr_pki_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** Wider bit */
- uint64_t w : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Tag Value */
- uint64_t tag : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
- /** Tag Value */
- uint64_t tag : 32;
-
- /** QPG Value */
- uint64_t qpg : 11;
-
- /** Reserved1 */
- uint64_t reserved1 : 2;
-
- /** Tag type */
- uint64_t tagtype : 2;
-
- /** Use Tag Type */
- uint64_t utt : 1;
-
- /** Skip Length */
- uint64_t sl : 8;
-
- /** Parse Mode */
- uint64_t pm : 3;
-
- /** Reserved2 */
- uint64_t reserved2 : 1;
-
- /** Use QPG */
- uint64_t uqpg : 1;
-
- /** Use Tag */
- uint64_t utag : 1;
-
- /** Raw mode indicator 1 = RAW */
- uint64_t raw : 1;
-
- /** Wider bit */
- uint64_t w : 1;
-#endif
-};
-
-/** Input Request Header */
-struct octeon_instr_irh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t opcode : 4;
- uint64_t rflag : 1;
- uint64_t subcode : 7;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t reserved : 5;
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t ossp : 32; /* opcode/subcode specific parameters */
- uint64_t reserved : 5;
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t subcode : 7;
- uint64_t rflag : 1;
- uint64_t opcode : 4;
-#endif
-};
-
-/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
-#define OCTEON_SOFT_CMD_RESP_IH3 (40 + 8)
-/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
-#define OCTEON_PCI_CMD_O3 (24 + 8)
-
-/** Return Data Parameters */
-struct octeon_instr_rdp {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t reserved : 49;
- uint64_t pcie_port : 3;
- uint64_t rlen : 12;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- uint64_t rlen : 12;
- uint64_t pcie_port : 3;
- uint64_t reserved : 49;
-#endif
-};
-
-union octeon_packet_params {
- uint32_t pkt_params32;
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint32_t reserved : 24;
- uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
- /* Perform Outer transport header checksum */
- uint32_t transport_csum : 1;
- /* Find tunnel, and perform transport csum. */
- uint32_t tnl_csum : 1;
- uint32_t tsflag : 1; /* Timestamp this packet */
- uint32_t ipsec_ops : 4; /* IPsec operation */
-#else
- uint32_t ipsec_ops : 4;
- uint32_t tsflag : 1;
- uint32_t tnl_csum : 1;
- uint32_t transport_csum : 1;
- uint32_t ip_csum : 1;
- uint32_t reserved : 7;
-#endif
- } s;
-};
-
-/** Utility function to prepare a 64B NIC instruction based on a setup command
- * @param cmd - pointer to instruction to be filled in.
- * @param setup - pointer to the setup structure
- * @param q_no - which queue for back pressure
- *
- * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
- */
-static inline void
-lio_prepare_pci_cmd(struct lio_device *lio_dev,
- union lio_instr_64B *cmd,
- union lio_cmd_setup *setup,
- uint32_t tag)
-{
- union octeon_packet_params packet_params;
- struct octeon_instr_pki_ih3 *pki_ih3;
- struct octeon_instr_irh *irh;
- struct octeon_instr_ih3 *ih3;
- int port;
-
- memset(cmd, 0, sizeof(union lio_instr_64B));
-
- ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
- pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
-
- /* assume that rflag is cleared so therefore front data will only have
- * irh and ossp[1] and ossp[2] for a total of 24 bytes
- */
- ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
- /* PKI IH */
- ih3->fsz = OCTEON_PCI_CMD_O3;
-
- if (!setup->s.gather) {
- ih3->dlengsz = setup->s.u.datasize;
- } else {
- ih3->gather = 1;
- ih3->dlengsz = setup->s.u.gatherptrs;
- }
-
- pki_ih3->w = 1;
- pki_ih3->raw = 0;
- pki_ih3->utag = 0;
- pki_ih3->utt = 1;
- pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
-
- port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
-
- if (tag)
- pki_ih3->tag = tag;
- else
- pki_ih3->tag = LIO_DATA(port);
-
- pki_ih3->tagtype = OCTEON_ORDERED_TAG;
- pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
- pki_ih3->pm = 0x0; /* parse from L2 */
- pki_ih3->sl = 32; /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
-
- irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
-
- irh->opcode = LIO_OPCODE;
- irh->subcode = LIO_OPCODE_NW_DATA;
-
- packet_params.pkt_params32 = 0;
- packet_params.s.ip_csum = setup->s.ip_csum;
- packet_params.s.transport_csum = setup->s.transport_csum;
- packet_params.s.tnl_csum = setup->s.tnl_csum;
- packet_params.s.tsflag = setup->s.timestamp;
-
- irh->ossp = packet_params.pkt_params32;
-}
-
-int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
-void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev,
- uint32_t datasize, uint32_t rdatasize,
- uint32_t ctxsize);
-void lio_prepare_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc,
- uint8_t opcode, uint8_t subcode,
- uint32_t irh_ossp, uint64_t ossp0,
- uint64_t ossp1);
-int lio_send_soft_command(struct lio_device *lio_dev,
- struct lio_soft_command *sc);
-void lio_free_soft_command(struct lio_soft_command *sc);
-
-/** Send control packet to the device
- * @param lio_dev - lio device pointer
- * @param nctrl - control structure with command, timeout, and callback info
- *
- * @returns IQ_FAILED if it failed to add to the input queue. IQ_STOP if it the
- * queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
- */
-int lio_send_ctrl_pkt(struct lio_device *lio_dev,
- struct lio_ctrl_pkt *ctrl_pkt);
-
-/** Maximum ordered requests to process in every invocation of
- * lio_process_ordered_list(). The function will continue to process requests
- * as long as it can find one that has finished processing. If it keeps
- * finding requests that have completed, the function can run for ever. The
- * value defined here sets an upper limit on the number of requests it can
- * process before it returns control to the poll thread.
- */
-#define LIO_MAX_ORD_REQS_TO_PROCESS 4096
-
-/** Error codes used in Octeon Host-Core communication.
- *
- * 31 16 15 0
- * ----------------------------
- * | | |
- * ----------------------------
- * Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
- * are reserved to identify the group to which the error code belongs. The
- * lower 16-bits, called Minor Error Number, carry the actual code.
- *
- * So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
- */
-/** Status for a request.
- * If the request is successfully queued, the driver will return
- * a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
- * the driver if the response for request failed to arrive before a
- * time-out period or if the request processing * got interrupted due to
- * a signal respectively.
- */
-enum {
- /** A value of 0x00000000 indicates no error i.e. success */
- LIO_REQUEST_DONE = 0x00000000,
- /** (Major number: 0x0000; Minor Number: 0x0001) */
- LIO_REQUEST_PENDING = 0x00000001,
- LIO_REQUEST_TIMEOUT = 0x00000003,
-
-};
-
-/*------ Error codes used by firmware (bits 15..0 set by firmware */
-#define LIO_FIRMWARE_MAJOR_ERROR_CODE 0x0001
-#define LIO_FIRMWARE_STATUS_CODE(status) \
- ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
-
-/** Initialize the response lists. The number of response lists to create is
- * given by count.
- * @param lio_dev - the lio device structure.
- */
-void lio_setup_response_list(struct lio_device *lio_dev);
-
-/** Check the status of first entry in the ordered list. If the instruction at
- * that entry finished processing or has timed-out, the entry is cleaned.
- * @param lio_dev - the lio device structure.
- * @return 1 if the ordered list is empty, 0 otherwise.
- */
-int lio_process_ordered_list(struct lio_device *lio_dev);
-
-#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count) \
- (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
-
-static inline void
-lio_swap_8B_data(uint64_t *data, uint32_t blocks)
-{
- while (blocks) {
- *data = rte_cpu_to_be_64(*data);
- blocks--;
- data++;
- }
-}
-
-static inline uint64_t
-lio_map_ring(void *buf)
-{
- rte_iova_t dma_addr;
-
- dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
-
- return (uint64_t)dma_addr;
-}
-
-static inline uint64_t
-lio_map_ring_info(struct lio_droq *droq, uint32_t i)
-{
- rte_iova_t dma_addr;
-
- dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
-
- return (uint64_t)dma_addr;
-}
-
-static inline int
-lio_opcode_slow_path(union octeon_rh *rh)
-{
- uint16_t subcode1, subcode2;
-
- subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
- subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
-
- return subcode2 != subcode1;
-}
-
-static inline void
-lio_add_sg_size(struct lio_sg_entry *sg_entry,
- uint16_t size, uint32_t pos)
-{
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- sg_entry->u.size[pos] = size;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- sg_entry->u.size[3 - pos] = size;
-#endif
-}
-
-/* Macro to increment index.
- * Index is incremented by count; if the sum exceeds
- * max, index is wrapped-around to the start.
- */
-static inline uint32_t
-lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
-{
- if ((index + count) >= max)
- index = index + count - max;
- else
- index += count;
-
- return index;
-}
-
-int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
- int desc_size, struct rte_mempool *mpool,
- unsigned int socket_id);
-uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t budget);
-void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
-
-void lio_delete_sglist(struct lio_instr_queue *txq);
-int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
- int fw_mapped_iq, int num_descs, unsigned int socket_id);
-uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
- uint16_t nb_pkts);
-int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
-int lio_setup_iq(struct lio_device *lio_dev, int q_index,
- union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
- unsigned int socket_id);
-int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
-void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
-/** Setup instruction queue zero for the device
- * @param lio_dev which lio device to setup
- *
- * @return 0 if success. -1 if fails
- */
-int lio_setup_instr_queue0(struct lio_device *lio_dev);
-void lio_free_instr_queue0(struct lio_device *lio_dev);
-void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
-#endif /* _LIO_RXTX_H_ */
diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
deleted file mode 100644
index 10270c560e..0000000000
--- a/drivers/net/liquidio/lio_struct.h
+++ /dev/null
@@ -1,661 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_STRUCT_H_
-#define _LIO_STRUCT_H_
-
-#include <stdio.h>
-#include <stdint.h>
-#include <sys/queue.h>
-
-#include <rte_spinlock.h>
-#include <rte_atomic.h>
-
-#include "lio_hw_defs.h"
-
-struct lio_stailq_node {
- STAILQ_ENTRY(lio_stailq_node) entries;
-};
-
-STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
-
-struct lio_version {
- uint16_t major;
- uint16_t minor;
- uint16_t micro;
- uint16_t reserved;
-};
-
-/** Input Queue statistics. Each input queue has four stats fields. */
-struct lio_iq_stats {
- uint64_t instr_posted; /**< Instructions posted to this queue. */
- uint64_t instr_processed; /**< Instructions processed in this queue. */
- uint64_t instr_dropped; /**< Instructions that could not be processed */
- uint64_t bytes_sent; /**< Bytes sent through this queue. */
- uint64_t tx_done; /**< Num of packets sent to network. */
- uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
- uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
- uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
-};
-
-/** Output Queue statistics. Each output queue has four stats fields. */
-struct lio_droq_stats {
- /** Number of packets received in this queue. */
- uint64_t pkts_received;
-
- /** Bytes received by this queue. */
- uint64_t bytes_received;
-
- /** Packets dropped due to no memory available. */
- uint64_t dropped_nomem;
-
- /** Packets dropped due to large number of pkts to process. */
- uint64_t dropped_toomany;
-
- /** Number of packets sent to stack from this queue. */
- uint64_t rx_pkts_received;
-
- /** Number of Bytes sent to stack from this queue. */
- uint64_t rx_bytes_received;
-
- /** Num of Packets dropped due to receive path failures. */
- uint64_t rx_dropped;
-
- /** Num of vxlan packets received; */
- uint64_t rx_vxlan;
-
- /** Num of failures of rte_pktmbuf_alloc() */
- uint64_t rx_alloc_failure;
-
-};
-
-/** The Descriptor Ring Output Queue structure.
- * This structure has all the information required to implement a
- * DROQ.
- */
-struct lio_droq {
- /** A spinlock to protect access to this ring. */
- rte_spinlock_t lock;
-
- uint32_t q_no;
-
- uint32_t pkt_count;
-
- struct lio_device *lio_dev;
-
- /** The 8B aligned descriptor ring starts at this address. */
- struct lio_droq_desc *desc_ring;
-
- /** Index in the ring where the driver should read the next packet */
- uint32_t read_idx;
-
- /** Index in the ring where Octeon will write the next packet */
- uint32_t write_idx;
-
- /** Index in the ring where the driver will refill the descriptor's
- * buffer
- */
- uint32_t refill_idx;
-
- /** Packets pending to be processed */
- rte_atomic64_t pkts_pending;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** The number of descriptors pending refill. */
- uint32_t refill_count;
-
- uint32_t refill_threshold;
-
- /** The 8B aligned info ptrs begin from this address. */
- struct lio_droq_info *info_list;
-
- /** The receive buffer list. This list has the virtual addresses of the
- * buffers.
- */
- struct lio_recv_buffer *recv_buf_list;
-
- /** The size of each buffer pointed by the buffer pointer. */
- uint32_t buffer_size;
-
- /** Pointer to the mapped packet credit register.
- * Host writes number of info/buffer ptrs available to this register
- */
- void *pkts_credit_reg;
-
- /** Pointer to the mapped packet sent register.
- * Octeon writes the number of packets DMA'ed to host memory
- * in this register.
- */
- void *pkts_sent_reg;
-
- /** Statistics for this DROQ. */
- struct lio_droq_stats stats;
-
- /** DMA mapped address of the DROQ descriptor ring. */
- size_t desc_ring_dma;
-
- /** Info ptr list are allocated at this virtual address. */
- size_t info_base_addr;
-
- /** DMA mapped address of the info list */
- size_t info_list_dma;
-
- /** Allocated size of info list. */
- uint32_t info_alloc_size;
-
- /** Memory zone **/
- const struct rte_memzone *desc_ring_mz;
- const struct rte_memzone *info_mz;
- struct rte_mempool *mpool;
-};
-
-/** Receive Header */
-union octeon_rh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t rh64;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 17;
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- } r;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t extra : 28;
- uint64_t vlan : 12;
- uint64_t priority : 3;
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
- uint64_t encap_on : 1;
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- } r_dh;
- struct {
- uint64_t opcode : 4;
- uint64_t subcode : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t reserved : 8;
- uint64_t extra : 25;
- uint64_t gmxport : 16;
- } r_nic_info;
-#else
- uint64_t rh64;
- struct {
- uint64_t ossp : 32; /** opcode/subcode specific parameters */
- uint64_t reserved : 17;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r;
- struct {
- uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
- uint64_t encap_on : 1;
- uint64_t has_hwtstamp : 1; /** 1 = has hwtstamp */
- uint64_t csum_verified : 3; /** checksum verified. */
- uint64_t priority : 3;
- uint64_t vlan : 12;
- uint64_t extra : 28;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_dh;
- struct {
- uint64_t gmxport : 16;
- uint64_t extra : 25;
- uint64_t reserved : 8;
- uint64_t len : 3; /** additional 64-bit words */
- uint64_t subcode : 8;
- uint64_t opcode : 4;
- } r_nic_info;
-#endif
-};
-
-#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
-
-/** The txpciq info passed to host from the firmware */
-union octeon_txpciq {
- uint64_t txpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t port : 8;
- uint64_t pkind : 6;
- uint64_t use_qpg : 1;
- uint64_t qpg : 11;
- uint64_t aura_num : 10;
- uint64_t reserved : 20;
-#else
- uint64_t reserved : 20;
- uint64_t aura_num : 10;
- uint64_t qpg : 11;
- uint64_t use_qpg : 1;
- uint64_t pkind : 6;
- uint64_t port : 8;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** The instruction (input) queue.
- * The input queue is used to post raw (instruction) mode data or packet
- * data to Octeon device from the host. Each input queue for
- * a LIO device has one such structure to represent it.
- */
-struct lio_instr_queue {
- /** A spinlock to protect access to the input ring. */
- rte_spinlock_t lock;
-
- rte_spinlock_t post_lock;
-
- struct lio_device *lio_dev;
-
- uint32_t pkt_in_done;
-
- rte_atomic64_t iq_flush_running;
-
- /** Flag that indicates if the queue uses 64 byte commands. */
- uint32_t iqcmd_64B:1;
-
- /** Queue info. */
- union octeon_txpciq txpciq;
-
- uint32_t rsvd:17;
-
- uint32_t status:8;
-
- /** Number of descriptors in this ring. */
- uint32_t nb_desc;
-
- /** Index in input ring where the driver should write the next packet */
- uint32_t host_write_index;
-
- /** Index in input ring where Octeon is expected to read the next
- * packet.
- */
- uint32_t lio_read_index;
-
- /** This index aids in finding the window in the queue where Octeon
- * has read the commands.
- */
- uint32_t flush_index;
-
- /** This field keeps track of the instructions pending in this queue. */
- rte_atomic64_t instr_pending;
-
- /** Pointer to the Virtual Base addr of the input ring. */
- uint8_t *base_addr;
-
- struct lio_request_list *request_list;
-
- /** Octeon doorbell register for the ring. */
- void *doorbell_reg;
-
- /** Octeon instruction count register for this ring. */
- void *inst_cnt_reg;
-
- /** Number of instructions pending to be posted to Octeon. */
- uint32_t fill_cnt;
-
- /** Statistics for this input queue. */
- struct lio_iq_stats stats;
-
- /** DMA mapped base address of the input descriptor ring. */
- uint64_t base_addr_dma;
-
- /** Application context */
- void *app_ctx;
-
- /* network stack queue index */
- int q_index;
-
- /* Memory zone */
- const struct rte_memzone *iq_mz;
-};
-
-/** This structure is used by driver to store information required
- * to free the mbuff when the packet has been fetched by Octeon.
- * Bytes offset below assume worst-case of a 64-bit system.
- */
-struct lio_buf_free_info {
- /** Bytes 1-8. Pointer to network device private structure. */
- struct lio_device *lio_dev;
-
- /** Bytes 9-16. Pointer to mbuff. */
- struct rte_mbuf *mbuf;
-
- /** Bytes 17-24. Pointer to gather list. */
- struct lio_gather *g;
-
- /** Bytes 25-32. Physical address of mbuf->data or gather list. */
- uint64_t dptr;
-
- /** Bytes 33-47. Piggybacked soft command, if any */
- struct lio_soft_command *sc;
-
- /** Bytes 48-63. iq no */
- uint64_t iq_no;
-};
-
-/* The Scatter-Gather List Entry. The scatter or gather component used with
- * input instruction has this format.
- */
-struct lio_sg_entry {
- /** The first 64 bit gives the size of data in each dptr. */
- union {
- uint16_t size[4];
- uint64_t size64;
- } u;
-
- /** The 4 dptr pointers for this entry. */
- uint64_t ptr[4];
-};
-
-#define LIO_SG_ENTRY_SIZE (sizeof(struct lio_sg_entry))
-
-/** Structure of a node in list of gather components maintained by
- * driver for each network device.
- */
-struct lio_gather {
- /** List manipulation. Next and prev pointers. */
- struct lio_stailq_node list;
-
- /** Size of the gather component at sg in bytes. */
- int sg_size;
-
- /** Number of bytes that sg was adjusted to make it 8B-aligned. */
- int adjust;
-
- /** Gather component that can accommodate max sized fragment list
- * received from the IP layer.
- */
- struct lio_sg_entry *sg;
-};
-
-struct lio_rss_ctx {
- uint16_t hash_key_size;
- uint8_t hash_key[LIO_RSS_MAX_KEY_SZ];
- /* Ideally a factor of number of queues */
- uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
- uint8_t itable_size;
- uint8_t ip;
- uint8_t tcp_hash;
- uint8_t ipv6;
- uint8_t ipv6_tcp_hash;
- uint8_t ipv6_ex;
- uint8_t ipv6_tcp_ex_hash;
- uint8_t hash_disable;
-};
-
-struct lio_io_enable {
- uint64_t iq;
- uint64_t oq;
- uint64_t iq64B;
-};
-
-struct lio_fn_list {
- void (*setup_iq_regs)(struct lio_device *, uint32_t);
- void (*setup_oq_regs)(struct lio_device *, uint32_t);
-
- int (*setup_mbox)(struct lio_device *);
- void (*free_mbox)(struct lio_device *);
-
- int (*setup_device_regs)(struct lio_device *);
- int (*enable_io_queues)(struct lio_device *);
- void (*disable_io_queues)(struct lio_device *);
-};
-
-struct lio_pf_vf_hs_word {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** RESERVED */
- uint64_t reserved : 16;
-
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
- /** RESERVED */
- uint64_t reserved : 16;
-
- /** app that currently running on OCTEON */
- uint64_t app_mode : 8;
-
- /** OCTEON coprocessor clock multiplier */
- uint64_t coproc_tics_per_us : 16;
-
- /** OCTEON core clock multiplier */
- uint64_t core_tics_per_us : 16;
-
- /** PKIND value assigned for the DPI interface */
- uint64_t pkind : 8;
-#endif
-};
-
-struct lio_sriov_info {
- /** Number of rings assigned to VF */
- uint32_t rings_per_vf;
-
- /** Number of VF devices enabled */
- uint32_t num_vfs;
-};
-
-/* Head of a response list */
-struct lio_response_list {
- /** List structure to add delete pending entries to */
- struct lio_stailq_head head;
-
- /** A lock for this response list */
- rte_spinlock_t lock;
-
- rte_atomic64_t pending_req_count;
-};
-
-/* Structure to define the configuration attributes for each Input queue. */
-struct lio_iq_config {
- /* Max number of IQs available */
- uint8_t max_iqs;
-
- /** Pending list size (usually set to the sum of the size of all Input
- * queues)
- */
- uint32_t pending_list_size;
-
- /** Command size - 32 or 64 bytes */
- uint32_t instr_type;
-};
-
-/* Structure to define the configuration attributes for each Output queue. */
-struct lio_oq_config {
- /* Max number of OQs available */
- uint8_t max_oqs;
-
- /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
- uint32_t info_ptr;
-
- /** The number of buffers that were consumed during packet processing by
- * the driver on this Output queue before the driver attempts to
- * replenish the descriptor ring with new buffers.
- */
- uint32_t refill_threshold;
-};
-
-/* Structure to define the configuration. */
-struct lio_config {
- uint16_t card_type;
- const char *card_name;
-
- /** Input Queue attributes. */
- struct lio_iq_config iq;
-
- /** Output Queue attributes. */
- struct lio_oq_config oq;
-
- int num_nic_ports;
-
- int num_def_tx_descs;
-
- /* Num of desc for rx rings */
- int num_def_rx_descs;
-
- int def_rx_buf_size;
-};
-
-/** Status of a RGMII Link on Octeon as seen by core driver. */
-union octeon_link_status {
- uint64_t link_status64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t duplex : 8;
- uint64_t mtu : 16;
- uint64_t speed : 16;
- uint64_t link_up : 1;
- uint64_t autoneg : 1;
- uint64_t if_mode : 5;
- uint64_t pause : 1;
- uint64_t flashing : 1;
- uint64_t reserved : 15;
-#else
- uint64_t reserved : 15;
- uint64_t flashing : 1;
- uint64_t pause : 1;
- uint64_t if_mode : 5;
- uint64_t autoneg : 1;
- uint64_t link_up : 1;
- uint64_t speed : 16;
- uint64_t mtu : 16;
- uint64_t duplex : 8;
-#endif
- } s;
-};
-
-/** The rxpciq info passed to host from the firmware */
-union octeon_rxpciq {
- uint64_t rxpciq64;
-
- struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t q_no : 8;
- uint64_t reserved : 56;
-#else
- uint64_t reserved : 56;
- uint64_t q_no : 8;
-#endif
- } s;
-};
-
-/** Information for a OCTEON ethernet interface shared between core & host. */
-struct octeon_link_info {
- union octeon_link_status link;
- uint64_t hw_addr;
-
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- uint64_t gmxport : 16;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t rsvd : 30;
- uint64_t num_txpciq : 8;
- uint64_t num_rxpciq : 8;
-#else
- uint64_t num_rxpciq : 8;
- uint64_t num_txpciq : 8;
- uint64_t rsvd : 30;
- uint64_t vlan_is_admin_assigned : 1;
- uint64_t macaddr_is_admin_assigned : 1;
- uint64_t gmxport : 16;
-#endif
-
- union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
- union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
-};
-
-/* ----------------------- THE LIO DEVICE --------------------------- */
-/** The lio device.
- * Each lio device has this structure to represent all its
- * components.
- */
-struct lio_device {
- /** PCI device pointer */
- struct rte_pci_device *pci_dev;
-
- /** Octeon Chip type */
- uint16_t chip_id;
- uint16_t pf_num;
- uint16_t vf_num;
-
- /** This device's PCIe port used for traffic. */
- uint16_t pcie_port;
-
- /** The state of this device */
- rte_atomic64_t status;
-
- uint8_t intf_open;
-
- struct octeon_link_info linfo;
-
- uint8_t *hw_addr;
-
- struct lio_fn_list fn_list;
-
- uint32_t num_iqs;
-
- /** Guards each glist */
- rte_spinlock_t *glist_lock;
- /** Array of gather component linked lists */
- struct lio_stailq_head *glist_head;
-
- /* The pool containing pre allocated buffers used for soft commands */
- struct rte_mempool *sc_buf_pool;
-
- /** The input instruction queues */
- struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
-
- /** The singly-linked tail queues of instruction response */
- struct lio_response_list response_list;
-
- uint32_t num_oqs;
-
- /** The DROQ output queues */
- struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
-
- struct lio_io_enable io_qmask;
-
- struct lio_sriov_info sriov_info;
-
- struct lio_pf_vf_hs_word pfvf_hsword;
-
- /** Mail Box details of each lio queue. */
- struct lio_mbox **mbox;
-
- char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
-
- const struct lio_config *default_config;
-
- struct rte_eth_dev *eth_dev;
-
- uint64_t ifflags;
- uint8_t max_rx_queues;
- uint8_t max_tx_queues;
- uint8_t nb_rx_queues;
- uint8_t nb_tx_queues;
- uint8_t port_configured;
- struct lio_rss_ctx rss_state;
- uint16_t port_id;
- char firmware_version[LIO_FW_VERSION_LENGTH];
-};
-#endif /* _LIO_STRUCT_H_ */
diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
deleted file mode 100644
index ebadbf3dea..0000000000
--- a/drivers/net/liquidio/meson.build
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-sources = files(
- 'base/lio_23xx_vf.c',
- 'base/lio_mbox.c',
- 'lio_ethdev.c',
- 'lio_rxtx.c',
-)
-includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index b1df17ce8c..f68bbc27a7 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -36,7 +36,6 @@ drivers = [
'ipn3ke',
'ixgbe',
'kni',
- 'liquidio',
'mana',
'memif',
'mlx4',
--
2.40.1
^ permalink raw reply [relevance 1%]
* Minutes of Technical Board Meeting, 2023-01-11
[not found] ` <DS0PR11MB73090EC350B82E0730D0D9A197CE9@DS0PR11MB7309.namprd11.prod.outlook.com>
@ 2023-05-05 15:05 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-05 15:05 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
NOTE: The technical board meetings are on every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC. Meetings are public, and DPDK
community members are welcome to attend.
NOTE: Next meeting will be on Wednesday 2023-01-25 @ 3pm UTC, and will
be chaired by Bruce.
Agenda Items
============
1) C99 standard
---------------
Future support for C11 atomics raised the question of whether C99 should
be required for DPDK. Several places already use C99, but it is not
required project-wide. DPDK uses some C11 features now, but they are
marked as extensions where used.
The open issues are:
- We do not want to require applications to use C99, but we want to
allow applications that do. This impacts inline functions in headers.
- Need to announce. Should not cause API/ABI breakage.
- the testing and infrastructure are impacted as well.
- need to keep inline for performance reasons.
Bruce is adding build support for test and compatibility.
He is investigating the fallout from project-wide enablement.
2) Technical Writer
------------------
Possible candidate did not work out. Two candidates under
review.
3) MIT License
--------------
Original Governing Board wording for MIT license exception
became overly complicated. Linux Foundation legal expert
revised it. Governing Board is reviewing.
4) Governing Board
------------------
DPDK Technical Board member to Governing Board:
- past Stephen; current Thomas; next Aaron
Recent vote on modification to charter to codify treasurer role.
Discussion on marketing. Under the existing Linux Foundation model,
the DPDK project pays for things that are not necessary while not
getting the expected support.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver
@ 2023-05-02 14:18 5% ` Ferruh Yigit
2023-05-08 13:44 1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-05-02 14:18 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, Shijith Thotton,
Srisivasubramanian Srinivasan, Anatoly Burakov, David Marchand
On 4/28/2023 11:31 AM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
>
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
>
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
>
> Due to the above reasons, the driver is removed from DPDK 23.07.
>
> Also removed deprecation notice entry for the removal in
> doc/guides/rel_notes/deprecation.rst.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
> MAINTAINERS | 8 -
> doc/guides/nics/features/liquidio.ini | 29 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/liquidio.rst | 169 --
> doc/guides/rel_notes/deprecation.rst | 7 -
> doc/guides/rel_notes/release_23_07.rst | 9 +-
> drivers/net/liquidio/base/lio_23xx_reg.h | 165 --
> drivers/net/liquidio/base/lio_23xx_vf.c | 513 ------
> drivers/net/liquidio/base/lio_23xx_vf.h | 63 -
> drivers/net/liquidio/base/lio_hw_defs.h | 239 ---
> drivers/net/liquidio/base/lio_mbox.c | 246 ---
> drivers/net/liquidio/base/lio_mbox.h | 102 -
> drivers/net/liquidio/lio_ethdev.c | 2147 ----------------------
> drivers/net/liquidio/lio_ethdev.h | 179 --
> drivers/net/liquidio/lio_logs.h | 58 -
> drivers/net/liquidio/lio_rxtx.c | 1804 ------------------
> drivers/net/liquidio/lio_rxtx.h | 740 --------
> drivers/net/liquidio/lio_struct.h | 661 -------
> drivers/net/liquidio/meson.build | 16 -
> drivers/net/meson.build | 1 -
> 20 files changed, 1 insertion(+), 7156 deletions(-)
> delete mode 100644 doc/guides/nics/features/liquidio.ini
> delete mode 100644 doc/guides/nics/liquidio.rst
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
> delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
> delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
> delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
> delete mode 100644 drivers/net/liquidio/lio_ethdev.c
> delete mode 100644 drivers/net/liquidio/lio_ethdev.h
> delete mode 100644 drivers/net/liquidio/lio_logs.h
> delete mode 100644 drivers/net/liquidio/lio_rxtx.c
> delete mode 100644 drivers/net/liquidio/lio_rxtx.h
> delete mode 100644 drivers/net/liquidio/lio_struct.h
> delete mode 100644 drivers/net/liquidio/meson.build
>
This causes a warning in the ABI check script [1], not because there is an
ABI breakage, but because of how the script works; the script needs to be fixed as well.
[1]
Checking ABI compatibility of build-gcc-shared
.../dpdk-next-net/devtools/../devtools/check-abi.sh
/tmp/dpdk-abiref/v22.11.1/build-gcc-shared
.../dpdk-next-net/build-gcc-shared/install
Error: cannot find librte_net_liquidio.so.23.0 in
.../dpdk-next-net/build-gcc-shared/install
<...>
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -59,14 +59,7 @@ New Features
> Removed Items
> -------------
>
> -.. This section should contain removed items in this release. Sample format:
> -
> - * Add a short 1-2 sentence description of the removed item
> - in the past tense.
> -
> - This section is a comment. Do not overwrite or remove it.
> - Also, make sure to start the actual text at the margin.
> - =======================================================
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
>
>
No need to remove the section comment.
Rest looks good to me.
^ permalink raw reply [relevance 5%]
* [PATCH v8 10/14] eal: expand most macros to empty when using MSVC
@ 2023-05-02 3:15 5% ` Tyler Retzlaff
2023-05-02 3:15 3% ` [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-05-02 3:15 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now, expand a number of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking the work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where that is not possible, provide alternate
macros to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (!!(x))
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (!!(x))
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..0c55a23 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
#define RTE_STD_C11
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/*
* RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
* while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
(type *)(ptr); \
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
+#else
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
#endif
/** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC
2023-05-02 3:15 5% ` [PATCH v8 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-05-02 3:15 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-05-02 3:15 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-04-18 8:33 4% ` Jerin Jacob
@ 2023-04-24 22:41 3% ` Thomas Monjalon
2023-05-19 8:07 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-04-24 22:41 UTC (permalink / raw)
To: Stephen Hemminger, Jerin Jacob
Cc: Nithin Dabilpuram, Akhil Goyal, jerinj, dev, Morten Brørup,
techboard
18/04/2023 10:33, Jerin Jacob:
> On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Tue, 11 Apr 2023 15:34:07 +0530
> > Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
> >
> > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > index 4bacf9fcd9..866cd4e8ee 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> > > */
> > > uint32_t ip_reassembly_en : 1;
> > >
> > > + /** Enable out of place processing on inline inbound packets.
> > > + *
> > > + * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > > + * inbound SA if supported by driver. PMD need to register mbuf
> > > + * dynamic field using rte_security_oop_dynfield_register()
> > > + * and security session creation would fail if dynfield is not
> > > + * registered successfully.
> > > + * * 0: Disable OOP processing for this session (default).
> > > + */
> > > + uint32_t ingress_oop : 1;
> > > +
> > > /** Reserved bit fields for future extension
> > > *
> > > * User should ensure reserved_opts is cleared as it may change in
> > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > *
> > > * Note: Reduce number of bits in reserved_opts for every new option.
> > > */
> > > - uint32_t reserved_opts : 17;
> > > + uint32_t reserved_opts : 16;
> > > };
> >
> > NAK
> > Let me repeat the reserved bit rant. YAGNI
> >
> > Reserved space is not usable without ABI breakage unless the existing
> > code enforces that reserved space has to be zero.
> >
> > Just saying "User should ensure reserved_opts is cleared" is not enough.
>
> Yes. I think we need to enforce having _init functions for the
> structures which use reserved fields.
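As a minimal illustration of that idea, a sketch of such an _init helper is
below; the structure and function names are hypothetical and not taken from
this series:

#include <stdint.h>
#include <string.h>

/* Hypothetical example of an _init helper: zero the whole structure so
 * reserved fields are guaranteed to be clear and can be repurposed later
 * without silently inheriting garbage from the application.
 */
struct example_sa_options {
	uint32_t ip_reassembly_en : 1;
	uint32_t reserved_opts : 31;
};

static inline void
example_sa_options_init(struct example_sa_options *opts)
{
	memset(opts, 0, sizeof(*opts));
}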
>
> On the same note on YAGNI, I am wondering why NOT introduce an
> RTE_NEXT_ABI macro kind of scheme to compile out ABI-breaking changes.
> By keeping RTE_NEXT_ABI disabled by default, and enabling it explicitly
> if the user wants it, we avoid waiting a year for ABI-breaking changes.
> There are a lot of "fixed appliance" customers (not OS-distribution-driven
> customers) who are willing to recompile DPDK for a new feature.
> What are we losing with this scheme?
RTE_NEXT_ABI is described in the ABI policy.
We are not doing it currently, but I think we could
when it does not complicate the code too much.
The only problems I see are:
- more #ifdef clutter
- 2 binary versions to test
- CI and checks must handle RTE_NEXT_ABI version
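For illustration only, a minimal sketch of how such a guard might look is
below, using the ingress_oop bit from the patch quoted above but in a
hypothetical structure; this is an assumption about usage, not code from
the series:

#include <stdint.h>

/* Hypothetical sketch: the ABI-breaking option is compiled in only for a
 * next-ABI build, so the default build keeps the current layout.
 */
struct example_ipsec_sa_options {
	/* ... existing option bits ... */
	uint32_t ip_reassembly_en : 1;
#ifdef RTE_NEXT_ABI
	uint32_t ingress_oop : 1;
	uint32_t reserved_opts : 16;
#else
	uint32_t reserved_opts : 17;
#endif
};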
^ permalink raw reply [relevance 3%]
* Re: [RFC] lib: set/get max memzone segments
@ 2023-04-21 8:34 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-04-21 8:34 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Ophir Munk, dev, Bruce Richardson, Devendra Singh Rawat,
Alok Prasad, Matan Azrad, Lior Margalit
20/04/2023 20:20, Tyler Retzlaff:
> On Thu, Apr 20, 2023 at 09:43:28AM +0200, Thomas Monjalon wrote:
> > 19/04/2023 16:51, Tyler Retzlaff:
> > > On Wed, Apr 19, 2023 at 11:36:34AM +0300, Ophir Munk wrote:
> > > > In current DPDK the RTE_MAX_MEMZONE definition is unconditionally hard
> > > > coded as 2560. For applications requiring different values of this
> > > > parameter, it is more convenient to set the max value via an rte API
> > > > rather than changing the DPDK source code per application. In many
> > > > organizations, the possibility to compile a private DPDK library for a
> > > > particular application does not exist at all. With this option there is
> > > > no need to recompile DPDK and it allows using an in-box packaged DPDK.
> > > > An example usage for updating the RTE_MAX_MEMZONE would be of an
> > > > application that uses the DPDK mempool library which is based on DPDK
> > > > memzone library. The application may need to create a number of
> > > > steering tables, each of which will require its own mempool allocation.
> > > > This commit is not about how to optimize the application usage of
> > > > mempool nor about how to improve the mempool implementation based on
> > > > memzone. It is about how to make the max memzone definition
> > > > run-time customizable.
> > > > This commit adds an API which must be called before rte_eal_init():
> > > > rte_memzone_max_set(int max). If not called, the default memzone
> > > > (RTE_MAX_MEMZONE) is used. There is also an API to query the effective
> > > > max memzone: rte_memzone_max_get().
> > > >
> > > > Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
> > > > ---
> > >
> > > the use case of each application may want a different non-hard coded
> > > value makes sense.
> > >
> > > it's less clear to me that requiring it be called before eal init makes
> > > sense over just providing it as configuration to eal init so that it is
> > > composed.
> >
> > Why do you think it would be better as EAL init option?
> > From an API perspective, I think it is simpler to call a dedicated function.
> > And I don't think a user wants to deal with it when starting the application.
>
> because a dedicated function that can be called detached from the eal
> state enables an opportunity for accidental and confusing use outside
> the correct context.
>
> i know the above prescribes not to do this but.
>
> now you can call set after eal init, but we protect against calling it
> after init by failing. what do we do sensibly with the failure?
It would be a developer mistake which could be fixed very easily during
the development stage. I don't see a problem here.
> > > can you elaborate further on why you need get if you have a one-shot
> > > set? why would the application not know the value if you can only ever
> > > call it once before init?
> >
> > The "get" function is used in this patch by test and qede driver.
> > The application could use it as well, especially to query the default value.
>
> this seems incoherent to me, why does the application not know if it has
> called set or not? if it called set it knows what the value is, if it didn't
> call set it knows what the default is.
No, the application doesn't know the default; it is an internal value.
> anyway, the use case is valid and i would like to see the ability to
> change it dynamically. i'd prefer not to see an api like this be introduced
> as prescribed, but that's for you folks to decide.
>
> anyway, i own a lot of apis that operate just like the proposed one and
> they're a great source of support overhead. i prefer not to rely on
> documenting a contract when i can enforce the contract and implicit state
> machine mechanically with the api instead.
>
> fwiw a nicer pattern for this kind of framework-influenced config
> might look something like this.
>
> struct eal_config config;
>
> eal_config_init(&config); // defaults are set entire state made valid
> eal_config_set_max_memzone(&config, 1024); // default is overridden
>
> rte_eal_init(&config);
In general, we discovered that functions doing too much are bad
for usability and for ABI stability.
In the function eal_config_init() that you propose,
any change in the struct eal_config will be an ABI breakage.
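For reference, a minimal usage sketch of the API described in the RFC quoted
above; the prototypes come from the cover letter, while the return values,
the header location and the int return type of rte_memzone_max_get() are
assumptions:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_memzone.h>

int
main(int argc, char **argv)
{
	/* Assumption: returns 0 on success; must be called before
	 * rte_eal_init(), otherwise the default (RTE_MAX_MEMZONE)
	 * stays in effect.
	 */
	if (rte_memzone_max_set(4096) != 0)
		return -1;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Query the effective maximum, e.g. for logging. */
	printf("max memzones: %d\n", (int)rte_memzone_max_get());

	return 0;
}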
^ permalink raw reply [relevance 4%]
* Re: [PATCH] eventdev: fix alignment padding
2023-04-18 11:06 4% ` Morten Brørup
@ 2023-04-18 12:40 3% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-04-18 12:40 UTC (permalink / raw)
To: Morten Brørup, Sivaprasad Tummala, jerinj; +Cc: dev
On 2023-04-18 13:06, Morten Brørup wrote:
>> From: Sivaprasad Tummala [mailto:sivaprasad.tummala@amd.com]
>> Sent: Tuesday, 18 April 2023 12.46
>>
>> fixed the padding required to align to cacheline size.
>>
>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
>> Cc: mattias.ronnblom@ericsson.com
>>
>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>> ---
>> lib/eventdev/rte_eventdev_core.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev_core.h
>> b/lib/eventdev/rte_eventdev_core.h
>> index c328bdbc82..c27a52ccc0 100644
>> --- a/lib/eventdev/rte_eventdev_core.h
>> +++ b/lib/eventdev/rte_eventdev_core.h
>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
>> /**< PMD Tx adapter enqueue same destination function. */
>> event_crypto_adapter_enqueue_t ca_enqueue;
>> /**< PMD Crypto adapter enqueue function. */
>> - uintptr_t reserved[6];
>> + uintptr_t reserved[5];
>> } __rte_cache_aligned;
>
> This fix changes the size (reduces it by one cache line) of the elements in the public rte_event_fp_ops array, and thus breaks the ABI.
>
> BTW, the patch it fixes, which was dated November 2021, also broke the ABI.
21.11 has a new ABI version, so that's not an issue.
>
>>
>> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
>> --
>> 2.34.1
^ permalink raw reply [relevance 3%]
* RE: [PATCH] eventdev: fix alignment padding
@ 2023-04-18 11:06 4% ` Morten Brørup
2023-04-18 12:40 3% ` Mattias Rönnblom
1 sibling, 1 reply; 200+ results
From: Morten Brørup @ 2023-04-18 11:06 UTC (permalink / raw)
To: Sivaprasad Tummala, jerinj; +Cc: dev, mattias.ronnblom
> From: Sivaprasad Tummala [mailto:sivaprasad.tummala@amd.com]
> Sent: Tuesday, 18 April 2023 12.46
>
> fixed the padding required to align to cacheline size.
>
> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> Cc: mattias.ronnblom@ericsson.com
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> lib/eventdev/rte_eventdev_core.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/eventdev/rte_eventdev_core.h
> b/lib/eventdev/rte_eventdev_core.h
> index c328bdbc82..c27a52ccc0 100644
> --- a/lib/eventdev/rte_eventdev_core.h
> +++ b/lib/eventdev/rte_eventdev_core.h
> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
> /**< PMD Tx adapter enqueue same destination function. */
> event_crypto_adapter_enqueue_t ca_enqueue;
> /**< PMD Crypto adapter enqueue function. */
> - uintptr_t reserved[6];
> + uintptr_t reserved[5];
> } __rte_cache_aligned;
This fix changes the size (reduces it by one cache line) of the elements in the public rte_event_fp_ops array, and thus breaks the ABI.
BTW, the patch it fixes, which was dated November 2021, also broke the ABI.
>
> extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> --
> 2.34.1
^ permalink raw reply [relevance 4%]
* RE: [RFC 0/4] Support VFIO sparse mmap in PCI bus
2023-04-18 7:46 3% ` David Marchand
2023-04-18 9:27 0% ` Xia, Chenbo
@ 2023-04-18 9:33 0% ` Xia, Chenbo
1 sibling, 0 replies; 200+ results
From: Xia, Chenbo @ 2023-04-18 9:33 UTC (permalink / raw)
To: David Marchand; +Cc: dev, skori, Cao, Yahui, Li, Miao
David,
Sorry that I missed one comment...
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, April 18, 2023 3:47 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; skori@marvell.com
> Subject: Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
>
> Hello Chenbo,
>
> On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This series introduces a VFIO standard capability, called sparse
> > mmap, to the PCI bus. In the Linux kernel, it is defined as
> > VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means that instead of
> > mmapping the whole BAR region into the DPDK process, only part of the
> > BAR region is mmapped, based on sparse mmap information obtained from
> > the kernel. For the rest of the BAR region that is not mmapped, the
> > DPDK process can use pread/pwrite system calls for access. Sparse mmap
> > is useful when the kernel does not want userspace to mmap the whole BAR
> > region, or wants to control access to a specific BAR region. Vendors
> > can choose whether to enable this feature for their devices in their
> > specific kernel modules.
>
> Sorry, I did not take the time to look into the details.
> Could you summarize what would be the benefit of this series?
>
>
> >
> > In this patchset:
> >
> > Patches 1-3 mainly introduce BAR access APIs so that drivers can
> > use them to access a specific BAR with pread/pwrite system calls
> > when part of the BAR is not mmap-able.
> >
> > Patch 4 finally adds the VFIO sparse mmap support. An open question
> > is whether all sparse mmap regions should be mapped to a contiguous
> > virtual address region that follows the device-specific BAR layout.
> > In theory, there are three options to support this feature.
> >
> > Option 1: Map sparse mmap regions independently
> > ======================================================
> > In this approach, we mmap each sparse mmap region one by one
> > and each region could be located anywhere in process address
> > space. But accessing the mmaped BAR will not be as easy as
> > 'bar_base_address + bar_offset', driver needs to check the
> > sparse mmap information to access specific BAR register.
> >
> > Patch 4 in this patchset adopts this option. Driver API change
> > is introduced in bus_pci_driver.h. Corresponding changes in
> > all drivers are also done and currently I am assuming drivers
> > do not support this feature so they will not check the
> > 'is_sparse' flag but assumes it to be false. Note that it will
> > not break any driver and each vendor can add related logic when
> > they start to support this feature. This is only because I don't
> > want to introduce complexity to drivers that do not want to
> > support this feature.
> >
> > Option 2: Map sparse mmap regions based on device-specific BAR layout
> > ======================================================================
> > In this approach, the sparse mmap regions are mapped to continuous
> > virtual address region that follows device-specific BAR layout.
> > For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> > region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> > mmaped. Region #1 will be mapped at 'base_addr' and region #2
> > will be mapped at 'base_addr + 0x3000'. The good thing is if
> > we implement like this, driver can still access all BAR registers
> > using 'bar_base_address + bar_offset' way and we don't need
> > to introduce any driver API change. But the address space
> > range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> > be reserved so it could result in waste of address space or memory
> > (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> > range). Meanwhile, driver needs to know which part of BAR is
> > mmaped (this is possible since the range is defined by vendor's
> > specific kernel module).
> >
> > Option 3: Support both option 1 & 2
> > ===================================
> > We could define a driver flag to let driver choose which way it
> > prefers since either option has its own Pros & Cons.
> >
> > Please share your comments, Thanks!
> >
> >
> > Chenbo Xia (4):
> > bus/pci: introduce an internal representation of PCI device
>
> I think this first patch's main motivation was to avoid ABI issues.
> Since v22.11, the rte_pci_device object is opaque to applications.
>
> So, do we still need this patch?
I think it could be good to reduce unnecessary driver APIs..
Hiding this region information could be friendlier to driver developers?
Thanks,
Chenbo
>
>
> > bus/pci: avoid depending on private value in kernel source
> > bus/pci: introduce helper for MMIO read and write
> > bus/pci: add VFIO sparse mmap support
> >
> > drivers/baseband/acc/rte_acc100_pmd.c | 6 +-
> > drivers/baseband/acc/rte_vrb_pmd.c | 6 +-
> > .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 6 +-
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 6 +-
> > drivers/bus/pci/bsd/pci.c | 43 +-
> > drivers/bus/pci/bus_pci_driver.h | 24 +-
> > drivers/bus/pci/linux/pci.c | 91 +++-
> > drivers/bus/pci/linux/pci_init.h | 14 +-
> > drivers/bus/pci/linux/pci_uio.c | 34 +-
> > drivers/bus/pci/linux/pci_vfio.c | 445 ++++++++++++++----
> > drivers/bus/pci/pci_common.c | 57 ++-
> > drivers/bus/pci/pci_common_uio.c | 12 +-
> > drivers/bus/pci/private.h | 25 +-
> > drivers/bus/pci/rte_bus_pci.h | 48 ++
> > drivers/bus/pci/version.map | 3 +
> > drivers/common/cnxk/roc_dev.c | 4 +-
> > drivers/common/cnxk/roc_dpi.c | 2 +-
> > drivers/common/cnxk/roc_ml.c | 22 +-
> > drivers/common/qat/dev/qat_dev_gen1.c | 2 +-
> > drivers/common/qat/dev/qat_dev_gen4.c | 4 +-
> > drivers/common/sfc_efx/sfc_efx.c | 2 +-
> > drivers/compress/octeontx/otx_zip.c | 4 +-
> > drivers/crypto/ccp/ccp_dev.c | 4 +-
> > drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
> > drivers/crypto/nitrox/nitrox_device.c | 4 +-
> > drivers/crypto/octeontx/otx_cryptodev_ops.c | 6 +-
> > drivers/crypto/virtio/virtio_pci.c | 6 +-
> > drivers/dma/cnxk/cnxk_dmadev.c | 2 +-
> > drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
> > drivers/dma/idxd/idxd_pci.c | 4 +-
> > drivers/dma/ioat/ioat_dmadev.c | 2 +-
> > drivers/event/dlb2/pf/dlb2_main.c | 16 +-
> > drivers/event/octeontx/ssovf_probe.c | 38 +-
> > drivers/event/octeontx/timvf_probe.c | 18 +-
> > drivers/event/skeleton/skeleton_eventdev.c | 2 +-
> > drivers/mempool/octeontx/octeontx_fpavf.c | 6 +-
> > drivers/net/ark/ark_ethdev.c | 4 +-
> > drivers/net/atlantic/atl_ethdev.c | 2 +-
> > drivers/net/avp/avp_ethdev.c | 20 +-
> > drivers/net/axgbe/axgbe_ethdev.c | 4 +-
> > drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> > drivers/net/bnxt/bnxt_ethdev.c | 8 +-
> > drivers/net/cpfl/cpfl_ethdev.c | 4 +-
> > drivers/net/cxgbe/cxgbe_ethdev.c | 2 +-
> > drivers/net/cxgbe/cxgbe_main.c | 2 +-
> > drivers/net/cxgbe/cxgbevf_ethdev.c | 2 +-
> > drivers/net/cxgbe/cxgbevf_main.c | 2 +-
> > drivers/net/e1000/em_ethdev.c | 4 +-
> > drivers/net/e1000/igb_ethdev.c | 4 +-
> > drivers/net/ena/ena_ethdev.c | 4 +-
> > drivers/net/enetc/enetc_ethdev.c | 2 +-
> > drivers/net/enic/enic_main.c | 4 +-
> > drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> > drivers/net/gve/gve_ethdev.c | 4 +-
> > drivers/net/hinic/base/hinic_pmd_hwif.c | 14 +-
> > drivers/net/hns3/hns3_ethdev.c | 2 +-
> > drivers/net/hns3/hns3_ethdev_vf.c | 2 +-
> > drivers/net/hns3/hns3_rxtx.c | 4 +-
> > drivers/net/i40e/i40e_ethdev.c | 2 +-
> > drivers/net/iavf/iavf_ethdev.c | 2 +-
> > drivers/net/ice/ice_dcf.c | 2 +-
> > drivers/net/ice/ice_ethdev.c | 2 +-
> > drivers/net/idpf/idpf_ethdev.c | 4 +-
> > drivers/net/igc/igc_ethdev.c | 2 +-
> > drivers/net/ionic/ionic_dev_pci.c | 2 +-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> > drivers/net/liquidio/lio_ethdev.c | 4 +-
> > drivers/net/nfp/nfp_ethdev.c | 2 +-
> > drivers/net/nfp/nfp_ethdev_vf.c | 6 +-
> > drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 4 +-
> > drivers/net/ngbe/ngbe_ethdev.c | 2 +-
> > drivers/net/octeon_ep/otx_ep_ethdev.c | 2 +-
> > drivers/net/octeontx/base/octeontx_pkivf.c | 6 +-
> > drivers/net/octeontx/base/octeontx_pkovf.c | 12 +-
> > drivers/net/qede/qede_main.c | 6 +-
> > drivers/net/sfc/sfc.c | 2 +-
> > drivers/net/thunderx/nicvf_ethdev.c | 2 +-
> > drivers/net/txgbe/txgbe_ethdev.c | 2 +-
> > drivers/net/txgbe/txgbe_ethdev_vf.c | 2 +-
> > drivers/net/virtio/virtio_pci.c | 6 +-
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
> > drivers/raw/cnxk_bphy/cnxk_bphy.c | 10 +-
> > drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c | 6 +-
> > drivers/raw/ifpga/afu_pmd_n3000.c | 4 +-
> > drivers/raw/ifpga/ifpga_rawdev.c | 6 +-
> > drivers/raw/ntb/ntb_hw_intel.c | 8 +-
> > drivers/vdpa/ifc/ifcvf_vdpa.c | 6 +-
> > drivers/vdpa/sfc/sfc_vdpa_hw.c | 2 +-
> > drivers/vdpa/sfc/sfc_vdpa_ops.c | 2 +-
> > lib/eal/include/rte_vfio.h | 1 -
> > 90 files changed, 853 insertions(+), 352 deletions(-)
>
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* RE: [RFC 0/4] Support VFIO sparse mmap in PCI bus
2023-04-18 7:46 3% ` David Marchand
@ 2023-04-18 9:27 0% ` Xia, Chenbo
2023-04-18 9:33 0% ` Xia, Chenbo
1 sibling, 0 replies; 200+ results
From: Xia, Chenbo @ 2023-04-18 9:27 UTC (permalink / raw)
To: David Marchand; +Cc: dev, skori, Cao, Yahui, Li, Miao
Hi David,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, April 18, 2023 3:47 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; skori@marvell.com
> Subject: Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
>
> Hello Chenbo,
>
> On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This series introduces a VFIO standard capability, called sparse
> > mmap to PCI bus. In linux kernel, it's defined as
> > VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means instead of
> > mmap whole BAR region into DPDK process, only mmap part of the
> > BAR region after getting sparse mmap information from kernel.
> > For the rest of BAR region that is not mmap-ed, DPDK process
> > can use pread/pwrite system calls to access. Sparse mmap is
> > useful when kernel does not want userspace to mmap whole BAR
> > region, or the kernel wants to control access to a specific BAR
> > region. Vendors can choose to enable this feature or not for
> > their devices in their specific kernel modules.
>
> Sorry, I did not take the time to look into the details.
> Could you summarize what would be the benefit of this series?
The benefit could be different for different vendors. There was one discussion:
http://inbox.dpdk.org/dev/CO6PR18MB386016A2634AF375F5B4BA8CB4899@CO6PR18MB3860.namprd18.prod.outlook.com/
The problem above is that some devices have a very large BAR, and we don't want DPDK to map
the whole BAR.
For Intel devices, one benefit is that we want our kernel module to control access to a specific
BAR region, so we will make the DPDK process unable to mmap that region.
(Because after mmap, the kernel will not know when userspace is accessing the device BAR.)
So that's why I summarized it as 'Sparse mmap is useful when the kernel does not want
userspace to mmap the whole BAR region, or the kernel wants to control access to a
specific BAR region'. There could be more usages for other vendors that I have not realized.
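Just to illustrate the pread-based path for a range the kernel did not let us mmap (the names, error handling and wrapper are made up; the region offset is assumed to come from the usual VFIO region info query):

#include <stdint.h>
#include <unistd.h>

/* Sketch only: read a 32-bit register from a non-mmapped part of a BAR
 * through the VFIO device fd. 'region_offset' is the offset of the BAR
 * region inside the device fd as reported by VFIO_DEVICE_GET_REGION_INFO. */
static int
bar_read32(int vfio_dev_fd, uint64_t region_offset, uint64_t reg_off,
	uint32_t *val)
{
	ssize_t n = pread(vfio_dev_fd, val, sizeof(*val),
		region_offset + reg_off);
	return n == (ssize_t)sizeof(*val) ? 0 : -1;
}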
Thanks,
Chenbo
>
>
> >
> > In this patchset:
> >
> > Patch 1-3 is mainly for introducing BAR access APIs so that
> > driver could use them to access specific BAR using pread/pwrite
> > system calls when part of the BAR is not mmap-able.
> >
> > Patch 4 adds the VFIO sparse mmap support finally. A question
> > is for all sparse mmap regions, should they be mapped to a
> > continuous virtual address region that follows device-specific
> > BAR layout or not. In theory, there could be three options to
> > support this feature.
> >
> > Option 1: Map sparse mmap regions independently
> > ======================================================
> > In this approach, we mmap each sparse mmap region one by one
> > and each region could be located anywhere in process address
> > space. But accessing the mmaped BAR will not be as easy as
> > 'bar_base_address + bar_offset', driver needs to check the
> > sparse mmap information to access specific BAR register.
> >
> > Patch 4 in this patchset adopts this option. Driver API change
> > is introduced in bus_pci_driver.h. Corresponding changes in
> > all drivers are also done and currently I am assuming drivers
> > do not support this feature so they will not check the
> > 'is_sparse' flag but assume it to be false. Note that it will
> > not break any driver and each vendor can add related logic when
> > they start to support this feature. This is only because I don't
> > want to introduce complexity to drivers that do not want to
> > support this feature.
> >
> > Option 2: Map sparse mmap regions based on device-specific BAR layout
> > ======================================================================
> > In this approach, the sparse mmap regions are mapped to continuous
> > virtual address region that follows device-specific BAR layout.
> > For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> > region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> > mmaped. Region #1 will be mapped at 'base_addr' and region #2
> > will be mapped at 'base_addr + 0x3000'. The good thing is if
> > we implement like this, driver can still access all BAR registers
> > using 'bar_base_address + bar_offset' way and we don't need
> > to introduce any driver API change. But the address space
> > range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> > be reserved so it could result in waste of address space or memory
> > (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> > range). Meanwhile, driver needs to know which part of BAR is
> > mmaped (this is possible since the range is defined by vendor's
> > specific kernel module).
> >
> > Option 3: Support both option 1 & 2
> > ===================================
> > We could define a driver flag to let driver choose which way it
> > prefers since either option has its own Pros & Cons.
> >
> > Please share your comments, Thanks!
> >
> >
> > Chenbo Xia (4):
> > bus/pci: introduce an internal representation of PCI device
>
> I think this first patch's main motivation was to avoid ABI issues.
> Since v22.11, the rte_pci_device object is opaque to applications.
>
> So, do we still need this patch?
>
>
> > bus/pci: avoid depending on private value in kernel source
> > bus/pci: introduce helper for MMIO read and write
> > bus/pci: add VFIO sparse mmap support
> >
> > drivers/baseband/acc/rte_acc100_pmd.c | 6 +-
> > drivers/baseband/acc/rte_vrb_pmd.c | 6 +-
> > .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 6 +-
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 6 +-
> > drivers/bus/pci/bsd/pci.c | 43 +-
> > drivers/bus/pci/bus_pci_driver.h | 24 +-
> > drivers/bus/pci/linux/pci.c | 91 +++-
> > drivers/bus/pci/linux/pci_init.h | 14 +-
> > drivers/bus/pci/linux/pci_uio.c | 34 +-
> > drivers/bus/pci/linux/pci_vfio.c | 445 ++++++++++++++----
> > drivers/bus/pci/pci_common.c | 57 ++-
> > drivers/bus/pci/pci_common_uio.c | 12 +-
> > drivers/bus/pci/private.h | 25 +-
> > drivers/bus/pci/rte_bus_pci.h | 48 ++
> > drivers/bus/pci/version.map | 3 +
> > drivers/common/cnxk/roc_dev.c | 4 +-
> > drivers/common/cnxk/roc_dpi.c | 2 +-
> > drivers/common/cnxk/roc_ml.c | 22 +-
> > drivers/common/qat/dev/qat_dev_gen1.c | 2 +-
> > drivers/common/qat/dev/qat_dev_gen4.c | 4 +-
> > drivers/common/sfc_efx/sfc_efx.c | 2 +-
> > drivers/compress/octeontx/otx_zip.c | 4 +-
> > drivers/crypto/ccp/ccp_dev.c | 4 +-
> > drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
> > drivers/crypto/nitrox/nitrox_device.c | 4 +-
> > drivers/crypto/octeontx/otx_cryptodev_ops.c | 6 +-
> > drivers/crypto/virtio/virtio_pci.c | 6 +-
> > drivers/dma/cnxk/cnxk_dmadev.c | 2 +-
> > drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
> > drivers/dma/idxd/idxd_pci.c | 4 +-
> > drivers/dma/ioat/ioat_dmadev.c | 2 +-
> > drivers/event/dlb2/pf/dlb2_main.c | 16 +-
> > drivers/event/octeontx/ssovf_probe.c | 38 +-
> > drivers/event/octeontx/timvf_probe.c | 18 +-
> > drivers/event/skeleton/skeleton_eventdev.c | 2 +-
> > drivers/mempool/octeontx/octeontx_fpavf.c | 6 +-
> > drivers/net/ark/ark_ethdev.c | 4 +-
> > drivers/net/atlantic/atl_ethdev.c | 2 +-
> > drivers/net/avp/avp_ethdev.c | 20 +-
> > drivers/net/axgbe/axgbe_ethdev.c | 4 +-
> > drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> > drivers/net/bnxt/bnxt_ethdev.c | 8 +-
> > drivers/net/cpfl/cpfl_ethdev.c | 4 +-
> > drivers/net/cxgbe/cxgbe_ethdev.c | 2 +-
> > drivers/net/cxgbe/cxgbe_main.c | 2 +-
> > drivers/net/cxgbe/cxgbevf_ethdev.c | 2 +-
> > drivers/net/cxgbe/cxgbevf_main.c | 2 +-
> > drivers/net/e1000/em_ethdev.c | 4 +-
> > drivers/net/e1000/igb_ethdev.c | 4 +-
> > drivers/net/ena/ena_ethdev.c | 4 +-
> > drivers/net/enetc/enetc_ethdev.c | 2 +-
> > drivers/net/enic/enic_main.c | 4 +-
> > drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> > drivers/net/gve/gve_ethdev.c | 4 +-
> > drivers/net/hinic/base/hinic_pmd_hwif.c | 14 +-
> > drivers/net/hns3/hns3_ethdev.c | 2 +-
> > drivers/net/hns3/hns3_ethdev_vf.c | 2 +-
> > drivers/net/hns3/hns3_rxtx.c | 4 +-
> > drivers/net/i40e/i40e_ethdev.c | 2 +-
> > drivers/net/iavf/iavf_ethdev.c | 2 +-
> > drivers/net/ice/ice_dcf.c | 2 +-
> > drivers/net/ice/ice_ethdev.c | 2 +-
> > drivers/net/idpf/idpf_ethdev.c | 4 +-
> > drivers/net/igc/igc_ethdev.c | 2 +-
> > drivers/net/ionic/ionic_dev_pci.c | 2 +-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> > drivers/net/liquidio/lio_ethdev.c | 4 +-
> > drivers/net/nfp/nfp_ethdev.c | 2 +-
> > drivers/net/nfp/nfp_ethdev_vf.c | 6 +-
> > drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 4 +-
> > drivers/net/ngbe/ngbe_ethdev.c | 2 +-
> > drivers/net/octeon_ep/otx_ep_ethdev.c | 2 +-
> > drivers/net/octeontx/base/octeontx_pkivf.c | 6 +-
> > drivers/net/octeontx/base/octeontx_pkovf.c | 12 +-
> > drivers/net/qede/qede_main.c | 6 +-
> > drivers/net/sfc/sfc.c | 2 +-
> > drivers/net/thunderx/nicvf_ethdev.c | 2 +-
> > drivers/net/txgbe/txgbe_ethdev.c | 2 +-
> > drivers/net/txgbe/txgbe_ethdev_vf.c | 2 +-
> > drivers/net/virtio/virtio_pci.c | 6 +-
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
> > drivers/raw/cnxk_bphy/cnxk_bphy.c | 10 +-
> > drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c | 6 +-
> > drivers/raw/ifpga/afu_pmd_n3000.c | 4 +-
> > drivers/raw/ifpga/ifpga_rawdev.c | 6 +-
> > drivers/raw/ntb/ntb_hw_intel.c | 8 +-
> > drivers/vdpa/ifc/ifcvf_vdpa.c | 6 +-
> > drivers/vdpa/sfc/sfc_vdpa_hw.c | 2 +-
> > drivers/vdpa/sfc/sfc_vdpa_ops.c | 2 +-
> > lib/eal/include/rte_vfio.h | 1 -
> > 90 files changed, 853 insertions(+), 352 deletions(-)
>
>
> --
> David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
2023-04-18 8:52 3% ` Ferruh Yigit
@ 2023-04-18 9:22 3% ` Bruce Richardson
2023-06-01 9:23 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-18 9:22 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Sivaprasad Tummala, david.hunt, dev, david.marchand, Thomas Monjalon
On Tue, Apr 18, 2023 at 09:52:49AM +0100, Ferruh Yigit wrote:
> On 4/18/2023 9:25 AM, Sivaprasad Tummala wrote:
> > A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
> > DPDK 23.07 release to support monitorx instruction on EPYC processors.
> > This results in ABI breakage for legacy apps.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index dcc1ca1696..831713983f 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -163,3 +163,6 @@ Deprecation Notices
> > The new port library API (functions rte_swx_port_*)
> > will gradually transition from experimental to stable status
> > starting with DPDK 23.07 release.
> > +
> > +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> > + ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
>
>
> OK to add new CPU flag,
> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
>
>
> But @David, @Bruce, is it OK to break ABI whenever a new CPU flag is
> added, should we hide CPU flags better?
>
> Or another option could be to drop 'RTE_CPUFLAG_NUMFLAGS' and allow
> appending new flags to the end, although this may make the enum messier
> over time.
+1 to drop the NUMFLAGS value. We should not break the ABI each time we need a
new flag.
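To make the trade-off concrete, here is a simplified sketch (not the real flag list) of the two enum shapes being discussed:

/* Today: a trailing sentinel. Inserting RTE_CPUFLAG_MONITORX before it
 * changes the value of RTE_CPUFLAG_NUMFLAGS, which legacy applications may
 * have compiled in (e.g. to size arrays), hence the ABI concern. */
enum example_cpu_flag_with_sentinel {
	EXAMPLE_CPUFLAG_SSE,
	EXAMPLE_CPUFLAG_AVX,
	/* new flags inserted here move the sentinel below */
	EXAMPLE_CPUFLAG_NUMFLAGS
};

/* Proposed: no sentinel; new flags are only ever appended, so existing
 * enumerator values never change and adding a flag is not an ABI break. */
enum example_cpu_flag_without_sentinel {
	EXAMPLE2_CPUFLAG_SSE,
	EXAMPLE2_CPUFLAG_AVX,
	EXAMPLE2_CPUFLAG_MONITORX
};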
^ permalink raw reply [relevance 3%]
* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
2023-04-18 8:25 3% ` [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
@ 2023-04-18 8:52 3% ` Ferruh Yigit
2023-04-18 9:22 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-04-18 8:52 UTC (permalink / raw)
To: Sivaprasad Tummala, david.hunt
Cc: dev, david.marchand, Bruce Richardson, Thomas Monjalon
On 4/18/2023 9:25 AM, Sivaprasad Tummala wrote:
> A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
> DPDK 23.07 release to support monitorx instruction on EPYC processors.
> This results in ABI breakage for legacy apps.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..831713983f 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -163,3 +163,6 @@ Deprecation Notices
> The new port library API (functions rte_swx_port_*)
> will gradually transition from experimental to stable status
> starting with DPDK 23.07 release.
> +
> +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> + ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
OK to add new CPU flag,
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
But @David, @Bruce, is it OK to break ABI whenever a new CPU flag is
added, should we hide CPU flags better?
Or other option can be drop the 'RTE_CPUFLAG_NUMFLAGS' and allow
appending new flags to the end although this may lead enum become more
messy by time.
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-04-11 18:05 3% ` Stephen Hemminger
@ 2023-04-18 8:33 4% ` Jerin Jacob
2023-04-24 22:41 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-04-18 8:33 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Nithin Dabilpuram, Thomas Monjalon, Akhil Goyal, jerinj, dev,
Morten Brørup, techboard
On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Tue, 11 Apr 2023 15:34:07 +0530
> Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
>
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 4bacf9fcd9..866cd4e8ee 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> > */
> > uint32_t ip_reassembly_en : 1;
> >
> > + /** Enable out of place processing on inline inbound packets.
> > + *
> > + * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > + * inbound SA if supported by driver. PMD need to register mbuf
> > + * dynamic field using rte_security_oop_dynfield_register()
> > + * and security session creation would fail if dynfield is not
> > + * registered successfully.
> > + * * 0: Disable OOP processing for this session (default).
> > + */
> > + uint32_t ingress_oop : 1;
> > +
> > /** Reserved bit fields for future extension
> > *
> > * User should ensure reserved_opts is cleared as it may change in
> > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > *
> > * Note: Reduce number of bits in reserved_opts for every new option.
> > */
> > - uint32_t reserved_opts : 17;
> > + uint32_t reserved_opts : 16;
> > };
>
> NAK
> Let me repeat the reserved bit rant. YAGNI
>
> Reserved space is not usable without ABI breakage unless the existing
> code enforces that reserved space has to be zero.
>
> Just saying "User should ensure reserved_opts is cleared" is not enough.
Yes. I think we need to enforce having _init functions for the
structures which use reserved fields.
On the same note on YAGNI, I am wondering why we do NOT introduce an
RTE_NEXT_ABI macro kind of scheme to compile out ABI-breaking changes.
By keeping RTE_NEXT_ABI disabled by default, and enabling it explicitly if the
user wants it, we avoid waiting a year for any ABI-breaking changes.
There are a lot of "fixed appliance" customers (not OS-distribution-driven
customers) who are willing to recompile DPDK for a new feature.
What are we losing with this scheme?
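For illustration, such an _init helper could be as simple as the sketch below; the function name is invented, the point is only that reserved bits are guaranteed to start out cleared:

#include <string.h>
#include <rte_security.h>

static inline void
example_ipsec_sa_options_init(struct rte_security_ipsec_sa_options *opts)
{
	/* Zero the whole structure, including reserved_opts, before the
	 * application sets the options it actually needs. */
	memset(opts, 0, sizeof(*opts));
}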
>
>
^ permalink raw reply [relevance 4%]
* [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
@ 2023-04-18 8:25 3% ` Sivaprasad Tummala
2023-04-18 8:52 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-18 8:25 UTC (permalink / raw)
To: david.hunt; +Cc: dev, david.marchand, ferruh.yigit
A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
DPDK 23.07 release to support monitorx instruction on EPYC processors.
This results in ABI breakage for legacy apps.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..831713983f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+ ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
--
2.34.1
^ permalink raw reply [relevance 3%]
* Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
@ 2023-04-18 7:46 3% ` David Marchand
2023-04-18 9:27 0% ` Xia, Chenbo
2023-04-18 9:33 0% ` Xia, Chenbo
0 siblings, 2 replies; 200+ results
From: David Marchand @ 2023-04-18 7:46 UTC (permalink / raw)
To: Chenbo Xia; +Cc: dev, skori
Hello Chenbo,
On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
>
> This series introduces a VFIO standard capability, called sparse
> mmap to PCI bus. In linux kernel, it's defined as
> VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means instead of
> mmap whole BAR region into DPDK process, only mmap part of the
> BAR region after getting sparse mmap information from kernel.
> For the rest of BAR region that is not mmap-ed, DPDK process
> can use pread/pwrite system calls to access. Sparse mmap is
> useful when kernel does not want userspace to mmap whole BAR
> region, or the kernel wants to control access to a specific BAR
> region. Vendors can choose to enable this feature or not for
> their devices in their specific kernel modules.
Sorry, I did not take the time to look into the details.
Could you summarize what would be the benefit of this series?
>
> In this patchset:
>
> Patch 1-3 is mainly for introducing BAR access APIs so that
> driver could use them to access specific BAR using pread/pwrite
> system calls when part of the BAR is not mmap-able.
>
> Patch 4 adds the VFIO sparse mmap support finally. A question
> is for all sparse mmap regions, should they be mapped to a
> continuous virtual address region that follows device-specific
> BAR layout or not. In theory, there could be three options to
> support this feature.
>
> Option 1: Map sparse mmap regions independently
> ======================================================
> In this approach, we mmap each sparse mmap region one by one
> and each region could be located anywhere in process address
> space. But accessing the mmaped BAR will not be as easy as
> 'bar_base_address + bar_offset', driver needs to check the
> sparse mmap information to access specific BAR register.
>
> Patch 4 in this patchset adopts this option. Driver API change
> is introduced in bus_pci_driver.h. Corresponding changes in
> all drivers are also done and currently I am assuming drivers
> do not support this feature so they will not check the
> 'is_sparse' flag but assume it to be false. Note that it will
> not break any driver and each vendor can add related logic when
> they start to support this feature. This is only because I don't
> want to introduce complexity to drivers that do not want to
> support this feature.
>
> Option 2: Map sparse mmap regions based on device-specific BAR layout
> ======================================================================
> In this approach, the sparse mmap regions are mapped to continuous
> virtual address region that follows device-specific BAR layout.
> For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> mmaped. Region #1 will be mapped at 'base_addr' and region #2
> will be mapped at 'base_addr + 0x3000'. The good thing is if
> we implement like this, driver can still access all BAR registers
> using 'bar_base_address + bar_offset' way and we don't need
> to introduce any driver API change. But the address space
> range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> be reserved so it could result in waste of address space or memory
> (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> range). Meanwhile, driver needs to know which part of BAR is
> mmaped (this is possible since the range is defined by vendor's
> specific kernel module).
>
> Option 3: Support both option 1 & 2
> ===================================
> We could define a driver flag to let driver choose which way it
> prefers since either option has its own Pros & Cons.
>
> Please share your comments, Thanks!
>
>
> Chenbo Xia (4):
> bus/pci: introduce an internal representation of PCI device
I think this first patch's main motivation was to avoid ABI issues.
Since v22.11, the rte_pci_device object is opaque to applications.
So, do we still need this patch?
> bus/pci: avoid depending on private value in kernel source
> bus/pci: introduce helper for MMIO read and write
> bus/pci: add VFIO sparse mmap support
>
> drivers/baseband/acc/rte_acc100_pmd.c | 6 +-
> drivers/baseband/acc/rte_vrb_pmd.c | 6 +-
> .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 6 +-
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 6 +-
> drivers/bus/pci/bsd/pci.c | 43 +-
> drivers/bus/pci/bus_pci_driver.h | 24 +-
> drivers/bus/pci/linux/pci.c | 91 +++-
> drivers/bus/pci/linux/pci_init.h | 14 +-
> drivers/bus/pci/linux/pci_uio.c | 34 +-
> drivers/bus/pci/linux/pci_vfio.c | 445 ++++++++++++++----
> drivers/bus/pci/pci_common.c | 57 ++-
> drivers/bus/pci/pci_common_uio.c | 12 +-
> drivers/bus/pci/private.h | 25 +-
> drivers/bus/pci/rte_bus_pci.h | 48 ++
> drivers/bus/pci/version.map | 3 +
> drivers/common/cnxk/roc_dev.c | 4 +-
> drivers/common/cnxk/roc_dpi.c | 2 +-
> drivers/common/cnxk/roc_ml.c | 22 +-
> drivers/common/qat/dev/qat_dev_gen1.c | 2 +-
> drivers/common/qat/dev/qat_dev_gen4.c | 4 +-
> drivers/common/sfc_efx/sfc_efx.c | 2 +-
> drivers/compress/octeontx/otx_zip.c | 4 +-
> drivers/crypto/ccp/ccp_dev.c | 4 +-
> drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 2 +-
> drivers/crypto/nitrox/nitrox_device.c | 4 +-
> drivers/crypto/octeontx/otx_cryptodev_ops.c | 6 +-
> drivers/crypto/virtio/virtio_pci.c | 6 +-
> drivers/dma/cnxk/cnxk_dmadev.c | 2 +-
> drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
> drivers/dma/idxd/idxd_pci.c | 4 +-
> drivers/dma/ioat/ioat_dmadev.c | 2 +-
> drivers/event/dlb2/pf/dlb2_main.c | 16 +-
> drivers/event/octeontx/ssovf_probe.c | 38 +-
> drivers/event/octeontx/timvf_probe.c | 18 +-
> drivers/event/skeleton/skeleton_eventdev.c | 2 +-
> drivers/mempool/octeontx/octeontx_fpavf.c | 6 +-
> drivers/net/ark/ark_ethdev.c | 4 +-
> drivers/net/atlantic/atl_ethdev.c | 2 +-
> drivers/net/avp/avp_ethdev.c | 20 +-
> drivers/net/axgbe/axgbe_ethdev.c | 4 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 8 +-
> drivers/net/cpfl/cpfl_ethdev.c | 4 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 2 +-
> drivers/net/cxgbe/cxgbe_main.c | 2 +-
> drivers/net/cxgbe/cxgbevf_ethdev.c | 2 +-
> drivers/net/cxgbe/cxgbevf_main.c | 2 +-
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 4 +-
> drivers/net/ena/ena_ethdev.c | 4 +-
> drivers/net/enetc/enetc_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 4 +-
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/gve/gve_ethdev.c | 4 +-
> drivers/net/hinic/base/hinic_pmd_hwif.c | 14 +-
> drivers/net/hns3/hns3_ethdev.c | 2 +-
> drivers/net/hns3/hns3_ethdev_vf.c | 2 +-
> drivers/net/hns3/hns3_rxtx.c | 4 +-
> drivers/net/i40e/i40e_ethdev.c | 2 +-
> drivers/net/iavf/iavf_ethdev.c | 2 +-
> drivers/net/ice/ice_dcf.c | 2 +-
> drivers/net/ice/ice_ethdev.c | 2 +-
> drivers/net/idpf/idpf_ethdev.c | 4 +-
> drivers/net/igc/igc_ethdev.c | 2 +-
> drivers/net/ionic/ionic_dev_pci.c | 2 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 4 +-
> drivers/net/liquidio/lio_ethdev.c | 4 +-
> drivers/net/nfp/nfp_ethdev.c | 2 +-
> drivers/net/nfp/nfp_ethdev_vf.c | 6 +-
> drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 4 +-
> drivers/net/ngbe/ngbe_ethdev.c | 2 +-
> drivers/net/octeon_ep/otx_ep_ethdev.c | 2 +-
> drivers/net/octeontx/base/octeontx_pkivf.c | 6 +-
> drivers/net/octeontx/base/octeontx_pkovf.c | 12 +-
> drivers/net/qede/qede_main.c | 6 +-
> drivers/net/sfc/sfc.c | 2 +-
> drivers/net/thunderx/nicvf_ethdev.c | 2 +-
> drivers/net/txgbe/txgbe_ethdev.c | 2 +-
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 +-
> drivers/net/virtio/virtio_pci.c | 6 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
> drivers/raw/cnxk_bphy/cnxk_bphy.c | 10 +-
> drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c | 6 +-
> drivers/raw/ifpga/afu_pmd_n3000.c | 4 +-
> drivers/raw/ifpga/ifpga_rawdev.c | 6 +-
> drivers/raw/ntb/ntb_hw_intel.c | 8 +-
> drivers/vdpa/ifc/ifcvf_vdpa.c | 6 +-
> drivers/vdpa/sfc/sfc_vdpa_hw.c | 2 +-
> drivers/vdpa/sfc/sfc_vdpa_ops.c | 2 +-
> lib/eal/include/rte_vfio.h | 1 -
> 90 files changed, 853 insertions(+), 352 deletions(-)
--
David Marchand
^ permalink raw reply [relevance 3%]
* [PATCH v7 10/14] eal: expand most macros to empty when using MSVC
@ 2023-04-17 16:10 5% ` Tyler Retzlaff
2023-04-17 16:10 3% ` [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-17 16:10 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (!!(x))
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (!!(x))
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..0c55a23 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
#define RTE_STD_C11
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/*
* RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
* while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
(type *)(ptr); \
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
+#else
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
#endif
/** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC
2023-04-17 16:10 5% ` [PATCH v7 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-17 16:10 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-17 16:10 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH v3 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
2023-04-13 11:53 3% ` [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
@ 2023-04-17 4:31 3% ` Sivaprasad Tummala
0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-17 4:31 UTC (permalink / raw)
To: david.hunt; +Cc: dev
A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
DPDK 23.07 release to support monitorx instruction on EPYC processors.
This results in ABI breakage for legacy apps.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..831713983f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+ ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
--
2.34.1
^ permalink raw reply [relevance 3%]
* RE: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
2023-04-15 20:52 4% ` Tyler Retzlaff
@ 2023-04-15 22:41 4% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-04-15 22:41 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: bruce.richardson, david.marchand, thomas, konstantin.ananyev, dev
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Saturday, 15 April 2023 22.52
>
> On Sat, Apr 15, 2023 at 09:16:21AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Friday, 14 April 2023 19.02
> > >
> > > On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > > Sent: Thursday, 13 April 2023 23.26
> > > > >
> > > > > For now expand a lot of common rte macros empty. The catch here
> is
> > > we
> > > > > need to test that most of the macros do what they should but at
> the
> > > same
> > > > > time they are blocking work needed to bootstrap of the unit
> tests.
> > > > >
> > > > > Later we will return and provide (where possible) expansions
> that
> > > work
> > > > > correctly for msvc and where not possible provide some alternate
> > > macros
> > > > > to achieve the same outcome.
> > > > >
> > > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> >
> > [...]
> >
> > > > > /**
> > > > > * Force alignment
> > > > > */
> > > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > > #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > > > +#else
> > > > > +#define __rte_aligned(a)
> > > > > +#endif
> > > >
> > > > It should be reviewed that __rte_aligned() is only used for
> > > optimization purposes, and is not required for DPDK to function
> > > properly.
> > >
> > > so to expand on what i have in mind (and explain why i leave it
> expanded
> > > empty for now)
> > >
> > > while msvc has a __declspec for align there is a mismatch between
> > > where gcc and msvc want it placed to control alignment of objects.
> > >
> > > msvc support won't be functional in 23.07 because of atomics. so
> once
> > > we reach the 23.11 cycle (where we can merge c11 changes) it means
> we
> > > can also use standard _Alignas which can accomplish the same thing
> > > but portably.
> >
> > That (C11 standard _Alignas) should be the roadmap for solving the
> alignment requirements.
> >
> > This should be a general principle for DPDK... if the C standard
> offers something, don't reinvent our own. And as a consequence of the
> upgrade to C11, we should deprecate all our own now-obsolete substitutes
> for these.
> >
> > >
> > > full disclosure the catch is i still have to properly locate the
> <thing>
> > > that does the alignment and some small questions about the expansion
> and
> > > use of the existing macro.
> > >
> > > on the subject of DPDK requiring proper alignment, you're right it
> > > is generally for performance but also for pre-c11 atomics.
> > >
> > > one question i have been asking myself is would the community see
> value
> > > in more compile time assertions / testing of the size and alignment
> of
> > > structures and offset of structure fields? we have a few key
> > > RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> > > comprehensive protection.
> >
> > Absolutely. Catching bugs at build time is much better than any
> alternative!
>
> that's handy feedback. i am now encouraged to include more compile time
> checks in advance of or along with changes related to structure abi.
Sounds good.
Disclaimer: "Absolutely" was my personal response. But I seriously doubt that anyone in the DPDK community would object to more build time checks. Stability and code quality carries a lot of weight in DPDK community discussions.
With that said, please expect that maintainers might want you to split your patches, so the additional checks are separated from the MSVC changes.
> follow on question, once we do get to use c11 would something like
> _Static_assert be preferable over RTE_BUILD_BUG_ON? structures sensitive
> to layout could be co-located with the asserts right at the point of
> definition. or is there something extra RTE_BUILD_BUG_ON gives us?
People may have different opinions on RTE_BUILD_BUG_ON vs. _Static_assert or static_assert.
Personally, I prefer static_assert/_Static_assert. It also has the advantage that it can be used in the global scope, directly following the structure definitions (like you mention), whereas RTE_BUILD_BUG_ON must be inside a code block (which can probably be worked around by making a dummy static inline function only containing the RTE_BUILD_BUG_ON).
And in the spirit of my proposal of not using home-grown macros as alternatives to what the C standard provides, I think we should deprecate and get rid of RTE_BUILD_BUG_ON in favor of static_assert/_Static_assert introduced by the C11 standard. (My personal opinion, no such principle decision has been made!)
If we want to keep RTE_BUILD_BUG_ON for some reason, we could change its implementation to use static_assert/_Static_assert instead of creating an invalid pointer to make the compilation fail.
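A small sketch of the difference, with an invented structure and assuming a C11 toolchain:

#include <assert.h>
#include <stdint.h>
#include <rte_common.h>	/* RTE_BUILD_BUG_ON */

struct example_hdr {
	uint32_t id;
	uint32_t len;
};

/* C11 static_assert can sit in global scope, right next to the definition. */
static_assert(sizeof(struct example_hdr) == 8, "example_hdr layout changed");

/* RTE_BUILD_BUG_ON must be placed inside a code block. */
static inline void
example_hdr_layout_check(void)
{
	RTE_BUILD_BUG_ON(sizeof(struct example_hdr) != 8);
}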
>
> >
> > > > > /**
> > > > > * Force a structure to be packed
> > > > > */
> > > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > > #define __rte_packed __attribute__((__packed__))
> > > > > +#else
> > > > > +#define __rte_packed
> > > > > +#endif
> > > >
> > > > Similar comment as for __rte_aligned(); however, I consider it
> more
> > > likely that structure packing is a functional requirement, and not
> just
> > > used for optimization. Based on my experience, it may be used for
> > > packing network structures; perhaps not in DPDK itself but maybe in
> DPDK
> > > applications.
> > >
> > > so interestingly i've discovered this is kind of a mess and as you
> note
> > > some places we can't just "fix" it for abi compatibility reasons.
> > >
> > > in some instances the packing is being applied to structures where
> it is
> > > essentially a noop. i.e. natural alignment gets you the same thing
> so it
> > > is superfluous.
> > >
> > > in some instances the packing is being applied to structures that
> are
> > > private and it appears to be completely unnecessary e.g. some
> structure
> > > that isn't nested into something else and sizeof() or offsetof()
> fields
> > > don't matter in the context of their use.
> > >
> > > in some instances it is completely necessary usually when type
> punning
> > > buffers containing network framing etc...
> > >
> > > unfortunately the standard doesn't offer me an out here as there is
> an
> > > issue of placement of the pragma/attributes that do the packing.
> > >
> > > for places it isn't needed it, whatever i just expand empty. for
> places
> > > it is superfluous again because msvc has no stable abi (we're not
> > > established yet) again i just expand empty. finally for the places
> where
> > > it is needed i'll probably need to expand conditionally but i think
> the
> > > instances are far fewer than current use.
> >
> > Optimally, we will have a common macro (or other solution) to support
> both GCC/CLANG and MSVC to replace or supplement __rte_packed. However,
> the cost of this may be an API break if we replace __rte_packed.
> >
> > >
> > > >
> > > > The same risk applies to __rte_aligned(), but with lower
> probability.
> > >
> > > so that's the long winded story of why they are both expanded empty
> for
> > > now for msvc. but when the time comes i want to submit patch series
> that
> > > focus on each specifically to generate robust discussion.
> >
> > Sounds like the right path to take.
> >
> > Now, I'm thinking ahead here...
> >
> > We should be prepared to accept a major ABI/API break at one point in
> time, to replace our home-grown macros with C11 standard solutions and
> to fully support MSVC. This is not happening anytime soon, but the
> Techboard should acknowledge that this is going to happen (with an
> unspecified release), so it can be formally announced. The sooner it is
> announced, the more time developers will have to prepare for it.
>
> so, just to avoid any confusion i want to make it clear that i am not
> planning to submit changes that would change abi as a part of supporting
> msvc (aside from changing to standard atomics which we agreed on).
Thank you for clarifying.
>
> in general there are some cleanups we could make in the area of code
> maintainability and portability and we may want to discuss the
> advantages or disadvantages of making those changes. but i think those
> changes are a topic unrelated to windows or msvc specifically.
This was the point I was trying to make, when I proposed accepting a major ABI/API break. Sorry about my unclear wording.
If we collect a wish list of breaking changes, I would personally prefer a "big bang" major ABI/API break, rather than a series of incremental API/ABI breaks over multiple DPDK releases. In this regard, we could mix both changes driven by the migration to pure C11 (e.g. getting rid of now-obsolete macros, such as RTE_BUILD_BUG_ON, and compiler intrinsics, such as __rte_aligned) and MSVC portability changes (e.g. an improved macro to support structure packing).
>
> >
> > All the details do not need to be known at the time of the
> announcement; they can be added along the way, based on the discussions
> from your future patches.
>
> >
> > >
> > > ty
^ permalink raw reply [relevance 4%]
* Re: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
2023-04-15 7:16 3% ` Morten Brørup
@ 2023-04-15 20:52 4% ` Tyler Retzlaff
2023-04-15 22:41 4% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-15 20:52 UTC (permalink / raw)
To: Morten Brørup
Cc: bruce.richardson, david.marchand, thomas, konstantin.ananyev, dev
On Sat, Apr 15, 2023 at 09:16:21AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Friday, 14 April 2023 19.02
> >
> > On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Thursday, 13 April 2023 23.26
> > > >
> > > > For now expand a lot of common rte macros empty. The catch here is
> > we
> > > > need to test that most of the macros do what they should but at the
> > same
> > > > time they are blocking work needed to bootstrap of the unit tests.
> > > >
> > > > Later we will return and provide (where possible) expansions that
> > work
> > > > correctly for msvc and where not possible provide some alternate
> > macros
> > > > to achieve the same outcome.
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
>
> [...]
>
> > > > /**
> > > > * Force alignment
> > > > */
> > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > > +#else
> > > > +#define __rte_aligned(a)
> > > > +#endif
> > >
> > > It should be reviewed that __rte_aligned() is only used for
> > optimization purposes, and is not required for DPDK to function
> > properly.
> >
> > so to expand on what i have in mind (and explain why i leave it expanded
> > empty for now)
> >
> > while msvc has a __declspec for align there is a mismatch between
> > where gcc and msvc want it placed to control alignment of objects.
> >
> > msvc support won't be functional in 23.07 because of atomics. so once
> > we reach the 23.11 cycle (where we can merge c11 changes) it means we
> > can also use standard _Alignas which can accomplish the same thing
> > but portably.
>
> That (C11 standard _Alignas) should be the roadmap for solving the alignment requirements.
>
> This should be a general principle for DPDK... if the C standard offers something, don't reinvent our own. And as a consequence of the upgrade to C11, we should deprecate all our own now-obsolete substitutes for these.
>
> >
> > full disclosure the catch is i still have to properly locate the <thing>
> > that does the alignment and some small questions about the expansion and
> > use of the existing macro.
> >
> > on the subject of DPDK requiring proper alignment, you're right it
> > is generally for performance but also for pre-c11 atomics.
> >
> > one question i have been asking myself is would the community see value
> > in more compile time assertions / testing of the size and alignment of
> > structures and offset of structure fields? we have a few key
> > RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> > comprehensive protection.
>
> Absolutely. Catching bugs at build time is much better than any alternative!
that's handy feedback. i am now encouraged to include more compile time
checks in advance of or along with changes related to structure abi.
follow on question, once we do get to use c11 would something like
_Static_assert be preferable over RTE_BUILD_BUG_ON? structures sensitive
to layout could be co-located with the asserts right at the point of
definition. or is there something extra RTE_BUILD_BUG_ON gives us?
>
> > > > /**
> > > > * Force a structure to be packed
> > > > */
> > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > #define __rte_packed __attribute__((__packed__))
> > > > +#else
> > > > +#define __rte_packed
> > > > +#endif
> > >
> > > Similar comment as for __rte_aligned(); however, I consider it more
> > likely that structure packing is a functional requirement, and not just
> > used for optimization. Based on my experience, it may be used for
> > packing network structures; perhaps not in DPDK itself but maybe in DPDK
> > applications.
> >
> > so interestingly i've discovered this is kind of a mess and as you note
> > some places we can't just "fix" it for abi compatibility reasons.
> >
> > in some instances the packing is being applied to structures where it is
> > essentially a noop. i.e. natural alignment gets you the same thing so it
> > is superfluous.
> >
> > in some instances the packing is being applied to structures that are
> > private and it appears to be completely unnecessary e.g. some structure
> > that isn't nested into something else and sizeof() or offsetof() fields
> > don't matter in the context of their use.
> >
> > in some instances it is completely necessary usually when type punning
> > buffers containing network framing etc...
> >
> > unfortunately the standard doesn't offer me an out here as there is an
> > issue of placement of the pragma/attributes that do the packing.
> >
> > for places it isn't needed, i just expand empty. for places
> > it is superfluous again because msvc has no stable abi (we're not
> > established yet) again i just expand empty. finally for the places where
> > it is needed i'll probably need to expand conditionally but i think the
> > instances are far fewer than current use.
>
> Optimally, we will have a common macro (or other solution) to support both GCC/CLANG and MSVC to replace or supplement __rte_packed. However, the cost of this may be an API break if we replace __rte_packed.
>
> >
> > >
> > > The same risk applies to __rte_aligned(), but with lower probability.
> >
> > so that's the long winded story of why they are both expanded empty for
> > now for msvc. but when the time comes i want to submit patch series that
> > focus on each specifically to generate robust discussion.
>
> Sounds like the right path to take.
>
> Now, I'm thinking ahead here...
>
> We should be prepared to accept a major ABI/API break at one point in time, to replace our home-grown macros with C11 standard solutions and to fully support MSVC. This is not happening anytime soon, but the Techboard should acknowledge that this is going to happen (with an unspecified release), so it can be formally announced. The sooner it is announced, the more time developers will have to prepare for it.
so, just to avoid any confusion i want to make it clear that i am not
planning to submit changes that would change abi as a part of supporting
msvc (aside from changing to standard atomics which we agreed on).
in general there are some cleanups we could make in the area of code
maintainability and portability and we may want to discuss the
advantages or disadvantages of making those changes. but i think those
changes are a topic unrelated to windows or msvc specifically.
>
> All the details do not need to be known at the time of the announcement; they can be added along the way, based on the discussions from your future patches.
>
> >
> > ty
^ permalink raw reply [relevance 4%]
* RE: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
2023-04-14 17:02 4% ` Tyler Retzlaff
@ 2023-04-15 7:16 3% ` Morten Brørup
2023-04-15 20:52 4% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-04-15 7:16 UTC (permalink / raw)
To: Tyler Retzlaff, bruce.richardson, david.marchand, thomas,
konstantin.ananyev
Cc: dev
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 14 April 2023 19.02
>
> On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Thursday, 13 April 2023 23.26
> > >
> > > For now expand a lot of common rte macros empty. The catch here is we
> > > need to test that most of the macros do what they should but at the same
> > > time they are blocking work needed to bootstrap the unit tests.
> > >
> > > Later we will return and provide (where possible) expansions that work
> > > correctly for msvc and where not possible provide some alternate macros
> > > to achieve the same outcome.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
[...]
> > > /**
> > > * Force alignment
> > > */
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > +#else
> > > +#define __rte_aligned(a)
> > > +#endif
> >
> > It should be reviewed that __rte_aligned() is only used for
> > optimization purposes, and is not required for DPDK to function
> > properly.
>
> so to expand on what i have in mind (and explain why i leave it expanded
> empty for now)
>
> while msvc has a __declspec for align there is a mismatch between
> where gcc and msvc want it placed to control alignment of objects.
>
> msvc support won't be functional in 23.07 because of atomics. so once
> we reach the 23.11 cycle (where we can merge c11 changes) it means we
> can also use standard _Alignas which can accomplish the same thing
> but portably.
That (C11 standard _Alignas) should be the roadmap for solving the alignment requirements.
This should be a general principle for DPDK... if the C standard offers something, don't reinvent our own. And as a consequence of the upgrade to C11, we should deprecate all our own now-obsolete substitutes for these.
>
> full disclosure the catch is i still have to properly locate the <thing>
> that does the alignment and some small questions about the expansion and
> use of the existing macro.
>
> on the subject of DPDK requiring proper alignment, you're right it
> is generally for performance but also for pre-c11 atomics.
>
> one question i have been asking myself is would the community see value
> in more compile time assertions / testing of the size and alignment of
> structures and offset of structure fields? we have a few key
> RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> comprehensive protection.
Absolutely. Catching bugs at build time is much better than any alternative!
> > > /**
> > > * Force a structure to be packed
> > > */
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > #define __rte_packed __attribute__((__packed__))
> > > +#else
> > > +#define __rte_packed
> > > +#endif
> >
> > Similar comment as for __rte_aligned(); however, I consider it more
> > likely that structure packing is a functional requirement, and not just
> > used for optimization. Based on my experience, it may be used for
> > packing network structures; perhaps not in DPDK itself but maybe in DPDK
> > applications.
>
> so interestingly i've discovered this is kind of a mess and as you note
> some places we can't just "fix" it for abi compatibility reasons.
>
> in some instances the packing is being applied to structures where it is
> essentially a noop. i.e. natural alignment gets you the same thing so it
> is superfluous.
>
> in some instances the packing is being applied to structures that are
> private and it appears to be completely unnecessary e.g. some structure
> that isn't nested into something else and sizeof() or offsetof() fields
> don't matter in the context of their use.
>
> in some instances it is completely necessary usually when type punning
> buffers containing network framing etc...
>
> unfortunately the standard doesn't offer me an out here as there is an
> issue of placement of the pragma/attributes that do the packing.
>
> for places it isn't needed, i just expand empty. for places
> it is superfluous again because msvc has no stable abi (we're not
> established yet) again i just expand empty. finally for the places where
> it is needed i'll probably need to expand conditionally but i think the
> instances are far fewer than current use.
Optimally, we will have a common macro (or other solution) to support both GCC/CLANG and MSVC to replace or supplement __rte_packed. However, the cost of this may be an API break if we replace __rte_packed.
>
> >
> > The same risk applies to __rte_aligned(), but with lower probability.
>
> so that's the long winded story of why they are both expanded empty for
> now for msvc. but when the time comes i want to submit patch series that
> focus on each specifically to generate robust discussion.
Sounds like the right path to take.
Now, I'm thinking ahead here...
We should be prepared to accept a major ABI/API break at one point in time, to replace our home-grown macros with C11 standard solutions and to fully support MSVC. This is not happening anytime soon, but the Techboard should acknowledge that this is going to happen (with an unspecified release), so it can be formally announced. The sooner it is announced, the more time developers will have to prepare for it.
All the details do not need to be known at the time of the announcement; they can be added along the way, based on the discussions from your future patches.
>
> ty
^ permalink raw reply [relevance 3%]
* [PATCH v6 11/15] eal: expand most macros to empty when using MSVC
@ 2023-04-15 1:15 5% ` Tyler Retzlaff
2023-04-15 1:15 3% ` [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-15 1:15 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now expand a lot of common rte macros empty. The catch here is we
need to test that most of the macros do what they should but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (!!(x))
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (!!(x))
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..5417f68 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -62,10 +62,18 @@
__GNUC_PATCHLEVEL__)
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
(type *)(ptr); \
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
+#else
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
#endif
/** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC
2023-04-15 1:15 5% ` [PATCH v6 11/15] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-15 1:15 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-15 1:15 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
@ 2023-04-14 17:02 4% ` Tyler Retzlaff
2023-04-15 7:16 3% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-14 17:02 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, bruce.richardson, david.marchand, thomas, konstantin.ananyev
On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Thursday, 13 April 2023 23.26
> >
> > For now expand a lot of common rte macros empty. The catch here is we
> > need to test that most of the macros do what they should but at the same
> > time they are blocking work needed to bootstrap the unit tests.
> >
> > Later we will return and provide (where possible) expansions that work
> > correctly for msvc and where not possible provide some alternate macros
> > to achieve the same outcome.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> > lib/eal/include/rte_branch_prediction.h | 8 ++++++
> > lib/eal/include/rte_common.h | 45
> > +++++++++++++++++++++++++++++++++
> > lib/eal/include/rte_compat.h | 20 +++++++++++++++
> > 3 files changed, 73 insertions(+)
> >
> > diff --git a/lib/eal/include/rte_branch_prediction.h
> > b/lib/eal/include/rte_branch_prediction.h
> > index 0256a9d..d9a0224 100644
> > --- a/lib/eal/include/rte_branch_prediction.h
> > +++ b/lib/eal/include/rte_branch_prediction.h
> > @@ -25,7 +25,11 @@
> > *
> > */
> > #ifndef likely
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > #define likely(x) __builtin_expect(!!(x), 1)
> > +#else
> > +#define likely(x) (x)
>
> This must be (!!(x)), because x may be non-Boolean, e.g. likely(n & 0x10), and likely() must return Boolean (0 or 1).
yes, you're right. will fix.
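a tiny example of the difference, with made-up macro names just for
illustration:

#define LIKELY_BROKEN(x) (x)
#define LIKELY_FIXED(x) (!!(x))

void example(void)
{
	int flags = 0x10;
	int a = LIKELY_BROKEN(flags & 0x10); /* a == 0x10, leaks the raw value */
	int b = LIKELY_FIXED(flags & 0x10);  /* b == 1, matches __builtin_expect(!!(x), 1) */
	(void)a;
	(void)b;
}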
>
> > +#endif
> > #endif /* likely */
> >
> > /**
> > @@ -39,7 +43,11 @@
> > *
> > */
> > #ifndef unlikely
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > #define unlikely(x) __builtin_expect(!!(x), 0)
> > +#else
> > +#define unlikely(x) (x)
>
> This must also be (!!(x)), for the same reason as above.
ack
>
> > +#endif
> > #endif /* unlikely */
> >
> > #ifdef __cplusplus
> > diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
> > index 2f464e3..1bdaa2d 100644
> > --- a/lib/eal/include/rte_common.h
> > +++ b/lib/eal/include/rte_common.h
> > @@ -65,7 +65,11 @@
> > /**
> > * Force alignment
> > */
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > +#else
> > +#define __rte_aligned(a)
> > +#endif
>
> It should be reviewed that __rte_aligned() is only used for optimization purposes, and is not required for DPDK to function properly.
so to expand on what i have in mind (and explain why i leave it expanded
empty for now)
while msvc has a __declspec for align there is a mismatch between
where gcc and msvc want it placed to control alignment of objects.
msvc support won't be functional in 23.07 because of atomics. so once
we reach the 23.11 cycle (where we can merge c11 changes) it means we
can also use standard _Alignas which can accomplish the same thing
but portably.
full disclosure the catch is i still have to properly locate the <thing>
that does the alignment and some small questions about the expansion and
use of the existing macro.
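roughly what i mean by placement, as a sketch only (made-up struct names,
not the final macro):

#include <stdint.h>

/* gcc/clang accept the attribute after the closing brace, which is where
 * __rte_aligned()/__rte_cache_aligned are written today */
struct gcc_style {
	uint64_t counter;
} __attribute__((__aligned__(64)));

/* msvc wants __declspec(align()) up front on the declaration */
__declspec(align(64)) struct msvc_style {
	uint64_t counter;
};

/* c11 _Alignas applies to members or objects, so a portable spelling
 * probably means moving where the macro sits, not just redefining it */
struct c11_style {
	_Alignas(64) uint64_t counter;
};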
on the subject of DPDK requiring proper alignment, you're right it
is generally for performance but also for pre-c11 atomics.
one question i have been asking myself is would the community see value
in more compile time assertions / testing of the size and alignment of
structures and offset of structure fields? we have a few key
RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
comprehensive protection.
>
> >
> > #ifdef RTE_ARCH_STRICT_ALIGN
> > typedef uint64_t unaligned_uint64_t __rte_aligned(1);
> > @@ -80,16 +84,29 @@
> > /**
> > * Force a structure to be packed
> > */
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > #define __rte_packed __attribute__((__packed__))
> > +#else
> > +#define __rte_packed
> > +#endif
>
> Similar comment as for __rte_aligned(); however, I consider it more likely that structure packing is a functional requirement, and not just used for optimization. Based on my experience, it may be used for packing network structures; perhaps not in DPDK itself but maybe in DPDK applications.
so interestingly i've discovered this is kind of a mess and as you note
some places we can't just "fix" it for abi compatibility reasons.
in some instances the packing is being applied to structures where it is
essentially a noop. i.e. natural alignment gets you the same thing so it
is superfluous.
in some instances the packing is being applied to structures that are
private and it appears to be completely unnecessary e.g. some structure
that isn't nested into something else and sizeof() or offsetof() fields
don't matter in the context of their use.
in some instances it is completely necessary usually when type punning
buffers containing network framing etc...
unfortunately the standard doesn't offer me an out here as there is an
issue of placement of the pragma/attributes that do the packing.
for places it isn't needed, i just expand empty. for places
it is superfluous again because msvc has no stable abi (we're not
established yet) again i just expand empty. finally for the places where
it is needed i'll probably need to expand conditionally but i think the
instances are far fewer than current use.
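for those places, the conditional expansion will probably end up looking
something like this sketch (made-up header struct, not a proposed patch):

#include <stdint.h>

/* gcc/clang: the attribute rides on the definition */
struct __attribute__((__packed__)) hdr_gcc {
	uint8_t version_ihl;
	uint8_t tos;
	uint16_t total_length;
};

/* msvc: packing is a pragma that has to bracket the definition, so a
 * single trailing __rte_packed token cannot express it */
#pragma pack(push, 1)
struct hdr_msvc {
	uint8_t version_ihl;
	uint8_t tos;
	uint16_t total_length;
};
#pragma pack(pop)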
>
> The same risk applies to __rte_aligned(), but with lower probability.
so that's the long winded story of why they are both expanded empty for
now for msvc. but when the time comes i want to submit patch series that
focus on each specifically to generate robust discussion.
ty
^ permalink raw reply [relevance 4%]
* Re: [PATCH] reorder: improve buffer structure layout
2023-04-14 14:54 3% ` Bruce Richardson
@ 2023-04-14 15:30 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-14 15:30 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Volodymyr Fialko, dev, Reshma Pattan, jerinj, anoobj
On Fri, 14 Apr 2023 15:54:13 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Fri, Apr 14, 2023 at 07:52:30AM -0700, Stephen Hemminger wrote:
> > On Fri, 14 Apr 2023 10:43:43 +0200
> > Volodymyr Fialko <vfialko@marvell.com> wrote:
> >
> > > diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> > > index f55f383700..7418202b04 100644
> > > --- a/lib/reorder/rte_reorder.c
> > > +++ b/lib/reorder/rte_reorder.c
> > > @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
> > > char name[RTE_REORDER_NAMESIZE];
> > > uint32_t min_seqn; /**< Lowest seq. number that can be in the buffer */
> > > unsigned int memsize; /**< memory area size of reorder buffer */
> > > + int is_initialized; /**< flag indicates that buffer was initialized */
> > > +
> > > struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
> > > struct cir_buffer order_buf; /**< buffer used to reorder entries */
> > > - int is_initialized;
> > > } __rte_cache_aligned;
> > >
> > > static void
> >
> > Since this is ABI change it will have to wait for 23.11 release
>
> It shouldn't be an ABI change. This struct is defined in a C file, rather
> than a header, so is not exposed to end applications.
>
> /Bruce
Sorry, Bruce is right.
You might want to use uint8_t or bool for a simple flag.
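For illustration, the struct from the patch with that change applied (a
sketch only, needs <stdbool.h>):

struct rte_reorder_buffer {
	char name[RTE_REORDER_NAMESIZE];
	uint32_t min_seqn; /**< Lowest seq. number that can be in the buffer */
	unsigned int memsize; /**< memory area size of reorder buffer */
	bool is_initialized; /**< flag indicates that buffer was initialized */

	struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
	struct cir_buffer order_buf; /**< buffer used to reorder entries */
} __rte_cache_aligned;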
^ permalink raw reply [relevance 0%]
* Re: [PATCH] reorder: improve buffer structure layout
2023-04-14 14:52 3% ` Stephen Hemminger
@ 2023-04-14 14:54 3% ` Bruce Richardson
2023-04-14 15:30 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-14 14:54 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Volodymyr Fialko, dev, Reshma Pattan, jerinj, anoobj
On Fri, Apr 14, 2023 at 07:52:30AM -0700, Stephen Hemminger wrote:
> On Fri, 14 Apr 2023 10:43:43 +0200
> Volodymyr Fialko <vfialko@marvell.com> wrote:
>
> > diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> > index f55f383700..7418202b04 100644
> > --- a/lib/reorder/rte_reorder.c
> > +++ b/lib/reorder/rte_reorder.c
> > @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
> > char name[RTE_REORDER_NAMESIZE];
> > uint32_t min_seqn; /**< Lowest seq. number that can be in the buffer */
> > unsigned int memsize; /**< memory area size of reorder buffer */
> > + int is_initialized; /**< flag indicates that buffer was initialized */
> > +
> > struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
> > struct cir_buffer order_buf; /**< buffer used to reorder entries */
> > - int is_initialized;
> > } __rte_cache_aligned;
> >
> > static void
>
> Since this is ABI change it will have to wait for 23.11 release
It shouldn't be an ABI change. This struct is defined in a C file, rather
than a header, so is not exposed to end applications.
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [PATCH] reorder: improve buffer structure layout
@ 2023-04-14 14:52 3% ` Stephen Hemminger
2023-04-14 14:54 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-04-14 14:52 UTC (permalink / raw)
To: Volodymyr Fialko; +Cc: dev, Reshma Pattan, jerinj, anoobj
On Fri, 14 Apr 2023 10:43:43 +0200
Volodymyr Fialko <vfialko@marvell.com> wrote:
> diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> index f55f383700..7418202b04 100644
> --- a/lib/reorder/rte_reorder.c
> +++ b/lib/reorder/rte_reorder.c
> @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
> char name[RTE_REORDER_NAMESIZE];
> uint32_t min_seqn; /**< Lowest seq. number that can be in the buffer */
> unsigned int memsize; /**< memory area size of reorder buffer */
> + int is_initialized; /**< flag indicates that buffer was initialized */
> +
> struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
> struct cir_buffer order_buf; /**< buffer used to reorder entries */
> - int is_initialized;
> } __rte_cache_aligned;
>
> static void
Since this is ABI change it will have to wait for 23.11 release
^ permalink raw reply [relevance 3%]
* [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC
2023-04-13 21:26 6% ` [PATCH v5 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-13 21:26 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-13 21:26 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
@ 2023-04-13 21:26 6% ` Tyler Retzlaff
2023-04-13 21:26 3% ` [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-13 21:26 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now expand a lot of common rte macros empty. The catch here is we
need to test that most of the macros do what they should but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 ++++++
lib/eal/include/rte_common.h | 45 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 +++++++++++++++
3 files changed, 73 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (x)
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (x)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..1bdaa2d 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +84,29 @@
/**
* Force a structure to be packed
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +127,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +166,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +174,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +251,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +280,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +478,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 6%]
* [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t
@ 2023-04-13 11:53 3% ` Sivaprasad Tummala
2023-04-17 4:31 3% ` [PATCH v3 1/4] " Sivaprasad Tummala
0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-13 11:53 UTC (permalink / raw)
To: david.hunt; +Cc: dev
A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in the
DPDK 23.07 release to support the monitorx instruction on Epyc processors.
This results in ABI breakage for legacy apps.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..65e849616d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+ ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on Epyc processors.
--
2.34.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-11 20:34 0% ` Tyler Retzlaff
@ 2023-04-12 8:50 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-04-12 8:50 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Tue, Apr 11, 2023 at 01:34:14PM -0700, Tyler Retzlaff wrote:
> On Tue, Apr 11, 2023 at 11:24:07AM +0100, Bruce Richardson wrote:
> > On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> > > Windows does not support versioned symbols. Fortunately Windows also
> > > doesn't have an exported stable ABI.
> > >
> > > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> > > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > > functions.
> > >
> > > Windows does have a way to achieve similar versioning for symbols but it
> > > is not a simple #define so it will be done as a work package later.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > ---
> > > lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> > > 1 file changed, 16 insertions(+)
> > >
> > > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > > index 2bac2de..284c16e 100644
> > > --- a/lib/telemetry/telemetry_data.c
> > > +++ b/lib/telemetry/telemetry_data.c
> > > @@ -82,8 +82,16 @@
> > > /* mark the v23 function as the older version, and v24 as the default version */
> > > VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> > > BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > > int64_t x), rte_tel_data_add_array_int_v24);
> > > +#else
> > > +int
> > > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > > +{
> > > + return rte_tel_data_add_array_int_v24(d, x);
> > > +}
> > > +#endif
> > >
> >
> > Can't see any general way to do this from the versioning header file, so
> > agree that we need some changes here. Rather than defining a public
> > funcion, we could keep the diff reduced by just using a macro alias here,
> > right? For example:
> >
> > #ifdef RTE_TOOLCHAIN_MSVC
> > #define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
> > #else
> > MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > int64_t x), rte_tel_data_add_array_int_v24);
> > #endif
> >
> > If this is a temporary measure, I'd tend towards the shortest solution that
> > can work. However, no strong opinions, so, either using functions as you
> > have it, or macros:
>
> so i have to leave it as it is the reason being the version.map ->
> exports.def generation does not handle this. the .def only contains the
> rte_tel_data_add_array_int symbol. if we expand it away to the _v24 name
> the link will fail.
>
Ah, thanks for clarifying
> let's consume the change as-is for now and i will work on the
> generalized solution when changes are integrated that actually make the
> windows dso/dll functional.
>
Sure, good for now. Keep my ack on any future versions.
> >
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 0%]
* [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC
2023-04-11 21:12 6% ` [PATCH v4 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-11 21:12 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-11 21:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH v4 11/14] eal: expand most macros to empty when using MSVC
@ 2023-04-11 21:12 6% ` Tyler Retzlaff
2023-04-11 21:12 3% ` [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-11 21:12 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now expand a lot of common rte macros empty. The catch here is we
need to test that most of the macros do what they should but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++++
lib/eal/include/rte_common.h | 41 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++++++
3 files changed, 69 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (x)
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (x)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..dd41315 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -85,11 +89,20 @@
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +123,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +162,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +170,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +247,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +276,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +474,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 6%]
* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-11 10:24 0% ` Bruce Richardson
@ 2023-04-11 20:34 0% ` Tyler Retzlaff
2023-04-12 8:50 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-11 20:34 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Tue, Apr 11, 2023 at 11:24:07AM +0100, Bruce Richardson wrote:
> On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> > Windows does not support versioned symbols. Fortunately Windows also
> > doesn't have an exported stable ABI.
> >
> > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > functions.
> >
> > Windows does have a way to achieve similar versioning for symbols but it
> > is not a simple #define so it will be done as a work package later.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> > lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> > 1 file changed, 16 insertions(+)
> >
> > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > index 2bac2de..284c16e 100644
> > --- a/lib/telemetry/telemetry_data.c
> > +++ b/lib/telemetry/telemetry_data.c
> > @@ -82,8 +82,16 @@
> > /* mark the v23 function as the older version, and v24 as the default version */
> > VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> > BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > int64_t x), rte_tel_data_add_array_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > +{
> > + return rte_tel_data_add_array_int_v24(d, x);
> > +}
> > +#endif
> >
>
> Can't see any general way to do this from the versioning header file, so
> agree that we need some changes here. Rather than defining a public
> funcion, we could keep the diff reduced by just using a macro alias here,
> right? For example:
>
> #ifdef RTE_TOOLCHAIN_MSVC
> #define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
> #else
> MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> int64_t x), rte_tel_data_add_array_int_v24);
> #endif
>
> If this is a temporary measure, I'd tend towards the shortest solution that
> can work. However, no strong opinions, so, either using functions as you
> have it, or macros:
so i have to leave it as it is, the reason being the version.map ->
exports.def generation does not handle this. the .def only contains the
rte_tel_data_add_array_int symbol. if we expand it away to the _v24 name
the link will fail.
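to illustrate, in simplified form (not the actual generated files): the
.def asks the linker to export the unversioned name,

EXPORTS
	rte_tel_data_add_array_int

but the macro alias would rewrite every definition and call site to
rte_tel_data_add_array_int_v24, so no object provides the name listed in
the .def and the dll link fails with an unresolved export.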
let's consume the change as-is for now and i will work on the
generalized solution when changes are integrated that actually make the
windows dso/dll functional.
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
2023-04-11 10:04 4% ` [PATCH 1/3] " Nithin Dabilpuram
@ 2023-04-11 18:05 3% ` Stephen Hemminger
2023-04-18 8:33 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-04-11 18:05 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: Thomas Monjalon, Akhil Goyal, jerinj, dev
On Tue, 11 Apr 2023 15:34:07 +0530
Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 4bacf9fcd9..866cd4e8ee 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> */
> uint32_t ip_reassembly_en : 1;
>
> + /** Enable out of place processing on inline inbound packets.
> + *
> + * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> + * inbound SA if supported by driver. PMD need to register mbuf
> + * dynamic field using rte_security_oop_dynfield_register()
> + * and security session creation would fail if dynfield is not
> + * registered successfully.
> + * * 0: Disable OOP processing for this session (default).
> + */
> + uint32_t ingress_oop : 1;
> +
> /** Reserved bit fields for future extension
> *
> * User should ensure reserved_opts is cleared as it may change in
> @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> *
> * Note: Reduce number of bits in reserved_opts for every new option.
> */
> - uint32_t reserved_opts : 17;
> + uint32_t reserved_opts : 16;
> };
NAK
Let me repeat the reserved bit rant. YAGNI
Reserved space is not usable without ABI breakage unless the existing
code enforces that reserved space has to be zero.
Just saying "User should ensure reserved_opts is cleared" is not enough.
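By enforcement I mean something along these lines (hypothetical check at
session creation time, not existing code):

	/* refuse configurations that set reserved bits, so the bits can be
	 * given a meaning later without silently changing behaviour for
	 * binaries that left garbage there */
	if (conf->ipsec.options.reserved_opts != 0)
		return -EINVAL;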
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-06 0:45 3% ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
@ 2023-04-11 10:24 0% ` Bruce Richardson
2023-04-11 20:34 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-11 10:24 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> Windows does not support versioned symbols. Fortunately Windows also
> doesn't have an exported stable ABI.
>
> Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> functions.
>
> Windows does have a way to achieve similar versioning for symbols but it
> is not a simple #define so it will be done as a work package later.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> index 2bac2de..284c16e 100644
> --- a/lib/telemetry/telemetry_data.c
> +++ b/lib/telemetry/telemetry_data.c
> @@ -82,8 +82,16 @@
> /* mark the v23 function as the older version, and v24 as the default version */
> VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
> MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> int64_t x), rte_tel_data_add_array_int_v24);
> +#else
> +int
> +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> +{
> + return rte_tel_data_add_array_int_v24(d, x);
> +}
> +#endif
>
Can't see any general way to do this from the versioning header file, so
agree that we need some changes here. Rather than defining a public
funcion, we could keep the diff reduced by just using a macro alias here,
right? For example:
#ifdef RTE_TOOLCHAIN_MSVC
#define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
#else
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
#endif
If this is a temporary measure, I'd tend towards the shortest solution that
can work. However, no strong opinions, so, either using functions as you
have it, or macros:
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 0%]
* [PATCH 1/3] security: introduce out of place support for inline ingress
@ 2023-04-11 10:04 4% ` Nithin Dabilpuram
2023-04-11 18:05 3% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Nithin Dabilpuram @ 2023-04-11 10:04 UTC (permalink / raw)
To: Thomas Monjalon, Akhil Goyal; +Cc: jerinj, dev, Nithin Dabilpuram
Similar to the out-of-place (OOP) processing support that exists for
lookaside crypto/security sessions, inline ingress security
sessions may also need out-of-place processing in use cases
where the original encrypted packet needs to be retained for
post-processing. So, for NICs which have such HW support,
a new SA option is provided to indicate whether OOP needs to
be enabled on that inline ingress security session or not.
Since, for inline ingress sessions, the packet is not received by
the CPU until the processing is done, we can only have a per-SA
option and not a per-packet option as with lookaside sessions.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
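A rough RX-side usage sketch, assuming the SA was created with options.ingress_oop = 1 and that the PMD registered the OOP dynfield; the accessor rte_security_oop_dynfield() is the one added in the diff below, while the function name and the free policy are illustrative assumptions only:
#include <rte_mbuf.h>
#include <rte_security.h>

static void
handle_inline_oop_pkt(struct rte_mbuf *m)
{
	/* 'm' is the processed (decrypted) packet; the PMD stores the
	 * retained original encrypted mbuf in the OOP dynamic field.
	 */
	struct rte_mbuf *orig = *rte_security_oop_dynfield(m);

	if (orig != NULL) {
		/* post-process the original ciphertext here, then free it */
		rte_pktmbuf_free(orig);
	}
}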
---
devtools/libabigail.abignore | 4 +++
lib/security/rte_security.c | 17 +++++++++++++
lib/security/rte_security.h | 39 +++++++++++++++++++++++++++++-
lib/security/rte_security_driver.h | 8 ++++++
lib/security/version.map | 2 ++
5 files changed, 69 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..414baac060 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -40,3 +40,7 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore change to reserved opts for new SA option
+[suppress_type]
+ name = rte_security_ipsec_sa_options
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c2199dd8db 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -27,7 +27,10 @@
} while (0)
#define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
+#define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
+
int rte_security_dynfield_offset = -1;
+int rte_security_oop_dynfield_offset = -1;
int
rte_security_dynfield_register(void)
@@ -42,6 +45,20 @@ rte_security_dynfield_register(void)
return rte_security_dynfield_offset;
}
+int
+rte_security_oop_dynfield_register(void)
+{
+ static const struct rte_mbuf_dynfield dynfield_desc = {
+ .name = RTE_SECURITY_OOP_DYNFIELD_NAME,
+ .size = sizeof(rte_security_oop_dynfield_t),
+ .align = __alignof__(rte_security_oop_dynfield_t),
+ };
+
+ rte_security_oop_dynfield_offset =
+ rte_mbuf_dynfield_register(&dynfield_desc);
+ return rte_security_oop_dynfield_offset;
+}
+
void *
rte_security_session_create(struct rte_security_ctx *instance,
struct rte_security_session_conf *conf,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..866cd4e8ee 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t ip_reassembly_en : 1;
+ /** Enable out of place processing on inline inbound packets.
+ *
+ * * 1: Enable driver to perform Out-of-place (OOP) processing for this inline
+ * inbound SA if supported by the driver. The PMD needs to register the mbuf
+ * dynamic field using rte_security_oop_dynfield_register(),
+ * and security session creation will fail if the dynfield is not
+ * registered successfully.
+ * * 0: Disable OOP processing for this session (default).
+ */
+ uint32_t ingress_oop : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 17;
+ uint32_t reserved_opts : 16;
};
/** IPSec security association direction */
@@ -812,6 +823,13 @@ typedef uint64_t rte_security_dynfield_t;
/** Dynamic mbuf field for device-specific metadata */
extern int rte_security_dynfield_offset;
+/** Out-of-Place(OOP) processing field type */
+typedef struct rte_mbuf *rte_security_oop_dynfield_t;
+/** Dynamic mbuf field for pointer to original mbuf for
+ * OOP processing session.
+ */
+extern int rte_security_oop_dynfield_offset;
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
@@ -834,6 +852,25 @@ rte_security_dynfield(struct rte_mbuf *mbuf)
rte_security_dynfield_t *);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get pointer to mbuf field for original mbuf pointer when
+ * Out-Of-Place(OOP) processing is enabled in security session.
+ *
+ * @param mbuf packet to access
+ * @return pointer to mbuf field
+ */
+__rte_experimental
+static inline rte_security_oop_dynfield_t *
+rte_security_oop_dynfield(struct rte_mbuf *mbuf)
+{
+ return RTE_MBUF_DYNFIELD(mbuf,
+ rte_security_oop_dynfield_offset,
+ rte_security_oop_dynfield_t *);
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..91e7786ab7 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -190,6 +190,14 @@ typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
__rte_internal
int rte_security_dynfield_register(void);
+/**
+ * @internal
+ * Register mbuf dynamic field for Security inline ingress Out-of-Place(OOP)
+ * processing.
+ */
+__rte_internal
+int rte_security_oop_dynfield_register(void);
+
/**
* Update the mbuf with provided metadata.
*
diff --git a/lib/security/version.map b/lib/security/version.map
index 07dcce9ffb..59a95f40bd 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -23,10 +23,12 @@ EXPERIMENTAL {
rte_security_macsec_sc_stats_get;
rte_security_session_stats_get;
rte_security_session_update;
+ rte_security_oop_dynfield_offset;
};
INTERNAL {
global:
rte_security_dynfield_register;
+ rte_security_oop_dynfield_register;
};
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2] version: 23.07-rc0
2023-04-03 9:37 10% ` [PATCH v2] " David Marchand
@ 2023-04-06 7:44 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-04-06 7:44 UTC (permalink / raw)
To: dev; +Cc: thomas
On Mon, Apr 3, 2023 at 11:45 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
Applied!
--
David Marchand
^ permalink raw reply [relevance 0%]
* [PATCH v3 08/11] eal: expand most macros to empty when using msvc
@ 2023-04-06 0:45 6% ` Tyler Retzlaff
2023-04-06 0:45 3% ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-06 0:45 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc, and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 ++++++++
lib/eal/include/rte_common.h | 33 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++++++++++
3 files changed, 61 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (x)
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (x)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
#define __rte_may_alias __attribute__((__may_alias__))
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 6%]
* [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-06 0:45 6% ` [PATCH v3 08/11] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-06 0:45 3% ` Tyler Retzlaff
2023-04-11 10:24 0% ` Bruce Richardson
1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-06 0:45 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH] MAINTAINERS: sort file entries
@ 2023-04-05 23:12 17% Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-05 23:12 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Thomas Monjalon
The list of file paths (F:) is only partially sorted
in some cases.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
MAINTAINERS | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e50999f..5fa432b00aac 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -83,26 +83,26 @@ Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
F: MAINTAINERS
F: devtools/build-dict.sh
-F: devtools/check-abi.sh
F: devtools/check-abi-version.sh
+F: devtools/check-abi.sh
F: devtools/check-doc-vs-code.sh
F: devtools/check-dup-includes.sh
-F: devtools/check-maintainers.sh
F: devtools/check-forbidden-tokens.awk
F: devtools/check-git-log.sh
+F: devtools/check-maintainers.sh
F: devtools/check-spdx-tag.sh
F: devtools/check-symbol-change.sh
F: devtools/check-symbol-maps.sh
F: devtools/checkpatches.sh
F: devtools/get-maintainer.sh
F: devtools/git-log-fixes.sh
+F: devtools/libabigail.abignore
F: devtools/load-devel-config
F: devtools/parse-flow-support.sh
F: devtools/process-iwyu.py
F: devtools/update-abi.sh
F: devtools/update-patches.py
F: devtools/update_version_map_abi.py
-F: devtools/libabigail.abignore
F: devtools/words-case.txt
F: license/
F: .editorconfig
@@ -114,16 +114,16 @@ F: Makefile
F: meson.build
F: meson_options.txt
F: config/
+F: buildtools/call-sphinx-build.py
F: buildtools/check-symbols.sh
F: buildtools/chkincs/
-F: buildtools/call-sphinx-build.py
F: buildtools/get-cpu-count.py
F: buildtools/get-numa-count.py
F: buildtools/list-dir-globs.py
F: buildtools/map-list-symbol.sh
F: buildtools/pkg-config/
-F: buildtools/symlink-drivers-solibs.sh
F: buildtools/symlink-drivers-solibs.py
+F: buildtools/symlink-drivers-solibs.sh
F: devtools/test-meson-builds.sh
F: devtools/check-meson.py
--
2.39.2
^ permalink raw reply [relevance 17%]
* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-05 16:02 0% ` Tyler Retzlaff
@ 2023-04-05 16:17 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-04-05 16:17 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Wed, Apr 05, 2023 at 09:02:10AM -0700, Tyler Retzlaff wrote:
> On Wed, Apr 05, 2023 at 11:56:05AM +0100, Bruce Richardson wrote:
> > On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> > > Windows does not support versioned symbols. Fortunately Windows also
> > > doesn't have an exported stable ABI.
> > >
> > > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> > > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > > functions.
> > >
> > > Windows does have a way to achieve similar versioning for symbols but it
> > > is not a simple #define so it will be done as a work package later.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> >
> > Does this require a change in telemetry itself? Can it be done via the
> > header file with the versioning macros in it, so it would apply to any
> > other versioned functions we have in DPDK?
>
> i didn't spend a lot of time thinking if the existing macros could be
> made to expand in the way needed. there is a way of doing versioning on
> windows but it is foreign to how this symbol versioning scheme works so
> i plan to investigate it separately after i get unit tests running.
>
> for now i know what i'm doing is ugly but i need to get protection of
> unit tests so i'm doing minimal changes to get to that point. if you're
> not comfortable with this going in on a temporary basis i can remove it
> from this series and we can work on it as a separate patch set.
>
> my bar is pretty low here, as long as it doesn't break any existing
> linux/gcc/clang etc ok, if msvc is not right i'll take a second pass
> and design each stop-gap properly. it already doesn't work so things
> aren't made worse.
>
> let me know if i need to carve this out of the series.
>
It's not that ugly. :-) If no other clear solution is apparent, I can certainly
live with this.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-05 10:56 0% ` Bruce Richardson
@ 2023-04-05 16:02 0% ` Tyler Retzlaff
2023-04-05 16:17 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-05 16:02 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Wed, Apr 05, 2023 at 11:56:05AM +0100, Bruce Richardson wrote:
> On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> > Windows does not support versioned symbols. Fortunately Windows also
> > doesn't have an exported stable ABI.
> >
> > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > functions.
> >
> > Windows does have a way to achieve similar versioning for symbols but it
> > is not a simple #define so it will be done as a work package later.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
>
> Does this require a change in telemetry itself? Can it be done via the
> header file with the versioning macros in it, so it would apply to any
> other versioned functions we have in DPDK?
i didn't spend a lot of time thinking if the existing macros could be
made to expand in the way needed. there is a way of doing versioning on
windows but it is foreign to how this symbol versioning scheme works so
i plan to investigate it separately after i get unit tests running.
for now i know what i'm doing is ugly but i need to get protection of
unit tests so i'm doing minimal changes to get to that point. if you're
not comfortable with this going in on a temporary basis i can remove it
from this series and we can work on it as a separate patch set.
my bar is pretty low here, as long as it doesn't break any existing
linux/gcc/clang etc ok, if msvc is not right i'll take a second pass
and design each stop-gap properly. it already doesn't work so things
aren't made worse.
let me know if i need to carve this out of the series.
ty
>
> /Bruce
>
> > ---
> > lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> > 1 file changed, 16 insertions(+)
> >
> > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > index 2bac2de..284c16e 100644
> > --- a/lib/telemetry/telemetry_data.c
> > +++ b/lib/telemetry/telemetry_data.c
> > @@ -82,8 +82,16 @@
> > /* mark the v23 function as the older version, and v24 as the default version */
> > VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> > BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > int64_t x), rte_tel_data_add_array_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > +{
> > + return rte_tel_data_add_array_int_v24(d, x);
> > +}
> > +#endif
> >
> > int
> > rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
> > @@ -220,8 +228,16 @@
> > /* mark the v23 function as the older version, and v24 as the default version */
> > VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
> > BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> > MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
> > const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
> > +{
> > + return rte_tel_data_add_dict_int_v24(d, name, val);
> > +}
> > +#endif
> >
> > int
> > rte_tel_data_add_dict_uint(struct rte_tel_data *d,
> > --
> > 1.8.3.1
> >
^ permalink raw reply [relevance 0%]
* [PATCH v2 0/3] vhost: add device op to offload the interrupt kick
@ 2023-04-05 12:40 3% Eelco Chaudron
2023-05-08 13:58 0% ` [PATCH v2 0/3] " Eelco Chaudron
0 siblings, 2 replies; 200+ results
From: Eelco Chaudron @ 2023-04-05 12:40 UTC (permalink / raw)
To: maxime.coquelin, chenbo.xia; +Cc: dev
This series adds an operation callback which gets called every time the
library wants to call eventfd_write(). This eventfd_write() call could
result in a system call, which could potentially block the PMD thread.
The callback function can decide whether it's ok to handle the
eventfd_write() now or have the newly introduced function,
rte_vhost_notify_guest(), called at a later time.
This can be used by 3rd party applications, like OVS, to avoid system
calls being made as part of the PMD threads.
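A rough application-side sketch of the intended flow, assuming a callback named app_guest_notify (the real op name and both signatures are assumptions based on this cover letter; locking is omitted for brevity). rte_vhost_notify_guest() is the helper introduced by this series and is meant to be called from a non-PMD context:
#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

#define MAX_PENDING 1024

static struct { int vid; uint16_t qid; } pending[MAX_PENDING];
static int n_pending;

/* Hypothetical callback invoked by the vhost library instead of doing
 * eventfd_write() itself; returning true means the application will
 * kick the guest later from another thread.
 */
static bool
app_guest_notify(int vid, uint16_t queue_id)
{
	if (n_pending == MAX_PENDING)
		return false;	/* let the library do the eventfd_write() */
	pending[n_pending].vid = vid;
	pending[n_pending].qid = queue_id;
	n_pending++;
	return true;
}

/* Flushed periodically from a dedicated (non-PMD) thread. */
static void
flush_guest_kicks(void)
{
	for (int i = 0; i < n_pending; i++)
		rte_vhost_notify_guest(pending[i].vid, pending[i].qid);
	n_pending = 0;
}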
v2: - Used vhost_virtqueue->index to find index for operation.
- Aligned function name to VDUSE RFC patchset.
- Added error and offload statistics counter.
- Mark new API as experimental.
- Change the virtual queue spin lock to read/write spin lock.
- Made shared counters atomic.
- Add versioned rte_vhost_driver_callback_register() for
ABI compliance.
Eelco Chaudron (3):
vhost: Change vhost_virtqueue access lock to a read/write one.
vhost: make the guest_notifications statistic counter atomic.
vhost: add device op to offload the interrupt kick
lib/eal/include/generic/rte_rwlock.h | 17 +++++
lib/vhost/meson.build | 2 +
lib/vhost/rte_vhost.h | 23 ++++++-
lib/vhost/socket.c | 72 ++++++++++++++++++++--
lib/vhost/version.map | 9 +++
lib/vhost/vhost.c | 92 +++++++++++++++++++++-------
lib/vhost/vhost.h | 70 ++++++++++++++-------
lib/vhost/vhost_user.c | 14 ++---
lib/vhost/virtio_net.c | 90 +++++++++++++--------------
9 files changed, 288 insertions(+), 101 deletions(-)
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-04 20:07 3% ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
@ 2023-04-05 10:56 0% ` Bruce Richardson
2023-04-05 16:02 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-05 10:56 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev
On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> Windows does not support versioned symbols. Fortunately Windows also
> doesn't have an exported stable ABI.
>
> Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
> and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> functions.
>
> Windows does have a way to achieve similar versioning for symbols but it
> is not a simple #define so it will be done as a work package later.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Does this require a change in telemetry itself? Can it be done via the
header file with the versioning macros in it, so it would apply to any
other versioned functions we have in DPDK?
/Bruce
> ---
> lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> index 2bac2de..284c16e 100644
> --- a/lib/telemetry/telemetry_data.c
> +++ b/lib/telemetry/telemetry_data.c
> @@ -82,8 +82,16 @@
> /* mark the v23 function as the older version, and v24 as the default version */
> VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
> MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> int64_t x), rte_tel_data_add_array_int_v24);
> +#else
> +int
> +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> +{
> + return rte_tel_data_add_array_int_v24(d, x);
> +}
> +#endif
>
> int
> rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
> @@ -220,8 +228,16 @@
> /* mark the v23 function as the older version, and v24 as the default version */
> VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
> BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
> MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
> const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
> +#else
> +int
> +rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
> +{
> + return rte_tel_data_add_dict_int_v24(d, name, val);
> +}
> +#endif
>
> int
> rte_tel_data_add_dict_uint(struct rte_tel_data *d,
> --
> 1.8.3.1
>
^ permalink raw reply [relevance 0%]
* [PATCH v2 6/9] eal: expand most macros to empty when using msvc
@ 2023-04-04 20:07 6% ` Tyler Retzlaff
2023-04-04 20:07 3% ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-04 20:07 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc, and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 ++++++++
lib/eal/include/rte_common.h | 33 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++++++++++
3 files changed, 61 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..3589c97 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (!!(x) == 1)
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (!!(x) == 0)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
#define __rte_may_alias __attribute__((__may_alias__))
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 6%]
* [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-04 20:07 6% ` [PATCH v2 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-04 20:07 3% ` Tyler Retzlaff
2023-04-05 10:56 0% ` Bruce Richardson
1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-04 20:07 UTC (permalink / raw)
To: dev
Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH 6/9] eal: expand most macros to empty when using msvc
@ 2023-04-03 21:52 6% ` Tyler Retzlaff
2023-04-03 21:52 3% ` [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
` (7 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-03 21:52 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, david.marchand, thomas, mb, Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for msvc, and where not possible provide some alternate macros
to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/rte_branch_prediction.h | 8 ++++++++
lib/eal/include/rte_common.h | 33 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++++++++++
3 files changed, 61 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..3589c97 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
*
*/
#ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
#define likely(x) __builtin_expect(!!(x), 1)
+#else
+#define likely(x) (!!(x) == 1)
+#endif
#endif /* likely */
/**
@@ -39,7 +43,11 @@
*
*/
#ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
#define unlikely(x) __builtin_expect(!!(x), 0)
+#else
+#define unlikely(x) (!!(x) == 0)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
/**
* Force alignment
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
#define __rte_may_alias __attribute__((__may_alias__))
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
/**
* short definition to mark a function parameter unused
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
/**
* Force a function to be inlined
*/
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
#else
+#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_internal \
__attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 6%]
* [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc
2023-04-03 21:52 6% ` [PATCH 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-03 21:52 3% ` Tyler Retzlaff
` (6 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-03 21:52 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, david.marchand, thomas, mb, Tyler Retzlaff
Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.
Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH v2] devtools: add script to check for non inclusive naming
@ 2023-04-03 14:47 14% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-03 14:47 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Shell script to find use of words that should not be used.
By default it prints matches. The -q (quiet) option
is used to just count. There is also a -l option
which lists the matching lines.
Uses the word lists from the Inclusive Naming Initiative;
see https://inclusivenaming.org/word-lists/
Examples:
$ ./devtools/check-naming-policy.sh -q
Total files: 37 errors, 90 warnings, 2 suggestions
$ ./devtools/check-naming-policy.sh -q -l lib/eal
Total lines: 32 errors, 8 warnings, 0 suggestions
Add a MAINTAINERS file entry for the new tool and re-sort
the file lists back into alphabetic order.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - fix typo in words
- add subtree (pathspec) option
- update maintainers file (and fix alphabetic order)
MAINTAINERS | 8 ++-
devtools/check-naming-policy.sh | 107 ++++++++++++++++++++++++++++++++
devtools/naming/tier1.txt | 8 +++
devtools/naming/tier2.txt | 1 +
devtools/naming/tier3.txt | 4 ++
5 files changed, 125 insertions(+), 3 deletions(-)
create mode 100755 devtools/check-naming-policy.sh
create mode 100644 devtools/naming/tier1.txt
create mode 100644 devtools/naming/tier2.txt
create mode 100644 devtools/naming/tier3.txt
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e50999f..b5881113ba85 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -83,26 +83,28 @@ Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
F: MAINTAINERS
F: devtools/build-dict.sh
-F: devtools/check-abi.sh
F: devtools/check-abi-version.sh
+F: devtools/check-abi.sh
F: devtools/check-doc-vs-code.sh
F: devtools/check-dup-includes.sh
-F: devtools/check-maintainers.sh
F: devtools/check-forbidden-tokens.awk
F: devtools/check-git-log.sh
+F: devtools/check-maintainers.sh
+F: devtools/check-naming-policy.sh
F: devtools/check-spdx-tag.sh
F: devtools/check-symbol-change.sh
F: devtools/check-symbol-maps.sh
F: devtools/checkpatches.sh
F: devtools/get-maintainer.sh
F: devtools/git-log-fixes.sh
+F: devtools/libabigail.abignore
F: devtools/load-devel-config
+F: devtools/naming/
F: devtools/parse-flow-support.sh
F: devtools/process-iwyu.py
F: devtools/update-abi.sh
F: devtools/update-patches.py
F: devtools/update_version_map_abi.py
-F: devtools/libabigail.abignore
F: devtools/words-case.txt
F: license/
F: .editorconfig
diff --git a/devtools/check-naming-policy.sh b/devtools/check-naming-policy.sh
new file mode 100755
index 000000000000..90347b415652
--- /dev/null
+++ b/devtools/check-naming-policy.sh
@@ -0,0 +1,107 @@
+#! /bin/bash
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2023 Stephen Hemminger
+#
+# This script scans the source tree and creates a list of files
+# containing words that are recommended to be avoided by the
+# Inclusive Naming Initiative.
+# See: https://inclusivenaming.org/word-lists/
+#
+# The options are:
+# -q = quiet mode, produces summary count only
+# -l = show lines instead of files with recommendations
+# -v = verbose, show a header between each tier
+#
+# Default is to scan all of DPDK source and documentation.
+# An optional pathspec can be used to limit the scan to a specific tree.
+#
+# Example:
+# check-naming-policy.sh -q doc/*
+#
+
+errors=0
+warnings=0
+suggestions=0
+quiet=false
+verbose=false
+lines='-l'
+
+print_usage () {
+ echo "usage: $(basename $0) [-l] [-q] [-v] [<pathspec>]"
+ exit 1
+}
+
+# Locate word list files
+selfdir=$(dirname $(readlink -f $0))
+words=$selfdir/naming
+
+# These give false positives
+skipfiles=( ':^devtools/naming/' \
+ ':^doc/guides/rel_notes/' \
+ ':^doc/guides/contributing/coding_style.rst' \
+ ':^doc/guides/prog_guide/glossary.rst' \
+)
+# These are obsolete
+skipfiles+=( \
+ ':^drivers/net/liquidio/' \
+ ':^drivers/net/bnx2x/' \
+ ':^lib/table/' \
+ ':^lib/port/' \
+ ':^lib/pipeline/' \
+ ':^examples/pipeline/' \
+)
+
+#
+# check_wordlist wordfile description
+check_wordlist() {
+ local list=$words/$1
+ local description=$2
+
+ git grep -i $lines -f $list -- ${skipfiles[@]} $pathspec > $tmpfile
+ count=$(wc -l < $tmpfile)
+ if ! $quiet; then
+ if [ $count -gt 0 ]; then
+ if $verbose; then
+ echo $description
+ echo $description | tr '[:print:]' '-'
+ fi
+ cat $tmpfile
+ echo
+ fi
+ fi
+ return $count
+}
+
+while getopts lqvh ARG ; do
+ case $ARG in
+ l ) lines= ;;
+ q ) quiet=true ;;
+ v ) verbose=true ;;
+ h ) print_usage ; exit 0 ;;
+ ? ) print_usage ; exit 1 ;;
+ esac
+done
+shift $(($OPTIND - 1))
+
+tmpfile=$(mktemp -t dpdk.checknames.XXXXXX)
+trap 'rm -f -- "$tmpfile"' INT TERM HUP EXIT
+
+pathspec=$*
+
+check_wordlist tier1.txt "Tier 1: Replace immediately"
+errors=$?
+
+check_wordlist tier2.txt "Tier 2: Strongly consider replacing"
+warnings=$?
+
+check_wordlist tier3.txt "Tier 3: Recommend to replace"
+suggestions=$?
+
+if [ -z "$lines" ] ; then
+ echo -n "Total lines: "
+else
+ echo -n "Total files: "
+fi
+
+echo $errors "errors," $warnings "warnings," $suggestions "suggestions"
+exit $errors
diff --git a/devtools/naming/tier1.txt b/devtools/naming/tier1.txt
new file mode 100644
index 000000000000..a0e9b549c218
--- /dev/null
+++ b/devtools/naming/tier1.txt
@@ -0,0 +1,8 @@
+abort
+blackhat
+blacklist
+cripple
+master
+slave
+whitehat
+whitelist
diff --git a/devtools/naming/tier2.txt b/devtools/naming/tier2.txt
new file mode 100644
index 000000000000..cd4280d1625c
--- /dev/null
+++ b/devtools/naming/tier2.txt
@@ -0,0 +1 @@
+sanity
diff --git a/devtools/naming/tier3.txt b/devtools/naming/tier3.txt
new file mode 100644
index 000000000000..072f6468ea47
--- /dev/null
+++ b/devtools/naming/tier3.txt
@@ -0,0 +1,4 @@
+man.in.the.middle
+segregate
+segregation
+tribe
--
2.39.2
^ permalink raw reply [relevance 14%]
* [PATCH v2] version: 23.07-rc0
2023-04-03 6:59 9% [PATCH] version: 23.07-rc0 David Marchand
@ 2023-04-03 9:37 10% ` David Marchand
2023-04-06 7:44 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-04-03 9:37 UTC (permalink / raw)
To: dev; +Cc: thomas
Start a new release cycle with empty release notes.
Bump version and ABI minor.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v1:
- fix ABI reference git repository,
---
.github/workflows/build.yml | 3 +-
ABI_VERSION | 2 +-
VERSION | 2 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_23_07.rst | 138 +++++++++++++++++++++++++
5 files changed, 142 insertions(+), 4 deletions(-)
create mode 100644 doc/guides/rel_notes/release_23_07.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index e24e47a216..edd39cbd62 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -26,8 +26,7 @@ jobs:
MINGW: ${{ matrix.config.cross == 'mingw' }}
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
- REF_GIT_REPO: https://dpdk.org/git/dpdk-stable
- REF_GIT_TAG: v22.11.1
+ REF_GIT_TAG: v23.03
RISCV64: ${{ matrix.config.cross == 'riscv64' }}
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
diff --git a/ABI_VERSION b/ABI_VERSION
index a12b18e437..3c8ce91a46 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-23.1
+23.2
diff --git a/VERSION b/VERSION
index 533bf9aa13..d3c78a13bf 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-23.03.0
+23.07.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 57475a8158..d8dfa621ec 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_23_07
release_23_03
release_22_11
release_22_07
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
new file mode 100644
index 0000000000..a9b1293689
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.07
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_23_07.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+* No ABI change that would break compatibility with 22.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
--
2.39.2
^ permalink raw reply [relevance 10%]
* [PATCH] version: 23.07-rc0
@ 2023-04-03 6:59 9% David Marchand
2023-04-03 9:37 10% ` [PATCH v2] " David Marchand
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-04-03 6:59 UTC (permalink / raw)
To: dev; +Cc: thomas
Start a new release cycle with empty release notes.
Bump version and ABI minor.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.github/workflows/build.yml | 2 +-
ABI_VERSION | 2 +-
VERSION | 2 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_23_07.rst | 138 +++++++++++++++++++++++++
5 files changed, 142 insertions(+), 3 deletions(-)
create mode 100644 doc/guides/rel_notes/release_23_07.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index e24e47a216..e824f8841c 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -27,7 +27,7 @@ jobs:
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
REF_GIT_REPO: https://dpdk.org/git/dpdk-stable
- REF_GIT_TAG: v22.11.1
+ REF_GIT_TAG: v23.03
RISCV64: ${{ matrix.config.cross == 'riscv64' }}
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
diff --git a/ABI_VERSION b/ABI_VERSION
index a12b18e437..3c8ce91a46 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-23.1
+23.2
diff --git a/VERSION b/VERSION
index 533bf9aa13..d3c78a13bf 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-23.03.0
+23.07.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 57475a8158..d8dfa621ec 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_23_07
release_23_03
release_22_11
release_22_07
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
new file mode 100644
index 0000000000..a9b1293689
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.07
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_23_07.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+* No ABI change that would break compatibility with 22.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
--
2.39.2
^ permalink raw reply [relevance 9%]
* DPDK 23.03 released
@ 2023-03-31 17:17 3% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-31 17:17 UTC (permalink / raw)
To: announce
A new major release is available:
https://fast.dpdk.org/rel/dpdk-23.03.tar.xz
Winter release numbers are quite small as usual:
1048 commits from 161 authors
1379 files changed, 85721 insertions(+), 25814 deletions(-)
It is not planned to start a maintenance branch for 23.03.
This version is ABI-compatible with 22.11.
Below are some new features:
- lock annotations
- ARM power management monitor/wakeup
- machine learning inference device API and test application
- platform bus
- 400G link speed
- queue mapping of aggregated ports
- flow quota
- more flow matching (ICMPv6, IPv6 routing extension)
- more flow actions (flex modify, congestion management)
- Intel cpfl IPU driver
- Marvell CNXK machine learning inference
- SHAKE hash algorithm for crypto
- LZ4 algorithm for compression
- more telemetry endpoints
- more tracepoints
- DTS hello world
More details in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_23_03.html
The test framework DTS is being improved and migrated into the mainline.
Please join the DTS effort for contributing, reviewing or testing.
There are 34 new contributors (including authors, reviewers and testers).
Welcome to Alok Prasad, Alvaro Karsz, Anup Prabhu, Boleslav Stankevich,
Boris Ouretskey, Chenyu Huang, Edwin Brossette, Fengnan Chang,
Francesco Mancino, Haijun Chu, Hiral Shah, Isaac Boukris, J.J. Martzki,
Jesna K E, Joshua Washington, Kamalakshitha Aligeri, Krzysztof Karas,
Leo Xu, Maayan Kashani, Michal Schmidt, Mohammad Iqbal Ahmad,
Nathan Brown, Patrick Robb, Prince Takkar, Rushil Gupta,
Saoirse O'Donovan, Shivah Shankar S, Shiyang He, Song Jiale,
Vikash Poddar, Visa Hankala, Yevgeny Kliteynik, Zerun Fu,
and Zhuobin Huang.
Below is the number of commits per employer (with authors count):
265 Marvell (33)
256 Intel (49)
175 NVIDIA (20)
98 Red Hat (6)
68 Huawei (3)
55 Corigine (9)
49 Microsoft (3)
13 Arm (5)
10 PANTHEON.tech (1)
9 Trustnet (1)
9 AMD (2)
8 Ark Networks (2)
...
A big thank you to all the courageous people who took on the unrewarding task
of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
48 Maxime Coquelin <maxime.coquelin@redhat.com>
46 Ferruh Yigit <ferruh.yigit@amd.com>
44 Morten Brørup <mb@smartsharesystems.com>
25 Ori Kam <orika@nvidia.com>
24 Tyler Retzlaff <roretzla@linux.microsoft.com>
23 Chengwen Feng <fengchengwen@huawei.com>
21 David Marchand <david.marchand@redhat.com>
21 Akhil Goyal <gakhil@marvell.com>
The next version will be 23.07 in July.
The new features for 23.07 can be submitted during the next 3 weeks:
http://core.dpdk.org/roadmap#dates
Please share your roadmap.
One last ask: please fill in this quick survey before April 7th
to help planning the next DPDK Summit:
https://docs.google.com/forms/d/1104swKV4-_nNT6GimkRBNVac1uAqX7o2P936bcGsgMc
Thanks everyone
^ permalink raw reply [relevance 3%]
* [PATCH v12 18/22] hash: move rte_hash_set_alg out header
2023-03-29 23:40 2% [PATCH v12 00/22] Convert static log types in libraries to dynamic Stephen Hemminger
@ 2023-03-29 23:40 2% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-03-29 23:40 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Ruifeng Wang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
The code for setting the CRC algorithm used for hashing is not at all
performance sensitive, and doing it inline has a couple of problems. First,
if multiple files include the header, the initialization gets done multiple
times. It also makes it harder to fix the usage of RTE_LOG().
Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, so both old and new code
will work the same.
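To illustrate the header problem, this is roughly the pattern being removed
(a condensed view of the code in the diff below):
    /* rte_hash_crc.h, included by many .c files */
    static uint8_t crc32_alg = CRC32_SW;    /* each translation unit gets its own copy */
    RTE_INIT(rte_hash_crc_init_alg)         /* a constructor is emitted in every including unit */
    {
        rte_hash_crc_set_alg(CRC32_SSE42_x64);
    }
Every file including the header carries its own copy of the variable and its
own constructor, so the initialization runs once per translation unit. With
the variable and setter moved into rte_hash_crc.c, there is a single shared
rte_hash_crc32_alg and a single initializer.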
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
lib/hash/meson.build | 1 +
lib/hash/rte_crc_arm64.h | 8 ++---
lib/hash/rte_crc_x86.h | 10 +++---
lib/hash/rte_hash_crc.c | 68 ++++++++++++++++++++++++++++++++++++++++
lib/hash/rte_hash_crc.h | 48 ++--------------------------
lib/hash/version.map | 7 +++++
6 files changed, 88 insertions(+), 54 deletions(-)
create mode 100644 lib/hash/rte_hash_crc.c
diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
sources = files(
'rte_cuckoo_hash.c',
+ 'rte_hash_crc.c',
'rte_fbk_hash.c',
'rte_thash.c',
'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u8(data, init_val);
return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u16(data, init_val);
return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u32(data, init_val);
return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u64(data, init_val);
return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
static inline uint32_t
rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u8(data, init_val);
return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u16(data, init_val);
return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u32(data, init_val);
return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
{
#ifdef RTE_ARCH_X86_64
- if (likely(crc32_alg == CRC32_SSE42_x64))
+ if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
return crc32c_sse42_u64(data, init_val);
#endif
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u64_mimic(data, init_val);
return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ * An OR of following flags:
+ * - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ * - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ * - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ * - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+ rte_hash_crc32_alg = CRC32_SW;
+
+ if (alg == CRC32_SW)
+ return;
+
+#if defined RTE_ARCH_X86
+ if (!(alg & CRC32_SSE42_x64))
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+ rte_hash_crc32_alg = CRC32_SSE42;
+ else
+ rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+ if (!(alg & CRC32_ARM64))
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+ rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+ if (rte_hash_crc32_alg == CRC32_SW)
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+ rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+ rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+ rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e8145ee44204 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
#include <rte_branch_prediction.h>
#include <rte_common.h>
#include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
#include "rte_crc_sw.h"
@@ -31,7 +29,7 @@ extern "C" {
#define CRC32_SSE42_x64 (CRC32_x64|CRC32_SSE42)
#define CRC32_ARM64 (1U << 3)
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
#include "rte_crc_arm64.h"
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
* - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
*
*/
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
- crc32_alg = CRC32_SW;
-
- if (alg == CRC32_SW)
- return;
-
-#if defined RTE_ARCH_X86
- if (!(alg & CRC32_SSE42_x64))
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
- if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
- crc32_alg = CRC32_SSE42;
- else
- crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
- if (!(alg & CRC32_ARM64))
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
- if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
- crc32_alg = CRC32_ARM64;
-#endif
-
- if (crc32_alg == CRC32_SW)
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
- rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
- rte_hash_crc_set_alg(CRC32_ARM64);
-#else
- rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
#ifdef __DOXYGEN__
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..8b22aad5626b 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
rte_hash_add_key_with_hash;
rte_hash_add_key_with_hash_data;
rte_hash_count;
+ rte_hash_crc_set_alg;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
rte_thash_gfni;
rte_thash_gfni_bulk;
};
+
+INTERNAL {
+ global:
+
+ rte_hash_crc32_alg;
+};
--
2.39.2
^ permalink raw reply [relevance 2%]
* [PATCH v12 00/22] Convert static log types in libraries to dynamic
@ 2023-03-29 23:40 2% Stephen Hemminger
2023-03-29 23:40 2% ` [PATCH v12 18/22] hash: move rte_hash_set_alg out header Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-03-29 23:40 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy ones and goes on to the more complex ones.
There are several options on how to treat the old static types:
leave them there, mark them as deprecated, or remove them.
This version removes them, since nothing in the current
DPDK policies guarantees that they can't be removed.
Note: there is one patch in this series that will get
incorrectly flagged as an ABI change.
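For reference, the conversion applied throughout the series follows roughly
this pattern (a simplified sketch; the "foo" names are only illustrative):
    /* before: call sites used a static type declared in rte_log.h */
    RTE_LOG(ERR, FOO, "something went wrong\n");
    /* after: the library registers its own dynamic type in one of its .c files */
    RTE_LOG_REGISTER_DEFAULT(foo_logtype, INFO);
    #define RTE_LOGTYPE_FOO foo_logtype
    RTE_LOG(ERR, FOO, "something went wrong\n");  /* call sites stay the same */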
v12 - rebase and add table and pipeline libraries
v11 - fix include check on arm cross build
v10 - add necessary rte_compat.h in thash_gfni stub for arm
v9 - fix handling of crc32 alg in lib/hash.
make it an internal global variable.
fix gfni stubs for case where they are not used.
Stephen Hemminger (22):
gso: don't log message on non TCP/UDP
eal: drop no longer used GSO logtype
log: drop unused RTE_LOGTYPE_TIMER
efd: convert RTE_LOGTYPE_EFD to dynamic type
mbuf: convert RTE_LOGTYPE_MBUF to dynamic type
acl: convert RTE_LOGTYPE_ACL to dynamic type
examples/power: replace use of RTE_LOGTYPE_POWER
examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
power: convert RTE_LOGTYPE_POWER to dynamic type
ring: convert RTE_LOGTYPE_RING to dynamic type
mempool: convert RTE_LOGTYPE_MEMPOOL to dynamic type
lpm: convert RTE_LOGTYPE_LPM to dynamic types
kni: convert RTE_LOGTYPE_KNI to dynamic type
sched: convert RTE_LOGTYPE_SCHED to dynamic type
examples/ipsec-secgw: replace RTE_LOGTYPE_PORT
port: convert RTE_LOGTYPE_PORT to dynamic type
hash: move rte_thash_gfni stubs out of header file
hash: move rte_hash_set_alg out header
hash: convert RTE_LOGTYPE_HASH to dynamic type
table: convert RTE_LOGTYPE_TABLE to dynamic type
app/test: remove use of RTE_LOGTYPE_PIPELINE
pipeline: convert RTE_LOGTYPE_PIPELINE to dynamic type
app/test/test_acl.c | 3 +-
app/test/test_table_acl.c | 50 +++++++++++-------------
app/test/test_table_pipeline.c | 40 +++++++++----------
examples/distributor/main.c | 2 +-
examples/ipsec-secgw/sa.c | 6 +--
examples/l3fwd-power/main.c | 17 +++++----
lib/acl/acl_bld.c | 1 +
lib/acl/acl_gen.c | 1 +
lib/acl/acl_log.h | 4 ++
lib/acl/rte_acl.c | 4 ++
lib/acl/tb_mem.c | 3 +-
lib/eal/common/eal_common_log.c | 17 ---------
lib/eal/include/rte_log.h | 34 ++++++++---------
lib/efd/rte_efd.c | 4 ++
lib/fib/fib_log.h | 4 ++
lib/fib/rte_fib.c | 3 ++
lib/fib/rte_fib6.c | 2 +
lib/gso/rte_gso.c | 4 +-
lib/gso/rte_gso.h | 1 +
lib/hash/meson.build | 9 ++++-
lib/hash/rte_crc_arm64.h | 8 ++--
lib/hash/rte_crc_x86.h | 10 ++---
lib/hash/rte_cuckoo_hash.c | 5 +++
lib/hash/rte_fbk_hash.c | 5 +++
lib/hash/rte_hash_crc.c | 68 +++++++++++++++++++++++++++++++++
lib/hash/rte_hash_crc.h | 48 ++---------------------
lib/hash/rte_thash.c | 3 ++
lib/hash/rte_thash_gfni.c | 50 ++++++++++++++++++++++++
lib/hash/rte_thash_gfni.h | 30 +++++----------
lib/hash/version.map | 11 ++++++
lib/kni/rte_kni.c | 3 ++
lib/lpm/lpm_log.h | 4 ++
lib/lpm/rte_lpm.c | 3 ++
lib/lpm/rte_lpm6.c | 1 +
lib/mbuf/mbuf_log.h | 4 ++
lib/mbuf/rte_mbuf.c | 4 ++
lib/mbuf/rte_mbuf_dyn.c | 2 +
lib/mbuf/rte_mbuf_pool_ops.c | 2 +
lib/mempool/rte_mempool.c | 2 +
lib/mempool/rte_mempool.h | 8 ++++
lib/mempool/version.map | 3 ++
lib/pipeline/rte_pipeline.c | 2 +
lib/pipeline/rte_pipeline.h | 5 +++
lib/port/rte_port_ethdev.c | 3 ++
lib/port/rte_port_eventdev.c | 4 ++
lib/port/rte_port_fd.c | 3 ++
lib/port/rte_port_frag.c | 3 ++
lib/port/rte_port_kni.c | 3 ++
lib/port/rte_port_ras.c | 3 ++
lib/port/rte_port_ring.c | 3 ++
lib/port/rte_port_sched.c | 3 ++
lib/port/rte_port_source_sink.c | 3 ++
lib/port/rte_port_sym_crypto.c | 3 ++
lib/power/guest_channel.c | 3 +-
lib/power/power_common.c | 2 +
lib/power/power_common.h | 3 +-
lib/power/power_kvm_vm.c | 1 +
lib/power/rte_power.c | 1 +
lib/rib/rib_log.h | 4 ++
lib/rib/rte_rib.c | 3 ++
lib/rib/rte_rib6.c | 3 ++
lib/ring/rte_ring.c | 3 ++
lib/sched/rte_pie.c | 1 +
lib/sched/rte_sched.c | 5 +++
lib/sched/rte_sched_log.h | 4 ++
lib/table/meson.build | 1 +
lib/table/rte_table.c | 8 ++++
lib/table/rte_table.h | 4 ++
68 files changed, 391 insertions(+), 176 deletions(-)
create mode 100644 lib/acl/acl_log.h
create mode 100644 lib/fib/fib_log.h
create mode 100644 lib/hash/rte_hash_crc.c
create mode 100644 lib/hash/rte_thash_gfni.c
create mode 100644 lib/lpm/lpm_log.h
create mode 100644 lib/mbuf/mbuf_log.h
create mode 100644 lib/rib/rib_log.h
create mode 100644 lib/sched/rte_sched_log.h
create mode 100644 lib/table/rte_table.c
--
2.39.2
^ permalink raw reply [relevance 2%]
* Re: [PATCH v3 03/15] graph: move node process into inline function
2023-03-29 15:34 3% ` Stephen Hemminger
@ 2023-03-29 15:41 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-03-29 15:41 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Zhirun Yan, dev, jerinj, kirankumark, ndabilpuram, cunming.liang,
haiyue.wang
On Wed, Mar 29, 2023 at 9:04 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Wed, 29 Mar 2023 15:43:28 +0900
> Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> > +/**
> > + * @internal
> > + *
> > + * Enqueue a given node to the tail of the graph reel.
> > + *
> > + * @param graph
> > + * Pointer Graph object.
> > + * @param node
> > + * Pointer to node object to be enqueued.
> > + */
> > +static __rte_always_inline void
> > +__rte_node_process(struct rte_graph *graph, struct rte_node *node)
> > +{
> > + uint64_t start;
> > + uint16_t rc;
> > + void **objs;
> > +
> > + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> > + objs = node->objs;
> > + rte_prefetch0(objs);
> > +
> > + if (rte_graph_has_stats_feature()) {
> > + start = rte_rdtsc();
> > + rc = node->process(graph, node, objs, node->idx);
> > + node->total_cycles += rte_rdtsc() - start;
> > + node->total_calls++;
> > + node->total_objs += rc;
> > + } else {
> > + node->process(graph, node, objs, node->idx);
> > + }
> > + node->idx = 0;
> > +}
> > +
>
> Why inline? Doing everything as inlines has long term ABI
> impacts. And this is not a super critical performance path.
This is one of the real fast-path routines.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 03/15] graph: move node process into inline function
@ 2023-03-29 15:34 3% ` Stephen Hemminger
2023-03-29 15:41 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-03-29 15:34 UTC (permalink / raw)
To: Zhirun Yan
Cc: dev, jerinj, kirankumark, ndabilpuram, cunming.liang, haiyue.wang
On Wed, 29 Mar 2023 15:43:28 +0900
Zhirun Yan <zhirun.yan@intel.com> wrote:
> +/**
> + * @internal
> + *
> + * Enqueue a given node to the tail of the graph reel.
> + *
> + * @param graph
> + * Pointer Graph object.
> + * @param node
> + * Pointer to node object to be enqueued.
> + */
> +static __rte_always_inline void
> +__rte_node_process(struct rte_graph *graph, struct rte_node *node)
> +{
> + uint64_t start;
> + uint16_t rc;
> + void **objs;
> +
> + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> + objs = node->objs;
> + rte_prefetch0(objs);
> +
> + if (rte_graph_has_stats_feature()) {
> + start = rte_rdtsc();
> + rc = node->process(graph, node, objs, node->idx);
> + node->total_cycles += rte_rdtsc() - start;
> + node->total_calls++;
> + node->total_objs += rc;
> + } else {
> + node->process(graph, node, objs, node->idx);
> + }
> + node->idx = 0;
> +}
> +
Why inline? Doing everything as inlines has long term ABI
impacts. And this is not a super critical performance path.
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 0/2] ABI check updates
2023-03-23 17:15 9% ` [PATCH v2 " David Marchand
2023-03-23 17:15 21% ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
2023-03-23 17:15 41% ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
@ 2023-03-28 18:38 4% ` Thomas Monjalon
2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-28 18:38 UTC (permalink / raw)
To: David Marchand; +Cc: dev
23/03/2023 18:15, David Marchand:
> This series moves ABI exceptions in a single configuration file and
> simplifies the ABI check so that no artefact depending on libabigail
> version is stored in the CI.
Applied, thanks.
^ permalink raw reply [relevance 4%]
* [PATCH v2 2/2] devtools: stop depending on libabigail xml format
2023-03-23 17:15 9% ` [PATCH v2 " David Marchand
2023-03-23 17:15 21% ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
@ 2023-03-23 17:15 41% ` David Marchand
2023-03-28 18:38 4% ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
2 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
To: dev; +Cc: Aaron Conole, Michael Santana, Thomas Monjalon, Bruce Richardson
An ABI reference depends on:
- DPDK build options,
- toolchain compiler and versions,
- libabigail version.
The reason for the last point is that, when the ABI reference was
generated, the ABI xml files were dumped in a format that depends on the
libabigail version.
Those xml files were then later used to compare against modified
code.
There are a few disadvantages with this method:
- since the xml files are dependent on the libabigail version, when
updating CI environments, a change in the libabigail package requires
regenerating the ABI references,
- comparing xml files with abidiff is not well tested, as we (DPDK)
uncovered bugs in libabigail that were not hit when comparing .so files,
Switch to comparing .so files directly, remove this dependency and update the
GHA script.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.ci/linux-build.sh | 4 ----
.github/workflows/build.yml | 2 +-
MAINTAINERS | 1 -
devtools/check-abi.sh | 17 +++++++++--------
devtools/gen-abi.sh | 27 ---------------------------
devtools/test-meson-builds.sh | 5 -----
6 files changed, 10 insertions(+), 46 deletions(-)
delete mode 100755 devtools/gen-abi.sh
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 150b38bd7a..9631e342b5 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -130,8 +130,6 @@ fi
if [ "$ABI_CHECKS" = "true" ]; then
if [ "$(cat libabigail/VERSION 2>/dev/null)" != "$LIBABIGAIL_VERSION" ]; then
rm -rf libabigail
- # if we change libabigail, invalidate existing abi cache
- rm -rf reference
fi
if [ ! -d libabigail ]; then
@@ -153,7 +151,6 @@ if [ "$ABI_CHECKS" = "true" ]; then
meson setup $OPTS -Dexamples= $refsrcdir $refsrcdir/build
ninja -C $refsrcdir/build
DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
- devtools/gen-abi.sh reference
find reference/usr/local -name '*.a' -delete
rm -rf reference/usr/local/bin
rm -rf reference/usr/local/share
@@ -161,7 +158,6 @@ if [ "$ABI_CHECKS" = "true" ]; then
fi
DESTDIR=$(pwd)/install ninja -C build install
- devtools/gen-abi.sh install
devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index bbcb535afb..e24e47a216 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -70,7 +70,7 @@ jobs:
run: |
echo 'ccache=ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W) >> $GITHUB_OUTPUT
echo 'libabigail=libabigail-${{ matrix.config.os }}' >> $GITHUB_OUTPUT
- echo 'abi=abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}' >> $GITHUB_OUTPUT
+ echo 'abi=abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.REF_GIT_TAG }}' >> $GITHUB_OUTPUT
- name: Retrieve ccache cache
uses: actions/cache@v3
with:
diff --git a/MAINTAINERS b/MAINTAINERS
index 1a33ad8592..280058adfc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -94,7 +94,6 @@ F: devtools/check-spdx-tag.sh
F: devtools/check-symbol-change.sh
F: devtools/check-symbol-maps.sh
F: devtools/checkpatches.sh
-F: devtools/gen-abi.sh
F: devtools/get-maintainer.sh
F: devtools/git-log-fixes.sh
F: devtools/load-devel-config
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index f74432be5d..39e3798931 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -37,20 +37,21 @@ fi
export newdir ABIDIFF_OPTIONS ABIDIFF_SUPPRESSIONS
export diff_func='run_diff() {
- dump=$1
- name=$(basename $dump)
- if grep -q "; SKIP_LIBRARY=${name%.dump}\>" $ABIDIFF_SUPPRESSIONS; then
+ lib=$1
+ name=$(basename $lib)
+ if grep -q "; SKIP_LIBRARY=${name%.so.*}\>" $ABIDIFF_SUPPRESSIONS; then
echo "Skipped $name" >&2
return 0
fi
- dump2=$(find $newdir -name $name)
- if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
+ # Look for a library with the same major ABI version
+ lib2=$(find $newdir -name "${name%.*}.*" -a ! -type l)
+ if [ -z "$lib2" ] || [ ! -e "$lib2" ]; then
echo "Error: cannot find $name in $newdir" >&2
return 1
fi
- abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
+ abidiff $ABIDIFF_OPTIONS $lib $lib2 || {
abiret=$?
- echo "Error: ABI issue reported for abidiff $ABIDIFF_OPTIONS $dump $dump2" >&2
+ echo "Error: ABI issue reported for abidiff $ABIDIFF_OPTIONS $lib $lib2" >&2
if [ $(($abiret & 3)) -ne 0 ]; then
echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue." >&2
fi
@@ -65,7 +66,7 @@ export diff_func='run_diff() {
}'
error=
-find $refdir -name "*.dump" |
+find $refdir -name "*.so.*" -a ! -type l |
xargs -n1 -P0 sh -c 'eval "$diff_func"; run_diff $0' ||
error=1
diff --git a/devtools/gen-abi.sh b/devtools/gen-abi.sh
deleted file mode 100755
index 61f7510ea1..0000000000
--- a/devtools/gen-abi.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/sh -e
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (c) 2019 Red Hat, Inc.
-
-if [ $# != 1 ]; then
- echo "Usage: $0 installdir" >&2
- exit 1
-fi
-
-installdir=$1
-if [ ! -d $installdir ]; then
- echo "Error: install directory '$installdir' does not exist." >&2
- exit 1
-fi
-
-dumpdir=$installdir/dump
-rm -rf $dumpdir
-mkdir -p $dumpdir
-for f in $(find $installdir -name "*.so.*"); do
- if test -L $f; then
- continue
- fi
-
- libname=$(basename $f)
- echo $dumpdir/${libname%.so*}.dump $f
-done |
-xargs -n2 -P0 abidw --out-file
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 48f4e52df3..9131088c9d 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -204,7 +204,6 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
-Dexamples= $*
compile $abirefdir/build
install_target $abirefdir/build $abirefdir/$targetdir
- $srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
# save disk space by removing static libs and apps
find $abirefdir/$targetdir/usr/local -name '*.a' -delete
@@ -215,10 +214,6 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
install_target $builds_dir/$targetdir \
$(readlink -f $builds_dir/$targetdir/install)
echo "Checking ABI compatibility of $targetdir" >&$verbose
- echo $srcdir/devtools/gen-abi.sh \
- $(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
- $srcdir/devtools/gen-abi.sh \
- $(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
--
2.39.2
^ permalink raw reply [relevance 41%]
* [PATCH v2 1/2] devtools: unify configuration for ABI check
2023-03-23 17:15 9% ` [PATCH v2 " David Marchand
@ 2023-03-23 17:15 21% ` David Marchand
2023-03-23 17:15 41% ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
2023-03-28 18:38 4% ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
2 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
We have been skipping removed libraries in the ABI check by updating the
check-abi.sh script itself.
See, for example, commit 33584c19ddc2 ("raw/dpaa2_qdma: remove driver").
Having two places for exceptions is a bit confusing, and those exceptions
are best kept in a single configuration file outside the check script.
Besides, the next patch will switch the check from comparing ABI xml files
to directly comparing .so files. In this mode, libabigail does not
support the soname_regexp syntax used for the mlx glue libraries.
Let's handle these special cases in libabigail.abignore using comments.
Taking the raw/dpaa2_qdma driver as an example, it would be possible to
skip it by adding:
; SKIP_LIBRARY=librte_net_mlx4_glue
+; SKIP_LIBRARY=librte_raw_dpaa2_qdma
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
devtools/check-abi.sh | 9 +++++++--
devtools/libabigail.abignore | 12 +++++++++---
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index d253a12768..f74432be5d 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -10,7 +10,8 @@ fi
refdir=$1
newdir=$2
warnonly=${3:-}
-ABIDIFF_OPTIONS="--suppr $(dirname $0)/libabigail.abignore --no-added-syms"
+ABIDIFF_SUPPRESSIONS=$(dirname $(readlink -f $0))/libabigail.abignore
+ABIDIFF_OPTIONS="--suppr $ABIDIFF_SUPPRESSIONS --no-added-syms"
if [ ! -d $refdir ]; then
echo "Error: reference directory '$refdir' does not exist." >&2
@@ -34,10 +35,14 @@ else
ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir2 $incdir2"
fi
-export newdir ABIDIFF_OPTIONS
+export newdir ABIDIFF_OPTIONS ABIDIFF_SUPPRESSIONS
export diff_func='run_diff() {
dump=$1
name=$(basename $dump)
+ if grep -q "; SKIP_LIBRARY=${name%.dump}\>" $ABIDIFF_SUPPRESSIONS; then
+ echo "Skipped $name" >&2
+ return 0
+ fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
echo "Error: cannot find $name in $newdir" >&2
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..3ff51509de 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -16,9 +16,15 @@
[suppress_variable]
name_regexp = _pmd_info$
-; Ignore changes on soname for mlx glue internal drivers
-[suppress_file]
- soname_regexp = ^librte_.*mlx.*glue\.
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+; Special rules to skip libraries ;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;
+; This is not a libabigail rule (see check-abi.sh).
+; This is used for driver removal and other special cases like mlx glue libs.
+;
+; SKIP_LIBRARY=librte_common_mlx5_glue
+; SKIP_LIBRARY=librte_net_mlx4_glue
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Experimental APIs exceptions ;
--
2.39.2
^ permalink raw reply [relevance 21%]
* [PATCH v2 0/2] ABI check updates
@ 2023-03-23 17:15 9% ` David Marchand
2023-03-23 17:15 21% ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
To: dev
This series moves the ABI exceptions into a single configuration file and
simplifies the ABI check so that no artefact depending on the libabigail
version is stored in the CI.
--
David Marchand
Changes since v1:
- rebased after abi check parallelisation rework,
David Marchand (2):
devtools: unify configuration for ABI check
devtools: stop depending on libabigail xml format
.ci/linux-build.sh | 4 ----
.github/workflows/build.yml | 2 +-
MAINTAINERS | 1 -
devtools/check-abi.sh | 24 +++++++++++++++---------
devtools/gen-abi.sh | 27 ---------------------------
devtools/libabigail.abignore | 12 +++++++++---
devtools/test-meson-builds.sh | 5 -----
7 files changed, 25 insertions(+), 50 deletions(-)
delete mode 100755 devtools/gen-abi.sh
--
2.39.2
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [RFC] ethdev: improve link speed to string
@ 2023-03-23 14:40 3% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-03-23 14:40 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Min Hu (Connor), Andrew Rybchenko, thomas, dev
On 2/10/2023 2:41 PM, Ferruh Yigit wrote:
> On 1/19/2023 4:45 PM, Stephen Hemminger wrote:
>> On Thu, 19 Jan 2023 11:41:12 +0000
>> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>
>>>>>>> Nothing good will happen if you try to use the function to
>>>>>>> print two different link speeds in one log message.
>>>>>> You are right.
>>>>>> And use malloc for "name" will result in memory leakage, which is also
>>>>>> not a good option.
>>>>>>
>>>>>> BTW, do you think if we need to modify the function
>>>>>> "rte_eth_link_speed_to_str"?
>>>>>
>>>>> IMHO it would be more pain than gain in this case.
>>>>>
>>>>> .
>>>>>
>>>> Agree with you. Thanks Andrew
>>>>
>>>
>>> It can be option to update the API as following in next ABI break release:
>>>
>>> const char *
>>> rte_eth_link_speed_to_str(uint32_t link_speed, char *buf, size_t buf_size);
>>>
>>> For this a deprecation notice needs to be sent and approved, not sure
>>> though if it worth.
>>>
>>>
>>> Meanwhile, what do you think to update string 'Invalid' to something
>>> like 'Irregular' or 'Erratic', does this help to convey the right message?
>>
>>
>> API versioning is possible here.
>
>
> Agree, ABI versioning can be used here.
>
> @Connor, what do you think?
Updating patch status as rejected; if you still want to pursue the feature,
please send a separate patch that updates the API via ABI versioning.
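For reference, the buffer-based variant discussed earlier in this thread would
look roughly like this (a sketch only: such a signature does not exist in the
current API, which takes only the link speed, and it would have to go through
ABI versioning):
    /* proposed shape: the caller provides the output buffer */
    const char *
    rte_eth_link_speed_to_str(uint32_t link_speed, char *buf, size_t buf_size);
    /* hypothetical usage */
    char buf[32];
    printf("Link speed: %s\n",
           rte_eth_link_speed_to_str(link.link_speed, buf, sizeof(buf)));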
Thanks,
ferruh
^ permalink raw reply [relevance 3%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-23 11:58 3% ` fengchengwen
@ 2023-03-23 12:51 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-23 12:51 UTC (permalink / raw)
To: Olivier Matz, Ferruh Yigit, fengchengwen; +Cc: dev, David Marchand
23/03/2023 12:58, fengchengwen:
> On 2023/3/22 21:49, Thomas Monjalon wrote:
> > 22/03/2023 09:53, Ferruh Yigit:
> >> On 3/22/2023 1:15 AM, fengchengwen wrote:
> >>> On 2023/3/21 21:50, Ferruh Yigit wrote:
> >>>> On 3/17/2023 2:43 AM, fengchengwen wrote:
> >>>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
> >>>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
> >>>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
> >>>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
> >>>>>>> parameter 'value' is NULL when parsed 'only keys'.
> >>>>>>>
> >>>>>>> It may leads to segment fault when parse args with 'only key', this
> >>>>>>> patchset fixes rest of them.
> >>>>>>>
> >>>>>>> Chengwen Feng (5):
> >>>>>>> app/pdump: fix segment fault when parse args
> >>>>>>> net/memif: fix segment fault when parse devargs
> >>>>>>> net/pcap: fix segment fault when parse devargs
> >>>>>>> net/ring: fix segment fault when parse devargs
> >>>>>>> net/sfc: fix segment fault when parse devargs
> >>>>>>
> >>>>>> Hi Chengwen,
> >>>>>>
> >>>>>> Did you scan all `rte_kvargs_process()` instances?
> >>>>>
> >>>>> No, I was just looking at the modules I was concerned about.
> >>>>> I looked at it briefly, and some modules had the same problem.
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> And if there would be a way to tell kvargs that a value is expected (or
> >>>>>> not) this checks could be done in kvargs layer, I think this also can be
> >>>>>> to look at.
> >>>>>
> >>>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> >>>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> >>>>> But it also break the API's behavior.
> >>>>>
> >>>>
> >>>> What about having a new API, like `rte_kvargs_process_extended()`,
> >>>>
> >>>> That gets an additional flag as parameter, which may have values like
> >>>> following to indicate if key expects a value or not:
> >>>> ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
> >>>> ARG_WITH_VALUE --> "key=value"
> >>>> ARG_NO_VALUE --> 'key'
> >>>>
> >>>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> >>>> `rte_kvargs_process()`.
> >>>>
> >>>> This way instead of adding checks, relevant usage can be replaced by
> >>>> `rte_kvargs_process_extended()`, this requires similar amount of change
> >>>> but code will be more clean I think.
> >>>>
> >>>> Do you think does this work?
> >>>
> >>> Yes, it can work.
> >>>
> >>> But I think the introduction of new API adds some complexity.
> >>> And a good API definition could more simpler.
> >>>
> >>
> >> Other option is changing existing API, but that may be widely used and
> >> changing it impacts applications, I don't think it worth.
> >
> > I've planned a change in kvargs API 5 years ago and never did it:
> >>From doc/guides/rel_notes/deprecation.rst:
> > "
> > * kvargs: The function ``rte_kvargs_process`` will get a new parameter
> > for returning key match count. It will ease handling of no-match case.
> > "
>
> I think it's okay to add extra parameter for rte_kvargs_process. But it will
> break ABI.
> Also I notice patchset was deferred in patchwork.
>
> Does it mean that the new version can't accept until the 23.11 release cycle ?
It is a bit too late to take a decision in the 23.03 cycle.
Let's continue this discussion.
We can either have some fixes in 23.07 or have an ABI breaking change in 23.11.
^ permalink raw reply [relevance 3%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-22 13:49 0% ` Thomas Monjalon
@ 2023-03-23 11:58 3% ` fengchengwen
2023-03-23 12:51 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-23 11:58 UTC (permalink / raw)
To: Thomas Monjalon, Olivier Matz, Ferruh Yigit; +Cc: dev, David Marchand
On 2023/3/22 21:49, Thomas Monjalon wrote:
> 22/03/2023 09:53, Ferruh Yigit:
>> On 3/22/2023 1:15 AM, fengchengwen wrote:
>>> On 2023/3/21 21:50, Ferruh Yigit wrote:
>>>> On 3/17/2023 2:43 AM, fengchengwen wrote:
>>>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
>>>>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>>>>
>>>>>>> It may leads to segment fault when parse args with 'only key', this
>>>>>>> patchset fixes rest of them.
>>>>>>>
>>>>>>> Chengwen Feng (5):
>>>>>>> app/pdump: fix segment fault when parse args
>>>>>>> net/memif: fix segment fault when parse devargs
>>>>>>> net/pcap: fix segment fault when parse devargs
>>>>>>> net/ring: fix segment fault when parse devargs
>>>>>>> net/sfc: fix segment fault when parse devargs
>>>>>>
>>>>>> Hi Chengwen,
>>>>>>
>>>>>> Did you scan all `rte_kvargs_process()` instances?
>>>>>
>>>>> No, I was just looking at the modules I was concerned about.
>>>>> I looked at it briefly, and some modules had the same problem.
>>>>>
>>>>>>
>>>>>>
>>>>>> And if there would be a way to tell kvargs that a value is expected (or
>>>>>> not) this checks could be done in kvargs layer, I think this also can be
>>>>>> to look at.
>>>>>
>>>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>>>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>>>>> But it also break the API's behavior.
>>>>>
>>>>
>>>> What about having a new API, like `rte_kvargs_process_extended()`,
>>>>
>>>> That gets an additional flag as parameter, which may have values like
>>>> following to indicate if key expects a value or not:
>>>> ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
>>>> ARG_WITH_VALUE --> "key=value"
>>>> ARG_NO_VALUE --> 'key'
>>>>
>>>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
>>>> `rte_kvargs_process()`.
>>>>
>>>> This way instead of adding checks, relevant usage can be replaced by
>>>> `rte_kvargs_process_extended()`, this requires similar amount of change
>>>> but code will be more clean I think.
>>>>
>>>> Do you think does this work?
>>>
>>> Yes, it can work.
>>>
>>> But I think the introduction of new API adds some complexity.
>>> And a good API definition could more simpler.
>>>
>>
>> Other option is changing existing API, but that may be widely used and
>> changing it impacts applications, I don't think it worth.
>
> I've planned a change in kvargs API 5 years ago and never did it:
>>From doc/guides/rel_notes/deprecation.rst:
> "
> * kvargs: The function ``rte_kvargs_process`` will get a new parameter
> for returning key match count. It will ease handling of no-match case.
> "
I think it's okay to add an extra parameter to rte_kvargs_process, but it will
break the ABI.
Also, I notice the patchset was deferred in patchwork.
Does that mean the new version can't be accepted until the 23.11 release cycle?
>
>> Of course we can live with as it is and add checks to the callback
>> functions, although I still believe a new 'process()' API is better idea.
>
>
>
> .
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-22 8:53 0% ` Ferruh Yigit
@ 2023-03-22 13:49 0% ` Thomas Monjalon
2023-03-23 11:58 3% ` fengchengwen
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-22 13:49 UTC (permalink / raw)
To: fengchengwen, Olivier Matz, Ferruh Yigit; +Cc: dev, David Marchand
22/03/2023 09:53, Ferruh Yigit:
> On 3/22/2023 1:15 AM, fengchengwen wrote:
> > On 2023/3/21 21:50, Ferruh Yigit wrote:
> >> On 3/17/2023 2:43 AM, fengchengwen wrote:
> >>> On 2023/3/17 2:18, Ferruh Yigit wrote:
> >>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
> >>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
> >>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
> >>>>> parameter 'value' is NULL when parsed 'only keys'.
> >>>>>
> >>>>> It may leads to segment fault when parse args with 'only key', this
> >>>>> patchset fixes rest of them.
> >>>>>
> >>>>> Chengwen Feng (5):
> >>>>> app/pdump: fix segment fault when parse args
> >>>>> net/memif: fix segment fault when parse devargs
> >>>>> net/pcap: fix segment fault when parse devargs
> >>>>> net/ring: fix segment fault when parse devargs
> >>>>> net/sfc: fix segment fault when parse devargs
> >>>>
> >>>> Hi Chengwen,
> >>>>
> >>>> Did you scan all `rte_kvargs_process()` instances?
> >>>
> >>> No, I was just looking at the modules I was concerned about.
> >>> I looked at it briefly, and some modules had the same problem.
> >>>
> >>>>
> >>>>
> >>>> And if there would be a way to tell kvargs that a value is expected (or
> >>>> not) this checks could be done in kvargs layer, I think this also can be
> >>>> to look at.
> >>>
> >>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> >>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> >>> But it also break the API's behavior.
> >>>
> >>
> >> What about having a new API, like `rte_kvargs_process_extended()`,
> >>
> >> That gets an additional flag as parameter, which may have values like
> >> following to indicate if key expects a value or not:
> >> ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
> >> ARG_WITH_VALUE --> "key=value"
> >> ARG_NO_VALUE --> 'key'
> >>
> >> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> >> `rte_kvargs_process()`.
> >>
> >> This way instead of adding checks, relevant usage can be replaced by
> >> `rte_kvargs_process_extended()`, this requires similar amount of change
> >> but code will be more clean I think.
> >>
> >> Do you think does this work?
> >
> > Yes, it can work.
> >
> > But I think the introduction of new API adds some complexity.
> > And a good API definition could more simpler.
> >
>
> Other option is changing existing API, but that may be widely used and
> changing it impacts applications, I don't think it worth.
I've planned a change in kvargs API 5 years ago and never did it:
From doc/guides/rel_notes/deprecation.rst:
"
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
for returning key match count. It will ease handling of no-match case.
"
> Of course we can live with as it is and add checks to the callback
> functions, although I still believe a new 'process()' API is better idea.
^ permalink raw reply [relevance 0%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-22 1:15 0% ` fengchengwen
@ 2023-03-22 8:53 0% ` Ferruh Yigit
2023-03-22 13:49 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-03-22 8:53 UTC (permalink / raw)
To: fengchengwen, thomas, Olivier Matz; +Cc: dev, David Marchand
On 3/22/2023 1:15 AM, fengchengwen wrote:
> On 2023/3/21 21:50, Ferruh Yigit wrote:
>> On 3/17/2023 2:43 AM, fengchengwen wrote:
>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
>>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>>
>>>>> It may leads to segment fault when parse args with 'only key', this
>>>>> patchset fixes rest of them.
>>>>>
>>>>> Chengwen Feng (5):
>>>>> app/pdump: fix segment fault when parse args
>>>>> net/memif: fix segment fault when parse devargs
>>>>> net/pcap: fix segment fault when parse devargs
>>>>> net/ring: fix segment fault when parse devargs
>>>>> net/sfc: fix segment fault when parse devargs
>>>>
>>>> Hi Chengwen,
>>>>
>>>> Did you scan all `rte_kvargs_process()` instances?
>>>
>>> No, I was just looking at the modules I was concerned about.
>>> I looked at it briefly, and some modules had the same problem.
>>>
>>>>
>>>>
>>>> And if there would be a way to tell kvargs that a value is expected (or
>>>> not) this checks could be done in kvargs layer, I think this also can be
>>>> to look at.
>>>
>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>>> But it also break the API's behavior.
>>>
>>
>> What about having a new API, like `rte_kvargs_process_extended()`,
>>
>> That gets an additional flag as parameter, which may have values like
>> following to indicate if key expects a value or not:
>> ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
>> ARG_WITH_VALUE --> "key=value"
>> ARG_NO_VALUE --> 'key'
>>
>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
>> `rte_kvargs_process()`.
>>
>> This way instead of adding checks, relevant usage can be replaced by
>> `rte_kvargs_process_extended()`, this requires similar amount of change
>> but code will be more clean I think.
>>
>> Do you think does this work?
>
> Yes, it can work.
>
> But I think the introduction of new API adds some complexity.
> And a good API definition could more simpler.
>
The other option is changing the existing API, but it may be widely used, and
changing it impacts applications; I don't think it is worth it.
Of course we can live with it as it is and add checks to the callback
functions, although I still believe a new 'process()' API is the better idea.
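Roughly, the checks this series adds to the existing callbacks look like this
(a condensed sketch; the callback name and the error value are illustrative):
    static int
    parse_foo_arg(const char *key __rte_unused, const char *value, void *extra_args)
    {
        if (value == NULL || extra_args == NULL)
            return -EINVAL;    /* bare 'key' with no value: reject it */
        /* ... parse the value as before ... */
        return 0;
    }
Each handler passed to rte_kvargs_process() has to protect itself this way as
long as the library may invoke it with a NULL value.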
>>
>>
>>>
>>> Or continue fix the exist code (about 10+ place more),
>>> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
>>> they'll take the initiative to prevent this.
>>>
>>>
>>> Hope for more advise for the next.
>>>
>>>> .
>>>>
>>
>> .
>>
^ permalink raw reply [relevance 0%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-21 13:50 0% ` Ferruh Yigit
@ 2023-03-22 1:15 0% ` fengchengwen
2023-03-22 8:53 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-22 1:15 UTC (permalink / raw)
To: Ferruh Yigit, thomas, Olivier Matz; +Cc: dev, David Marchand
On 2023/3/21 21:50, Ferruh Yigit wrote:
> On 3/17/2023 2:43 AM, fengchengwen wrote:
>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>
>>>> It may leads to segment fault when parse args with 'only key', this
>>>> patchset fixes rest of them.
>>>>
>>>> Chengwen Feng (5):
>>>> app/pdump: fix segment fault when parse args
>>>> net/memif: fix segment fault when parse devargs
>>>> net/pcap: fix segment fault when parse devargs
>>>> net/ring: fix segment fault when parse devargs
>>>> net/sfc: fix segment fault when parse devargs
>>>
>>> Hi Chengwen,
>>>
>>> Did you scan all `rte_kvargs_process()` instances?
>>
>> No, I was just looking at the modules I was concerned about.
>> I looked at it briefly, and some modules had the same problem.
>>
>>>
>>>
>>> And if there would be a way to tell kvargs that a value is expected (or
>>> not) this checks could be done in kvargs layer, I think this also can be
>>> to look at.
>>
>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>> But it also break the API's behavior.
>>
>
> What about having a new API, like `rte_kvargs_process_extended()`,
>
> That gets an additional flag as parameter, which may have values like
> following to indicate if key expects a value or not:
> ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
> ARG_WITH_VALUE --> "key=value"
> ARG_NO_VALUE --> 'key'
>
> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> `rte_kvargs_process()`.
>
> This way instead of adding checks, relevant usage can be replaced by
> `rte_kvargs_process_extended()`, this requires similar amount of change
> but code will be more clean I think.
>
> Do you think does this work?
Yes, it can work.
But I think the introduction of a new API adds some complexity.
And a good API definition could be simpler.
>
>
>>
>> Or continue fix the exist code (about 10+ place more),
>> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
>> they'll take the initiative to prevent this.
>>
>>
>> Hope for more advise for the next.
>>
>>> .
>>>
>
> .
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH 0/5] fix segment fault when parse args
2023-03-17 2:43 3% ` fengchengwen
@ 2023-03-21 13:50 0% ` Ferruh Yigit
2023-03-22 1:15 0% ` fengchengwen
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-03-21 13:50 UTC (permalink / raw)
To: fengchengwen, thomas, Olivier Matz; +Cc: dev, David Marchand
On 3/17/2023 2:43 AM, fengchengwen wrote:
> On 2023/3/17 2:18, Ferruh Yigit wrote:
>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>> to parse 'only keys' (e.g. socket_id) type. And the callback function
>>> parameter 'value' is NULL when parsed 'only keys'.
>>>
>>> It may leads to segment fault when parse args with 'only key', this
>>> patchset fixes rest of them.
>>>
>>> Chengwen Feng (5):
>>> app/pdump: fix segment fault when parse args
>>> net/memif: fix segment fault when parse devargs
>>> net/pcap: fix segment fault when parse devargs
>>> net/ring: fix segment fault when parse devargs
>>> net/sfc: fix segment fault when parse devargs
>>
>> Hi Chengwen,
>>
>> Did you scan all `rte_kvargs_process()` instances?
>
> No, I was just looking at the modules I was concerned about.
> I looked at it briefly, and some modules had the same problem.
>
>>
>>
>> And if there would be a way to tell kvargs that a value is expected (or
>> not) this checks could be done in kvargs layer, I think this also can be
>> to look at.
>
> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> But it also break the API's behavior.
>
What about having a new API, like `rte_kvargs_process_extended()`,
that gets an additional flag as parameter, which may have values like
the following to indicate whether the key expects a value or not:
ARG_MAY_HAVE_VALUE --> "key=value" OR 'key'
ARG_WITH_VALUE --> "key=value"
ARG_NO_VALUE --> 'key'
The default flag can be 'ARG_MAY_HAVE_VALUE', which behaves the same as
`rte_kvargs_process()`.
This way, instead of adding checks, the relevant usages can be replaced by
`rte_kvargs_process_extended()`; this requires a similar amount of change,
but the code will be cleaner, I think.
Do you think this would work?
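Just to illustrate the idea, a rough sketch of what that could look like
(the names and signature below are hypothetical, not existing kvargs symbols):

#include <rte_kvargs.h>

/* Hypothetical flag telling the kvargs layer whether a value is expected. */
enum rte_kvargs_value_req {
	RTE_KVARGS_MAY_HAVE_VALUE = 0, /* "key=value" or bare 'key' */
	RTE_KVARGS_WITH_VALUE,         /* value is mandatory */
	RTE_KVARGS_NO_VALUE,           /* bare 'key' only */
};

/* Hypothetical prototype: same as rte_kvargs_process(), plus the flag.
 * The kvargs layer would reject a missing (or unexpected) value itself,
 * so the handler is never called with a NULL value it does not expect. */
int
rte_kvargs_process_extended(const struct rte_kvargs *kvlist,
		const char *key_match, arg_handler_t handler,
		void *opaque_arg, enum rte_kvargs_value_req req);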
>
> Or continue fix the exist code (about 10+ place more),
> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
> they'll take the initiative to prevent this.
>
>
> Hope for more advise for the next.
>
>> .
>>
^ permalink raw reply [relevance 0%]
* [PATCH v2 2/2] ci: test compilation with debug in GHA
@ 2023-03-20 12:18 19% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-20 12:18 UTC (permalink / raw)
To: dev; +Cc: Aaron Conole, Michael Santana
We often miss compilation issues with -O0 -g.
Switch to debug in GHA for the gcc job.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v1:
- rather than introduce a new job, updated the ABI check job
to build with debug,
---
.ci/linux-build.sh | 8 +++++++-
.github/workflows/build.yml | 3 ++-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ab0994388a..150b38bd7a 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -65,6 +65,12 @@ if [ "$RISCV64" = "true" ]; then
cross_file=config/riscv/riscv64_linux_gcc
fi
+buildtype=debugoptimized
+
+if [ "$BUILD_DEBUG" = "true" ]; then
+ buildtype=debug
+fi
+
if [ "$BUILD_DOCS" = "true" ]; then
OPTS="$OPTS -Denable_docs=true"
fi
@@ -85,7 +91,7 @@ fi
OPTS="$OPTS -Dplatform=generic"
OPTS="$OPTS -Ddefault_library=$DEF_LIB"
-OPTS="$OPTS -Dbuildtype=debugoptimized"
+OPTS="$OPTS -Dbuildtype=$buildtype"
OPTS="$OPTS -Dcheck_includes=true"
if [ "$MINI" = "true" ]; then
OPTS="$OPTS -Denable_drivers=net/null"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 154be70cc1..bbcb535afb 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -18,6 +18,7 @@ jobs:
ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
ASAN: ${{ contains(matrix.config.checks, 'asan') }}
BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
+ BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
@@ -39,7 +40,7 @@ jobs:
mini: mini
- os: ubuntu-20.04
compiler: gcc
- checks: abi+doc+tests
+ checks: abi+debug+doc+tests
- os: ubuntu-20.04
compiler: clang
checks: asan+doc+tests
--
2.39.2
^ permalink raw reply [relevance 19%]
* [PATCH 2/2] ci: test compilation with debug
@ 2023-03-20 10:26 5% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2023-03-20 10:26 UTC (permalink / raw)
To: dev; +Cc: Aaron Conole, Michael Santana
We often miss compilation issues with -O0 -g.
Add a test in GHA.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.ci/linux-build.sh | 8 +++++++-
.github/workflows/build.yml | 4 ++++
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ab0994388a..150b38bd7a 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -65,6 +65,12 @@ if [ "$RISCV64" = "true" ]; then
cross_file=config/riscv/riscv64_linux_gcc
fi
+buildtype=debugoptimized
+
+if [ "$BUILD_DEBUG" = "true" ]; then
+ buildtype=debug
+fi
+
if [ "$BUILD_DOCS" = "true" ]; then
OPTS="$OPTS -Denable_docs=true"
fi
@@ -85,7 +91,7 @@ fi
OPTS="$OPTS -Dplatform=generic"
OPTS="$OPTS -Ddefault_library=$DEF_LIB"
-OPTS="$OPTS -Dbuildtype=debugoptimized"
+OPTS="$OPTS -Dbuildtype=$buildtype"
OPTS="$OPTS -Dcheck_includes=true"
if [ "$MINI" = "true" ]; then
OPTS="$OPTS -Denable_drivers=net/null"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 154be70cc1..d90ecfc6f0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -18,6 +18,7 @@ jobs:
ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
ASAN: ${{ contains(matrix.config.checks, 'asan') }}
BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
+ BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
@@ -37,6 +38,9 @@ jobs:
- os: ubuntu-20.04
compiler: gcc
mini: mini
+ - os: ubuntu-20.04
+ compiler: gcc
+ checks: debug
- os: ubuntu-20.04
compiler: gcc
checks: abi+doc+tests
--
2.39.2
^ permalink raw reply [relevance 5%]
* Re: [PATCH 0/5] fix segment fault when parse args
@ 2023-03-17 2:43 3% ` fengchengwen
2023-03-21 13:50 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-17 2:43 UTC (permalink / raw)
To: Ferruh Yigit, thomas; +Cc: dev, David Marchand
On 2023/3/17 2:18, Ferruh Yigit wrote:
> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>> to parse 'only keys' (e.g. socket_id) type. And the callback function
>> parameter 'value' is NULL when parsed 'only keys'.
>>
>> It may leads to segment fault when parse args with 'only key', this
>> patchset fixes rest of them.
>>
>> Chengwen Feng (5):
>> app/pdump: fix segment fault when parse args
>> net/memif: fix segment fault when parse devargs
>> net/pcap: fix segment fault when parse devargs
>> net/ring: fix segment fault when parse devargs
>> net/sfc: fix segment fault when parse devargs
>
> Hi Chengwen,
>
> Did you scan all `rte_kvargs_process()` instances?
No, I was just looking at the modules I was concerned about.
I looked at it briefly, and some modules had the same problem.
>
>
> And if there would be a way to tell kvargs that a value is expected (or
> not) this checks could be done in kvargs layer, I think this also can be
> to look at.
Yes, a way to tell kvargs whether a value is expected may lead to a lot of modifications and could also break the ABI.
I also thought about just setting value = "" when only the key exists; it could perfectly solve the segfault scenario above.
But it would also break the API's behavior.
Or continue fixing the existing code (about 10+ more places);
for new callers, because 'arg_handler_t' is already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
they should take the initiative to prevent this themselves.
Hope for more advice on the next step.
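For reference, a minimal sketch of the kind of per-caller fix this series
applies -- a NULL check at the top of the 'arg_handler_t' callback (the
handler and argument names are made up for illustration):

#include <errno.h>
#include <stdlib.h>
#include <rte_common.h>

static int
parse_foo_arg(const char *key, const char *value, void *extra_args)
{
	int *foo = extra_args;

	RTE_SET_USED(key);

	/* 'value' is NULL when the devarg is passed as a bare key ("foo"
	 * instead of "foo=1"), so check it before dereferencing. */
	if (value == NULL || extra_args == NULL)
		return -EINVAL;

	*foo = atoi(value);
	return 0;
}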
> .
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-16 13:10 3% ` Dongdong Liu
@ 2023-03-16 14:31 0% ` Ivan Malov
0 siblings, 0 replies; 200+ results
From: Ivan Malov @ 2023-03-16 14:31 UTC (permalink / raw)
To: Dongdong Liu
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
stable, yisen.zhuang, Jie Hai
Hi,
Thanks for responding and PSB.
On Thu, 16 Mar 2023, Dongdong Liu wrote:
> Hi Ivan
>
> Many thanks for your review.
>
> On 2023/3/15 19:28, Ivan Malov wrote:
>> Hi,
>>
>> On Wed, 15 Mar 2023, Dongdong Liu wrote:
>>
>>> From: Jie Hai <haijie1@huawei.com>
>>>
>>> Currently, rte_eth_rss_conf supports configuring rss hash
>>> functions, rss key and it's length, but not rss hash algorithm.
>>>
>>> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
>>> "func". This represents the RSS algorithms to apply. The following
>>> API is affected:
>>> - rte_eth_dev_configure
>>> - rte_eth_dev_rss_hash_update
>>> - rte_eth_dev_rss_hash_conf_get
>>>
>>> To prevent configuration failures caused by incorrect func input, check
>>> this parameter in advance. If it's incorrect, a warning is generated
>>> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
>>> and rte_eth_dev_configure.
>>>
>>> To check whether the drivers report the func field, it is set to default
>>> value before querying.
>>>
>>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>>> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
>>> ---
>>> doc/guides/rel_notes/release_23_03.rst | 4 ++--
>>> lib/ethdev/rte_ethdev.c | 18 ++++++++++++++++++
>>> lib/ethdev/rte_ethdev.h | 5 +++++
>>> 3 files changed, 25 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>> b/doc/guides/rel_notes/release_23_03.rst
>>> index af6f37389c..7879567427 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -284,8 +284,8 @@ ABI Changes
>>> Also, make sure to start the actual text at the margin.
>>> =======================================================
>>>
>>> -* No ABI change that would break compatibility with 22.11.
>>> -
>>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>>> RSS hash
>>> + algorithm.
>>>
>>> Known Issues
>>> ------------
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 4d03255683..db561026bd 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id,
>>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>>> goto rollback;
>>> }
>>>
>>> + if (dev_conf->rx_adv_conf.rss_conf.func >=
>>> RTE_ETH_HASH_FUNCTION_MAX) {
>>> + RTE_ETHDEV_LOG(WARNING,
>>> + "Ethdev port_id=%u invalid rss hash function (%u),
>>> modified to default value (%u)\n",
>>> + port_id, dev_conf->rx_adv_conf.rss_conf.func,
>>> + RTE_ETH_HASH_FUNCTION_DEFAULT);
>>> + dev->data->dev_conf.rx_adv_conf.rss_conf.func =
>>> + RTE_ETH_HASH_FUNCTION_DEFAULT;
>>
>> I have no strong opinion, but, to me, this behaviour conceals
>> programming errors. For example, if an application intends
>> to enable hash algorithm A but, due to a programming error,
>> passes a gibberish value here, chances are the error will
>> end up unnoticed. Especially in case the application
>> sets the log level to such that warnings are omitted.
> Good point, will fix.
>>
>> Why not just return the error the standard way?
>
> Aha, The original intention is not to break the ABI,
> but I think it could not achieve that.
>>
>>> + }
>>> +
>>> /* Check if Rx RSS distribution is disabled but RSS hash is
>>> enabled. */
>>> if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
>>> (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
>>> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
>>> return -ENOTSUP;
>>> }
>>>
>>> + if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
>>> + RTE_ETHDEV_LOG(NOTICE,
>>> + "Ethdev port_id=%u invalid rss hash function (%u),
>>> modified to default value (%u)\n",
>>> + port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
>>> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>>> + }
>>> +
>>> if (*dev->dev_ops->rss_hash_update == NULL)
>>> return -ENOTSUP;
>>> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
>>> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
>>> return -EINVAL;
>>> }
>>>
>>> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>>> +
>>> if (*dev->dev_ops->rss_hash_conf_get == NULL)
>>> return -ENOTSUP;
>>> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index 99fe9e238b..5abe2cb36d 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -174,6 +174,7 @@ extern "C" {
>>>
>>> #include "rte_ethdev_trace_fp.h"
>>> #include "rte_dev_info.h"
>>> +#include "rte_flow.h"
>>>
>>> extern int rte_eth_dev_logtype;
>>>
>>> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
>>> * The *rss_hf* field of the *rss_conf* structure indicates the different
>>> * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>>> * Supplying an *rss_hf* equal to zero disables the RSS feature.
>>> + *
>>> + * The *func* field of the *rss_conf* structure indicates the different
>>> + * types of hash algorithms applied by the RSS hashing.
>>
>> Consider:
>>
>> The *func* field of the *rss_conf* structure indicates the algorithm to
>> use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
>> the PMD to use its best-effort algorithm rather than a specific one.
>
> Look at some PMD drivers(i40e, hns3 etc), it seems the
> RTE_ETH_HASH_FUNCTION_DEFAULT consider as no rss algorithm is set.
This does not seem to contradict the suggested description.
If they, however, treat this as "no RSS at all", then
perhaps it is a mistake, because if the user requests
Rx MQ mode "RSS" and selects algorithm DEFAULT, this
is clearly not the same as "no RSS". Not by a long
shot. Because for "no RSS" the user would have
passed MQ mode choice "NONE", I take it.
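To make the distinction concrete, a minimal sketch (standard ethdev
configuration, where 'func' is the field this patch proposes to add):

#include <rte_ethdev.h>

/* RSS enabled; the hash algorithm is left to the PMD's best effort. */
static const struct rte_eth_conf conf_rss_default = {
	.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
	.rx_adv_conf.rss_conf = {
		.rss_hf = RTE_ETH_RSS_IP,
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT, /* proposed field */
	},
};

/* No RSS at all: the multi-queue mode itself is NONE. */
static const struct rte_eth_conf conf_no_rss = {
	.rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
};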
>
> Thanks,
> Dongdong
>>
>>> */
>>> struct rte_eth_rss_conf {
>>> uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
>>> uint8_t rss_key_len; /**< hash key length in bytes. */
>>> uint64_t rss_hf; /**< Hash functions to apply - see below. */
>>> + enum rte_eth_hash_function func; /**< Hash algorithm to apply. */
>>> };
>>>
>>> /*
>>> --
>>> 2.22.0
>>>
>>>
>>
>> Thank you.
>>
>> .
>>
>
Thank you.
^ permalink raw reply [relevance 0%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-15 13:43 3% ` Thomas Monjalon
@ 2023-03-16 13:16 3% ` Dongdong Liu
2023-06-02 20:19 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Dongdong Liu @ 2023-03-16 13:16 UTC (permalink / raw)
To: Thomas Monjalon, Jie Hai
Cc: dev, ferruh.yigit, andrew.rybchenko, reshma.pattan, stable,
yisen.zhuang, david.marchand
Hi Thomas
On 2023/3/15 21:43, Thomas Monjalon wrote:
> 15/03/2023 12:00, Dongdong Liu:
>> From: Jie Hai <haijie1@huawei.com>
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> -* No ABI change that would break compatibility with 22.11.
>> -
>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
>> + algorithm.
>
> We cannot break ABI compatibility until 23.11.
Got it. Thank you for the reminder.
[PATCH 3/5] and [PATCH 4/5] are not related to this ABI compatibility issue.
I will send them separately.
Thanks,
Dongdong
>
>
>
> .
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-15 11:28 0% ` Ivan Malov
@ 2023-03-16 13:10 3% ` Dongdong Liu
2023-03-16 14:31 0% ` Ivan Malov
0 siblings, 1 reply; 200+ results
From: Dongdong Liu @ 2023-03-16 13:10 UTC (permalink / raw)
To: Ivan Malov
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
stable, yisen.zhuang, Jie Hai
Hi Ivan
Many thanks for your review.
On 2023/3/15 19:28, Ivan Malov wrote:
> Hi,
>
> On Wed, 15 Mar 2023, Dongdong Liu wrote:
>
>> From: Jie Hai <haijie1@huawei.com>
>>
>> Currently, rte_eth_rss_conf supports configuring rss hash
>> functions, rss key and it's length, but not rss hash algorithm.
>>
>> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
>> "func". This represents the RSS algorithms to apply. The following
>> API is affected:
>> - rte_eth_dev_configure
>> - rte_eth_dev_rss_hash_update
>> - rte_eth_dev_rss_hash_conf_get
>>
>> To prevent configuration failures caused by incorrect func input, check
>> this parameter in advance. If it's incorrect, a warning is generated
>> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
>> and rte_eth_dev_configure.
>>
>> To check whether the drivers report the func field, it is set to default
>> value before querying.
>>
>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
>> ---
>> doc/guides/rel_notes/release_23_03.rst | 4 ++--
>> lib/ethdev/rte_ethdev.c | 18 ++++++++++++++++++
>> lib/ethdev/rte_ethdev.h | 5 +++++
>> 3 files changed, 25 insertions(+), 2 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>> b/doc/guides/rel_notes/release_23_03.rst
>> index af6f37389c..7879567427 100644
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> @@ -284,8 +284,8 @@ ABI Changes
>> Also, make sure to start the actual text at the margin.
>> =======================================================
>>
>> -* No ABI change that would break compatibility with 22.11.
>> -
>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>> RSS hash
>> + algorithm.
>>
>> Known Issues
>> ------------
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 4d03255683..db561026bd 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id,
>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>> goto rollback;
>> }
>>
>> + if (dev_conf->rx_adv_conf.rss_conf.func >=
>> RTE_ETH_HASH_FUNCTION_MAX) {
>> + RTE_ETHDEV_LOG(WARNING,
>> + "Ethdev port_id=%u invalid rss hash function (%u),
>> modified to default value (%u)\n",
>> + port_id, dev_conf->rx_adv_conf.rss_conf.func,
>> + RTE_ETH_HASH_FUNCTION_DEFAULT);
>> + dev->data->dev_conf.rx_adv_conf.rss_conf.func =
>> + RTE_ETH_HASH_FUNCTION_DEFAULT;
>
> I have no strong opinion, but, to me, this behaviour conceals
> programming errors. For example, if an application intends
> to enable hash algorithm A but, due to a programming error,
> passes a gibberish value here, chances are the error will
> end up unnoticed. Especially in case the application
> sets the log level to such that warnings are omitted.
Good point, will fix.
>
> Why not just return the error the standard way?
Aha, the original intention was not to break the ABI,
but I think that could not be achieved.
>
>> + }
>> +
>> /* Check if Rx RSS distribution is disabled but RSS hash is
>> enabled. */
>> if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
>> (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
>> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
>> return -ENOTSUP;
>> }
>>
>> + if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
>> + RTE_ETHDEV_LOG(NOTICE,
>> + "Ethdev port_id=%u invalid rss hash function (%u),
>> modified to default value (%u)\n",
>> + port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
>> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>> + }
>> +
>> if (*dev->dev_ops->rss_hash_update == NULL)
>> return -ENOTSUP;
>> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
>> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
>> return -EINVAL;
>> }
>>
>> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>> +
>> if (*dev->dev_ops->rss_hash_conf_get == NULL)
>> return -ENOTSUP;
>> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 99fe9e238b..5abe2cb36d 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -174,6 +174,7 @@ extern "C" {
>>
>> #include "rte_ethdev_trace_fp.h"
>> #include "rte_dev_info.h"
>> +#include "rte_flow.h"
>>
>> extern int rte_eth_dev_logtype;
>>
>> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
>> * The *rss_hf* field of the *rss_conf* structure indicates the different
>> * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>> * Supplying an *rss_hf* equal to zero disables the RSS feature.
>> + *
>> + * The *func* field of the *rss_conf* structure indicates the different
>> + * types of hash algorithms applied by the RSS hashing.
>
> Consider:
>
> The *func* field of the *rss_conf* structure indicates the algorithm to
> use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
> the PMD to use its best-effort algorithm rather than a specific one.
Looking at some PMD drivers (i40e, hns3, etc.), it seems
RTE_ETH_HASH_FUNCTION_DEFAULT is treated as meaning no RSS algorithm is set.
Thanks,
Dongdong
>
>> */
>> struct rte_eth_rss_conf {
>> uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
>> uint8_t rss_key_len; /**< hash key length in bytes. */
>> uint64_t rss_hf; /**< Hash functions to apply - see below. */
>> + enum rte_eth_hash_function func; /**< Hash algorithm to apply. */
>> };
>>
>> /*
>> --
>> 2.22.0
>>
>>
>
> Thank you.
>
> .
>
^ permalink raw reply [relevance 3%]
* [RFC v2 0/2] Add high-performance timer facility
@ 2023-03-15 17:03 3% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-03-15 17:03 UTC (permalink / raw)
To: dev
Cc: Erik Gabriel Carrillo, David Marchand, maria.lingemark,
Stefan Sundkvist, Stephen Hemminger, Morten Brørup,
Tyler Retzlaff, Mattias Rönnblom
This patchset is an attempt to introduce a high-performance, highly
scalable timer facility into DPDK.
More specifically, the goals for the htimer library are:
* Efficient handling of a handful up to hundreds of thousands of
concurrent timers.
* Make adding and canceling timers low-overhead, constant-time
operations.
* Provide a service functionally equivalent to that of
<rte_timer.h>. API/ABI backward compatibility is secondary.
In the author's opinion, there are two main shortcomings with the
current DPDK timer library (i.e., rte_timer.[ch]).
One is the synchronization overhead, where heavy-weight full-barrier
type synchronization is used. rte_timer.c uses per-EAL/lcore skip
lists, but any thread may add or cancel (or otherwise access) timers
managed by another lcore (and thus resides in its timer skip list).
The other is an algorithmic shortcoming, with rte_timer.c's reliance
on a skip list, which is less efficient than certain alternatives.
This patchset implements a hierarchical timer wheel (HWT, in
rte_htw.c), as per the Varghese and Lauck paper "Hashed and
Hierarchical Timing Wheels: Data Structures for the Efficient
Implementation of a Timer Facility". A HWT is a data structure
purposely designed for this task, and used by many operating system
kernel timer facilities.
To further improve the solution described by Varghese and Lauck, a
bitset is placed in front of each of the timer wheel in the HWT,
reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
and expiry processing).
Cycle-efficient scanning and manipulation of these bitsets are crucial
for the HWT's performance.
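As a rough illustration of why (not the actual rte_bitset/rte_htw code):
with a 64-slot wheel fronted by a single 64-bit word, finding the next
non-empty slot is one count-trailing-zeros operation instead of a walk
over all slots.

#include <stdint.h>

/* Bit i set <=> wheel slot i holds at least one pending timer. */
static inline int
next_occupied_slot(uint64_t occupied, unsigned int from_slot)
{
	/* Drop slots before 'from_slot' (from_slot must be < 64). */
	uint64_t pending = occupied & ~((UINT64_C(1) << from_slot) - 1);

	if (pending == 0)
		return -1; /* nothing pending at or after from_slot */

	return __builtin_ctzll(pending); /* index of the lowest set bit */
}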
The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
instance, much like rte_timer.c keeps a per-lcore skip list.
To avoid expensive synchronization overhead for thread-local timer
management, the HWTs are accessed only from the "owning" thread. Any
interaction any other thread does with a particular lcore's timer
wheel goes over a set of DPDK rings. A side-effect of this design is
that all operations working toward a "remote" HWT must be
asynchronous.
The <rte_htimer.h> API is available only to EAL threads and registered
non-EAL threads.
The htimer API allows the application to supply the current time,
useful in case it already has retrieved this for other purposes,
saving the cost of a rdtsc instruction (or its equivalent).
Relative htimer does not retrieve a new time, but reuse the current
time (as known via/at-the-time of the manage-call), again to shave off
some cycles of overhead.
A semantic improvement compared to the <rte_timer.h> API is that the
htimer library can give a definite answer to the question of whether the
timer expiry callback was called, after a timer has been canceled.
The patchset includes a performance test case
'timer_htimer_htw_perf_autotest', which compares rte_timer, rte_htimer
and rte_htw timers in the same scenario.
'timer_htimer_htw_perf_autotest' suggests that rte_htimer is ~3-5x
faster than rte_timer for timer/timeout-heavy applications, in a
scenario where the timer always fires. For a scenario with a mix of
canceled and expired timers, the performance difference is greater.
In scenarios with few timeouts, rte_timer has lower overhead than
htimer, but both variants consume very little CPU time.
In certain scenarios, rte_timer does not suffer from
non-constant-time-add and cancel operations. On such is in case the
timer added is always last in the list, where htimer is only ~2-3x
faster.
The bitset implementation which the HWT implementation depends upon
seemed generic-enough and potentially useful outside the world of
HWTs, to justify being located in the EAL.
This patchset is very much an RFC, and the author is yet to form an
opinion on many important issues.
* If deemed a suitable replacement, should the htimer replace the
current DPDK timer library in some particular (ABI-breaking)
release, or should it live side-by-side with the then-legacy
<rte_timer.h> API? A lot of things in and outside DPDK depend on
<rte_timer.h>, so coexistence may be required to facilitate a smooth
transition.
* Should the htimer and htw-related files be colocated with rte_timer.c
in the timer library?
* Would it be useful for applications using asynchronous cancel to
have the option of having the timer callback run not only in case of
timer expiration, but also cancellation (on the target lcore)? The
timer cb signature would need to include an additional parameter in
that case.
* Should the rte_htimer be a nested struct, so the htw parts be separated
from the htimer parts?
* <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
<rte_htw.h> may avoid a depedency to <rte_htimer_mgr.h>. Should it
be so?
* rte_htimer struct is only supposed to be used by the application to
give an indication of how much memory it needs to allocate, and
its members are not supposed to be directly accessed (w/ the possible
exception of the owner_lcore_id field). Should there be a dummy
struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
function instead, serving the same purpose? Better encapsulation,
but more inconvenient for applications. Run-time dynamic sizing
would force application-level dynamic allocations.
* Asynchronous cancellation is a little tricky to use for the
application (primarily due to timer memory reclamation/race
issues). Should this functionality be removed?
* Should rte_htimer_mgr_init() also retrieve the current time? If so,
there should to be a variant which allows the user to specify the
time (to match rte_htimer_mgr_manage_time()). One pitfall with the
current proposed API is an application calling rte_htimer_mgr_init()
and then immediately adding a timer with a relative timeout, in
which case the current absolute time used is 0, which might be a
surprise.
* Would the event timer adapter be best off using <rte_htw.h>
directly, or <rte_htimer.h>? In the latter case, there needs to be a
way to instantiate more HWTs (similar to the "alt" functions of
<rte_timer.h>)?
* Should the PERIODICAL flag (and the complexity it brings) be
removed? And leave the application with only single-shot timers, and
the option to re-add them in the timer callback.
* Should the async result codes and the sync cancel error codes be merged
into one set of result codes?
* Should the rte_htimer_mgr_async_add() have a flag which allows
buffering add request messages until rte_htimer_mgr_process() is
called? Or any manage function. Would reduce ring signaling overhead
(i.e., burst enqueue operations instead of single-element
enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
solving the same "problem" a different way. (The signature of such
a function would not be pretty.)
* Does the functionality provided by the rte_htimer_mgr_process()
function match its use cases? Should there be a clearer
separation between expiry processing and asynchronous operation
processing?
* Should the patchset be split into more commits? If so, how?
Thanks to Erik Carrillo for his assistance.
Mattias Rönnblom (2):
eal: add bitset type
eal: add high-performance timer facility
app/test/meson.build | 12 +-
app/test/test_bitset.c | 645 +++++++++++++++++++
app/test/test_htimer_mgr.c | 674 ++++++++++++++++++++
app/test/test_htimer_mgr_perf.c | 322 ++++++++++
app/test/test_htw.c | 478 ++++++++++++++
app/test/test_htw_perf.c | 181 ++++++
app/test/test_timer_htimer_htw_perf.c | 693 ++++++++++++++++++++
doc/api/doxy-api-index.md | 5 +-
doc/api/doxy-api.conf.in | 1 +
lib/eal/common/meson.build | 1 +
lib/eal/common/rte_bitset.c | 29 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_bitset.h | 879 ++++++++++++++++++++++++++
lib/eal/version.map | 3 +
lib/htimer/meson.build | 7 +
lib/htimer/rte_htimer.h | 68 ++
lib/htimer/rte_htimer_mgr.c | 547 ++++++++++++++++
lib/htimer/rte_htimer_mgr.h | 516 +++++++++++++++
lib/htimer/rte_htimer_msg.h | 44 ++
lib/htimer/rte_htimer_msg_ring.c | 18 +
lib/htimer/rte_htimer_msg_ring.h | 55 ++
lib/htimer/rte_htw.c | 445 +++++++++++++
lib/htimer/rte_htw.h | 49 ++
lib/htimer/version.map | 17 +
lib/meson.build | 1 +
25 files changed, 5689 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_bitset.c
create mode 100644 app/test/test_htimer_mgr.c
create mode 100644 app/test/test_htimer_mgr_perf.c
create mode 100644 app/test/test_htw.c
create mode 100644 app/test/test_htw_perf.c
create mode 100644 app/test/test_timer_htimer_htw_perf.c
create mode 100644 lib/eal/common/rte_bitset.c
create mode 100644 lib/eal/include/rte_bitset.h
create mode 100644 lib/htimer/meson.build
create mode 100644 lib/htimer/rte_htimer.h
create mode 100644 lib/htimer/rte_htimer_mgr.c
create mode 100644 lib/htimer/rte_htimer_mgr.h
create mode 100644 lib/htimer/rte_htimer_msg.h
create mode 100644 lib/htimer/rte_htimer_msg_ring.c
create mode 100644 lib/htimer/rte_htimer_msg_ring.h
create mode 100644 lib/htimer/rte_htw.c
create mode 100644 lib/htimer/rte_htw.h
create mode 100644 lib/htimer/version.map
--
2.34.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
2023-03-15 11:28 0% ` Ivan Malov
@ 2023-03-15 13:43 3% ` Thomas Monjalon
2023-03-16 13:16 3% ` Dongdong Liu
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-15 13:43 UTC (permalink / raw)
To: Dongdong Liu, Jie Hai
Cc: dev, ferruh.yigit, andrew.rybchenko, reshma.pattan, stable,
yisen.zhuang, david.marchand
15/03/2023 12:00, Dongdong Liu:
> From: Jie Hai <haijie1@huawei.com>
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> -* No ABI change that would break compatibility with 22.11.
> -
> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
> + algorithm.
We cannot break ABI compatibility until 23.11.
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
@ 2023-03-15 11:28 0% ` Ivan Malov
2023-03-16 13:10 3% ` Dongdong Liu
2023-03-15 13:43 3% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Ivan Malov @ 2023-03-15 11:28 UTC (permalink / raw)
To: Dongdong Liu
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
stable, yisen.zhuang, Jie Hai
Hi,
On Wed, 15 Mar 2023, Dongdong Liu wrote:
> From: Jie Hai <haijie1@huawei.com>
>
> Currently, rte_eth_rss_conf supports configuring rss hash
> functions, rss key and it's length, but not rss hash algorithm.
>
> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
> "func". This represents the RSS algorithms to apply. The following
> API is affected:
> - rte_eth_dev_configure
> - rte_eth_dev_rss_hash_update
> - rte_eth_dev_rss_hash_conf_get
>
> To prevent configuration failures caused by incorrect func input, check
> this parameter in advance. If it's incorrect, a warning is generated
> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
> and rte_eth_dev_configure.
>
> To check whether the drivers report the func field, it is set to default
> value before querying.
>
> Signed-off-by: Jie Hai <haijie1@huawei.com>
> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
> ---
> doc/guides/rel_notes/release_23_03.rst | 4 ++--
> lib/ethdev/rte_ethdev.c | 18 ++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 5 +++++
> 3 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index af6f37389c..7879567427 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -284,8 +284,8 @@ ABI Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> -* No ABI change that would break compatibility with 22.11.
> -
> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
> + algorithm.
>
> Known Issues
> ------------
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d03255683..db561026bd 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
> }
>
> + if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
> + RTE_ETHDEV_LOG(WARNING,
> + "Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
> + port_id, dev_conf->rx_adv_conf.rss_conf.func,
> + RTE_ETH_HASH_FUNCTION_DEFAULT);
> + dev->data->dev_conf.rx_adv_conf.rss_conf.func =
> + RTE_ETH_HASH_FUNCTION_DEFAULT;
I have no strong opinion, but, to me, this behaviour conceals
programming errors. For example, if an application intends
to enable hash algorithm A but, due to a programming error,
passes a gibberish value here, chances are the error will
end up unnoticed. Especially in case the application
sets the log level to such that warnings are omitted.
Why not just return the error the standard way?
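i.e. something along these lines instead of the hunk quoted above (a sketch
only, reusing the error path already present in rte_eth_dev_configure()):

	if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
		RTE_ETHDEV_LOG(ERR,
			"Ethdev port_id=%u invalid RSS hash function (%u)\n",
			port_id, dev_conf->rx_adv_conf.rss_conf.func);
		ret = -EINVAL;
		goto rollback;
	}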
> + }
> +
> /* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
> if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
> (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
> return -ENOTSUP;
> }
>
> + if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
> + RTE_ETHDEV_LOG(NOTICE,
> + "Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
> + port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
> + }
> +
> if (*dev->dev_ops->rss_hash_update == NULL)
> return -ENOTSUP;
> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
> return -EINVAL;
> }
>
> + rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
> +
> if (*dev->dev_ops->rss_hash_conf_get == NULL)
> return -ENOTSUP;
> ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 99fe9e238b..5abe2cb36d 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -174,6 +174,7 @@ extern "C" {
>
> #include "rte_ethdev_trace_fp.h"
> #include "rte_dev_info.h"
> +#include "rte_flow.h"
>
> extern int rte_eth_dev_logtype;
>
> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
> * The *rss_hf* field of the *rss_conf* structure indicates the different
> * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
> * Supplying an *rss_hf* equal to zero disables the RSS feature.
> + *
> + * The *func* field of the *rss_conf* structure indicates the different
> + * types of hash algorithms applied by the RSS hashing.
Consider:
The *func* field of the *rss_conf* structure indicates the algorithm to
use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
the PMD to use its best-effort algorithm rather than a specific one.
> */
> struct rte_eth_rss_conf {
> uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
> uint8_t rss_key_len; /**< hash key length in bytes. */
> uint64_t rss_hf; /**< Hash functions to apply - see below. */
> + enum rte_eth_hash_function func; /**< Hash algorithm to apply. */
> };
>
> /*
> --
> 2.22.0
>
>
Thank you.
^ permalink raw reply [relevance 0%]
* [PATCH 1/5] ethdev: support setting and querying rss algorithm
@ 2023-03-15 11:00 10% ` Dongdong Liu
2023-03-15 11:28 0% ` Ivan Malov
2023-03-15 13:43 3% ` Thomas Monjalon
0 siblings, 2 replies; 200+ results
From: Dongdong Liu @ 2023-03-15 11:00 UTC (permalink / raw)
To: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan
Cc: stable, yisen.zhuang, liudongdong3, Jie Hai
From: Jie Hai <haijie1@huawei.com>
Currently, rte_eth_rss_conf supports configuring rss hash
functions, rss key and its length, but not rss hash algorithm.
The structure ``rte_eth_rss_conf`` is extended by adding a new field,
"func". This represents the RSS algorithms to apply. The following
API is affected:
- rte_eth_dev_configure
- rte_eth_dev_rss_hash_update
- rte_eth_dev_rss_hash_conf_get
To prevent configuration failures caused by incorrect func input, check
this parameter in advance. If it's incorrect, a warning is generated
and the default value is set. Do the same for rte_eth_dev_rss_hash_update
and rte_eth_dev_configure.
To check whether the drivers report the func field, it is set to default
value before querying.
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
doc/guides/rel_notes/release_23_03.rst | 4 ++--
lib/ethdev/rte_ethdev.c | 18 ++++++++++++++++++
lib/ethdev/rte_ethdev.h | 5 +++++
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index af6f37389c..7879567427 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -284,8 +284,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
-* No ABI change that would break compatibility with 22.11.
-
+* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
+ algorithm.
Known Issues
------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..db561026bd 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
+ if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
+ RTE_ETHDEV_LOG(WARNING,
+ "Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
+ port_id, dev_conf->rx_adv_conf.rss_conf.func,
+ RTE_ETH_HASH_FUNCTION_DEFAULT);
+ dev->data->dev_conf.rx_adv_conf.rss_conf.func =
+ RTE_ETH_HASH_FUNCTION_DEFAULT;
+ }
+
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
(dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
@@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
return -ENOTSUP;
}
+ if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
+ RTE_ETHDEV_LOG(NOTICE,
+ "Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
+ port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
+ rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
+ }
+
if (*dev->dev_ops->rss_hash_update == NULL)
return -ENOTSUP;
ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
@@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
return -EINVAL;
}
+ rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
+
if (*dev->dev_ops->rss_hash_conf_get == NULL)
return -ENOTSUP;
ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..5abe2cb36d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -174,6 +174,7 @@ extern "C" {
#include "rte_ethdev_trace_fp.h"
#include "rte_dev_info.h"
+#include "rte_flow.h"
extern int rte_eth_dev_logtype;
@@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
* The *rss_hf* field of the *rss_conf* structure indicates the different
* types of IPv4/IPv6 packets to which the RSS hashing must be applied.
* Supplying an *rss_hf* equal to zero disables the RSS feature.
+ *
+ * The *func* field of the *rss_conf* structure indicates the different
+ * types of hash algorithms applied by the RSS hashing.
*/
struct rte_eth_rss_conf {
uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
uint8_t rss_key_len; /**< hash key length in bytes. */
uint64_t rss_hf; /**< Hash functions to apply - see below. */
+ enum rte_eth_hash_function func; /**< Hash algorithm to apply. */
};
/*
--
2.22.0
^ permalink raw reply [relevance 10%]
-- links below jump to the message on this page --
2018-10-09 7:54 [dpdk-dev] [PATCH 0/2] eal/bitmap: support reverse bitmap scan Vivek Sharma
2023-06-12 2:23 4% ` Stephen Hemminger
2019-04-16 1:59 [dpdk-dev] [PATCH] fbarray: get fbarrays from containerized secondary ogawa.yasufumi
2023-06-13 16:51 3% ` Stephen Hemminger
2019-11-25 16:13 [dpdk-dev] [RFC PATCH] mark experimental variables David Marchand
2023-06-12 2:49 0% ` Stephen Hemminger
2021-09-13 8:45 [dpdk-dev] Questions about rte_eth_link_speed_to_str API Min Hu (Connor)
2021-09-16 2:56 ` [dpdk-dev] [RFC] ethdev: improve link speed to string Min Hu (Connor)
2021-09-16 6:22 ` Andrew Rybchenko
2021-09-16 8:16 ` Min Hu (Connor)
2021-09-16 8:21 ` Andrew Rybchenko
2021-09-17 0:43 ` Min Hu (Connor)
2023-01-19 11:41 ` Ferruh Yigit
2023-01-19 16:45 ` Stephen Hemminger
2023-02-10 14:41 ` Ferruh Yigit
2023-03-23 14:40 3% ` Ferruh Yigit
2021-12-24 16:46 [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side Feifei Wang
2023-05-25 9:45 ` [PATCH v6 0/4] Recycle mbufs from Tx queue to Rx queue Feifei Wang
2023-05-25 9:45 ` [PATCH v6 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-06-05 12:53 ` Константин Ананьев
2023-06-06 2:55 1% ` Feifei Wang
2023-06-06 7:10 0% ` Konstantin Ananyev
2023-06-06 7:31 3% ` Feifei Wang
2023-06-06 8:34 0% ` Konstantin Ananyev
2023-06-07 0:00 0% ` Ferruh Yigit
2023-06-12 3:25 0% ` Feifei Wang
2022-09-28 12:45 [PATCH 4/5] test/security: add inline MACsec cases Akhil Goyal
2023-05-23 19:49 ` [PATCH 00/13] Add MACsec unit test cases Akhil Goyal
2023-05-23 19:49 ` [PATCH 02/13] security: add MACsec packet number threshold Akhil Goyal
2023-05-23 21:29 3% ` Stephen Hemminger
2023-05-24 7:12 0% ` [EXT] " Akhil Goyal
2023-05-24 8:09 3% ` Akhil Goyal
2023-06-07 15:19 ` [PATCH v2 00/13] Add MACsec unit test cases Akhil Goyal
2023-06-07 15:19 3% ` [PATCH v2 01/13] security: add direction in SA/SC configuration Akhil Goyal
2023-06-07 19:49 3% ` David Marchand
2023-06-08 6:58 0% ` [EXT] " Akhil Goyal
2023-06-08 6:54 ` [PATCH v3 00/13] Add MACsec unit test cases Akhil Goyal
2023-06-08 6:54 3% ` [PATCH v3 01/13] security: add direction in SA/SC configuration Akhil Goyal
2022-10-20 9:31 [PATCH V5] ethdev: fix one address occupies two indexes in MAC addrs Huisong Li
2023-02-02 12:36 ` [PATCH V8] ethdev: fix one address occupies two entries " Huisong Li
2023-05-16 11:47 0% ` lihuisong (C)
2023-05-16 14:13 0% ` Ferruh Yigit
2023-05-17 7:45 0% ` lihuisong (C)
2023-05-17 8:53 0% ` Ferruh Yigit
2023-05-19 3:00 4% ` [PATCH V9] " Huisong Li
2023-05-19 9:31 3% ` [PATCH V10] " Huisong Li
2022-11-03 15:47 [PATCH 0/2] ABI check updates David Marchand
2023-03-23 17:15 9% ` [PATCH v2 " David Marchand
2023-03-23 17:15 21% ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
2023-03-23 17:15 41% ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
2023-03-28 18:38 4% ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
2023-02-10 2:48 [PATCH v4 0/3] add telemetry cmds for ring Jie Hai
2023-05-09 1:29 3% ` [PATCH v5 " Jie Hai
2023-05-09 1:29 3% ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-05-09 6:23 0% ` Ruifeng Wang
2023-05-09 8:15 0% ` Jie Hai
2023-05-09 9:24 3% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
2023-05-09 9:24 3% ` [PATCH v6 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-05-30 9:27 0% ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
2023-02-28 9:39 [RFC 0/2] Add high-performance timer facility Mattias Rönnblom
2023-03-15 17:03 3% ` [RFC v2 " Mattias Rönnblom
2023-03-09 8:56 [RFC 1/2] security: introduce out of place support for inline ingress Nithin Dabilpuram
2023-04-11 10:04 4% ` [PATCH 1/3] " Nithin Dabilpuram
2023-04-11 18:05 3% ` Stephen Hemminger
2023-04-18 8:33 4% ` Jerin Jacob
2023-04-24 22:41 3% ` Thomas Monjalon
2023-05-19 8:07 4% ` Jerin Jacob
2023-05-30 9:23 0% ` Jerin Jacob
2023-05-30 13:51 0% ` Thomas Monjalon
2023-05-31 9:26 5% ` Morten Brørup
2023-03-14 12:48 [PATCH 0/5] fix segment fault when parse args Chengwen Feng
2023-03-16 18:18 ` Ferruh Yigit
2023-03-17 2:43 3% ` fengchengwen
2023-03-21 13:50 0% ` Ferruh Yigit
2023-03-22 1:15 0% ` fengchengwen
2023-03-22 8:53 0% ` Ferruh Yigit
2023-03-22 13:49 0% ` Thomas Monjalon
2023-03-23 11:58 3% ` fengchengwen
2023-03-23 12:51 3% ` Thomas Monjalon
2023-03-15 11:00 [PATCH 0/5] support setting and querying RSS algorithms Dongdong Liu
2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
2023-03-15 11:28 0% ` Ivan Malov
2023-03-16 13:10 3% ` Dongdong Liu
2023-03-16 14:31 0% ` Ivan Malov
2023-03-15 13:43 3% ` Thomas Monjalon
2023-03-16 13:16 3% ` Dongdong Liu
2023-06-02 20:19 0% ` Ferruh Yigit
2023-06-05 12:34 0% ` Dongdong Liu
2023-03-17 20:19 [PATCH 0/7] replace rte atomics with GCC builtin atomics Tyler Retzlaff
2023-03-23 22:53 ` [PATCH v3 " Tyler Retzlaff
2023-05-24 12:40 ` David Marchand
2023-05-24 15:47 3% ` Tyler Retzlaff
2023-05-24 20:06 0% ` David Marchand
2023-05-24 22:50 0% ` Tyler Retzlaff
2023-05-24 22:56 0% ` Honnappa Nagarahalli
2023-05-25 0:02 0% ` Tyler Retzlaff
2023-03-20 10:26 [PATCH 1/2] app/mldev: fix build with debug David Marchand
2023-03-20 10:26 5% ` [PATCH 2/2] ci: test compilation " David Marchand
2023-03-20 12:18 ` [PATCH v2 1/2] app/mldev: fix build " David Marchand
2023-03-20 12:18 19% ` [PATCH v2 2/2] ci: test compilation with debug in GHA David Marchand
2023-03-24 2:16 [PATCH v2 00/15] graph enhancement for multi-core dispatch Zhirun Yan
2023-03-29 6:43 ` [PATCH v3 " Zhirun Yan
2023-03-29 6:43 ` [PATCH v3 03/15] graph: move node process into inline function Zhirun Yan
2023-03-29 15:34 3% ` Stephen Hemminger
2023-03-29 15:41 0% ` Jerin Jacob
2023-03-29 23:40 2% [PATCH v12 00/22] Covert static log types in libraries to dynamic Stephen Hemminger
2023-03-29 23:40 2% ` [PATCH v12 18/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-03-31 4:02 [PATCH v5 00/15] graph enhancement for multi-core dispatch Zhirun Yan
2023-05-09 6:03 ` [PATCH v6 " Zhirun Yan
2023-05-09 6:03 ` [PATCH v6 04/15] graph: add get/set graph worker model APIs Zhirun Yan
2023-05-24 6:08 3% ` Jerin Jacob
2023-05-26 9:58 0% ` Yan, Zhirun
2023-03-31 17:17 3% DPDK 23.03 released Thomas Monjalon
2023-03-31 20:08 [PATCH] devtools: add script to check for non inclusive naming Stephen Hemminger
2023-04-03 14:47 14% ` [PATCH v2] " Stephen Hemminger
2023-04-03 6:59 9% [PATCH] version: 23.07-rc0 David Marchand
2023-04-03 9:37 10% ` [PATCH v2] " David Marchand
2023-04-06 7:44 0% ` David Marchand
2023-04-03 21:52 [PATCH 0/9] msvc integration changes Tyler Retzlaff
2023-04-03 21:52 6% ` [PATCH 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-03 21:52 3% ` [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-04 20:07 ` [PATCH v2 0/9] msvc integration changes Tyler Retzlaff
2023-04-04 20:07 6% ` [PATCH v2 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-04 20:07 3% ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-05 10:56 0% ` Bruce Richardson
2023-04-05 16:02 0% ` Tyler Retzlaff
2023-04-05 16:17 0% ` Bruce Richardson
2023-04-06 0:45 ` [PATCH v3 00/11] msvc integration changes Tyler Retzlaff
2023-04-06 0:45 6% ` [PATCH v3 08/11] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-06 0:45 3% ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-11 10:24 0% ` Bruce Richardson
2023-04-11 20:34 0% ` Tyler Retzlaff
2023-04-12 8:50 0% ` Bruce Richardson
2023-04-11 21:12 ` [PATCH v4 00/14] msvc integration changes Tyler Retzlaff
2023-04-11 21:12 6% ` [PATCH v4 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-11 21:12 3% ` [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-13 21:25 ` [PATCH v5 00/14] msvc integration changes Tyler Retzlaff
2023-04-13 21:26 6% ` [PATCH v5 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-14 6:45 ` Morten Brørup
2023-04-14 17:02 4% ` Tyler Retzlaff
2023-04-15 7:16 3% ` Morten Brørup
2023-04-15 20:52 4% ` Tyler Retzlaff
2023-04-15 22:41 4% ` Morten Brørup
2023-04-13 21:26 3% ` [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-15 1:15 ` [PATCH v6 00/15] msvc integration changes Tyler Retzlaff
2023-04-15 1:15 5% ` [PATCH v6 11/15] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-15 1:15 3% ` [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-17 16:10 ` [PATCH v7 00/14] msvc integration changes Tyler Retzlaff
2023-04-17 16:10 5% ` [PATCH v7 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-17 16:10 3% ` [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-05-02 3:15 ` [PATCH v8 00/14] msvc integration changes Tyler Retzlaff
2023-05-02 3:15 5% ` [PATCH v8 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-05-02 3:15 3% ` [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-05 12:40 3% [PATCH v2 0/3] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-04-05 12:41 ` [PATCH v2 3/3] " Eelco Chaudron
2023-05-10 11:44 ` David Marchand
2023-05-16 8:53 ` Eelco Chaudron
2023-05-16 10:12 3% ` David Marchand
2023-05-16 11:36 0% ` Eelco Chaudron
2023-05-16 11:45 0% ` Maxime Coquelin
2023-05-16 12:07 0% ` Eelco Chaudron
2023-05-17 9:18 0% ` Eelco Chaudron
2023-05-08 13:58 0% ` [PATCH v2 0/3] " Eelco Chaudron
2023-04-05 23:12 17% [PATCH] MAINTAINERS: sort file entries Stephen Hemminger
2023-04-13 11:53 [PATCH v2 1/3] eal: add x86 cpuid support for monitorx Sivaprasad Tummala
2023-04-13 11:53 3% ` [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-04-17 4:31 3% ` [PATCH v3 1/4] " Sivaprasad Tummala
2023-04-18 8:25 ` [PATCH v4 0/4] power: monitor support for AMD EPYC processors Sivaprasad Tummala
2023-04-18 8:25 3% ` [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-04-18 8:52 3% ` Ferruh Yigit
2023-04-18 9:22 3% ` Bruce Richardson
2023-06-01 9:23 0% ` David Marchand
2023-04-14 8:43 [PATCH] reorder: improve buffer structure layout Volodymyr Fialko
2023-04-14 14:52 3% ` Stephen Hemminger
2023-04-14 14:54 3% ` Bruce Richardson
2023-04-14 15:30 0% ` Stephen Hemminger
2023-04-18 5:30 [RFC 0/4] Support VFIO sparse mmap in PCI bus Chenbo Xia
2023-04-18 7:46 3% ` David Marchand
2023-04-18 9:27 0% ` Xia, Chenbo
2023-04-18 9:33 0% ` Xia, Chenbo
2023-04-18 10:45 [PATCH] eventdev: fix alignment padding Sivaprasad Tummala
2023-04-18 11:06 4% ` Morten Brørup
2023-04-18 12:40 3% ` Mattias Rönnblom
2023-04-18 12:30 ` Mattias Rönnblom
2023-04-18 14:07 ` Morten Brørup
2023-04-18 15:16 ` Mattias Rönnblom
2023-05-17 13:20 ` Jerin Jacob
2023-05-17 13:35 3% ` Morten Brørup
2023-05-23 15:15 3% ` Jerin Jacob
2023-04-19 8:36 [RFC] lib: set/get max memzone segments Ophir Munk
2023-04-20 7:43 ` Thomas Monjalon
2023-04-20 18:20 ` Tyler Retzlaff
2023-04-21 8:34 4% ` Thomas Monjalon
2023-04-28 10:31 [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver jerinj
2023-05-02 14:18 5% ` Ferruh Yigit
2023-05-08 13:44 1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
2023-05-17 15:47 0% ` Jerin Jacob
[not found] <20230125075636.363cafaf@hermes.local>
[not found] ` <3688057.uBEoKPz9u1@thomas>
[not found] ` <DS0PR11MB73090EC350B82E0730D0D9A197CE9@DS0PR11MB7309.namprd11.prod.outlook.com>
2023-05-05 15:05 3% ` Minutes of Technical Board Meeting, 2023-01-11 Stephen Hemminger
2023-05-11 8:16 [PATCH v2] eventdev: avoid non-burst shortcut for variable-size bursts Mattias Rönnblom
2023-05-11 8:24 ` [PATCH v3] " Mattias Rönnblom
2023-05-12 11:59 ` Jerin Jacob
2023-05-12 13:15 ` Mattias Rönnblom
2023-05-15 12:38 ` Jerin Jacob
2023-05-15 20:52 3% ` Mattias Rönnblom
2023-05-16 13:08 0% ` Jerin Jacob
2023-05-17 7:16 3% ` Mattias Rönnblom
2023-05-17 12:28 0% ` Jerin Jacob
2023-05-16 6:37 [PATCH v1 0/7] ethdev: modify field API for multiple headers Michael Baum
2023-05-16 6:37 3% ` [PATCH v1 5/7] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-18 17:40 ` [PATCH v2 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-18 17:40 3% ` [PATCH v2 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-22 19:27 ` [PATCH v3 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-22 19:28 3% ` [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-22 19:28 3% ` [PATCH v3 5/5] ethdev: add MPLS header " Michael Baum
2023-05-23 12:48 ` [PATCH v4 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-23 12:48 3% ` [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-23 12:48 2% ` [PATCH v4 5/5] ethdev: add MPLS header " Michael Baum
2023-05-23 21:31 ` [PATCH v5 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-23 21:31 3% ` [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-23 21:31 2% ` [PATCH v5 5/5] ethdev: add MPLS header " Michael Baum
2023-05-17 6:59 [PATCH] net/bonding: replace master/slave to main/member Chaoyong He
2023-05-17 14:52 1% ` Stephen Hemminger
2023-05-18 6:32 1% ` [PATCH v2] " Chaoyong He
2023-05-18 7:01 1% ` [PATCH v3] " Chaoyong He
2023-05-18 8:44 1% ` [PATCH v4] " Chaoyong He
2023-05-18 15:39 3% ` Stephen Hemminger
2023-06-02 15:05 0% ` Ferruh Yigit
2023-05-17 9:08 4% [PATCH v3 0/4] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-06-01 20:00 0% ` Maxime Coquelin
2023-06-02 6:20 0% ` Eelco Chaudron
2023-05-17 16:15 [PATCH 00/20] Replace use of term sanity-check Stephen Hemminger
2023-05-18 16:45 ` [PATCH v2 00/19] Replace use of the " Stephen Hemminger
2023-05-18 16:45 2% ` [PATCH v2 01/19] mbuf: replace term sanity check Stephen Hemminger
2023-05-19 17:45 ` [PATCH v3 00/19] Replace use "sanity check" Stephen Hemminger
2023-05-19 17:45 2% ` [PATCH v3 01/19] mbuf: replace term sanity check Stephen Hemminger
2023-05-22 11:40 [PATCH 0/2] add support of showing firmware version Chaoyong He
2023-05-22 11:40 6% ` [PATCH 1/2] net/nfp: align reading of version info with kernel driver Chaoyong He
2023-05-25 10:08 [PATCH v2 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB Sunyang Wu
2023-05-25 14:48 3% ` [EXT] " Akhil Goyal
2023-05-25 20:39 8% [PATCH] ethdev: validate reserved fields Stephen Hemminger
2023-05-26 8:15 0% ` Bruce Richardson
2023-06-06 15:24 3% ` Ferruh Yigit
2023-06-06 15:38 0% ` Stephen Hemminger
2023-05-25 23:23 4% [PATCH v1 0/4] bbdev: API extension for 23.11 Nicolas Chautru
2023-05-26 1:18 [PATCH v3 1/2] cryptodev: support SM3_HMAC,SM4_CFB and SM4_OFB Sunyang Wu
2023-05-26 7:15 4% ` [EXT] " Akhil Goyal
2023-05-29 3:06 4% ` 回复: " Sunyang Wu
2023-05-26 2:11 [PATCH v1 0/1] doc: accounce change in bbdev extension Nicolas Chautru
2023-05-26 2:11 ` [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension Nicolas Chautru
2023-05-26 3:47 ` Stephen Hemminger
2023-06-05 19:07 ` Maxime Coquelin
2023-06-05 20:08 4% ` Chautru, Nicolas
2023-06-06 9:20 4% ` David Marchand
2023-06-06 21:01 0% ` Chautru, Nicolas
2023-06-08 8:47 0% ` Maxime Coquelin
2023-06-12 20:53 3% ` Chautru, Nicolas
2023-06-13 8:14 4% ` Maxime Coquelin
2023-06-13 17:16 3% ` Chautru, Nicolas
2023-06-13 20:00 4% ` Maxime Coquelin
2023-06-13 21:22 3% ` Stephen Hemminger
2023-06-14 18:18 0% ` Chautru, Nicolas
2023-06-15 7:52 0% ` Maxime Coquelin
2023-06-15 19:30 5% ` Chautru, Nicolas
[not found] <20220825024425.10534-1-lihuisong@huawei.com>
2023-01-31 3:33 ` [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port Huisong Li
2023-05-16 11:27 0% ` lihuisong (C)
2023-05-23 0:46 0% ` fengchengwen
2023-05-27 2:11 3% ` [PATCH V6 " Huisong Li
2023-05-27 2:11 2% ` [PATCH V6 2/5] ethdev: fix skip valid port in probing callback Huisong Li
2023-06-06 16:26 0% ` [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port Ferruh Yigit
2023-06-07 10:14 0% ` lihuisong (C)
2023-05-31 7:08 3% [PATCH] common/sfc_efx/base: update fields name for MARK and FLAG actions Artemii Morozov
2023-06-01 15:43 0% ` Ferruh Yigit
2023-06-02 2:04 [PATCH v1 0/1] bbdev: extend range for alloc function Nicolas Chautru
2023-06-02 2:04 ` [PATCH v1 1/1] bbdev: extend range of allocation function Nicolas Chautru
2023-06-02 7:56 3% ` Maxime Coquelin
2023-06-02 14:17 3% ` Chautru, Nicolas
2023-06-05 19:08 3% ` Maxime Coquelin
2023-06-06 12:11 [PATCH] doc: deprecation notice to add RSS hash algorithm field Dongdong Liu
2023-06-06 15:39 ` Stephen Hemminger
2023-06-06 15:50 3% ` Ferruh Yigit
2023-06-06 16:35 3% ` Stephen Hemminger
2023-06-07 1:56 [PATCH 00/10] support rte_flow for flower firmware with NFDk Chaoyong He
2023-06-07 1:57 12% ` [PATCH 02/10] net/nfp: add a check function for the NFD version Chaoyong He
2023-06-09 10:51 [PATCH] doc: prefer installing using meson rather than ninja Bruce Richardson
2023-06-09 13:34 3% ` [PATCH v2] " Bruce Richardson
2023-06-09 14:51 3% ` [PATCH v3] " Bruce Richardson
2023-06-13 8:17 [PATCH 0/4] Test examples compilation externally David Marchand
2023-06-13 8:17 10% ` [PATCH 4/4] ci: build examples externally David Marchand
2023-06-13 14:06 ` [PATCH v2 0/4] Test examples compilation externally David Marchand
2023-06-13 14:06 10% ` [PATCH v2 4/4] ci: build examples externally David Marchand
2023-06-13 16:58 4% [PATCH v3] build: prevent accidentally building without NUMA support Bruce Richardson
2023-06-13 17:08 4% ` [PATCH v4] " Bruce Richardson
2023-06-15 14:38 4% ` [PATCH v5] " Bruce Richardson
2023-06-14 14:26 [PATCH 0/5] cleanup in library header files Thomas Monjalon
2023-06-14 14:26 1% ` [PATCH 1/5] lib: remove blank line ending comment blocks Thomas Monjalon
2023-06-15 16:48 5% [PATCH v2 0/5] bbdev: API extension for 23.11 Nicolas Chautru
2023-06-15 16:49 8% ` [PATCH v2 5/5] devtools: ignore changes into bbdev experimental API Nicolas Chautru
2023-06-15 17:59 3% DPDK Release Status Meeting 2023-06-15 Mcnamara, John