* Bond fails to start all slaves
From: Aman Thakur @ 2025-04-04 14:08 UTC (permalink / raw)
To: users
Hi users,
Environment
===========
DPDK Version: 21.11.9
Bond members:
slave 1: Intel e1000e I217 1Gbps "Ethernet Connection I217-LM 153a"
slave 2: Intel e1000e 82574 1Gbps "82574L Gigabit Network Connection 10d3"
OS: Rocky Linux 8.10 (RHEL-compatible)
Compiler: gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-24)
Steps to reproduce
==================
1. Bind the ports to DPDK (a link sanity check follows these steps):
dpdk-devbind.py -b vfio-pci 0000:b1:00.0 0000:b1:00.1
2. Launch testpmd:
./dpdk-testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
--no-lsc-interrupt --port-topology=chained
3. Create the bonded device and add both members:
port stop all
create bonded device 0 0
add bonding slave 0 2
add bonding slave 1 2
port start all
show port info all
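Before binding to vfio-pci, a quick check with the kernel e1000e driver
can rule out a cabling/PHY problem ("eno1"/"eno2" are placeholders for
the two NICs' kernel interface names):

ethtool eno1 | grep -E 'Speed|Link detected'   # placeholder iface name
ethtool eno2 | grep -E 'Speed|Link detected'   # placeholder iface name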
Results:
========
The link status of one port is down (this is not specific to the bond
mode). In every bond mode, the link of member 1, the Intel e1000e I217
1Gbps "Ethernet Connection I217-LM 153a", stays down. In the output
below, ports 0 and 1 are the members and port 2 is the bonded device
created in step 3.
Port 0 Link down
Port 1 Link up at 1 Gbps FDX Autoneg
Port 1: link state change event
Port 2 Link up at 1 Gbps FDX Autoneg
Done
Expected Result:
================
All ports should come up: the link status of both member (slave) ports
should be up, and port 0 should not always be down.
In bond mode 0 (round-robin), the bonded device's link speed should be
2 Gbps, since it aggregates two 1 Gbps members.
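To see which members the bonding PMD actually considers active, the
bonding-specific testpmd command below can be run after "port start all"
(port 2 is the bonded device here):

testpmd> show bonding config 2

Its active-slaves line should list both member ports (0 and 1) once the
bond is healthy; with the I217 link down it will likely list only port 1.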
*My Questions:*
1. What could be causing one of the slaves to become inactive?
2. Is there a specific configuration or step I might be missing that is
preventing the bond from using both slaves?
3. Are there any known compatibility issues or limitations with the Intel
e1000e I217 1Gbps "Ethernet Connection I217-LM 153a" that could explain
this behavior?
* Re: Bond fails to start all slaves
From: madhukar mythri @ 2025-04-07 10:05 UTC (permalink / raw)
To: Aman Thakur; +Cc: users
Hi,
Why is the testpmd argument "--portmask" set to 0x1? It is optional; try
removing it (a mask of 0x1 enables only port 0 for forwarding).
Have you also tried without "--no-lsc-interrupt" in the testpmd arguments?
You can also cross-check the bonded device creation and configuration with
the "--vdev" EAL parameter, as follows:
--vdev 'net_bonding0,mode=0,member=0000:b1:00.0,member=0000:b1:00.1'
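For example, something along these lines (a sketch reusing the core and
memory-channel settings from the original command; note that on DPDK
21.11 the bonding device argument is spelled "slave=" rather than
"member="):

# Sketch: drops --portmask and --no-lsc-interrupt as suggested above
./dpdk-testpmd -l 0-3 -n 4 \
    --vdev 'net_bonding0,mode=0,slave=0000:b1:00.0,slave=0000:b1:00.1' \
    -- -i --nb-cores=2 --port-topology=chained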
Regards,
Madhukar.