* [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Jesper Wramberg @ 2015-10-08 15:25 UTC
To: dev
Hi all,
I was wondering if anyone has any experience with the ConnectX-3 card from
Mellanox? I have a server with such a card, but I can't seem to get link
bonding to work.
I've installed the necessary kernel modules, etc., and the card works as
expected when testing it with, e.g., the layer 2 forwarding example. If I
try to run the bond example, however, I get a segmentation fault when the
"rte_eth_bond_slave_add" function is called. Originally I wanted to bond
the ports using the EAL command-line option, but the card only has one PCI
address :(
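For reference, the sequence I mean is roughly the following (a minimal
sketch, not my exact code; the port IDs and bonding mode here are
placeholders):

    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    static int setup_bond(uint8_t slave0, uint8_t slave1)
    {
        /* Create the bonded device (mode 1 = active-backup). */
        int bond_port = rte_eth_bond_create("bond0",
                                            BONDING_MODE_ACTIVE_BACKUP, 0);
        if (bond_port < 0)
            return -1;
        /* The segmentation fault happens inside this call: */
        if (rte_eth_bond_slave_add((uint8_t)bond_port, slave0) != 0)
            return -1;
        if (rte_eth_bond_slave_add((uint8_t)bond_port, slave1) != 0)
            return -1;
        return bond_port;
    }

The EAL alternative would have been something like
--vdev 'eth_bond0,mode=1,slave=<pci address>', but since both ports sit
behind the same PCI address, there is no way to name them individually in
the slave= argument.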
Does anyone have any idea what could be causing this behavior?
I am running firmware version 2.30.x, which I have considered upgrading.
Besides this, I have been wondering whether I have to do anything in Linux,
since the PMD uses the Linux drivers for control. I haven't been able to
find any information on this, though.
Regards,
Jesper Wramberg
* Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Olga Shern @ 2015-10-08 15:36 UTC
To: Jesper Wramberg, dev
Hi Jesper,
The bonding PMD is not supported with DPDK 2.1 on Mellanox NICs.
We have just sent patches to support asynchronous link events; without
these patches it will not work.
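To give an idea of why (a sketch of the mechanism the patches enable, not
the patches themselves): the bonding PMD depends on link state change
(LSC) callbacks on each slave, roughly as below, and without interrupt
support in mlx4 these callbacks are never delivered.

    #include <rte_ethdev.h>

    /* The kind of LSC callback the bonding PMD relies on. */
    static void lsc_cb(uint8_t port_id, enum rte_eth_event_type type,
                       void *cb_arg)
    {
        struct rte_eth_link link;
        rte_eth_link_get_nowait(port_id, &link);
        /* Bonding updates its view of the slave's link state here. */
    }

    static void watch_slave(uint8_t slave_port)
    {
        rte_eth_dev_callback_register(slave_port, RTE_ETH_EVENT_INTR_LSC,
                                      lsc_cb, NULL);
    }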
Best Regards
Olga
Sent from Samsung Mobile.
-------- Original message --------
From: Jesper Wramberg
Date: 08/10/2015 4:25 PM (GMT+00:00)
To: dev@dpdk.org
Subject: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
[...]
* Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Jesper Wramberg @ 2015-10-08 18:19 UTC
To: Olga Shern; +Cc: dev
Hi Olga,
Thank you for your help. I'll check out the patches tomorrow.
For the sake of clarity, I assume you mean the patches about:
"eal: new interrupt handler type"
"mlx4: handle interrupts"
etc.
Regards,
Jesper
2015-10-08 17:36 GMT+02:00 Olga Shern <olgas@mellanox.com>:
>
>
> Hi Jesper,
>
> The bonding PMD is not supported with DPDK 2.1 on Mellanox NICs.
> We have just sent patches to support asynchronous link events; without
> these patches it will not work.
>
> Best Regards
> Olga
>
>
> -------- Original message --------
> From: Jesper Wramberg
> Date: 08/10/2015 4:25 PM (GMT+00:00)
> To: dev@dpdk.org
> Subject: [dpdk-dev] Segmentation fault when bonding ports on Mellanox
> ConnectX-3
>
> [...]
* Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Olga Shern @ 2015-10-08 22:29 UTC
To: Jesper Wramberg; +Cc: dev
> For the sake of clarity, I assume you mean the patches about:
> "eal: new interrupt handler type"
> "mlx4: handle interrupts"
[Olga] Yes, you are right.
Best Regards,
Olga
* Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Jesper Wramberg @ 2015-10-12 16:02 UTC
To: Olga Shern; +Cc: dev
Hi again,
The patches worked great, and the DPDK bonding API now functions with the
ConnectX-3. Yay :-)
I have run into some new trouble, however, and since it's related I figured
I would ask here in case anyone can help. I am using SR-IOV to create a
dual-port VF. When trying to bond the ports on this VF, it seems that DPDK
cannot set the MAC address on the slaves. As a result, I am unable to
receive data on the bonded port. As a workaround, I can set the MAC address
on the two VF ports manually in Linux using "ip link set <pf interface> vf
0 mac ..." after which I can start DPDK and receive data on the bonded port
as expected.
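Concretely, the workaround looks like this (interface names and MAC
addresses are placeholders, not my real ones; each physical port shows up
as its own PF netdev here):

    # Assign fixed MACs to VF 0 on both PF ports before starting DPDK.
    ip link set eth2 vf 0 mac aa:bb:cc:dd:ee:01
    ip link set eth3 vf 0 mac aa:bb:cc:dd:ee:02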
Is there a way to allow DPDK to change the MAC address so I can avoid the
workaround? I have done some searching but couldn't find anything.
Regards,
Jesper Wramberg
2015-10-09 0:29 GMT+02:00 Olga Shern <olgas@mellanox.com>:
> For the sake of clarity, I assume you mean the patches about:
> "eal: new interrupt handler type"
> "mlx4: handle interrupts"
> [Olga] Yes, you are right.
>
> [...]
* Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
From: Olga Shern @ 2015-10-15 13:12 UTC
To: Jesper Wramberg; +Cc: dev, Gideon Naim
Hi Jesper,
Glad to hear that this is working for you ☺
What bonding mode do you use?
There is a limitation, as you saw: we cannot add an additional MAC to the
VF from DPDK.
You can set the MAC of the VF on the VM in the same way you did on the PF.
Let me know if you have any additional questions.
Best Regards,
Olga
From: Jesper Wramberg [mailto:jesper.wramberg@gmail.com]
Sent: Monday, October 12, 2015 7:03 PM
To: Olga Shern <olgas@mellanox.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3
[...]
Thread overview: 6 messages
2015-10-08 15:25 [dpdk-dev] Segmentation fault when bonding ports on Mellanox ConnectX-3 Jesper Wramberg
2015-10-08 15:36 ` Olga Shern
2015-10-08 18:19 ` Jesper Wramberg
2015-10-08 22:29 ` Olga Shern
2015-10-12 16:02 ` Jesper Wramberg
2015-10-15 13:12 ` Olga Shern