* Regarding Mellanox bifurcated driver on Azure
@ 2025-04-25 17:47 Prashant Upadhyaya
From: Prashant Upadhyaya @ 2025-04-25 17:47 UTC (permalink / raw)
To: dev
Hi,
I have a VM on Azure with two Mellanox 'accelerated networking'
interfaces:
# lspci -nn|grep -i ether
6561:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
f08c:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
I have a DPDK application which needs to obtain 'all' packets from the NIC.
I installed the drivers and compiled DPDK 24.11 (Ubuntu 20.04); my app
starts and is able to detect the NICs. Everything looks good:
myapp.out -c 0x07 -a f08c:00:02.0 -a 6561:00:02.0
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
mlx5_net: Default miss action is not supported.
mlx5_net: Default miss action is not supported.
All Ports initialized
Port 0 is UP (50000 Mbps)
Port 1 is UP (50000 Mbps)
The trouble is that ARP packets are not being picked up by my DPDK
application; I see them being delivered to the kernel via the eth
interface corresponding to the port. (mlx5 is a bifurcated driver: you
don't actually bind to the NIC, so the eth interfaces are still visible
at the Linux level and you can run tcpdump on them, and that is where I
see the ARP packets.) I can receive UDP packets in my DPDK app, though.
My application is not setting any rte_flow rules, so I was expecting
that by default my DPDK app would get all the packets, as is normally
the case with other NICs. Is there something I need to configure on the
Mellanox NIC so that my DPDK app receives 'all' packets, including ARP?
I was wondering whether I am expected to install an explicit rte_flow
rule myself, e.g. something like the sketch below.
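A minimal, untested sketch of what I had in mind (matching EtherType
0x0806 and steering it to queue 0; the helper name and the minimal
error handling are mine):

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Hypothetical helper: direct ARP frames to a given RX queue. */
static struct rte_flow *
steer_arp_to_queue(uint16_t port_id, uint16_t queue_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        /* Match only on EtherType == ARP (0x0806). */
        struct rte_flow_item_eth eth_spec = {
                .hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP),
        };
        struct rte_flow_item_eth eth_mask = {
                .hdr.ether_type = rte_cpu_to_be_16(0xffff),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH,
                  .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = queue_id };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
                return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, &err);
}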
Regards
-Prashant
* Re: Regarding Mellanox bifurcated driver on Azure
From: Stephen Hemminger @ 2025-04-25 23:01 UTC (permalink / raw)
To: Prashant Upadhyaya; +Cc: dev
Short answer: Accelerated Networking on Azure is not designed to support
bifurcated VF usage.
On Fri, Apr 25, 2025, 10:47 Prashant Upadhyaya <praupadhyaya@gmail.com>
wrote:
> Is there something I need to configure on the Mellanox NIC so that my
> DPDK app receives 'all' packets, including ARP?
* Re: Regarding Mellanox bifurcated driver on Azure
From: Stephen Hemminger @ 2025-04-26 15:28 UTC (permalink / raw)
To: Prashant Upadhyaya; +Cc: dev
On Fri, 25 Apr 2025 23:17:30 +0530
Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:
> Is there something I need to configure on the Mellanox NIC so that my
> DPDK app receives 'all' packets, including ARP?
The Mellanox device in Azure networking cards is only used as a VF switch.
You can go back to earlier DPDK presentations for more detail.

Three reasons bifurcation won't work:

1. Only some of the packets arrive on the VF. All non-IP traffic shows up
   on the synthetic device, and the VF is only used after the TCP
   three-way handshake.
2. The netvsc PMD doesn't handle flow rules.
3. The VF can be removed and restored at any time by the hypervisor; it
   is not a stable entity.
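For what it's worth, with the netvsc PMD owning the synthetic device,
the VF is managed internally as a sub-device: the application sees a
single port and receives all traffic on it, ARP included, regardless of
which path a given packet took. A rough, untested sketch of the receive
side (assuming the port is already configured and started; names are
mine):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
rx_loop(uint16_t port_id)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb, i;

        for (;;) {
                /* ARP and other non-IP frames arrive via the synthetic
                 * path, established flows may arrive via the VF; the
                 * netvsc PMD hides the distinction from the app. */
                nb = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
                for (i = 0; i < nb; i++)
                        rte_pktmbuf_free(bufs[i]); /* replace with real processing */
        }
}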