On Mon, 28 Apr 2025 11:31:10 +0530
Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:
> On Sat, 26 Apr 2025 at 20:58, Stephen Hemminger <stephen@networkplumber.org>
> wrote:
>
> > On Fri, 25 Apr 2025 23:17:30 +0530
> > Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I am having a VM on Azure where I have got two 'accelerated networking'
> > > interfaces of Mellanox
> > > # lspci -nn|grep -i ether
> > > 6561:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
> > > Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
> > > f08c:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
> > > Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
> > >
> > > I have a DPDK application which needs to obtain 'all' packets from the
> > NIC.
> > > I installed the drivers, compiled DPDK24.11 (Ubuntu20.04), my app starts
> > > and is able to detect the NIC's.
> > > Everything looks good
> > > myapp.out -c 0x07 -a f08c:00:02.0 -a 6561:00:02.0
> > > EAL: Detected CPU lcores: 8
> > > EAL: Detected NUMA nodes: 1
> > > EAL: Detected shared linkage of DPDK
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: VFIO support initialized
> > > mlx5_net: Default miss action is not supported.
> > > mlx5_net: Default miss action is not supported.
> > > All Ports initialized
> > > Port 0 is UP (50000 Mbps)
> > > Port 1 is UP (50000 Mbps)
> > >
> > > The trouble is that the ARP packets are not being picked up by my DPDK
> > > application, I see them being delivered to the kernel via the eth
> > interface
> > > corresponding to the port (MLX is a bifurcated driver, you don't really
> > > bind to the NIC, so you still see the eth interfaces at linux level and
> > can
> > > run tcpdump on those, I see ARP packets in the tcpdump there on the
> > > interface)
> > > I can receive UDP packets in my DPDK app though.
> > >
> > > My application is not setting any rte_flow rules etc. so I was expecting
> > > that by default my dpdk app would get all the packets as is normally the
> > > case with other NIC's
> > > Is there something I need to configure for Mellanox NIC somewhere such
> > that
> > > I get 'all' the packets including ARP packets in my DPDK app ?
> > >
> > > Regards
> > > -Prashant
> >
> > The Mellanox device in Azure networking cards is only used as a VF switch.
> > You can go back to earlier DPDK presentations for more detail.
> >
> > Three reason bifurcation won't work.
> > 1. Only some of the packets arrive on the VF. All non-IP show up on
> > the synthetic device. The VF is only used after the TCP three way
> > handshake.
> > 2. The Netvsc PMD doesn't handle flow rules.
> > 3. The VF can be removed and restored any time (by hypervisor)
> > it is not a stable entity.
> >
> >
> Thanks Stephen, so are we concluding that DPDK apps are unusable in Azure
> as per my requirements, is there no work around or any other possibility to
> use DPDK in Azure VM ? Please do send me a link to the correct presentation
> I should refer to.
>
> Regards
> -Prashant
Remember in the cloud network interfaces are free, there really is no need
for bifurication just create two interfaces to VM, one for DPDK, and one for
non DPDK.
I am afraid Stephen, I am not entirely clear about your suggestion.
When I create an Accelerated Networking interface on Azure (with the intention of my DPDK app fully controlling it), it is automatically giving one eth interface (representing slowpath) and one enp interface (presumably the fast path and which my dpdk app detects and operates upon). It is this pair that is created automatically for a single packet interface by Azure.
The trouble is the above bifurcation because I don't get to see all the packets from the NIC on DPDK controlled interface -- my DPDK app wants to see all the packets.
Regards
-Prashant