On Tue, 29 Apr 2025 at 21:54, Stephen Hemminger <stephen@networkplumber.org> wrote:
On Tue, 29 Apr 2025 21:45:36 +0530
Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:

> On Mon, 28 Apr 2025 at 21:07, Stephen Hemminger <stephen@networkplumber.org>
> wrote:
>
> > On Mon, 28 Apr 2025 11:31:10 +0530
> > Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:
> > 
> > > On Sat, 26 Apr 2025 at 20:58, Stephen Hemminger
> > > <stephen@networkplumber.org> wrote:
> > > 
> > > > On Fri, 25 Apr 2025 23:17:30 +0530
> > > > Prashant Upadhyaya <praupadhyaya@gmail.com> wrote:
> > > > 
> > > > > Hi,
> > > > >
> > > > > I have a VM on Azure with two 'accelerated networking' interfaces
> > > > > of Mellanox:
> > > > > # lspci -nn|grep -i ether
> > > > > 6561:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
> > > > > Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
> > > > > f08c:00:02.0 Ethernet controller [0200]: Mellanox Technologies MT27710
> > > > > Family [ConnectX-4 Lx Virtual Function] [15b3:1016] (rev 80)
> > > > >
> > > > > I have a DPDK application which needs to obtain 'all' packets from
> > > > > the NIC. I installed the drivers and compiled DPDK 24.11 (Ubuntu
> > > > > 20.04); my app starts and is able to detect the NICs.
> > > > > Everything looks good:
> > > > > myapp.out -c 0x07 -a f08c:00:02.0 -a 6561:00:02.0
> > > > > EAL: Detected CPU lcores: 8
> > > > > EAL: Detected NUMA nodes: 1
> > > > > EAL: Detected shared linkage of DPDK
> > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > EAL: Selected IOVA mode 'PA'
> > > > > EAL: VFIO support initialized
> > > > > mlx5_net: Default miss action is not supported.
> > > > > mlx5_net: Default miss action is not supported.
> > > > > All Ports initialized
> > > > > Port 0 is UP (50000 Mbps)
> > > > > Port 1 is UP (50000 Mbps)
> > > > >
> > > > > The trouble is that ARP packets are not being picked up by my DPDK
> > > > > application; I see them being delivered to the kernel via the eth
> > > > > interface corresponding to the port. (mlx5 is a bifurcated driver:
> > > > > you don't really bind to the NIC, so you still see the eth interfaces
> > > > > at the Linux level and can run tcpdump on them, and I do see the ARP
> > > > > packets in tcpdump on that interface.)
> > > > > I can receive UDP packets in my DPDK app though.
> > > > >
> > > > > My application is not setting any rte_flow rules etc., so I was
> > > > > expecting that by default my DPDK app would get all the packets, as
> > > > > is normally the case with other NICs.
> > > > > Is there something I need to configure for the Mellanox NIC somewhere
> > > > > such that I get 'all' the packets, including ARP, in my DPDK app?
> > > > >
> > > > > Regards
> > > > > -Prashant 
> > > >
> > > > The Mellanox device in Azure networking cards is only used as a VF
> > > > switch. You can go back to earlier DPDK presentations for more detail.
> > > >
> > > > Three reasons bifurcation won't work:
> > > >  1. Only some of the packets arrive on the VF. All non-IP packets show
> > > >     up on the synthetic device. The VF is only used after the TCP
> > > >     three-way handshake.
> > > >  2. The netvsc PMD doesn't handle flow rules.
> > > >  3. The VF can be removed and restored at any time by the hypervisor;
> > > >     it is not a stable entity.
> > > >
> > > > 
> > > Thanks Stephen, so are we concluding that DPDK apps are unusable on
> > > Azure for my requirements? Is there no workaround or any other way to
> > > use DPDK in an Azure VM? Please do send me a link to the correct
> > > presentation I should refer to.
> > >
> > > Regards
> > > -Prashant 
> >
> > Remember, in the cloud network interfaces are free; there really is no
> > need for bifurcation. Just create two interfaces on the VM: one for DPDK
> > and one for non-DPDK traffic.
> > 
>
> I am afraid, Stephen, that I am not entirely clear about your suggestion.
> When I create an Accelerated Networking interface on Azure (with the
> intention of my DPDK app fully controlling it), Azure automatically gives
> me one eth interface (representing the slow path) and one enp interface
> (presumably the fast path, which my DPDK app detects and operates upon).
> This pair is created automatically for a single packet interface.
> The trouble is precisely this bifurcation: I don't get to see all the
> packets from the NIC on the DPDK-controlled interface, and my DPDK app
> wants to see all the packets.
>
> Regards
> -Prashant

With DPDK on Azure, an application should never use the VF directly.
It needs to use either the netvsc PMD, which handles both the vmbus (slow
path) and the VF (fast path) combined, or the older vdev_netvsc/failsafe/tap
combination. The latter uses a virtual device to build a failsafe PMD, which
then drives a combination of TAP (the kernel slow path) and the MLX5 VF. The
failsafe PMD is what is exposed for application usage.
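
To make the two options concrete, here is an illustration of my own, not
from this thread (the iface name eth1 is hypothetical). The failsafe route
is set up entirely from the EAL command line, along the lines of:

myapp.out -c 0x07 --vdev="net_vdev_netvsc0,iface=eth1"

With the netvsc PMD the vmbus device is instead bound to uio_hv_generic, and
the application sees one ethdev port per synthetic interface. Either way the
application code is ordinary ethdev usage; a minimal sketch, assuming
DPDK 24.11 (queue and pool sizes are arbitrary):

#include <stdlib.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* One mbuf pool shared by all ports. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("mbufs",
            8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port;
    RTE_ETH_FOREACH_DEV(port) {
        /* A netvsc or failsafe port behaves like any other ethdev. */
        struct rte_eth_conf conf;
        memset(&conf, 0, sizeof(conf));
        if (rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
            rte_eth_rx_queue_setup(port, 0, 1024,
                    rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
            rte_eth_tx_queue_setup(port, 0, 1024,
                    rte_eth_dev_socket_id(port), NULL) < 0 ||
            rte_eth_dev_start(port) < 0)
            rte_exit(EXIT_FAILURE, "port %u init failed\n", port);
        rte_eth_promiscuous_enable(port);
    }
    /* ... rte_eth_rx_burst() loop: non-IP frames such as ARP are expected
     * to arrive via the slow (vmbus/TAP) path of the same port. */
    return 0;
}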

The limitations are not explicitly mentioned in the documentation, but:
  - don't use the VF directly in the application
  - there is no support for bifurcation where some packets go to the kernel
    and some to DPDK
  - there is only very limited support for rte_flow, and only with the
    failsafe PMD (not the netvsc PMD); the rte_flow emulation in the TAP
    device supports just a few simple rules (see the sketch after this list)
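
On the rte_flow point, here is a minimal sketch of my own (not from this
thread) of the kind of simple rule the TAP emulation can typically accept,
assuming DPDK 24.11; anything much beyond basic patterns and the queue
action is likely to be rejected, so validate first:

#include <stdint.h>
#include <rte_flow.h>

/* Steer ingress IPv4/UDP packets to RX queue 0. */
static struct rte_flow *
udp_to_queue0(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* NULL spec: match any */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    /* Ask the PMD whether it can emulate this rule before creating it. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, err);
}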

Thanks Stephen, the above information was very instructive.
If I use the netvsc PMD with the latest DPDK, will my DPDK app get the non-IP
packets like ARP? Please confirm.
I quickly tried the netvsc PMD but still don't seem to be getting the ARP
packets.
When you mention "The failsafe PMD is what is exposed for application usage",
what does this mean? Are apps expected to use the failsafe PMD? Please
suggest.

Regards
-Prashant