/opt/vc/bin/dpdk-testpmd -l 1-3 -n 1 -a f030:00:02.0 -a 2334:00:02.0 -- --rxq=2 --txq=2 -i
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: 2334:00:02.0 (socket -1)
mlx5_net: No available register for sampler.
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: f030:00:02.0 (socket -1)
mlx5_net: No available register for sampler.
hn_vf_attach(): found matching VF port 0
hn_vf_attach(): found matching VF port 1
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=326912, size=2560, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 2 (socket 0)
Port 2: 00:0D:3A:42:F8:3C
Configuring Port 3 (socket 0)
Port 3: 00:0D:3A:42:FB:CD
Checking link statuses...
Done
testpmd> set verbose 3
Change verbose level from 0 to 3
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 4 streams:
RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
RX P=2/Q=1 (socket 0) -> TX P=3/Q=1 (socket 0) peer=02:00:00:00:00:03
RX P=3/Q=1 (socket 0) -> TX P=2/Q=1 (socket 0) peer=02:00:00:00:00:02
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 2: RX queue number: 2 Tx queue number: 2
Rx offloads=0x80000 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80000
TX queue: 0
TX desc=256 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 3: RX queue number: 2 Tx queue number: 2
Rx offloads=0x80000 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80000
TX queue: 0
TX desc=256 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 3/queue 1: sent 13 packets
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Send queue=0x1
ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Send queue=0x1
ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Send queue=0x1
ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
...........
port 2/queue 1: received 18 packets
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x1
ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x1
ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x1
ol_flags: RTE_MBUF_F_RX_RSS_HASH RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
src=00:0D:3A:42:F8:3C - dst=00:0D:3A:42:F8:3C - pool=mb_pool_0 - type=0x0800 - length=64 - nb_segs=1 - RSS hash=0xef1c29ff - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP -
Have you also performed the modification of txonly.c that Microsoft recommends on that page?
“When you're running the previous commands on a virtual machine, change IP_SRC_ADDR and IP_DST_ADDR in app/test-pmd/txonly.c to match the actual IP address of the virtual machines before you compile. Otherwise, the packets are dropped before reaching the forwarder.”
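For reference, the change that doc is talking about is a pair of hard-coded #defines in app/test-pmd/txonly.c that set the IPv4 addresses of the generated packets, in host byte order. A sketch of the edit, using hypothetical Azure private IPs (10.0.0.4 and 10.0.0.5 — substitute your VMs' actual addresses):

```c
#include <stdint.h>

/* app/test-pmd/txonly.c hard-codes the synthetic packets' IPv4 addresses
 * as host-byte-order #defines. Replace the stock values with your VMs'
 * real private IPs before building. The addresses below are examples. */
#define IP_SRC_ADDR ((10U << 24) | (0 << 16) | (0 << 8) | 4)  /* 10.0.0.4 */
#define IP_DST_ADDR ((10U << 24) | (0 << 16) | (0 << 8) | 5)  /* 10.0.0.5 */

/* Sanity-check helper: encode a dotted quad the same way the macros do. */
static inline uint32_t ip_u32(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
	return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
	       ((uint32_t)c << 8) | (uint32_t)d;
}
```

Without that edit, the Azure SDN drops the packets before they ever reach the other VM, which matches the symptom of tx_first traffic silently disappearing.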
Keep in mind that in Azure you do not have a true L2 network between two interfaces, even on the same subnet; everything is routed via the subnet gateway (x.x.x.1, MAC address 12:34:56:78:9a:bc). I would not expect an L2 forwarding app to behave the same way it would on a regular VM or on bare-metal hardware.
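Because of that routed-gateway behavior, one thing worth trying (this is a suggestion, not something I've verified on this exact setup) is overriding the default peer MACs so forwarded frames are addressed to the gateway instead of to the synthetic 02:00:00:00:00:xx peers shown in your log, using testpmd's set eth-peer command:

```
testpmd> set eth-peer 2 12:34:56:78:9a:bc
testpmd> set eth-peer 3 12:34:56:78:9a:bc
testpmd> start tx_first
```

Here 2 and 3 are the port IDs from your output; the MAC is the Azure gateway address mentioned above.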
I haven’t personally used testpmd in this way in Azure, but I’ve used dpdk-pktgen and it took some effort to get traffic to go to the right place.
Josh