DPDK usage discussions
* Implementing a simple TAP PMD to dpdk-vhost structure
@ 2023-09-06  1:56 Nicolson Ken (ニコルソン ケン)
  2023-09-06  6:07 ` David Marchand
  0 siblings, 1 reply; 9+ messages in thread
From: Nicolson Ken (ニコルソン ケン) @ 2023-09-06  1:56 UTC (permalink / raw)
  To: users

Hi all,

Using dpdk 22.11.2 on Ubuntu 22.04

I have a really simple use case, but I cannot find how to implement it. I've set up QEMU with all the required virtio support, so I just need to configure my Host OS-side. I want to send data from a PCAP file via tcpreplay from the Host to the Guest, so I use this command line:

$ sudo /home/integ/dpdk-stable-22.11.2/build/examples/dpdk-vhost -l 0-3 -n 4 --socket-mem 1024 --vdev 'net_tap0' -- --socket-file /tmp/sock0 --client -p 1

However, this fails with:

EAL: Detected CPU lcores: 20
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
VHOST_PORT: Failed to get VMDq info.
EAL: Error - exiting with code: 1
  Cause: Cannot initialize network ports

The offending code is from examples/vhost/main.c:

	if (dev_info.max_vmdq_pools == 0) {
		RTE_LOG(ERR, VHOST_PORT, "Failed to get VMDq info.\n");
		return -1;
	}

This is because the TAP PMD doesn't support VMDq pools.
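(For reference, one possible local workaround is to relax that guard in the example source and rebuild — an untested sketch, assuming the dpdk-stable-22.11.2 tree layout used in the command line above:)

```shell
# Untested sketch: neutralise the VMDq guard in the vhost example,
# then rebuild. Paths assume the 22.11.2 source tree used above.
cd /home/integ/dpdk-stable-22.11.2
sed -i 's/if (dev_info.max_vmdq_pools == 0) {/if (0) {/' examples/vhost/main.c
meson compile -C build
```

Whether the example then behaves sensibly on a PMD without VMDq pools is a separate question, so treat this purely as an experiment.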

Is there an easy way to get this to work?

Thanks,
Ken

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  1:56 Implementing a simple TAP PMD to dpdk-vhost structure Nicolson Ken (ニコルソン ケン)
@ 2023-09-06  6:07 ` David Marchand
  2023-09-06  7:15   ` Maxime Coquelin
  2023-09-06  7:52   ` Nicolson Ken (ニコルソン ケン)
  0 siblings, 2 replies; 9+ messages in thread
From: David Marchand @ 2023-09-06  6:07 UTC (permalink / raw)
  To: Nicolson Ken (ニコルソン ケン)
  Cc: users, Maxime Coquelin, Xia, Chenbo

Hello Ken,

On Wed, Sep 6, 2023 at 3:56 AM Nicolson Ken (ニコルソン ケン)
<ken.nicolson@jp.panasonic.com> wrote:
>
> Hi all,
>
> Using dpdk 22.11.2 on Ubuntu 22.04
>
> I have a really simple use case, but I cannot find how to implement it. I've set up QEMU with all the required virtio support, so I just need to configure my Host OS-side. I want to send data from a PCAP file via tcpreplay from the Host to the Guest, so I use this command line:
>
> $ sudo /home/integ/dpdk-stable-22.11.2/build/examples/dpdk-vhost -l 0-3 -n 4 --socket-mem 1024 --vdev 'net_tap0' -- --socket-file /tmp/sock0 --client -p 1
>
> However, this fails with:
>
> EAL: Detected CPU lcores: 20
> EAL: Detected NUMA nodes: 1
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> VHOST_PORT: Failed to get VMDq info.
> EAL: Error - exiting with code: 1
>   Cause: Cannot initialize network ports
>
> The offending code is from examples/vhost/main.c:
>
>         if (dev_info.max_vmdq_pools == 0) {
>                 RTE_LOG(ERR, VHOST_PORT, "Failed to get VMDq info.\n");
>                 return -1;
>         }
>
> This is because the TAP PMD doesn't support VMDq pools.
>
> Is there an easy way to get this to work?

This sounds strange to require VMDq support...
Copying Maxime and Chenbo who probably know better about this example code.

Alternatively, did you consider using testpmd with the vhost PMD instead?


-- 
David Marchand



* Re: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  6:07 ` David Marchand
@ 2023-09-06  7:15   ` Maxime Coquelin
  2023-09-06  8:28     ` Nicolson Ken (ニコルソン ケン)
  2023-09-06  7:52   ` Nicolson Ken (ニコルソン ケン)
  1 sibling, 1 reply; 9+ messages in thread
From: Maxime Coquelin @ 2023-09-06  7:15 UTC (permalink / raw)
  To: David Marchand,
	Nicolson Ken (ニコルソン
	ケン)
  Cc: users, Xia, Chenbo



On 9/6/23 08:07, David Marchand wrote:
> Hello Ken,
> 
> On Wed, Sep 6, 2023 at 3:56 AM Nicolson Ken (ニコルソン ケン)
> <ken.nicolson@jp.panasonic.com> wrote:
>>
>> Hi all,
>>
>> Using dpdk 22.11.2 on Ubuntu 22.04
>>
>> I have a really simple use case, but I cannot find how to implement it. I've set up QEMU with all the required virtio support, so I just need to configure my Host OS-side. I want to send data from a PCAP file via tcpreplay from the Host to the Guest, so I use this command line:
>>
>> $ sudo /home/integ/dpdk-stable-22.11.2/build/examples/dpdk-vhost -l 0-3 -n 4 --socket-mem 1024 --vdev 'net_tap0' -- --socket-file /tmp/sock0 --client -p 1
>>
>> However, this fails with:
>>
>> EAL: Detected CPU lcores: 20
>> EAL: Detected NUMA nodes: 1
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> VHOST_PORT: Failed to get VMDq info.
>> EAL: Error - exiting with code: 1
>>    Cause: Cannot initialize network ports
>>
>> The offending code is from examples/vhost/main.c:
>>
>>          if (dev_info.max_vmdq_pools == 0) {
>>                  RTE_LOG(ERR, VHOST_PORT, "Failed to get VMDq info.\n");
>>                  return -1;
>>          }
>>
>> This is because the TAP PMD doesn't support VMDq pools.
>>
>> Is there an easy way to get this to work?
> 
> This sounds strange to require VMDq support...
> Copying Maxime and Chenbo who probably know better about this example code.
> 
> Alternatively, did you consider using testpmd with the vhost PMD instead?
> 
> 

Maybe you could use testpmd application instead, with net_tap PMD and a
net_pcap PMD?

An alternative to net_tap could be to use Virtio-user PMD with Vhost-
kernel backend.
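(A rough sketch of the first suggestion — untested; the input pcap path is a placeholder, and the socket path matches the one used earlier in the thread:)

```shell
# Untested sketch: feed a pcap file into the guest through the vhost
# PMD, using the pcap PMD as the packet source instead of a tap device.
sudo dpdk-testpmd -l 0-3 -n 4 \
  --vdev 'net_pcap0,rx_pcap=/path/to/input.pcap,tx_pcap=/tmp/out.pcap' \
  --vdev 'net_vhost1,iface=/tmp/sock0,client=1' \
  -- -i
# then start forwarding at the prompt:  testpmd> start
```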

Maxime



* RE: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  6:07 ` David Marchand
  2023-09-06  7:15   ` Maxime Coquelin
@ 2023-09-06  7:52   ` Nicolson Ken (ニコルソン ケン)
  2023-09-06  8:41     ` David Marchand
  1 sibling, 1 reply; 9+ messages in thread
From: Nicolson Ken (ニコルソン ケン) @ 2023-09-06  7:52 UTC (permalink / raw)
  To: David Marchand; +Cc: users, Maxime Coquelin, Xia, Chenbo

Hi David,

> Alternatively, did you consider using testpmd with the vhost PMD instead?

I've tried that before, but as far as I can see from net/vhost/rte_eth_vhost.c it uses rte_vhost_enqueue_burst()/rte_vhost_dequeue_burst() to basically act as a loopback for the Guest OS. I use:

$ sudo dpdk-testpmd -l 0-3 -n 4 --vdev 'net_tap0' --vdev 'net_vhost1,iface=/tmp/sock0,client=1' -- -i

But if I feed data in using "tcpreplay -I dtap0 ...", "show port stats all" shows everything going into the TAP but nothing is forwarded to vhost.
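(For reference, the replay side looks roughly like this — the pcap file name is a placeholder; note that tcpreplay's primary interface flag is lowercase -i / --intf1:)

```shell
# Sketch: replay a capture into the tap netdev created by the TAP PMD.
sudo tcpreplay -i dtap0 input.pcap
```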

Thanks,
Ken


* RE: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  7:15   ` Maxime Coquelin
@ 2023-09-06  8:28     ` Nicolson Ken (ニコルソン ケン)
  2023-09-06  8:43       ` Maxime Coquelin
  0 siblings, 1 reply; 9+ messages in thread
From: Nicolson Ken (ニコルソン ケン) @ 2023-09-06  8:28 UTC (permalink / raw)
  To: Maxime Coquelin, David Marchand; +Cc: users, Xia, Chenbo

Hi Maxime,

> Maybe you could use testpmd application instead, with net_tap PMD and a net_pcap PMD?

I have an existing solution that uses the TAP PMD, which I then add to a standard kernel bridge; that gives me two-way communication with the Guest VM. However, I suspect the performance would be better if we used a DPDK virtio-based solution; otherwise I could just drop DPDK altogether and use tcpreplay etc. to access the bridge directly.

[Also, net_pcap only outputs to a kernel interface (or to a file, or null), as it uses libpcap APIs for Tx]

> An alternative to net_tap could be to use Virtio-user PMD with Vhost- kernel backend.

That uses KNI, which the documentation says is deprecated, and I'm not sure I want to start mucking about with kernel drivers.

Thanks,
Ken


* Re: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  7:52   ` Nicolson Ken (ニコルソン ケン)
@ 2023-09-06  8:41     ` David Marchand
  2023-09-07  6:22       ` Nicolson Ken (ニコルソン ケン)
  0 siblings, 1 reply; 9+ messages in thread
From: David Marchand @ 2023-09-06  8:41 UTC (permalink / raw)
  To: Nicolson Ken (ニコルソン ケン)
  Cc: users, Maxime Coquelin, Xia, Chenbo

On Wed, Sep 6, 2023 at 9:53 AM Nicolson Ken (ニコルソン ケン)
<ken.nicolson@jp.panasonic.com> wrote:
> > Alternatively, did you consider using testpmd with the vhost PMD instead?
>
> I've tried that before, but as far as I can see from net/vhost/rte_eth_vhost.c it uses rte_vhost_enqueue_burst()/rte_vhost_dequeue_burst() to basically act as a loopback for the Guest OS. I use:
>
> $ sudo dpdk-testpmd -l 0-3 -n 4 --vdev 'net_tap0' --vdev 'net_vhost1,iface=/tmp/sock0,client=1' -- -i
>
> But if I feed data in using "tcpreplay -I dtap0 ...", "show port stats all" shows everything going into the TAP but nothing is forwarded to vhost.

Well, pinging from a dtap0 netdev in the host to a virtio-net netdev
in a guest works for me.

testpmd> set verbose 3
Change verbose level from 0 to 3
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

port 0/queue 0: received 16 packets
  src=26:9B:E2:29:7E:C6 - dst=33:33:00:00:00:16 - pool=mb_pool_0 -
type=0x86dd - length=90 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV6_EXT
- sw ptype: L2_ETHER L3_IPV6_EXT  - l2_len=14 - l3_len=48 - Receive
queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
  src=26:9B:E2:29:7E:C6 - dst=33:33:00:00:00:16 - pool=mb_pool_0 -
type=0x86dd - length=90 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV6_EXT
- sw ptype: L2_ETHER L3_IPV6_EXT  - l2_len=14 - l3_len=48 - Receive
queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
  src=26:9B:E2:29:7E:C6 - dst=33:33:FF:29:7E:C6 - pool=mb_pool_0 -
type=0x86dd - length=86 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV6  - sw
ptype: L2_ETHER L3_IPV6  - l2_len=14 - l3_len=40 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN


Are you seeing the vhost port getting initialised in testpmd output?
How are you sure that nothing is forwarded?


-- 
David Marchand



* Re: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  8:28     ` Nicolson Ken (ニコルソン ケン)
@ 2023-09-06  8:43       ` Maxime Coquelin
  0 siblings, 0 replies; 9+ messages in thread
From: Maxime Coquelin @ 2023-09-06  8:43 UTC (permalink / raw)
  To: Nicolson Ken (ニコルソン
	ケン),
	David Marchand
  Cc: users, Xia, Chenbo



On 9/6/23 10:28, Nicolson Ken (ニコルソン ケン) wrote:
> Hi Maxime,
> 
>> Maybe you could use testpmd application instead, with net_tap PMD and a net_pcap PMD?
> 
> I have an existing solution that uses the TAP PMD which I then add to a standard kernel bridge that allows me to have two-way communication with the Guest VM, but I suspect the performance would be better if we were to use a DPDK virtio-based solution; otherwise I could just drop DPDK all together and use tcpreplay, etc to directly access the bridge.

Ha yes, I misread the initial thread.
You should indeed use testpmd with the Vhost PMD and net_pcap; I'm
pretty sure I've done that in the past.

> 
> [Also net_pcap only outputs to kernel interface (or to a file or null) as it uses libpcap APIs for Tx]
> 
>> An alternative to net_tap could be to use Virtio-user PMD with Vhost- kernel backend.
> 
> That uses KNI, which the documentation says is deprecated, and I'm not sure I want to start mucking about with kernel drivers.

Virtio-user PMD with the kernel backend is an alternative solution to
KNI; no out-of-tree driver is needed. But that should not be the way to
go for your use case.
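(For completeness, that alternative looks roughly like this — an untested sketch using the virtio-user PMD backed by the kernel vhost-net device; queue parameters are illustrative:)

```shell
# Untested sketch: a virtio-user port backed by kernel vhost-net.
# This creates a tap netdev on the host side, as a KNI replacement.
sudo modprobe vhost-net
sudo dpdk-testpmd -l 0-3 -n 4 \
  --vdev 'virtio_user0,path=/dev/vhost-net,queues=1,queue_size=1024' \
  -- -i
```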

Maxime

* RE: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-06  8:41     ` David Marchand
@ 2023-09-07  6:22       ` Nicolson Ken (ニコルソン ケン)
  2023-09-07  7:21         ` Maxime Coquelin
  0 siblings, 1 reply; 9+ messages in thread
From: Nicolson Ken (ニコルソン ケン) @ 2023-09-07  6:22 UTC (permalink / raw)
  To: David Marchand; +Cc: users, Maxime Coquelin, Xia, Chenbo

Hi David,

Hmm, maybe the issue is at my end - to configure QEMU, I followed the tutorial at https://www.redhat.com/en/blog/hands-vhost-user-warm-welcome-dpdk

> Are you seeing the vhost port getting initialised in testpmd output?

Yes, I get a lot of VHOST_CONFIG messages, ending with:
VHOST_CONFIG: (/tmp/sock0) virtio is now ready for processing.
Rx csum will be done in SW, may impact performance.

***
UPDATE: I've been reading more of the manual while doing the troubleshooting below, and I think I've found a major issue while checking the Guest OS.
According to https://doc.dpdk.org/guides-21.11/linux_gsg/linux_drivers.html

dmesg | tail
...
[ 1297.875090] vfio-pci: probe of 0000:31:00.0 failed with error -22

I get the above in the Guest when trying to do devbind, and I also get this on both Host and Guest:

cat /boot/config-$(uname -r) | grep NOIOMMU
CONFIG_VFIO_NOIOMMU=y

Should that actually be "N"? Is "is not set" equivalent to having no IOMMU support? I think I should follow the grubby settings given on that Red Hat page. If IOMMU support is off, I would guess that could very well be the source of all my issues.

Back to the previous contents:
***

Now I try your example. In another terminal on the Host, I use "ping -v -I dtap0 -6 fe80::5054:ff:fe01:7d00", which is the address of the interface created in the Guest OS by QEMU (hmm, my <mac address=> setting seems to have been ignored), after doing "sudo ip link set enp9s0 up" there.

First, ping output (if I don't use "-I dtap0", nothing happens):
$ ping -v -I dtap0 -6 fe80::5054:ff:fe01:7d00
ping: Warning: source address might be selected on device other than: dtap0
PING fe80::5054:ff:fe01:7d00(fe80::5054:ff:fe01:7d00) from :: dtap0: 56 data bytes
From fe80::20d4:2dff:fe67:3768%dtap0 icmp_seq=1 Destination unreachable: Address unreachable
From fe80::20d4:2dff:fe67:3768%dtap0 icmp_seq=2 Destination unreachable: Address unreachable
From fe80::20d4:2dff:fe67:3768%dtap0 icmp_seq=3 Destination unreachable: Address unreachable
^C
--- fe80::5054:ff:fe01:7d00 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3055ms

testpmd> set verbose 3
Change verbose level from 0 to 3
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

Then for each of the 6 attempted pings I see:

port 0/queue 0: received 1 packets
  src=22:D4:2D:67:37:68 - dst=33:33:FF:01:7D:00 - pool=mb_pool_0 - type=0x86dd - length=86 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV6  - sw ptype: L2_ETHER L3_IPV6  - l2_len=14 - l3_len=40 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN 
port 1/queue 0: sent 1 packets
  src=22:D4:2D:67:37:68 - dst=33:33:FF:01:7D:00 - pool=mb_pool_0 - type=0x86dd - length=86 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV6  - sw ptype: L2_ETHER L3_IPV6  - l2_len=14 - l3_len=40 - Send queue=0x0
  ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM

Thanks for the help,
Ken


* Re: Implementing a simple TAP PMD to dpdk-vhost structure
  2023-09-07  6:22       ` Nicolson Ken (ニコルソン ケン)
@ 2023-09-07  7:21         ` Maxime Coquelin
  0 siblings, 0 replies; 9+ messages in thread
From: Maxime Coquelin @ 2023-09-07  7:21 UTC (permalink / raw)
  To: Nicolson Ken (ニコルソン
	ケン),
	David Marchand
  Cc: users, Xia, Chenbo



On 9/7/23 08:22, Nicolson Ken (ニコルソン ケン) wrote:
> Hi David,
> 
> Hmm, maybe the issue is at my end - to configure QEMU, I followed the tutorial at https://www.redhat.com/en/blog/hands-vhost-user-warm-welcome-dpdk
> 
>> Are you seeing the vhost port getting initialised in testpmd output?
> 
> Yes, I get a lot of VHOST_CONFIG messages, ending with:
> VHOST_CONFIG: (/tmp/sock0) virtio is now ready for processing.
> Rx csum will be done in SW, may impact performance.
> 
> ***
> UPDATE: I've been reading more of the manual while doing the troubleshooting below, and I think I've found a major issue while checking the Guest OS.
> According to https://doc.dpdk.org/guides-21.11/linux_gsg/linux_drivers.html
> 
> dmesg | tail
> ...
> [ 1297.875090] vfio-pci: probe of 0000:31:00.0 failed with error -22
> 
> I get the above in the Guest when trying to do devbind, and I also get this on both Host and Guest:
> 
> cat /boot/config-$(uname -r) | grep NOIOMMU
> CONFIG_VFIO_NOIOMMU=y
> 
> Should that actually be "N"? Does "is not set" equal to no IOMMU? I should follow the grubby settings given on that RedHat page, I think. If that is off, then I would guess that that could very well be a source of all my issues.

CONFIG_VFIO_NOIOMMU=y is valid, but it only builds noiommu support;
what you need is to enable it at probe time:

# modprobe vfio enable_unsafe_noiommu_mode=y
# cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
Y

But I understood you wanted to inject packets into the guest kernel. If
that is the case, this should not be necessary: the Virtio device has to
be bound to the kernel virtio-net driver in the guest, not to VFIO/the
Virtio PMD.
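(A quick way to check and restore that binding in the guest — a sketch; the PCI address is the one from the dmesg line quoted above:)

```shell
# Sketch: in the guest, put the virtio NIC back on the kernel driver
# instead of vfio-pci, then verify.
sudo dpdk-devbind.py --bind=virtio-pci 0000:31:00.0
sudo dpdk-devbind.py --status
```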

Maxime

