DPDK usage discussions
* [dpdk-users] TestPMD testing with VMX3NET Devices...
@ 2018-09-06 16:39 Jamie Fargen
  2018-09-07 13:08 ` Rami Rosen
  2018-09-07 17:24 ` Emre Eraltan
  0 siblings, 2 replies; 9+ messages in thread
From: Jamie Fargen @ 2018-09-06 16:39 UTC (permalink / raw)
  To: users

Hello-

I would like to do some performance analysis using testpmd on a RHEL7 VMware
guest with VMXNET3 network devices. Similar tests have been performed
on RHEL7 KVM guests using VirtIO network devices, but the same process
does not work with VMXNET3 network interfaces.

dpdk-stable-17.11 has been compiled, and it looks like the devices are
properly bound to the uio driver, but when testpmd is started it is unable
to locate the devices.

This is the basic process by which the uio module is loaded, the devices are
bound to the driver, and testpmd is started.

[root@redacted ~]# cat startTestPmd17.sh
#!/bin/bash -x
modprobe uio
insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
/root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
/root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
mount -t hugetlbfs nodev /mnt/huge
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
--eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
--nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192

EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: No probed ethernet devices
Set mac packet forwarding mode
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
socket=0
Done
testpmd>
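[Editorial note: as a sanity check on the script above, the hugepage reservation works out as follows; this arithmetic sketch is not from the original thread.]

```shell
# 1024 pages of 2048 kB each = 2048 MB (2 GiB) backing the mbuf pool
pages=1024
page_kb=2048
total_mb=$(( pages * page_kb / 1024 ))
echo "${total_mb} MB of hugepage memory reserved"
```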


There are network devices bound to the DPDK-compatible uio driver.
[root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3

Network devices using kernel driver
===================================
0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3
unused=igb_uio *Active*


If anyone has any spare cycles to help me solve this issue it would be
greatly appreciated.


-- 
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-06 16:39 [dpdk-users] TestPMD testing with VMX3NET Devices Jamie Fargen
@ 2018-09-07 13:08 ` Rami Rosen
  2018-09-07 13:22   ` Jamie Fargen
  2018-09-07 17:24 ` Emre Eraltan
  1 sibling, 1 reply; 9+ messages in thread
From: Rami Rosen @ 2018-09-07 13:08 UTC (permalink / raw)
  To: jfargen; +Cc: users

Hi Jamie,
Can you run this with --log-level=8 -w 0000:0b:00.0 -w 0000:13:00.0
and post the results here?

(I mean like the following:
testpmd -l 1,2,3 -n 1 --log-level=8 -w 0000:0b:00.0 -w 0000:13:00.0 --
--disable-hw-vlan --forward-mode=mac
--eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
--nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192)

Also, to be on the safe side, make sure that
in build/.config you have
CONFIG_RTE_LIBRTE_VMXNET3_PMD=y
(it should be by default for dpdk-stable-17.11.4).

Regards,
Rami Rosen

On Thu, 6 Sep 2018 at 19:39, Jamie Fargen <jfargen@redhat.com> wrote:
>
> Hello-
>
> Would like to do some performance analysis using testpmd on a RHEL7 VMWare
> guest using a VMX3NET network devices. Similar tests have been performed
> using RHEL7 KVM guests using VirtIO network devices, but the same process
> does not work with VMX3NET network interfaces.
>
> The dpdk-stable-17.11 has been compiled and it looks like the devices are
> properly bound to the uio driver, but when testpmd is started it is unable
> to locate the devices.
>
> This is the basic process of how the uio module is loaded, the devices our
> bound to the driver, and testpmd is started.
>
> [root@redacted ~]# cat startTestPmd17.sh
> #!/bin/bash -x
> modprobe uio
> insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
> mount -t hugetlbfs nodev /mnt/huge
> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
> --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
> --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192
>
> EAL: Detected 4 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: No probed ethernet devices
> Set mac packet forwarding mode
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
> socket=0
> Done
> testpmd>
>
>
> There are network devices bound to DPDK-compatible uio driver.
> [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
> 0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>
> Network devices using kernel driver
> ===================================
> 0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3
> unused=igb_uio *Active*
>
>
> If anyone has any spare cycles to help me solve this issue it would be
> greatly appreciated.
>
>
> --
> Jamie Fargen
> Senior Consultant
> jfargen@redhat.com
> 813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 13:08 ` Rami Rosen
@ 2018-09-07 13:22   ` Jamie Fargen
  2018-09-07 13:52     ` Rami Rosen
  0 siblings, 1 reply; 9+ messages in thread
From: Jamie Fargen @ 2018-09-07 13:22 UTC (permalink / raw)
  To: Rami Rosen; +Cc: users

Rami-

Attached are the results of running testpmd with loglevel=8.

The setting CONFIG_RTE_LIBRTE_VMXNET3_PMD=y seems to be set.
# grep 'CONFIG_RTE_LIBRTE_VMXNET3_PMD=y' build/.config
CONFIG_RTE_LIBRTE_VMXNET3_PMD=y

Should a library be built, such as
/usr/lib64/dpdk-pmds/librte_pmd_vmxnet3.so.1? If so, I don't see it.
# find /usr/lib64/dpdk-pmds
/usr/lib64/dpdk-pmds
/usr/lib64/dpdk-pmds/librte_pmd_bnxt.so.2
/usr/lib64/dpdk-pmds/librte_pmd_e1000.so.1
/usr/lib64/dpdk-pmds/librte_pmd_enic.so.1
/usr/lib64/dpdk-pmds/librte_pmd_failsafe.so.1
/usr/lib64/dpdk-pmds/librte_pmd_i40e.so.2
/usr/lib64/dpdk-pmds/librte_pmd_ixgbe.so.2
/usr/lib64/dpdk-pmds/librte_pmd_mlx4.so.1
/usr/lib64/dpdk-pmds/librte_pmd_mlx5.so.1
/usr/lib64/dpdk-pmds/librte_pmd_nfp.so.1
/usr/lib64/dpdk-pmds/librte_pmd_qede.so.1
/usr/lib64/dpdk-pmds/librte_pmd_ring.so.2
/usr/lib64/dpdk-pmds/librte_pmd_softnic.so.1
/usr/lib64/dpdk-pmds/librte_pmd_vhost.so.2
/usr/lib64/dpdk-pmds/librte_pmd_virtio.so.1

Regards,
-Jamie

On Fri, Sep 7, 2018 at 9:08 AM, Rami Rosen <roszenrami@gmail.com> wrote:

> Hi Jamie,
> Can you run this with --log-level=8 -w 0000:0b:00.0 -w 0000:13:00.0
> and post the results here?
>
> (I mean like the following:
> testpmd -l 1,2,3 -n 1 --log-level=8 -w 0000:0b:00.0 -w 0000:13:00.0 --
> --disable-hw-vlan --forward-mode=mac
> --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
> --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192)
>
>  and also to be on the safe side, make sure that
> in build/.config you have
> CONFIG_RTE_LIBRTE_VMXNET3_PMD=y
> (It should be so by default for dpdk-stable-17.11.4)
>
> Regards,
> Rami Rosen
>
> On Thu, 6 Sep 2018 at 19:39, Jamie Fargen <jfargen@redhat.com> wrote:
> >
> > Hello-
> >
> > Would like to do some performance analysis using testpmd on a RHEL7
> VMWare
> > guest using a VMX3NET network devices. Similar tests have been performed
> > using RHEL7 KVM guests using VirtIO network devices, but the same process
> > does not work with VMX3NET network interfaces.
> >
> > The dpdk-stable-17.11 has been compiled and it looks like the devices are
> > properly bound to the uio driver, but when testpmd is started it is
> unable
> > to locate the devices.
> >
> > This is the basic process of how the uio module is loaded, the devices
> our
> > bound to the driver, and testpmd is started.
> >
> > [root@redacted ~]# cat startTestPmd17.sh
> > #!/bin/bash -x
> > modprobe uio
> > insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
> > /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
> > /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
> > mount -t hugetlbfs nodev /mnt/huge
> > echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-
> > 2048kB/nr_hugepages
> > testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
> > --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
> > --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192
> >
> > EAL: Detected 4 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: No probed ethernet devices
> > Set mac packet forwarding mode
> > Interactive-mode selected
> > USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
> > socket=0
> > Done
> > testpmd>
> >
> >
> > There are network devices bound to DPDK-compatible uio driver.
> > [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s
> >
> > Network devices using DPDK-compatible driver
> > ============================================
> > 0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio
> unused=vmxnet3
> > 0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio
> unused=vmxnet3
> >
> > Network devices using kernel driver
> > ===================================
> > 0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3
> > unused=igb_uio *Active*
> >
> >
> > If anyone has any spare cycles to help me solve this issue it would be
> > greatly appreciated.
> >
> >
> > --
> > Jamie Fargen
> > Senior Consultant
> > jfargen@redhat.com
> > 813-817-4430
>



-- 
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 13:22   ` Jamie Fargen
@ 2018-09-07 13:52     ` Rami Rosen
  2018-09-07 14:38       ` Jamie Fargen
  0 siblings, 1 reply; 9+ messages in thread
From: Rami Rosen @ 2018-09-07 13:52 UTC (permalink / raw)
  To: jfargen; +Cc: users

Hi, Jamie,
By default, DPDK (including stable-17.11.4) builds static
libs (*.a), not shared libs (*.so).

What does cat build/.config | grep -i shared show?
If you have CONFIG_RTE_BUILD_SHARED_LIB=y, you are using shared libs;
in that case a VMXNET3 shared object (librte_pmd_vmxnet3_uio.so) should be
generated in the build, and if the testpmd app you use was indeed built in
a tree where CONFIG_RTE_BUILD_SHARED_LIB is "y",
then librte_pmd_vmxnet3_uio.so should be under the /usr/lib64/... folder
so it will work.
It seems the best thing is to rebuild the DPDK tree (after make
clean) and reinstall it.
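[Editorial note: the config change described above is a one-line edit; here is a hedged sketch of the flip, demonstrated on a temporary file since the real one lives at build/.config in the DPDK tree.]

```shell
# Flip CONFIG_RTE_BUILD_SHARED_LIB from n to y, as would be done in build/.config
cfg=$(mktemp)
echo 'CONFIG_RTE_BUILD_SHARED_LIB=n' > "$cfg"
sed -i 's/^CONFIG_RTE_BUILD_SHARED_LIB=n$/CONFIG_RTE_BUILD_SHARED_LIB=y/' "$cfg"
cat "$cfg"   # CONFIG_RTE_BUILD_SHARED_LIB=y
rm -f "$cfg"
# In the real tree, follow with: make clean && make && make install
```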

Regards,
Rami Rosen

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 13:52     ` Rami Rosen
@ 2018-09-07 14:38       ` Jamie Fargen
  2018-09-07 15:20         ` Jamie Fargen
  0 siblings, 1 reply; 9+ messages in thread
From: Jamie Fargen @ 2018-09-07 14:38 UTC (permalink / raw)
  To: Rami Rosen; +Cc: users

Rami-

The output shows that CONFIG_RTE_BUILD_SHARED_LIB is set to no.
# cat build/.config | grep -i shared CONFIG_RTE_BUILD_SHARED_LIB
CONFIG_RTE_BUILD_SHARED_LIB=n

I will change that to CONFIG_RTE_BUILD_SHARED_LIB=y and reinstall.

Regards,

-Jamie

On Fri, Sep 7, 2018 at 9:52 AM, Rami Rosen <roszenrami@gmail.com> wrote:

> Hi, Jamie,
> By default, building in DPDK (including in stable-17.11.4) is by creating
> static
> libs (*.a) and not shared libs (*so).
>
> What does cat build/.config | grep -i shared show ?
> If you have CONFIG_RTE_BUILD_SHARED_LIB=y, this means that
> you use shared libs, and in such a case, a VMXNET3 *.so should be
> generated in the build
> (librte_pmd_vmxnet3_uio.so), and if indeed the testpmd app you use was
> built in
> a tree where CONFIG_RTE_BUILD_SHARED_LIB is "y",
> then the librte_pmd_vmxnet3_uio.so should be under /usr/lib64/... folder
> so it will work.
> It seems that the best thing is to rebuild the DPKD tree (after make
> clean) and reinstall it.
>
> Regards,
> Rami Rosen
>



-- 
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 14:38       ` Jamie Fargen
@ 2018-09-07 15:20         ` Jamie Fargen
  2018-09-07 15:44           ` Rami Rosen
  0 siblings, 1 reply; 9+ messages in thread
From: Jamie Fargen @ 2018-09-07 15:20 UTC (permalink / raw)
  To: Rami Rosen; +Cc: users

Rami-

OK, can you clear something else up? Should I use the vfio-pci or the
igb_uio driver/kernel module?

Regards,
-Jamie

On Fri, Sep 7, 2018 at 10:38 AM, Jamie Fargen <jfargen@redhat.com> wrote:

> Rami-
>
> The output show that that CONFIG_RTE_BUILD_SHARED_LIB is set to no.
> # cat build/.config | grep -i shared CONFIG_RTE_BUILD_SHARED_LIB
> CONFIG_RTE_BUILD_SHARED_LIB=n
>
> I wil l change that to CONFIG_RTE_BUILD_SHARED_LIB=y and reinstall.
>
> Regards,
>
> -Jamie
>
> On Fri, Sep 7, 2018 at 9:52 AM, Rami Rosen <roszenrami@gmail.com> wrote:
>
>> Hi, Jamie,
>> By default, building in DPDK (including in stable-17.11.4) is by creating
>> static
>> libs (*.a) and not shared libs (*so).
>>
>> What does cat build/.config | grep -i shared show ?
>> If you have CONFIG_RTE_BUILD_SHARED_LIB=y, this means that
>> you use shared libs, and in such a case, a VMXNET3 *.so should be
>> generated in the build
>> (librte_pmd_vmxnet3_uio.so), and if indeed the testpmd app you use was
>> built in
>> a tree where CONFIG_RTE_BUILD_SHARED_LIB is "y",
>> then the librte_pmd_vmxnet3_uio.so should be under /usr/lib64/... folder
>> so it will work.
>> It seems that the best thing is to rebuild the DPKD tree (after make
>> clean) and reinstall it.
>>
>> Regards,
>> Rami Rosen
>>
>
>
>
> --
> Jamie Fargen
> Senior Consultant
> jfargen@redhat.com
> 813-817-4430
>



-- 
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 15:20         ` Jamie Fargen
@ 2018-09-07 15:44           ` Rami Rosen
  0 siblings, 0 replies; 9+ messages in thread
From: Rami Rosen @ 2018-09-07 15:44 UTC (permalink / raw)
  To: jfargen; +Cc: users

Hi Jamie,
>Ok, can you clear something else up. Should I use the vfio-pci or the igb_uio driver/kernel module?

Sure.
First, there is a third option: binding with uio_pci_generic,
which is a generic kernel module.
When you bind with igb_uio, an entry called max_vfs is created
under the Linux PCI device's sysfs entry.
Echoing an integer into max_vfs generates DPDK VFs (if the DPDK
PMD supports VFs; Intel, Mellanox,
and other vendors' NICs do support it), as opposed to echoing into
sriov_numvfs, which
generates VFs handled by the kernel driver.
OTOH, you don't have max_vfs with vfio-pci or uio_pci_generic.

It says in the DPDK doc:
"If UEFI secure boot is enabled, the Linux kernel may disallow the use
of UIO on the system. Therefore, devices for use by DPDK should be
bound to the vfio-pci kernel module rather than igb_uio or
uio_pci_generic.'
see
http://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#vfio

see also:
"5.4. Binding and Unbinding Network Ports to/from the Kernel Modules":
http://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#linux-gsg-binding-kernel
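[Editorial note: for reference, the vfio-pci variant of the binding step from earlier in the thread would look roughly like the sketch below. Device addresses are taken from the thread and the devbind path is assumed; the commands are echoed rather than executed, since they require root and a built DPDK tree.]

```shell
# Sketch: bind the two VMXNET3 ports to vfio-pci instead of igb_uio
bind_vfio() {
    echo "modprobe vfio-pci"
    echo "/root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b vfio-pci 0000:0b:00.0"
    echo "/root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b vfio-pci 0000:13:00.0"
}
bind_vfio
```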

Hope this helps to clarify things,

Regards,
Rami Rosen

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-06 16:39 [dpdk-users] TestPMD testing with VMX3NET Devices Jamie Fargen
  2018-09-07 13:08 ` Rami Rosen
@ 2018-09-07 17:24 ` Emre Eraltan
  2018-09-07 18:06   ` Jamie Fargen
  1 sibling, 1 reply; 9+ messages in thread
From: Emre Eraltan @ 2018-09-07 17:24 UTC (permalink / raw)
  To: Jamie Fargen; +Cc: users

Hi Jamie,

I have recently used testpmd on VMware ESX 5.5 with an Ubuntu 16.04 VM, and
it was working with vmxnet3 devices.

Did you try lowering your RXD and TXD sizes?
I am not sure VMXNET3 supports your values "--rxd=8192 --txd=8192".

Try removing these 2 parameters to use the default sizes or lower your
values.
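[Editorial note: the per-queue maxima vmxnet3 reports later in this message (Max possible number of RXDs per queue: 4096) bear this suspicion out; the comparison, sketched:]

```shell
# --rxd/--txd requested in the thread vs. the per-queue maxima vmxnet3 advertises
req_rxd=8192; max_rxd=4096
req_txd=8192; max_txd=4096
[ "$req_rxd" -le "$max_rxd" ] || echo "rxd=$req_rxd exceeds the vmxnet3 max of $max_rxd"
[ "$req_txd" -le "$max_txd" ] || echo "txd=$req_txd exceeds the vmxnet3 max of $max_txd"
```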

From my history:
$ wget http://fast.dpdk.org/rel/dpdk-17.11.4.tar.xz
$ tar -xf dpdk-17.11.4.tar.xz
$ cd dpdk-stable-17.11.4
$ make config T=x86_64-native-linuxapp-gcc
$ make ## had to install libnuma-dev package
$ modprobe uio
$ insmod ./build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
$ ./usertools/dpdk-devbind.py -b igb_uio 03:00.0
$ ./usertools/dpdk-devbind.py -b igb_uio 0b:00.0
$ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
$ mkdir -p /mnt/huge
$ mount -t hugetlbfs nodev /mnt/huge
$ testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
--eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
--nb-cores=2 --rxq=8 --txq=8

testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 00:0C:29:06:04:24
Driver name: net_vmxnet3
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
Supported flow types:
  ipv4
  ipv4-tcp
  ipv6
  ipv6-tcp
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Max possible TX queues: 8
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 512
TXDs number alignment: 1

********************* Infos for port 1  *********************
MAC address: 00:0C:29:06:04:2E
Driver name: net_vmxnet3
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
Supported flow types:
  ipv4
  ipv4-tcp
  ipv6
  ipv6-tcp
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Max possible TX queues: 8
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 512
TXDs number alignment: 1

Hope it helps.

Regards,
Emre

On Thu, Sep 6, 2018 at 9:39 AM, Jamie Fargen <jfargen@redhat.com> wrote:

> Hello-
>
> Would like to do some performance analysis using testpmd on a RHEL7 VMWare
> guest using a VMX3NET network devices. Similar tests have been performed
> using RHEL7 KVM guests using VirtIO network devices, but the same process
> does not work with VMX3NET network interfaces.
>
> The dpdk-stable-17.11 has been compiled and it looks like the devices are
> properly bound to the uio driver, but when testpmd is started it is unable
> to locate the devices.
>
> This is the basic process of how the uio module is loaded, the devices our
> bound to the driver, and testpmd is started.
>
> [root@redacted ~]# cat startTestPmd17.sh
> #!/bin/bash -x
> modprobe uio
> insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
> mount -t hugetlbfs nodev /mnt/huge
> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
> --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
> --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192
>
> EAL: Detected 4 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: No probed ethernet devices
> Set mac packet forwarding mode
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
> socket=0
> Done
> testpmd>
>
>
> There are network devices bound to DPDK-compatible uio driver.
> [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
> 0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>
> Network devices using kernel driver
> ===================================
> 0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3
> unused=igb_uio *Active*
>
>
> If anyone has any spare cycles to help me solve this issue it would be
> greatly appreciated.
>
>
> --
> Jamie Fargen
> Senior Consultant
> jfargen@redhat.com
> 813-817-4430
>



-- 
Emre Eraltan
Lead Solutions Architect

Direct (US): +1 408 508 6733
Mobile (US): +1 408 329 0271

2975 Scott Blvd, Suite 115
Santa Clara 95054 CA

Visit our web site: http://www.6wind.com
Join us on Twitter: http://twitter.com/6windsoftware
Join us on LinkedIn: https://www.linkedin.com/company/6wind

<https://www.linkedin.com/company/6wind>


================================================================================
This e-mail message, including any attachments, is for the sole use of
the intended recipient(s) and contains information that is
confidential and proprietary to 6WIND. All unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply e-mail and destroy all
copies of the original message.
Ce courriel ainsi que toutes les pièces jointes, est uniquement
destiné à son ou ses destinataires. Il contient des informations
confidentielles qui sont la propriété de 6WIND. Toute révélation,
distribution ou copie des informations qu'il contient est strictement
interdite. Si vous avez reçu ce message par erreur, veuillez
immédiatement le signaler à l'émetteur et détruire toutes les données
reçues
================================================================================

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
  2018-09-07 17:24 ` Emre Eraltan
@ 2018-09-07 18:06   ` Jamie Fargen
  0 siblings, 0 replies; 9+ messages in thread
From: Jamie Fargen @ 2018-09-07 18:06 UTC (permalink / raw)
  To: Emre Eraltan; +Cc: users

Emre-

Things are looking better, but this is an ESX 6.x hypervisor with a CentOS 7.5
guest, and it looks like vmxnet3 is reporting an incompatible hardware
version. I will try to get more details about the hypervisor.

# insmod
/root/dpdk-stable-17.11.4/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
# ./dpdk-devbind.py -b igb_uio 0b:00.0
# ./dpdk-devbind.py -b igb_uio 13:00.0
# echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# mount -t hugetlbfs nodev /mnt/huge
# testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
--eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
--nb-cores=2 --rxq=8 --txq=8
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
PMD: eth_vmxnet3_dev_init(): Incompatible hardware version: 0
EAL: Requested device 0000:0b:00.0 cannot be used
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
PMD: eth_vmxnet3_dev_init(): Incompatible hardware version: 0
EAL: Requested device 0000:13:00.0 cannot be used
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
EAL: No probed ethernet devices
Set mac packet forwarding mode
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
socket=0
Done
testpmd> show port info all
testpmd>


On Fri, Sep 7, 2018 at 1:24 PM, Emre Eraltan <emre.eraltan@6wind.com> wrote:

> Hi Jamie,
>
> I have recently used testpmd on VMware ESX 5.5 with a Ubuntu 16.04 VM -
> and it was working with vmxnet3 devices.
>
> Did you try lowering your RXD and TXD sizes?
> I am not sure VMXNET3 supports your values "--rxd=8192 --txd=8192".
>
> Try removing these 2 parameters to use the default sizes or lower your
> values.
>
> From my history:
> $ wget http://fast.dpdk.org/rel/dpdk-17.11.4.tar.xz
> $ tar -xf dpdk-17.11.4.tar.xz
> $ cd dpdk-stable-17.11.4
> $ make config T=x86_64-native-linuxapp-gcc
> $ make ## had to install libnuma-dev package
> $ modprobe uio
> $ insmod ./build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
> $ ./usertools/dpdk-devbind.py -b igb_uio 03:00.0
> $ ./usertools/dpdk-devbind.py -b igb_uio 0b:00.0
> $ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/
> nr_hugepages
> $ mkdir -p /mnt/huge
> $ mount -t hugetlbfs nodev /mnt/huge
> $ testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
> --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
> --nb-cores=2 --rxq=8 --txq=8
>
> testpmd> show port info all
>
> ********************* Infos for port 0  *********************
> MAC address: 00:0C:29:06:04:24
> Driver name: net_vmxnet3
> Connect to socket: 0
> memory allocation on the socket: 0
> Link status: up
> Link speed: 10000 Mbps
> Link duplex: full-duplex
> MTU: 1500
> Promiscuous mode: enabled
> Allmulticast mode: disabled
> Maximum number of MAC addresses: 1
> Maximum number of MAC addresses of hash filtering: 0
> VLAN offload:
>   strip off
>   filter off
>   qinq(extend) off
> Supported flow types:
>   ipv4
>   ipv4-tcp
>   ipv6
>   ipv6-tcp
> Max possible RX queues: 16
> Max possible number of RXDs per queue: 4096
> Min possible number of RXDs per queue: 128
> RXDs number alignment: 1
> Max possible TX queues: 8
> Max possible number of TXDs per queue: 4096
> Min possible number of TXDs per queue: 512
> TXDs number alignment: 1
>
> ********************* Infos for port 1  *********************
> MAC address: 00:0C:29:06:04:2E
> Driver name: net_vmxnet3
> Connect to socket: 0
> memory allocation on the socket: 0
> Link status: up
> Link speed: 10000 Mbps
> Link duplex: full-duplex
> MTU: 1500
> Promiscuous mode: enabled
> Allmulticast mode: disabled
> Maximum number of MAC addresses: 1
> Maximum number of MAC addresses of hash filtering: 0
> VLAN offload:
>   strip off
>   filter off
>   qinq(extend) off
> Supported flow types:
>   ipv4
>   ipv4-tcp
>   ipv6
>   ipv6-tcp
> Max possible RX queues: 16
> Max possible number of RXDs per queue: 4096
> Min possible number of RXDs per queue: 128
> RXDs number alignment: 1
> Max possible TX queues: 8
> Max possible number of TXDs per queue: 4096
> Min possible number of TXDs per queue: 512
> TXDs number alignment: 1
>
> Hope it helps.
>
> Regards,
> Emre
>
> On Thu, Sep 6, 2018 at 9:39 AM, Jamie Fargen <jfargen@redhat.com> wrote:
>
>> Hello-
>>
>> Would like to do some performance analysis using testpmd on a RHEL7 VMWare
>> guest using a VMX3NET network devices. Similar tests have been performed
>> using RHEL7 KVM guests using VirtIO network devices, but the same process
>> does not work with VMX3NET network interfaces.
>>
>> The dpdk-stable-17.11 has been compiled and it looks like the devices are
>> properly bound to the uio driver, but when testpmd is started it is unable
>> to locate the devices.
>>
>> This is the basic process of how the uio module is loaded, the devices our
>> bound to the driver, and testpmd is started.
>>
>> [root@redacted ~]# cat startTestPmd17.sh
>> #!/bin/bash -x
>> modprobe uio
>> insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
>> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
>> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
>> mount -t hugetlbfs nodev /mnt/huge
>> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-
>> 2048kB/nr_hugepages
>> testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac
>> --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i
>> --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192
>>
>> EAL: Detected 4 lcore(s)
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: Probing VFIO support...
>> EAL: No probed ethernet devices
>> Set mac packet forwarding mode
>> Interactive-mode selected
>> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
>> socket=0
>> Done
>> testpmd>
>>
>>
>> There are network devices bound to DPDK-compatible uio driver.
>> [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>> 0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3
>> unused=igb_uio *Active*
>>
>>
>> If anyone has any spare cycles to help me solve this issue it would be
>> greatly appreciated.
>>
>>
>> --
>> Jamie Fargen
>> Senior Consultant
>> jfargen@redhat.com
>> 813-817-4430
>>
>
>
>
> --
>
> --
> Emre Eraltan
> Lead Solutions Architect
>
> Direct (US): +1 408 508 6733
> Mobile (US): +1 408 329 0271
>
> 2975 Scott Blvd, Suite 115
> Santa Clara 95054 CA
>
> Visit our web site: http://www.6wind.com
> Join us on Twitter: http://twitter.com/6windsoftware
> Join us on LinkedIn: https://www.linkedin.com/company/6wind
>
> <https://www.linkedin.com/company/6wind>
>
>
>
>


-- 
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2018-09-07 18:06 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-06 16:39 [dpdk-users] TestPMD testing with VMX3NET Devices Jamie Fargen
2018-09-07 13:08 ` Rami Rosen
2018-09-07 13:22   ` Jamie Fargen
2018-09-07 13:52     ` Rami Rosen
2018-09-07 14:38       ` Jamie Fargen
2018-09-07 15:20         ` Jamie Fargen
2018-09-07 15:44           ` Rami Rosen
2018-09-07 17:24 ` Emre Eraltan
2018-09-07 18:06   ` Jamie Fargen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).