From: Jamie Fargen <jfargen@redhat.com>
To: Emre Eraltan <emre.eraltan@6wind.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] TestPMD testing with VMX3NET Devices...
Date: Fri, 7 Sep 2018 14:06:50 -0400
Message-ID: <CAB-EjE+TgzDWC3jUkwn8G7a0VMZx9nq2iWcKc70xYBRrX8XRjA@mail.gmail.com>
In-Reply-To: <CADe9FyWo1ibXBbYSc+8JeGLZvdz8-rbFfGZ2RDes+t7UBXp=7g@mail.gmail.com>
Emre-
Things are looking better, but this is an ESX 6.x hypervisor with a CentOS 7.5
guest, and it looks like the vmxnet3 device is reporting an incompatible
hardware version (0). I will try to get more details about the hypervisor.
# insmod /root/dpdk-stable-17.11.4/build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
# ./dpdk-devbind.py -b igb_uio 0b:00.0
# ./dpdk-devbind.py -b igb_uio 13:00.0
# echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# mount -t hugetlbfs nodev /mnt/huge
# testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac \
    --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i \
    --nb-cores=2 --rxq=8 --txq=8
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
PMD: eth_vmxnet3_dev_init(): Incompatible hardware version: 0
EAL: Requested device 0000:0b:00.0 cannot be used
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
PMD: eth_vmxnet3_dev_init(): Incompatible hardware version: 0
EAL: Requested device 0000:13:00.0 cannot be used
EAL: PCI device 0000:1b:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: No probed ethernet devices
Set mac packet forwarding mode
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
socket=0
Done
testpmd> show port info all
testpmd>
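
As a sanity check before binding, the device IDs and kernel driver info can
be inspected like this (ens192 is an example interface name; substitute the
actual one from "ip link"):

# lspci -nn -s 0b:00.0
# ethtool -i ens192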
On Fri, Sep 7, 2018 at 1:24 PM, Emre Eraltan <emre.eraltan@6wind.com> wrote:
> Hi Jamie,
>
> I recently used testpmd on VMware ESX 5.5 with an Ubuntu 16.04 VM, and it
> worked with vmxnet3 devices.
>
> Did you try lowering your RXD and TXD sizes? I am not sure VMXNET3
> supports your values of "--rxd=8192 --txd=8192"; the port info below
> reports a maximum of 4096 descriptors per queue.
>
> Try removing these two parameters to use the default sizes, or lower the
> values, as in the example below.
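>
> A minimal sketch, staying within the 4096-descriptor limit the PMD
> reports (otherwise unchanged from your command line):
>
> $ testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac \
>     --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i \
>     --nb-cores=2 --rxq=8 --txq=8 --rxd=4096 --txd=4096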
>
> From my history:
> $ wget http://fast.dpdk.org/rel/dpdk-17.11.4.tar.xz
> $ tar -xf dpdk-17.11.4.tar.xz
> $ cd dpdk-stable-17.11.4
> $ make config T=x86_64-native-linuxapp-gcc
> $ make ## had to install libnuma-dev package
> $ modprobe uio
> $ insmod ./build/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.ko
> $ ./usertools/dpdk-devbind.py -b igb_uio 03:00.0
> $ ./usertools/dpdk-devbind.py -b igb_uio 0b:00.0
> $ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> $ mkdir -p /mnt/huge
> $ mount -t hugetlbfs nodev /mnt/huge
> $ testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac \
>     --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i \
>     --nb-cores=2 --rxq=8 --txq=8
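>
> To verify the hugepages were actually reserved, a quick optional check
> (the HugePages_* fields come straight from /proc/meminfo):
>
> $ grep HugePages /proc/meminfo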
>
> testpmd> show port info all
>
> ********************* Infos for port 0 *********************
> MAC address: 00:0C:29:06:04:24
> Driver name: net_vmxnet3
> Connect to socket: 0
> memory allocation on the socket: 0
> Link status: up
> Link speed: 10000 Mbps
> Link duplex: full-duplex
> MTU: 1500
> Promiscuous mode: enabled
> Allmulticast mode: disabled
> Maximum number of MAC addresses: 1
> Maximum number of MAC addresses of hash filtering: 0
> VLAN offload:
> strip off
> filter off
> qinq(extend) off
> Supported flow types:
> ipv4
> ipv4-tcp
> ipv6
> ipv6-tcp
> Max possible RX queues: 16
> Max possible number of RXDs per queue: 4096
> Min possible number of RXDs per queue: 128
> RXDs number alignment: 1
> Max possible TX queues: 8
> Max possible number of TXDs per queue: 4096
> Min possible number of TXDs per queue: 512
> TXDs number alignment: 1
>
> ********************* Infos for port 1 *********************
> MAC address: 00:0C:29:06:04:2E
> Driver name: net_vmxnet3
> Connect to socket: 0
> memory allocation on the socket: 0
> Link status: up
> Link speed: 10000 Mbps
> Link duplex: full-duplex
> MTU: 1500
> Promiscuous mode: enabled
> Allmulticast mode: disabled
> Maximum number of MAC addresses: 1
> Maximum number of MAC addresses of hash filtering: 0
> VLAN offload:
> strip off
> filter off
> qinq(extend) off
> Supported flow types:
> ipv4
> ipv4-tcp
> ipv6
> ipv6-tcp
> Max possible RX queues: 16
> Max possible number of RXDs per queue: 4096
> Min possible number of RXDs per queue: 128
> RXDs number alignment: 1
> Max possible TX queues: 8
> Max possible number of TXDs per queue: 4096
> Min possible number of TXDs per queue: 512
> TXDs number alignment: 1
>
> Hope it helps.
>
> Regards,
> Emre
>
> On Thu, Sep 6, 2018 at 9:39 AM, Jamie Fargen <jfargen@redhat.com> wrote:
>
>> Hello-
>>
>> I would like to do some performance analysis using testpmd on a RHEL7
>> VMware guest with VMXNET3 network devices. Similar tests have been
>> performed on RHEL7 KVM guests using VirtIO network devices, but the same
>> process does not work with VMXNET3 interfaces.
>>
>> dpdk-stable-17.11.4 has been compiled, and the devices appear to be
>> properly bound to the igb_uio driver, but when testpmd is started it is
>> unable to locate them.
>>
>> This is the basic process: the uio module is loaded, the devices are
>> bound to the driver, and testpmd is started.
>>
>> [root@redacted ~]# cat startTestPmd17.sh
>> #!/bin/bash -x
>> modprobe uio
>> insmod /root/dpdk-stable-17.11.4/build/kmod/igb_uio.ko
>> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 0b:00.0
>> /root/dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b igb_uio 13:00.0
>> mount -t hugetlbfs nodev /mnt/huge
>> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>> testpmd -l 1,2,3 -n 1 -- --disable-hw-vlan --forward-mode=mac \
>>     --eth-peer=0,00:00:00:00:33:33 --eth-peer=1,00:00:00:00:44:44 -i \
>>     --nb-cores=2 --rxq=8 --txq=8 --rxd=8192 --txd=8192
>>
>> EAL: Detected 4 lcore(s)
>> EAL: No free hugepages reported in hugepages-1048576kB
>> EAL: Probing VFIO support...
>> EAL: No probed ethernet devices
>> Set mac packet forwarding mode
>> Interactive-mode selected
>> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176,
>> socket=0
>> Done
>> testpmd>
>>
>>
>> The network devices are bound to the DPDK-compatible uio driver:
>> [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -s
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:0b:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>> 0000:13:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:1b:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens256 drv=vmxnet3 unused=igb_uio *Active*
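>>
>> For reference, a bound device can be handed back to the kernel driver
>> with dpdk-devbind.py (the PCI address below is one of the two bound
>> above):
>>
>> [root@redacted ~]# dpdk-stable-17.11.4/usertools/dpdk-devbind.py -b vmxnet3 0000:0b:00.0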
>>
>>
>> If anyone has any spare cycles to help me solve this issue it would be
>> greatly appreciated.
>>
>>
>> --
>> Jamie Fargen
>> Senior Consultant
>> jfargen@redhat.com
>> 813-817-4430
>>
>
>
>
> --
> Emre Eraltan
> Lead Solutions Architect
>
> Direct (US): +1 408 508 6733
> Mobile (US): +1 408 329 0271
>
> 2975 Scott Blvd, Suite 115
> Santa Clara 95054 CA
>
> Visit our web site: http://www.6wind.com
> Join us on Twitter: http://twitter.com/6windsoftware
> Join us on LinkedIn: https://www.linkedin.com/company/6wind
>
--
Jamie Fargen
Senior Consultant
jfargen@redhat.com
813-817-4430