DPDK usage discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Templin (US), Fred L" <Fred.L.Templin@boeing.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] A simpler question - does DPDK run over virtual interfaces?
Date: Wed, 5 May 2021 17:19:21 +0100
Message-ID: <596d01d1-3472-4d47-8a1b-5037238baa66@intel.com>
In-Reply-To: <62e934307738436988aefc7d292b7f28@boeing.com>

On 4/13/2021 12:39 PM, Templin (US), Fred L wrote:
> Let me backtrack and start by asking a simpler question - can DPDK run over
> virtual interfaces such as a loopback?
> 

It can, using DPDK virtual device (vdev) drivers such as 'af_packet' or 'pcap'. For example:
"./build/app/dpdk-testpmd --vdev net_af_packet0,iface=lo --no-pci -- -i"

I don't know about CORE vnodes, but if a vnode interface is seen as just
another Linux virtual interface, the above should work the same way.
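As a sketch, from within a vnode shell where the interface shows up as 'eth0'
(just an assumption based on the 'ip link' output in your transcript below):

  ./build/app/dpdk-testpmd --vdev net_af_packet0,iface=eth0 --no-pci -- -i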

This may be enough to test/check some functionality, but for performance you
will need physical interfaces.
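The same approach should apply to the example applications; as a sketch (the
binary path here assumes a meson build of the examples, adjust as needed),
running 'ip_fragmentation' over an af_packet vdev could look like:

  ./build/examples/dpdk-ip_fragmentation --vdev net_af_packet0,iface=eth0 --no-pci -- -p 0x1

where '-p 0x1' is the port mask selecting the single vdev port.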

Btw, binding a physical interface to the vfio-pci driver is what enables DPDK
to drive it directly; you don't need this step for Linux virtual interfaces.
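For reference, binding a physical port (using the PCI address from the
'dpdk-devbind.py --status' output in your transcript as an example) would be:

  sudo ./dpdk-devbind.py --bind=vfio-pci 0000:00:08.0

after which '--status' should list it under 'Network devices using
DPDK-compatible driver'. Again, this applies only to physical NICs.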

> Thanks - Fred
> 
>> -----Original Message-----
>> From: Templin (US), Fred L
>> Sent: Monday, April 12, 2021 10:42 AM
>> To: 'users@dpdk.org' <users@dpdk.org>
>> Subject: Problems using DPDK within CORE emulations running on Ubuntu 18.04 VMs
>>
>> Hi, I am running Ubuntu 18.04 in a VM on VirtualBox. I have built and
>> installed DPDK-20.11 from sources, and had no trouble building it by
>> following the instructions in the Getting Started Guide for Linux:
>>
>> http://doc.dpdk.org/guides/linux_gsg/intro.html
>>
>> Next, within the Ubuntu VM I run the CORE network emulator:
>>
>> https://www.nrl.navy.mil/Our-Work/Areas-of-Research/Information-Technology/NCS/CORE/
>>
>> I have a simple two-node network setup with two CORE vnodes connected
>> via a network switch, and verified that I can ping between the two nodes.
>> Now, I want to experiment with the DPDK-20.11 "ip_fragmentation" and
>> "ip_reassembly" example programs (which I was able to build successfully),
>> but it appears that these example programs require ports to be mapped.
>>
>> So, I skipped ahead to Section 5 of the Getting Started Guide for Linux
>> ("Linux Drivers") and tried to follow the instructions in Section 5.5. on
>> "Binding and Unbinding Network Ports to/from the Kernel Modules" by
>> typing commands into one of the CORE vnode shell windows. The text
>> at the end of this message shows the commands I typed and the output
>> I was shown in response. In particular, the "dpdk-devbind.py --status"
>> script does not appear to show a usable map of my CORE vnode
>> network interfaces, and attempts to bind were unsuccessful.
>>
>> Has anyone ever run DPDK out of a CORE vnode before, and/or can
>> you tell me what steps are needed to bind CORE vnode
>> interfaces so that they can be used by DPDK? Or is DPDK simply
>> incompatible with virtualization environments?
>>
>> Another question - can DPDK be run over loopback interfaces?
>>
>> Thanks - Fred
>>
>> ---
>>
>> Script started on 2021-04-12 09:54:19-0700
>> root@n1: pwd
>> /home/fltemplin/src/DPDK/dpdk-20.11/usertools
>> root@n1: ip link sho
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> 5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>>     link/ether 00:00:00:aa:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> root@n1: sudo modprobe vfio-pci
>> root@n1: ip link sho
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> 5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>>     link/ether 00:00:00:aa:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> root@n1: ./dpdk-devbind.py --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:00:03.0 '82540EM Gigabit Ethernet Controller 100e' if= drv=e1000 unused=vfio-pci
>> 0000:00:08.0 '82540EM Gigabit Ethernet Controller 100e' if= drv=e1000 unused=vfio-pci
>>
>> No 'Baseband' devices detected
>> ==============================
>>
>> No 'Crypto' devices detected
>> ============================
>>
>> No 'Eventdev' devices detected
>> ==============================
>>
>> No 'Mempool' devices detected
>> =============================
>>
>> No 'Compress' devices detected
>> ==============================
>>
>> No 'Misc (rawdev)' devices detected
>> ===================================
>>
>> No 'Regex' devices detected
>> ===========================
>> root@n1: exit
>>
>> Script done on 2021-04-12 09:55:41-0700
>>
> 

