From: Fulvio Risso <fulvio.risso@polito.it>
To: dev@dpdk.org
Subject: [dpdk-dev] Does PCI hotplug work with IVSHMEM?
Date: Tue, 21 Jul 2015 11:45:54 +0200
Message-ID: <55AE14D2.9050608@polito.it>
Dear all,
we are dynamically adding an IVSHMEM device to a VM that is already
running, but apparently the device is not visible to DPDK, even though
it is correctly recognized by the Guest OS.
This is the list of steps we execute:
1) Launch a new Guest VM with Qemu
2) Create a new IVSHMEM metadata file in the Host (a sketch of this
step is shown right after this list of steps)
3) Map that file as a new IVSHMEM device in the Guest
For this step, we use the "device_add" command from Qemu:
(qemu) device_add ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_vm_1:0x0:0x4000
4) List the available PCI devices in the Guest with "lspci":
$ sudo lspci
00:04.0 RAM memory: Red Hat, Inc Virtio Inter-VM shared memory
==> hence, the Guest OS correctly recognizes the new device
5) Launch a simple DPDK application such as 'helloworld' in the Guest:
$ sudo ./build/helloworld -c 0x1 -n 4
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Searching for IVSHMEM devices...
EAL: No IVSHMEM configuration found!
EAL: Setting up memory...
==> Hence, the DPDK app in the Guest OS doesn't recognize the new
device.
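For reference, the metadata file of step 2 is created on the Host with
the librte_ivshmem API; a minimal sketch, assuming a single shared ring
(the ring name "ivshmem_ring" is illustrative, while the metadata name
"vm_1" matches the file above):

#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ring.h>
#include <rte_ivshmem.h>

int
main(int argc, char **argv)
{
	char cmdline[1024];
	struct rte_ring *r;

	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

	/* Any hugepage-backed DPDK object can be shared; a ring here
	 * (the name is illustrative). */
	r = rte_ring_create("ivshmem_ring", 256, SOCKET_ID_ANY, 0);
	if (r == NULL)
		rte_exit(EXIT_FAILURE, "Cannot create ring\n");

	/* Creates /var/run/.dpdk_ivshmem_metadata_vm_1 */
	if (rte_ivshmem_metadata_create("vm_1") < 0)
		rte_exit(EXIT_FAILURE, "Cannot create metadata\n");
	if (rte_ivshmem_metadata_add_ring(r, "vm_1") < 0)
		rte_exit(EXIT_FAILURE, "Cannot add ring to metadata\n");

	/* Prints the "ivshmem,size=...,shm=fd:..." string that we
	 * then pass to device_add in step 3. */
	if (rte_ivshmem_metadata_cmdline_generate(cmdline,
			sizeof(cmdline), "vm_1") < 0)
		rte_exit(EXIT_FAILURE, "Cannot generate command line\n");
	printf("%s\n", cmdline);

	return 0;
}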
An additional observation that may be important here: if we reboot the
Guest and re-launch the same DPDK application as before, the IVSHMEM
device is correctly detected by the DPDK app:
$ sudo reboot
....
$ sudo ./build/helloworld -c 0x1 -n 4
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Searching for IVSHMEM devices...
EAL: Parsing metadata for "vm_1"
EAL: Found IVSHMEM device 00:04.0
EAL: Memory segment mapped: 0x7f8b40d8c000 (len 3937000) at offset 0xd8c000
EAL: IVSHMEM segment found, size: 0x3936ec0
EAL: Setting up memory...
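For context, once EAL attaches the IVSHMEM segment as above, the Guest
application reaches the shared objects with a plain name lookup; a
minimal sketch, assuming the illustrative ring name from the Host-side
sketch above:

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ring.h>

int
main(int argc, char **argv)
{
	struct rte_ring *r;

	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

	/* In the hotplug case of step 5 above, where EAL prints
	 * "No IVSHMEM configuration found!", this lookup fails
	 * because no IVSHMEM segment was attached. */
	r = rte_ring_lookup("ivshmem_ring");
	if (r == NULL)
		rte_exit(EXIT_FAILURE, "Shared ring not found\n");

	return 0;
}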
Finally, this is what the DPDK programming guide says about IVSHMEM:
-----------------------------------------------------------------------
http://dpdk.org/doc/guides/prog_guide/ivshmem_lib.html
Currently, there is no hot plug support for QEMU IVSHMEM devices, so one
cannot add additional memory to an IVSHMEM device once it has been
created. Therefore, the correct sequence to run an IVSHMEM application
is to run host application first, obtain the command lines for each
IVSHMEM device and then run all QEMU instances with guest applications
afterwards.
-----------------------------------------------------------------------
Our understanding is that IVSHMEM hotplug is supported, but that the
memory of an IVSHMEM device cannot be resized dynamically once it has
been created.
Is that correct?
fulvio