From: Abhijeet Karve <abhijeet.karve@tcs.com>
To: przemyslaw.czesnowicz@intel.com, sean.k.mooney@intel.com,
	sugesh.chandran@intel.com
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"discuss@openvswitch.org" <discuss@openvswitch.org>
Subject: [dpdk-dev] DPDK with vhostuser - Issue in Testing with DPDK-Pktgen
Date: Fri, 26 Feb 2016 20:14:50 +0530	[thread overview]
Message-ID: <OF5AAA3FD0.4866E8B4-ON65257F65.004F88AC-65257F65.0051028E@tcs.com> (raw)

Dear Przemek, Sean & Sugesh,

It would be great to get your valuable inputs on the issue described below. 
We have set up OVS-DPDK with vhost-user in an all-in-one OpenStack 
deployment on the following software platform.
_____________
Openstack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK Version: 2.0.0
_____________

In our OpenStack OVS-DPDK setup, we spawn two Ubuntu instances on the same 
host, run DPDK-Pktgen in one of the VMs, and measure traffic between the 
two VMs.

Each VM has 8 GB of memory and 4 cores, and we use a single port for the 
pktgen application.
 
Please find the VM huge page settings as well:
 
HugePages_Total:    1024
HugePages_Free:      896
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
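
For reference, this is roughly how we verify the hugepage allocation and 
the hugetlbfs mount inside the guest before starting pktgen (the /mnt/huge 
mount point is just our local convention):

    # Check 2 MB hugepage availability inside the VM
    grep Huge /proc/meminfo

    # Make sure a hugetlbfs mount exists for the DPDK EAL to use
    mount | grep hugetlbfs || mount -t hugetlbfs nodev /mnt/huge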
 
Please find below the pktgen command we are using:
 
./app/app/$RTE_TARGET/pktgen -c 0f -n 3  --proc-type auto --socket-mem 256 
-b 0000:00:03.0 -- -P -m "[1-3].0"
 
Regardless of which cores we pass here (-m "[1-3].0"), traffic is sent via 
only one core; the remaining two cores stay idle.
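
For completeness, the core mappings we have tried look roughly like the 
ones below. As we understand the pktgen -m syntax, the part before the 
colon lists RX cores and the part after lists TX cores, with the port 
index after the dot; any mapping beyond one core per direction only helps 
if the virtio device actually exposes more than one queue pair.

    # cores 1-3 handle both RX and TX for port 0 (what we use today)
    -m "[1-3].0"

    # core 1 on RX, cores 2 and 3 on TX for port 0
    -m "[1:2-3].0"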
 
We have also set the following multiqueue properties on the flavor and 
image we are using:
 
hw:vif_multiqueue_enabled=true   -> Flavor property
hw_vif_multiqueue_enabled=true   -> Image property
 
Even with these properties set, we still see the same single-queue 
behaviour in the application.
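
As an additional guest-side check (a sketch only, assuming the interface 
is named eth0 and is still bound to the kernel virtio driver rather than 
the DPDK PMD), we look at the channel count the guest sees and try to 
enable more queues:

    # Show how many RX/TX channel pairs the virtio NIC exposes
    ethtool -l eth0

    # If more than one combined channel is available, enable them
    ethtool -L eth0 combined 4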
 
From the logs:
. . . . .
** Dev Info (rte_virtio_pmd:0) **
   max_vfs       :   0   min_rx_bufsize :  64   max_rx_pktlen : 9728
   max_rx_queues :   1   max_tx_queues  :   1
. . . . .
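
We are also not sure whether the vhost-user port is created with multiple 
queues on the host side at all. A rough way we try to check this (the 
instance name below is only an example):

    # On the compute host: does the QEMU command line carry a
    # multi-queue vhost-user netdev for this instance?
    ps -ef | grep qemu | grep -o 'vhost-user[^ ]*'

    # In the libvirt domain XML we would expect something like
    #   <driver queues='4'/>
    # on the vhostuser <interface> element
    virsh dumpxml instance-00000001 | grep -A 3 vhostuser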
 

Kindly suggest whether we need to tune anything on the OpenStack side or 
in the pktgen application.


Thanks & Regards
Abhijeet Karve
