DPDK patches and discussions
From: Keren Hochman <keren.hochman@lightcyber.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Run 2 different testpmd from the same machine
Date: Wed, 28 Sep 2016 10:36:40 +0300
Message-ID: <CAJq3SQ4TrBocQXfjcUGfYThChEzZv7LW03JqCmKid7oAjm7ayw@mail.gmail.com>
In-Reply-To: <7032DA68-EB05-4D42-A528-ED14954D91F4@intel.com>

Is there any way to connect to the same PCI device from 2 different
applications? Neither application transmits any data, and one of them only
uses rte_eal_init and rte_eth_link.
Thank you,
Keren

On Mon, Sep 12, 2016 at 4:29 PM, Wiles, Keith <keith.wiles@intel.com> wrote:

>
> Regards,
> Keith
>
> > On Sep 12, 2016, at 6:18 AM, Keren Hochman <keren.hochman@lightcyber.com>
> wrote:
> >
> > Hi,
> > I tried to run 2 instances of testpmd from the same machine but received a
> > message: "Cannot get hugepage information" when I tried to run the second
> > instance. Is there a way to disable hugepages or to allow two instances to
> > access them? Thanks, Keren
>
> When running two or more DPDK application instances, you need to make sure
> the resources are split up correctly. You did not supply the command lines
> being used, but I will try to state how it is done.
>
> First, memory (huge pages) must be allocated to each instance using
> --socket-mem 128,128, or --socket-mem 128 if you only have one socket in
> your system. Make sure you have enough huge pages allocated in the
> /etc/sysctl.conf file for both instances. The --socket-mem values are in
> megabytes per socket: --socket-mem 128,128 gives 256 MB of huge-page
> memory to one instance, and if the second instance uses --socket-mem
> 256,256 it takes another 512 MB, which means you need 256+512 = 768 MB of
> huge pages in the system.
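>
> As a rough sketch (assuming the default 2 MB huge page size, so 768 MB
> total works out to 384 pages):
>
>     # /etc/sysctl.conf: persistent huge page reservation
>     vm.nr_hugepages = 384
>
>     # apply without rebooting
>     sudo sysctl -p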
>
> Next, the huge page files in the /dev/hugepages directory must have
> different prefixes, set with the --file-prefix option, giving a different
> file prefix to each instance. If you have already run a DPDK instance once
> without the option, please 'sudo rm -fr /dev/hugepages/*' to release the
> current huge pages.
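>
> Something like this, where "app1" and "app2" are just example prefix
> names (each instance should then create its own <prefix>map_* files
> under /dev/hugepages):
>
>     --file-prefix app1    # instance 1
>     --file-prefix app2    # instance 2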
>
> Next, you need to make sure each instance blacklists, using the -b option
> on its command line, the ports it does not use. Each instance needs to
> blacklist the ports not being used. This seems the easiest to me, but you
> could look into using the whitelist option as well.
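>
> For example, with two ports (the PCI addresses here are hypothetical;
> use the ones lspci reports on your machine):
>
>     -b 0000:03:00.1    # instance 1 keeps 0000:03:00.0
>     -b 0000:03:00.0    # instance 2 keeps 0000:03:00.1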
>
> Next, make sure you allocate different cores to each instance using the -c
> or -l option; the -l option is a bit easier to read, IMO.
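>
> For example (the core numbers are illustrative; pick cores that exist on
> your system):
>
>     -l 1-4    # instance 1
>     -l 5-8    # instance 2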
>
> Next, use --proc-type auto in both instances, just to be safe. This could
> be optional, I think.
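>
> Putting it all together, the two command lines might look something like
> this (cores, memory sizes, prefixes, and PCI addresses are examples only,
> not taken from your setup):
>
>     sudo ./testpmd -l 1-4 -n 4 --socket-mem 128,128 --file-prefix app1 \
>          -b 0000:03:00.1 --proc-type auto -- -i
>
>     sudo ./testpmd -l 5-8 -n 4 --socket-mem 256,256 --file-prefix app2 \
>          -b 0000:03:00.0 --proc-type auto -- -i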
>
> I hope this helps. You can also pull down Pktgen and look at the
> pktgen-master.sh and pktgen-slave.sh scripts and modify them for your
> needs: http://dpdk.org/download
>
>

Thread overview: 3 messages
2016-09-12 11:18 Keren Hochman
2016-09-12 13:29 ` Wiles, Keith
2016-09-28  7:36   ` Keren Hochman [this message]
