From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
To: Tetsuya Mukawa <mukawa@igel.co.jp>, "dev@dpdk.org" <dev@dpdk.org>
Cc: Hayato Momma <h-momma@ce.jp.nec.com>
Subject: Re: [dpdk-dev] [memnic PATCH 0/7] MEMNIC PMD performance improvement
Date: Thu, 11 Sep 2014 08:36:48 +0000
Message-ID: <7F861DC0615E0C47A872E6F3C5FCDDBD011AA7E1@BPXM14GP.gisp.nec.co.jp>
In-Reply-To: <5411598F.70907@igel.co.jp>
Hi Mukawa-san,
> Subject: Re: [dpdk-dev] [memnic PATCH 0/7] MEMNIC PMD performance improvement
>
> Hi Shimamoto-san,
>
>
> (2014/09/11 16:45), Hiroshi Shimamoto wrote:
> > From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
> >
> > This patchset improves MEMNIC PMD performance.
> >
> > The first patch introduces a new benchmark test run in guest,
> > and will be used to evaluate the following patch effects.
> >
> > This patchset improves the throughput results of memnic-tester.
> > Using Xeon E5-2697 v2 @ 2.70GHz, 4 vCPU.
> How many cores are you actually using for sending and receiving?
In this case, I use 4 dedicated cores, one pinned to each vCPU;
so the answer is 4 cores, or more precisely 2 cores for the test DPDK app.
> I guess 1 dedicated core is used for sending on host or guest side, and
> one more dedicated core is for receiving on the other side.
> And you've got the following performance results.
> Is this correct?
The test details are in the first patch.
The test runs entirely in the guest because I want to measure
only the PMD performance; the host does nothing during the test.
In the guest, one thread, pinned to a dedicated core, emulates the
host's packet send/recv by turning the descriptor flag on and off.
Another thread, also pinned to a dedicated core, calls rx_burst
and tx_burst.
The test measures how many packets MEMNIC PMD can receive and
transmit.
In other words, these results show how much throughput a guest
application can achieve, provided the host can send and receive
packets fast enough.
thanks,
Hiroshi
>
> Thanks,
> Tetsuya Mukawa
>
> > size (bytes) | before | after
> > 64 | 4.18Mpps | 5.83Mpps
> > 128 | 3.85Mpps | 5.71Mpps
> > 256 | 4.01Mpps | 5.40Mpps
> > 512 | 3.52Mpps | 4.64Mpps
> > 1024 | 3.18Mpps | 3.68Mpps
> > 1280 | 2.86Mpps | 3.17Mpps
> > 1518 | 2.59Mpps | 2.90Mpps
> >
> > Hiroshi Shimamoto (7):
> > guest: memnic-tester: PMD benchmark in guest
> > pmd: remove needless assignment
> > pmd: use helper macros
> > pmd: use compiler barrier
> > pmd: packet receiving optimization with prefetch
> > pmd: add branch hint in recv/xmit
> > pmd: split calling mbuf free
> >
> > guest/Makefile | 20 ++++
> > guest/README.rst | 94 +++++++++++++++++
> > guest/memnic-tester.c | 281 ++++++++++++++++++++++++++++++++++++++++++++++++++
> > pmd/pmd_memnic.c | 43 ++++----
> > 4 files changed, 417 insertions(+), 21 deletions(-)
> > create mode 100644 guest/Makefile
> > create mode 100644 guest/README.rst
> > create mode 100644 guest/memnic-tester.c
> >