DPDK usage discussions
From: Harsh Patel <thadodaharsh10@gmail.com>
To: keith.wiles@intel.com
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Query on handling packets
Date: Sat, 17 Nov 2018 15:52:37 +0530	[thread overview]
Message-ID: <CAA0iYrGYreV5eoZ-ZfO=+ab-BOhGe_Rvor5j=pFTehFh4YAQrw@mail.gmail.com> (raw)
In-Reply-To: <A74CE37E-B8B1-4069-9AAF-566DE44F92A8@intel.com>

Hello,
Thanks a lot for going through the code and providing us with so much
information.
We removed all the memcpy/malloc from the data path as you suggested, and
here are the links to the Read/Write parts of the code.
READ -
https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L638
WRITE -
https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L600
After removing these, we see a performance gain, but it is still not as
good as the raw socket.

After this, we ran some tests; the resulting graphs are attached to this
mail. They show TCP & UDP flows of various bandwidths for both the raw
socket and ns-3-DPDK scenarios.
For some reason, we see a bottleneck in both TCP and UDP; it will be clear
when you look at the graphs. Do you know what could be the reason for
this? Or could you look at the code and see what is going wrong?

Thanks for your help.
Regards,
Harsh & Hrishikesh.

On Wed, 14 Nov 2018 at 20:45, Wiles, Keith <keith.wiles@intel.com> wrote:

>
>
> > On Nov 14, 2018, at 7:54 AM, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> >
> > Hello,
> > This is a link to the complete source code of our project:
> > https://github.com/ns-3-dpdk-integration/ns-3-dpdk
> > For a description of the project, look through this:
> > https://ns-3-dpdk-integration.github.io/
> > Once you go through it, you will have a basic understanding of the
> > project.
> > A link to the installation instructions is provided on the github.io page.
> >
> > In the code mentioned above, the master branch contains the
> > implementation based on rte_rings which we mentioned at the very
> > beginning of the discussion. There is a branch named "newrxtx" which
> > contains the implementation following the logic you provided.
> >
> > We would like you to take a look at the code in the newrxtx branch
> > (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/tree/newrxtx).
> > In this branch, go to the
> > ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/ directory. Here we
> > have implemented the DpdkNetDevice model, which provides the interaction
> > between ns-3 and DPDK. We would like you to take a look at our Read
> > function
> > (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L626)
> > and Write function
> > (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L576).
> > These contain the logic you suggested.
>
> A couple of points for performance with DPDK:
>  - Never use memcpy in the data path unless it is absolutely required, and
>    always try to avoid copying all of the data. In some cases you may want
>    to use memcpy or rte_memcpy to replace or grab a copy of only a small
>    amount of data.
>  - Never use malloc in the data path, meaning never call malloc on every
>    packet; use a list of buffers allocated up front if you need buffers of
>    some type.
>  - DPDK mempools are highly tuned; use them for fixed-size buffers if you
>    can (see the sketch after this list).
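>
> A minimal sketch of the mempool idea (the pool and cache sizes here are
> placeholder values, not tuned for any particular setup):
>
>     #include <rte_lcore.h>
>     #include <rte_mbuf.h>
>     #include <rte_mempool.h>
>
>     /* Created once at init time, never in the data path. */
>     struct rte_mempool *pool = rte_pktmbuf_pool_create(
>         "pktmbuf_pool",            /* name */
>         8192,                      /* number of mbufs (placeholder) */
>         256,                       /* per-lcore cache size (placeholder) */
>         0,                         /* private data size */
>         RTE_MBUF_DEFAULT_BUF_SIZE, /* data room per mbuf */
>         rte_socket_id());
>
>     /* In the data path: take and return buffers from the pool instead
>      * of calling malloc()/free() per packet. */
>     struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
>     /* ... fill and use m ... */
>     rte_pktmbuf_free(m);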
>
> I believe the DPDK docs include a performance white paper or some
> information about optimizing packet processing in DPDK. If you have not
> read it, you may want to do so.
>
> >
> > Can you go through this and suggest some changes or point out any
> > mistakes in our code? If you need any help or have any doubts, ping us.
> >
> > Thanks and Regards,
> > Harsh & Hrishikesh
> >
> > On Tue, 13 Nov 2018 at 19:17, Wiles, Keith <keith.wiles@intel.com>
> wrote:
> >
> >
> > > On Nov 12, 2018, at 8:25 PM, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> > >
> > > Hello,
> > > It would be really helpful if you could provide us a link (for both
> > > Tx and Rx) to the project you mentioned earlier where you worked on a
> > > similar problem, if possible.
> > >
> >
> > At this time I cannot provide a link. I will try and see what I can do,
> > but do not hold your breath; it could be a while, as we have to go
> > through a lot of legal review. If you can, try the VTune tool from Intel
> > for x86 systems, if you can get a copy for your platform, as it can tell
> > you a lot about the code and where the performance issues are located.
> > If you are not running Intel x86 then my code may not work for you; I do
> > not remember if you told me which platform you are on.
> >
> >
> > > Thanks and Regards,
> > > Harsh & Hrishikesh.
> > >
> > > On Mon, 12 Nov 2018 at 01:15, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> > > Thanks a lot for all the support. We are reviewing our work for now
> > > and will contact you once we are done checking it completely from our
> > > side. Thanks for the help.
> > >
> > > Regards,
> > > Harsh and Hrishikesh
> > >
> > > On Sat, 10 Nov 2018 at 11:47, Wiles, Keith <keith.wiles@intel.com>
> wrote:
> > > Please make sure to send your emails in plain text format. The Mac
> > > mail program loves to use rich-text format if the original email used
> > > it, even though I have told it to only send plain text :-(
> > >
> > > > On Nov 9, 2018, at 4:09 AM, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> > > >
> > > > We have implemented the logic for Tx/Rx as you suggested. We
> > > > compared the obtained throughput with another version of the same
> > > > application that uses Linux raw sockets.
> > > > Unfortunately, the throughput of our DPDK application is lower by a
> > > > good margin. Is there any way we can optimize our implementation, or
> > > > is there anything we are missing?
> > > >
> > >
> > > The PoC code I was developing for DAPI did not have any performance
> > > issues; it ran just as fast in my limited testing. I converted the
> > > l3fwd code and, as I remember, saw 10G 64-byte wire rate using pktgen
> > > to generate the traffic.
> > >
> > > Not sure why you would see a big performance drop, but I do not know
> > > your application or code.
> > >
> > > > Thanks and regards
> > > > Harsh & Hrishikesh
> > > >
> > > > On Thu, 8 Nov 2018 at 23:14, Wiles, Keith <keith.wiles@intel.com>
> wrote:
> > > >
> > > >
> > > >> On Nov 8, 2018, at 4:58 PM, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> > > >>
> > > >> Thanks for your insight on the topic. Transmission is working with
> > > >> the functions you mentioned. We tried to search for similar
> > > >> functions for handling incoming packets but could not find
> > > >> anything. Can you help us with that as well?
> > > >>
> > > >
> > > > I do not know of a DPDK API set for the RX side. But in the DAPI
> > > > (DPDK API) PoC I was working on and presented at the DPDK Summit
> > > > last September, I did create an RX-side version. The issue is that
> > > > it is a bit tangled up in the DAPI PoC.
> > > >
> > > > The basic concept is that a call to RX a single packet does an
> > > > rx_burst of N packets, keeping them in an mbuf list. The code would
> > > > spin waiting for mbufs to arrive, or return quickly if a flag was
> > > > set. When it did find RX mbufs it would return just the single mbuf
> > > > and keep the list of mbufs for later requests until the list is
> > > > empty, then do another rx_burst call. (A rough sketch of this
> > > > pattern follows below.)
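> > > >
> > > > A rough sketch of that pattern (this is not the PoC code; the
> > > > struct and function names below are illustrative only):
> > > >
> > > >     #include <rte_ethdev.h>
> > > >     #include <rte_mbuf.h>
> > > >
> > > >     #define RX_BURST 32  /* placeholder burst size */
> > > >
> > > >     /* Small stash of mbufs refilled by rte_eth_rx_burst(). */
> > > >     struct rx_stash {
> > > >         struct rte_mbuf *pkts[RX_BURST];
> > > >         uint16_t count;  /* mbufs currently held in the stash */
> > > >         uint16_t next;   /* index of the next mbuf to hand out */
> > > >     };
> > > >
> > > >     /* Hand out one packet at a time; refill the stash with a burst
> > > >      * when it runs empty. Returns NULL if nothing arrived. */
> > > >     static struct rte_mbuf *
> > > >     rx_one_packet(uint16_t port, uint16_t queue, struct rx_stash *s)
> > > >     {
> > > >         if (s->next == s->count) {
> > > >             s->count = rte_eth_rx_burst(port, queue, s->pkts,
> > > >                                         RX_BURST);
> > > >             s->next = 0;
> > > >             if (s->count == 0)
> > > >                 return NULL;  /* caller may spin or retry later */
> > > >         }
> > > >         return s->pkts[s->next++];
> > > >     }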
> > > >
> > > > Sorry, this is a really quick note on how it works. If you need
> > > > more details, we can talk more later.
> > > >>
> > > >> Regards,
> > > >> Harsh and Hrishikesh.
> > > >>
> > > >>
> > > >> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith <keith.wiles@intel.com>
> wrote:
> > > >>
> > > >>
> > > >> > On Nov 8, 2018, at 8:24 AM, Harsh Patel <thadodaharsh10@gmail.com>
> wrote:
> > > >> >
> > > >> > Hi,
> > > >> > We are working on a project where we are trying to integrate
> > > >> > DPDK with another piece of software. We are able to pass packets
> > > >> > from the other environment to the DPDK environment one by one. On
> > > >> > the other hand, DPDK sends and receives packets in bursts. We
> > > >> > want to know whether there is any functionality in DPDK to
> > > >> > convert single incoming packets into a burst of packets sent on
> > > >> > the NIC and, similarly, to convert a burst of packets read from
> > > >> > the NIC into packets delivered to the other environment
> > > >> > sequentially.
> > > >>
> > > >>
> > > >> Search in the docs or the lib/librte_ethdev directory for
> > > >> rte_eth_tx_buffer_init, rte_eth_tx_buffer, ... These calls buffer
> > > >> single packets and send them out as a burst; a short sketch of
> > > >> their use follows below.
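> > > >>
> > > >> A minimal sketch of how those calls fit together (BURST_SIZE,
> > > >> port_id, queue_id and mbuf are placeholders from your own setup
> > > >> and receive path; error handling omitted):
> > > >>
> > > >>     #include <rte_ethdev.h>
> > > >>     #include <rte_malloc.h>
> > > >>
> > > >>     #define BURST_SIZE 32  /* placeholder */
> > > >>
> > > >>     /* Allocate and initialize the TX buffer once, at setup time. */
> > > >>     struct rte_eth_dev_tx_buffer *txb = rte_zmalloc_socket(
> > > >>         "tx_buffer", RTE_ETH_TX_BUFFER_SIZE(BURST_SIZE), 0,
> > > >>         rte_eth_dev_socket_id(port_id));
> > > >>     rte_eth_tx_buffer_init(txb, BURST_SIZE);
> > > >>
> > > >>     /* Per packet: queue the mbuf; the buffer transmits a burst
> > > >>      * automatically once BURST_SIZE packets have accumulated. */
> > > >>     uint16_t sent = rte_eth_tx_buffer(port_id, queue_id, txb, mbuf);
> > > >>
> > > >>     /* Periodically (e.g. when idle): push out whatever is still
> > > >>      * sitting in the buffer. */
> > > >>     sent += rte_eth_tx_buffer_flush(port_id, queue_id, txb);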
> > > >>
> > > >>
> > > >>
> > > >> > Thanks and regards
> > > >> > Harsh Patel, Hrishikesh Hiraskar
> > > >> > NITK Surathkal
> > > >>
> > > >> Regards,
> > > >> Keith
> > > >>
> > > >
> > > > Regards,
> > > > Keith
> > > >
> > >
> > > Regards,
> > > Keith
> > >
> >
> > Regards,
> > Keith
> >
>
> Regards,
> Keith
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: UDP Throughput Comparison.png
Type: image/png
Size: 9863 bytes
Desc: not available
URL: <http://mails.dpdk.org/archives/users/attachments/20181117/40072f9b/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: TCP PPS Comparison.png
Type: image/png
Size: 11408 bytes
Desc: not available
URL: <http://mails.dpdk.org/archives/users/attachments/20181117/40072f9b/attachment-0001.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: TCP Throughput Comparison.png
Type: image/png
Size: 12833 bytes
Desc: not available
URL: <http://mails.dpdk.org/archives/users/attachments/20181117/40072f9b/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: UDP PPS Comparison.png
Type: image/png
Size: 11645 bytes
Desc: not available
URL: <http://mails.dpdk.org/archives/users/attachments/20181117/40072f9b/attachment-0003.png>


Thread overview: 43+ messages
2018-11-08  8:24 Harsh Patel
2018-11-08  8:56 ` Wiles, Keith
2018-11-08 16:58   ` Harsh Patel
2018-11-08 17:43     ` Wiles, Keith
2018-11-09 10:09       ` Harsh Patel
2018-11-09 21:26         ` Wiles, Keith
2018-11-10  6:17         ` Wiles, Keith
2018-11-11 19:45           ` Harsh Patel
2018-11-13  2:25             ` Harsh Patel
2018-11-13 13:47               ` Wiles, Keith
2018-11-14 13:54                 ` Harsh Patel
2018-11-14 15:02                   ` Wiles, Keith
2018-11-14 15:04                   ` Wiles, Keith
2018-11-14 15:15                   ` Wiles, Keith
2018-11-17 10:22                     ` Harsh Patel [this message]
2018-11-17 22:05                       ` Kyle Larose
2018-11-19 13:49                         ` Wiles, Keith
2018-11-22 15:54                           ` Harsh Patel
2018-11-24 15:43                             ` Wiles, Keith
2018-11-24 15:48                               ` Wiles, Keith
2018-11-24 16:01                             ` Wiles, Keith
2018-11-25  4:35                               ` Stephen Hemminger
2018-11-30  9:02                                 ` Harsh Patel
2018-11-30 10:24                                   ` Harsh Patel
2018-11-30 15:54                                   ` Wiles, Keith
2018-12-03  9:37                                     ` Harsh Patel
2018-12-14 17:41                                       ` Harsh Patel
2018-12-14 18:06                                         ` Wiles, Keith
     [not found]                                           ` <CAA0iYrHyLtO3XLXMq-aeVhgJhns0+ErfuhEeDSNDi4cFVBcZmw@mail.gmail.com>
2018-12-30  0:19                                             ` Wiles, Keith
2018-12-30  0:30                                             ` Wiles, Keith
2019-01-03 18:12                                               ` Harsh Patel
2019-01-03 22:43                                                 ` Wiles, Keith
2019-01-04  5:57                                                   ` Harsh Patel
2019-01-16 13:55                                                     ` Harsh Patel
2019-01-30 23:36                                                       ` Harsh Patel
2019-01-31 16:58                                                         ` Wiles, Keith
2019-02-05  6:37                                                           ` Harsh Patel
2019-02-05 13:03                                                             ` Wiles, Keith
2019-02-05 14:00                                                               ` Harsh Patel
2019-02-05 14:12                                                                 ` Wiles, Keith
2019-02-05 14:22                                                                   ` Harsh Patel
2019-02-05 14:27                                                                     ` Wiles, Keith
2019-02-05 14:33                                                                       ` Harsh Patel
