From: "Jayakumar, Muthurajan" <muthurajan.jayakumar@intel.com>
To: Harrison Ford <ogi@myself.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Packet drop issue on DPDK-based application
Date: Sun, 26 Oct 2014 18:17:08 +0000
Message-ID: <5D695A7F6F10504DBD9B9187395A21797D1556C4@ORSMSX112.amr.corp.intel.com>
In-Reply-To: <trinity-88c87ecb-2252-46ad-9957-15b985b41912-1414311853814@3capp-mailcom-lxa16>
Hi,
1) One thing to do is to instrument the database logging section of the code by reading the Time Stamp Counter (rdtsc) at entry and again at exit. The difference will indicate how much load the logging is adding (a minimal timing sketch follows below).
2) If it turns out that the cycle budget is not sufficient for a run-to-completion model (like l2fwd), then a pipeline model can be used instead; the load_balancer sample application can be taken as a starting point (a rough sketch of such a split follows below).
3) To keep the number of variables to a minimum, can the application initially be run in a physical environment instead of a virtual one?
(P.S.: two further suggestions below: bulk enqueue/dequeue and software prefetch.)
4) Bulk enqueue/dequeue: when passing packets through a ring, using bulk (burst) enqueue/dequeue has been observed to improve throughput, since it amortizes the per-call overhead over many packets (covered by the pipeline sketch below).
5) Software prefetch: the l3fwd sample application shows how software prefetch can be used to hide memory latency (see the prefetch sketch below).
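For point 1, here is a minimal sketch of the kind of TSC instrumentation meant above, using rte_rdtsc() and rte_get_tsc_hz() from <rte_cycles.h>; db_log_packet() is just a placeholder for whatever your logging routine is called:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <rte_cycles.h>   /* rte_rdtsc(), rte_get_tsc_hz() */
#include <rte_mbuf.h>

/* Placeholder for the application's database logging routine. */
extern void db_log_packet(struct rte_mbuf *m);

static uint64_t log_cycles;   /* total cycles spent in logging */
static uint64_t log_calls;    /* number of logged packets      */

static inline void
timed_db_log(struct rte_mbuf *m)
{
        uint64_t start = rte_rdtsc();          /* TSC at entry */
        db_log_packet(m);
        log_cycles += rte_rdtsc() - start;     /* TSC delta at exit */
        log_calls++;
}

/* Call this periodically (e.g. from the stats timer) to see the cost. */
static void
print_log_cost(void)
{
        if (log_calls == 0)
                return;
        double cyc = (double)log_cycles / log_calls;
        printf("db logging: %.0f cycles/packet (%.2f us at %" PRIu64 " Hz)\n",
               cyc, cyc * 1e6 / rte_get_tsc_hz(), rte_get_tsc_hz());
}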
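For points 2 and 4, a rough sketch of the split being suggested: the forwarding lcore hands packets to a logging lcore through an rte_ring using burst enqueue, and the logging lcore drains the ring in bursts, so the slow database work never stalls RX. Names like log_ring and db_log_packet() are placeholders for this example, and the trailing NULL argument of the burst calls exists only on newer DPDK releases (drop it on the 1.x API):

#include <stdint.h>
#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define LOG_BURST 32

extern void db_log_packet(struct rte_mbuf *m);   /* placeholder */
static struct rte_ring *log_ring;

/* Forwarding lcore: hand a received burst to the logger in one call. */
static inline void
queue_for_logging(struct rte_mbuf **pkts, uint16_t n)
{
        unsigned int sent = rte_ring_enqueue_burst(log_ring,
                        (void **)pkts, n, NULL);
        while (sent < n)                /* ring full: drop the rest */
                rte_pktmbuf_free(pkts[sent++]);
}

/* Logging lcore: drain the ring in bursts, amortizing per-call overhead. */
static int
log_lcore_main(void *arg)
{
        struct rte_mbuf *pkts[LOG_BURST];
        (void)arg;

        for (;;) {
                unsigned int n = rte_ring_dequeue_burst(log_ring,
                                (void **)pkts, LOG_BURST, NULL);
                for (unsigned int i = 0; i < n; i++) {
                        db_log_packet(pkts[i]);
                        /* the real application would TX the packet here */
                        rte_pktmbuf_free(pkts[i]);
                }
        }
        return 0;
}

/* At init time (core id 2 is just an example):
 *   log_ring = rte_ring_create("log_ring", 4096, rte_socket_id(),
 *                              RING_F_SP_ENQ | RING_F_SC_DEQ);
 *   rte_eal_remote_launch(log_lcore_main, NULL, 2);
 */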
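For point 5, this is roughly the prefetch pattern the l3fwd sample uses: issue rte_prefetch0() on the packet data a few packets ahead of the one being processed, so the header is already in cache when it is touched (on 2014-era DPDK the struct is called ether_hdr rather than rte_ether_hdr):

#include <stdint.h>
#include <rte_prefetch.h>
#include <rte_mbuf.h>
#include <rte_ether.h>

#define PREFETCH_OFFSET 3

/* Process a received burst, prefetching a few packets ahead
 * (same idea as the main loop of the l3fwd sample). */
static void
process_burst(struct rte_mbuf **pkts, uint16_t n)
{
        uint16_t i;

        /* Prime the pipeline: prefetch the first few packets. */
        for (i = 0; i < PREFETCH_OFFSET && i < n; i++)
                rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));

        for (i = 0; i < n; i++) {
                if (i + PREFETCH_OFFSET < n)
                        rte_prefetch0(rte_pktmbuf_mtod(
                                        pkts[i + PREFETCH_OFFSET], void *));

                /* By now pkts[i]'s data should already be in cache. */
                struct rte_ether_hdr *eth =
                        rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
                (void)eth;      /* ... read src/dst MAC, log, forward ... */
        }
}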
Thanks,
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Harrison Ford
Sent: Sunday, October 26, 2014 1:24 AM
To: dev@dpdk.org
Subject: [dpdk-dev] Packet drop issue on DPDK-based application
Hi, all.
I am trying to write an application that is supposed to receive packets on one interface and, before forwarding them out the other, log the source and destination addresses into a database for statistical purposes. I started by modifying the l2fwd example, and everything worked perfectly until I added the database logging. After that, the number of received packets dropped to about one third of the total number of sent packets (I am able to determine the number of sent packets precisely, since I am using tcpreplay to send packets from a pcap file). The database logging code is obviously slowing down the entire process, but I am not sure how to resolve this issue. I have tried increasing the RX queue size (the NB_RX_DESCRIPTORS value) and the memory pool size, but nothing changed. The packets are enqueued in a ring (i.e. an rte_ring) where they await logging, after which they are forwarded out without any further processing.
The application is simple enough and should be working fast enough. The incoming packet rate is around 50 Mbps, which is also not that fast.
The application runs on a virtual machine, and I am using a single lcore to test it. What should I try in order to solve this problem, and what is the best way to debug such behaviour?
Thank you.
Paul