DPDK usage discussions
* [dpdk-users]  Bad IO-latency when sending one rte_mbuf at a time
@ 2018-07-19 21:09 Sungho Hong
  0 siblings, 0 replies; only message in thread
From: Sungho Hong @ 2018-07-19 21:09 UTC (permalink / raw)
  To: users

Hello, I am testing the round-trip latency of a single message using
DPDK and POSIX sockets.

By round-trip latency I mean the following: I send one message, in other
words a single rte_mbuf *m[1], from the client to the server, and the
server echoes it back to the client.


I have tested the same thing on both POSIX and DPDK, and DPDK's
performance is much worse in this case.

For example, when using POSIX
the total round-trip latency is 1275.666667 usec,

while when I use DPDK
the total round-trip latency is 61322.


In the past, I have only tested DPDK on bulk transfers, for example
sending 10 gigabytes of files, and in that case I remember that DPDK
outperformed POSIX.

I believe that I am using DPDK in the wrong way, or missing something
very critical. The test cases that I have built can be viewed here:

https://github.com/SungHoHong2/Ceph-Experiment/tree/master/DPDK-FUSE/FUSE-2nd


Would it be possible to know how I can improve the performance of a
round-trip latency of a single message?
(Or is this not ideal for DPDK?)


Best
Sungho Hong
