From: satish <nsatishbabu@gmail.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Performance impact with QoS
Date: Sun, 16 Nov 2014 22:02:53 -0800
Message-ID: <CADVv77qxGt3hzCWqYVS4acxR2dpMyk=xy=CdokX3XMQDVCMaFA@mail.gmail.com>
In-Reply-To: <CADVv77q9wXVVx8YO=rxdju39EzUYP=mPqsMThM+9-0erXsbMzQ@mail.gmail.com>

Hi All,
Can someone please provide comments on the queries in the mail below?

Regards,
Satish Babu

On Mon, Nov 10, 2014 at 4:24 PM, satish <nsatishbabu@gmail.com> wrote:

> Hi,
> I would like your comments on the performance impact of DPDK QoS.
>
> We are developing an application based on DPDK.
> Our application supports IPv4 forwarding with and without QoS.
>
> Without QoS, we achieve almost full wire rate (bidirectional traffic)
> with 128-, 256- and 512-byte packets.
> With QoS enabled, performance drops to half for 128- and 256-byte packets.
> For 512-byte packets we see no drop even with QoS enabled (still full
> wire rate).
> The traffic is the same in both cases (one stream whose QoS classification
> matches the first queue of traffic class 0).
>
> In our application, we use memory buffer pools to receive the packet
> bursts (no ring buffer is used).
> The same buffers are used during packet processing and TX (enqueue and
> dequeue).
> All of the above is handled on the same core.
>
> For normal forwarding (without QoS), we use rte_eth_tx_burst() for TX.
>
> For forwarding with QoS, we call rte_sched_port_pkt_write(),
> rte_sched_port_enqueue() and rte_sched_port_dequeue()
> before rte_eth_tx_burst().
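>
> To make the QoS path concrete, here is a simplified sketch of what we do
> per burst (not our exact code: QOS_BURST_SIZE and the classification
> values are placeholders, and the API signatures are those of the DPDK
> release we are using, so they may differ in other releases):
>
>   #include <rte_ethdev.h>
>   #include <rte_mbuf.h>
>   #include <rte_meter.h>
>   #include <rte_sched.h>
>
>   #define QOS_BURST_SIZE 64
>
>   static void
>   qos_tx_path(uint8_t port_id, uint16_t queue_id,
>               struct rte_sched_port *sched)
>   {
>       struct rte_mbuf *pkts[QOS_BURST_SIZE];
>       uint16_t nb_rx, nb_tx, i;
>       int nb_deq;
>
>       nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, QOS_BURST_SIZE);
>
>       /* Classify: every packet goes to subport 0, pipe 0, traffic
>        * class 0, queue 0, matching our single test stream. */
>       for (i = 0; i < nb_rx; i++)
>           rte_sched_port_pkt_write(pkts[i], 0, 0, 0, 0,
>                                    e_RTE_METER_GREEN);
>
>       /* Hand the burst to the hierarchical scheduler; packets that
>        * cannot be queued are dropped and freed inside the call. */
>       rte_sched_port_enqueue(sched, pkts, nb_rx);
>
>       /* Pull out whatever the scheduler is willing to send now. */
>       nb_deq = rte_sched_port_dequeue(sched, pkts, QOS_BURST_SIZE);
>
>       nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_deq);
>
>       /* Free anything the NIC did not accept. */
>       for (i = nb_tx; i < nb_deq; i++)
>           rte_pktmbuf_free(pkts[i]);
>   }
>
> Enqueue and dequeue run back to back on the same core, with no ring in
> between.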
>
> We understand that the performance dip for 128- and 256-byte packets is
> partly because more packets per second have to be processed than with
> 512-byte packets.
>
> Can someone comment on the performance dip in our case with QoS enabled?
> [1] Could this be due to inefficient use of the RTE QoS calls?
> [2] Is it poor buffer management?
> [3] Any other comments?
>
> To achieve good performance in the QoS case, is it necessary to use a
> worker thread (running on a different core) connected through a ring buffer?
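>
> If such a split is recommended, we imagine something roughly like the
> sketch below (purely hypothetical: the ring name, burst size and lcore
> functions are placeholders, and the rte_ring burst calls take extra
> arguments in some DPDK releases):
>
>   #include <rte_ethdev.h>
>   #include <rte_mbuf.h>
>   #include <rte_ring.h>
>   #include <rte_sched.h>
>
>   #define QOS_BURST_SIZE 64
>
>   /* Single-producer/single-consumer ring created elsewhere with
>    * rte_ring_create(). */
>   static struct rte_ring *rx_to_sched_ring;
>
>   /* I/O lcore: receive, classify, push the burst to the QoS lcore. */
>   static int
>   rx_lcore_main(void *arg)
>   {
>       uint8_t port_id = *(uint8_t *)arg;
>       struct rte_mbuf *pkts[QOS_BURST_SIZE];
>       unsigned int sent;
>       uint16_t nb_rx;
>
>       for (;;) {
>           nb_rx = rte_eth_rx_burst(port_id, 0, pkts, QOS_BURST_SIZE);
>           if (nb_rx == 0)
>               continue;
>           /* rte_sched_port_pkt_write() classification would go here. */
>           sent = rte_ring_sp_enqueue_burst(rx_to_sched_ring,
>                                            (void **)pkts, nb_rx);
>           for (; sent < nb_rx; sent++)
>               rte_pktmbuf_free(pkts[sent]);   /* ring full: drop */
>       }
>       return 0;
>   }
>
>   /* QoS lcore: owns the scheduler port and the TX queue. */
>   static int
>   sched_tx_lcore_main(void *arg)
>   {
>       struct rte_sched_port *sched = arg;
>       struct rte_mbuf *pkts[QOS_BURST_SIZE];
>       unsigned int n;
>       int nb_deq;
>
>       for (;;) {
>           n = rte_ring_sc_dequeue_burst(rx_to_sched_ring,
>                                         (void **)pkts, QOS_BURST_SIZE);
>           if (n > 0)
>               rte_sched_port_enqueue(sched, pkts, n);
>
>           nb_deq = rte_sched_port_dequeue(sched, pkts, QOS_BURST_SIZE);
>           if (nb_deq > 0)
>               rte_eth_tx_burst(0 /* port */, 0 /* queue */, pkts, nb_deq);
>       }
>       return 0;
>   }
>
> In this layout the RX lcore only receives and classifies, while the second
> lcore owns the scheduler port and the TX queue, so the two stages can run
> in parallel.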
>
> Please provide your comments.
>
> Thanks in advance.
>
> Regards,
> Satish Babu
>
>


-- 
Regards,
Satish Babu
