Subject: Re: [dpdk-dev] Performance impact with QoS
From: satish
To: "dev@dpdk.org"
Date: Sun, 16 Nov 2014 22:02:53 -0800

Hi All,
Can someone please provide comments on the queries in the mail below?

Regards,
Satish Babu

On Mon, Nov 10, 2014 at 4:24 PM, satish wrote:
> Hi,
> I need comments on the performance impact of DPDK QoS.
>
> We are developing an application based on DPDK.
> Our application supports IPv4 forwarding with and without QoS.
>
> Without QoS, we are achieving almost full wire rate (bidirectional
> traffic) with 128-, 256- and 512-byte packets.
> But when we enable QoS, performance drops to half for 128- and 256-byte
> packets.
> For 512-byte packets we do not observe any drop even after enabling QoS
> (we still achieve full wire rate).
> The traffic used in both cases is the same (one stream, with a QoS match
> to the first queue in traffic class 0).
>
> In our application, we use memory buffer pools to receive the packet
> bursts (no ring buffer is used).
> The same buffers are used during packet processing and TX (enqueue and
> dequeue). All of the above is handled on the same core.
>
> For normal forwarding (without QoS), we use rte_eth_tx_burst() for TX.
>
> For forwarding with QoS, we call rte_sched_port_pkt_write(),
> rte_sched_port_enqueue() and rte_sched_port_dequeue()
> before rte_eth_tx_burst().
>
> We understand that the performance dip for 128- and 256-byte packets is
> because more packets have to be processed than with 512-byte packets.
>
> Can someone comment on the performance dip in my case with QoS enabled?
> [1] Could this be due to inefficient use of the RTE calls for QoS?
> [2] Is it poor buffer management?
> [3] Any other comments?
>
> To achieve good performance in the QoS case, is it necessary to use a
> worker thread (running on a different core) with a ring buffer?
>
> Please provide your comments.
>
> Thanks in advance.
>
> Regards,
> Satish Babu

--
Regards,
Satish Babu
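
[Editor's note: for reference, a minimal sketch of the single-core QoS TX
path described in the quoted mail, written against the rte_sched API of the
DPDK releases current at the time (rte_sched_port_pkt_write() gained an
extra "port" argument in later releases). The names qos_port, port_id,
queue_id, BURST_SIZE and the fixed subport/pipe/TC/queue mapping are
placeholders for illustration, not the poster's actual code.]

    /* Sketch: classify, enqueue into the hierarchical scheduler, then
     * dequeue and transmit, all on the same lcore as in the quoted mail. */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_meter.h>
    #include <rte_sched.h>

    #define BURST_SIZE 32

    static void
    qos_tx_path(struct rte_sched_port *qos_port,
                uint8_t port_id, uint16_t queue_id,
                struct rte_mbuf **rx_pkts, uint32_t n_rx)
    {
            struct rte_mbuf *tx_pkts[BURST_SIZE];
            uint32_t i;
            int n_tx;

            /* Tag each mbuf with subport/pipe/traffic class/queue before
             * enqueue. Here everything maps to subport 0, pipe 0, TC 0,
             * queue 0, matching the single-stream test described above. */
            for (i = 0; i < n_rx; i++)
                    rte_sched_port_pkt_write(rx_pkts[i], 0, 0, 0, 0,
                                             e_RTE_METER_GREEN);

            /* Hand the burst to the scheduler... */
            rte_sched_port_enqueue(qos_port, rx_pkts, n_rx);

            /* ...and immediately pull out whatever it is willing to send. */
            n_tx = rte_sched_port_dequeue(qos_port, tx_pkts, BURST_SIZE);

            if (n_tx > 0)
                    rte_eth_tx_burst(port_id, queue_id, tx_pkts,
                                     (uint16_t)n_tx);
    }

The usual alternative, alluded to in the last question of the mail, is to
split this across two cores: one thread enqueues into the scheduler and
another dequeues and calls rte_eth_tx_burst(), with an rte_ring between the
forwarding and QoS stages.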