From: satish
Date: Tue, 11 Nov 2014 00:24:55 +0000
To: "dev@dpdk.org"
Subject: [dpdk-dev] Performance impact with QoS

Hi,

I need comments on the performance impact of DPDK QoS.

We are developing an application based on DPDK. The application supports IPv4 forwarding with and without QoS.

Without QoS, we achieve almost full wire rate (bi-directional traffic) with 128, 256 and 512 byte packets. But with QoS enabled, performance drops to half for 128 and 256 byte packets. For 512 byte packets we see no drop even with QoS enabled (still full wire rate). The traffic used in both cases is the same (one stream with a QoS match to the first queue in traffic class 0).

In our application, we use memory buffer pools (mempools) to receive the packet bursts; no ring buffer is used. The same buffers are used during packet processing and TX (enqueue and dequeue). All of this is handled on the same core.

For normal forwarding (without QoS), we use rte_eth_tx_burst() for TX. For forwarding with QoS, we call rte_sched_port_pkt_write(), rte_sched_port_enqueue() and rte_sched_port_dequeue() before rte_eth_tx_burst().

We understand that the dip for 128 and 256 byte packets is because, at the same bit rate, more packets per second have to be processed than with 512 byte packets.

Can someone comment on the performance dip in my case with QoS enabled?
[1] Can this be caused by inefficient use of the rte_sched calls for QoS?
[2] Is it poor buffer management?
[3] Any other comments?

To achieve good performance in the QoS case, is it necessary to use a worker thread (running on a different core) with a ring buffer?

Please provide your comments. Thanks in advance.

Regards,
Satish Babu
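
P.S. For reference, the per-burst QoS TX path looks roughly like the sketch below. This is a simplified illustration, not our actual code: the function name, burst size and the subport/pipe/traffic class/queue values are placeholders (in the real code they come from classification), and it assumes the rte_sched_port was already configured at init.

#include <rte_mbuf.h>
#include <rte_meter.h>
#include <rte_sched.h>
#include <rte_ethdev.h>

#define QOS_TX_BURST_MAX 64

/*
 * Per-burst QoS TX path, all on one core (simplified sketch; the
 * subport/pipe/traffic-class/queue values are placeholders -- in the
 * real code they come from classification).
 */
void
qos_tx_burst(struct rte_sched_port *sched, uint8_t eth_port, uint16_t tx_queue,
             struct rte_mbuf **pkts, uint32_t n_pkts)
{
    struct rte_mbuf *out[QOS_TX_BURST_MAX];
    uint32_t i;
    int n_deq;
    uint16_t n_tx;

    /* Tag each mbuf with its position in the scheduler hierarchy. */
    for (i = 0; i < n_pkts; i++)
        rte_sched_port_pkt_write(pkts[i],
                                 0 /* subport */, 0 /* pipe */,
                                 0 /* traffic class */, 0 /* queue */,
                                 e_RTE_METER_GREEN);

    /* Hand the burst to the hierarchical scheduler; packets it cannot
     * accept are dropped inside the scheduler. */
    rte_sched_port_enqueue(sched, pkts, n_pkts);

    /* Immediately pull out whatever the scheduler is willing to send. */
    n_deq = rte_sched_port_dequeue(sched, out, QOS_TX_BURST_MAX);
    if (n_deq <= 0)
        return;

    /* Transmit; free anything the NIC TX queue did not take. */
    n_tx = rte_eth_tx_burst(eth_port, tx_queue, out, (uint16_t)n_deq);
    for (i = n_tx; i < (uint32_t)n_deq; i++)
        rte_pktmbuf_free(out[i]);
}

Everything in this sketch, together with RX and the IPv4 lookup, currently runs on the same lcore, which is why I am asking whether the enqueue/dequeue stage needs to move to a separate core behind an rte_ring.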