From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suraj R Gupta
Date: Thu, 9 Jul 2020 02:12:20 +0530
To: Bev SCHWARTZ
Cc: "users@dpdk.org"
Subject: Re: [dpdk-users] Significant performance degradation when using tx buffers rather than rte_eth_tx_burst
List-Id: DPDK usage discussions

Hi Bev,

If my understanding is right, rte_eth_tx_burst() transmits a burst of
output packets immediately, up to the specified count. By contrast,
rte_eth_tx_buffer() only queues the packet in a buffer for the port;
the buffered packets are transmitted when the buffer becomes full or
when rte_eth_tx_buffer_flush() is called. Since you are buffering
packets one by one and then calling flush, this may have contributed
to the delay.

Thanks and Regards
Suraj R Gupta

On Wed, Jul 8, 2020 at 10:53 PM Bev SCHWARTZ wrote:

> I am writing a bridge using DPDK, where traffic read from one port is
> transmitted to the other. Here is the core of the program, based on
> basicfwd.c:
>
> while (!force_quit) {
>     nb_rx = rte_eth_rx_burst(rx_port, rx_queue, bufs, BURST_SIZE);
>     for (i = 0; i < nb_rx; i++) {
>         /* inspect packet */
>     }
>     nb_tx = rte_eth_tx_burst(tx_port, tx_queue, bufs, nb_rx);
>     for (i = nb_tx; i < nb_rx; i++) {
>         rte_pktmbuf_free(bufs[i]);
>     }
> }
>
> (A bunch of error checking and such left out for brevity.)
>
> This worked great; I got bandwidth equivalent to using a Linux bridge.
>
> I then tried using tx buffers instead. (Initialization code left out
> for brevity.) Here is the new loop:
>
> while (!force_quit) {
>     nb_rx = rte_eth_rx_burst(rx_port, rx_queue, bufs, BURST_SIZE);
>     for (i = 0; i < nb_rx; i++) {
>         /* inspect packet */
>         rte_eth_tx_buffer(tx_port, tx_queue, tx_buffer, bufs[i]);
>     }
>     rte_eth_tx_buffer_flush(tx_port, tx_queue, tx_buffer);
> }
>
> (Once again, error checking left out for brevity.)
>
> I am running this on 8 cores; each core has its own loop. (A tx_buffer
> is created for each core.)
>
> If I have well-balanced traffic across the cores, my performance goes
> down about 5%. If I have unbalanced traffic, such as all traffic coming
> from a single flow, my performance goes down 80%, from about 10 Gb/s
> to 2 Gb/s.
>
> I want to stress that the ONLY thing that changed in this code is how
> I transmit packets. Everything else is the same.
>
> Any idea why this would cause such a degradation in bit rate?
>
> -Bev

--
Thanks and Regards
Suraj R Gupta