From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vincent Li
Date: Tue, 26 May 2020 09:50:38 -0700 (PDT)
To: Pavel Vajarov
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Peformance troubleshouting of TCP/IP stack over DPDK.
List-Id: DPDK usage discussions <users.dpdk.org>

On Wed, 6 May 2020, Pavel Vajarov wrote:

> Hi there,
>
> We are trying to compare the performance of the DPDK+FreeBSD networking
> stack vs the standard Linux kernel, and we have trouble finding out why
> the former is slower. The details are below.
>
> There is a project called F-Stack. It glues the networking stack from
> FreeBSD 11.01 over DPDK. We made a setup to test the performance of a
> transparent TCP proxy based on F-Stack against another one running on
> the standard Linux kernel.

I assume you wrote your own TCP proxy based on the F-Stack library?

> Here are the test results:
>
> 1. The Linux-based proxy was able to handle about 1.7-1.8 Gbps before
> it started to throttle the traffic. No visible CPU usage was observed
> on core 0 during the tests; only core 1, where the application and the
> IRQs were pinned, took the load.
>
> 2. The DPDK+FreeBSD proxy was able to handle 700-800 Mbps before it
> started to throttle the traffic.
> No visible CPU usage was observed on core 0 during the tests; only
> core 1, where the application was pinned, took the load. In some of the
> later tests I changed the number of packets read from the network card
> in one call and the number of events handled in one call to epoll. With
> these changes I was able to increase the throughput to 900-1000 Mbps,
> but couldn't increase it further.
>
> 3. We ran another test with the DPDK+FreeBSD proxy just to give us some
> more information about the problem. We disabled the TCP proxy
> functionality and let the packets simply be IP-forwarded by the FreeBSD
> stack. In this test we reached up to 5 Gbps without being able to
> throttle the traffic; we just don't have more traffic to redirect there
> at the moment. So the bottleneck seems to be either in the upper levels
> of the network stack or in the application code.

I once tested F-Stack-ported Nginx and used the Nginx TCP proxy; I could
achieve above 6 Gbps with iperf. After seeing your email, I set up PCI
passthrough to a KVM VM and ran F-Stack Nginx as a web server with an
HTTP load test, no proxy; I could achieve about 6.5 Gbps.

> There is a Huawei switch which redirects the traffic to this server. It
> regularly sends arpings, and if the server doesn't respond it stops the
> redirection. So we assumed that when the redirection stops, it's
> because the server throttles the traffic and drops packets, and can't
> respond to the arping because of the packet drops.

I did have some weird issue with ARPing of F-Stack; I manually added a
static ARP entry for the F-Stack interface for each F-Stack process. Not
sure if it is related to your ARPing, see
https://github.com/F-Stack/f-stack/issues/515