From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vincent Li
Date: Wed, 27 May 2020 09:44:49 -0700 (PDT)
To: Pavel Vajarov
Cc: Vincent Li, users
Subject: Re: [dpdk-users] Performance troubleshooting of TCP/IP stack over DPDK.
User-Agent: Alpine 2.21 (OSX 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT
List-Id: DPDK usage discussions
Sender: "users" <users-bounces@dpdk.org>

On Wed, 27 May 2020, Pavel Vajarov wrote:

> > Hi there,
> >
> > We are trying to compare the performance of the DPDK+FreeBSD networking
> > stack against the standard Linux kernel, and we have trouble finding out
> > why the former is slower. The details are below.
> >
> > There is a project called F-Stack. It glues the networking stack from
> > FreeBSD 11.01 over DPDK.
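
(For anyone on the list who hasn't looked at F-Stack: the process owns the
NIC through DPDK, the FreeBSD stack runs in user space inside that process,
and the application is just a loop callback plus ff_* socket calls. A
minimal skeleton looks roughly like the sketch below; the config file
arguments and all error handling are left out.)

#include "ff_api.h"

/* Called by F-Stack on every iteration of its run loop; the application
 * polls its ff_* sockets here (typically with ff_epoll_wait or ff_kqueue). */
static int
loop(void *arg)
{
    return 0;
}

int
main(int argc, char *argv[])
{
    /* Reads the config file passed on the command line (DPDK EAL args,
     * ports, memory) and brings up DPDK plus the user-space FreeBSD
     * stack inside this process. */
    ff_init(argc, argv);

    /* F-Stack drives the DPDK RX/TX bursts and the TCP/IP stack, and
     * calls loop() once per iteration. */
    ff_run(loop, NULL);

    return 0;
}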
> > We made a setup to test the performance of a transparent TCP proxy based
> > on F-Stack and another one running on the standard Linux kernel.

> I assume you wrote your own TCP proxy based on the F-Stack library?

> Yes, I wrote a transparent TCP proxy based on the F-Stack library for the
> tests. The thing is that we have our transparent caching proxy running on
> Linux, and now we are trying to find ways to improve its performance and
> hardware requirements.

> > Here are the test results:
> >
> > 1. The Linux based proxy was able to handle about 1.7-1.8 Gbps before it
> > started to throttle the traffic. No visible CPU usage was observed on
> > core 0 during the tests; only core 1, where the application and the IRQs
> > were pinned, took the load.
> >
> > 2. The DPDK+FreeBSD proxy was able to handle 700-800 Mbps before it
> > started to throttle the traffic. No visible CPU usage was observed on
> > core 0 during the tests; only core 1, where the application was pinned,
> > took the load. In some of the later tests I changed the number of packets
> > read from the network card in one call and the number of events handled
> > in one call to epoll. With these changes I was able to increase the
> > throughput to 900-1000 Mbps, but couldn't push it further.
> >
> > 3. We did another test with the DPDK+FreeBSD proxy just to get some more
> > information about the problem. We disabled the TCP proxy functionality
> > and let the packets simply be IP forwarded by the FreeBSD stack. In this
> > test we reached up to 5 Gbps without being able to throttle the traffic;
> > we just don't have more traffic to redirect there at the moment. So the
> > bottleneck seems to be either in the upper levels of the network stack or
> > in the application code.

> I once tested the F-Stack ported Nginx and used the Nginx TCP proxy; I
> could achieve above 6 Gbps with iperf. After seeing your email, I set up
> PCI passthrough to a KVM VM and ran F-Stack Nginx as a web server with an
> HTTP load test, no proxy, and I could achieve about 6.5 Gbps.

> Can I ask on how many cores you ran the Nginx?

I used 4 cores on the VM.

> The results from our tests are from a single core. We are trying to reach
> max performance on a single core because we know that the F-Stack solution
> has linear scalability. We tested it on 3 cores and got around 3 Gbps,
> which is 3 times the result on a single core.
> Also, we test with traffic from one internet service provider. We just
> redirect a few IP pools to the test machine for the duration of the tests,
> see at which point the proxy starts choking the traffic, and then switch
> the traffic back.

I used the mTCP ported apache bench to do the load test. Since the F-Stack
machine and the apache bench machine are directly connected with a cable,
and running a capture on either the mTCP or the F-Stack side would affect
performance, I do not have a capture to see whether there are significant
packet drops when achieving 6.5 Gbps (see the counter sketch further down).

> > There is a Huawei switch which redirects the traffic to this server. It
> > regularly sends arping, and if the server doesn't respond it stops the
> > redirection. So we assumed that when the redirection stops, it's because
> > the server throttles the traffic, drops packets, and can't respond to the
> > arping because of the packet drops.

> I did have some weird issue with ARPing on F-Stack; I manually added a
> static ARP entry for the F-Stack interface for each F-Stack process. Not
> sure if it is related to your ARPing, see
> https://github.com/F-Stack/f-stack/issues/515
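
On the question of whether the box really drops packets when the switch
stops redirecting: instead of a capture, the DPDK port counters should tell
you. If imissed keeps climbing while the traffic is being throttled, the
NIC's RX queues are overflowing before the stack or the proxy ever sees the
packets. A rough sketch of what I mean is below; the port id is an
assumption, and this is meant to be called from inside the F-Stack/DPDK
process (for example once per second from the loop), not as a standalone
program:

#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Dump the NIC-level RX counters for one DPDK port.
 * imissed   = packets dropped by the HW because the RX queues were full
 * ierrors   = packets the HW marked as errored
 * rx_nombuf = RX failures because no mbufs were available
 * Port 0 is only an example; use whichever port F-Stack attached to. */
static void
dump_rx_drops(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0) {
        fprintf(stderr, "rte_eth_stats_get(%u) failed\n", port_id);
        return;
    }

    printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
           " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           port_id,
           stats.ipackets, stats.imissed,
           stats.ierrors, stats.rx_nombuf);
}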
> Hmm, I've missed that issue. Thanks a lot for it; it may help with the
> tests and with the next stage.
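
One more thought on the two knobs you tuned in test 2. I assume "the number
of handled events in one call to epoll" is the maxevents argument you pass
to ff_epoll_wait(), and "the number of read packets in one call from the
network card" is the burst size F-Stack hands to rte_eth_rx_burst() in its
DPDK layer. If so, the loop shape I have in mind is roughly the sketch
below; the names and sizes are mine, not from your code:

#include <sys/epoll.h>

#include "ff_api.h"

#define MAX_EVENTS 512   /* knob: events handled per ff_epoll_wait() call */

static int epfd;         /* created with ff_epoll_create() during setup */

/* Callback that F-Stack invokes on every iteration of its internal loop.
 * Between invocations the stack does the DPDK RX/TX burst and TCP work,
 * so the longer we stay in here, the later the next burst is read. */
static int
proxy_loop(void *arg)
{
    struct epoll_event events[MAX_EVENTS];
    int i, nevents;

    /* timeout 0: poll and return, never block the stack */
    nevents = ff_epoll_wait(epfd, events, MAX_EVENTS, 0);
    for (i = 0; i < nevents; i++) {
        /* accept/read/forward on events[i].data.fd with
         * ff_accept(), ff_read(), ff_write(), ff_close() ... */
    }

    return 0;
}

If that matches your code, it might also be worth checking how much work
each event does before control returns to ff_epoll_wait(), since the stack
only gets the CPU back between loop iterations; raising maxevents alone
won't help much if individual events are expensive.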