From mboxrd@z Thu Jan  1 00:00:00 1970
From: Victor Huertas
Date: Tue, 18 Feb 2020 08:04:21 +0100
To: James Huang
Cc: dev <dev@dpdk.org>, cristian.dumitrescu@intel.com
Subject: Re: [dpdk-dev] Fwd: high latency detected in IP pipeline example
List-Id: DPDK patches and discussions

Thanks, James, for your quick answer.
I guess that this configuration change implies that the packets must be
written one by one into the SW ring. Did you notice a loss of performance
(in throughput) in your application because of that?

Regards
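PS: To check that I understand the change you suggest, I sketched it in
librte_port terms below. This is a minimal sketch, not the actual
ip_pipeline code: the function and variable names are placeholders, and I
am assuming the SWQ output port is the standard ring writer from 17.11.

#include <stdint.h>
#include <rte_ring.h>
#include <rte_port_ring.h>
#include <rte_pipeline.h>

/* Create a pipeline output port over an SWQ with a write burst of 1
 * instead of the default 32, so every packet is enqueued on the ring
 * immediately instead of waiting in the writer's internal buffer. */
static int
add_swq_out_port(struct rte_pipeline *p, struct rte_ring *swq)
{
	struct rte_port_ring_writer_params ring_params = {
		.ring = swq,
		.tx_burst_sz = 1,	/* default in the example is 32 */
	};
	struct rte_pipeline_port_out_params port_params = {
		.ops = &rte_port_ring_writer_ops,
		.arg_create = &ring_params,
	};
	uint32_t port_id;

	return rte_pipeline_port_out_create(p, &port_params, &port_id);
}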
On Tue., Feb. 18, 2020, 0:10, James Huang wrote:

> Yes, I experienced a similar issue in my application. As a short answer,
> setting the SWQs' write burst value to 1 may reduce the latency
> significantly. The default write burst value is 32.
>
> On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas wrote:
>
>> Hi all,
>>
>> I am developing my own DPDK application, basing it on the dpdk-stable
>> ip_pipeline example.
>> At the moment I am using the 17.11 LTS version of DPDK, and I am
>> observing some strange behaviour. Maybe it is an old issue that can be
>> solved quickly, so I would appreciate it if some expert could shed some
>> light on this.
>>
>> The ip_pipeline example allows you to develop pipelines that perform
>> specific packet processing functions (ROUTING, FLOW_CLASSIFYING, etc.).
>> The thing is that I am extending some of these pipelines with my own.
>> However, I want to take advantage of the built-in ip_pipeline capability
>> of arbitrarily assigning the logical core where a pipeline (its f_run()
>> function) must be executed, so that I can adapt the packet processing
>> power to the number of cores available.
>> With this in mind, I have observed something strange, shown in the
>> simple example below.
>>
>> Case 1:
>> [PIPELINE 0 MASTER core=0]
>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=2] ---SWQ2---> [PIPELINE 3 core=3]
>>
>> Case 2:
>> [PIPELINE 0 MASTER core=0]
>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2---> [PIPELINE 3 core=1]
>>
>> I send a ping between two hosts connected at either side of this
>> pipeline model, so that the pings cross all the pipelines (from 1 to 3).
>> What I observe in Case 1 (each pipeline has its own thread on a
>> different core) is that the reported RTT is less than 1 ms, whereas in
>> Case 2 (all pipelines except MASTER run in the same thread) it is 20 ms.
>> Furthermore, in Case 2, if I increase the packet rate a lot (hundreds of
>> Mbps), this RTT decreases to 3 or 4 ms.
>>
>> Has somebody observed this behaviour in the past? Can it be solved
>> somehow?
>>
>> Thanks a lot for your attention.
>>
>> --
>> Victor
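PS2: For the archives, this is how I picture the per-core dispatch loop
that would explain the numbers above. It is a rough sketch in the spirit
of ip_pipeline's thread code, not the actual 17.11 sources; the loop
shape and the flush interval are my assumptions.

#include <stdint.h>
#include <rte_pipeline.h>

/* All pipelines mapped to one lcore run back to back, and writer ports
 * are only flushed periodically. With a 32-packet write burst, a lone
 * ping packet can therefore sit in a port buffer for many iterations
 * before it reaches the next pipeline's SWQ. */
static void
run_pipelines_on_lcore(struct rte_pipeline **p, unsigned int n)
{
	unsigned int j;
	uint64_t i;

	for (i = 0; ; i++) {
		for (j = 0; j < n; j++)
			rte_pipeline_run(p[j]);

		if ((i & 0x3FF) == 0)	/* flush interval: an assumption */
			for (j = 0; j < n; j++)
				rte_pipeline_flush(p[j]);
	}
}

If that picture is right, it would also explain why the RTT in Case 2
drops when the offered rate goes up: at hundreds of Mbps the 32-packet
bursts fill almost immediately, so packets no longer wait for a periodic
flush.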