Subject: Re: [dpdk-dev] Fwd: high latency detected in IP pipeline example
From: James Huang
Date: Mon, 17 Feb 2020 15:10:35 -0800
To: Victor Huertas
Cc: dev, cristian.dumitrescu@intel.com

Yes, I experienced a similar issue in my application. The short answer: setting the SWQ write burst size to 1 may reduce the latency significantly. The default write burst size is 32.

On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas wrote:
> Hi all,
>
> I am developing my own DPDK application based on the dpdk-stable
> ip_pipeline example.
> At the moment I am using the 17.11 LTS version of DPDK and I am
> observing some strange behaviour. Maybe it is an old issue that can be
> solved quickly, so I would appreciate it if some expert could shed
> light on it.
>
> The ip_pipeline example lets you build pipelines that perform specific
> packet-processing functions (ROUTING, FLOW_CLASSIFYING, etc.), and I am
> extending some of these pipelines with my own. I want to take advantage
> of the built-in ip_pipeline capability of arbitrarily assigning the
> logical core where each pipeline (its f_run() function) is executed, so
> that I can adapt the packet-processing power to the number of cores
> available.
> With this setup I have observed something strange. I show you a simple
> example below.
>
> Case 1:
> [PIPELINE 0 MASTER core=0]
> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=2] ---SWQ2---> [PIPELINE 3 core=3]
>
> Case 2:
> [PIPELINE 0 MASTER core=0]
> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2---> [PIPELINE 3 core=1]
>
> I send pings between two hosts connected at either end of the pipeline
> chain, so the pings cross all the pipelines (from 1 to 3).
> In Case 1 (each pipeline in its own thread on a different core) the
> reported RTT is less than 1 ms, whereas in Case 2 (all pipelines except
> MASTER running in the same thread) it is 20 ms. Furthermore, in Case 2,
> if I increase the packet rate a lot (hundreds of Mbps), the RTT
> decreases to 3 or 4 ms.
>
> Has anybody observed this behaviour before? Can it be solved somehow?
>
> Thanks a lot for your attention.
> --
> Victor
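For reference, in the 17.11 ip_pipeline the burst sizes are configured per software queue in the application's config file. A sketch of the relevant section is below; the section name and the `size`/`burst_read`/`burst_write` parameters are as I remember them from 17.11, so please double-check them against the config parser in your tree:

```ini
; Hypothetical [SWQ] section for the 17.11 ip_pipeline config file.
; burst_write = 1 makes the writer port flush every packet to the ring
; immediately instead of buffering up to 32 packets before enqueuing,
; trading per-packet enqueue overhead for lower latency.
[SWQ1]
size = 4096
burst_read = 32
burst_write = 1
```

This would also be consistent with the symptom described: with a write burst of 32, a lone ping can sit in the writer's buffer until the burst fills (or a periodic flush runs), while at hundreds of Mbps the burst fills almost instantly, which is why the RTT drops as the offered load increases.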