Subject: Re: [dpdk-dev] Fwd: high latency detected in IP pipeline example
From: James Huang
Date: Mon, 17 Feb 2020 23:18:43 -0800
To: Victor Huertas
Cc: dev <dev@dpdk.org>, cristian.dumitrescu@intel.com

No. We didn't see a noticeable throughput difference in our test.

On Mon., Feb. 17, 2020, 11:04 p.m. Victor Huertas wrote:

> Thanks James for your quick answer.
> I guess that this configuration modification implies that the packets must
> be written one by one into the sw ring. Did you notice a loss of performance
> (in throughput) in your application because of that?
>
> Regards
>
> On Tue., Feb. 18, 2020, 0:10, James Huang wrote:
>
>> Yes, I experienced a similar issue in my application. In short, setting
>> the SWQ write burst value to 1 may reduce the latency significantly.
>> The default write burst value is 32.
>>
>> On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas wrote:
>>
>>> Hi all,
>>>
>>> I am developing my own DPDK application based on the dpdk-stable
>>> ip_pipeline example.
>>> At this moment I am using the 17.11 LTS version of DPDK and I am
>>> observing some strange behaviour. Maybe it is an old issue that can be
>>> solved quickly, so I would appreciate it if some expert could shed some
>>> light on this.
>>>
>>> The ip_pipeline example allows you to develop pipelines that perform
>>> specific packet processing functions (ROUTING, FLOW_CLASSIFYING, etc.).
>>> The thing is that I am extending some of these pipelines with my own.
>>> However, I want to take advantage of the built-in ip_pipeline capability
>>> of arbitrarily assigning the logical core where the pipeline (f_run()
>>> function) must be executed, so that I can adapt the packet processing
>>> power to the number of cores available.
>>> Taking this into account, I have observed something strange. I show you
>>> a simple example below.
>>>
>>> Case 1:
>>> [PIPELINE 0 MASTER core=0]
>>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=2] ---SWQ2---> [PIPELINE 3 core=3]
>>>
>>> Case 2:
>>> [PIPELINE 0 MASTER core=0]
>>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2---> [PIPELINE 3 core=1]
>>>
>>> I send a ping between two hosts connected at both sides of the pipeline
>>> model, which allows these pings to cross all the pipelines (from 1 to 3).
>>> What I observe in Case 1 (each pipeline has its own thread on a different
>>> core) is that the reported RTT is less than 1 ms, whereas in Case 2 (all
>>> pipelines except MASTER run in the same thread) it is 20 ms. Furthermore,
>>> in Case 2, if I increase the packet rate a lot (hundreds of Mbps), this
>>> RTT decreases to 3 or 4 ms.
>>>
>>> Has somebody observed this behaviour in the past? Can it be solved
>>> somehow?
>>>
>>> Thanks a lot for your attention
>>>
>>> --
>>> Victor
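For reference, both knobs discussed in this thread (the core each pipeline runs
on, and the SWQ write burst) live in the ip_pipeline configuration file rather
than in the code. Below is a rough sketch of the relevant sections in the
17.11-era format; the section and parameter names (PIPELINE*/SWQ*, core,
pktq_in/pktq_out, burst_write) are quoted from memory of that example and the
pipeline types are placeholders, so please verify them against the DPDK 17.11
ip_pipeline sample application guide before copying:

; Case 1 layout: every pipeline on its own core.
; For Case 2, change the core values of PIPELINE2 and PIPELINE3 to 1.

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
; placeholder type standing in for the custom pipeline
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0
pktq_out = SWQ0

[PIPELINE2]
type = ROUTING
core = 2
pktq_in = SWQ0
pktq_out = SWQ1

[PIPELINE3]
type = PASS-THROUGH
core = 3
pktq_in = SWQ1
pktq_out = TXQ0.0

[SWQ0]
; default write burst is 32; 1 flushes every packet for lower latency
burst_write = 1

[SWQ1]
burst_write = 1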
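One plausible reading of the numbers in this thread: the SWQ writer buffers
packets and only pushes them to the underlying rte_ring once the write burst
fills up (or a periodic flush runs), so a lone ICMP echo at a low packet rate
can sit in that buffer for a while, which would account for the ~20 ms RTT and
for the drop to 3-4 ms once hundreds of Mbps keep the 32-packet burst filling
quickly. Setting burst_write to 1 makes the writer hand each packet to the ring
immediately; the trade-off is giving up the batching that amortizes ring access
costs, so it is worth re-measuring throughput in each application, although per
James's test above no noticeable throughput difference was observed.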