From: Victor Huertas
Date: Wed, 19 Feb 2020 09:29:56 +0100
To: James Huang
Cc: dev, cristian.dumitrescu@intel.com
Subject: Re: [dpdk-dev] Fwd: high latency detected in IP pipeline example

OK James,

Thanks for sharing your own experience. What I need right now is to know
from the maintainers whether this latency behaviour is inherent to DPDK in
the particular case we are discussing. I would also appreciate it if a
maintainer could tell us whether there is a workaround or special
configuration that completely mitigates this latency. I guess there is one
mitigation mechanism, which is the approach the new ip_pipeline example
takes: if two or more pipelines run on the same core, the "connection"
between them is not a software queue but a "direct table connection". This
proposed approach has a big impact on my application, so I would like to
know whether there is another mitigation approach for the "old" version of
the ip_pipeline example.
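For what it is worth, my rough mental model of how several pipelines mapped
to one lcore get scheduled is sketched below. This is a simplified sketch
under my own assumptions (run_pipelines_on_this_core(), N_PIPELINES and
FLUSH_PERIOD are names and values I made up, and this is not the actual
ip_pipeline thread code): each loop iteration runs every pipeline once, and
output ports are only flushed every so many iterations, so at low packet
rates a packet can sit in an output buffer until the next flush.

#include <stdint.h>
#include <rte_pipeline.h>

#define N_PIPELINES   3
#define FLUSH_PERIOD  1024  /* assumption: flush every 1024 iterations */

static void
run_pipelines_on_this_core(struct rte_pipeline *p[N_PIPELINES])
{
	uint64_t iter;

	for (iter = 0; ; iter++) {
		uint32_t i;

		/* Run one burst of packet processing per pipeline. */
		for (i = 0; i < N_PIPELINES; i++)
			rte_pipeline_run(p[i]);

		/* Only every FLUSH_PERIOD iterations push out whatever is
		 * still sitting in the output port buffers. */
		if ((iter & (FLUSH_PERIOD - 1)) == 0)
			for (i = 0; i < N_PIPELINES; i++)
				rte_pipeline_flush(p[i]);
	}
}

If that picture is roughly right, the latency would be a property of the
thread and queue configuration rather than of DPDK itself, which is exactly
what I would like a maintainer to confirm.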
Thanks for your attention.

On Tue, Feb 18, 2020 at 23:09, James Huang () wrote:

> No. I didn't notice the RTT bouncing symptoms.
> In a high-throughput scenario, if multiple pipelines run on a single CPU
> core, it does increase the latency.
>
> Regards,
> James Huang
>
> On Tue, Feb 18, 2020 at 1:50 AM Victor Huertas wrote:
>
>> Dear James,
>>
>> I have done two different tests with the following configuration:
>> [PIPELINE 0 MASTER core=0]
>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2---> [PIPELINE 3 core=1]
>>
>> The first test (sending a single ping across all the pipelines to measure
>> the RTT) was done with burst_write set to 32 in SWQ1 and SWQ2. NOTE: every
>> time we call rte_ring_enqueue_burst in pipelines 1 and 2 we set the number
>> of packets to write to 1.
>>
>> The result of this first test is shown below:
>> 64 bytes from 192.168.0.101: icmp_seq=343 ttl=63 time=59.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=344 ttl=63 time=59.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=345 ttl=63 time=59.2 ms
>> 64 bytes from 192.168.0.101: icmp_seq=346 ttl=63 time=59.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=347 ttl=63 time=59.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=348 ttl=63 time=59.2 ms
>> 64 bytes from 192.168.0.101: icmp_seq=349 ttl=63 time=59.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=350 ttl=63 time=59.1 ms
>> 64 bytes from 192.168.0.101: icmp_seq=351 ttl=63 time=58.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=352 ttl=63 time=58.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=353 ttl=63 time=58.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=354 ttl=63 time=58.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=355 ttl=63 time=58.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=356 ttl=63 time=57.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=357 ttl=63 time=56.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=358 ttl=63 time=57.2 ms
>> 64 bytes from 192.168.0.101: icmp_seq=359 ttl=63 time=57.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=360 ttl=63 time=57.3 ms
>>
>> As you can see, the RTT is quite high and the range of values is more or
>> less stable.
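To make explicit what I think is going on in this first test: I believe the
SWQ write side buffers packets until burst_write of them have accumulated
(or until a periodic flush runs), so a lone ICMP request can wait a long
time before it is actually enqueued. A minimal sketch of that pattern
follows; it is my own simplification, not the real rte_port_ring writer
code, and swq_write(), swq_flush() and BURST_WRITE are hypothetical names.

#include <stdint.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST_WRITE 32  /* the burst_write value used in this first test */

struct swq_writer {
	struct rte_ring *ring;                 /* the SWQ between two pipelines */
	struct rte_mbuf *tx_buf[BURST_WRITE];  /* write-side burst buffer */
	uint32_t tx_count;
};

/* Called once per packet by the upstream pipeline. */
static inline void
swq_write(struct swq_writer *w, struct rte_mbuf *pkt)
{
	w->tx_buf[w->tx_count++] = pkt;
	if (w->tx_count < BURST_WRITE)
		return;  /* a lone ping waits here until the buffer fills or a flush runs */

	/* Return value ignored for brevity; a real writer would retry or drop. */
	rte_ring_enqueue_burst(w->ring, (void **)w->tx_buf, w->tx_count, NULL);
	w->tx_count = 0;
}

/* Called only periodically, e.g. from the pipeline timer/flush path. */
static inline void
swq_flush(struct swq_writer *w)
{
	if (w->tx_count == 0)
		return;
	rte_ring_enqueue_burst(w->ring, (void **)w->tx_buf, w->tx_count, NULL);
	w->tx_count = 0;
}

If this is roughly what happens, setting burst_write to 1 effectively
bypasses the buffering, which would explain the much lower RTT of the
second test below.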
>> The second test is the same as the first one but setting burst_write to 1
>> for all SWQs. The result is this one:
>>
>> 64 bytes from 192.168.0.101: icmp_seq=131 ttl=63 time=10.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=132 ttl=63 time=10.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=133 ttl=63 time=10.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=134 ttl=63 time=10.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=135 ttl=63 time=10.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=136 ttl=63 time=10.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=137 ttl=63 time=10.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=138 ttl=63 time=10.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=139 ttl=63 time=10.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=140 ttl=63 time=10.2 ms
>> 64 bytes from 192.168.0.101: icmp_seq=141 ttl=63 time=10.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=142 ttl=63 time=10.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=143 ttl=63 time=11.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=144 ttl=63 time=11.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=145 ttl=63 time=11.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=146 ttl=63 time=11.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=147 ttl=63 time=11.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=148 ttl=63 time=11.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=149 ttl=63 time=12.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=150 ttl=63 time=12.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=151 ttl=63 time=12.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=152 ttl=63 time=12.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=153 ttl=63 time=12.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=154 ttl=63 time=12.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=155 ttl=63 time=12.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=156 ttl=63 time=12.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=157 ttl=63 time=12.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=158 ttl=63 time=12.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=159 ttl=63 time=13.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=160 ttl=63 time=13.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=161 ttl=63 time=13.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=162 ttl=63 time=13.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=163 ttl=63 time=13.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=164 ttl=63 time=13.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=165 ttl=63 time=13.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=166 ttl=63 time=13.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=167 ttl=63 time=14.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=168 ttl=63 time=14.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=169 ttl=63 time=14.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=170 ttl=63 time=14.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=171 ttl=63 time=14.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=172 ttl=63 time=14.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=173 ttl=63 time=14.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=174 ttl=63 time=14.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=175 ttl=63 time=15.1 ms
>> 64 bytes from 192.168.0.101: icmp_seq=176 ttl=63 time=15.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=177 ttl=63 time=16.0 ms
>> 64 bytes from 192.168.0.101: icmp_seq=178 ttl=63 time=16.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=179 ttl=63 time=17.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=180 ttl=63 time=17.6 ms
>> 64 bytes from 192.168.0.101: icmp_seq=181 ttl=63 time=17.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=182 ttl=63 time=17.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=183 ttl=63 time=18.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=184 ttl=63 time=18.9 ms
>> 64 bytes from 192.168.0.101: icmp_seq=185 ttl=63 time=19.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=186 ttl=63 time=19.8 ms
>> 64 bytes from 192.168.0.101: icmp_seq=187 ttl=63 time=10.7 ms
>> 64 bytes from 192.168.0.101: icmp_seq=188 ttl=63 time=10.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=189 ttl=63 time=10.4 ms
>> 64 bytes from 192.168.0.101: icmp_seq=190 ttl=63 time=10.3 ms
>> 64 bytes from 192.168.0.101: icmp_seq=191 ttl=63 time=10.5 ms
>> 64 bytes from 192.168.0.101: icmp_seq=192 ttl=63 time=10.7 ms
>>
>> As you mentioned, the delay has decreased a lot, but it is still
>> considerably high (in a normal router this delay is less than 1 ms).
>> A second strange behaviour can be seen in the evolution of the measured
>> RTT: it starts at 10 ms, increases little by little until it peaks at
>> approximately 20 ms, then suddenly drops back to 10 ms and starts
>> climbing towards 20 ms again.
>>
>> Is this the behaviour you see in your case when burst_write is set to 1?
>>
>> Regards,
>>
>> On Tue, Feb 18, 2020 at 8:18, James Huang () wrote:
>>
>>> No. We didn't see a noticeable throughput difference in our test.
>>>
>>> On Mon., Feb. 17, 2020, 11:04 p.m. Victor Huertas wrote:
>>>
>>>> Thanks, James, for your quick answer.
>>>> I guess that this configuration change implies that the packets must
>>>> be written one by one into the SW ring. Did you notice a loss of
>>>> performance (in throughput) in your application because of that?
>>>>
>>>> Regards
>>>>
>>>> On Tue, Feb 18, 2020 at 0:10, James Huang wrote:
>>>>
>>>>> Yes, I experienced a similar issue in my application. In short,
>>>>> setting the SWQs' write burst value to 1 may reduce the latency
>>>>> significantly. The default write burst value is 32.
>>>>>
>>>>> On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I am developing my own DPDK application based on the dpdk-stable
>>>>>> ip_pipeline example.
>>>>>> At the moment I am using the 17.11 LTS version of DPDK and I am
>>>>>> observing some strange behaviour. Maybe it is an old issue that can
>>>>>> be solved quickly, so I would appreciate it if some expert could
>>>>>> shed some light on this.
>>>>>>
>>>>>> The ip_pipeline example allows you to develop pipelines that perform
>>>>>> specific packet processing functions (ROUTING, FLOW_CLASSIFYING,
>>>>>> etc.).
>>>>>> The thing is that I am extending some of these pipelines with my own.
>>>>>> However, I want to take advantage of the built-in ip_pipeline
>>>>>> capability of arbitrarily assigning the logical core where each
>>>>>> pipeline (its f_run() function) must be executed, so that I can
>>>>>> adapt the packet processing power to the number of cores available.
>>>>>> Taking this into account, I have observed something strange. I show
>>>>>> you this simple example below.
>>>>>>
>>>>>> Case 1:
>>>>>> [PIPELINE 0 MASTER core=0]
>>>>>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=2] ---SWQ2---> [PIPELINE 3 core=3]
>>>>>>
>>>>>> Case 2:
>>>>>> [PIPELINE 0 MASTER core=0]
>>>>>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2---> [PIPELINE 3 core=1]
>>>>>>
>>>>>> I send a ping between two hosts connected at the two sides of the
>>>>>> pipeline model, which makes the pings cross all the pipelines (from
>>>>>> 1 to 3).
>>>>>> What I observe in Case 1 (each pipeline in its own thread on a
>>>>>> different core) is that the reported RTT is less than 1 ms, whereas
>>>>>> in Case 2 (all pipelines except MASTER run in the same thread) it is
>>>>>> 20 ms. Furthermore, in Case 2, if I increase the packet rate a lot
>>>>>> (hundreds of Mbps), this RTT decreases to 3 or 4 ms.
>>>>>>
>>>>>> Has somebody observed this behaviour in the past? Can it be solved
>>>>>> somehow?
>>>>>>
>>>>>> Thanks a lot for your attention
>>>>>> --
>>>>>> Victor
>>>>>
>>
>> --
>> Victor
>

--
Victor