From mboxrd@z Thu Jan 1 00:00:00 1970
From: "De Lara Guarch, Pablo"
To: "Wang, Zhihong" , "dev@dpdk.org"
CC: "Ananyev, Konstantin" , "Richardson, Bruce" , "thomas.monjalon@6wind.com"
Date: Wed, 15 Jun 2016 10:04:33 +0000
References: <1462488421-118990-1-git-send-email-zhihong.wang@intel.com> <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
In-Reply-To: <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost/virtio performance loopback utility
> -----Original Message-----
> From: Wang, Zhihong
> Sent: Wednesday, June 15, 2016 12:08 AM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin; Richardson, Bruce; De Lara Guarch, Pablo;
> thomas.monjalon@6wind.com
> Subject: [PATCH v3 0/5] vhost/virtio performance loopback utility
>
> This patch enables the vhost/virtio PMD performance loopback test in
> testpmd. All the features are for general usage.
>
> The loopback test focuses on the maximum full-path packet forwarding
> performance between host and guest. It runs the vhost/virtio PMDs only,
> without introducing extra overhead.
>
> Therefore, the main requirement is traffic generation, since there is no
> external packet generator like IXIA to help.
>
> In the current testpmd, iofwd is the best candidate to perform this
> loopback test because it is the fastest possible forwarding engine: start
> testpmd iofwd in the host with 1 vhost port, start testpmd iofwd in the
> connected guest with 1 corresponding virtio port, and these 2 ports form
> a forwarding loop: host vhost Tx -> guest virtio Rx -> guest virtio Tx ->
> host vhost Rx.
>
> As to traffic generation, "start tx_first" injects a burst of packets
> into the loop.
>
> However, 2 issues remain:
>
>    1. If only 1 burst of packets is injected into the loop, there will
>       definitely be empty Rx operations. E.g. when the guest virtio port
>       sends a burst to the host and then starts Rx immediately, the
>       packets are likely still being forwarded by the host vhost port and
>       have not reached the guest yet.
>
>       We need to fill up the ring to keep all PMDs busy.
>
>    2. iofwd doesn't provide a retry mechanism, so if packet loss occurs,
>       there won't be a full burst in the loop.
>
> To address these issues, this patch:
>
>    1. Adds a retry option in testpmd to prevent most packet losses.
>
>    2. Adds a parameter to make the tx_first burst number configurable.
>
> Other related improvements include:
>
>    1. Handle all rxqs when multiqueue is enabled: the current testpmd
>       forces a single core for each rxq, which causes inconvenience and
>       confusion.
>
>       This change doesn't break anything; we can still force a single
>       core for each rxq by giving the same number of cores as rxqs.
>
>       One example: a Red Hat engineer was doing a multiqueue test with
>       2 ports in the guest, each with 4 queues, and testpmd as the
>       forwarding engine in the guest. As usual he used 1 core for
>       forwarding, and as a result he only saw traffic from port 0
>       queue 0 to port 1 queue 0. A lot of emails and quite some time
>       were then spent root-causing it, and of course it was caused by
>       this unreasonable testpmd behavior.
>
>       Moreover, even understanding this behavior, testing the above
>       case still requires 8 cores for a single guest to poll all the
>       rxqs, which is obviously too expensive.
>
>       We have met quite a lot of cases like this; one recent example:
>       http://openvswitch.org/pipermail/dev/2016-June/072110.html
>
>    2. Show the topology at forwarding start: "show config fwd" also does
>       this, but showing it directly reduces the possibility of
>       misconfiguration.
>
>       As in the case above, if testpmd showed the topology at forwarding
>       start, all those debugging efforts could probably have been saved.
>
>    3. Add throughput information to the port statistics display for
>       "show port stats (port_id|all)".
>
> Finally, there is a documentation update.
>
> Example of how to enable the vhost/virtio performance loopback test:
>
>    1. Start testpmd in the host with 1 vhost port only.
>
>    2. Start testpmd in the guest with only 1 virtio port, connected to
>       the corresponding vhost port.
>
>    3.
"set fwd io retry" in testpmds in both host and guest. >=20 > 4. "start" in testpmd in guest. >=20 > 5. "start tx_first 16" in testpmd in host. >=20 > Then use "show port stats all" to monitor the performance. >=20 > -------------- > Changes in v2: >=20 > 1. Add retry as an option for existing forwarding engines except rxonl= y. >=20 > 2. Minor code adjustment and more detailed patch description. >=20 > -------------- > Changes in v3: >=20 > 1. Add more details in commit log. >=20 > 2. Give variables more meaningful names. >=20 > 3. Fix a typo in existing doc. >=20 > 4. Rebase the patches. >=20 >=20 > Zhihong Wang (5): > testpmd: add retry option > testpmd: configurable tx_first burst number > testpmd: show throughput in port stats > testpmd: handle all rxqs in rss setup > testpmd: show topology at forwarding start >=20 > app/test-pmd/Makefile | 1 - > app/test-pmd/cmdline.c | 116 ++++++++++++++++++- > app/test-pmd/config.c | 74 ++++++++++-- > app/test-pmd/csumonly.c | 12 ++ > app/test-pmd/flowgen.c | 12 ++ > app/test-pmd/icmpecho.c | 15 +++ > app/test-pmd/iofwd.c | 22 +++- > app/test-pmd/macfwd-retry.c | 167 ----------------------= ------ > app/test-pmd/macfwd.c | 13 +++ > app/test-pmd/macswap.c | 12 ++ > app/test-pmd/testpmd.c | 12 +- > app/test-pmd/testpmd.h | 11 +- > app/test-pmd/txonly.c | 12 ++ > doc/guides/testpmd_app_ug/run_app.rst | 1 - > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 18 +-- > 15 files changed, 299 insertions(+), 199 deletions(-) > delete mode 100644 app/test-pmd/macfwd-retry.c >=20 > -- > 2.5.0 Series-acked-by: Pablo de Lara