From: Zhihong Wang <zhihong.wang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, bruce.richardson@intel.com,
	pablo.de.lara.guarch@intel.com, thomas.monjalon@6wind.com
Date: Tue, 31 May 2016 23:27:38 -0400
Message-Id: <1464751663-135211-1-git-send-email-zhihong.wang@intel.com>
In-Reply-To: <1462488421-118990-1-git-send-email-zhihong.wang@intel.com>
References: <1462488421-118990-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH v2 0/5] vhost/virtio performance loopback utility

This patch series enables a vhost/virtio PMD performance loopback test
in testpmd. All the features are for general usage.

The loopback test focuses on the maximum full-path packet forwarding
performance between host and guest: it runs the vhost/virtio PMDs only,
without introducing extra overhead. Therefore, the main requirement is
traffic generation, since there is no external packet generator like
IXIA to help.

In the current testpmd, iofwd is the best candidate to perform this
loopback test because it is the fastest possible forwarding engine:
start testpmd iofwd in the host with 1 vhost port, start testpmd iofwd
in the connected guest with 1 corresponding virtio port, and these 2
ports form a forwarding loop: host vhost tx -> guest virtio rx ->
guest virtio tx -> host vhost rx.

For traffic generation, "start tx_first" injects a burst of packets
into the loop. However, 2 issues remain:

 1. If only 1 burst of packets is injected into the loop, there will
    inevitably be empty rx operations: e.g. when the guest virtio port
    sends a burst to the host and then starts rx immediately, the
    packets are likely still being forwarded by the host vhost port
    and haven't reached the guest yet. We need to fill up the ring to
    keep all PMDs busy.

 2. iofwd doesn't provide a retry mechanism, so if packet loss occurs,
    there won't be a full burst in the loop.

To address these issues, this series:

 1. Adds a retry option in testpmd to prevent most packet losses (see
    the sketch after this list).

 2. Adds a parameter to make the tx_first burst number configurable.
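To make the retry idea concrete, below is a minimal sketch of what a
tx burst with retry can look like. This is an illustration of the
approach only, not the code in this series; the knob names and values
(BURST_TX_RETRIES, BURST_TX_DELAY_US) are made up for the example, and
exact DPDK type signatures vary between releases:

    /* Illustrative sketch only -- not the actual testpmd patch. */
    #include <rte_ethdev.h>
    #include <rte_cycles.h>
    #include <rte_mbuf.h>

    #define BURST_TX_RETRIES  64 /* re-attempts before giving up */
    #define BURST_TX_DELAY_US  1 /* pause between attempts, in us */

    static void
    tx_burst_with_retry(uint16_t port_id, uint16_t queue_id,
                        struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            uint16_t sent = rte_eth_tx_burst(port_id, queue_id,
                                             pkts, nb_pkts);
            uint32_t retries = 0;

            /*
             * Retry the unsent tail instead of dropping it, so the
             * loopback ring stays full and transient ring-full
             * conditions don't drain the loop.
             */
            while (sent < nb_pkts && retries++ < BURST_TX_RETRIES) {
                    rte_delay_us(BURST_TX_DELAY_US);
                    sent += rte_eth_tx_burst(port_id, queue_id,
                                             &pkts[sent],
                                             nb_pkts - sent);
            }

            /* Whatever still doesn't fit after all retries is dropped. */
            while (sent < nb_pkts)
                    rte_pktmbuf_free(pkts[sent++]);
    }

Bounding the number of retries keeps a stalled peer from hanging the
forwarding core forever.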
Other related improvements include:

 1. Handle all rxqs when multiqueue is enabled: the current testpmd
    forces a single core for each rxq, which causes inconvenience and
    confusion. This change doesn't break anything; we can still force
    a single core for each rxq by giving the same number of cores as
    the number of rxqs.

    One example: a Red Hat engineer was doing a multiqueue test with 2
    ports in the guest, each with 4 queues, and testpmd as the
    forwarding engine in the guest. As usual he used 1 core for
    forwarding, and as a result he only saw traffic from port 0
    queue 0 to port 1 queue 0. A lot of emails and quite some time
    were then spent to root-cause it, and of course it was caused by
    this unreasonable testpmd behavior. Moreover, even if we
    understand the behavior, to test the above case we would still
    need 8 cores for a single guest to poll all the rxqs, which is
    obviously too expensive. We have met quite a lot of cases like
    this.

 2. Show the topology at forwarding start: "show config fwd" also does
    this, but showing it directly reduces the possibility of
    misconfiguration. In the case above, if testpmd had shown the
    topology at forwarding start, all those debugging efforts could
    probably have been saved.

 3. Add throughput information to the port statistics display for
    "show port stats (port_id|all)".

Finally, there is a documentation update.

Example of how to enable the vhost/virtio performance loopback test:

 1. Start testpmd in the host with 1 vhost port only.
 2. Start testpmd in the guest with only 1 virtio port, connected to
    the corresponding vhost port.
 3. "set fwd io retry" in testpmd in both host and guest.
 4. "start" in testpmd in the guest.
 5. "start tx_first 16" in testpmd in the host.

Then use "show port stats all" to monitor the performance.

--------------

Changes in v2:

 1. Add retry as an option for existing forwarding engines, except
    rxonly.
 2. Minor code adjustments and a more detailed patch description.

Zhihong Wang (5):
  testpmd: add retry option
  testpmd: configurable tx_first burst number
  testpmd: show throughput in port stats
  testpmd: handle all rxqs in rss setup
  testpmd: show topology at forwarding start

 app/test-pmd/Makefile                       |   1 -
 app/test-pmd/cmdline.c                      | 118 +++++++++++++++++++-
 app/test-pmd/config.c                       |  79 +++++++++++---
 app/test-pmd/csumonly.c                     |  12 ++
 app/test-pmd/flowgen.c                      |  12 ++
 app/test-pmd/icmpecho.c                     |  15 +++
 app/test-pmd/iofwd.c                        |  22 +++-
 app/test-pmd/macfwd-retry.c                 | 164 ----------------------------
 app/test-pmd/macfwd.c                       |  13 +++
 app/test-pmd/macswap.c                      |  12 ++
 app/test-pmd/testpmd.c                      |  13 ++-
 app/test-pmd/testpmd.h                      |  14 ++-
 app/test-pmd/txonly.c                       |  12 ++
 doc/guides/testpmd_app_ug/run_app.rst       |   1 -
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  16 +-
 15 files changed, 303 insertions(+), 201 deletions(-)
 delete mode 100644 app/test-pmd/macfwd-retry.c

-- 
2.5.0