From: Jared Brown
To: users@dpdk.org
Date: Fri, 17 Sep 2021 17:23:49 +0200
Subject: [dpdk-users] DPDK CPU selection criteria?

Hello everybody!

Is there some canonical resource, or at least a recommendation, on how to evaluate different CPUs for suitability for use with DPDK?

My use case is software routers (for example DANOS, 6WIND, TNSR), so I am mainly interested in forwarding performance in terms of Mpps. What I am looking for is some kind of heuristic to evaluate CPUs in terms of $/Mpps without having to purchase hundreds of SKUs and run tests on them.

The official DPDK documentation [0] states:

  "7.1. Hardware and Memory Requirements
   For best performance use an Intel Xeon class server system such as Ivy Bridge, Haswell or newer."

This is somewhat... vague.
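To make the $/Mpps idea concrete, this is the sort of back-of-the-envelope arithmetic I have in mind (a rough sketch only: the 20 bytes of per-frame overhead are the Ethernet preamble, SFD and inter-frame gap, and the price and Mpps figures at the bottom are made-up placeholders, not measurements):

# Back-of-the-envelope helpers for comparing CPUs in Mpps and $/Mpps terms.
# Rough sketch only; the example price and Mpps values are placeholders.

PREAMBLE_SFD_IFG = 20  # bytes of Ethernet overhead per frame on the wire

def line_rate_mpps(link_gbps, frame_bytes=64):
    """Theoretical packet rate limit of a link, in Mpps."""
    bits_per_frame = (frame_bytes + PREAMBLE_SFD_IFG) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6

def cycles_per_packet(cpu_ghz, mpps):
    """CPU cycle budget available per packet at a given forwarding rate."""
    return cpu_ghz * 1e9 / (mpps * 1e6)

def dollars_per_mpps(cpu_price_usd, measured_mpps):
    """The heuristic I am after: CPU price divided by measured forwarding rate."""
    return cpu_price_usd / measured_mpps

print("10G  @ 64B: %.2f Mpps" % line_rate_mpps(10))    # ~14.88 Mpps
print("100G @ 64B: %.2f Mpps" % line_rate_mpps(100))   # ~148.81 Mpps
print("cycle budget @ 3.3 GHz, 14.88 Mpps: %.0f cycles/packet" % cycles_per_packet(3.3, 14.88))
print("$/Mpps with placeholder numbers: %.1f" % dollars_per_mpps(600, 30))

The cycles-per-packet figure is what lets me compare per-core numbers across frequencies and link speeds below.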
I suppose one could take [1] as a baseline, which states on page 2 that an Ivy Bridge Xeon E3-1230 V2 is able to forward unidirectional flows at line rate with 10G NICs at all frequencies above 1.6 GHz, and bidirectional flows at line rate with 10G NICs at 3.3 GHz.

This, however, pales in comparison with [2], which on page 23 shows a 3rd Generation Xeon Scalable 8380 very nearly saturating a 100G NIC at all packet sizes.

As there is almost an order of magnitude of difference in forwarding performance per core, you can perhaps understand why I am somewhat at a loss when trying to gauge the performance of a particular CPU model.

Reading [3] one learns that several aspects of the CPU affect forwarding performance, but very little light is shed on how much each feature contributes on its own. On page 172 one learns that CPU frequency has a linear impact on performance. This is borne out by [1], but it does not account for the inter-generational gaps witnessed in [2]. Which raises the question: what are those inter-generational differences made of?

- L3 cache latency (p. 54) as an upper limit on Mpps? Do newer generations have decidedly lower cache latencies, and is this the defining performance factor?

- Direct Data I/O (p. 69)? Is DDIO combined with lower L3 cache latency a major force multiplier, or is prefetching sufficient to keep the caches hot? This is somewhat confusing, as [3] states on page 62 that DPDK can get one core to handle up to 33 Mpps, on average. On the one hand this is roughly the performance [1] demonstrated the better part of a decade earlier, but on the other hand [2] demonstrates an order of magnitude more performance per core.

- New instructions? On page 171 [3] notes that AVX-512 instructions can move 64 bytes per cycle, which [2] on page 22 indicates has an almost 30% effect on Mpps. And how important is Transactional Synchronization Extensions (TSX) support (see page 119 of [3]) for forwarding performance?

- Other factors are mentioned, such as memory frequency, memory size, memory channels and cache sizes, but nothing is said about how much each of these affects forwarding performance in terms of Mpps. The official documentation [0] only states that: "Ensure that each memory channel has at least one memory DIMM inserted, and that the memory size for each is at least 4GB. Note: this has one of the most direct effects on performance."

- Turbo Boost and hyper-threading? Are these supposed to be enabled or disabled? I am getting conflicting information: the results in [2] show increased Mpps with them enabled, but [1] notes that they were disabled because they introduce measurement artifacts, and I recall some documentation recommending disabling them since they increase latency and variance. (A small sketch for checking these settings follows below.)

- Xeon D, W, E3, E5, E7 and Scalable? Are these processor siblings observably different from each other from the perspective of DPDK? Atoms certainly are, as [3] notes on page 57 that they only deliver about 50% of the performance of an equivalent Xeon core. A reason isn't given, but perhaps it is the missing L3 cache?

- Something else entirely? Am I missing something completely obvious that explains the inter-generational differences between CPUs in terms of forwarding performance?

So, given all this, how can I perform the mundane task of comparing, for example, the Xeon W-1250P with the Xeon W-1350P? The 1250 is older, but has a larger L2 cache and a higher frequency. The 1350 is newer, uses faster memory, has higher maximum memory bandwidth, PCIe 4.0, more PCIe lanes and AVX-512. Or any other CPU model comparison, for that matter?
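As an aside, here is roughly what I had in mind above for recording the relevant CPU features and power-management settings before a test run (a sketch that assumes Linux with the intel_pstate driver; the sysfs paths will differ on other setups):

# Record CPU features and settings relevant to the questions above.
# Sketch only: assumes Linux and the intel_pstate driver; paths vary elsewhere.

from pathlib import Path

def read(path):
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a"

cpuinfo = Path("/proc/cpuinfo").read_text()
flags_line = next((l for l in cpuinfo.splitlines() if l.startswith("flags")), "flags :")
flags = set(flags_line.split(":", 1)[1].split())

print("avx512f  :", "avx512f" in flags)   # AVX-512 foundation instructions
print("rtm      :", "rtm" in flags)       # TSX (Restricted Transactional Memory)
print("ht       :", "ht" in flags)        # hyper-threading capable
print("SMT      :", read("/sys/devices/system/cpu/smt/control"))           # on / off / forceoff
print("no_turbo :", read("/sys/devices/system/cpu/intel_pstate/no_turbo")) # "1" = Turbo Boost disabled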
- Jared

[0] https://doc.dpdk.org/guides-16.04/linux_gsg/nic_perf_intel_platform.html
[1] https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/ICN2015.pdf
[2] http://fast.dpdk.org/doc/perf/DPDK_21_05_Intel_NIC_performance_report.pdf
[3] https://www.routledge.com/Data-Plane-Development-Kit-DPDK-A-Software-Optimization-Guide-to-the/Zhu/p/book/9780367373955