From mboxrd@z Thu Jan  1 00:00:00 1970
From: Kevin Traynor
To: Wenwu Ma
Cc: Chenbo Xia, dpdk stable
Subject: patch 'examples/vhost: fix launch with physical port' has been
 queued to stable release 21.11.1
Date: Tue, 8 Mar 2022 14:14:48 +0000
Message-Id: <20220308141500.286915-33-ktraynor@redhat.com>
In-Reply-To: <20220308141500.286915-1-ktraynor@redhat.com>
References: <20220308141500.286915-1-ktraynor@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 21.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/14/22. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable/commit/3325293096f13729ab10503003591564804f463c

Thanks.

Kevin

---
>From 3325293096f13729ab10503003591564804f463c Mon Sep 17 00:00:00 2001
From: Wenwu Ma
Date: Fri, 4 Mar 2022 16:24:24 +0000
Subject: [PATCH] examples/vhost: fix launch with physical port

[ upstream commit 917229c24e871bbc3225a0227eb3f0faaa7aaa69 ]

dpdk-vhost will fail to launch with a 40G i40e port because there
are not enough mbufs.
This patch adds a new option --total-num-mbufs, through which the
user can set a larger mbuf pool to avoid this problem.

Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")

Signed-off-by: Wenwu Ma
Reviewed-by: Chenbo Xia
---
 examples/vhost/main.c | 83 +++++++++++++++----------------------------
 1 file changed, 29 insertions(+), 54 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 590a77c723..6c3bd9e4b0 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -33,4 +33,6 @@
 #endif

+#define NUM_MBUFS_DEFAULT 0x24000
+
 /* the maximum number of external ports supported */
 #define MAX_SUP_PORTS 1
@@ -58,4 +60,7 @@
 #define INVALID_PORT_ID 0xFF

+/* number of mbufs in all pools - if specified on command-line. */
+static int total_num_mbufs = NUM_MBUFS_DEFAULT;
+
 /* mask of enabled ports */
 static uint32_t enabled_port_mask = 0;
@@ -474,5 +479,6 @@ us_vhost_usage(const char *prgname)
	"		--client register a vhost-user socket as client mode.\n"
	"		--dma-type register dma type for your vhost async driver. For example \"ioat\" for now.\n"
-	"		--dmas register dma channel for specific vhost device.\n",
+	"		--dmas register dma channel for specific vhost device.\n"
+	"		--total-num-mbufs [0-N] set the number of mbufs to be allocated in mbuf pools, the default value is 147456.\n",
	       prgname);
 }
@@ -505,4 +511,6 @@ enum {
 #define OPT_DMAS "dmas"
	OPT_DMAS_NUM,
+#define OPT_NUM_MBUFS "total-num-mbufs"
+	OPT_NUM_MBUFS_NUM,
 };
@@ -544,4 +552,6 @@ us_vhost_parse_args(int argc, char **argv)
		{OPT_DMAS, required_argument,
				NULL, OPT_DMAS_NUM},
+		{OPT_NUM_MBUFS, required_argument,
+				NULL, OPT_NUM_MBUFS_NUM},
		{NULL, 0, 0, 0},
	};
@@ -676,4 +686,17 @@ us_vhost_parse_args(int argc, char **argv)
			break;

+		case OPT_NUM_MBUFS_NUM:
+			ret = parse_num_opt(optarg, INT32_MAX);
+			if (ret == -1) {
+				RTE_LOG(INFO, VHOST_CONFIG,
+					"Invalid argument for total-num-mbufs [0..N]\n");
+				us_vhost_usage(prgname);
+				return -1;
+			}
+
+			if (total_num_mbufs < ret)
+				total_num_mbufs = ret;
+			break;
+
		case OPT_CLIENT_NUM:
			client_mode = 1;
@@ -1607,55 +1630,4 @@ sigint_handler(__rte_unused int signum)
 }

-/*
- * While creating an mbuf pool, one key thing is to figure out how
- * many mbuf entries is enough for our use. FYI, here are some
- * guidelines:
- *
- * - Each rx queue would reserve @nr_rx_desc mbufs at queue setup stage
- *
- * - For each switch core (A CPU core does the packet switch), we need
- *   also make some reservation for receiving the packets from virtio
- *   Tx queue. How many is enough depends on the usage. It's normally
- *   a simple calculation like following:
- *
- *       MAX_PKT_BURST * max packet size / mbuf size
- *
- *   So, we definitely need allocate more mbufs when TSO is enabled.
- *
- * - Similarly, for each switching core, we should serve @nr_rx_desc
- *   mbufs for receiving the packets from physical NIC device.
- *
- * - We also need make sure, for each switch core, we have allocated
- *   enough mbufs to fill up the mbuf cache.
- */
-static void
-create_mbuf_pool(uint16_t nr_port, uint32_t nr_switch_core, uint32_t mbuf_size,
-	uint32_t nr_queues, uint32_t nr_rx_desc, uint32_t nr_mbuf_cache)
-{
-	uint32_t nr_mbufs;
-	uint32_t nr_mbufs_per_core;
-	uint32_t mtu = 1500;
-
-	if (mergeable)
-		mtu = 9000;
-	if (enable_tso)
-		mtu = 64 * 1024;
-
-	nr_mbufs_per_core = (mtu + mbuf_size) * MAX_PKT_BURST /
-			(mbuf_size - RTE_PKTMBUF_HEADROOM);
-	nr_mbufs_per_core += nr_rx_desc;
-	nr_mbufs_per_core = RTE_MAX(nr_mbufs_per_core, nr_mbuf_cache);
-
-	nr_mbufs  = nr_queues * nr_rx_desc;
-	nr_mbufs += nr_mbufs_per_core * nr_switch_core;
-	nr_mbufs *= nr_port;
-
-	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", nr_mbufs,
-					    nr_mbuf_cache, 0, mbuf_size,
-					    rte_socket_id());
-	if (mbuf_pool == NULL)
-		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
-}
-
 /*
  * Main function, does initialisation and calls the per-lcore functions.
@@ -1716,6 +1688,9 @@ main(int argc, char *argv[])
	 * those queues we are going to use.
	 */
-	create_mbuf_pool(valid_num_ports, rte_lcore_count() - 1, MBUF_DATA_SIZE,
-			 MAX_QUEUES, RTE_TEST_RX_DESC_DEFAULT, MBUF_CACHE_SIZE);
+	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", total_num_mbufs,
+					    MBUF_CACHE_SIZE, 0, MBUF_DATA_SIZE,
+					    rte_socket_id());
+	if (mbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

	if (vm2vm_mode == VM2VM_HARDWARE) {
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2022-03-08 13:55:29.231011721 +0000
+++ 0033-examples-vhost-fix-launch-with-physical-port.patch	2022-03-08 13:55:28.505315188 +0000
@@ -1 +1 @@
-From 917229c24e871bbc3225a0227eb3f0faaa7aaa69 Mon Sep 17 00:00:00 2001
+From 3325293096f13729ab10503003591564804f463c Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 917229c24e871bbc3225a0227eb3f0faaa7aaa69 ]
+
@@ -12 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +22 @@
-index 68afd398bb..d94fabb060 100644
+index 590a77c723..6c3bd9e4b0 100644
@@ -24 +25 @@
-@@ -34,4 +34,6 @@
+@@ -33,4 +33,6 @@
@@ -31,2 +32,2 @@
-@@ -62,4 +64,7 @@
- #define DMA_RING_SIZE 4096
+@@ -58,4 +60,7 @@
+ #define INVALID_PORT_ID 0xFF
@@ -37,4 +38,3 @@
- struct dma_for_vhost dma_bind[RTE_MAX_VHOST_DEVICE];
- int16_t dmas_id[RTE_DMADEV_DEFAULT_MAX];
-@@ -609,5 +614,6 @@ us_vhost_usage(const char *prgname)
-	"		--tso [0|1] disable/enable TCP segment offload.\n"
+ /* mask of enabled ports */
+ static uint32_t enabled_port_mask = 0;
+@@ -474,5 +479,6 @@ us_vhost_usage(const char *prgname)
@@ -41,0 +42 @@
+	"		--dma-type register dma type for your vhost async driver. For example \"ioat\" for now.\n"
@@ -47 +48 @@
-@@ -638,4 +644,6 @@ enum {
+@@ -505,4 +511,6 @@ enum {
@@ -54 +55 @@
-@@ -675,4 +683,6 @@ us_vhost_parse_args(int argc, char **argv)
+@@ -544,4 +552,6 @@ us_vhost_parse_args(int argc, char **argv)
@@ -61 +62 @@
-@@ -802,4 +812,17 @@ us_vhost_parse_args(int argc, char **argv)
+@@ -676,4 +686,17 @@ us_vhost_parse_args(int argc, char **argv)
@@ -79 +80 @@
-@@ -1731,55 +1754,4 @@ sigint_handler(__rte_unused int signum)
+@@ -1607,55 +1630,4 @@ sigint_handler(__rte_unused int signum)
@@ -133,3 +134,3 @@
- static void
- reset_dma(void)
-@@ -1861,6 +1833,9 @@ main(int argc, char *argv[])
+ /*
+  * Main function, does initialisation and calls the per-lcore functions.
+@@ -1716,6 +1688,9 @@ main(int argc, char *argv[])