From: "Xia, Chenbo"
To: "Coquelin, Maxime", dev@dpdk.org, david.marchand@redhat.com, eperezma@redhat.com
Subject: RE: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
Date: Tue, 31 Jan 2023 05:19:42 +0000
References: <20221130155639.150553-1-maxime.coquelin@redhat.com> <20221130155639.150553-22-maxime.coquelin@redhat.com>
In-Reply-To: <20221130155639.150553-22-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin
> Sent: Wednesday, November 30, 2022 11:57 PM
> To: dev@dpdk.org; Xia, Chenbo; david.marchand@redhat.com; eperezma@redhat.com
> Cc: Maxime Coquelin
> Subject: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
>
> This patch removes the limitation of 8 queue pairs by
> dynamically allocating vring metadata once we know the
> maximum number of queue pairs supported by the backend.
>
> This is especially useful for Vhost-vDPA with physical
> devices, where the maximum queues supported may be much
> more than 8 pairs.
>
> Signed-off-by: Maxime Coquelin
> ---
>  drivers/net/virtio/virtio.h                   |   6 -
>  .../net/virtio/virtio_user/virtio_user_dev.c  | 118 ++++++++++++++----
>  .../net/virtio/virtio_user/virtio_user_dev.h  |  16 +--
>  drivers/net/virtio/virtio_user_ethdev.c       |  17 +--
>  4 files changed, 109 insertions(+), 48 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
> index 5c8f71a44d..04a897bf51 100644
> --- a/drivers/net/virtio/virtio.h
> +++ b/drivers/net/virtio/virtio.h
> @@ -124,12 +124,6 @@
>  		 VIRTIO_NET_HASH_TYPE_UDP_EX)
>
>
> -/*
> - * Maximum number of virtqueues per device.
> - */
> -#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
> -#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
> -
>  /* VirtIO device IDs. */
>  #define VIRTIO_ID_NETWORK 0x01
>  #define VIRTIO_ID_BLOCK 0x02
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 7c48c9bb29..aa24fdea70 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -17,6 +17,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include "vhost.h"
>  #include "virtio_user_dev.h"
> @@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
>  	int ret;
>  	struct vhost_vring_file file;
>  	struct vhost_vring_state state;
> -	struct vring *vring = &dev->vrings[queue_sel];
> -	struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
> +	struct vring *vring = &dev->vrings.split[queue_sel];
> +	struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
>  	struct vhost_vring_addr addr = {
>  		.index = queue_sel,
>  		.log_guest_addr = 0,
> @@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
>  		return ret;
>  	}
>
> -	if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
> -		/*
> -		 * If the device supports control queue, the control queue
> -		 * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
> -		 */
> -		PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
> -				dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
> -		dev->max_queue_pairs = 1;
> -
> -		return -1;
> -	}
> -
>  	return 0;
>  }
>
> @@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
>  	return 0;
>  }
>
> +static int
> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
> +{
> +	int i, size, nr_vrings;
> +
> +	nr_vrings = dev->max_queue_pairs * 2;
> +	if (dev->hw_cvq)
> +		nr_vrings++;
> +
> +	dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
> +	if (!dev->callfds) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
> +		return -1;
> +	}
> +
> +	dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
> +	if (!dev->kickfds) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
> +		goto free_callfds;
> +	}
> +
> +	for (i = 0; i < nr_vrings; i++) {
> +		dev->callfds[i] = -1;
> +		dev->kickfds[i] = -1;
> +	}
> +
> +	size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
> +	dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
> +	if (!dev->vrings.ptr) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
> +		goto free_kickfds;
> +	}
> +
> +	dev->packed_queues = rte_zmalloc("virtio_user_dev",
> +			nr_vrings * sizeof(*dev->packed_queues), 0);

Should we pass in whether the backend uses packed virtqueues? That would
let us skip the dev->packed_queues allocation when it is not needed, and
also size dev->vrings.ptr for the ring layout actually in use (a rough
sketch of what I mean is at the end of this mail).

Thanks,
Chenbo

> +	if (!dev->packed_queues) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata", dev->path);
> +		goto free_vrings;
> +	}
> +
> +	dev->qp_enabled = rte_zmalloc("virtio_user_dev",
> +			dev->max_queue_pairs * sizeof(*dev->qp_enabled), 0);
> +	if (!dev->qp_enabled) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
> +		goto free_packed_queues;
> +	}
> +
> +	return 0;
> +
> +free_packed_queues:
> +	rte_free(dev->packed_queues);
> +	dev->packed_queues = NULL;
> +free_vrings:
> +	rte_free(dev->vrings.ptr);
> +	dev->vrings.ptr = NULL;
> +free_kickfds:
> +	rte_free(dev->kickfds);
> +	dev->kickfds = NULL;
> +free_callfds:
> +	rte_free(dev->callfds);
> +	dev->callfds = NULL;
> +
> +	return -1;
> +}
> +
> +static void
> +virtio_user_free_vrings(struct virtio_user_dev *dev)
> +{
> +	rte_free(dev->qp_enabled);
> +	dev->qp_enabled = NULL;
> +	rte_free(dev->packed_queues);
> +	dev->packed_queues = NULL;
> +	rte_free(dev->vrings.ptr);
> +	dev->vrings.ptr = NULL;
> +	rte_free(dev->kickfds);
> +	dev->kickfds = NULL;
> +	rte_free(dev->callfds);
> +	dev->callfds = NULL;
> +}
> +
>  /* Use below macro to filter features from vhost backend */
>  #define VIRTIO_USER_SUPPORTED_FEATURES \
>  	(1ULL << VIRTIO_NET_F_MAC | \
> @@ -607,16 +676,10 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
>  		     enum virtio_user_backend_type backend_type)
>  {
>  	uint64_t backend_features;
> -	int i;
>
>  	pthread_mutex_init(&dev->mutex, NULL);
>  	strlcpy(dev->path, path, PATH_MAX);
>
> -	for (i = 0; i < VIRTIO_MAX_VIRTQUEUES; i++) {
> -		dev->kickfds[i] = -1;
> -		dev->callfds[i] = -1;
> -	}
> -
>  	dev->started = 0;
>  	dev->queue_pairs = 1; /* mq disabled by default */
>  	dev->queue_size = queue_size;
> @@ -661,9 +724,14 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
>  	if (dev->max_queue_pairs > 1)
>  		cq = 1;
>
> +	if (virtio_user_alloc_vrings(dev) < 0) {
> +		PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
> +		goto destroy;
> +	}
> +
>  	if (virtio_user_dev_init_notify(dev) < 0) {
>  		PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
> -		goto destroy;
> +		goto free_vrings;
>  	}
>
>  	if (virtio_user_fill_intr_handle(dev) < 0) {
> @@ -722,6 +790,8 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
>
>  notify_uninit:
>  	virtio_user_dev_uninit_notify(dev);
> +free_vrings:
> +	virtio_user_free_vrings(dev);
>  destroy:
>  	dev->ops->destroy(dev);
>
> @@ -742,6 +812,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
>
>  	virtio_user_dev_uninit_notify(dev);
>
> +	virtio_user_free_vrings(dev);
> +
>  	free(dev->ifname);
>
>  	if (dev->is_server)
> @@ -897,7 +969,7 @@ static void
>  virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
>  {
>  	struct virtio_user_queue *vq = &dev->packed_queues[queue_idx];
> -	struct vring_packed *vring = &dev->packed_vrings[queue_idx];
> +	struct vring_packed *vring = &dev->vrings.packed[queue_idx];
>  	uint16_t n_descs, flags;
>
>  	/* Perform a load-acquire barrier in desc_is_avail to
> @@ -931,7 +1003,7 @@ virtio_user_handle_cq_split(struct virtio_user_dev *dev, uint16_t queue_idx)
>  	uint16_t avail_idx, desc_idx;
>  	struct vring_used_elem *uep;
>  	uint32_t n_descs;
> -	struct vring *vring = &dev->vrings[queue_idx];
> +	struct vring *vring = &dev->vrings.split[queue_idx];
>
>  	/* Consume avail ring, using used ring idx as first one */
>  	while (__atomic_load_n(&vring->used->idx, __ATOMIC_RELAXED)
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index e8753f6019..7323d88302 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -29,8 +29,8 @@ struct virtio_user_dev {
>  	enum virtio_user_backend_type backend_type;
>  	bool is_server; /* server or client mode */
>
> -	int callfds[VIRTIO_MAX_VIRTQUEUES];
> -	int kickfds[VIRTIO_MAX_VIRTQUEUES];
> +	int *callfds;
> +	int *kickfds;
>  	int mac_specified;
>  	uint16_t max_queue_pairs;
>  	uint16_t queue_pairs;
> @@ -48,11 +48,13 @@ struct virtio_user_dev {
>  	char *ifname;
>
>  	union {
> -		struct vring vrings[VIRTIO_MAX_VIRTQUEUES];
> -		struct vring_packed packed_vrings[VIRTIO_MAX_VIRTQUEUES];
> -	};
> -	struct virtio_user_queue packed_queues[VIRTIO_MAX_VIRTQUEUES];
> -	bool qp_enabled[VIRTIO_MAX_VIRTQUEUE_PAIRS];
> +		void *ptr;
> +		struct vring *split;
> +		struct vring_packed *packed;
> +	} vrings;
> +
> +	struct virtio_user_queue *packed_queues;
> +	bool *qp_enabled;
>
>  	struct virtio_user_backend_ops *ops;
>  	pthread_mutex_t mutex;
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index d23959e836..b1fc4d5d30 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -186,7 +186,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
>  	uint64_t used_addr;
>  	uint16_t i;
>
> -	vring = &dev->packed_vrings[queue_idx];
> +	vring = &dev->vrings.packed[queue_idx];
>  	desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
>  	avail_addr = desc_addr + vq->vq_nentries *
>  		sizeof(struct vring_packed_desc);
> @@ -216,10 +216,10 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
>  						 ring[vq->vq_nentries]),
>  				   VIRTIO_VRING_ALIGN);
>
> -	dev->vrings[queue_idx].num = vq->vq_nentries;
> -	dev->vrings[queue_idx].desc = (void *)(uintptr_t)desc_addr;
> -	dev->vrings[queue_idx].avail = (void *)(uintptr_t)avail_addr;
> -	dev->vrings[queue_idx].used = (void *)(uintptr_t)used_addr;
> +	dev->vrings.split[queue_idx].num = vq->vq_nentries;
> +	dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
> +	dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
> +	dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
>  }
>
>  static int
> @@ -619,13 +619,6 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
>  		}
>  	}
>
> -	if (queues > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
> -		PMD_INIT_LOG(ERR, "arg %s %" PRIu64 " exceeds the limit %u",
> -			VIRTIO_USER_ARG_QUEUES_NUM, queues,
> -			VIRTIO_MAX_VIRTQUEUE_PAIRS);
> -		goto end;
> -	}
> -
>  	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
>  		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
>  				       &get_integer_arg, &mrg_rxbuf) < 0) {
> --
> 2.38.1
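
To make the suggestion above concrete, here is a rough sketch of the
direction I have in mind. The extra 'packed' parameter, and deriving it
from the negotiated VIRTIO_F_RING_PACKED feature before this is called,
are my assumptions for illustration, not something this patch already
provides:

/* Sketch only: 'packed' would come from the negotiated features
 * (VIRTIO_F_RING_PACKED) known before the rings metadata is allocated. */
static int
virtio_user_alloc_vrings(struct virtio_user_dev *dev, bool packed)
{
	int i, size, nr_vrings;

	nr_vrings = dev->max_queue_pairs * 2;
	if (dev->hw_cvq)
		nr_vrings++;

	dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
	if (!dev->callfds) {
		PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
		return -1;
	}

	dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
	if (!dev->kickfds) {
		PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
		goto free_callfds;
	}

	for (i = 0; i < nr_vrings; i++) {
		dev->callfds[i] = -1;
		dev->kickfds[i] = -1;
	}

	/* Size the metadata for the ring layout in use instead of RTE_MAX() of both. */
	if (packed)
		size = sizeof(*dev->vrings.packed);
	else
		size = sizeof(*dev->vrings.split);

	dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
	if (!dev->vrings.ptr) {
		PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
		goto free_kickfds;
	}

	if (packed) {
		/* Shadow queue state is only needed for packed rings. */
		dev->packed_queues = rte_zmalloc("virtio_user_dev",
				nr_vrings * sizeof(*dev->packed_queues), 0);
		if (!dev->packed_queues) {
			PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata", dev->path);
			goto free_vrings;
		}
	}

	dev->qp_enabled = rte_zmalloc("virtio_user_dev",
			dev->max_queue_pairs * sizeof(*dev->qp_enabled), 0);
	if (!dev->qp_enabled) {
		PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
		goto free_packed_queues;
	}

	return 0;

free_packed_queues:
	rte_free(dev->packed_queues);	/* rte_free(NULL) is a no-op */
	dev->packed_queues = NULL;
free_vrings:
	rte_free(dev->vrings.ptr);
	dev->vrings.ptr = NULL;
free_kickfds:
	rte_free(dev->kickfds);
	dev->kickfds = NULL;
free_callfds:
	rte_free(dev->callfds);
	dev->callfds = NULL;

	return -1;
}

The free path and the callers would stay as in your patch; the only
change is sizing vrings.ptr for the layout actually used and allocating
packed_queues only when packed rings are negotiated.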