Message-ID: <6e3a002b-c137-1903-a37e-1898975b906c@redhat.com>
Date: Tue, 7 Feb 2023 15:14:25 +0100
To: "Xia, Chenbo", dev@dpdk.org, david.marchand@redhat.com, eperezma@redhat.com
References: <20221130155639.150553-1-maxime.coquelin@redhat.com> <20221130155639.150553-22-maxime.coquelin@redhat.com>
From: Maxime Coquelin
Subject: Re: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
List-Id: DPDK patches and discussions

On 1/31/23 06:19, Xia, Chenbo wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin
>> Sent: Wednesday, November 30, 2022 11:57 PM
>> To: dev@dpdk.org; Xia, Chenbo; david.marchand@redhat.com; eperezma@redhat.com
>> Cc: Maxime Coquelin
>> Subject: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
>>
>> This patch removes the limitation of 8 queue pairs by
>> dynamically allocating vring metadata once we know the
>> maximum number of queue pairs supported by the backend.
>>
>> This is especially useful for Vhost-vDPA with physical
>> devices, where the maximum queues supported may be much
>> more than 8 pairs.
>>
>> Signed-off-by: Maxime Coquelin
>> ---
>>  drivers/net/virtio/virtio.h                   |   6 -
>>  .../net/virtio/virtio_user/virtio_user_dev.c  | 118 ++++++++++++++----
>>  .../net/virtio/virtio_user/virtio_user_dev.h  |  16 +--
>>  drivers/net/virtio/virtio_user_ethdev.c       |  17 +--
>>  4 files changed, 109 insertions(+), 48 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
>> index 5c8f71a44d..04a897bf51 100644
>> --- a/drivers/net/virtio/virtio.h
>> +++ b/drivers/net/virtio/virtio.h
>> @@ -124,12 +124,6 @@
>>  					 VIRTIO_NET_HASH_TYPE_UDP_EX)
>>
>>
>> -/*
>> - * Maximum number of virtqueues per device.
>> - */
>> -#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
>> -#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
>> -
>>  /* VirtIO device IDs. */
>>  #define VIRTIO_ID_NETWORK  0x01
>>  #define VIRTIO_ID_BLOCK    0x02
>> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> index 7c48c9bb29..aa24fdea70 100644
>> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> @@ -17,6 +17,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #include "vhost.h"
>>  #include "virtio_user_dev.h"
>> @@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
>>  	int ret;
>>  	struct vhost_vring_file file;
>>  	struct vhost_vring_state state;
>> -	struct vring *vring = &dev->vrings[queue_sel];
>> -	struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
>> +	struct vring *vring = &dev->vrings.split[queue_sel];
>> +	struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
>>  	struct vhost_vring_addr addr = {
>>  		.index = queue_sel,
>>  		.log_guest_addr = 0,
>> @@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
>>  		return ret;
>>  	}
>>
>> -	if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
>> -		/*
>> -		 * If the device supports control queue, the control queue
>> -		 * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
>> -		 */
>> -		PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
>> -				dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
>> -		dev->max_queue_pairs = 1;
>> -
>> -		return -1;
>> -	}
>> -
>>  	return 0;
>>  }
>>
>> @@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
>>  	return 0;
>>  }
>>
>> +static int
>> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
>> +{
>> +	int i, size, nr_vrings;
>> +
>> +	nr_vrings = dev->max_queue_pairs * 2;
>> +	if (dev->hw_cvq)
>> +		nr_vrings++;
>> +
>> +	dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
>> +	if (!dev->callfds) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
>> +		return -1;
>> +	}
>> +
>> +	dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
>> +	if (!dev->kickfds) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
>> +		goto free_callfds;
>> +	}
>> +
>> +	for (i = 0; i < nr_vrings; i++) {
>> +		dev->callfds[i] = -1;
>> +		dev->kickfds[i] = -1;
>> +	}
>> +
>> +	size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
>> +	dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
>> +	if (!dev->vrings.ptr) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
>> +		goto free_kickfds;
>> +	}
>> +
>> +	dev->packed_queues = rte_zmalloc("virtio_user_dev",
>> +			nr_vrings * sizeof(*dev->packed_queues), 0);
>
> Should we pass the info of packed vq or not, to save the alloc of
> dev->packed_queues and to know the correct size of dev->vrings.ptr?

That's not ideal because the negotiation with the Virtio layer has not
taken place yet, but it should be doable for packed ring specifically,
since it can only be disabled via the devargs, not at run time.

Thanks,
Maxime
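
For illustration, a rough sketch of the kind of allocation that would imply,
assuming a packed_ring flag derived from the devargs is already known when the
rings are allocated; the helper name and flag below are hypothetical and not
part of this patch:

/*
 * Hypothetical sketch, not part of the patch: size the vring metadata
 * from a devargs-derived packed_ring flag instead of taking the larger
 * of the two layouts, and only allocate packed_queues when packed ring
 * is actually enabled.  Assumes the driver-internal virtio_user_dev
 * definition from virtio_user_dev.h, plus <stdbool.h> and <rte_malloc.h>.
 */
static int
virtio_user_alloc_vrings_sized(struct virtio_user_dev *dev, bool packed_ring)
{
	int nr_vrings;
	size_t size;

	nr_vrings = dev->max_queue_pairs * 2;
	if (dev->hw_cvq)
		nr_vrings++;

	/* Only pay for the ring layout that will actually be used. */
	size = packed_ring ? sizeof(*dev->vrings.packed) : sizeof(*dev->vrings.split);

	dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
	if (dev->vrings.ptr == NULL)
		return -1;

	if (packed_ring) {
		dev->packed_queues = rte_zmalloc("virtio_user_dev",
				nr_vrings * sizeof(*dev->packed_queues), 0);
		if (dev->packed_queues == NULL) {
			rte_free(dev->vrings.ptr);
			dev->vrings.ptr = NULL;
			return -1;
		}
	}

	return 0;
}

The caller would take packed_ring from the devargs (e.g. the packed_vq
option) before feature negotiation, which matches the point above that
packed ring can only be disabled through the devargs, not at run time.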