DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Xia, Chenbo" <chenbo.xia@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>,
	"eperezma@redhat.com" <eperezma@redhat.com>
Subject: Re: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
Date: Tue, 7 Feb 2023 15:14:25 +0100	[thread overview]
Message-ID: <6e3a002b-c137-1903-a37e-1898975b906c@redhat.com> (raw)
In-Reply-To: <SN6PR11MB35046D1A8D2855D2A808D0FD9CD09@SN6PR11MB3504.namprd11.prod.outlook.com>



On 1/31/23 06:19, Xia, Chenbo wrote:
> Hi Maxime,
> 
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, November 30, 2022 11:57 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com; eperezma@redhat.com
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v1 21/21] net/virtio-user: remove max queues limitation
>>
>> This patch removes the limitation of 8 queue pairs by
>> dynamically allocating vring metadata once we know the
>> maximum number of queue pairs supported by the backend.
>>
>> This is especially useful for Vhost-vDPA with physical
>> devices, where the number of supported queue pairs may
>> be much higher than 8.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   drivers/net/virtio/virtio.h                   |   6 -
>>   .../net/virtio/virtio_user/virtio_user_dev.c  | 118 ++++++++++++++----
>>   .../net/virtio/virtio_user/virtio_user_dev.h  |  16 +--
>>   drivers/net/virtio/virtio_user_ethdev.c       |  17 +--
>>   4 files changed, 109 insertions(+), 48 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
>> index 5c8f71a44d..04a897bf51 100644
>> --- a/drivers/net/virtio/virtio.h
>> +++ b/drivers/net/virtio/virtio.h
>> @@ -124,12 +124,6 @@
>>   	VIRTIO_NET_HASH_TYPE_UDP_EX)
>>
>>
>> -/*
>> - * Maximum number of virtqueues per device.
>> - */
>> -#define VIRTIO_MAX_VIRTQUEUE_PAIRS 8
>> -#define VIRTIO_MAX_VIRTQUEUES (VIRTIO_MAX_VIRTQUEUE_PAIRS * 2 + 1)
>> -
>>   /* VirtIO device IDs. */
>>   #define VIRTIO_ID_NETWORK  0x01
>>   #define VIRTIO_ID_BLOCK    0x02
>> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> index 7c48c9bb29..aa24fdea70 100644
>> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
>> @@ -17,6 +17,7 @@
>>   #include <rte_alarm.h>
>>   #include <rte_string_fns.h>
>>   #include <rte_eal_memconfig.h>
>> +#include <rte_malloc.h>
>>
>>   #include "vhost.h"
>>   #include "virtio_user_dev.h"
>> @@ -58,8 +59,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
>>   	int ret;
>>   	struct vhost_vring_file file;
>>   	struct vhost_vring_state state;
>> -	struct vring *vring = &dev->vrings[queue_sel];
>> -	struct vring_packed *pq_vring = &dev->packed_vrings[queue_sel];
>> +	struct vring *vring = &dev->vrings.split[queue_sel];
>> +	struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
>>   	struct vhost_vring_addr addr = {
>>   		.index = queue_sel,
>>   		.log_guest_addr = 0,
>> @@ -299,18 +300,6 @@ virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_
>>   		return ret;
>>   	}
>>
>> -	if (dev->max_queue_pairs > VIRTIO_MAX_VIRTQUEUE_PAIRS) {
>> -		/*
>> -		 * If the device supports control queue, the control queue
>> -		 * index is max_virtqueue_pairs * 2. Disable MQ if it happens.
>> -		 */
>> -		PMD_DRV_LOG(ERR, "(%s) Device advertises too many queues (%u, max supported %u)",
>> -				dev->path, dev->max_queue_pairs, VIRTIO_MAX_VIRTQUEUE_PAIRS);
>> -		dev->max_queue_pairs = 1;
>> -
>> -		return -1;
>> -	}
>> -
>>   	return 0;
>>   }
>>
>> @@ -579,6 +568,86 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
>>   	return 0;
>>   }
>>
>> +static int
>> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
>> +{
>> +	int i, size, nr_vrings;
>> +
>> +	nr_vrings = dev->max_queue_pairs * 2;
>> +	if (dev->hw_cvq)
>> +		nr_vrings++;
>> +
>> +	dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
>> +	if (!dev->callfds) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
>> +		return -1;
>> +	}
>> +
>> +	dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
>> +	if (!dev->kickfds) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
>> +		goto free_callfds;
>> +	}
>> +
>> +	for (i = 0; i < nr_vrings; i++) {
>> +		dev->callfds[i] = -1;
>> +		dev->kickfds[i] = -1;
>> +	}
>> +
>> +	size = RTE_MAX(sizeof(*dev->vrings.split), sizeof(*dev->vrings.packed));
>> +	dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
>> +	if (!dev->vrings.ptr) {
>> +		PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
>> +		goto free_kickfds;
>> +	}
>> +
>> +	dev->packed_queues = rte_zmalloc("virtio_user_dev",
>> +			nr_vrings * sizeof(*dev->packed_queues), 0);
> 
> Should we pass in the info on whether the packed vq is used? That would
> save the allocation of dev->packed_queues and let us know the correct
> size for dev->vrings.ptr.

That's not ideal because the negotiation hasn't taken place yet with
the Virtio layer, but it should be doable for the packed ring
specifically, since it can only be disabled via the devargs, not at
run time.

Thanks,
Maxime



Thread overview: 48+ messages
2022-11-30 15:56 [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 01/21] net/virtio: move CVQ code into a dedicated file Maxime Coquelin
2023-01-30  7:50   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 02/21] net/virtio: introduce notify callback for control queue Maxime Coquelin
2023-01-30  7:51   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 03/21] net/virtio: virtqueue headers alloc refactoring Maxime Coquelin
2023-01-30  7:51   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 04/21] net/virtio: remove port ID info from Rx queue Maxime Coquelin
2023-01-30  7:51   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 05/21] net/virtio: remove unused fields in Tx queue struct Maxime Coquelin
2023-01-30  7:51   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 06/21] net/virtio: remove unused queue ID field in Rx queue Maxime Coquelin
2023-01-30  7:52   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 07/21] net/virtio: remove unused Port ID in control queue Maxime Coquelin
2023-01-30  7:52   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 08/21] net/virtio: move vring memzone to virtqueue struct Maxime Coquelin
2023-01-30  7:52   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 09/21] net/virtio: refactor indirect desc headers init Maxime Coquelin
2023-01-30  7:52   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 10/21] net/virtio: alloc Rx SW ring only if vectorized path Maxime Coquelin
2023-01-30  7:49   ` Xia, Chenbo
2023-02-07 10:12     ` Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 11/21] net/virtio: extract virtqueue init from virtio queue init Maxime Coquelin
2023-01-30  7:53   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 12/21] net/virtio-user: fix device starting failure handling Maxime Coquelin
2023-01-31  5:20   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 13/21] net/virtio-user: simplify queues setup Maxime Coquelin
2023-01-31  5:21   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 14/21] net/virtio-user: use proper type for number of queue pairs Maxime Coquelin
2023-01-31  5:21   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 15/21] net/virtio-user: get max number of queue pairs from device Maxime Coquelin
2023-01-31  5:21   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 16/21] net/virtio-user: allocate shadow control queue Maxime Coquelin
2023-01-31  5:21   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 17/21] net/virtio-user: send shadow virtqueue info to the backend Maxime Coquelin
2023-01-31  5:22   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 18/21] net/virtio-user: add new callback to enable control queue Maxime Coquelin
2023-01-31  5:22   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 19/21] net/virtio-user: forward control messages to shadow queue Maxime Coquelin
2022-11-30 16:54   ` Stephen Hemminger
2022-12-06 12:58     ` Maxime Coquelin
2022-11-30 15:56 ` [PATCH v1 20/21] net/virtio-user: advertize control VQ support with vDPA Maxime Coquelin
2023-01-31  5:24   ` Xia, Chenbo
2022-11-30 15:56 ` [PATCH v1 21/21] net/virtio-user: remove max queues limitation Maxime Coquelin
2023-01-31  5:19   ` Xia, Chenbo
2023-02-07 14:14     ` Maxime Coquelin [this message]
2023-01-30  5:57 ` [PATCH v1 00/21] Add control queue & MQ support to Virtio-user vDPA Xia, Chenbo
2023-02-07 10:08   ` Maxime Coquelin
