From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Yuri Benditovich
Cc: Yan Vugenfirer, Andrew Rybchenko, dev@dpdk.org, chenbo.xia@intel.com,
 amorenoz@redhat.com, david.marchand@redhat.com, ferruh.yigit@intel.com,
 michaelba@nvidia.com, viacheslavo@nvidia.com, xiaoyun.li@intel.com,
 nelio.laranjeiro@6wind.com
Date: Wed, 27 Oct 2021 21:59:46 +0200
Message-ID: <480d2265-9c26-5106-3b66-64d37e06f1bd@redhat.com>
References: <20211018102045.255831-1-maxime.coquelin@redhat.com>
 <20211018102045.255831-2-maxime.coquelin@redhat.com>
 <3cf32ebd-47cd-05d0-c64f-67e9418839ba@oktetlabs.ru>
 <1424ee91-9407-593b-ab6b-817d88fc4ce3@oktetlabs.ru>
 <8ec46370-ffbb-1df2-a335-16c815db7166@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/5] net/virtio: add initial RSS support

Hi Yuri,

On 10/27/21 16:45, Yuri Benditovich wrote:
> On Wed, Oct 27, 2021 at 1:55 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>
>     Hi,
>
>     On 10/19/21 11:37, Andrew Rybchenko wrote:
>     > Hi Maxime,
>     >
>     > On 10/19/21 12:22 PM, Maxime Coquelin wrote:
>     >> Hi Andrew,
>     >>
>     >> On 10/19/21 09:30, Andrew Rybchenko wrote:
>     >>> On 10/18/21 1:20 PM, Maxime Coquelin wrote:
>     >>>> Provide the capability to update the hash key, hash types
>     >>>> and RETA table on the fly (without needing to stop/start
>     >>>> the device). However, the key length and the number of RETA
>     >>>> entries are fixed to 40 bytes and 128 entries, respectively.
>     >>>> This is done to simplify the design, but it may be revisited
>     >>>> later since the Virtio spec provides this flexibility.
>     >>>>
>     >>>> Note that only VIRTIO_NET_F_RSS support is implemented.
>     >>>> VIRTIO_NET_F_HASH_REPORT, which would enable reporting the
>     >>>> packet RSS hash calculated by the device into mbuf.rss, is
>     >>>> not yet supported.
>     >>>>
>     >>>> Regarding the default RSS configuration, the default Intel
>     >>>> ixgbe key is used as the default key, and the default RETA
>     >>>> is a simple modulo between the hash and the number of Rx
>     >>>> queues.
>     >>>>
>     >>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
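
(Side note, since the discussion below touches this area: the default
RETA described above boils down to something like the following sketch.
The function name is made up for illustration; VIRTIO_NET_RSS_RETA_SIZE
is the 128-entry constant visible in the patch snippet below.)

    #include <stdint.h>

    #define VIRTIO_NET_RSS_RETA_SIZE 128

    /* Default RETA: entry i maps to Rx queue (i % nb_rx_queues), so
     * the device's indirection lookup spreads flows round-robin over
     * all configured Rx queues. */
    static void
    virtio_rss_reta_default(uint16_t *reta, uint16_t nb_rx_queues)
    {
        for (uint16_t i = 0; i < VIRTIO_NET_RSS_RETA_SIZE; i++)
            reta[i] = i % nb_rx_queues;
    }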
>     >
>     > [snip]
>     >
>     >>>> +    rss.unclassified_queue = 0;
>     >>>> +    memcpy(rss.indirection_table, hw->rss_reta,
>     >>>> +           VIRTIO_NET_RSS_RETA_SIZE * sizeof(uint16_t));
>     >>>> +    rss.max_tx_vq = nb_queues;
>     >>>
>     >>> Is it guaranteed that the driver is configured with an equal
>     >>> number of Rx and Tx queues? Or is it not a problem otherwise?
>     >>
>     >> Virtio networking devices work with queue pairs.
>     >
>     > But it seems to me that I can still configure just 1 Rx queue
>     > and many Tx queues, i.e. non-equal counts.
>     > The line is suspicious since I'd expect nb_queues to be the
>     > number of Rx queues in this function, but we set the max_tx_vq
>     > count here.
>
>     The Virtio spec says:
>     "
>     A driver sets max_tx_vq to inform a device how many transmit
>     virtqueues it may use (transmitq1. . .transmitq max_tx_vq).
>     "
>
>     But looking at the QEMU side, its value is interpreted as the
>     number of queue pairs set up by the driver, the same way
>     virtqueue_pairs of struct virtio_net_ctrl_mq is handled when RSS
>     is not supported.
>
>     In this patch we are compatible with what is done in QEMU, and
>     with what is done for multiqueue when RSS is not enabled.
>
>     I don't get why the spec talks about transmit queues. Yan & Yuri,
>     any idea?
>
> Indeed, the QEMU reference code uses max_tx_vq as a number of queue
> pairs, the same way it uses the parameter of the _MQ command.
> Mainly this is related to the vhost start flow, which assumes that
> there is some number of ready vq pairs.
> When the driver sets max_tx_vq, it guarantees that it does not use
> more than max_tx_vq Tx queues.
> The actual Rx queues that will be used can be taken from the
> indirection table, which contains the indices of Rx queues.

Thanks for the quick reply.

Then setting it to MAX(nb_rx_queue, nb_tx_queue) is compliant with both
the spec and the QEMU implementation.
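
Concretely, I would change the snippet above along these lines (a
sketch only: the helper name is made up, RTE_MAX is the standard DPDK
macro, and the rte_eth_dev pointer is assumed to be reachable at that
point of the configure path):

    #include <rte_common.h>   /* RTE_MAX */
    #include <rte_ethdev.h>   /* struct rte_eth_dev */

    /* QEMU treats max_tx_vq as a queue-pair count, so it must cover
     * both the Rx and the Tx queue counts. */
    static uint16_t
    virtio_rss_max_tx_vq(const struct rte_eth_dev *dev)
    {
        return RTE_MAX(dev->data->nb_rx_queues,
                       dev->data->nb_tx_queues);
    }

so that the assignment becomes:

    rss.max_tx_vq = virtio_rss_max_tx_vq(dev);

Maxime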