Subject: Re: [dpdk-dev] [PATCH v6 1/5] ethdev: introduce shared Rx queue
From: Andrew Rybchenko
Organization: OKTET Labs
To: "Xueming(Steven) Li", NBU-Contact-Thomas Monjalon
Cc: jerinjacobk@gmail.com, Lior Margalit, Slava Ovsiienko,
    konstantin.ananyev@intel.com, dev@dpdk.org, ferruh.yigit@intel.com
Date: Mon, 18 Oct 2021 09:46:34 +0300
In-Reply-To: <305b925bef72ca4a0c17ca359a5f2ddb8235b12e.camel@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211012143942.1133718-1-xuemingl@nvidia.com>
 <20211012143942.1133718-2-xuemingl@nvidia.com>
 <305b925bef72ca4a0c17ca359a5f2ddb8235b12e.camel@nvidia.com>

On 10/15/21 1:54 PM, Xueming(Steven) Li wrote:
> On Fri, 2021-10-15 at 12:28 +0300, Andrew Rybchenko wrote:
>> On 10/12/21 5:39 PM, Xueming Li wrote:
>>> In the current DPDK framework, each Rx queue is pre-loaded with mbufs
>>> to save incoming packets. For some PMDs, when the number of
>>> representors in a switch domain scales out, the memory consumption
>>> becomes significant. Polling all ports also leads to high cache miss
>>> rates, high latency and low throughput.
>>>
>>> This patch introduces shared Rx queues. Ports in the same Rx domain
>>> and switch domain can share an Rx queue set by specifying a non-zero
>>> share group in the Rx queue configuration.
>>>
>>> No special API is defined to receive packets from a shared Rx queue.
>>> Polling any member port of a shared Rx queue receives packets of that
>>> queue for all member ports; the source port is identified by
>>> mbuf->port.
>>>
>>> A shared Rx queue must be polled from the same thread or core;
>>> polling the queue ID of any member port is essentially the same.
>>>
>>> Multiple share groups are supported by non-zero share group ID. Device
>>
>> "by non-zero share group ID" is not required, since it must
>> always be non-zero to enable sharing.
>>
>>> should support mixed configuration by allowing multiple share
>>> groups and non-shared Rx queues.
>>>
>>> Even Rx queue shared, queue configuration like offloads and RSS should
>>> not be impacted.
>>
>> I don't understand the above sentence.
>> Even when Rx queues are shared, queue configuration like
>> offloads and RSS may differ. If a PMD has some limitation,
>> it should take care of consistency itself. These limitations
>> should be documented in the PMD documentation.
>>
>
> OK, I'll remove this line.
>
>>>
>>> Example grouping and polling model to reflect service priority:
>>>   Group1, 2 shared Rx queues per port: PF, rep0, rep1
>>>   Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
>>>   Core0: poll PF queue0
>>>   Core1: poll PF queue1
>>>   Core2: poll rep2 queue0
>>
>> Can I have:
>>   PF    RxQ#0, RxQ#1
>>   Rep0  RxQ#0 shared with PF RxQ#0
>>   Rep1  RxQ#0 shared with PF RxQ#1
>>
>> I guess not, since it looks like the RxQ ID must be equal.
>> Or am I missing something? Otherwise the grouping rules
>> are not obvious to me. Maybe we need a dedicated
>> shared_qid within the boundaries of the share_group?
>
> Yes, the RxQ ID must be equal; the following configuration should work:
>   Rep1 RxQ#1 shared with PF RxQ#1

But I want just one RxQ on Rep1. I don't need two.

> Equal mapping should work by default instead of a new field that must
> be set. I'll add some description to emphasize this; what do you think?

Sorry for the delay with my reply.

I think the above limitation is not nice. It is better to avoid it.

[snip]
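
For readers following the thread, here is a minimal sketch of the
configuration and polling flow under discussion. It assumes the
share_group field proposed for struct rte_eth_rxconf in this series;
the field name and exact semantics are taken from the patch description
above and may differ in the final version, so treat this as
illustrative only, not the merged API.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/*
 * Illustrative only: configure Rx queue 0 of @port_id as a member of
 * shared Rx queue group 1.  Per the patch description, a non-zero
 * share group in the Rx queue configuration enables sharing; the
 * "share_group" field name follows this series and should be checked
 * against the ethdev headers actually applied.
 */
static int
setup_shared_rxq(uint16_t port_id, uint16_t nb_rxd, struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	rxconf = dev_info.default_rxconf;
	rxconf.share_group = 1;	/* non-zero: join shared Rx queue group 1 */

	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mp);
}

/*
 * Polling any member port of the group is expected to return packets
 * for all member ports; mbuf->port identifies which member actually
 * received each packet.
 */
static void
poll_shared_rxq(uint16_t any_member_port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(any_member_port_id, 0, pkts, BURST_SIZE);
	for (i = 0; i < nb_rx; i++) {
		/* pkts[i]->port is the source member port (PF, rep0, ...);
		 * demultiplex per-representor processing on it here. */
		rte_pktmbuf_free(pkts[i]);
	}
}

With such a setup, only one member port of the group needs to be polled
per queue; traffic for the PF and the representors arrives through the
same burst and is demultiplexed on mbuf->port.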