From: Andrew Rybchenko
Organization: OKTET Labs
To: Xueming Li, Thomas Monjalon
Cc: Jerin Jacob, Ferruh Yigit, Viacheslav Ovsiienko, Lior Margalit, Ananyev Konstantin, dev@dpdk.org
Date: Fri, 15 Oct 2021 12:28:50 +0300
Subject: Re: [dpdk-dev] [PATCH v6 1/5]
ethdev: introduce shared Rx queue

On 10/12/21 5:39 PM, Xueming Li wrote:
> In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
> save incoming packets. For some PMDs, when the number of representors
> scales out in a switch domain, the memory consumption becomes
> significant. Polling all ports also leads to high cache miss rates,
> high latency and low throughput.
>
> This patch introduces the shared Rx queue. Ports in the same Rx domain
> and switch domain can share an Rx queue set by specifying a non-zero
> sharing group in the Rx queue configuration.
>
> No special API is defined to receive packets from a shared Rx queue.
> Polling any member port of a shared Rx queue receives packets of that
> queue for all member ports; the source port is identified by mbuf->port.
>
> A shared Rx queue must be polled in the same thread or core; polling a
> queue ID of any member port is essentially the same.
>
> Multiple share groups are supported by non-zero share group ID. Device

"by non-zero share group ID" is not required, since it must always be
non-zero to enable sharing.

> should support mixed configuration by allowing multiple share
> groups and non-shared Rx queue.
>
> Even Rx queue shared, queue configuration like offloads and RSS should
> not be impacted.

I don't understand the above sentence. Even when Rx queues are shared,
queue configuration like offloads and RSS may differ. If a PMD has some
limitation, it should take care of consistency itself, and these
limitations should be documented in the PMD documentation.

>
> Example grouping and polling model to reflect service priority:
>     Group1, 2 shared Rx queues per port: PF, rep0, rep1
>     Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
>     Core0: poll PF queue0
>     Core1: poll PF queue1
>     Core2: poll rep2 queue0

Can I have:
    PF      RxQ#0, RxQ#1
    Rep0    RxQ#0 shared with PF RxQ#0
    Rep1    RxQ#0 shared with PF RxQ#1
I guess not, since it looks like the RxQ ID must be equal. Or am I
missing something? Otherwise the grouping rules are not obvious to me.
Maybe we need a dedicated shared_qid within the boundaries of the
share_group?

>
> The PMD driver advertises the shared Rx queue capability via
> RTE_ETH_DEV_CAPA_RXQ_SHARE.
>
> The PMD driver is responsible for shared Rx queue consistency checks,
> to avoid member ports' configurations contradicting each other.
>
> Signed-off-by: Xueming Li
> ---
>  doc/guides/nics/features.rst | 13 ++++++++++++
>  doc/guides/nics/features/default.ini | 1 +
>  .../prog_guide/switch_representation.rst | 10 +++++++++
>  doc/guides/rel_notes/release_21_11.rst | 5 +++++
>  lib/ethdev/rte_ethdev.c | 9 ++++++++
>  lib/ethdev/rte_ethdev.h | 21 +++++++++++++++++++
>  6 files changed, 59 insertions(+)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index e346018e4b8..b64433b8ea5 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -615,6 +615,19 @@ Supports inner packet L4 checksum.
>    ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_shared_rx_queue:
> +
> +Shared Rx queue
> +---------------
> +
> +Supports shared Rx queue for ports in same Rx domain of a switch domain.
> +
> +* **[uses] rte_eth_dev_info**: ``dev_capa:RTE_ETH_DEV_CAPA_RXQ_SHARE``.
> +* **[uses] rte_eth_dev_info,rte_eth_switch_info**: ``rx_domain``, ``domain_id``.
> +* **[uses] rte_eth_rxconf**: ``share_group``.
> +* **[provides] mbuf**: ``mbuf.port``.
> +
> +
>  .. _nic_features_packet_type_parsing:
>
>  Packet type parsing
> diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
> index d473b94091a..93f5d1b46f4 100644
> --- a/doc/guides/nics/features/default.ini
> +++ b/doc/guides/nics/features/default.ini
> @@ -19,6 +19,7 @@ Free Tx mbuf on demand =
>  Queue start/stop     =
>  Runtime Rx queue setup =
>  Runtime Tx queue setup =
> +Shared Rx queue      =
>  Burst mode info      =
>  Power mgmt address monitor =
>  MTU update           =
> diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
> index ff6aa91c806..de41db8385d 100644
> --- a/doc/guides/prog_guide/switch_representation.rst
> +++ b/doc/guides/prog_guide/switch_representation.rst
> @@ -123,6 +123,16 @@ thought as a software "patch panel" front-end for applications.
>  .. [1] `Ethernet switch device driver model (switchdev)
>     `_
>
> +- For some PMDs, memory usage of representors is huge when number of
> +  representor grows, mbufs are allocated for each descriptor of Rx queue.
> +  Polling large number of ports brings more CPU load, cache miss and
> +  latency. Shared Rx queue can be used to share Rx queue between PF and
> +  representors among same Rx domain. ``RTE_ETH_DEV_CAPA_RXQ_SHARE`` is
> +  present in device capability of device info. Setting non-zero share group
> +  in Rx queue configuration to enable share. Polling any member port can
> +  receive packets of all member ports in the group, port ID is saved in
> +  ``mbuf.port``.
> +
>  Basic SR-IOV
>  ------------
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 5036641842c..d72fc97f4fb 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -141,6 +141,11 @@ New Features
>    * Added tests to validate packets hard expiry.
>    * Added tests to verify tunnel header verification in IPsec inbound.
>
> +* **Added ethdev shared Rx queue support.**
> +
> +  * Added new device capability flag and rx domain field to switch info.
> +  * Added share group to Rx queue configuration.
> +  * Added testpmd support and dedicate forwarding engine.

Please add one more empty line, since there must be two before the
next section. Also, it should be put after the last ethdev item above,
since the list of features has a defined order.

>
>  Removed Items
>  -------------
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 028907bc4b9..9b1b66370a7 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -2159,6 +2159,15 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		return -EINVAL;
>  	}
>
> +	if (local_conf.share_group > 0 &&
> +	    (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%u while device doesn't support Rx queue share in %s()\n",
> +			port_id, rx_queue_id, local_conf.share_group,
> +			__func__);

I'd remove the function name logging here; the log is unique enough.

> +		return -EINVAL;
> +	}
> +
>  	/*
>  	 * If LRO is enabled, check that the maximum aggregated packet
>  	 * size is supported by the configured device.
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 6d80514ba7a..041da6ee52f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1044,6 +1044,13 @@ struct rte_eth_rxconf {
>  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
>  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
>  	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
> +	/**
> +	 * Share group index in Rx domain and switch domain.
> +	 * Non-zero value to enable Rx queue share, zero value disable share.
> +	 * PMD driver is responsible for Rx queue consistency checks to avoid
> +	 * member port's configuration contradict to each other.
> +	 */
> +	uint32_t share_group;

I think that we don't need 32 bits for share groups; 16 bits sounds
more than enough.

>  	/**
>  	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
>  	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> @@ -1445,6 +1452,14 @@ struct rte_eth_conf {
>  #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
>  /** Device supports Tx queue setup after device started. */
>  #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
> +/**
> + * Device supports shared Rx queue among ports within Rx domain and
> + * switch domain. Mbufs are consumed by shared Rx queue instead of
> + * every port. Multiple groups is supported by share_group of Rx
> + * queue configuration. Polling any port in the group receive packets
> + * of all member ports, source port identified by mbuf->port field.
> + */
> +#define RTE_ETH_DEV_CAPA_RXQ_SHARE 0x00000004

Let's use RTE_BIT64(2). I think the two flags above should be fixed in
a separate cleanup patch.

>  /**@}*/
>
>  /*
> @@ -1488,6 +1503,12 @@ struct rte_eth_switch_info {
>  	 * but each driver should explicitly define the mapping of switch
>  	 * port identifier to that physical interconnect/switch
>  	 */
> +	uint16_t rx_domain;
> +	/**<
> +	 * Shared Rx queue sub-domain boundary. Only ports in same Rx domain
> +	 * and switch domain can share Rx queue. Valid only if device advertised
> +	 * RTE_ETH_DEV_CAPA_RXQ_SHARE capability.
> +	 */

Please put the documentation before the documented field.

[snip]
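For readers following the thread, the receive model under discussion
(one poll returning packets of all member ports, demultiplexed by
mbuf->port) can be sketched in plain C. This is a toy model, not DPDK
code: struct pkt, shared_rx_burst() and demux_by_port() are
hypothetical stand-ins for the real mbuf and rx-burst API, and the
fake burst contents are arbitrary.

```c
#include <stdint.h>

#define MAX_PORTS 8

/* Stand-in for struct rte_mbuf; only the port field matters here. */
struct pkt {
	uint16_t port; /* source member port, as mbuf->port would report */
};

/*
 * Toy model of one receive burst on a shared Rx queue: a single poll on
 * any member port returns packets belonging to every member port of the
 * share group (here ports 0 = PF, 1 and 2 = representors).
 */
static uint16_t
shared_rx_burst(struct pkt *burst, uint16_t cap)
{
	static const uint16_t src_port[] = { 0, 1, 0, 2, 1, 0 };
	uint16_t n = sizeof(src_port) / sizeof(src_port[0]);
	uint16_t i;

	if (n > cap)
		n = cap;
	for (i = 0; i < n; i++)
		burst[i].port = src_port[i];
	return n;
}

/*
 * Demultiplex a burst by source port, as an application polling a
 * shared Rx queue has to do; counts packets seen per member port.
 */
static void
demux_by_port(const struct pkt *burst, uint16_t n,
	      unsigned int cnt[MAX_PORTS])
{
	uint16_t i;

	for (i = 0; i < n; i++)
		if (burst[i].port < MAX_PORTS)
			cnt[burst[i].port]++;
}
```

The point the sketch makes is that per-port accounting survives queue
sharing only because each packet carries its source port, which is why
the feature documents "[provides] mbuf: ``mbuf.port``".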