From: Ajit Khaparde
Date: Sat, 16 Oct 2021 22:33:15 -0700
Subject: Re: [dpdk-dev] [PATCH v7 1/5] ethdev: introduce shared Rx queue
To: Xueming Li
Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Ananyev Konstantin
In-Reply-To: <20211016084237.1808161-2-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211016084237.1808161-1-xuemingl@nvidia.com> <20211016084237.1808161-2-xuemingl@nvidia.com>
List-Id: DPDK patches and discussions

On Sat, Oct 16, 2021 at 1:43 AM Xueming Li wrote:
>
> In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
> save incoming packets. For some PMDs, when the number of representors
> scales out in a switch domain, the memory consumption becomes
> significant. Polling all ports also leads to high cache miss rates, high
> latency and low throughput.
>
> This patch introduces shared Rx queues. Ports in the same Rx domain and
> switch domain can share an Rx queue set by specifying a non-zero share
> group in the Rx queue configuration.
>
> Port A RxQ X can share an RxQ with Port B RxQ X, but cannot share with
> RxQ Y. All member ports in a share group share a list of shared Rx
> queues indexed by Rx queue ID.
>
> No special API is defined to receive packets from a shared Rx queue.
> Polling any member port of a shared Rx queue receives packets of that
> queue for all member ports; the source port is identified by mbuf->port.

Is this port the physical port which received the packet?
Or does this port number correlate with the port_id seen by the application?
>
> A shared Rx queue must be polled in the same thread or core; polling a
> queue ID of any member port is essentially the same.

So it is up to the application to poll the queue of any member port,
all ports, or a designated port to handle Rx?

>
> Multiple share groups are supported. A device should support mixed
> configurations by allowing multiple share groups and non-shared Rx
> queues.
>
> Example grouping and polling model to reflect service priority:
>  Group1, 2 shared Rx queues per port: PF, rep0, rep1
>  Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
>  Core0: poll PF queue0
>  Core1: poll PF queue1
>  Core2: poll rep2 queue0
>
> The PMD advertises the shared Rx queue capability via
> RTE_ETH_DEV_CAPA_RXQ_SHARE.
>
> The PMD is responsible for shared Rx queue consistency checks so that
> member ports' configurations do not contradict each other.
>
> Signed-off-by: Xueming Li
> ---
>  doc/guides/nics/features.rst              | 13 ++++++++++++
>  doc/guides/nics/features/default.ini      |  1 +
>  .../prog_guide/switch_representation.rst  | 10 +++++++++
>  doc/guides/rel_notes/release_21_11.rst    |  6 ++++++
>  lib/ethdev/rte_ethdev.c                   |  8 +++++++
>  lib/ethdev/rte_ethdev.h                   | 21 +++++++++++++++++++
>  6 files changed, 59 insertions(+)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index e346018e4b8..b64433b8ea5 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -615,6 +615,19 @@ Supports inner packet L4 checksum.
>    ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_shared_rx_queue:
> +
> +Shared Rx queue
> +---------------
> +
> +Supports shared Rx queues for ports in the same Rx domain of a switch domain.
> +
> +* **[uses] rte_eth_dev_info**: ``dev_capa:RTE_ETH_DEV_CAPA_RXQ_SHARE``.
> +* **[uses] rte_eth_dev_info, rte_eth_switch_info**: ``rx_domain``, ``domain_id``.
> +* **[uses] rte_eth_rxconf**: ``share_group``.
> +* **[provides] mbuf**: ``mbuf.port``.
> +
> +
> .. _nic_features_packet_type_parsing:
>
> Packet type parsing
> diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
> index d473b94091a..93f5d1b46f4 100644
> --- a/doc/guides/nics/features/default.ini
> +++ b/doc/guides/nics/features/default.ini
> @@ -19,6 +19,7 @@ Free Tx mbuf on demand =
>  Queue start/stop     =
>  Runtime Rx queue setup =
>  Runtime Tx queue setup =
> +Shared Rx queue      =
>  Burst mode info      =
>  Power mgmt address monitor =
>  MTU update           =
> diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
> index ff6aa91c806..de41db8385d 100644
> --- a/doc/guides/prog_guide/switch_representation.rst
> +++ b/doc/guides/prog_guide/switch_representation.rst
> @@ -123,6 +123,16 @@ thought as a software "patch panel" front-end for applications.
> .. [1] `Ethernet switch device driver model (switchdev)
>    `_
>
> +- For some PMDs, memory usage of representors is huge when the number of
> +  representors grows, since mbufs are allocated for each descriptor of an
> +  Rx queue. Polling a large number of ports brings more CPU load, cache
> +  misses and latency. A shared Rx queue can be used to share an Rx queue
> +  between the PF and representors in the same Rx domain.
> +  ``RTE_ETH_DEV_CAPA_RXQ_SHARE`` is present in the device capability of
> +  device info. Set a non-zero share group in the Rx queue configuration
> +  to enable sharing. Polling any member port can receive packets of all
> +  member ports in the group; the source port ID is saved in ``mbuf.port``.
> +
> Basic SR-IOV
> ------------
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 4c56cdfeaaa..1c84e896554 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -67,6 +67,12 @@ New Features
>    * Modified to allow ``--huge-dir`` option to specify a sub-directory
>      within a hugetlbfs mountpoint.
>
> +* **Added ethdev shared Rx queue support.**
> +
> +  * Added new device capability flag and Rx domain field to switch info.
> +  * Added share group to Rx queue configuration.
> +  * Added testpmd support and a dedicated forwarding engine.
> +
> * **Added new RSS offload types for IPv4/L4 checksum in RSS flow.**
>
>   Added macros ETH_RSS_IPV4_CHKSUM and ETH_RSS_L4_CHKSUM, now IPv4 and
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 028907bc4b9..bc55f899f72 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -2159,6 +2159,14 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>                 return -EINVAL;
>         }
>
> +       if (local_conf.share_group > 0 &&
> +           (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) {
> +               RTE_ETHDEV_LOG(ERR,
> +                       "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n",
> +                       port_id, rx_queue_id, local_conf.share_group);
> +               return -EINVAL;
> +       }
> +
>         /*
>          * If LRO is enabled, check that the maximum aggregated packet
>          * size is supported by the configured device.
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 6d80514ba7a..59d8904ac7c 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1044,6 +1044,13 @@ struct rte_eth_rxconf {
>         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
>         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
>         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
> +       /**
> +        * Share group index in Rx domain and switch domain.
> +        * A non-zero value enables Rx queue sharing; zero disables it.
> +        * The PMD is responsible for Rx queue consistency checks to avoid
> +        * member ports' configurations contradicting each other.
> +        */
> +       uint16_t share_group;
>         /**
>          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
>          * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> @@ -1445,6 +1452,14 @@ struct rte_eth_conf {
> #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP 0x00000001
> /** Device supports Tx queue setup after device started. */
> #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP 0x00000002
> +/**
> + * Device supports shared Rx queues among ports within an Rx domain and
> + * switch domain. Mbufs are consumed by the shared Rx queue instead of
> + * every port. Multiple groups are supported via share_group of the Rx
> + * queue configuration. Polling any port in the group receives packets
> + * of all member ports; the source port is identified by the mbuf->port
> + * field.
> + */
> +#define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2)
> /**@}*/
>
> /*
> @@ -1488,6 +1503,12 @@ struct rte_eth_switch_info {
>          * but each driver should explicitly define the mapping of switch
>          * port identifier to that physical interconnect/switch
>          */
> +       /**
> +        * Shared Rx queue sub-domain boundary. Only ports in the same Rx
> +        * domain and switch domain can share an Rx queue. Valid only if the
> +        * device advertises the RTE_ETH_DEV_CAPA_RXQ_SHARE capability.
> +        */
> +       uint16_t rx_domain;
> };
>
> /**
> --
> 2.33.0
>