From: Jerin Jacob
Date: Tue, 17 Aug 2021 20:41:57 +0530
To: "Xueming(Steven) Li"
Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210811140418.393264-1-xuemingl@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue

On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li wrote:
>
> > -----Original Message-----
> > From: Jerin Jacob
> > Sent: Tuesday, August 17, 2021 5:33 PM
> > To: Xueming(Steven) Li
> > Cc: dpdk-dev; Ferruh Yigit; NBU-Contact-Thomas Monjalon; Andrew Rybchenko
> > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> >
> > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li wrote:
> > >
> > > In the current DPDK framework, each Rx queue is pre-loaded with mbufs
> > > for incoming packets. When the number of representors scales out in a
> > > switch domain, the memory consumption becomes significant. Most
> > > importantly, polling all ports leads to high cache-miss rates, high
> > > latency and low throughput.
> > >
> > > This patch introduces the shared Rx queue.
> > > Ports with the same configuration in a switch domain can share an
> > > Rx queue set by specifying a sharing group. Polling any queue that
> > > uses the same shared Rx queue receives packets from all member
> > > ports. The source port is identified by mbuf->port.
> > >
> > > The queue number of each port in a shared group should be identical,
> > > and queue indexes are mapped 1:1 within the group.
> > >
> > > A shared Rx queue must be polled on a single thread or core.
> > >
> > > Multiple groups are supported via the group ID.
> > >
> > > Signed-off-by: Xueming Li
> > > Cc: Jerin Jacob
> > > ---
> > > The Rx queue object could be used as the shared Rx queue object; it's
> > > important to clarify all queue control callback APIs that use the
> > > queue object:
> > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > >
> > >  #undef RTE_RX_OFFLOAD_BIT2STR
> > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > > index d2b27c351f..a578c9db9d 100644
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > >  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > >  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > >  	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
> > > +	uint32_t shared_group; /**< Shared port group index in switch domain. */
> >
> > Not able to see anyone setting/creating this group ID in a test
> > application. How is this group created?
>
> Nice catch, the initial testpmd version only supports one default
> group (0). All ports that support shared-rxq are assigned to the same
> group.
>
> We should be able to change "--rxq-shared" to "--rxq-shared-group" to
> support groups other than the default.
>
> To support more groups simultaneously, we need to consider testpmd
> forwarding-stream core assignment; all streams in the same group need
> to stay on the same core. It's possible to specify how many ports go
> into each group to increase the group number, but the user must
> schedule stream affinity carefully - error prone.
>
> On the other hand, one group should be sufficient for most customers;
> the doubt is whether it is valuable to support testing multiple groups.

Ack. One group is enough in testpmd.

My question was more about who creates this group and how. Shouldn't we
need an API to create the shared_group?

If we do the following, at least I can see how it can be implemented in
SW or other HW:

- Create an aggregation queue group
- Attach multiple Rx queues to the aggregation queue group
- Pull the packets from the queue group (which internally fetches from
  the Rx queues _attached_)

Does the above kind of sequence break your representor use case?

> > >
> > >  /**
> > >   * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > >   * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> > > @@ -1373,6 +1374,12 @@ struct rte_eth_conf {
> > >  #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > >  #define DEV_RX_OFFLOAD_RSS_HASH         0x00080000
> > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > +/**
> > > + * Rx queue is shared among ports in same switch domain to save memory,
> > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > + * Real source port number saved in mbuf->port field.
> > > + */
> > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ 0x00200000
> > >
> > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > >                                   DEV_RX_OFFLOAD_UDP_CKSUM | \
> > > --
> > > 2.25.1
> > >
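
A minimal usage sketch of the proposed API, assuming the v2 patch as posted
above (the shared_group field in struct rte_eth_rxconf plus the
RTE_ETH_RX_OFFLOAD_SHARED_RXQ offload flag). The port IDs, descriptor count
and mempool handling are illustrative only, and the aggregation-group
creation API discussed above is not modelled here:

/*
 * Hypothetical sketch only: representor ports in one switch domain
 * sharing Rx queue 0 through shared_group 0. Queue sizes and the mbuf
 * pool are illustrative, not part of the patch.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_RXD   512
#define BURST_SZ  32

static int
setup_member_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf dev_conf = {0};
	struct rte_eth_rxconf rxq_conf = {0};
	int ret;

	/* Request the shared Rx queue offload proposed by this patch. */
	dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;

	ret = rte_eth_dev_configure(port_id, 1, 1, &dev_conf);
	if (ret != 0)
		return ret;

	/* All member ports use the same queue index and the same group. */
	rxq_conf.offloads = RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rxq_conf.shared_group = 0;

	return rte_eth_rx_queue_setup(port_id, 0, NB_RXD,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mb_pool);
}

/*
 * Poll a single member port on one core; packets arriving on any member
 * port of the group show up here, with the real source in mbuf->port.
 */
static void
poll_shared_rxq(uint16_t any_member_port)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(any_member_port, 0, pkts, BURST_SZ);
	for (i = 0; i < nb_rx; i++) {
		uint16_t src_port = pkts[i]->port;

		/* ... demultiplex / forward based on src_port ... */
		RTE_SET_USED(src_port);
		rte_pktmbuf_free(pkts[i]);
	}
}

In this sketch every member port runs the same setup with an identical
queue index and group, a single lcore polls any one member port, and
rte_eth_rx_burst() returns packets from all member ports, demultiplexed
via mbuf->port.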