From: "Hanoch Haim (hhaim)"
To: Nélio Laranjeiro <nelio.laranjeiro@6wind.com>
Cc: Yongseok Koh <yskoh@mellanox.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] mlx5 reta size is dynamic
Date: Thu, 22 Mar 2018 09:02:19 +0000

Hi Nelio,

I think you didn't understand me. I suggest keeping the RETA table size
constant (the maximum, 512 in your case) and not changing it based on the
number of configured Rx queues. This would make the DPDK API consistent.
As a user I currently need to resort to tricks (allocating an odd/prime
number of Rx queues) to keep the RETA size constant at 512. I'm not
talking about changing the values inside the RETA table, which can be
done while there is traffic.
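To illustrate, here is a minimal sketch (untested; it assumes only the
standard ethdev calls rte_eth_dev_info_get() and
rte_eth_dev_rss_reta_update(), with the usual 64-entry
rte_eth_rss_reta_entry64 grouping) of what an application does today:

    #include <errno.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Sketch: spread nb_rx_queues round-robin over the whole RETA.
     * Assumes the RETA size reported by the device never exceeds 512
     * and, more importantly, stays constant across reconfigurations. */
    static int
    rss_reta_setup(uint16_t port_id, uint16_t nb_rx_queues)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
        uint16_t i;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.reta_size > 512 || !nb_rx_queues)
            return -EINVAL;
        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i != dev_info.reta_size; ++i) {
            /* Each rte_eth_rss_reta_entry64 covers 64 RETA entries;
             * the mask bit marks entry i as valid for the update. */
            reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % nb_rx_queues;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf,
                                           dev_info.reta_size);
    }

If dev_info.reta_size changes whenever the Rx queue count changes, the
sizing and the queue spread above have to be redone on every
reconfiguration; that is the inconsistency I'm pointing at.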
Thanks,
Hanoh

-----Original Message-----
From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
Sent: Thursday, March 22, 2018 10:55 AM
To: Hanoch Haim (hhaim)
Cc: Yongseok Koh; dev@dpdk.org
Subject: Re: [dpdk-dev] mlx5 reta size is dynamic

On Thu, Mar 22, 2018 at 06:52:53AM +0000, Hanoch Haim (hhaim) wrote:
> Hi Yongseok,
>
> RSS has a DPDK API: the application can ask for the RETA table size and
> configure it. In your case you are assuming a specific use case and
> changing the size dynamically, which solves 90% of the use cases but
> breaks the other 10%. Instead, you could provide the application a
> consistent API, and with that 100% of applications can work with no
> issue. This is what happens with Intel (ixgbe/i40e). Another minor
> issue: rss_key_size is returned as zero, but internally the key is 40
> bytes.

Hi Hanoch,

The legacy DPDK API has always assumed there is only a single indirection
table (aka RETA), whereas this is not true [1][2] on this device.

On MLX5 there is an indirection table per Hash Rx queue, built according
to the list of queues that are part of it. The Hash Rx queue is
configured to compute the hash with the configured information:

 - algorithm,
 - key,
 - hash field (Verbs hash field),
 - indirection table.

A Hash Rx queue cannot handle multiple RSS configurations; we have one
Hash Rx queue per protocol and thus a full configuration per protocol.

In such a situation, changing the RETA means stopping the traffic and
destroying every single flow, Hash Rx queue, and indirection table, in
order to remake everything with the new configuration.

Until now, we have always recommended that applications restart the port
on this device after a RETA update to apply the new configuration.

Since the flow API is the new way to configure flows, applications should
move to it instead of using the old API for this behavior. We should also
remove this devop from the PMD to avoid any confusion.

Regards,

> Thanks,
> Hanoh
>
> -----Original Message-----
> From: Yongseok Koh [mailto:yskoh@mellanox.com]
> Sent: Wednesday, March 21, 2018 11:48 PM
> To: Hanoch Haim (hhaim)
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] mlx5 reta size is dynamic
>
> On Wed, Mar 21, 2018 at 06:56:33PM +0000, Hanoch Haim (hhaim) wrote:
> > Hi mlx5 driver expert,
> >
> > DPDK: 17.11
> > Any reason the mlx5 driver changes the RETA table size dynamically
> > based on the number of Rx queues?
>
> The device only supports 2^n-sized indirection tables. For example, if
> the number of Rx queues is 6, the device can't have a 1-1 mapping, but
> the indirection table size could be 8, 16, 32 and so on. If we
> configure it as 8, for example, 2 out of the 6 queues will each get 1/4
> of the traffic while the remaining 4 queues each receive 1/8. We
> thought that was too much disparity and preferred setting the maximum
> size in order to mitigate the imbalance.
>
> > There is a hidden assumption that the user wants to distribute the
> > packets evenly, which is not always correct.
>
> But it is mostly correct, because RSS is used for uniform distribution.
> The decision wasn't made based on our speculation but on many requests
> from multiple customers.
>
> > /* If the requested number of RX queues is not a power of two, use the
> >  * maximum indirection table size for better balancing.
> >  * The result is always rounded to the next power of two. */
> > reta_idx_n = (1 << log2above((rxqs_n & (rxqs_n - 1)) ?
> >                              priv->ind_table_max_size :
> >                              rxqs_n));
>
> Thanks,
> Yongseok
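To make the disparity arithmetic above concrete, here is a short
standalone check (hypothetical code; it only mirrors a round-robin spread
of queues over a 2^n-sized table and is not taken from the driver):

    #include <stdio.h>

    int main(void)
    {
        const unsigned int rxqs_n = 6;            /* example above */
        const unsigned int sizes[] = { 8, 512 };  /* 2^n candidates */
        unsigned int s, i, q;

        for (s = 0; s != 2; ++s) {
            unsigned int entries[6] = { 0 };

            /* Spread table entries round-robin over the queues. */
            for (i = 0; i != sizes[s]; ++i)
                entries[i % rxqs_n]++;
            printf("table size %3u:", sizes[s]);
            for (q = 0; q != rxqs_n; ++q)
                printf(" q%u=%u", q, entries[q]);
            printf("\n");
        }
        /* table size   8:  2  2  1  1  1  1 -> 1/4 vs. 1/8 of traffic
         * table size 512: 86 86 85 85 85 85 -> near-even distribution */
        return 0;
    }

Note that in the driver snippet, rxqs_n & (rxqs_n - 1) is non-zero
exactly when rxqs_n is not a power of two, so 6 Rx queues select
priv->ind_table_max_size (512 here), and 1 << log2above(512) leaves the
result at 512.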
[1] https://dpdk.org/ml/archives/dev/2015-October/024668.html
[2] https://dpdk.org/ml/archives/dev/2015-October/024669.html

--
Nélio Laranjeiro
6WIND