From: Thomas Monjalon
To: Anna A
Cc: users@dpdk.org, matan@nvidia.com, viacheslavo@nvidia.com
Subject: Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
Date: Wed, 29 Sep 2021 11:53:56 +0200
Message-ID: <1849453.UF46jR8BTF@thomas>

29/09/2021 07:26, Anna A:
> Hi,
>
> I'm trying to use rte_flow_action_type_rss to distribute packets all of the
> same flow type among multiple Rx queues on a single port. Mellanox
> ConnectX-5 Ex and DPDK version 20.05 are used for this purpose. It doesn't
> seem to work and all the packets are sent only to a single queue.

Adding mlx5 maintainers Cc.

> My queries are:
> 1. What am I missing or doing differently?
> 2. Should I be doing any other configurations in rte_eth_conf or
> rte_eth_rxmode?

Do you see any error log?
For info, you can change log level with --log-level.
Experiment options with '--log-level help' in recent DPDK.
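As an illustration (the application name below is hypothetical and the mlx5 log type
pattern may differ between releases), the option goes among the EAL arguments,
before the '--' separator:

    ./your_app -l 0-1 --log-level=pmd.net.mlx5:8 -- <application arguments>

where 8 is the debug level.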
> My rte_flow configurations:
>
> struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> struct rte_flow_attr attr;
> struct rte_flow_item_eth eth;
> struct rte_flow *flow = NULL;
> struct rte_flow_error error;
> int ret;
> int no_queues = 2;
> uint16_t queues[2];
> struct rte_flow_action_rss rss;
>
> memset(&error, 0x22, sizeof(error));
> memset(&attr, 0, sizeof(attr));
> attr.egress = 0;
> attr.ingress = 1;
>
> memset(&pattern, 0, sizeof(pattern));
> memset(&action, 0, sizeof(action));
> /* setting the eth item to pass all packets */
> pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> pattern[0].spec = &eth;
> pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
>
> rss.types = ETH_RSS_IP;
> rss.level = 0;
> rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> rss.key_len = 0;
> rss.key = NULL;
> rss.queue_num = no_queues;
> for (int i = 0; i < no_queues; i++) {
>     queues[i] = i;
> }
> rss.queue = queues;
>
> action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> action[0].conf = &rss;
> action[1].type = RTE_FLOW_ACTION_TYPE_END;
>
> ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> if (ret < 0) {
>     printf("Flow validation failed: %s\n", error.message);
>     return;
> }
> flow = rte_flow_create(portid, &attr, pattern, action, &error);
> if (flow == NULL)
>     printf("Cannot create flow: %s\n", error.message);
>
> And Rx queues configuration:
>
> for (int j = 0; j < no_queues; j++) {
>     int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
>                                      rte_eth_dev_socket_id(portid),
>                                      NULL, mbuf_pool);
>     if (ret < 0) {
>         printf("rte_eth_rx_queue_setup: err=%d, port=%u", ret,
>                (unsigned) portid);
>         exit(1);
>     }
> }
>
> Thanks
> Anna
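One way to check how received traffic is actually spread over the queues is to read
the per-queue counters returned by rte_eth_stats_get(). A minimal sketch, assuming
the PMD fills q_ipackets[] for these queues; the helper name is made up for
illustration:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print Rx packet counters for the first nb_queues queues of a port. */
    static void
    print_rx_queue_counters(uint16_t portid, uint16_t nb_queues)
    {
        struct rte_eth_stats stats;
        uint16_t q;

        if (rte_eth_stats_get(portid, &stats) != 0) {
            printf("cannot read stats for port %u\n", portid);
            return;
        }
        for (q = 0; q < nb_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
            printf("queue %u: %" PRIu64 " Rx packets\n", q, stats.q_ipackets[q]);
    }

Note that RSS hashes on the packet fields selected by rss.types, so packets of one
and the same 5-tuple flow still land on a single queue; distribution only appears
with traffic that varies in addresses or ports.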