From: Thomas Monjalon <thomas@monjalon.net>
To: Anna A <pacman.n908@gmail.com>
Cc: users@dpdk.org, matan@nvidia.com, viacheslavo@nvidia.com
Subject: Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
Date: Wed, 29 Sep 2021 11:53:56 +0200
Message-ID: <1849453.UF46jR8BTF@thomas>
In-Reply-To: <CALgGc3UAerQXBwLOMao3ehtaCS7UptgiKzR0yuXF=3jke-8QwQ@mail.gmail.com>
29/09/2021 07:26, Anna A:
> Hi,
>
> I'm trying to use RTE_FLOW_ACTION_TYPE_RSS to distribute packets of a
> single flow type among multiple Rx queues on a single port, on a Mellanox
> ConnectX-5 Ex with DPDK version 20.05. It doesn't seem to work: all the
> packets are sent to a single queue.
Adding mlx5 maintainers Cc.
> My queries are :
> 1. What am I missing or doing differently?
> 2. Should I be doing any other configurations in rte_eth_conf or
> rte_eth_rxmode?
Do you see any error log?
For info, you can change log level with --log-level.
You can explore the available options with '--log-level help' in recent DPDK releases.
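For mlx5 specifically, a hypothetical invocation could look like this
(the exact log type name may differ between versions, check '--log-level help'):

    ./your-app <EAL options> --log-level=pmd.net.mlx5:debug -- <app options>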
> My rte_flow configurations:
>
> struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> struct rte_flow_attr attr;
> struct rte_flow_item_eth eth;
> struct rte_flow *flow = NULL;
> struct rte_flow_error error;
> int ret;
> int no_queues = 2;
> uint16_t queues[2];
> struct rte_flow_action_rss rss;
> memset(&error, 0x22, sizeof(error));
> memset(&attr, 0, sizeof(attr));
> attr.egress = 0;
> attr.ingress = 1;
>
> memset(&pattern, 0, sizeof(pattern));
> memset(&action, 0, sizeof(action));
> /* setting the eth to pass all packets */
> pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> pattern[0].spec = &eth;
> pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
>
> rss.types = ETH_RSS_IP;
> rss.level = 0;
> rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> rss.key_len = 0;
> rss.key = NULL;
> rss.queue_num = no_queues;
> for (int i = 0; i < no_queues; i++) {
> queues[i] = i;
> }
> rss.queue = queues;
> action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> action[0].conf = &rss;
>
> action[1].type = RTE_FLOW_ACTION_TYPE_END;
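Note that rss.types selects the hash input: ETH_RSS_IP hashes only the IP
addresses, so if the traffic shares a single source/destination IP pair it
will still land on one queue, whatever the queue list is. If you also want
the L4 ports in the hash, something like this may be closer to the intent
(a sketch, not a verified fix for your case):

    rss.types = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP;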
>
> ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> if (ret < 0) {
> printf( "Flow validation failed %s\n", error.message);
> return;
> }
> flow = rte_flow_create(portid, &attr, pattern, action, &error);
>
> if (flow == NULL)
> printf(" Cannot create Flow create");
>
> And Rx queues configuration:
> for (int j = 0; j < no_queues; j++) {
>
> int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> rte_eth_dev_socket_id(portid),
> NULL, mbuf_pool);
> if (ret < 0) {
> printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret, (unsigned)
> portid);
> exit(1);
> }
> }
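About your second question (rte_eth_conf / rte_eth_rxmode): port-level RSS
is normally enabled in rte_eth_conf before rte_eth_dev_configure() and the
queue setup. A minimal sketch with the 20.05 names, as a general reference
rather than a confirmed fix for this issue:

    struct rte_eth_conf port_conf;

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;           /* enable RSS on the port */
    port_conf.rx_adv_conf.rss_conf.rss_key = NULL;      /* NULL => PMD default key */
    port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP; /* requested hash fields */
    ret = rte_eth_dev_configure(portid, no_queues, no_queues, &port_conf);

The requested rss_hf should be a subset of dev_info.flow_type_rss_offloads
reported by rte_eth_dev_info_get() for the port.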
>
> Thanks
> Anna