DPDK usage discussions
From: Anna A <pacman.n908@gmail.com>
To: Raslan Darawsheh <rasland@nvidia.com>
Cc: Wisam Monther <wisamm@nvidia.com>,
	NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
	 "users@dpdk.org" <users@dpdk.org>,
	Matan Azrad <matan@nvidia.com>,
	 Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
Date: Wed, 29 Sep 2021 22:38:33 -0700
Message-ID: <CALgGc3V09DP+Tjfqw2gQG=VOjRczFmv+tDY2_BsVKXyCM8jW9w@mail.gmail.com> (raw)
In-Reply-To: <DM4PR12MB5312EAEED00CEF097D797B59CFAA9@DM4PR12MB5312.namprd12.prod.outlook.com>

Hi Raslan,

As part of the rte_flow configuration I did include rss.types = ETH_RSS_IP
for the RTE_FLOW_ACTION_TYPE_RSS action. Doesn't that enable the spreading
in the mlx5 PMD? Please correct me if my understanding differs from what
you suggested.

Thanks
Anna

On Wed, Sep 29, 2021 at 6:13 PM Raslan Darawsheh <rasland@nvidia.com> wrote:

> Hi Anna,
>
> What you are basically doing is RSS on the Ethernet layer, which we don't
> support spreading on.
>
> To make it work you can either add an IP layer to the pattern items so the
> RSS happens on L3, or simply set it through the RSS types of the RSS
> action, which causes an automatic expansion of the items inside the mlx5
> PMD internally.
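For reference, both options can be expressed in testpmd's flow syntax (illustrative only; the port number and queue list are placeholders). The first rule matches on L3 explicitly; the second keeps the eth-only pattern and relies on the RSS types for the mlx5 expansion:

```
testpmd> flow create 0 ingress pattern eth / ipv4 / end actions rss types ipv4 ipv6 end queues 0 1 end / end
testpmd> flow create 0 ingress pattern eth / end actions rss types ipv4 ipv6 end queues 0 1 end / end
```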
>
>
> Kindest regards,
> Raslan Darawsheh
> ------------------------------
> *From:* Anna A <pacman.n908@gmail.com>
> *Sent:* Thursday, September 30, 2021 3:29:51 AM
> *To:* Wisam Monther <wisamm@nvidia.com>
> *Cc:* NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org <
> users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>
> *Subject:* Re: Using rte_flow to distribute single flow type among
> multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
>
> Hi Wisam,
>
> I added .rxmode.mq_mode = ETH_MQ_RX_RSS to rte_eth_conf before calling
> rte_eth_dev_configure(), but the packets are still sent to a single
> queue.
>
> My order of configuration is as follows:
>
> 1. Enable .rxmode.mq_mode = ETH_MQ_RX_RSS.
> 2. Initialize the port with rte_eth_dev_configure().
> 3. Set up multiple Rx queues for a single port by calling
> rte_eth_rx_queue_setup() on each queue id.
> 4. Set up a single Tx queue with rte_eth_tx_queue_setup().
> 5. Start the device with rte_eth_dev_start().
> 6. Configure rte_flow with the pattern -> flow create port0 ingress
> pattern eth / end / action RSS on multiple queues / end
> 7. Add the MAC address.
> 8. Check the port link status.
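The steps above compress to roughly the following (a sketch only, with error handling elided; names such as nb_rxd, nb_txd, mbuf_pool, attr, pattern, action, and error are assumed from the snippets elsewhere in this thread, so this is not drop-in code):

```
struct rte_eth_conf conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS } };  /* step 1 */

rte_eth_dev_configure(portid, no_queues, 1, &conf);                     /* step 2 */
for (uint16_t q = 0; q < no_queues; q++)                                /* step 3 */
    rte_eth_rx_queue_setup(portid, q, nb_rxd,
                           rte_eth_dev_socket_id(portid), NULL, mbuf_pool);
rte_eth_tx_queue_setup(portid, 0, nb_txd,
                       rte_eth_dev_socket_id(portid), NULL);            /* step 4 */
rte_eth_dev_start(portid);                                              /* step 5 */
/* step 6: on mlx5 the flow with the RSS action must come after start */
flow = rte_flow_create(portid, &attr, pattern, action, &error);
```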
>
> If I try to configure the rte_flow before calling rte_eth_dev_start(), I
> get the error message "net_mlx5: port 0 is not started when inserting a
> flow" and rte_flow_create() returns NULL. I also enabled debug logging
> with "--log-level=*:debug", but don't see any errors for flow validation
> or flow creation. Please let me know if I'm missing something, or need to
> add any other configuration.
>
> Thanks
> Anna
>
> On Wed, Sep 29, 2021 at 3:09 AM Wisam Monther <wisamm@nvidia.com> wrote:
>
> > Hi Anna,
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > Sent: Wednesday, September 29, 2021 12:54 PM
> > > To: Anna A <pacman.n908@gmail.com>
> > > Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > > <viacheslavo@nvidia.com>
> > > Subject: Re: Using rte_flow to distribute single flow type among
> > multiple Rx
> > > queues using DPDK in Mellanox ConnectX-5 Ex
> > >
> > > 29/09/2021 07:26, Anna A:
> > > > Hi,
> > > >
> > > > I'm trying to use RTE_FLOW_ACTION_TYPE_RSS to distribute packets of
> > > > a single flow type among multiple Rx queues on a single port. A
> > > > Mellanox ConnectX-5 Ex and DPDK version 20.05 are used for this
> > > > purpose. It doesn't seem to work and all the packets are sent to a
> > > > single queue.
> > >
> > > Adding mlx5 maintainers Cc.
> > >
> > > > My queries are :
> > > > 1. What am I missing or doing differently?
> > > > 2. Should I be doing any other configurations in rte_eth_conf or
> > > > rte_eth_rxmode?
> >
> > Can you please try adding
> > .rxmode.mq_mode = ETH_MQ_RX_RSS,
> > to the rte_eth_conf and try again?
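A minimal sketch of that configuration (the rss_conf part is optional but makes the device-level hash types explicit; this is illustrative, and nb_rx_queues/nb_tx_queues are placeholders):

```
struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,        /* let the PMD pick its default key */
            .rss_hf  = ETH_RSS_IP,  /* hash on IP addresses */
        },
    },
};
rte_eth_dev_configure(portid, nb_rx_queues, nb_tx_queues, &port_conf);
```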
> >
> > >
> > > Do you see any error log?
> > > For info, you can change log level with --log-level.
> > > Explore the options with '--log-level help' in recent DPDK.
> > >
> > > > My rte_flow configurations:
> > > >
> > > >     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> > > >     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> > > >     struct rte_flow_attr attr;
> > > >     struct rte_flow_item_eth eth;
> > > >     struct rte_flow *flow = NULL;
> > > >     struct rte_flow_error error;
> > > >     int ret;
> > > >     int no_queues = 2;
> > > >     uint16_t queues[2];
> > > >     struct rte_flow_action_rss rss;
> > > >     memset(&error, 0x22, sizeof(error));
> > > >     memset(&attr, 0, sizeof(attr));
> > > >     attr.egress = 0;
> > > >     attr.ingress = 1;
> > > >
> > > >     memset(&pattern, 0, sizeof(pattern));
> > > >     memset(&action, 0, sizeof(action));
> > > >     /* setting the eth item to pass all packets; zero the spec so
> > > >      * no field is actually matched */
> > > >     memset(&eth, 0, sizeof(eth));
> > > >     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > > >     pattern[0].spec = &eth;
> > > >     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > > >
> > > >     rss.types = ETH_RSS_IP;
> > > >     rss.level = 0;
> > > >     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> > > >     rss.key_len = 0;
> > > >     rss.key = NULL;
> > > >     rss.queue_num = no_queues;
> > > >     for (int i = 0; i < no_queues; i++) {
> > > >         queues[i] = i;
> > > >     }
> > > >     rss.queue = queues;
> > > >     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> > > >     action[0].conf = &rss;
> > > >     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > > >
> > > >     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> > > >     if (ret < 0) {
> > > >         printf("Flow validation failed: %s\n", error.message);
> > > >         return;
> > > >     }
> > > >     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> > > >     if (flow == NULL)
> > > >         printf("Cannot create flow: %s\n", error.message);
> > > > And the Rx queues configuration:
> > > >
> > > >     for (int j = 0; j < no_queues; j++) {
> > > >         int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > > >                                          rte_eth_dev_socket_id(portid),
> > > >                                          NULL, mbuf_pool);
> > > >         if (ret < 0) {
> > > >             printf("rte_eth_rx_queue_setup: err=%d, port=%u\n", ret,
> > > >                    (unsigned)portid);
> > > >             exit(1);
> > > >         }
> > > >     }
> > > >
> > > > Thanks
> > > > Anna
> > >
> > >
> >
> > BRs,
> > Wisam Jaddo
> >
>


Thread overview: 6 messages
2021-09-29  5:26 Anna A
2021-09-29  9:53 ` Thomas Monjalon
2021-09-29 10:09   ` Wisam Monther
2021-09-30  0:29     ` Anna A
2021-09-30  1:13       ` Raslan Darawsheh
2021-09-30  5:38         ` Anna A [this message]
