DPDK usage discussions
* Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
@ 2021-09-29  5:26 Anna A
  2021-09-29  9:53 ` Thomas Monjalon
  0 siblings, 1 reply; 6+ messages in thread
From: Anna A @ 2021-09-29  5:26 UTC (permalink / raw)
  To: users

Hi,

I'm trying to use rte_flow_action_type_rss to distribute packets of a
single flow type among multiple Rx queues on a single port. I'm using a
Mellanox ConnectX-5 Ex with DPDK version 20.05. It doesn't seem to work:
all the packets are sent to a single queue.
My queries are:
1. What am I missing or doing differently?
2. Should I be doing any other configuration in rte_eth_conf or
rte_eth_rxmode?

My rte_flow configurations:

    struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
    struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
    struct rte_flow_attr attr;
    struct rte_flow_item_eth eth;
    struct rte_flow *flow = NULL;
    struct rte_flow_error error;
    int ret;
    int no_queues = 2;
    uint16_t queues[2];
    struct rte_flow_action_rss rss;
    memset(&error, 0x22, sizeof(error));
    memset(&attr, 0, sizeof(attr));
    attr.egress = 0;
    attr.ingress = 1;

    memset(&pattern, 0, sizeof(pattern));
    memset(&action, 0, sizeof(action));
    memset(&eth, 0, sizeof(eth));
    /* zeroed eth spec: pass all packets */
    pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
    pattern[0].spec = &eth;
    pattern[1].type = RTE_FLOW_ITEM_TYPE_END;

    rss.types = ETH_RSS_IP;
    rss.level = 0;
    rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
    rss.key_len = 0;
    rss.key = NULL;
    rss.queue_num = no_queues;
    for (int i = 0; i < no_queues; i++) {
        queues[i] = i;
    }
    rss.queue = queues;
    action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
    action[0].conf = &rss;

    action[1].type = RTE_FLOW_ACTION_TYPE_END;

    ret = rte_flow_validate(portid, &attr, pattern, action, &error);
    if (ret < 0) {
        printf("Flow validation failed: %s\n", error.message);
        return;
    }
    flow = rte_flow_create(portid, &attr, pattern, action, &error);

    if (flow == NULL)
        printf("Cannot create flow\n");

And Rx queues configuration:
for (int j = 0; j < no_queues; j++) {
    int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
                                     rte_eth_dev_socket_id(portid),
                                     NULL, mbuf_pool);
    if (ret < 0) {
        printf("rte_eth_rx_queue_setup: err=%d, port=%u\n",
               ret, (unsigned)portid);
        exit(1);
    }
}

Thanks
Anna

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
  2021-09-29  5:26 Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex Anna A
@ 2021-09-29  9:53 ` Thomas Monjalon
  2021-09-29 10:09   ` Wisam Monther
  0 siblings, 1 reply; 6+ messages in thread
From: Thomas Monjalon @ 2021-09-29  9:53 UTC (permalink / raw)
  To: Anna A; +Cc: users, matan, viacheslavo

29/09/2021 07:26, Anna A:
> Hi,
> 
> I'm trying to use rte_flow_action_type_rss to distribute packets all of the
> same flow type among multiple Rx queues on a single port. Mellanox
> ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It doesn't
> seem to work and all the packets are sent only to a single queue.

Adding mlx5 maintainers Cc.

> My queries are :
> 1. What am I missing or doing differently?
> 2. Should I be doing any other configurations in rte_eth_conf or
> rte_eth_rxmode?

Do you see any error log?
For info, you can change log level with --log-level.
Experiment options with '--log-level help' in recent DPDK.
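To illustrate (the binary name and EAL core options here are placeholders; the "--log-level=*:debug" form is the one used later in this thread):

```shell
# Placeholder app name; raise every DPDK log type to debug level:
./my_app -l 0-1 -n 4 --log-level=*:debug -- <app args>

# In recent DPDK, print the accepted log-level syntax and log types:
./my_app --log-level=help
```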

> My rte_flow configurations:
> 
>     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
>     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
>     struct rte_flow_attr attr;
>     struct rte_flow_item_eth eth;
>     struct rte_flow *flow = NULL;
>     struct rte_flow_error error;
>     int ret;
>     int no_queues =2;
>     uint16_t queues[2];
>     struct rte_flow_action_rss rss;
>     memset(&error, 0x22, sizeof(error));
>     memset(&attr, 0, sizeof(attr));
>     attr.egress = 0;
>     attr.ingress = 1;
> 
>     memset(&pattern, 0, sizeof(pattern));
>     memset(&action, 0, sizeof(action));
>     /* setting the eth to pass all packets */
>     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
>     pattern[0].spec = &eth;
>     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> 
>     rss.types = ETH_RSS_IP;
>     rss.level = 0;
>     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
>     rss.key_len =0;
>     rss.key = NULL;
>     rss.queue_num = no_queues;
>     for (int i= 0; i < no_queues; i++){
>         queues[i] = i;
>     }
>     rss.queue = queues;
>     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
>     action[0].conf = &rss;
> 
>     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> 
>     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
>      if (ret < 0) {
>       printf( "Flow validation failed %s\n", error.message);
>         return;
>     }
>     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> 
>     if (flow == NULL)
>         printf(" Cannot create Flow create");
> 
> And Rx queues configuration:
> for (int j = 0; j < no_queues; j++) {
> 
>          int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> rte_eth_dev_socket_id(port_id),
>                                NULL,mbuf_pool);
>      if (ret < 0) {
>       printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret, (unsigned)
> portid);
>         exit(1);
>        }
> }
> 
> Thanks
> Anna





* RE: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
  2021-09-29  9:53 ` Thomas Monjalon
@ 2021-09-29 10:09   ` Wisam Monther
  2021-09-30  0:29     ` Anna A
  0 siblings, 1 reply; 6+ messages in thread
From: Wisam Monther @ 2021-09-29 10:09 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon, Anna A; +Cc: users, Matan Azrad, Slava Ovsiienko

Hi Anna,

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, September 29, 2021 12:54 PM
> To: Anna A <pacman.n908@gmail.com>
> Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Subject: Re: Using rte_flow to distribute single flow type among multiple Rx
> queues using DPDK in Mellanox ConnectX-5 Ex
> 
> 29/09/2021 07:26, Anna A:
> > Hi,
> >
> > I'm trying to use rte_flow_action_type_rss to distribute packets all
> > of the same flow type among multiple Rx queues on a single port.
> > Mellanox
> > ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It
> > doesn't seem to work and all the packets are sent only to a single queue.
> 
> Adding mlx5 maintainers Cc.
> 
> > My queries are :
> > 1. What am I missing or doing differently?
> > 2. Should I be doing any other configurations in rte_eth_conf or
> > rte_eth_rxmode?

Can you please try adding
.rxmode.mq_mode = ETH_MQ_RX_RSS,
to the rte_eth_conf and try again?
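For reference, a minimal sketch of where that field would go (names follow the pre-21.11 ethdev API used in this thread; the rss_conf values are assumptions mirroring the flow rule, not something Wisam specified):

```c
/* Sketch: enable RSS in the port configuration passed to
 * rte_eth_dev_configure(); names follow the DPDK 20.05 ethdev API. */
struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,        /* use the PMD's default key */
            .rss_hf  = ETH_RSS_IP,  /* hash on L3, as in the flow rule */
        },
    },
};

/* ret = rte_eth_dev_configure(portid, no_queues, 1, &port_conf); */
```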

> 
> Do you see any error log?
> For info, you can change log level with --log-level.
> Experiment options with '--log-level help' in recent DPDK.
> 
> > My rte_flow configurations:
> >
> >     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> >     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> >     struct rte_flow_attr attr;
> >     struct rte_flow_item_eth eth;
> >     struct rte_flow *flow = NULL;
> >     struct rte_flow_error error;
> >     int ret;
> >     int no_queues =2;
> >     uint16_t queues[2];
> >     struct rte_flow_action_rss rss;
> >     memset(&error, 0x22, sizeof(error));
> >     memset(&attr, 0, sizeof(attr));
> >     attr.egress = 0;
> >     attr.ingress = 1;
> >
> >     memset(&pattern, 0, sizeof(pattern));
> >     memset(&action, 0, sizeof(action));
> >     /* setting the eth to pass all packets */
> >     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> >     pattern[0].spec = &eth;
> >     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> >
> >     rss.types = ETH_RSS_IP;
> >     rss.level = 0;
> >     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> >     rss.key_len =0;
> >     rss.key = NULL;
> >     rss.queue_num = no_queues;
> >     for (int i= 0; i < no_queues; i++){
> >         queues[i] = i;
> >     }
> >     rss.queue = queues;
> >     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> >     action[0].conf = &rss;
> >
> >     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> >
> >     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> >      if (ret < 0) {
> >       printf( "Flow validation failed %s\n", error.message);
> >         return;
> >     }
> >     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> >
> >     if (flow == NULL)
> >         printf(" Cannot create Flow create");
> >
> > And Rx queues configuration:
> > for (int j = 0; j < no_queues; j++) {
> >
> >          int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > rte_eth_dev_socket_id(port_id),
> >                                NULL,mbuf_pool);
> >      if (ret < 0) {
> >       printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret,
> > (unsigned) portid);
> >         exit(1);
> >        }
> > }
> >
> > Thanks
> > Anna
> 
> 

BRs,
Wisam Jaddo


* Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
  2021-09-29 10:09   ` Wisam Monther
@ 2021-09-30  0:29     ` Anna A
  2021-09-30  1:13       ` Raslan Darawsheh
  0 siblings, 1 reply; 6+ messages in thread
From: Anna A @ 2021-09-30  0:29 UTC (permalink / raw)
  To: Wisam Monther
  Cc: NBU-Contact-Thomas Monjalon, users, Matan Azrad, Slava Ovsiienko

Hi Wisam,

I added .rxmode.mq_mode = ETH_MQ_RX_RSS to rte_eth_conf before calling
rte_eth_dev_configure(), but the packets are still sent to a single
queue.

My order of configuration is as follows:

1. Enable .rxmode.mq_mode = ETH_MQ_RX_RSS
2. Initialize the port with rte_eth_dev_configure()
3. Set up multiple Rx queues for the single port by calling
rte_eth_rx_queue_setup() on each queue id
4. Set up a single Tx queue with rte_eth_tx_queue_setup()
5. Start the device with rte_eth_dev_start()
6. Configure rte_flow with the pattern -> flow create port0 ingress pattern eth
/ end / action RSS on multiple queues / end
7. Add the MAC address
8. Check the port link status
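Condensed into code, steps 1-5 look roughly like this (error checks trimmed; nb_rxd, nb_txd and mbuf_pool are assumed to exist as in the earlier snippet):

```c
/* Sketch of the init order described above. */
struct rte_eth_conf conf = {
    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },          /* step 1 */
};

rte_eth_dev_configure(portid, no_queues, 1, &conf);  /* step 2 */
for (uint16_t q = 0; q < no_queues; q++)             /* step 3 */
    rte_eth_rx_queue_setup(portid, q, nb_rxd,
                           rte_eth_dev_socket_id(portid),
                           NULL, mbuf_pool);
rte_eth_tx_queue_setup(portid, 0, nb_txd,            /* step 4 */
                       rte_eth_dev_socket_id(portid), NULL);
rte_eth_dev_start(portid);                           /* step 5 */
/* step 6: rte_flow_create() with the RSS action comes only after
 * start, since mlx5 rejects flow insertion on a stopped port. */
```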

If I try to configure rte_flow before calling rte_eth_dev_start(), I get the
error message "net_mlx5: port 0 is not started when inserting a flow" and
rte_flow_create() returns NULL. I also enabled debug logging with
"--log-level=*:debug", but don't see any errors for flow validation or flow
creation. Please let me know if I'm missing something, or whether I need to
add any other configuration.

Thanks
Anna

On Wed, Sep 29, 2021 at 3:09 AM Wisam Monther <wisamm@nvidia.com> wrote:

> Hi Anna,
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Wednesday, September 29, 2021 12:54 PM
> > To: Anna A <pacman.n908@gmail.com>
> > Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>
> > Subject: Re: Using rte_flow to distribute single flow type among
> multiple Rx
> > queues using DPDK in Mellanox ConnectX-5 Ex
> >
> > 29/09/2021 07:26, Anna A:
> > > Hi,
> > >
> > > I'm trying to use rte_flow_action_type_rss to distribute packets all
> > > of the same flow type among multiple Rx queues on a single port.
> > > Mellanox
> > > ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It
> > > doesn't seem to work and all the packets are sent only to a single
> queue.
> >
> > Adding mlx5 maintainers Cc.
> >
> > > My queries are :
> > > 1. What am I missing or doing differently?
> > > 2. Should I be doing any other configurations in rte_eth_conf or
> > > rte_eth_rxmode?
>
> Can you please try to add?
> .rxmode.mq_mode = ETH_MQ_RX_RSS,
> in the rte_eth_conf and try again?
>
> >
> > Do you see any error log?
> > For info, you can change log level with --log-level.
> > Experiment options with '--log-level help' in recent DPDK.
> >
> > > My rte_flow configurations:
> > >
> > >     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> > >     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> > >     struct rte_flow_attr attr;
> > >     struct rte_flow_item_eth eth;
> > >     struct rte_flow *flow = NULL;
> > >     struct rte_flow_error error;
> > >     int ret;
> > >     int no_queues =2;
> > >     uint16_t queues[2];
> > >     struct rte_flow_action_rss rss;
> > >     memset(&error, 0x22, sizeof(error));
> > >     memset(&attr, 0, sizeof(attr));
> > >     attr.egress = 0;
> > >     attr.ingress = 1;
> > >
> > >     memset(&pattern, 0, sizeof(pattern));
> > >     memset(&action, 0, sizeof(action));
> > >     /* setting the eth to pass all packets */
> > >     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > >     pattern[0].spec = &eth;
> > >     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > >
> > >     rss.types = ETH_RSS_IP;
> > >     rss.level = 0;
> > >     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> > >     rss.key_len =0;
> > >     rss.key = NULL;
> > >     rss.queue_num = no_queues;
> > >     for (int i= 0; i < no_queues; i++){
> > >         queues[i] = i;
> > >     }
> > >     rss.queue = queues;
> > >     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> > >     action[0].conf = &rss;
> > >
> > >     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > >
> > >     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> > >      if (ret < 0) {
> > >       printf( "Flow validation failed %s\n", error.message);
> > >         return;
> > >     }
> > >     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> > >
> > >     if (flow == NULL)
> > >         printf(" Cannot create Flow create");
> > >
> > > And Rx queues configuration:
> > > for (int j = 0; j < no_queues; j++) {
> > >
> > >          int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > > rte_eth_dev_socket_id(port_id),
> > >                                NULL,mbuf_pool);
> > >      if (ret < 0) {
> > >       printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret,
> > > (unsigned) portid);
> > >         exit(1);
> > >        }
> > > }
> > >
> > > Thanks
> > > Anna
> >
> >
>
> BRs,
> Wisam Jaddo
>


* Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
  2021-09-30  0:29     ` Anna A
@ 2021-09-30  1:13       ` Raslan Darawsheh
  2021-09-30  5:38         ` Anna A
  0 siblings, 1 reply; 6+ messages in thread
From: Raslan Darawsheh @ 2021-09-30  1:13 UTC (permalink / raw)
  To: Anna A, Wisam Monther
  Cc: NBU-Contact-Thomas Monjalon, users, Matan Azrad, Slava Ovsiienko

Hi Anna,

What you are basically doing is RSS on the eth layer, which we don't support spreading on.

To make it work, you can either add an ip layer to the pattern items so that the RSS happens on L3, or simply request it through the rss types of the rss action, which causes an automatic expansion of the items inside the mlx5 PMD.
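A sketch of the first option, reusing the pattern array from the original message (the NULL specs are an assumption meaning "match anything" at that layer):

```c
/* Sketch: add an IPv4 item so the RSS action can spread on L3.
 * A NULL spec on an item matches any packet for that layer. */
pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
pattern[0].spec = NULL;                     /* any ethernet frame */
pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;  /* new: L3 item */
pattern[1].spec = NULL;                     /* any IPv4 packet */
pattern[2].type = RTE_FLOW_ITEM_TYPE_END;

/* RSS action unchanged: rss.types = ETH_RSS_IP selects the L3 hash. */
```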


Kindest regards,
Raslan Darawsheh
________________________________
From: Anna A <pacman.n908@gmail.com>
Sent: Thursday, September 30, 2021 3:29:51 AM
To: Wisam Monther <wisamm@nvidia.com>
Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex

Hi Wisam,

I added .rxmode.mq_mode = ETH_MQ_RX_RSS to rte_eth_conf before calling the
fn, rte_eth_dev_configure() but still have the packets sent to a single
queue.

My order of configuration is as follows:

1. Enable .rxmode.mq_mode = ETH_MQ_RX_RSS
2. Initialize port by rte_eth_dev_configure()
3. Setup multiple Rxqueues for a single port by calling
rte_eth_rx_queue_setup() on each queueid.
4.setup a single txqueue by rte_eth_tx_queue_setup()
5. start the device with rte_eth_dev_start()
6. Configure rte_flow with pattern -> flow create port0 ingress pattern eth
/ end / action RSS on multiple queues / end
7. Add Mac address
8. Check the port link status

If I try to configure rte_flow before calling rte_eth_dev_start, I get the
error message "net_mlx5: port 0 is not started when inserting a flow and
rte_flow_create() returns NULL ". Also i enabled debug logging with
"--log-level=*:debug", but don't see any errors for flow validation/ flow
creation . Please let me know if I'm missing something, or need to add any
other configurations?

Thanks
Anna

On Wed, Sep 29, 2021 at 3:09 AM Wisam Monther <wisamm@nvidia.com> wrote:

> Hi Anna,
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Wednesday, September 29, 2021 12:54 PM
> > To: Anna A <pacman.n908@gmail.com>
> > Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>
> > Subject: Re: Using rte_flow to distribute single flow type among
> multiple Rx
> > queues using DPDK in Mellanox ConnectX-5 Ex
> >
> > 29/09/2021 07:26, Anna A:
> > > Hi,
> > >
> > > I'm trying to use rte_flow_action_type_rss to distribute packets all
> > > of the same flow type among multiple Rx queues on a single port.
> > > Mellanox
> > > ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It
> > > doesn't seem to work and all the packets are sent only to a single
> queue.
> >
> > Adding mlx5 maintainers Cc.
> >
> > > My queries are :
> > > 1. What am I missing or doing differently?
> > > 2. Should I be doing any other configurations in rte_eth_conf or
> > > rte_eth_rxmode?
>
> Can you please try to add?
> .rxmode.mq_mode = ETH_MQ_RX_RSS,
> in the rte_eth_conf and try again?
>
> >
> > Do you see any error log?
> > For info, you can change log level with --log-level.
> > Experiment options with '--log-level help' in recent DPDK.
> >
> > > My rte_flow configurations:
> > >
> > >     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> > >     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> > >     struct rte_flow_attr attr;
> > >     struct rte_flow_item_eth eth;
> > >     struct rte_flow *flow = NULL;
> > >     struct rte_flow_error error;
> > >     int ret;
> > >     int no_queues =2;
> > >     uint16_t queues[2];
> > >     struct rte_flow_action_rss rss;
> > >     memset(&error, 0x22, sizeof(error));
> > >     memset(&attr, 0, sizeof(attr));
> > >     attr.egress = 0;
> > >     attr.ingress = 1;
> > >
> > >     memset(&pattern, 0, sizeof(pattern));
> > >     memset(&action, 0, sizeof(action));
> > >     /* setting the eth to pass all packets */
> > >     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > >     pattern[0].spec = &eth;
> > >     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > >
> > >     rss.types = ETH_RSS_IP;
> > >     rss.level = 0;
> > >     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> > >     rss.key_len =0;
> > >     rss.key = NULL;
> > >     rss.queue_num = no_queues;
> > >     for (int i= 0; i < no_queues; i++){
> > >         queues[i] = i;
> > >     }
> > >     rss.queue = queues;
> > >     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> > >     action[0].conf = &rss;
> > >
> > >     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > >
> > >     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> > >      if (ret < 0) {
> > >       printf( "Flow validation failed %s\n", error.message);
> > >         return;
> > >     }
> > >     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> > >
> > >     if (flow == NULL)
> > >         printf(" Cannot create Flow create");
> > >
> > > And Rx queues configuration:
> > > for (int j = 0; j < no_queues; j++) {
> > >
> > >          int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > > rte_eth_dev_socket_id(port_id),
> > >                                NULL,mbuf_pool);
> > >      if (ret < 0) {
> > >       printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret,
> > > (unsigned) portid);
> > >         exit(1);
> > >        }
> > > }
> > >
> > > Thanks
> > > Anna
> >
> >
>
> BRs,
> Wisam Jaddo
>


* Re: Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
  2021-09-30  1:13       ` Raslan Darawsheh
@ 2021-09-30  5:38         ` Anna A
  0 siblings, 0 replies; 6+ messages in thread
From: Anna A @ 2021-09-30  5:38 UTC (permalink / raw)
  To: Raslan Darawsheh
  Cc: Wisam Monther, NBU-Contact-Thomas Monjalon, users, Matan Azrad,
	Slava Ovsiienko

Hi Raslan,

As part of the rte_flow configuration I did include rss.types = ETH_RSS_IP
for the action type RTE_FLOW_ACTION_TYPE_RSS. Doesn't that enable the
spreading in the mlx5 PMD? Please correct me if my understanding differs
from what you suggested.

Thanks
Anna

On Wed, Sep 29, 2021 at 6:13 PM Raslan Darawsheh <rasland@nvidia.com> wrote:

> Hi Anna,
>
> What you are basically doing is trying to do RSS on eth layer which we
> don't support the spreading on it.
>
> To make it work you can do either adding ip layer to the items to make the
> RSS happen on L3 or simply through the rss types of the rss action which
> would cause an automatic expansion for the items in mlx5 pmd internally.
>
>
> Kindest regards,
> Raslan Darawsheh
> ------------------------------
> *From:* Anna A <pacman.n908@gmail.com>
> *Sent:* Thursday, September 30, 2021 3:29:51 AM
> *To:* Wisam Monther <wisamm@nvidia.com>
> *Cc:* NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org <
> users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>
> *Subject:* Re: Using rte_flow to distribute single flow type among
> multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex
>
> Hi Wisam,
>
> I added .rxmode.mq_mode = ETH_MQ_RX_RSS to rte_eth_conf before calling the
> fn, rte_eth_dev_configure() but still have the packets sent to a single
> queue.
>
> My order of configuration is as follows:
>
> 1. Enable .rxmode.mq_mode = ETH_MQ_RX_RSS
> 2. Initialize port by rte_eth_dev_configure()
> 3. Setup multiple Rxqueues for a single port by calling
> rte_eth_rx_queue_setup() on each queueid.
> 4.setup a single txqueue by rte_eth_tx_queue_setup()
> 5. start the device with rte_eth_dev_start()
> 6. Configure rte_flow with pattern -> flow create port0 ingress pattern eth
> / end / action RSS on multiple queues / end
> 7. Add Mac address
> 8. Check the port link status
>
> If I try to configure rte_flow before calling rte_eth_dev_start, I get the
> error message "net_mlx5: port 0 is not started when inserting a flow and
> rte_flow_create() returns NULL ". Also i enabled debug logging with
> "--log-level=*:debug", but don't see any errors for flow validation/ flow
> creation . Please let me know if I'm missing something, or need to add any
> other configurations?
>
> Thanks
> Anna
>
> On Wed, Sep 29, 2021 at 3:09 AM Wisam Monther <wisamm@nvidia.com> wrote:
>
> > Hi Anna,
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > Sent: Wednesday, September 29, 2021 12:54 PM
> > > To: Anna A <pacman.n908@gmail.com>
> > > Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > > <viacheslavo@nvidia.com>
> > > Subject: Re: Using rte_flow to distribute single flow type among
> > multiple Rx
> > > queues using DPDK in Mellanox ConnectX-5 Ex
> > >
> > > 29/09/2021 07:26, Anna A:
> > > > Hi,
> > > >
> > > > I'm trying to use rte_flow_action_type_rss to distribute packets all
> > > > of the same flow type among multiple Rx queues on a single port.
> > > > Mellanox
> > > > ConnectX-5 Ex and DPDK version 20.05 is used for this purpose. It
> > > > doesn't seem to work and all the packets are sent only to a single
> > queue.
> > >
> > > Adding mlx5 maintainers Cc.
> > >
> > > > My queries are :
> > > > 1. What am I missing or doing differently?
> > > > 2. Should I be doing any other configurations in rte_eth_conf or
> > > > rte_eth_rxmode?
> >
> > Can you please try to add?
> > .rxmode.mq_mode = ETH_MQ_RX_RSS,
> > in the rte_eth_conf and try again?
> >
> > >
> > > Do you see any error log?
> > > For info, you can change log level with --log-level.
> > > Experiment options with '--log-level help' in recent DPDK.
> > >
> > > > My rte_flow configurations:
> > > >
> > > >     struct rte_flow_item pattern[MAX_RTE_FLOW_PATTERN] = {};
> > > >     struct rte_flow_action action[MAX_RTE_FLOW_ACTIONS] = {};
> > > >     struct rte_flow_attr attr;
> > > >     struct rte_flow_item_eth eth;
> > > >     struct rte_flow *flow = NULL;
> > > >     struct rte_flow_error error;
> > > >     int ret;
> > > >     int no_queues =2;
> > > >     uint16_t queues[2];
> > > >     struct rte_flow_action_rss rss;
> > > >     memset(&error, 0x22, sizeof(error));
> > > >     memset(&attr, 0, sizeof(attr));
> > > >     attr.egress = 0;
> > > >     attr.ingress = 1;
> > > >
> > > >     memset(&pattern, 0, sizeof(pattern));
> > > >     memset(&action, 0, sizeof(action));
> > > >     /* setting the eth to pass all packets */
> > > >     pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > > >     pattern[0].spec = &eth;
> > > >     pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > > >
> > > >     rss.types = ETH_RSS_IP;
> > > >     rss.level = 0;
> > > >     rss.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
> > > >     rss.key_len =0;
> > > >     rss.key = NULL;
> > > >     rss.queue_num = no_queues;
> > > >     for (int i= 0; i < no_queues; i++){
> > > >         queues[i] = i;
> > > >     }
> > > >     rss.queue = queues;
> > > >     action[0].type = RTE_FLOW_ACTION_TYPE_RSS;
> > > >     action[0].conf = &rss;
> > > >
> > > >     action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > > >
> > > >     ret = rte_flow_validate(portid, &attr, pattern, action, &error);
> > > >      if (ret < 0) {
> > > >       printf( "Flow validation failed %s\n", error.message);
> > > >         return;
> > > >     }
> > > >     flow = rte_flow_create(portid, &attr, pattern, action, &error);
> > > >
> > > >     if (flow == NULL)
> > > >         printf(" Cannot create Flow create");
> > > >
> > > > And Rx queues configuration:
> > > > for (int j = 0; j < no_queues; j++) {
> > > >
> > > >          int ret = rte_eth_rx_queue_setup(portid, j, nb_rxd,
> > > > rte_eth_dev_socket_id(port_id),
> > > >                                NULL,mbuf_pool);
> > > >      if (ret < 0) {
> > > >       printf( "rte_eth_rx_queue_setup:err=%d, port=%u", ret,
> > > > (unsigned) portid);
> > > >         exit(1);
> > > >        }
> > > > }
> > > >
> > > > Thanks
> > > > Anna
> > >
> > >
> >
> > BRs,
> > Wisam Jaddo
> >
>


end of thread, other threads:[~2021-09-30  5:38 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-29  5:26 Using rte_flow to distribute single flow type among multiple Rx queues using DPDK in Mellanox ConnectX-5 Ex Anna A
2021-09-29  9:53 ` Thomas Monjalon
2021-09-29 10:09   ` Wisam Monther
2021-09-30  0:29     ` Anna A
2021-09-30  1:13       ` Raslan Darawsheh
2021-09-30  5:38         ` Anna A
