DPDK usage discussions
* RSS queue problem with i40e on DPDK 20.11.3
@ 2021-11-19  8:45 Antti-Pekka Liedes
  2022-01-24 15:16 ` Eric Christian
  0 siblings, 1 reply; 3+ messages in thread
From: Antti-Pekka Liedes @ 2021-11-19  8:45 UTC (permalink / raw)
  To: users

Hi DPDK experts,

I have a problem upgrading our software from DPDK 20.11.1 to DPDK 20.11.3: the RSS setup we use on i40e now delivers all packets to queue 0 only. I'm using the rte_flow API to configure the queues first, and then each flow type one by one, to distribute incoming packets across 8 queues with a symmetric Toeplitz hash.

Note that this is C++ code; the m_-prefixed variables are members of the Port object, i.e., port-specific parameters.

The queue region setup is:

const struct rte_flow_attr attr = {
 .group = 0,
 .priority = 0,
 .ingress = 1,
 .egress = 0,
 .transfer = 0,
 .reserved = 0
};
uint16_t rss_queue[m_num_rx_queues];
for (int i = 0; i < m_num_rx_queues; i++)
 {
   rss_queue[i] = i;
 }

{
 const struct rte_flow_item pattern[] = {
   {
     .type = RTE_FLOW_ITEM_TYPE_END
   }
 };

 const struct rte_flow_action_rss action_rss = {
   .level = 0,
   .types = 0,
   .key_len = rss_key_len,
   .queue_num = m_num_rx_queues,
   .key = rss_key,
   .queue = rss_queue
 };
 const struct rte_flow_action action[] = {
   {
     .type = RTE_FLOW_ACTION_TYPE_RSS,
     .conf = &action_rss
   },
   {
     .type = RTE_FLOW_ACTION_TYPE_END
   }
 };
 struct rte_flow_error flow_error;

 struct rte_flow* flow = rte_flow_create(
   m_portid,
   &attr,
   pattern,
   action,
   &flow_error);
}

where m_num_rx_queues = 8, and rss_key and rss_key_len hold our fixed RSS key, originally read from an X710; rss_key_len = 52 bytes.

After this I configure all the flow types:

uint64_t rss_types[] = {
 ETH_RSS_FRAG_IPV4,
 ETH_RSS_NONFRAG_IPV4_TCP,
 ETH_RSS_NONFRAG_IPV4_UDP,
 ETH_RSS_NONFRAG_IPV4_SCTP,
 ETH_RSS_NONFRAG_IPV4_OTHER,

 ETH_RSS_FRAG_IPV6,
 ETH_RSS_NONFRAG_IPV6_TCP,
 ETH_RSS_NONFRAG_IPV6_UDP,
 ETH_RSS_NONFRAG_IPV6_SCTP,
 ETH_RSS_NONFRAG_IPV6_OTHER
};

and for each type:

const struct rte_flow_attr attr = {
 .group = 0,
 .priority = 0,
 .ingress = 1,
 .egress = 0,
 .transfer = 0,
 .reserved = 0
};

// Room for L2 to L4.
struct rte_flow_item pattern[] = {
 {
   .type = RTE_FLOW_ITEM_TYPE_ETH
 },
 {
   .type = RTE_FLOW_ITEM_TYPE_END
 },
 {
   .type = RTE_FLOW_ITEM_TYPE_END
 },
 {
   .type = RTE_FLOW_ITEM_TYPE_END
 }
};

// Add L2/L3/L4 to pattern according to rss_type.

const struct rte_flow_action_rss action_rss = {
 .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
 .level = 0,
 .types = rss_type,
 .key_len = rss_key_len,
 .queue_num = 0,
 .key = rss_key,
 .queue = NULL
};
const struct rte_flow_action action[] = {
 {
   .type = RTE_FLOW_ACTION_TYPE_RSS,
   .conf = &action_rss
 },
 {
   .type = RTE_FLOW_ACTION_TYPE_END
 }
};
struct rte_flow_error flow_error;

struct rte_flow* flow = rte_flow_create(
 m_portid,
 &attr,
 pattern,
 action,
 &flow_error);
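The "// Add L2/L3/L4 to pattern according to rss_type" step above is elided. As a hedged, self-contained sketch of that mapping logic (using local stand-in enums purely for illustration; the real code uses RTE_FLOW_ITEM_TYPE_* values and the ETH_RSS_* flags), it could look like:

```c
#include <assert.h>

/* Local stand-ins for the DPDK enums, for illustration only. */
enum item_type { ITEM_END, ITEM_ETH, ITEM_IPV4, ITEM_IPV6,
                 ITEM_TCP, ITEM_UDP, ITEM_SCTP };
enum rss_flag  { RSS_IPV4 = 1u << 0, RSS_IPV6 = 1u << 1,
                 RSS_TCP  = 1u << 2, RSS_UDP  = 1u << 3,
                 RSS_SCTP = 1u << 4 };

/* Fill a 4-slot pattern as ETH / L3 / optional L4 / END,
 * mirroring the elided "Add L2/L3/L4 to pattern" step. */
static void fill_pattern(enum item_type pattern[4], unsigned rss_type)
{
    for (int i = 0; i < 4; i++)
        pattern[i] = ITEM_END;
    int n = 0;
    pattern[n++] = ITEM_ETH;
    pattern[n++] = (rss_type & RSS_IPV6) ? ITEM_IPV6 : ITEM_IPV4;
    if (rss_type & RSS_TCP)
        pattern[n++] = ITEM_TCP;
    else if (rss_type & RSS_UDP)
        pattern[n++] = ITEM_UDP;
    else if (rss_type & RSS_SCTP)
        pattern[n++] = ITEM_SCTP;
}
```

The FRAG and OTHER types simply leave the L4 slot as END, which is why the pattern array reserves four slots with END in the unused tail.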

We also have a software Toeplitz calculator that agrees with the HW hash values on both DPDK 20.11.1 and 20.11.3, so the hash calculation in HW seems to be fine. AFAICT, the above matches the RSS setup instructions for testpmd in https://doc.dpdk.org/guides-20.11/nics/i40e.html, except that we also supply our own key.
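For reference, a minimal software Toeplitz sketch could look like the following. The symmetric variant shown here XOR-folds the src/dst fields before hashing, which is one common way a symmetric Toeplitz is realized; that construction is an assumption for the sketch, not necessarily what the i40e firmware does.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Plain Toeplitz hash: for each input bit that is set (MSB first),
 * XOR the 32-bit window of the key aligned at that bit position
 * into the result. Requires key length >= len + 4 bytes. */
static uint32_t toeplitz32(const uint8_t *key, const uint8_t *in, size_t len)
{
    uint32_t hash = 0;
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                      ((uint32_t)key[2] << 8)  |  (uint32_t)key[3];

    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            if ((in[i] >> b) & 1)
                hash ^= window;
            /* Slide the key window left by one bit. */
            window = (window << 1) | ((key[i + 4] >> b) & 1);
        }
    }
    return hash;
}

/* Assumed symmetric construction: fold src/dst with XOR so that
 * (src,dst) and (dst,src) produce the same hash. */
static uint32_t sym_toeplitz_ipv4(const uint8_t *key,
                                  const uint8_t src[4], const uint8_t dst[4])
{
    uint8_t in[4];
    for (int i = 0; i < 4; i++)
        in[i] = src[i] ^ dst[i];
    return toeplitz32(key, in, sizeof(in));
}
```

Two useful sanity properties fall out of the construction: Toeplitz is linear over GF(2), so H(x XOR y) == H(x) XOR H(y), and an all-zero input hashes to 0; both make good self-checks when comparing against the HW hash.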

Some random facts:
- Changing rss_queue to all 3's doesn't affect the distribution; all packets still go to queue 0.
- I use an Intel X710 for debugging and the observed behavior is from there, but according to performance testing the X722 exhibits the same problem.
- My X710 fw versions are: i40e 0000:01:00.0: fw 8.4.66032 api 1.14 nvm 8.40 0x8000aba4 1.2992.0.
- A quick test on DPDK 20.11.2 shows a correct spread across all 8 RX queues, so the problem was probably introduced in 20.11.3.
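One way to reconcile "the hash is correct but everything lands on queue 0" is the RSS redirection table: the low bits of the hash index a table (RETA) whose entries name the destination RX queue, so a zeroed table pins every packet to queue 0 regardless of the hash. A hypothetical self-contained sketch of that lookup (the table size is an assumption here; the actual table can be read back with rte_eth_dev_rss_reta_query()):

```c
#include <assert.h>
#include <stdint.h>

#define RETA_SIZE 512u  /* assumed table size for this sketch */

/* RSS queue selection: the low bits of the 32-bit hash index the
 * redirection table; the entry names the destination RX queue. */
static uint16_t rss_queue_for_hash(const uint16_t reta[RETA_SIZE],
                                   uint32_t hash)
{
    return reta[hash % RETA_SIZE];
}
```

If the PMD stopped programming the table on flow creation in 20.11.3, the hash comparison against a software calculator would still pass while the distribution collapses to queue 0, which matches the symptom above.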

Thanks,

Antti-Pekka Liedes


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: RSS queue problem with i40e on DPDK 20.11.3
  2021-11-19  8:45 RSS queue problem with i40e on DPDK 20.11.3 Antti-Pekka Liedes
@ 2022-01-24 15:16 ` Eric Christian
  2022-01-30  9:00   ` Antti-Pekka Liedes
  0 siblings, 1 reply; 3+ messages in thread
From: Eric Christian @ 2022-01-24 15:16 UTC (permalink / raw)
  To: Antti-Pekka Liedes; +Cc: users


Hi,

I am curious if you resolved this?

Eric

On Fri, Nov 19, 2021 at 3:49 AM Antti-Pekka Liedes <apl@iki.fi> wrote:

> [snip]



* Re: RSS queue problem with i40e on DPDK 20.11.3
  2022-01-24 15:16 ` Eric Christian
@ 2022-01-30  9:00   ` Antti-Pekka Liedes
  0 siblings, 0 replies; 3+ messages in thread
From: Antti-Pekka Liedes @ 2022-01-30  9:00 UTC (permalink / raw)
  To: Eric Christian; +Cc: users

Hi,

it seems that with another round of firmware updates the problem just went away. We had some other moving parts as well, so I'm not 100% sure what exactly fixed it, but all is well now. We have also upgraded to DPDK 20.11.4 and it is still working fine.

-- 
Antti-Pekka Liedes <apl@iki.fi>



> On 24. Jan 2022, at 17.16, Eric Christian <erclists@gmail.com> wrote:
> 
> Hi,
> 
> I am curious if you resolved this?
> 
> Eric
> 
> On Fri, Nov 19, 2021 at 3:49 AM Antti-Pekka Liedes <apl@iki.fi> wrote:
> [snip]


