DPDK usage discussions
From: 曾懷恩 <the@csie.io>
To: Tom Barbette <barbette@kth.se>,
	Adrien Mazarguil <adrien.mazarguil@6wind.com>,
	Wenzhuo Lu <wenzhuo.lu@intel.com>,
	Konstantin Ananyev <konstantin.ananyev@intel.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Flow director struct rte_flow_item_raw guide
Date: Wed, 22 May 2019 11:18:54 +0800	[thread overview]
Message-ID: <31A706AA-A3D2-46E9-9FB9-586245A6E898@csie.io> (raw)
In-Reply-To: <751e597b-bfbe-5e4d-d138-f388ffd3eab7@kth.se>

Hi all,

Thanks for the previous replies.

I tried to find information about the flow director on the 82599 ixgbevf.

In the datasheet (https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82599-10-gbe-controller-datasheet.pdf), Figure 7-7 shows that the 82599 NIC supports the flow director in the virtualization case.

However, flow creation failed with “function not implemented” when I tried to create a raw flow rule.

Here is the flow I created:

	struct rte_flow_attr attr;
	struct rte_flow_item pattern[4];
	struct rte_flow_action action[4];
	struct rte_flow *flow = NULL;
	struct rte_flow_action_queue queue = { .index = rx_q };
	struct rte_flow_item_eth eth_spec, eth_mask;
	struct rte_flow_item_ipv4 ip_spec, ip_mask;

	memset(pattern, 0, sizeof(pattern));
	memset(action, 0, sizeof(action));
	memset(&attr, 0, sizeof(struct rte_flow_attr));
	attr.ingress = 1;

	/* Action: deliver matching packets to queue rx_q. */
	action[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
	action[0].conf = &queue;
	action[1].type = RTE_FLOW_ACTION_TYPE_END;

	/* Item 0: match the port's own MAC as destination, EtherType IPv4. */
	memset(&eth_spec, 0, sizeof(struct rte_flow_item_eth));
	memset(&eth_mask, 0, sizeof(struct rte_flow_item_eth));
	for (int i = 0; i < ETH_ALEN; i++) {
		eth_spec.dst.addr_bytes[i] = mac_addr[port_id][i];
		eth_mask.dst.addr_bytes[i] = 0xff;
	}
	eth_spec.type = rte_cpu_to_be_16(0x0800); /* IPv4 */
	eth_mask.type = 0xffff;
	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
	pattern[0].spec = &eth_spec;
	pattern[0].mask = &eth_mask;

	/* Item 1: match IPv4 packets carrying TCP (next_proto_id == 6). */
	memset(&ip_spec, 0, sizeof(struct rte_flow_item_ipv4));
	memset(&ip_mask, 0, sizeof(struct rte_flow_item_ipv4));
	ip_spec.hdr.next_proto_id = 0x06;
	ip_mask.hdr.next_proto_id = 0xff;

	pattern[1].type = RTE_FLOW_ITEM_TYPE_IPV4;
	pattern[1].spec = &ip_spec;
	pattern[1].mask = &ip_mask;
	pattern[2].type = RTE_FLOW_ITEM_TYPE_END;
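
The rule is then created like this (a minimal sketch; the rte_flow_validate call is optional, and the error struct is where the “function not implemented” message comes from):

	struct rte_flow_error error;
	memset(&error, 0, sizeof(error));
	if (rte_flow_validate(port_id, &attr, pattern, action, &error) == 0)
		flow = rte_flow_create(port_id, &attr, pattern, action, &error);
	if (flow == NULL)
		printf("flow creation failed: %s\n",
		       error.message ? error.message : "(no reason given)");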

I also tried the DPDK flow_filtering sample application, and it failed as well.

It shows the same error message when creating the flow.

It looks like ixgbevf doesn’t support the flow director, but this seems to conflict with the 82599 datasheet.

Is there anything I missed?

Thanks, 

Best Regards,

> On May 16, 2019, at 2:00 PM, Tom Barbette <barbette@kth.se> wrote:
> 
> Hi,
> 
> I learned to look at the datasheet first, or look at the code, before using "fancy" patterns (especially for Mellanox products, which have no datasheet :p).
> Another way is to just try testpmd "flow create ..." quickly and see how it goes.
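>
> For example, something like this on port 0 (a hypothetical rule; adjust the pattern to whatever you actually want to match):
>
>   testpmd> flow create 0 ingress pattern eth / ipv4 proto is 6 / end actions queue index 1 / end
>
> If the PMD rejects the rule, testpmd prints the rte_flow error message directly, which is much faster than recompiling an application.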
> 
> E.g., for the X520, you will only have access to what is called the 2 flex bytes; see the 82599 datasheet.
> 
> If you grep the code you'll see the constraints are quite strict:
> 
> It will fail if any of the following is true.
> Mask:
> 	raw_mask->pattern[0] != 0xff ||
> 	raw_mask->pattern[1] != 0xff
> --> It's a fixed-value search on two bytes. And you'll have to set that value for all patterns, if I remember the datasheet correctly.
> 
> Value:
> 	raw_spec->relative != 0 ||
> 	raw_spec->search != 0 ||
> 	raw_spec->reserved != 0 ||
> 	raw_spec->offset > IXGBE_MAX_FLX_SOURCE_OFF ||
> 	raw_spec->offset % 2 ||
> 	raw_spec->limit != 0 ||
> 	raw_spec->length != 2 ||
> 	/* pattern can't be 0xffff */
> 	(raw_spec->pattern[0] == 0xff &&
> 	 raw_spec->pattern[1] == 0xff)
> 
> I think the XL710 (i40e) will be just as restricted. From what I remember, the flex bytes became flex payload bytes, so you can't even match on the header.
> 
> 
> Tom
> 
> 
> For reference, the corresponding mask constraints:
> 	raw_mask->relative != 0x1 ||
> 	raw_mask->search != 0x1 ||
> 	raw_mask->reserved != 0x0 ||
> 	(uint32_t)raw_mask->offset != 0xffffffff ||
> 	raw_mask->limit != 0xffff ||
> 	raw_mask->length != 0xffff
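>
> Putting the spec and mask constraints together, a raw item the ixgbe driver should accept looks roughly like this (a sketch only; the offset and the two byte values are made-up examples):
>
> 	uint8_t flex_bytes[2] = { 0xab, 0xcd }; /* fixed 2-byte value; can't be 0xff 0xff */
> 	uint8_t flex_mask[2]  = { 0xff, 0xff }; /* mask must be exactly 0xff 0xff */
>
> 	struct rte_flow_item_raw raw_spec = {
> 		.relative = 0,  /* must be 0 */
> 		.search = 0,    /* must be 0 */
> 		.offset = 12,   /* must be even and <= IXGBE_MAX_FLX_SOURCE_OFF */
> 		.limit = 0,     /* must be 0 */
> 		.length = 2,    /* must be exactly 2 */
> 		.pattern = flex_bytes,
> 	};
> 	struct rte_flow_item_raw raw_mask = {
> 		.relative = 1,
> 		.search = 1,
> 		.offset = -1,   /* all bits set (0xffffffff) */
> 		.limit = 0xffff,
> 		.length = 0xffff,
> 		.pattern = flex_mask,
> 	};
> 	struct rte_flow_item item = {
> 		.type = RTE_FLOW_ITEM_TYPE_RAW,
> 		.spec = &raw_spec,
> 		.mask = &raw_mask,
> 	};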
> 
> 
> 
> 
> On 16/05/2019 at 07:34, 曾懷恩 wrote:
>> Hi Adrien,
>> Thanks for reply,
>> I tried to trace the mlx5 and ixgbe driver source code in the DPDK tree and noticed that the flow director APIs are handled there.
>> I also compared the flow director APIs of the two drivers and found that rte_flow_item_raw is not handled in the mlx5 driver.
>> Is it possible that mlx5-series NICs will support rte_flow_item_raw in the future?
>> Thanks a lot.
>> Best Regards,
>>> On May 13, 2019, at 4:49 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>>> 
>>> On Sat, May 11, 2019 at 02:20:48AM +0800, Huai-En Tseng wrote:
>>>> Thanks for reply,
>>>> 
>>>> Actually I’d like to use the rte_flow_item_raw structure with a PPP header; the ICMP format was just a small trial.
>>> 
>>> OK, makes sense, there's no pattern item for PPP yet.
>>> 
>>>> I will try X520 next week.
>>>> 
>>>> Another question: does ixgbevf also support RTE_FLOW_ITEM_RAW?
>>> 
>>> The ixgbe driver looks like it does, but I'm not sure about the VF
>>> variant; Cc'ing the maintainers just in case.
>>> 
>>> -- 
>>> Adrien Mazarguil
>>> 6WIND
> 



Thread overview: 11+ messages
2019-05-09  2:01 曾懷恩
2019-05-09  2:10 ` 張敬昊
2019-05-09  3:51   ` 曾懷恩
2019-05-09 12:38     ` Adrien Mazarguil
2019-05-10  6:38       ` 曾懷恩
2019-05-10 13:44         ` Adrien Mazarguil
2019-05-10 18:20           ` Huai-En Tseng
2019-05-13  8:49             ` Adrien Mazarguil
2019-05-16  5:34               ` 曾懷恩
2019-05-16  6:00                 ` Tom Barbette
2019-05-22  3:18                   ` 曾懷恩 [this message]
