DPDK patches and discussions
* [dpdk-dev] rte_flow update support?
@ 2019-02-14 11:31 Tom Barbette
  2019-02-14 13:15 ` Shahaf Shuler
  0 siblings, 1 reply; 4+ messages in thread
From: Tom Barbette @ 2019-02-14 11:31 UTC (permalink / raw)
  To: dev; +Cc: adrien.mazarguil, Shahaf Shuler

Hi all,

Are there plans to add support for modifying existing rules through the rte_flow API?

The first problem with destroy+create is atomicity: during the window 
between the two calls, some packets will be lost.
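
To make this concrete, below is a minimal sketch of the only "update"
path available today (replace_rule() is just an illustrative helper of
mine, not an existing API); nothing covers the traffic that arrives
between the two calls:

#include <rte_flow.h>

/* Illustrative helper (not an existing API): the only way to "update"
 * a rule today is to destroy it and create its replacement. Packets
 * arriving between the two calls are matched by neither rule. */
static struct rte_flow *
replace_rule(uint16_t port_id, struct rte_flow *old_flow,
             const struct rte_flow_attr *attr,
             const struct rte_flow_item pattern[],
             const struct rte_flow_action actions[],
             struct rte_flow_error *error)
{
    if (old_flow != NULL &&
        rte_flow_destroy(port_id, old_flow, error) != 0)
        return NULL; /* old rule could not be removed */
    /* window with no rule installed: packets may be lost here */
    return rte_flow_create(port_id, attr, pattern, actions, error);
}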

The second problem is performance. We measured a Mellanox CX5 (mlx5 
driver) "updating" at best 2K rules/sec, and that drops to 200 rules/sec 
when updating TC rules ("transfer" rules, used to switch packets between 
VFs). Real update support should boost those numbers.
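
For reference, here is a minimal sketch of the kind of transfer rule I
mean, assuming VF representors exposed as DPDK ports (steer_between_vfs()
and dst_port_id are illustrative names, not an existing helper):

#include <rte_flow.h>

/* Sketch of a "transfer" rule: attr.transfer moves the rule to the
 * embedded switch, so packets matching `pattern` are forwarded to the
 * port (assumed here to be a VF representor) given by dst_port_id. */
static struct rte_flow *
steer_between_vfs(uint16_t port_id, const struct rte_flow_item pattern[],
                  uint32_t dst_port_id, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
    struct rte_flow_action_port_id dst = { .id = dst_port_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}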

I saw that the ibverbs API backing the mlx5 PMD supports updating the 
action of a rule. Exposing that would already solve a lot of use cases, 
e.g. changing the destination queue(s) of some rules. Given that mlx5 
does not support changing the global RSS queues without restarting the 
device, it would also solve the re-balancing issue through rte_flow.
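
To illustrate, a sketch using only the existing API (resteer_to_queue()
is a made-up helper name): today even an action-only change such as
retargeting a queue means rebuilding the whole rule, e.g.:

#include <rte_flow.h>

/* Steer the flows matched by `pattern` to a different Rx queue. Since
 * rte_flow cannot modify the action of an installed rule, the caller
 * must first destroy the old rule (as in the destroy+create sketch
 * above) and then install this one; an action-update primitive would
 * leave the pattern and the rule handle untouched. */
static struct rte_flow *
resteer_to_queue(uint16_t port_id, const struct rte_flow_attr *attr,
                 const struct rte_flow_item pattern[], uint16_t new_queue,
                 struct rte_flow_error *error)
{
    struct rte_flow_action_queue queue = { .index = new_queue };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, attr, pattern, actions, error);
}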

Beyond updating only the action of a rule, some researchers have 
shown [1] that updating a rule's pattern data, instead of creating and 
deleting rules with similar patterns, drastically improves performance. 
That could be very interesting, e.g., to accelerate the offloading of 
OVS's flow cache (5-tuples) or similar setups.
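
As a rough illustration (placeholder values, a plain queue action, and
offload_5tuple() as a hypothetical helper), this is the kind of
exact-match rule involved; with a pattern-update primitive, only the
four spec fields below would change when one cached flow replaces
another:

#include <rte_flow.h>

/* Exact-match TCP/IPv4 "5-tuple" rule (the protocol is implied by the
 * TCP item) steering the flow to a given Rx queue. Addresses and ports
 * are expected in network byte order. */
static struct rte_flow *
offload_5tuple(uint16_t port_id, rte_be32_t src_ip, rte_be32_t dst_ip,
               rte_be16_t src_port, rte_be16_t dst_port, uint16_t queue_id,
               struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr = { .src_addr = src_ip, .dst_addr = dst_ip },
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr = { .src_addr = 0xffffffff, .dst_addr = 0xffffffff },
    };
    struct rte_flow_item_tcp tcp_spec = {
        .hdr = { .src_port = src_port, .dst_port = dst_port },
    };
    struct rte_flow_item_tcp tcp_mask = {
        .hdr = { .src_port = 0xffff, .dst_port = 0xffff },
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_TCP,
          .spec = &tcp_spec, .mask = &tcp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = queue_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}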

Thanks,
Tom


[1] TurboFlow: Information Rich Flow Record Generation on Commodity 
Switches, J. Sonchack et al.


* Re: [dpdk-dev] rte_flow update support?
  2019-02-14 11:31 [dpdk-dev] rte_flow update support? Tom Barbette
@ 2019-02-14 13:15 ` Shahaf Shuler
  2019-02-15 10:45   ` Tom Barbette
  0 siblings, 1 reply; 4+ messages in thread
From: Shahaf Shuler @ 2019-02-14 13:15 UTC (permalink / raw)
  To: Tom Barbette, dev; +Cc: Adrien Mazarguil, Olga Shern

Hi Tom,

Thursday, February 14, 2019 1:31 PM, Tom Barbette:
> Subject: rte_flow update support?
> 
> Hi all,
> 
> Are there plans to add support for modifying existing rules through the
> rte_flow API?
> 
> The first problem with destroy+create is atomicity: during the window
> between the two calls, some packets will be lost.
> 
> The second problem is performance. We measured a Mellanox CX5 (mlx5
> driver) "updating" at best 2K rules/sec, and that drops to 200 rules/sec
> when updating TC rules ("transfer" rules, used to switch packets between
> VFs). Real update support should boost those numbers.

Yes, you are right; the current update rate of verbs and TC is not so good.

> 
> I saw that the ibverbs API backing the mlx5 PMD supports updating the
> action of a rule. Exposing that would already solve a lot of use cases,
> e.g. changing the destination queue(s) of some rules. Given that mlx5
> does not support changing the global RSS queues without restarting the
> device, it would also solve the re-balancing issue through rte_flow.

Updating the action list will solve only part of the issue; what you really want (for the OVS case) is to update the flow pattern as well (since a TCP connection was terminated and a new one was created).

> 
> Beyond updating only the action of a rule, some researchers have
> shown [1] that updating a rule's pattern data, instead of creating and
> deleting rules with similar patterns, drastically improves performance.
> That could be very interesting, e.g., to accelerate the offloading of
> OVS's flow cache (5-tuples) or similar setups.

Stay tuned, we are working on it.
There is a new flow rule engine in the works which will be very fast: expected performance is ~300K updates per second, covering both transfer and regular flow rules.
It is planned for the 19.XX releases.

I would recommend you give it a try once it is ready.


> 
> Thanks,
> Tom
> 
> 
> [1] TurboFlow: Information Rich Flow Record Generation on Commodity
> Switches, J. Sonchack et al.


* Re: [dpdk-dev] rte_flow update support?
  2019-02-14 13:15 ` Shahaf Shuler
@ 2019-02-15 10:45   ` Tom Barbette
  2019-02-17  5:53     ` Shahaf Shuler
  0 siblings, 1 reply; 4+ messages in thread
From: Tom Barbette @ 2019-02-15 10:45 UTC (permalink / raw)
  To: Shahaf Shuler, dev; +Cc: Adrien Mazarguil, Olga Shern

Hi Shahaf,

This is great news! I'll definitely stay tuned.

Is there any way to support rule replacement with the current system 
through some patching? E.g., the driver refuses to overwrite rules, with 
a kernel message such as "FTE flow tag 196608 already exists with 
different flow tag 327680". Would it be possible to ignore that and 
overwrite the rule?

Tom

On 2019-02-14 14:15, Shahaf Shuler wrote:
> Hi Tom,
> 
> Thursday, February 14, 2019 1:31 PM, Tom Barbette:
>> Subject: rte_flow update support?
>>
>> Hi all,
>>
>> Are there plans to add support for modifying existing rules through
>> the rte_flow API?
>>
>> The first problem with destroy+create is atomicity: during the window
>> between the two calls, some packets will be lost.
>>
>> The second problem is performance. We measured a Mellanox CX5 (mlx5
>> driver) "updating" at best 2K rules/sec, and that drops to 200
>> rules/sec when updating TC rules ("transfer" rules, used to switch
>> packets between VFs). Real update support should boost those numbers.
> 
> Yes, you are right; the current update rate of verbs and TC is not so good.
> 
>>
>> I saw that the ibverbs API backing the mlx5 PMD supports updating the
>> action of a rule. Exposing that would already solve a lot of use cases,
>> e.g. changing the destination queue(s) of some rules. Given that mlx5
>> does not support changing the global RSS queues without restarting the
>> device, it would also solve the re-balancing issue through rte_flow.
> 
> Updating the action list will solve only part of the issue; what you really want (for the OVS case) is to update the flow pattern as well (since a TCP connection was terminated and a new one was created).
> 
>>
>> Beyond updating only the action of a rule, some researchers have
>> shown [1] that updating a rule's pattern data, instead of creating and
>> deleting rules with similar patterns, drastically improves performance.
>> That could be very interesting, e.g., to accelerate the offloading of
>> OVS's flow cache (5-tuples) or similar setups.
> 
> Stay tuned, we are working on it.
> There is a new flow rule engine in the works which will be very fast: expected performance is ~300K updates per second, covering both transfer and regular flow rules.
> It is planned for the 19.XX releases.
> 
> I would recommend you give it a try once it is ready.
> 
> 
>>
>> Thanks,
>> Tom
>>
>>
>> [1] TurboFlow: Information Rich Flow Record Generation on Commodity
>> Switches, J. Sonchack et al.


* Re: [dpdk-dev] rte_flow update support?
  2019-02-15 10:45   ` Tom Barbette
@ 2019-02-17  5:53     ` Shahaf Shuler
  0 siblings, 0 replies; 4+ messages in thread
From: Shahaf Shuler @ 2019-02-17  5:53 UTC (permalink / raw)
  To: Tom Barbette, dev; +Cc: Adrien Mazarguil, Olga Shern

Friday, February 15, 2019 12:46 PM, Tom Barbette:
> Subject: Re: rte_flow update support?
> 
> Hi Shahaf,
> 
> This is great news! I'll definitely stay tuned.
> 
> Is there any way to support rule replacement with the current system
> through some patching? E.g., the driver refuses to overwrite rules, with
> a kernel message such as "FTE flow tag 196608 already exists with
> different flow tag 327680". Would it be possible to ignore that and
> overwrite the rule?

This will require some kernel patches; I am not sure you want to go that way.

> 
> Tom
> 
> On 2019-02-14 14:15, Shahaf Shuler wrote:
> > Hi Tom,
> >
> > Thursday, February 14, 2019 1:31 PM, Tom Barbette:
> >> Subject: rte_flow update support?
> >>
> >> Hi all,
> >>
> >> Are there plans to add support for modifying existing rules through
> >> the rte_flow API?
> >>
> >> The first problem with destroy+create is atomicity: during the
> >> window between the two calls, some packets will be lost.
> >>
> >> The second problem is performance. We measured a Mellanox CX5 (mlx5
> >> driver) "updating" at best 2K rules/sec, and that drops to
> >> 200 rules/sec when updating TC rules ("transfer" rules, used to
> >> switch packets between VFs). Real update support should boost those
> >> numbers.
> >
> > Yes, you are right; the current update rate of verbs and TC is not
> > so good.
> >
> >> I saw that the ibverbs API backing the mlx5 PMD supports updating
> >> the action of a rule. Exposing that would already solve a lot of
> >> use cases, e.g. changing the destination queue(s) of some rules.
> >> Given that mlx5 does not support changing the global RSS queues
> >> without restarting the device, it would also solve the re-balancing
> >> issue through rte_flow.
> >
> > Updating the action list will solve only part of the issue; what you
> > really want (for the OVS case) is to update the flow pattern as well
> > (since a TCP connection was terminated and a new one was created).
> >
> >> Beyond updating only the action of a rule, some researchers have
> >> shown [1] that updating a rule's pattern data, instead of creating
> >> and deleting rules with similar patterns, drastically improves
> >> performance. That could be very interesting, e.g., to accelerate
> >> the offloading of OVS's flow cache (5-tuples) or similar setups.
> >
> > Stay tuned, we are working on it.
> > There is a new flow rule engine in the works which will be very
> > fast: expected performance is ~300K updates per second, covering
> > both transfer and regular flow rules.
> > It is planned for the 19.XX releases.
> >
> > I would recommend you give it a try once it is ready.
> >
> >
> >>
> >> Thanks,
> >> Tom
> >>
> >>
> >> [1] TurboFlow: Information Rich Flow Record Generation on Commodity
> >> Switches, J. Sonchack et al.

