DPDK patches and discussions
* [dpdk-dev] VMDQ question
From: Roman Novikov @ 2019-03-27  9:14 UTC (permalink / raw)
  To: dev

Dear developers! I am having some trouble configuring VMDQ.
Goal: I want to split a physical port into two virtual ports based on
VLAN. I need RSS (32 queues on each virtual port, plus 32 queues for
untagged packets) while keeping LLDP and LACP/LAG functionality.
So I need 3 RSS groups of 32 queues each (untagged, VLAN A, VLAN B).
Hardware and OS:
Xeon E5-2695 v4, X710 4x10Gb (firmware 6.8), Ubuntu Server 18.04 LTS,
DPDK 19.02

I am trying to configure VMDQ with RSS.
DPDK config changes:
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=32
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=32
Then I configured 8 pools and all the queues (the queues I am not going
to use, I assigned to a pool I am not going to use, to save some memory).
Then I filled in the rte_eth_vmdq_rx_conf struct. It has a field
"rx_mode", which I set to 0 (so I set neither ETH_VMDQ_ACCEPT_UNTAG nor
ETH_VMDQ_ACCEPT_BROADCAST).
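A rough sketch of this configuration (simplified, not my exact code;
VLAN IDs 100 and 200 stand in for my real VLAN A and VLAN B):

#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .mq_mode = ETH_MQ_RX_VMDQ_RSS, /* VMDQ pools, RSS inside each pool */
        },
        .rx_adv_conf = {
                .vmdq_rx_conf = {
                        .nb_queue_pools = ETH_8_POOLS,
                        .enable_default_pool = 0,
                        .default_pool = 0,
                        .nb_pool_maps = 2,
                        /* neither ETH_VMDQ_ACCEPT_UNTAG nor ETH_VMDQ_ACCEPT_BROADCAST */
                        .rx_mode = 0,
                        .pool_map = {
                                { .vlan_id = 100, .pools = 1ULL << 0 }, /* VLAN A -> pool 0 */
                                { .vlan_id = 200, .pools = 1ULL << 1 }, /* VLAN B -> pool 1 */
                        },
                },
                .rss_conf = {
                        .rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
                },
        },
};

followed by rte_eth_dev_configure() and the usual per-queue setup.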
I expect packets with VLAN tag A on queues 32-63, with tag B on queues
64-95, and untagged traffic on queues 0-31, with RSS within each group.
It mostly works, but some packets show up in every pool, e.g.
Ether(dst="ff:ff:ff:ff:ff:ff")/IP(dst="255.255.255.255").
I looked into the i40e driver code and found that the driver ignores
the "rx_mode" field.
Questions:
1. Is this a bug? If yes, when will it be fixed?
2. Is there a workaround for this case? Maybe I can do it with VLAN
mirroring? If yes, can you help me with the configuration? I would be
glad to have a simple example with mirroring; a sketch of what I have
in mind follows below.
3. Maybe I can fix it myself? If so, I need a few words about what I
should do.
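For question 2, here is roughly what I imagine the VLAN mirroring setup
could look like, based only on reading rte_ethdev.h (untested; VLAN ID
100, the destination pool and the rule id are placeholders). Is this
the right direction?

#include <string.h>
#include <rte_ethdev.h>

static int
setup_vlan_mirror(uint16_t port_id)
{
        struct rte_eth_mirror_conf mc;

        memset(&mc, 0, sizeof(mc));
        mc.rule_type = ETH_MIRROR_VLAN;
        mc.dst_pool = 2;               /* pool that should receive the mirrored traffic */
        mc.vlan.vlan_mask = 1ULL << 0; /* entry 0 of vlan_id[] is valid */
        mc.vlan.vlan_id[0] = 100;      /* VLAN A (placeholder) */

        /* rule_id 0, on = 1 to enable the rule */
        return rte_eth_mirror_rule_set(port_id, &mc, 0, 1);
}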
