DPDK patches and discussions
From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
To: Matan Azrad <matan@mellanox.com>,
	Slava Ovsiienko <viacheslavo@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on VLAN header
Date: Tue, 26 Nov 2019 16:10:50 +0900	[thread overview]
Message-ID: <20191126161049.EAB7.17218CA3@ntt-tx.co.jp_1> (raw)
In-Reply-To: <20191119203609.3FFD.17218CA3@ntt-tx.co.jp_1>

Hello Matan and Slava,

Thanks for your quick response.

Could you please comment on the questions below?

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hello Matan and Slava,
> 
> Thanks for your quick response.
> 
> 1. What you said is correct.
> When I create a flow with dst MAC 10:22:33:44:55:66 instead of
> 11:22:33:44:55:66, received packets are queued to the specified queue.
> 
> Thanks for your advice!
> 
> Q1. What is the problem with broadcast/multicast address? 
> Q2. What is the bug number on Bugzilla of DPDK?
> Q3. What is the default behavior for unmatched packets?
> Are they discarded, or queued to a default queue (e.g. queue=0)?
> 
> When I tested packet distribution by VLAN ID, unmatched packets
> appeared to be discarded.
> I would like to know the default handling.
> 
> Thanks!
> 
> Best Regards,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> > Hi
> > 
> > When the bit selected by dst MAC mask "01:00:00:00:00:00" is set, the packet is an L2 multicast packet.
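That I/G (multicast) bit is the least-significant bit of the first octet of the destination MAC address. A minimal Python sketch of the check (the helper name is illustrative, not from the thread):

```python
def is_l2_multicast(mac: str) -> bool:
    """Return True if the I/G bit (LSB of the first octet) is set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

# 0x11 has the LSB set, so the address used in the test is L2 multicast:
print(is_l2_multicast("11:22:33:44:55:66"))  # True
# 0x10 does not, so this variant is plain unicast:
print(is_l2_multicast("10:22:33:44:55:66"))  # False
```

This is why the flow rule matching on 11:22:33:44:55:66 overlaps the device's default multicast configuration.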
> > 
> > When you run the testpmd application, the multicast configuration is forwarded to the device by default.
> > 
> > So, you have 2 rules:
> > The default rule, which tries to match on the multicast dst MAC bit above and applies an RSS action over all the queues.
> > Your rule, which tries to match on dst MAC 11:22:33:44:55:66 (the multicast bit is on) plus other fields, and sends the packet to queue 1.
> > 
> > So, your flow is a sub-flow of the default flow.
> > 
> > Since the current behavior of our driver is to put all the multicast rules at the same priority, the behavior for this case is unpredictable:
> > 1. You may get the packet twice, once for each of the 2 rules.
> > 2. You may get the packet only for the default RSS action.
> > 3. You may get the packet only on queue 1, as in your rule.
> > 
> > Unfortunately, I think you hit option 1 here (you get the packet twice in your application).
> > 
> > This driver behavior of putting the 2 rules at the same priority is under discussion on our side - it may be changed later.
> > 
> > To workaround the issue:
> > 1. Do not configure the default rules (run testpmd with --flow-isolate-all on the command line).
> > 2. Do not configure 2 different multicast rules (even with different priorities).
> > 
> > Enjoy, let me know if you need more help....
> > 
> > Matan
> > 
> > From: Hideyuki Yamashita
> > > Hi Slava,
> > > 
> > > 
> > > Thanks for your response.
> > > 
> > > 1. Is the bug number the following?
> > > https://bugs.dpdk.org/show_bug.cgi?id=96
> > > 
> > > 2. I've sent packets using Scapy with the following script, and I think it is
> > > unicast ICMP.
> > > Why did you conclude that the packets are broadcast/multicast?
> > > Note that I am not familiar with testpmd's log output.
> > > 
> > > ----------------------------------------------------------------------------------------------
> > > from scapy.all import *
> > > 
> > > vlan_vid = 100
> > > vlan_prio = 0
> > > vlan_id = 0
> > > vlan_flg = True
> > > src_mac = "CC:CC:CC:CC:CC:CC"
> > > dst_mac = "11:22:33:44:55:66"
> > > dst_ip = "192.168.200.101"
> > > iface = "p7p1"
> > > pps = 5
> > > loop = 5
> > > 
> > > def icmp_send():
> > >     ls(Dot1Q)
> > >     if vlan_flg:
> > >         pkt = Ether(dst=dst_mac, src=src_mac) / \
> > >               Dot1Q(vlan=vlan_vid, prio=vlan_prio, id=vlan_id) / \
> > >               IP(dst=dst_ip) / ICMP()
> > >     else:
> > >         pkt = Ether(dst=dst_mac, src=src_mac)/IP(dst=dst_ip)/ICMP()
> > >     pkt.show()
> > >     sendpfast(pkt, iface=iface, pps=pps, loop=loop, file_cache=True)
> > > 
> > > icmp_send()
> > > -----------------------------------------------------------------------------
> > > 
> > > Thanks!
> > > 
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > > 
> > > > Hi, Hideyuki
> > > >
> > > > The frame in your report is broadcast/multicast. Please try a unicast one.
> > > > For broadcast we have a ticket; the issue is currently under investigation.
> > > > Anyway, thanks for reporting.
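A unicast test address can be derived from the multicast one used in the thread by clearing the I/G bit (the LSB of the first octet). A small Python sketch; the helper name `to_unicast` is my own, not from the thread:

```python
def to_unicast(mac: str) -> str:
    """Clear the I/G (multicast) bit in the first octet of a MAC address."""
    octets = mac.split(":")
    # Mask off bit 0 of the first octet, keep the rest unchanged.
    octets[0] = format(int(octets[0], 16) & ~0x01 & 0xFF, "02x")
    return ":".join(octets)

print(to_unicast("11:22:33:44:55:66"))  # 10:22:33:44:55:66
```

This matches the address change that later made the rule work (10:22:33:44:55:66 instead of 11:22:33:44:55:66).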
> > > >
> > > > With best regards, Slava
> > > >
> > > > > -----Original Message-----
> > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Sent: Thursday, November 14, 2019 7:02
> > > > > To: dev@dpdk.org
> > > > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > action on VLAN header
> > > > >
> > > > > Hello Slava,
> > > > >
> > > > > As I reported to you, creating flow was successful with Connect-X5.
> > > > > However when I sent packets to the NIC from outer side of the host,
> > > > > I have problem.
> > > > >
> > > > >
> > > > > [Case 1]
> > > > > Packet distribution on multi-queue based on dst MAC address.
> > > > >
> > > > > NIC config:
> > > > > 04:00.0 Mellanox Connect-X5
> > > > > 05:00.0 Intel XXV710
> > > > >
> > > > > testpmd startup param:
> > > > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 \
> > > > >   -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 \
> > > > >   --disable-rss --pkt-filter-mode=perfect
> > > > >
> > > > > flow command:
> > > > > testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 / end actions queue index 1 / end
> > > > > Flow rule #0 created
> > > > > testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type mask 0xffff / end actions queue index 1 / end
> > > > > Flow rule #0 created
> > > > >
> > > > > Packet reception: (no VLAN tag)
> > > > > port 0/queue 0: received 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Receive queue=0x0
> > > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > > port 1/queue 0: sent 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Send queue=0x0
> > > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > >
> > > > > port 1/queue 1: received 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Receive queue=0x1
> > > > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > > port 0/queue 1: sent 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Send queue=0x1
> > > > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > >
> > > > > Result:
> > > > > The matched packet was queued to queue=0 on port=0, not to queue=1 on port=0.
> > > > >
> > > > > Expectation:
> > > > > A packet received with dst MAC 11:22:33:44:55:66 should be
> > > > > delivered on queue=1 of port=0.
> > > > >
> > > > > Question:
> > > > > Why is the matching packet NOT enqueued to queue=1 on port=0?
> > > > >
> > > > >
> > > > > [Case 2]
> > > > > Packet distribution on multi-queue based on VLAN tag
> > > > >
> > > > > testpmd startup param:
> > > > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 \
> > > > >   -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 \
> > > > >   --disable-rss --pkt-filter-mode=perfect
> > > > >
> > > > > flow command:
> > > > > flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue index 1 / of_pop_vlan / end
> > > > > flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
> > > > >
> > > > > Packet reception: (VLAN 100)
> > > > > port 0/queue 1: received 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Receive queue=0x1
> > > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > > port 1/queue 1: sent 1 packets
> > > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 -
> > > > >   hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER L3_IPV4 -
> > > > >   l2_len=14 - l3_len=20 - Send queue=0x1
> > > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > > >
> > > > > Result:
> > > > > The matched packet was queued to queue=1 on port=0; the other
> > > > > packet (VLAN 101) was discarded.
> > > > >
> > > > > Expectation:
> > > > > The matched packet is queued to queue=1 on port=0; non-matched
> > > > > packets are queued to queue=0 on port=0.
> > > > >
> > > > > Question:
> > > > > Is the above behavior correct?
> > > > > What is the default behavior for unmatched packets (queued to
> > > > > queue=0, or discarded)?
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > 
> > 
> 
> 




Thread overview: 78+ messages
2019-05-16 15:20 [dpdk-dev] [RFC] " Moti Haimovsky
2019-08-06  8:24 ` [dpdk-dev] [PATCH 0/7] " Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 1/7] net/mlx5: support for an action search in a list Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 2/7] net/mlx5: add VLAN push/pop DR commands to glue Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 3/7] net/mlx5: support pop flow action on VLAN header Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 4/7] net/mlx5: support push " Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 5/7] net/mlx5: support modify VLAN priority on VLAN hdr Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 6/7] net/mlx5: supp modify VLAN ID on new VLAN header Moti Haimovsky
2019-08-06  8:24   ` [dpdk-dev] [PATCH 7/7] net/mlx5: supp modify VLAN ID on existing VLAN hdr Moti Haimovsky
2019-09-01 10:40   ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: support for flow action on VLAN header Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 1/7] net/mlx5: support for an action search in a list Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add VLAN push/pop DR commands to glue Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 3/7] net/mlx5: support pop flow action on VLAN header Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 4/7] net/mlx5: support push " Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 5/7] net/mlx5: support modify VLAN priority on VLAN hdr Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: supp modify VLAN ID on new VLAN header Moti Haimovsky
2019-09-01 10:40     ` [dpdk-dev] [PATCH v2 7/7] net/mlx5: supp modify VLAN ID on existing VLAN hdr Moti Haimovsky
2019-09-02 15:00     ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: support for flow action on VLAN header Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: support for an action search in a list Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add VLAN push/pop DR commands to glue Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: support pop flow action on VLAN header Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 4/7] net/mlx5: support push " Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: support modify VLAN priority on VLAN hdr Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: supp modify VLAN ID on new VLAN header Moti Haimovsky
2019-09-02 15:00       ` [dpdk-dev] [PATCH v3 7/7] net/mlx5: supp modify VLAN ID on existing VLAN hdr Moti Haimovsky
2019-09-03 15:13       ` [dpdk-dev] [PATCH v4 0/7] net/mlx5: support for flow action on VLAN header Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 1/7] net/mlx5: support for an action search in a list Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 2/7] net/mlx5: add VLAN push/pop DR commands to glue Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 3/7] net/mlx5: support pop flow action on VLAN header Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 4/7] net/mlx5: support push " Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 5/7] net/mlx5: support modify VLAN priority on VLAN hdr Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 6/7] net/mlx5: supp modify VLAN ID on new VLAN header Moti Haimovsky
2019-09-03 15:13         ` [dpdk-dev] [PATCH v4 7/7] net/mlx5: supp modify VLAN ID on existing VLAN hdr Moti Haimovsky
2019-09-09 15:56         ` [dpdk-dev] [PATCH v5 0/7] net/mlx5: support for flow action on VLAN header Moti Haimovsky
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 1/7] net/mlx5: support for an action search in a list Moti Haimovsky
2019-09-10  8:12             ` Slava Ovsiienko
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 2/7] net/mlx5: add VLAN push/pop DR commands to glue Moti Haimovsky
2019-09-10  8:12             ` Slava Ovsiienko
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 3/7] net/mlx5: support pop flow action on VLAN header Moti Haimovsky
2019-09-10  8:13             ` Slava Ovsiienko
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 4/7] net/mlx5: support push " Moti Haimovsky
2019-09-10 10:42             ` Slava Ovsiienko
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 5/7] net/mlx5: support modify VLAN priority on VLAN hdr Moti Haimovsky
2019-09-10  8:13             ` Slava Ovsiienko
2019-09-10  8:13             ` Slava Ovsiienko
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 6/7] net/mlx5: supp modify VLAN ID on new VLAN header Moti Haimovsky
2019-09-09 15:56           ` [dpdk-dev] [PATCH v5 7/7] net/mlx5: supp modify VLAN ID on existing VLAN hdr Moti Haimovsky
2019-09-10  8:13             ` Slava Ovsiienko
2019-09-10  6:10           ` [dpdk-dev] [PATCH v5 0/7] net/mlx5: support for flow action on VLAN header Slava Ovsiienko
2019-09-10 13:34           ` Raslan Darawsheh
2019-10-01 12:17   ` [dpdk-dev] [PATCH " Hideyuki Yamashita
2019-10-04 10:35     ` Hideyuki Yamashita
2019-10-04 10:51       ` Slava Ovsiienko
2019-10-18 10:55         ` Hideyuki Yamashita
2019-10-21  7:11           ` Hideyuki Yamashita
2019-10-21  7:29             ` Slava Ovsiienko
2019-10-25  4:48               ` Hideyuki Yamashita
2019-10-29  5:45                 ` Slava Ovsiienko
2019-10-30 10:04                   ` Hideyuki Yamashita
2019-10-30 10:08                     ` Slava Ovsiienko
2019-10-30 10:46                       ` Hideyuki Yamashita
2019-10-31  7:11                         ` Slava Ovsiienko
2019-10-31  9:51                           ` Hideyuki Yamashita
2019-10-31 10:36                             ` Slava Ovsiienko
2019-11-05 10:26                               ` Hideyuki Yamashita
2019-11-06 11:03                                 ` Hideyuki Yamashita
2019-11-06 16:35                                   ` Slava Ovsiienko
2019-11-07  4:46                                     ` Hideyuki Yamashita
2019-11-07  6:01                                       ` Slava Ovsiienko
2019-11-07 11:02                                         ` Hideyuki Yamashita
2019-11-14  5:01                                           ` Hideyuki Yamashita
2019-11-14  5:06                                             ` Hideyuki Yamashita
2019-11-15  7:16                                             ` Slava Ovsiienko
2019-11-18  6:11                                               ` Hideyuki Yamashita
2019-11-18 10:03                                                 ` Matan Azrad
2019-11-19 11:36                                                   ` Hideyuki Yamashita
2019-11-26  7:10                                                     ` Hideyuki Yamashita [this message]
2019-12-04  2:43                                                     ` Hideyuki Yamashita
