From: Hideyuki Yamashita
To: Slava Ovsiienko
Cc: "dev@dpdk.org"
Date: Thu, 07 Nov 2019 20:02:12 +0900 (JST)
Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on VLAN header
Message-id: <20191107200211.4C38.17218CA3@ntt-tx.co.jp_1>
References: <20191107134606.4C32.17218CA3@ntt-tx.co.jp_1>

Hello Slava,

About 1, when I turned on "CONFIG_RTE_LIBRTE_MLX5_PMD=y" it worked.
About 2, I used the latest dpdk-next-net; creating a flow to entag a VLAN was successful, as follows:

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:C1:4A:CE
Configuring Port 1 (socket 0)
Port 1: B8:59:9F:C1:4A:CF
Checking link statuses...
Done
testpmd> flow create 0 egress group 1 pattern eth src is BB:BB:BB:BB:BB:BB / end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp vlan_pcp 3 / end
Flow rule #0 created
testpmd> flow create 0 egress group 0 pattern eth
 dst [TOKEN]: destination MAC
 src [TOKEN]: source MAC
 type [TOKEN]: EtherType
 / [TOKEN]: specify next pattern item
testpmd> flow create 0 egress group 0 pattern eth / a
 any [TOKEN]: match any protocol for the current layer
 arp_eth_ipv4 [TOKEN]: match ARP header for Ethernet/IPv4
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1
Bad arguments
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1 / end
Flow rule #1 created

In short, my questions are resolved! Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
>
> > 1.
> > As you pointed out, it was a configuration issue
> > (CONFIG_RTE_LIBRTE_MLX5_DEBUG=y)!
> > When I turned on the configuration, 19.11-rc1 recognized ConnectX-5
> > correctly.
> No-no, it is not configuration; this just enables debug features and is
> helpful to locate the reason why ConnectX-5 was not detected on your setup.
> In a release product, of course, CONFIG_RTE_LIBRTE_MLX5_DEBUG must be "n".
> Or was it just the missing "CONFIG_RTE_LIBRTE_MLX5_PMD=y"?
>
> > Thanks for your help.
> >
> > 2. How about the question I put in my previous email (how to create a
> > flow to entag a VLAN tag on an untagged packet)?
> I'm sorry, I did not express my answer in a clear way.
> This issue is fixed; now your entagging flow can be created successfully, I rechecked.
> Now it works:
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > BB:BB:BB:BB:BB:BB / end actions of_push_vlan ethertype
> > > > > 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > vlan_pcp 3 / end
> Please take the coming (on Friday) 19.11-rc2 and try.
>
> With best regards, Slava
>
> > Thanks again.
> >
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> >
> > > Hi, Hideyuki
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita
> > > > Sent: Wednesday, November 6, 2019 13:04
> > > > To: Slava Ovsiienko
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Dear Slava,
> > > >
> > > > Additional question.
> > > > When I use testpmd in the dpdk-next-net repo, it works in general.
> > > > However, when I use DPDK 19.11-rc1, testpmd does not recognize the
> > > > ConnectX-5 NIC.
> > >
> > > It is quite strange, it should be; ConnectX-5 is the base Mellanox NIC now.
> > > Could you, please:
> > > - configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in ./config/common_base
> > > - reconfigure DPDK and rebuild testpmd
> > > - run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before
> > >   the -- separator)
> > > - see (and provide) the log, where it drops the eth_dev object spawning
> > >
> > > > Is it correct that ConnectX-5 will be recognized in the 19.11 release finally?
> > >
> > > It should be recognized in 19.11-rc1; possibly we have some
> > > configuration issue, let's have a look.
> > >
> > > > If yes, in which release candidate will the necessary change be
> > > > merged and available?
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > > > Dear Slava,
> > > > >
> > > > > Thanks for your response.
> > > > >
> > > > > Inputting other flows failed while some flows are created.
> > > > > Please help on the following two cases.
> > > > >
> > > > > 1) I would like to detag the VLAN tag which has a specific
> > > > > destination MAC address, with no condition on the VLAN ID value.
> > > > >
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan
> > > > > / queue index 1 / end
> > > > > Caught error type 10 (item specification): VLAN cannot be empty:
> > > > > Invalid argument
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions
> > > > > of_pop_vlan / queue index 1 / end
> > > > > Flow rule #0 created
> > >
> > > I'll check; possibly this validation reject is imposed by HW
> > > limitations - it requires the VLAN header presence and (IIRC) a VID
> > > match. If possible - we'll fix.
> > > > >
> > > > > 2) I would like to entag a VLAN tag:
> > > > >
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > BB:BB:BB:BB:BB:BB / end actions of_push_vlan ethertype
> > > > > 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > vlan_pcp 3 / end
> > > > > Caught error type 16 (specific action): cause: 0x7ffdc9d98348,
> > > > > match on VLAN is required in order to set VLAN VID: Invalid
> > > > > argument
> > >
> > > It is fixed (and the patch is already merged -
> > > http://patches.dpdk.org/patch/62295/);
> > > let's try the coming 19.11-rc2. I inserted your flow successfully on
> > > the current upstream.
> > >
> > > With best regards, Slava
> > >
> > > > > Thanks!
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Hideyuki Yamashita
> > > > > > > Sent: Thursday, October 31, 2019 11:52
> > > > > > > To: Slava Ovsiienko
> > > > > > > Cc: dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > > action on VLAN header
> > > > > > >
> > > > > > > Dear Slava,
> > > > > > >
> > > > > > > Your guess is correct.
> > > > > > > When I put the flow into ConnectX-5, it was successful.
> > > > > > Very nice.
> > > > > > >
> > > > > > > General question.
> > > > > > As we know - general questions are the hardest ones to answer :)
> > > > > >
> > > > > > > Is there any way to input a flow to ConnectX-4?
> > > > > > As usual - with the RTE flow API.
> > > > > > Just omit dv_flow_en, or specify dv_flow_en=0, and the mlx5 PMD
> > > > > > will handle the RTE flow API via the Verbs engine, supported by
> > > > > > ConnectX-4.
> > > > > >
> > > > > > > In other words, is there any way to activate Verbs?
> > > > > > > And which types of flow are supported in Verbs?
> > > > > > Please, see the flow_verbs_validate() routine in
> > > > > > mlx5_flow_verbs.c; it shows which RTE flow items and actions are
> > > > > > actually supported by Verbs.
> > > > > >
> > > > > > With best regards, Slava
> > > > > >
> > > > > > > -----------------------------------------------------------
> > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > > sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem
> > > > > > > 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i
> > > > > > > --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
> > > > > > > [sudo] password for tx_h-yamashita:
> > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > EAL: Probing VFIO support...
> > > > > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1
> > > > > > > on device mlx5_1
> > > > > > >
> > > > > > > Interactive-mode selected
> > > > > > >
> > > > > > > testpmd: create a new mbuf pool: n=171456, size=2176, socket=0
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > testpmd: create a new mbuf pool: n=171456, size=2176, socket=1
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > >
> > > > > > > Warning!
> > > > > > > port-topology=paired and odd forward ports number, the last
> > > > > > > port will pair with itself.
> > > > > > >
> > > > > > > Configuring Port 0 (socket 0)
> > > > > > > Port 0: B8:59:9F:C1:4A:CE
> > > > > > > Checking link statuses...
> > > > > > > Done
> > > > > > > testpmd>
> > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth
> > > > > > > dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions
> > > > > > > of_pop_vlan / queue index 0 / end
> > > > > > > Flow rule #0 created
> > > > > > > testpmd>
> > > > > > > -----------------------------------------------------------
> > > > > > >
> > > > > > > BR,
> > > > > > > Hideyuki Yamashita
> > > > > > > NTT TechnoCross
> > > > > > >
> > > > > > > > Hi, Hideyuki
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Hideyuki Yamashita
> > > > > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > > > > To: Slava Ovsiienko
> > > > > > > > > Cc: dev@dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for
> > > > > > > > > flow action on VLAN header
> > > > > > > > >
> > > > > > > > > Hello Slava,
> > > > > > > > >
> > > > > > > > > Thanks for your help.
> > > > > > > > > I added the magic phrase, with the PCI number changed to
> > > > > > > > > the proper one in my env.
> > > > > > > > > It changes the situation but still results in an error.
> > > > > > > > >
> > > > > > > > > I used usertools/dpdk-setup.sh to allocate hugepages
> > > > > > > > > dynamically.
> > > > > > > > > Your help is appreciated.
> > > > > > > > >
> > > > > > > > > I think it is getting closer.
> > > > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1
> > > > > > > > > --socket-mem 512,512 --huge-dir=/mnt/huge1G
> > > > > > > > > --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > > > >
> > > > > > > > mlx5 PMD supports two flow engines:
> > > > > > > > - Verbs, the legacy one: almost no new features are being
> > > > > > > >   added, just bug fixes; it provides a slow rule insertion
> > > > > > > >   rate, etc.
> > > > > > > > - Direct Rules, the new one: all new features are being
> > > > > > > >   added here.
> > > > > > > >
> > > > > > > > (We had one more intermediate engine - Direct Verbs; it was
> > > > > > > > dropped, but the dv prefix in dv_flow_en remains :))
> > > > > > > >
> > > > > > > > Verbs is supported on all NICs - ConnectX-4, ConnectX-4 Lx,
> > > > > > > > ConnectX-5, ConnectX-6, etc.
> > > > > > > > Direct Rules is supported on NICs starting from ConnectX-5.
> > > > > > > > The "dv_flow_en=1" parameter engages Direct Rules, but I see
> > > > > > > > you ran testpmd over 03:00.0, which is a ConnectX-4, not
> > > > > > > > supporting Direct Rules.
> > > > > > > > Please, run over the ConnectX-5 you have on your host.
> > > > > > > >
> > > > > > > > As for the error - it is not related to memory; rdma-core
> > > > > > > > just failed to create the group table, because ConnectX-4
> > > > > > > > does not support DR.
> > > > > > > >
> > > > > > > > With best regards, Slava
> > > > > > > >
> > > > > > > > > --txq=16 --rxq=16
> > > > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > > > EAL: Probing VFIO support...
> > > > > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx
> > > > > > > > > port 1 on device mlx5_3
> > > > > > > > >
> > > > > > > > > Interactive-mode selected
> > > > > > > > > testpmd: create a new mbuf pool: n=171456, size=2176, socket=0
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > > testpmd: create a new mbuf pool: n=171456, size=2176, socket=1
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > >
> > > > > > > > > Warning! port-topology=paired and odd forward ports
> > > > > > > > > number, the last port will pair with itself.
> > > > > > > > >
> > > > > > > > > Configuring Port 0 (socket 0) Port 0: B8:59:9F:DB:22:20
> > > > > > > > > Checking link statuses...
> > > > > > > > > Done
> > > > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern
> > > > > > > > > eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end
> > > > > > > > > actions of_pop_vlan / queue index 0 / end
> > > > > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > > > > Cannot allocate memory
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > > Hideyuki Yamashita