Date: Thu, 31 Oct 2019 18:51:33 +0900
From: Hideyuki Yamashita
To: Slava Ovsiienko
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on VLAN header
Message-id: <20191031185133.4C1D.17218CA3@ntt-tx.co.jp_1>
References: <20191030194618.5C5B.17218CA3@ntt-tx.co.jp_1>

Dear Slava,

Your guess is correct. When I put the flow into the ConnectX-5, it was
successful.

A general question: is there any way to input flows to the ConnectX-4?
In other words, is there any way to activate Verbs, and which types of
flows does Verbs support?

-----------------------------------------------------------
tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
[sudo] password for tx_h-yamashita:
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1017 net_mlx5
net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device mlx5_1
Interactive-mode selected
testpmd: create a new mbuf pool: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:C1:4A:CE
Checking link statuses...
Done
testpmd>
testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
Flow rule #0 created
testpmd>
--------------------------------------------------------------------------------------------------------------
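For reference, my understanding is that the rule above maps roughly onto
the rte_flow API as follows (a minimal sketch, not taken from testpmd
itself; the port id, MAC address, VLAN id and queue index are the values
from the session above, while the function name and error handling are
illustrative only):

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Sketch of: flow create 0 ingress group 1 priority 0
 *            pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end
 *            actions of_pop_vlan / queue index 0 / end */
static int
create_pop_vlan_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = {
		.group = 1,
		.priority = 0,
		.ingress = 1,
	};
	/* eth dst is 00:16:3e:2e:7b:6a */
	struct rte_flow_item_eth eth_spec = {
		.dst.addr_bytes = { 0x00, 0x16, 0x3e, 0x2e, 0x7b, 0x6a },
	};
	struct rte_flow_item_eth eth_mask = {
		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
	};
	/* vlan vid is 1480; the VID is the low 12 bits of the TCI */
	struct rte_flow_item_vlan vlan_spec = {
		.tci = rte_cpu_to_be_16(1480),
	};
	struct rte_flow_item_vlan vlan_mask = {
		.tci = rte_cpu_to_be_16(0x0fff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,
		  .spec = &vlan_spec, .mask = &vlan_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* actions of_pop_vlan / queue index 0 */
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_POP_VLAN },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	/* Validate first, then create, as testpmd does internally. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
		return -1;
	return rte_flow_create(port_id, &attr, pattern, actions, &err)
	       != NULL ? 0 : -1;
}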
BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
>
> > -----Original Message-----
> > From: Hideyuki Yamashita
> > Sent: Wednesday, October 30, 2019 12:46
> > To: Slava Ovsiienko
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> >
> > Hello Slava,
> >
> > Thanks for your help.
> > I added the magic phrase, changing the PCI number to the proper one in
> > my env. It changes the situation but still results in an error.
> >
> > I used usertools/dpdk-setup.sh to allocate hugepages dynamically.
> > Your help is appreciated.
> >
> > I think it is getting closer.
> > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
>
> mlx5 PMD supports two flow engines:
> - Verbs, the legacy one; almost no new features are being added, just
>   bug fixes. It provides a slow rule insertion rate, etc.
> - Direct Rules, the new one; all new features are being added here.
>
> (We had one more intermediate engine, Direct Verbs; it was dropped, but
> the "dv" prefix in dv_flow_en remains.)
>
> Verbs is supported on all NICs: ConnectX-4, ConnectX-4 Lx, ConnectX-5,
> ConnectX-6, etc.
> Direct Rules is supported on NICs starting from ConnectX-5.
> The "dv_flow_en=1" parameter engages Direct Rules, but I see you ran
> testpmd over 03:00.0, which is a ConnectX-4 and does not support Direct
> Rules. Please run over the ConnectX-5 you have on your host.
>
> As for the error - it is not related to memory; rdma-core just failed to
> create the group table, because the ConnectX-4 does not support DR.
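> For example, a command like this should engage the Verbs engine on the
> ConnectX-4 (an illustrative sketch only - dv_flow_en=0 disables Direct
> Rules; note that the VLAN actions added by this series target Direct
> Rules, so of_pop_vlan may still be rejected on the Verbs path):
>
>   sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=0 --socket-mem 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16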
> With best regards, Slava
>
> > EAL: Detected 48 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1015 net_mlx5
> > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device mlx5_3
> >
> > Interactive-mode selected
> > testpmd: create a new mbuf pool: n=171456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool: n=171456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last
> > port will pair with itself.
> >
> > Configuring Port 0 (socket 0)
> > Port 0: B8:59:9F:DB:22:20
> > Checking link statuses...
> > Done
> > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan /
> > queue index 0 / end
> > Caught error type 1 (cause unspecified): cannot create table: Cannot
> > allocate memory
> >
> > BR,
> > Hideyuki Yamashita
>