From: Kiran Vedere
To: Jim Vaigl, Asaf Penso, 'Stephen Hemminger'
Cc: users@dpdk.org, Erez Ferber, Olga Shern, Danny Vogel
Date: Mon, 7 Oct 2019 17:02:25 +0000
Subject: Re: [dpdk-users] DPDK on Mellanox BlueField Ref Platform
Hi Jim,

I am sorry. I meant reduce the --mbuf-size to a little over the jumbo frame size (ex 9216).

Regards,
Kiran

-----Original Message-----
From: Kiran Vedere
Sent: Monday, October 7, 2019 1:01 PM
To: Jim Vaigl; Asaf Penso; 'Stephen Hemminger'
Cc: users@dpdk.org; Erez Ferber; Olga Shern; Danny Vogel
Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform

Hi Jim,

Looks like n=344064, size=16384 exceeds 5 G. I used 4K 2M pages (so that's 8G). Can you try with that? You can use more hugepages (8K for ex) as well just to be on the safe side, or reduce the max-pkt-len to a little over 9000 (9216 maybe) and give it a try?
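The back-of-the-envelope arithmetic behind those numbers (data rooms only; the per-mbuf struct and mempool header overhead only raise the totals further):

    344064 mbufs x 16384 B ~= 5.6 GB   (the pool the failing run asks for)
      2048 pages x 2 MB     =  4.0 GB   (hugepage memory configured)
      4096 pages x 2 MB     =  8.0 GB   (restores comfortable headroom)
    344064 mbufs x  9216 B ~= 3.2 GB   (a pool of 9216-byte mbufs would fit in the original 4 GB)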
Regards,
Kiran

-----Original Message-----
From: Jim Vaigl
Sent: Monday, October 7, 2019 12:52 PM
To: Kiran Vedere; Asaf Penso; 'Stephen Hemminger'
Cc: users@dpdk.org; Erez Ferber; Olga Shern; Danny Vogel
Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform

Hi Kiran,

When I try this command line with testpmd (with the -w just changed to my port 0's PCIe address), I get "Creation of mbuf pool for socket 0 failed: Cannot allocate memory". I've tried adding --total-num-mbufs to restrict that, but that didn't help. It runs if I try restricting it to just two cores, but then I drop most of my packets. Here's the output running it as you suggested:

[root@localhost bin]# ./testpmd --log-level="mlx5,8" -l 3,4,5,6,7,8,9,10,11,12,13,14,15 -n 4 -w 0f:00.0 --socket-mem=2048 -- --socket-num=0 --burst=64 --txd=2048 --rxd=2048 --mbcache=512 --rxq=12 --txq=12 --nb-cores=12 -i -a --forward-mode=mac --max-pkt-len=9000 --mbuf-size=16384
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:0f:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15b3:a2d2 net_mlx5
net_mlx5: mlx5.c:2145: mlx5_pci_probe(): checking device "mlx5_1"
net_mlx5: mlx5.c:2145: mlx5_pci_probe(): checking device "mlx5_0"
net_mlx5: mlx5.c:2154: mlx5_pci_probe(): PCI information matches for device "mlx5_0"
net_mlx5: mlx5.c:2342: mlx5_pci_probe(): no E-Switch support detected
net_mlx5: mlx5.c:1557: mlx5_dev_spawn(): naming Ethernet device "0f:00.0"
net_mlx5: mlx5.c:363: mlx5_alloc_shared_ibctx(): DevX is NOT supported
net_mlx5: mlx5_mr.c:212: mlx5_mr_btree_init(): initialized B-tree 0x17fec8c68 with table 0x17fec60c0
net_mlx5: mlx5.c:1610: mlx5_dev_spawn(): enhanced MPW is supported
net_mlx5: mlx5.c:1623: mlx5_dev_spawn(): SWP support: 7
net_mlx5: mlx5.c:1632: mlx5_dev_spawn(): min_single_stride_log_num_of_bytes: 6
net_mlx5: mlx5.c:1634: mlx5_dev_spawn(): max_single_stride_log_num_of_bytes: 13
net_mlx5: mlx5.c:1636: mlx5_dev_spawn(): min_single_wqe_log_num_of_strides: 3
net_mlx5: mlx5.c:1638: mlx5_dev_spawn(): max_single_wqe_log_num_of_strides: 16
net_mlx5: mlx5.c:1640: mlx5_dev_spawn(): supported_qpts: 256
net_mlx5: mlx5.c:1641: mlx5_dev_spawn(): device supports Multi-Packet RQ
net_mlx5: mlx5.c:1674: mlx5_dev_spawn(): tunnel offloading is supported
net_mlx5: mlx5.c:1686: mlx5_dev_spawn(): MPLS over GRE/UDP tunnel offloading is not supported
net_mlx5: mlx5.c:1783: mlx5_dev_spawn(): checksum offloading is supported
net_mlx5: mlx5.c:1803: mlx5_dev_spawn(): maximum Rx indirection table size is 512
net_mlx5: mlx5.c:1807: mlx5_dev_spawn(): VLAN stripping is supported
net_mlx5: mlx5.c:1811: mlx5_dev_spawn(): FCS stripping configuration is supported
net_mlx5: mlx5.c:1840: mlx5_dev_spawn(): enhanced MPS is enabled
net_mlx5: mlx5.c:1938: mlx5_dev_spawn(): port 0 MAC address is 50:6b:4b:e0:9a:22
net_mlx5: mlx5.c:1945: mlx5_dev_spawn(): port 0 ifname is "enp15s0f0"
net_mlx5: mlx5.c:1958: mlx5_dev_spawn(): port 0 MTU is 9000
net_mlx5: mlx5.c:1980: mlx5_dev_spawn(): port 0 forcing Ethernet interface up
net_mlx5: mlx5.c:1356: mlx5_set_min_inline(): min tx inline configured: 0
net_mlx5: mlx5_flow.c:377: mlx5_flow_discover_priorities(): port 0 flow maximum priority: 5
Interactive-mode selected
Auto-start selected
Set mac packet forwarding mode
testpmd: create a new mbuf pool : n=344064, size=16384, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

This is with 2048 2M hugepages defined, so I think I have plenty of memory available. I used dpdk-setup to set and verify the hugepages' configuration and availability. I'm trying to do some experiments to see if I get to the bottom of this. Any thoughts?
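(An alternative sketch, assuming the mlx5 PMD's Rx scatter support applies here: keep the default 2048-byte --mbuf-size and add testpmd's --enable-scatter flag, the "unless you enable scatter in the PMD" option Kiran mentions below, so a 9000-byte frame is spread across several chained mbufs. The same 344064-mbuf pool then needs roughly 344064 x 2176 B, about 0.75 GB, instead of ~5.6 GB.)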
Regards,
--Jim

-----Original Message-----
From: Kiran Vedere [mailto:kiranv@mellanox.com]
Sent: Friday, October 04, 2019 2:28 PM
To: Jim Vaigl; Asaf Penso; 'Stephen Hemminger'
Cc: users@dpdk.org; Erez Ferber; Olga Shern; Danny Vogel
Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform

Hi Jim,

I tried your test with a 9000-byte MTU. On the BlueField Reference Platform I set the MTU of the interface to 9000, and from TRex I am sending 8096-byte packets. I am able to loop back packets fine without any issues. Below is the command line I use for testpmd:

./testpmd --log-level="mlx5,8" -l 3,4,5,6,7,8,9,10,11,12,13,14,15 -n 4 -w 17:00.0 --socket-mem=2048 -- --socket-num=0 --burst=64 --txd=2048 --rxd=2048 --mbcache=512 --rxq=12 --txq=12 --nb-cores=12 -i -a --forward-mode=mac --max-pkt-len=9000 --mbuf-size=16384

Two things to consider. The max Rx packet len is used by the PMD during its Rx queue initialization; by default this is set to 1518 bytes for testpmd/l3fwd. For jumbo frames you need to pass --max-pkt-len=9000 (for testpmd) or --enable-jumbo --max-pkt-len=9000 (for l3fwd). Are you passing these values to l3fwd/testpmd when you run your test? Also, since the mbuf size is 2048 by default, you need to increase the mbuf size beyond the jumbo frame size unless you enable scatter in the PMD. For testpmd you can increase the mbuf size with the --mbuf-size parameter. For l3fwd I don't think there is a command-line option to change the mbuf size at runtime, so you might need to recompile the l3fwd code to increase it. Are you doing this?
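For l3fwd, the shape of that recompile, as a minimal sketch against a 19.x-era examples/l3fwd/main.c (the pool array, nb_mbuf, and MEMPOOL_CACHE_SIZE names are from that tree and may differ in other releases; the JUMBO_MBUF_DATA_SIZE macro name is illustrative):

    /* init_mem() normally passes RTE_MBUF_DEFAULT_BUF_SIZE (2048 B of data
     * room plus headroom) to rte_pktmbuf_pool_create(), which cannot hold a
     * 9000-byte frame in a single mbuf.  Define a jumbo-capable data room
     * (illustrative name) and use it where the pool is created: */
    #define JUMBO_MBUF_DATA_SIZE (9216 + RTE_PKTMBUF_HEADROOM)

    pktmbuf_pool[portid][socketid] =
            rte_pktmbuf_pool_create(s, nb_mbuf, MEMPOOL_CACHE_SIZE,
                                    0, JUMBO_MBUF_DATA_SIZE, socketid);

Combined with --enable-jumbo --max-pkt-len=9000 on the l3fwd command line, this mirrors what --mbuf-size=16384 does for testpmd.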
Hope this helps.

Regards,
Kiran

-----Original Message-----
From: Jim Vaigl
Sent: Friday, October 4, 2019 1:35 PM
To: Asaf Penso; 'Stephen Hemminger'
Cc: users@dpdk.org; Kiran Vedere; Erez Ferber; Olga Shern; Danny Vogel
Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform

A final update on this issue. Kiran Vedere went above and beyond the call of duty: he completely reproduced my hardware setup, showed that it worked using TRex to generate traffic similar to mine, and then provided me with a bundled-up .bfb of his CentOS (with updated kernel) and OFED install to try, so that there would be no configuration stuff for me to mess up.

Using this, I saw exactly the same crashes I had seen in my setup. After some thought, I realized the only meaningful difference was that my traffic generator and IP configuration relied on an MTU size of 9000. Once I set the MTU size down to 1500, the crashes stopped.

So, the answer is clearly that I'm just not setting up for the larger MTU size. I need to start to understand how to get DPDK to manage that, but the crashing is at least understood now, and I have a way forward. Thanks very much to Kiran.

Regards,
--Jim

-----Original Message-----
From: Jim Vaigl [mailto:jimv@rockbridgesoftware.com]
Sent: Thursday, September 26, 2019 3:47 PM
To: 'Asaf Penso'; 'Stephen Hemminger'
Cc: 'users@dpdk.org'; 'Kiran Vedere'; 'Erez Ferber'; 'Olga Shern'
Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform

> From: Asaf Penso [mailto:asafp@mellanox.com]
> Sent: Thursday, September 26, 2019 7:00 AM
> To: Jim Vaigl; 'Stephen Hemminger'
> Cc: users@dpdk.org; Kiran Vedere; Erez Ferber; Olga Shern
> Subject: RE: [dpdk-users] DPDK on Mellanox BlueField Ref Platform
>
> Hello Jim,
>
> Thanks for your mail.
> In order for us to have a better resolution, please send a mail to our
> support team: support@mellanox.com
> Please provide as much info about the setup, configuration, etc. as you can.
>
> In parallel, I added Erez Ferber here to assist.
>
> Regards,
> Asaf Penso

Thanks for the kind offer, Asaf. I'll take this debug effort off-line with you and Erez and post back to the list here later with any resolution so everyone can see the result.

By the way, the prior suggestion of using v25 of rdma-core didn't pan out: the current build script just makes a local build in a subdirectory of the source tree, and there's no obvious way to integrate it with the MLNX_OFED environment and the DPDK install. After resolving package dependencies to get rdma-core to build from the GitHub repo, I realized the instructions say this:

---
Building

This project uses a cmake based build system. Quick start:

$ bash build.sh

build/bin will contain the sample programs and build/lib will contain the shared libraries. The build is configured to run all the programs 'in-place' and cannot be installed.

NOTE: It is not currently easy to run from the build directory, the plugins only load from the system path.
---

--Jim

>> -----Original Message-----
>> From: users On Behalf Of Jim Vaigl
>> Sent: Tuesday, September 24, 2019 10:11 PM
>> To: 'Stephen Hemminger'
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] DPDK on Mellanox BlueField Ref Platform
>>
>> On Tue, 24 Sep 2019 12:31:51 -0400
>> "Jim Vaigl" wrote:
>>
>> >> Since no one has chimed in with any build/install/configure suggestion
>> >> for the BlueField, I've spent some time debugging and thought I'd share
>> >> the results. Building the l3fwd example application and running it as
>> >> the docs suggest, when I try to send it UDP packets from another
>> >> machine, it dumps core.
>> >>
>> >> Debugging a bit with gdb and printf, I can see that from inside
>> >> process_packet() and processx4_step1() the calls to rte_pktmbuf_mtod()
>> >> return Nil or suspicious pointer values (i.e. 0x80). The sample apps
>> >> don't guard against NULL pointers being returned from this rte call, so
>> >> that's why it's dumping core.
>> >>
>> >> I still think the problem is related to the driver config, but thought
>> >> this might ring a bell for anyone who's had problems like this.
>> >>
>> >> The thing that still bothers me is that rather than seeing what I was
>> >> expecting at init based on what the documentation shows:
>> >> [...]
>> >> EAL: probe driver: 15b3:1013 librte_pmd_mlx5
>> >>
>> >> ... when rte_eal_init() runs, I'm seeing:
>> >> [...]
>> >> EAL: Selected IOVA mode 'PA'
>> >> EAL: Probing VFIO support...
>> >>
>> >> This still seems wrong, and I've verified that specifying the BlueField
>> >> target ID string in the make is causing "CONFIG_RTE_LIBRTE_MLX5_PMD=y"
>> >> to appear in the .config.
>> >>
>> >> Regards,
>> >> --Jim Vaigl
>> >> 614 886 5999
>>
>> >From: Stephen Hemminger [mailto:stephen@networkplumber.org]
>> >Sent: Tuesday, September 24, 2019 1:18 PM
>> >To: Jim Vaigl
>> >Cc: users@dpdk.org
>> >Subject: Re: [dpdk-users] DPDK on Mellanox BlueField Ref Platform
>> >
>> >make sure you have latest version of rdma-core installed (v25).
>> >The right version is not in most distros
>>
>> Great suggestion. I'm using the rdma-core from the MLNX_OFED 4.6-3.5.8.0
>> install. I can't figure out how to tell what version that thing includes,
>> even looking at the source, since there's no version information in the
>> source files, BUT I went to GitHub and downloaded rdma-core v24 and v25,
>> and neither diffs cleanly with the source RPM that comes in the OFED
>> install. I don't know yet if it's because this is some different version
>> or if it's because Mellanox has made their own tweaks.
>>
>> I would hope that the very latest OFED from Mellanox would include an
>> up-to-date and working set of libs/modules, but maybe you're on to
>> something. It sounds like a risky move, but maybe I'll try just
>> installing rdma-core from GitHub over top of the OFED install. I have a
>> fear that I'll end up with inconsistent versions, but it's worth a try.
>>
>> Thanks,
>> --Jim
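(On pinning down what an MLNX_OFED install actually ships, two checks that should help, assuming a standard RPM-based install: "ofed_info -s" prints the installed OFED stack's version string, and "rpm -qa | grep -i -e rdma -e ibverbs" lists the userspace RDMA packages with their package versions. Whether those packages correspond exactly to an upstream rdma-core tag is a separate question, since OFED carries its own patches, which would be consistent with the diffs above not being clean.)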