From: Liwu Liu
To: Stephen Hemminger
CC: users@dpdk.org
Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox NIC under Hyper-V
Date: Sat, 5 Jan 2019 00:35:13 +0000
Message-ID: <02a2fbe93b93458190386e8650c3fb2c@fortinet.com>
In-Reply-To: <20190104162349.264da710@hermes.lan>

Hi Stephen,

Yeah, we are seeing all of them: NETVSC, the failsafe PMD, TAP, and the MLX4 VF. We are expecting bifurcation mode, where default traffic goes through the MLX4_EN slave to the NETVSC master and up the Linux network stack, while traffic matching defined flows goes into a DPDK RX queue rather than the MLX4_EN net-dev queue.
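For concreteness, the kind of rule we want to install on the VF port looks roughly like this (a minimal sketch; the TCP-port match and the queue index are placeholders, not our actual classifier):

    #include <rte_ethdev.h>
    #include <rte_flow.h>
    #include <rte_byteorder.h>

    /* Sketch: steer TCP traffic with destination port 80 to DPDK RX queue 1
     * on the VF port; anything that matches no rule should stay on the
     * kernel MLX4_EN/NETVSC path. */
    static struct rte_flow *
    steer_to_dpdk_queue(uint16_t port_id)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            struct rte_flow_item_tcp tcp_spec = {
                    .hdr.dst_port = rte_cpu_to_be_16(80),
            };
            struct rte_flow_item_tcp tcp_mask = {
                    .hdr.dst_port = rte_cpu_to_be_16(0xffff),
            };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_TCP,
                      .spec = &tcp_spec, .mask = &tcp_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = 1 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error error;

            /* This is the call that fails for us on the ConnectX-3 VF:
             * the rule cannot be registered (mcg table full). */
            return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }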
Unfortunately, as we cannot define flows on the VF NIC, we cannot redirect traffic into DPDK user-land. In fact, I am looking for a way to steer all RX traffic from the MLX4_EN slave device into the DPDK RX ring.

Hypervisor: Windows Server + MLNX VPI 5.5; VM: Linux, dpdk-18.11, mlnx-ofed-kernel-4.4, rdma-core-43mlnx1
Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]

Best,
Liwu

-----Original Message-----
From: Stephen Hemminger
Sent: Friday, January 4, 2019 4:24 PM
To: Liwu Liu
Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox NIC under Hyper-V

On Sat, 5 Jan 2019 00:10:43 +0000
Liwu Liu wrote:

> Hi Stephen,
>
> It is SR-IOV.
>
> Thanks,
>
> Liwu
>
> -----Original Message-----
> From: Stephen Hemminger
> Sent: Friday, January 4, 2019 4:07 PM
> To: Liwu Liu
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox
> NIC under Hyper-V
>
> On Fri, 4 Jan 2019 20:06:48 +0000
> Liwu Liu wrote:
>
> > Hi Team,
> > We had a similar problem with an Ubuntu 18.04 hypervisor and resolved
> > it by setting log_num_mgm_entry_size=-1 (refer to
> > https://mails.dpdk.org/archives/users/2018-November/003647.html).
> >
> > For Windows Server with MLNX VPI, however, I do not know where to set
> > this, and DPDK over MLX4 in the Linux VM hits the same failure when
> > attaching a flow:
> >
> > [ 374.568992] __mlx4_ib_create_flow: mcg table is full. Fail to
> > register network rule. size = 64 (out of memory error code)
> >
> > Would like to get your help on this. It seems that the PF interface is
> > not configured to trust the VF interfaces to add new flow rules.
> >
> > Many thanks for the help,
> >
> > Liwu
>
> How are you using the Mellanox device with Windows Server? PCI
> passthrough or SR-IOV?

If using SR-IOV then you can't use the Mellanox device directly. You have
to use the synthetic device. If you use the vdev_netvsc pseudo-device, it
sets up a failsafe PMD, TAP and the VF (Mellanox) PMD for you.

The experimental way is to use the netvsc PMD, which will manage the VF if
available.
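P.S. For reference, we instantiate the devices roughly like this (a sketch only; the interface name, core list, and queue counts are placeholders, not our exact command line):

    ./testpmd -l 2-3 -n 4 \
        --vdev="net_vdev_netvsc0,iface=eth1" \
        -- -i --rxq=2 --txq=2

With this, vdev_netvsc probes the NetVSC interface and instantiates the failsafe PMD with the TAP and the MLX4 VF as sub-devices, which matches the set of devices listed at the top of this mail.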
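P.P.S. For the record, the fix on the Ubuntu hypervisor referenced in the quoted mail was the mlx4_core module parameter, set through modprobe along these lines (Linux only; the file name is just an example):

    # /etc/modprobe.d/mlx4.conf
    options mlx4_core log_num_mgm_entry_size=-1

What we still do not know is where the equivalent knob lives for the Windows Server / MLNX VPI 5.5 driver.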