From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: "Wu, Yanglong", "dev@dpdk.org"
Date: Mon, 8 Jan 2018 11:54:54 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725880E388A9@irsmsx105.ger.corp.intel.com>
References: <20171120024026.152048-1-yanglong.wu@intel.com> <20180108030601.5622-1-yanglong.wu@intel.com>
In-Reply-To: <20180108030601.5622-1-yanglong.wu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5] net/ixgbe: fix l3fwd start failed on
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Wu, Yanglong
> Sent: Monday, January 8, 2018 3:06 AM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin; Wu, Yanglong
> Subject: [PATCH v5] net/ixgbe: fix l3fwd start failed on
>
> L3fwd failed to start on the PF because the tx_q check failed.
> That occurred when SR-IOV is active and tx_q > rx_q.
> tx_q is equal to nb_q_per_pool; nb_q_per_pool should equal
> the max number of queues supported by the HW, not nb_rx_q.

But then 2 (or more) cores could try to TX packets through the same TX queue?
Why not just fail to start gracefully (call rte_exit() or so) if such a situation occurred?
Konstantin

>
> Fixes: 27b609cbd1c6 (ethdev: move the multi-queue mode check to
> specific drivers)
>
> Signed-off-by: Yanglong Wu
> ---
> v5:
> Rework according to comments
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index ff19a564a..baaeee5d9 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -95,6 +95,9 @@
>  /* Timer value included in XOFF frames. */
>  #define IXGBE_FC_PAUSE 0x680
>
> +/* Default value of Max Rx Queue */
> +#define IXGBE_MAX_RX_QUEUE_NUM 128
> +
>  #define IXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
>  #define IXGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
>  #define IXGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
> @@ -2194,9 +2197,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
> 		return -EINVAL;
> 	}
>
> -	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = nb_rx_q;
> -	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx = pci_dev->max_vfs * nb_rx_q;
> -
> +	RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool =
> +		IXGBE_MAX_RX_QUEUE_NUM / RTE_ETH_DEV_SRIOV(dev).active;
> +	RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx =
> +		pci_dev->max_vfs * RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> 	return 0;
> }
>
> --
> 2.11.0