From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kamil Rytarowski
Date: Fri, 30 Sep 2016 14:05:46 +0200
Message-ID: <1475237154-25388-8-git-send-email-krytarowski@caviumnetworks.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1475237154-25388-1-git-send-email-krytarowski@caviumnetworks.com>
References: <1472230448-17490-1-git-send-email-krytarowski@caviumnetworks.com>
 <1475237154-25388-1-git-send-email-krytarowski@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 07/15] net/thunderx: remove problematic private_data->eth_dev link
List-Id: patches and discussions about DPDK

From: Kamil Rytarowski

In multi-process mode the nicvf struct is shared between processes, so it
cannot point to the primary process's device through a cached eth_dev
pointer. Remove that pointer along with all references to it, refactoring
the code where needed.

This change fixes multi-process issues detected in stats.
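For illustration only (not part of the diff below): the general pattern is to
stop caching a process-local rte_eth_dev pointer in shared memory and instead
resolve the device inside each process, either by passing the rte_eth_dev down
the call chain or by indexing the per-process rte_eth_devices[] table with a
port id. A minimal sketch, assuming a 16-bit port id kept in per-queue state;
the helper name is hypothetical, not an existing driver function:

    #include <rte_ethdev.h>

    /* Hypothetical helper: resolve the device from a port id inside the
     * calling process. rte_eth_devices[] lives in per-process memory, so
     * the returned pointer is valid in this process, unlike an address
     * cached in a shared struct by the primary process. */
    static inline struct rte_eth_dev *
    dev_from_port(uint16_t port_id)
    {
            return &rte_eth_devices[port_id];
    }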
Fixes: 7413feee662d ("net/thunderx: add device start/stop and close")

Signed-off-by: Maciej Czekaj
Signed-off-by: Kamil Rytarowski
Signed-off-by: Zyta Szpak
Signed-off-by: Slawomir Rosek
Signed-off-by: Radoslaw Biernacki
Signed-off-by: Jerin Jacob
---
 drivers/net/thunderx/nicvf_ethdev.c | 69 ++++++++++++++++++-------------------
 drivers/net/thunderx/nicvf_rxtx.c   |  3 +-
 drivers/net/thunderx/nicvf_struct.h |  1 -
 3 files changed, 36 insertions(+), 37 deletions(-)

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 72e6667..c8b42f7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -497,14 +497,14 @@ nicvf_dev_rss_hash_update(struct rte_eth_dev *dev,
 }
 
 static int
-nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
-		    uint32_t desc_cnt)
+nicvf_qset_cq_alloc(struct rte_eth_dev *dev, struct nicvf *nic,
+		    struct nicvf_rxq *rxq, uint16_t qidx, uint32_t desc_cnt)
 {
 	const struct rte_memzone *rz;
 	uint32_t ring_size = CMP_QUEUE_SZ_MAX * sizeof(union cq_entry_t);
 
-	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "cq_ring", qidx, ring_size,
-				      NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
+	rz = rte_eth_dma_zone_reserve(dev, "cq_ring", qidx, ring_size,
+				      NICVF_CQ_BASE_ALIGN_BYTES, nic->node);
 	if (rz == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate mem for cq hw ring");
 		return -ENOMEM;
@@ -520,13 +520,13 @@ nicvf_qset_cq_alloc(struct nicvf *nic, struct nicvf_rxq *rxq, uint16_t qidx,
 }
 
 static int
-nicvf_qset_sq_alloc(struct nicvf *nic, struct nicvf_txq *sq, uint16_t qidx,
-		    uint32_t desc_cnt)
+nicvf_qset_sq_alloc(struct rte_eth_dev *dev, struct nicvf *nic,
+		    struct nicvf_txq *sq, uint16_t qidx, uint32_t desc_cnt)
 {
 	const struct rte_memzone *rz;
 	uint32_t ring_size = SND_QUEUE_SZ_MAX * sizeof(union sq_entry_t);
 
-	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "sq", qidx, ring_size,
+	rz = rte_eth_dma_zone_reserve(dev, "sq", qidx, ring_size,
 				      NICVF_SQ_BASE_ALIGN_BYTES, nic->node);
 	if (rz == NULL) {
 		PMD_INIT_LOG(ERR, "Failed allocate mem for sq hw ring");
@@ -543,7 +543,8 @@ nicvf_qset_sq_alloc(struct nicvf *nic, struct nicvf_txq *sq, uint16_t qidx,
 }
 
 static int
-nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
+nicvf_qset_rbdr_alloc(struct rte_eth_dev *dev, struct nicvf *nic,
+		      uint32_t desc_cnt, uint32_t buffsz)
 {
 	struct nicvf_rbdr *rbdr;
 	const struct rte_memzone *rz;
@@ -558,7 +559,7 @@ nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
 	}
 
 	ring_size = sizeof(struct rbdr_entry_t) * RBDR_QUEUE_SZ_MAX;
-	rz = rte_eth_dma_zone_reserve(nic->eth_dev, "rbdr", 0, ring_size,
+	rz = rte_eth_dma_zone_reserve(dev, "rbdr", 0, ring_size,
 				      NICVF_RBDR_BASE_ALIGN_BYTES, nic->node);
 	if (rz == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to allocate mem for rbdr desc ring");
@@ -583,14 +584,15 @@ nicvf_qset_rbdr_alloc(struct nicvf *nic, uint32_t desc_cnt, uint32_t buffsz)
 }
 
 static void
-nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
+nicvf_rbdr_release_mbuf(struct rte_eth_dev *dev, struct nicvf *nic __rte_unused,
+			nicvf_phys_addr_t phy)
 {
 	uint16_t qidx;
 	void *obj;
 	struct nicvf_rxq *rxq;
 
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
-		rxq = nic->eth_dev->data->rx_queues[qidx];
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
+		rxq = dev->data->rx_queues[qidx];
 		if (rxq->precharge_cnt) {
 			obj = (void *)nicvf_mbuff_phy2virt(phy,
 							   rxq->mbuf_phys_off);
@@ -602,7 +604,7 @@ nicvf_rbdr_release_mbuf(struct nicvf *nic, nicvf_phys_addr_t phy)
 }
 
 static inline void
-nicvf_rbdr_release_mbufs(struct nicvf *nic)
+nicvf_rbdr_release_mbufs(struct rte_eth_dev *dev, struct nicvf *nic)
 {
 	uint32_t qlen_mask, head;
 	struct rbdr_entry_t *entry;
@@ -612,7 +614,7 @@ nicvf_rbdr_release_mbufs(struct nicvf *nic)
 	head = rbdr->head;
 	while (head != rbdr->tail) {
 		entry = rbdr->desc + head;
-		nicvf_rbdr_release_mbuf(nic, entry->full_addr);
+		nicvf_rbdr_release_mbuf(dev, nic, entry->full_addr);
 		head++;
 		head = head & qlen_mask;
 	}
@@ -724,14 +726,13 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf);
 	PMD_DRV_LOG(INFO, "mode=%d rx_queues=%d loopback=%d rsshf=0x%" PRIx64,
 		    dev->data->dev_conf.rxmode.mq_mode,
-		    nic->eth_dev->data->nb_rx_queues,
-		    nic->eth_dev->data->dev_conf.lpbk_mode, rsshf);
+		    dev->data->nb_rx_queues,
+		    dev->data->dev_conf.lpbk_mode, rsshf);
 
 	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
 	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
-		ret = nicvf_rss_config(nic,
-				       nic->eth_dev->data->nb_rx_queues, rsshf);
+		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
 
@@ -915,7 +916,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 		return -ENOMEM;
 	}
 
-	if (nicvf_qset_sq_alloc(nic, txq, qidx, nb_desc)) {
+	if (nicvf_qset_sq_alloc(dev, nic, txq, qidx, nb_desc)) {
 		PMD_INIT_LOG(ERR, "Failed to allocate mem for sq %d", qidx);
 		nicvf_dev_tx_queue_release(txq);
 		return -ENOMEM;
@@ -932,12 +933,11 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 }
 
 static inline void
-nicvf_rx_queue_release_mbufs(struct nicvf_rxq *rxq)
+nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 {
 	uint32_t rxq_cnt;
 	uint32_t nb_pkts, released_pkts = 0;
 	uint32_t refill_cnt = 0;
-	struct rte_eth_dev *dev = rxq->nic->eth_dev;
 	struct rte_mbuf *rx_pkts[NICVF_MAX_RX_FREE_THRESH];
 
 	if (dev->rx_pkt_burst == NULL)
@@ -1017,7 +1017,7 @@ nicvf_stop_rx_queue(struct rte_eth_dev *dev, uint16_t qidx)
 		other_error = ret;
 
 	rxq = dev->data->rx_queues[qidx];
-	nicvf_rx_queue_release_mbufs(rxq);
+	nicvf_rx_queue_release_mbufs(dev, rxq);
 	nicvf_rx_queue_reset(rxq);
 
 	ret = nicvf_qset_cq_reclaim(nic, qidx);
@@ -1163,7 +1163,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 
 	/* Alloc completion queue */
-	if (nicvf_qset_cq_alloc(nic, rxq, rxq->queue_id, nb_desc)) {
+	if (nicvf_qset_cq_alloc(dev, nic, rxq, rxq->queue_id, nb_desc)) {
 		PMD_INIT_LOG(ERR, "failed to allocate cq %u", rxq->queue_id);
 		nicvf_dev_rx_queue_release(rxq);
 		return -ENOMEM;
@@ -1281,7 +1281,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	 */
 
 	/* Validate RBDR buff size */
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
 		rxq = dev->data->rx_queues[qidx];
 		mbp_priv = rte_mempool_get_priv(rxq->pool);
 		buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
@@ -1299,7 +1299,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Validate mempool attributes */
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
 		rxq = dev->data->rx_queues[qidx];
 		rxq->mbuf_phys_off = nicvf_mempool_phy_offset(rxq->pool);
 		mbuf = rte_pktmbuf_alloc(rxq->pool);
@@ -1323,12 +1323,12 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 
 	/* Check the level of buffers in the pool */
 	total_rxq_desc = 0;
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
 		rxq = dev->data->rx_queues[qidx];
 		/* Count total numbers of rxq descs */
 		total_rxq_desc += rxq->qlen_mask + 1;
 		exp_buffs = RTE_MEMPOOL_CACHE_MAX_SIZE + rxq->rx_free_thresh;
-		exp_buffs *= nic->eth_dev->data->nb_rx_queues;
+		exp_buffs *= dev->data->nb_rx_queues;
 		if (rte_mempool_avail_count(rxq->pool) < exp_buffs) {
 			PMD_INIT_LOG(ERR, "Buff shortage in pool=%s (%d/%d)",
 				     rxq->pool->name,
@@ -1354,7 +1354,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 
 	/* Allocate RBDR and RBDR ring desc */
 	nb_rbdr_desc = nicvf_qsize_rbdr_roundup(total_rxq_desc);
-	ret = nicvf_qset_rbdr_alloc(nic, nb_rbdr_desc, rbdrsz);
+	ret = nicvf_qset_rbdr_alloc(dev, nic, nb_rbdr_desc, rbdrsz);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Failed to allocate memory for rbdr alloc");
 		goto qset_reclaim;
@@ -1379,7 +1379,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 		     nic->rbdr->tail, nb_rbdr_desc);
 
 	/* Configure RX queues */
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++) {
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++) {
 		ret = nicvf_start_rx_queue(dev, qidx);
 		if (ret)
 			goto start_rxq_error;
@@ -1389,7 +1389,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	nicvf_vlan_hw_strip(nic, dev->data->dev_conf.rxmode.hw_vlan_strip);
 
 	/* Configure TX queues */
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++) {
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++) {
 		ret = nicvf_start_tx_queue(dev, qidx);
 		if (ret)
 			goto start_txq_error;
@@ -1448,14 +1448,14 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 qset_rss_error:
 	nicvf_rss_term(nic);
 start_txq_error:
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_tx_queues; qidx++)
+	for (qidx = 0; qidx < dev->data->nb_tx_queues; qidx++)
 		nicvf_stop_tx_queue(dev, qidx);
 start_rxq_error:
-	for (qidx = 0; qidx < nic->eth_dev->data->nb_rx_queues; qidx++)
+	for (qidx = 0; qidx < dev->data->nb_rx_queues; qidx++)
 		nicvf_stop_rx_queue(dev, qidx);
 qset_rbdr_reclaim:
 	nicvf_qset_rbdr_reclaim(nic, 0);
-	nicvf_rbdr_release_mbufs(nic);
+	nicvf_rbdr_release_mbufs(dev, nic);
 qset_rbdr_free:
 	if (nic->rbdr) {
 		rte_free(nic->rbdr);
@@ -1501,7 +1501,7 @@ nicvf_dev_stop(struct rte_eth_dev *dev)
 
 	/* Move all charged buffers in RBDR back to pool */
 	if (nic->rbdr != NULL)
-		nicvf_rbdr_release_mbufs(nic);
+		nicvf_rbdr_release_mbufs(dev, nic);
 
 	/* Reclaim CPI configuration */
 	if (!nic->sqs_mode) {
@@ -1666,7 +1666,6 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
 	nic->vendor_id = pci_dev->id.vendor_id;
 	nic->subsystem_device_id = pci_dev->id.subsystem_device_id;
 	nic->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
-	nic->eth_dev = eth_dev;
 
 	PMD_INIT_LOG(DEBUG, "nicvf: device (%x:%x) %u:%u:%u:%u",
 		     pci_dev->id.vendor_id, pci_dev->id.device_id,
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index e15c730..fc43b74 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -368,7 +368,8 @@ nicvf_fill_rbdr(struct nicvf_rxq *rxq, int to_fill)
 	void *obj_p[NICVF_MAX_RX_FREE_THRESH] __rte_cache_aligned;
 
 	if (unlikely(rte_mempool_get_bulk(rxq->pool, obj_p, to_fill) < 0)) {
-		rxq->nic->eth_dev->data->rx_mbuf_alloc_failed += to_fill;
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+			to_fill;
 		return 0;
 	}
 
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index a72f752..c900e12 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -113,7 +113,6 @@ struct nicvf {
 	uint16_t subsystem_vendor_id;
 	struct nicvf_rbdr *rbdr;
 	struct nicvf_rss_reta_info rss_info;
-	struct rte_eth_dev *eth_dev;
 	struct rte_intr_handle intr_handle;
 	uint8_t cpi_alg;
 	uint16_t mtu;
-- 
1.9.1